About sheep19 — Rank: Advanced Member — Community Reputation: 494 (Neutral)
  1. Hello. I'd like to thank you for your help. I have implemented what has been discussed in this thread, and the results are much better than before! The player is now almost exactly where he should be by the time the update packet is received! (It's not exactly 100% correct, but the error is barely noticeable.)

One thing remains to do. Currently the client completely ignores the tick difference reported by the server and just adds +2 to each input's tick (+2 because that's what I noticed worked best with the server running on localhost). By setting an artificial delay on my network, the results change, because the tick difference changes. With a 50 ms delay, the client gets this:

As you can see, it changes just a bit when it does. Here is my idea: use an average of the last X tick differences (updated each frame) and round to the nearest integer. But what would be a good value for X? It should probably be a small number; I was thinking of maybe 5.
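The moving-average idea above could be sketched like this; the class and member names are mine, not from the thread, and the window size N = 5 is just the value floated above:

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical sketch of smoothing the server-reported tick difference over a
// small window instead of hard-coding +2. All names are illustrative.
class TickDiffSmoother {
public:
    static constexpr std::size_t N = 5; // window size; tune empirically

    // Record the tick difference reported by the latest server state update.
    void addSample(int tickDiff) {
        _samples[_next % N] = tickDiff;
        ++_next;
    }

    // Average of the collected samples, rounded to the nearest integer.
    int smoothedDiff() const {
        const std::size_t count = _next < N ? _next : N;
        if (count == 0)
            return 0;
        int sum = 0;
        for (std::size_t i = 0; i < count; ++i)
            sum += _samples[i];
        return static_cast<int>(std::lround(static_cast<double>(sum) / count));
    }

private:
    int _samples[N] = {0};
    std::size_t _next = 0;
};
```

Rounding to the nearest integer (rather than truncating) keeps the target tick from sitting systematically one step short when the measured difference hovers around a half.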
  2. Alright, so I'm in the process of doing what has been discussed here. Clients include the target tick in their (input) packets. The server includes the tick difference in each state update sent to clients.

The clients will use that difference to set the target tick appropriately: targetTick = currentTick + tickDifference (I haven't implemented this part yet), where targetTick will be sent to the server (like you said, send for the future). For now, clients send their currentTick.

On the client, when a new state from the server is received, I print the tick difference. Here are the results: in the first few frames, the tick difference is 1, and it increases up to ~20 until the first input from the client is received by the server.

So my question is: how does the client use the tick difference information? Should it just calculate targetTick using the last value of tickDifference? The value won't be correct until the server processes the first input from the client, but this will be corrected really soon.

====

Also, another question regarding this. With this approach, the client sends input "for the future" to the server. The inputs should arrive exactly at that tick and be processed by the server. Let's say the client sends an input with targetTick = 5, but due to a lag spike, the server receives it at server tick 7. Now, at server tick 7, the server has 3 inputs from that client (because inputs are received in order by my own protocol). So, in that update loop at tick 7, the server should process all 3 inputs at once, correct?
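For the lag-spike question at the end, one way to sketch the server side (names and the Input layout are assumptions, not the thread's code) is a per-client queue that drains every input whose target tick has already passed, so the three backed-up inputs are all consumed at tick 7:

```cpp
#include <deque>

// Illustrative input as sent by the client, stamped with the tick it is for.
struct Input {
    int targetTick;
    float dirX, dirZ;
};

// Per-client buffer: at each server tick, take everything that is due,
// including inputs that arrived late. Assumes in-order delivery, as the
// post's own protocol guarantees.
class InputBuffer {
public:
    void enqueue(const Input& in) { _pending.push_back(in); }

    // Pops all inputs with targetTick <= serverTick, oldest first.
    std::deque<Input> takeDue(int serverTick) {
        std::deque<Input> due;
        while (!_pending.empty() && _pending.front().targetTick <= serverTick) {
            due.push_back(_pending.front());
            _pending.pop_front();
        }
        return due;
    }

private:
    std::deque<Input> _pending;
};
```

With this shape, "process all 3 inputs at once" falls out naturally: the tick-7 update calls takeDue(7) and applies whatever comes back, in order.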
  3. Yes, that's what I meant. I don't know how I wrote "tick rate" instead :P

So yes, it also seems easier to implement if the server tells each client the difference, instead of the client having to compute it manually! Thanks for your advice!
  4.   That's why I say clients send commands for the future. If the client knows it's 6 steps away from the server, and it's currently client tick 22, the client will send a command for tick 28.   So the client needs to learn how many steps he is away from the server. Would adding a current tick rate variable to client inputs and server world states suffice? The client can make the subtraction and find the answer.
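The arithmetic described above is tiny, but worth pinning down; a sketch with illustrative names (whichever side ends up computing the difference):

```cpp
// If the client learns it is 'stepsAhead' steps away from the server's
// processing point, it stamps each command for that far into the future.
// Names are mine, not the thread's.
int commandTick(int clientTick, int stepsAhead) {
    return clientTick + stepsAhead; // e.g. client tick 22, 6 steps away -> 28
}
```

This matches the example in the post: at client tick 22, being 6 steps away from the server, the command is sent for tick 28.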
  5. I don't quite understand this.

If I understand correctly, the server will keep a counter _simulationTickCount which increments on every physics update? Clients will do this as well, and send the current tick count with every input packet. Then the server applies that input at that tick rate, based on its own count (_simulationTickCount).

But, due to latency, the server will always be ahead of the clients, right? So when the server receives an input with tick count 15 (from client A), it might actually be at _simulationTickCount 30. What does it do in that case? Furthermore, another client, B, which has more lag than client A, sends his tick count 15 when the server is at tick 30... What should the server do?

============

I also have another issue. Currently, when an input is received, the server sets the rigid body's velocity of the client to a certain value, updates the physics world, and then resets it back to zero. This causes the local client to always be ahead of the server... (and because of corrections, the player's model "jumps" to the corrected position).

But from what I read, the server should be ahead of the client. This makes me think that what I'm doing above is wrong. Should the server assume that when an input is received (e.g. RIGHT arrow pressed) it remains active until a packet containing RIGHT as not pressed is received?
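The last question above (an input staying active until a release arrives) can be sketched as keeping per-client keyboard *state* on the server rather than one-shot events; every name here is an illustrative assumption:

```cpp
// Keyboard state as last reported by a client; extend with whatever keys the
// game actually uses.
struct KeyboardState {
    bool right = false;
    bool left = false;
};

// The server remembers the latest state per client and keeps applying it on
// every simulation tick until a newer packet replaces it, instead of applying
// a velocity for one frame and resetting it to zero.
class ClientInputState {
public:
    void onPacket(const KeyboardState& s) { _latest = s; }

    // Called every server tick, whether or not a packet arrived this tick.
    float desiredVelocityX(float playerSpeed) const {
        float v = 0.0f;
        if (_latest.right) v += playerSpeed;
        if (_latest.left)  v -= playerSpeed;
        return v;
    }

private:
    KeyboardState _latest;
};
```

The key property is that desiredVelocityX() returns the same value on every tick between the press packet and the release packet, so the server's motion no longer depends on packets arriving every frame.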
  6. Then your game is broken on that server. An occasional timestep that takes longer might be OK, but if this happens with any frequency, then your hardware spec and software needs mismatch. You should at that point detect the problem, show a clear error message to the user, and end the game.

Alright. So, yes, I can assume that update() takes less than 16 ms. If not, I'll need new hardware :)

So I've added this after the accumulator loop:

    std::this_thread::sleep_for(std::chrono::microseconds(
        static_cast<int>((TIME_STEP - accumulator) * 1'000'000.0f)));
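As a side note on that conversion: seconds become microseconds with a factor of 1,000,000 (six zeros), and clamping the result at zero covers the case where the accumulator overran the step, so the thread simply skips the sleep. A small sketch with assumed names:

```cpp
// Microseconds left in the current step, clamped at zero. timeStep and
// accumulator are in seconds, as in the post's loop; names are illustrative.
long long sleepMicros(float timeStep, float accumulator) {
    float remainingSeconds = timeStep - accumulator; // may be negative
    if (remainingSeconds <= 0.0f)
        return 0;                                    // overran: don't sleep
    return static_cast<long long>(remainingSeconds * 1'000'000.0f);
}
```

A negative remainder producing a zero-length sleep is the behavior asked about in a later post here: the loop just runs the next step immediately.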
  7. You are correct. The physics / collision detection is only on the server. Clients buffer 2 states from the server and interpolate between them. But this is temporary; I will implement it on the client as well, after I'm done with this.

After reading (and hopefully understanding) the article, I've modified my code to look like:

This is Game::update(float):

Is it correct this time? Thanks to everyone for the help.
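Since the modified Game::update() code did not survive in the post, here is a generic accumulator sketch in the spirit of Gaffer's "Fix Your Timestep" article, not the author's actual implementation:

```cpp
// Generic fixed-timestep accumulator: feed in real frame time, get back how
// many fixed simulation steps to run. All names are illustrative.
class FixedStepper {
public:
    explicit FixedStepper(float timeStep) : _timeStep(timeStep) {}

    // Returns the number of fixed steps to simulate for this frame.
    int advance(float frameTime) {
        if (frameTime > 0.25f)
            frameTime = 0.25f; // clamp to avoid the "spiral of death"
        _accumulator += frameTime;
        int steps = 0;
        while (_accumulator >= _timeStep) {
            _accumulator -= _timeStep;
            ++steps;
        }
        return steps;
    }

    // Blend factor in [0, 1) for rendering between the previous and the
    // current simulation state.
    float alpha() const { return _accumulator / _timeStep; }

private:
    float _timeStep;
    float _accumulator = 0.0f;
};
```

The renderer interpolates with alpha() between the last two simulated states, which is what makes the fixed simulation rate invisible at arbitrary frame rates.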
  8. Not really, for two reasons: 1) Simulation takes some time, so you really want to be using a monotonic clock to calculate the "sleep until" time, rather than assume 16 ms per step. 2) This does not synchronize the clients with the server in any way. The clients need to run the simulation at the same rate (although graphics may be faster or slower). Separately: what does the server do, then? Wait for the input? That means any player can pause the server by simply delaying packets a bit. In general, you don't want to stop, block, or delay anything in a smooth networked simulation. If you're worried about single packet losses, you can include the commands for the last N steps in each packet -- so, if you send packets at 30 Hz and simulate at 60 Hz, you may include input for the last 8 steps in the packet. This will use some additional upstream bandwidth, but that's generally not noticeable, and it generally RLE-compresses really well. Being able to use the same step numbers on client and server to know "what time" you're talking about is crucial. Until you get to the same logical step rate on client and server, you'll keep having problems with physics sync. Gaffer's article is almost exactly like the canonical game loop article; using either is fine.

1) I understand. I can subtract the time of the update() function, like braindigitalis suggested. But let's say the whole update() function took more than 16 ms. The result would be negative, so the thread wouldn't sleep at all. Is this OK? (I don't think this will happen, as it now takes ~0.4 ms. But things might change in the future, so...) Can I assume that the server's update function won't take more than 16 ms? If it takes longer, is it okay that the thread won't sleep at all?

2) Yes, this does not synchronize the clients. I will have to implement this on the client side as well.

No, the server's game loop does not wait for inputs. There is a separate thread that listens for inputs, and when they are received they are passed to the game loop. By the way, the clients gather inputs every frame and send them every 33 ms.
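The "include the commands for the last N steps in each packet" idea from the quoted reply could look roughly like this; the Command layout and all names are assumptions:

```cpp
#include <deque>
#include <cstddef>

// One per-step command; the real game would pack its actual input flags here.
struct Command {
    int tick;
    unsigned char buttons;
};

// The client keeps a short rolling history and repeats it in every outgoing
// packet, so losing a single packet loses no commands.
class CommandHistory {
public:
    static constexpr std::size_t N = 8; // steps repeated per packet (30 Hz
                                        // send rate, 60 Hz sim, per the quote)

    void record(const Command& c) {
        _history.push_back(c);
        if (_history.size() > N)
            _history.pop_front();
    }

    // Contents of the next packet: up to the N most recent steps.
    const std::deque<Command>& outgoing() const { return _history; }

private:
    std::deque<Command> _history;
};
```

On the receiving side the server simply ignores command ticks it has already applied, so the redundancy costs bandwidth (which, as the quote notes, compresses well) but no extra logic beyond a "last applied tick" per client.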
  9. This will only be a 60 Hz step rate if you subtract the time taken to run update() from the 16 ms. Also, if your update takes more than 16 ms, you can end up in a "spiral of death". The links provided about how to fix your timestep are the correct approach, for these reasons and more.

I didn't understand much from that article... But I found this one: I'll work on it tomorrow.
  10. At the end of update() on the server, I do std::this_thread::sleep_for(16ms); and then update the physics engine using the delta time from the previous frame. Doesn't this guarantee a 60 Hz simulation step rate?

About the inputs, I don't think I can do this easily, because if packets are lost, the client resends them, which would complicate things a lot. I believe what I did above is sufficient (setting the velocity based on the inputs received for that single frame).
  11. Hello again. I have come up with a solution:

    // update the world
    WorldState worldState;
    for (std::size_t i = 0; i < _entities.size(); ++i) {
        auto& entity = _entities[i];
        auto client = _clients[i];

        entity.rigidBody->applyCentralForce(-GRAVITY);

        float dirX = 0.0f;
        float dirZ = 0.0f;
        bool movingForward = false;

        for (auto& inputState : newInputs[client->id()]) {
            for (auto& input : inputState.inputs) {
                if (input.movingForward) {
                    movingForward = true;
                    dirX += input.dir.x;
                    dirZ += input.dir.y;
                }
                entity.dir = input.dir;
            }
            _lastInputIds[client->id()] =; // this can be optimized
        }

        if (movingForward)
            entity.rigidBody->setLinearVelocity(btVector3(
                dirX * cfg.playerSpeed(),
                entity.rigidBody->getLinearVelocity().y(),
                dirZ * cfg.playerSpeed()));
        else
            entity.rigidBody->setLinearVelocity(btVector3(
                0.0f,
                entity.rigidBody->getLinearVelocity().y(),
                0.0f));

        worldState.addEntityData(client->id(), entity);
    }
    _world->stepSimulation(dt);

Essentially what I am doing is calculating the sum of the direction vectors over all input packets and setting the velocity (for that frame) based on that sum.

Example: let's say the player's speed is 20. Two inputs arrive with vectors (1, 0) and (0, 1). So: dirX = 1 (1 + 0), dirZ = 1 (0 + 1). The velocity will be set to Vector3(1, 0, 1) * speed.

In other words, instead of moving the player gradually, I am setting a larger velocity which has the same effect.

The other solution is to manually translate the player, but I don't like this because if many input packets are sent together, collisions may be skipped...
  12. Hello. Let me start with a few details about the game I'm developing:

  • real time
  • Client - Server
  • Server is the authority
  • Game is 3D but played on the XZ plane (top-down view)

The client captures a state of the keyboard every frame (an input), and every 33 ms sends a collection of those states to the server. The server receives those inputs and applies them. So far what I had was this (on the server):

    for (std::size_t i = 0; i < _entities.size(); ++i) {
        auto& entity = _entities[i];
        auto client = _clients[i];

        // for each input of the client, update its position and looking direction
        for (auto& inputState : newInputs[client->id()]) {
            for (auto& input : inputState.inputs) {
                if (input.movingForward) {
                    entity.x += input.dir.x * cfg.playerSpeed() * dt;
                    entity.y += input.dir.y * cfg.playerSpeed() * dt;
                }
                entity.dir = input.dir;
            }
            _lastInputIds[client->id()] =;
        }
    }

As you can see, the client does not have authority over his position or speed - only his direction. But now I want to use Bullet for physics, because I will need collision detection. I have changed the above code to:

    for (std::size_t i = 0; i < _entities.size(); ++i) {
        auto& entity = _entities[i];
        auto client = _clients[i];

        auto& inputs = newInputs[client->id()];
        if (inputs.empty())
            entity.rigidBody->setLinearVelocity(btVector3(0.0f, 0.0f, 0.0f));

        for (auto& inputState : inputs) {
            for (auto& input : inputState.inputs) {
                if (input.movingForward) {
                    auto velX = input.dir.x * cfg.playerSpeed();
                    auto velZ = input.dir.y * cfg.playerSpeed();
                    entity.rigidBody->setLinearVelocity(btVector3(velX, 0.0f, velZ));
                } else {
                    entity.rigidBody->setLinearVelocity(btVector3(0.0f, 0.0f, 0.0f));
                }
                entity.dir = input.dir;
            }
            _lastInputIds[client->id()] =;
        }
    }
    ...
    _world->stepSimulation(dt); // this is the bullet world

Now I set the linear velocity of the entity's rigid body and afterwards call _world->stepSimulation. But this is not good, because the client sends multiple inputs packed together; by doing this I am ignoring all of these inputs except the last one..!

If I try to update the world inside the loop, I will be updating physics for all the other rigid bodies in the world, which I do not want. What I want is to somehow update the same rigid body multiple times without updating any other rigid bodies. Is this the way it is normally done? Does anyone know a way to do this in Bullet?

Thanks a lot.

Edit: I can, of course, move the bodies manually using btRigidBody::translate. But is this a good solution?
  13. I am making a client-server multiplayer game, where the server has the authority. I am currently at the point where I must implement client-side interpolation.

The client sends Input packets to the server. Each has a unique ID, and the IDs are increasing. The server responds with WorldState packets. Each has an ID which corresponds to the ID of the latest input packet processed.

Now when the client receives a state update, it needs to interpolate, beginning from the latest received state, and apply the inputs that have not yet been acknowledged by the server. According to this, each input command also contains its duration in milliseconds (I haven't implemented this yet).

The client can have a button pressed for some duration. So when button A is pressed, an input packet is sent with the button_a flag set to true. When the button is released, another input packet is sent with the button_a flag set to false.

My question is: is the way I handle inputs correct? Because the way I am doing it, if one of the packets takes more time to arrive than the other, the server will apply the input for more or less time than the actual duration of the command. Also, how can I measure command duration this way? I need this for client-side interpolation. The way I do it, command duration does not make sense for the button press, only for the release.

What I could do is not send a packet when the button is pressed, but only when it is released. This way I can measure the milliseconds for which it was pressed and send it as one command. But this has the downside that if the client presses a button and does not release it for a long time, the server will receive it with a big delay.

Or maybe, instead of sending the command duration, I could send the number of seconds (milliseconds included, of course) elapsed since the game started? This way it would be easy to calculate the command duration, if I am not mistaken.

So, what is a good way to handle inputs that can happen for a period of time?
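The "time elapsed since the game started" variant proposed above can be sketched like this (names are illustrative); the server recovers the hold duration from the two timestamps, independent of how long each packet took to arrive:

```cpp
// One button transition, stamped with the client's game clock.
struct ButtonEvent {
    unsigned int timeMs; // milliseconds since the game started (client clock)
    bool pressed;        // true on press, false on release
};

// Duration the button was held, computed on the server from the pair of
// events; packet latency no longer distorts it.
unsigned int holdDurationMs(const ButtonEvent& press, const ButtonEvent& release) {
    return release.timeMs - press.timeMs;
}
```

This only gives a consistent duration because both timestamps come from the same (client) clock; comparing a client timestamp against the server's clock would reintroduce the latency problem.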
  14. I found the problem. On the server I should have initialized the IPaddress structure using this function:

    SDLNet_ResolveHost(&ip, nullptr, 12000);

instead of setting the fields manually.
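For reference, a minimal sketch of that fix (assuming the SDL2_net API; the function name and error handling are mine): passing nullptr as the host makes SDLNet_ResolveHost fill the address for a listening socket, which is what setting the fields by hand got wrong.

```cpp
#include "SDL_net.h" // SDL2_net; the include path may differ per install

// Open a TCP server socket on the given port. Returns nullptr on failure.
TCPsocket openServerSocket(Uint16 port) {
    IPaddress ip;
    // nullptr host => resolve as a listening address (server side)
    if (SDLNet_ResolveHost(&ip, nullptr, port) == -1)
        return nullptr;
    return SDLNet_TCP_Open(&ip); // nullptr on failure
}
```

The server would then poll SDLNet_TCP_Accept on the returned socket to pick up incoming client connections.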
  15. I am making a simple client-server application. For now, I am trying to create a server and a client, and make the client connect to the server. When this is done, the server should print a message.

-- Server --

-- Client --

First I run the server. Then I run the client; it prints but the server does not seem to receive the connection request from the client. No errors are printed, however (really strange...).

If I try to change the IP that the client connects to, to the local IP of the PC the server is running on (192.168.x.y) or to localhost, I get "Couldn't connect to remote host" from the client... But why? Shouldn't these work the same way as

What troubles me is that with the client seems to connect to the server (no errors on the client), but the server never receives the request. Maybe it's blocked by a firewall, or am I doing something really wrong?