Tonyyyyyyy

Members
  • Content count

    7
  • Joined

  • Last visited

Community Reputation

110 Neutral

About Tonyyyyyyy

  • Rank
    Newbie
  1. Authoritative Server-side Input

    EDIT: Issue solved! Thanks for all the help!
  2. Authoritative Server-side Input

    [quote name='Angus Hollands' timestamp='1351712706' post='4995921'] This may be similar to a problem I encountered. I noticed that when testing remotely over the internet, [i]local[/i] clients were working correctly, but remote players were experiencing large prediction errors. It became evident that a factor was affecting the simulation. In my case, I couldn't easily "rewind" the physics simulation. Prior to becoming aware of the bug, I was simulating inputs in real time as they were received. The problem here is that upstream latency shifts the inputs between server and client: the client stores them immediately at the current tick, whereas the server may receive and store them several ticks later, resulting in a discrepancy. For me, as I don't have full access to the physics system, fixing this involved forward-predicting, on the client, when the inputs would be received by the server. So if the upstream latency is 5 game ticks, the client predicts the results of those inputs and stores them at the current tick + latency (5 ticks). For you, as you're working in C++, one can presume you have more control over the physics steps: you'd send the intended tick number with the input packet, and the server would "rewind" to that state and simulate the time step. This would account for any latency. [/quote] Sorry for this late reply! It mostly works now, except that I get some floating-point issues. In order to make my server frame-rate agnostic, the client sends the "simulation time" to the server, which is basically the time the client used to multiply the movement velocity (V*t). The server then subtracts 0.0166667 from this value each frame (the server runs at 60 FPS), or the remaining value if it is less than 0.0166667. Unfortunately, this causes some small floating-point precision errors.
    Add in packet drops (if packet #2 was dropped, then when I receive packet #3, I first process #2 using the information from packet #1 and then process #3), and the client slowly becomes inaccurate. Is there something I'm doing wrong here? Is it necessary for the client to also send its predicted position, and have the server decide whether the error is acceptable, resetting its position to the client's if it is? Or should the server just be the absolute authority? Each of my input packets is currently 10 bytes and 5 bits; adding the predicted position would grow it to 22 bytes and 5 bits, effectively doubling the original size. Just for movement, I'd be sending out 5.4 kbit/second. Thanks!
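One way to avoid the floating-point drift described above is to stop exchanging a float "simulation time" altogether and instead exchange whole simulation ticks: integers can be accumulated and consumed with no rounding error, and the float step is derived once at the end. A minimal sketch of that idea (Python for brevity; the class and field names are illustrative, not from the original code):

```python
# Hypothetical sketch: instead of sending a float duration and subtracting
# 1/60 s from it every server frame (which accumulates rounding error),
# the client sends an integer count of fixed ticks its input was held for.

TICK_RATE = 60              # assumed server simulation rate (Hz)
TICK_DT = 1.0 / TICK_RATE   # seconds per tick, derived once

class ServerInputQueue:
    def __init__(self):
        self.pending_ticks = 0  # whole ticks of client input still to simulate

    def on_input_packet(self, num_ticks):
        # client reports how many whole ticks this input lasted
        self.pending_ticks += num_ticks

    def step(self):
        # consume exactly one tick per server frame; integer bookkeeping
        # means the total simulated time can never drift
        if self.pending_ticks > 0:
            self.pending_ticks -= 1
            return TICK_DT  # time to advance the player this frame
        return 0.0

q = ServerInputQueue()
q.on_input_packet(3)        # client held the input for 3 ticks
advanced = 0.0
for _ in range(5):          # 5 server frames pass
    advanced += q.step()
# exactly 3 ticks' worth of time was consumed, no matter the frame count
```

The float subtraction in the original scheme compounds error every frame; here the only float operation is the final multiply-out, so client and server agree on the simulated duration exactly.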
  3. Authoritative Server-side Input

    [quote name='KnolanCross' timestamp='1351612582' post='4995458'] Hmm, I am sorry, but I am not really getting your implementation. Are you receiving the position from the client and assuming it is right? Also, how are you calculating the position? Are you calculating the position assuming the client has a set FPS? I believe you should be using a time-difference approach; this way the FPS would not affect the math at all. I always use something like this: nextX = currentPos.fX + (cos(angle) * moveSpeed * timeElapsed); nextY = currentPos.fY + (sin(angle) * moveSpeed * timeElapsed); This way, assuming the latency is nearly constant, the amount of time you move should be nearly the same, and you will end up in about the same position (a small error is OK). [/quote] Hi, Knolan, thanks for the reply! For input, the client sends [b]only[/b] the keyboard state to the server. On the client, I multiply the movement vector by the time elapsed since the last frame, and do the same on the server. Unfortunately, this is where I think the error comes in: the timing on the server and the client may not be exactly the same, and that causes a discrepancy.
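The quoted time-difference formula can be sanity-checked in a few lines: if position is advanced by speed multiplied by elapsed time, two simulations running at different frame rates cover (almost) the same distance once the same total time has passed. A small illustrative sketch in Python (the formula is the one quoted above; everything else is made up for the demonstration):

```python
import math

def step_position(x, y, angle, move_speed, dt):
    """Advance a position along a heading by speed * elapsed time
    (the time-difference formula from the quote, frame-rate independent)."""
    return (x + math.cos(angle) * move_speed * dt,
            y + math.sin(angle) * move_speed * dt)

# One second of movement at 5 units/s, simulated at 60 FPS...
x, y = 0.0, 0.0
for _ in range(60):
    x, y = step_position(x, y, 0.0, 5.0, 1.0 / 60.0)

# ...and the same second simulated at 30 FPS
x2, y2 = 0.0, 0.0
for _ in range(30):
    x2, y2 = step_position(x2, y2, 0.0, 5.0, 1.0 / 30.0)

# both end up ~5 units along x; only tiny float error separates them
```

This is exactly why the residual discrepancy in the post above has to come from *when* the inputs are applied (latency shifting the start/stop times), not from the frame rates themselves.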
  4. After hours of googling and countless re-reads of Valve's networking documentation, I simply can't get it to work. [img]http://public.gamedev.net//public/style_emoticons/default/sad.png[/img] I implement prediction in my game as follows (I'm probably doing something extremely stupid):

    [CODE]
    struct InputPacket
    {
        private byte _id;
        bool forward;       // > 0
        bool forward_zero;  // == 0
        bool right;         // > 0
        bool right_zero;    // == 0
        bool jumping;
        float pitch;
        float yaw;

        public InputPacket(byte id, int forward, int strafe, float pitch, float yaw, bool jumpDown)
        {
            _id = id;
            this.forward = forward > 0;
            this.forward_zero = forward == 0;
            this.right = strafe > 0;
            this.right_zero = strafe == 0;
            this.jumping = jumpDown;
            this.pitch = pitch;
            this.yaw = yaw;
        }

        public Packet CreatePacket()
        {
            // creates a packet...
        }
    }

    Vector3 pastpositions[256];

    mainLoop() // 60 FPS
    {
        if (shouldUpdate) // input is sampled at 30 FPS
        {
            input.sample();

            int forward = 0;
            int right = 0;
            if (input.forward.down) forward++;
            if (input.back.down) forward--;
            if (input.strafe_right.down) right++;
            if (input.strafe_left.down) right--;

            // set player input
            playerObj.Forward = forward;
            playerObj.Strafe = right;

            // set player direction
            playerObj.Yaw = camera.yaw;
            playerObj.Pitch = camera.pitch;
            playerObj.ForwardMove = camera.forwardmove;
            playerObj.RightMove = camera.rightmove;
        }

        // goes before because some values may be changed
        if (shouldUpdate)
            sendMessage(new InputPacket(...).CreatePacket());

        // exact same code as the server
        playerObj.Update(elapsedTime);

        if (shouldUpdate)
        {
            pastpositions[counter++] = playerObj.Position;
        }
    }

    receivePos(byte id, Vector3 pos)
    {
        if (pastpositions[id] != pos)
        {
            // client will interpolate position to match this
            playerObj.ServerPos = pos;
        }
    }
    [/CODE]

    This works [b]pretty well[/b], except every single time the prediction is off by anywhere from 0 to, at most, 1 unit. The error increases with ping; I get around 0.3 units of error at 50 ms.
    After reading the Valve documentation, though, it seems they get it right most of the time. I strongly suspect that what's causing my error is that the timing on the server and client may be different. For example, say the client starts moving at time t = 0 and stops moving at t = 1. The client thinks it has moved for 1 second and advances the position accordingly. However, due to network latency, the first packet may arrive at t = 1, and the packet that says "I stopped moving" may arrive at t = 2.25. In the server's eyes, the client moved for 1.25 seconds, and thus an error has been produced. I noticed that Valve includes a duration in their packets, but wouldn't that throw the prediction off if the frame rate is variable? Seeing as many players don't have a constant FPS, it boggles my mind how they rarely get prediction errors. Even if I were to use the time of the last frame (as Valve states they do), how could I split that time between the player updates? As you can see, my input is sampled at 30 FPS while my player is updated at 60 FPS, so I have to somehow intelligently split the time across the extra frames. I guess what worries me most is that a changing FPS will royally screw over all my guesswork. Thanks in advance!
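For what it's worth, the usual answer to "how do I split a variable frame time into fixed player updates" is a fixed-timestep accumulator: the render loop runs at whatever FPS it gets, while the simulation only ever advances in fixed-size steps and banks the remainder for the next frame. A sketch of the generic pattern (Python, hypothetical names; this is the standard technique, not Valve's actual implementation):

```python
# Fixed-timestep accumulator: variable render frames, fixed simulation steps.
FIXED_DT = 1.0 / 60.0  # simulation step size

def simulate(state, dt):
    state["time"] += dt  # stand-in for the real player update
    return state

def run_frames(frame_times):
    state = {"time": 0.0}
    accumulator = 0.0
    for frame_dt in frame_times:        # frame_dt varies with the FPS
        accumulator += frame_dt
        # take as many whole fixed steps as the banked time allows;
        # whatever is left carries over to the next frame
        while accumulator >= FIXED_DT:
            state = simulate(state, FIXED_DT)
            accumulator -= FIXED_DT
    return state, accumulator

# A wildly uneven frame sequence still produces only whole fixed steps:
state, leftover = run_frames([0.016, 0.040, 0.005, 0.033])
```

Because the simulation never sees a step other than FIXED_DT, a fluctuating FPS changes *when* steps happen but never *how much* time each step simulates, which is what keeps client and server math identical.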
  5. Rendering Edge on 3D Quad

    [quote name='Tom KQT' timestamp='1336629090' post='4938889'] I'm quite sure in Minecraft it isn't a textured quad but a 3D model. And I cannot think of any way to "fake" those sides in DX9 (XNA) without actually having them in the model - but that doesn't mean such a way doesn't exist ;) [/quote] [quote name='IceBreaker23' timestamp='1336629184' post='4938890'] There are 2 ways to achieve this: create a mesh for every weapon, or make a voxel system. I think in Minecraft the texture is getting voxelized (is this a word? ^^). I can't think of any other possible way... [/quote] I took a look in MCP a bit, and it appears that Notch draws quads with a "cross-hatching" method. By that, I mean I think he draws 16 horizontal and 16 vertical quads, and that allows for the thickness.
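One plausible reading of that cross-hatching observation, sketched below: for a 16x16 item texture, emit 16 one-texel-wide vertical slices and 16 one-texel-tall horizontal slices, giving the flat image enough geometry to carry a visible edge. This is a guess at the layout (Python, item-local coordinates), not decompiled Minecraft code:

```python
# Hypothetical cross-hatch layout for a 16x16 item texture: one quad per
# texel row and one per texel column, each offset slightly in depth so the
# silhouette reads as solid from the side.

TEX_SIZE = 16
DEPTH = 1.0 / TEX_SIZE  # assumed item thickness in item-local units

def crosshatch_quads():
    quads = []  # each quad: (x0, y0, x1, y1, z) in [0, 1] item space
    for i in range(TEX_SIZE):
        t = i / TEX_SIZE
        # vertical slice: full height, one texel wide
        quads.append((t, 0.0, t + 1.0 / TEX_SIZE, 1.0, t * DEPTH))
        # horizontal slice: full width, one texel tall
        quads.append((0.0, t, 1.0, t + 1.0 / TEX_SIZE, t * DEPTH))
    return quads

quads = crosshatch_quads()  # 16 vertical + 16 horizontal = 32 quads
```

Each slice would sample the corresponding texel strip of the item texture, so the "edge" color automatically matches the adjacent face texels, which is exactly the effect described in the question below.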
  6. I want to use textures on quads in my game instead of an actual model (for the HUD), so what I currently do is make a quad, draw the texture onto it, and then rotate it a bit. However, this gives the impression that the tool is flat and 2D and doesn't have any width to it. The best example I can think of is probably Minecraft. If you hold a tool in Minecraft (such as a pickaxe), the texture is drawn and it has an "edge". The color of the edge isn't just a random color; rather, it is about the same color as the edge on the actual face of the tool. Images: Edge highlighted: [img]http://i.stack.imgur.com/7j32v.png[/img] Full image: [img]http://i.stack.imgur.com/GhXOE.png[/img] How can I do this with XNA? As stated before, I currently construct a quad and render an image to it.
  7. I'm trying to render an image of a gun onto a textured quad, which will be part of my HUD (think Minecraft or Ace of Spades). However, I just can't get the image to rotate or have a Z value. Here's some code:

    [code]
    public void Render()
    {
        // If we have an active screen, don't draw the rest of the HUD; return.
        if (displayScreen != null)
        {
            displayScreen.Draw(spriteBatch);
            return;
        }

        // Test
        gunBillboardEffect.Parameters["World"].SetValue(Matrix.Identity);
        gunBillboardEffect.Parameters["View"].SetValue(Matrix.Identity);
        //gunBillboardEffect.Parameters["Projection"].SetValue(Cameras.CameraManager.ActiveCamera.Projection);
        gunBillboardEffect.Parameters["Projection"].SetValue(
            Matrix.CreateOrthographic(Globals.GameInstance.GraphicsDevice.Viewport.Width,
                                      Globals.GameInstance.GraphicsDevice.Viewport.Height,
                                      0, 4000f));
        gunBillboardEffect.Parameters["BillboardTexture"].SetValue(tempBillboard);

        foreach (EffectPass pass in gunBillboardEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            DrawBillboard();
        }

        // Draw the normal HUD.
        spriteBatch.Begin();
        spriteBatch.End();
    }

    private void DrawBillboard()
    {
        VertexBillboard[] billboardVertices = new VertexBillboard[4];
        Vector3 v1 = Vector3.Zero;
        billboardVertices[0] = new VertexBillboard(new Vector3(v1.X + 10, v1.Y + 10, 0), new Vector2(1, 1));
        billboardVertices[1] = new VertexBillboard(new Vector3(v1.X + 10, v1.Y - 10, 0), new Vector2(0, 1));
        billboardVertices[2] = new VertexBillboard(new Vector3(v1.X - 10, v1.Y + 10, 0), new Vector2(1, 0));
        billboardVertices[3] = new VertexBillboard(new Vector3(v1.X - 10, v1.Y - 10, 0), new Vector2(0, 0));

        short[] billboardIndices = new short[] { 3, 2, 0, 3, 0, 1 };

        Globals.GameInstance.GraphicsDevice.DrawUserIndexedPrimitives<VertexBillboard>(
            PrimitiveType.TriangleList, billboardVertices, 0, 4, billboardIndices, 0, 2);
    }
    [/code]

    The code above draws the image on the screen. However, if I add any rotation to the World matrix, or specify a Z value for the vertices, the image disappears.
    I'm not sure if this is a problem with my code, or if I'm just misunderstanding something.

    VertexBillboard:
    [code]
    public struct VertexBillboard : IVertexType
    {
        Vector3 _position;
        Vector2 _textureCoordinate;

        public static readonly VertexElement[] VertexElements = new VertexElement[]
        {
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
        };

        public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(VertexElements);

        VertexDeclaration IVertexType.VertexDeclaration
        {
            get { return VertexDeclaration; }
        }

        public VertexBillboard(Vector3 position, Vector2 texcoords)
        {
            _position = position;
            _textureCoordinate = texcoords;
        }

        public Vector3 Position
        {
            get { return _position; }
            set { _position = value; }
        }

        public Vector2 TextureCoordinate
        {
            get { return _textureCoordinate; }
            set { _textureCoordinate = value; }
        }

        public static int SizeInBytes
        {
            get { return sizeof(float) * 5; }
        }
    }
    [/code]

    GunBillboard.fx:
    [code]
    float4x4 World;
    float4x4 View;
    float4x4 Projection;

    Texture BillboardTexture;
    sampler BillboardSampler = sampler_state
    {
        texture = <BillboardTexture>;
        magfilter = POINT;
        minfilter = POINT;
        mipfilter = POINT;
        AddressU = WRAP;
        AddressV = WRAP;
    };

    struct VertexShaderInput
    {
        float4 Position : POSITION0;
        float2 TextureCoords : TEXCOORD0;
    };

    struct VertexShaderOutput
    {
        float4 Position : POSITION0;
        float2 TextureCoords : TEXCOORD0;
        float4 Color : COLOR0;
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        float4 worldPosition = mul(input.Position, World);
        float4 viewPosition = mul(worldPosition, View);
        output.Position = mul(viewPosition, Projection);
        output.TextureCoords = input.TextureCoords;
        output.Color.rgb = float3(1, 1, 1);
        output.Color.a = 1;
        return output;
    }

    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        float4 texColor = tex2D(BillboardSampler, input.TextureCoords);
        float4 color;
        color.rgb = texColor.rgb * input.Color.rgb;
        color.a = texColor.a;
        if (color.a == 0)
            clip(-1);
        return color;
    }

    technique Technique1
    {
        pass Pass1
        {
            // TODO: set renderstates here.
            VertexShader = compile vs_2_0 VertexShaderFunction();
            PixelShader = compile ps_2_0 PixelShaderFunction();
        }
    }
    [/code]

    I'm fairly new to writing my own shaders, so please be patient with me. Thanks for any help! -Tony
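A guess at why the quad vanishes when rotated or given a Z value: as I understand XNA's right-handed Matrix.CreateOrthographic, view-space z is mapped to NDC depth via ndc_z = (z + zNear) / (zNear - zFar), and only 0 <= ndc_z <= 1 survives clipping. With zNear = 0 and zFar = 4000, only z in [-4000, 0] is visible, so vertices at z = 0 sit exactly on the near plane, and any rotation that pushes part of the quad to *positive* z clips it. A small Python sketch of that mapping (the formula is my assumption about the XNA matrix, worth verifying against the docs):

```python
# Assumed depth mapping of a right-handed orthographic projection with
# zNear = 0, zFar = 4000 (D3D-style NDC depth range [0, 1]).

def ortho_ndc_z(z, z_near=0.0, z_far=4000.0):
    # view-space z -> NDC depth; visible only when the result is in [0, 1]
    return (z + z_near) / (z_near - z_far)

def visible(z):
    ndc = ortho_ndc_z(z)
    return 0.0 <= ndc <= 1.0

print(visible(0.0))    # True  - exactly on the near plane (borderline)
print(visible(1.0))    # False - rotated slightly toward the camera: clipped
print(visible(-10.0))  # True  - pushed back into the visible range
```

If this is right, the fix would be either a small negative zNear (e.g. -100) or placing the quad at a negative z such as -10 before rotating, so the rotated geometry stays inside the clip volume.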