
mojobojo

Member
  • Content count: 25
  • Joined

  • Last visited

Community Reputation: 199 Neutral

About mojobojo

  • Rank: Member
  1. That appears to have done the trick. Thank you very much.
  2. What you put there is what I am doing. I am saying SDL gives me back a non-zero value for more than one frame. EDIT: Correction, it is similar to what I am doing. I am setting the rotation inside the update loop; does that matter?
  3. So what you are saying is that I should read the relative mouse position on one frame, move by a certain amount, and only look at the relative mouse position again once that amount has been moved? So even if SDL gives me another xrel and yrel on the second frame, I should ignore it?
  4. It gets reset to zero before the input poll.

         internal void ProcessEvents(game_state *GameState)
         {
             SDL_Event SdlEvent;
             player_input *PlayerInput = &GameState->Input.PlayerInputs[0];
             PlayerInput->RelMousePos = v3(0.0f, 0.0f, 0.0f);

             while (SDL_PollEvent(&SdlEvent)) {
                 ProcessSdlInputEvents(PlayerInput, &SdlEvent);
                 switch (SdlEvent.type) {
                     case SDL_QUIT: {
                         GlobalRunning = false;
                         break;
                     }
                     case SDL_WINDOWEVENT: {
                         if (SdlEvent.window.event == SDL_WINDOWEVENT_SIZE_CHANGED) {
                             i32 Width = SdlEvent.window.data1;
                             i32 Height = SdlEvent.window.data2;
                             GameState->CurrentResolution.Width = Width;
                             GameState->CurrentResolution.Height = Height;
                         }
                         break;
                     }
                     default: {
                         break;
                     }
                 }
             }
         }

     I know why I am getting the extra motion; I am just drawing a blank on how to compensate for it. For example, say I am running at 60fps and move the mouse for 1 second: if the input poll puts RelMousePos at 5, that is 60*5 I am going to process, and at 120fps it is 120*5, which means it will move farther. My initial thought was compensating for time with my frame dt (Units * DeltaTime), but that makes the mouse motion feel really bad. (See sketch 1 after this list.)
  5. I am polling for input every frame from my application using SDL_PollEvent. When I receive a mouse motion event I set the relative mouse position:

         // Other input code
         PlayerInput->RelMousePos.x = (f32)SdlEvent->motion.xrel;
         PlayerInput->RelMousePos.y = (f32)SdlEvent->motion.yrel;
         // Other input code

     In my game loop I handle the input like so:

         f32 MouseXRatio = PlayerInput->RelMousePos.x / GameState->CurrentResolution.Width;
         f32 MouseYRatio = PlayerInput->RelMousePos.y / GameState->CurrentResolution.Height;
         f32 Units = 64.0f;
         v3 Amount = v3(-MouseYRatio, MouseXRatio, 0.0f) * Units;
         GameState->CameraRot += Amount;
         // Rotate the view matrix

     The camera rotates fine, feels good to move, etc. However, when I turn on vsync, the sensitivity of the camera goes down. I am a little stuck on how to handle this. (See sketch 1 after this list.)
  6. I set the view and world matrices to an identity matrix.

         D3DXMATRIX IdentityMatrix;
         D3DXMatrixIdentity(&IdentityMatrix);
         Direct3DDevice->SetTransform(D3DTS_WORLD, &IdentityMatrix);
         Direct3DDevice->SetTransform(D3DTS_VIEW, &IdentityMatrix);
  7. I'm trying out some Direct3D stuff coming from OpenGL, and I have a question regarding the projection matrix. I would like the coordinate system to look like so:

         y - 720
         ^
         |
         |
         |
         |
         |
         |
         |
         |
         0,0 ---------------------------------> x - 1280

     Normally in OpenGL I would do something like this:

         // other code....
         glMatrixMode(GL_PROJECTION);
         m4 Projection = Orthographic(0.0f, 1280.0f, 0.0f, 720.0f, 1.0f, -1.0f);
         glLoadMatrixf(Projection.f);
         // other code....

         internal m4 Orthographic(f32 Left, f32 Right, f32 Bottom, f32 Top, f32 Near, f32 Far)
         {
             m4 Result = IdentityMatrix();
             Result.m0[0] = 2.0f / (Right - Left);
             Result.m1[1] = 2.0f / (Top - Bottom);
             Result.m2[2] = -2.0f / (Far - Near);
             Result.m3[0] = -(Right + Left) / (Right - Left);
             Result.m3[1] = -(Top + Bottom) / (Top - Bottom);
             Result.m3[2] = -(Far + Near) / (Far - Near);
             return Result;
         }

     In Direct3D I am doing something similar (or so I think):

         D3DXMATRIX ProjectionMatrix;
         // I know OpenGL is right handed so I make a right handed ortho, right?
         D3DXMatrixOrthoOffCenterRH(&ProjectionMatrix, 0.0f, Width, 0.0f, Height, 1.0f, -1.0f);
         Direct3DDevice->SetTransform(D3DTS_PROJECTION, &ProjectionMatrix);

     The problem is I don't see anything on the screen and can't seem to figure out how it is oriented. (See sketch 2 after this list.)
  8. > Which timer does GetTimeInSeconds use? GetTickCount can have resolutions as low as 55ms.
     >
     > PS: Don't use floating point to measure absolute time. You are using double precision, which will (appear to) solve many problems, but fixed point is the safest choice.

     I use performance counters (SDL_GetPerformanceCounter). I guess I will give fixed point a try. (See sketch 3 after this list.)
  9. It only gets set to that if the time exceeds 250ms.

         if (FrameTime > 0.25f) {
             FrameTime = 0.25f;
         }

     > You can use shaders without VBOs. And you can use VBOs without shaders.

     I wouldn't have known that. While searching for shader examples, all I found were examples using VBOs. (See sketch 4 after this list.)
  10. > This, basically.
      >
      > VBOs are not a magic bullet that you just implement and instantly get improved performance everywhere. Particularly if you update them at runtime, and particularly with OpenGL, you have to know what you're doing, understand CPU/GPU synchronization, understand why you shouldn't try to read from one (and how the compiler can trip you up with this), and even then it may be the case that vertex submission was not even your bottleneck to begin with.
      >
      > Immediate mode (glBegin/glEnd), on the other hand, just lets you blast vertices at the GPU without having to know or understand these things, and the driver works it all out for you.
      >
      > So VBOs can be faster, but it's also easier to do a bad implementation where performance just collapses. I'm not going to read your code (it would have been better if you'd just pastebinned the 2 functions you mentioned) but I would point you at VBO updates as a first place to look.
      >
      > I'd also suggest that you could work some more to isolate the performance differential. So you could write a path that uses VBOs with the fixed pipeline, and another path that uses immediate mode with shaders, which would all enable you to better home in on where your problems are coming from.

      I chose to use VBOs so I could use shaders. I didn't go into it expecting any performance impact one way or the other. I see now it is quite easy to use them wrong.

      > Visible jitter doesn't have to be related to performance at all, but rather to clock/timing issues.

      I have determined that my jitter does indeed have something to do with my frame timing. Perhaps someone could take a look; I used the suggestion from http://gafferongames.com/game-physics/fix-your-timestep/ and may have implemented it wrong:

          f64 Accumulator = 0.0;
          f64 DeltaTime = 1.0 / 60.0;
          f64 CurrentTime = GetTimeInSeconds();
          f64 Time = 0.0;

          GameState->Running = true;
          while (GameState->Running) {
              f64 NewTime = GetTimeInSeconds();
              f64 FrameTime = NewTime - CurrentTime;
              //printf("Frame Time: %fms \r", FrameTime);
              if (FrameTime > 0.25) {
                  FrameTime = 0.25;
              }
              CurrentTime = NewTime;
              Accumulator += FrameTime;

              while (Accumulator >= DeltaTime) {
                  GameState->DeltaTime = DeltaTime;
                  ProcessEvents(GameState); // Processes SDL events like input
                  UpdateGame(&GameMemory);
                  Time += DeltaTime;
                  Accumulator -= DeltaTime;
              }

              RenderGame(&GameMemory);
          }

      Also, I took the day to play around with VBOs and learned some things about instancing and drawing in batches. I improved my render code a bit and made it a bit more sane. (See sketch 5 after this list.)
  11. Maybe I should mention this isn't about jitter in the benchmark. This is a visual jitter, where I can see movement on the screen jitter, and it only occurs with the code that uses VBOs.
  12. I am aware I was doing it wrong; I just wanted to know why it was wrong and how to fix it. I suspect that if the game loop were responsible for the jittering, it would jitter with both renderers and not just the one.
  13. I am in the process of prototyping some ideas and learning some graphics programming. I have 2 separate renderers set up in my project: one uses the fixed-function pipeline, and one uses VBOs and shaders to do my 2D rendering. I am having significant slowdown and jittering with the one using VBOs. I notice it will render a few frames, pause for a few frames, then continue to render. I have attached some source code if someone who is experienced can take a quick look. I included my shaders and both renderers (Render_SDL_GL.cc and Render_SDL_GL_FFP.cc). Both are interchangeable, so you can #include one or the other without having to make any changes to anything. I have also included an image of the performance graph I draw, showing the frame time differences. Also, if you see something else wrong with what I am doing, please do let me know. At request I can upload the entire source. (See sketch 6 after this list.)

      NOTE: In my graph, the longer and more red each bar is, the higher the frame time. The red spike in the graph is me grabbing the window to move it.

      The functions of interest are:
      * DebugDrawTexturedRectangle: draws a rectangle with a texture
      * DebugDrawFilledRectangle: draws a rectangle filled with a color you specify

      If there is anything I need to explain or add to make it easier for the reader, please do let me know. The link to the source can be found here: https://drive.google.com/file/d/0B2hFWVdJkDXgMmYxTjdoY1BneGs/view?usp=sharing
  14. I had suspected multithreading could fix that. It seemed odd to me that the game wasn't affected by the polling of events until I added the networking. I will add some multithreading and see how that goes.

      > Great. Let me know how it goes! It should fix your problem.

      It worked great. The server is sending the client the information pretty well; there is a little bit of choppiness I can tell (something I assume I have to compensate for with interpolation), but overall threading fixed the issue. (See sketch 7 after this list.)

      Changes can be found here: https://github.com/mojobojo/BrickBreaker/commit/5bf00ed22a49b5a788d828d5ce9e2e828ba1afd9

      https://youtu.be/8_aqj4Ncjfk
  15. I had suspected multithreading could fix that. It seemed odd to me that the game wasn't affected by the polling of events until I added the networking. I will add some multithreading and see how that goes.
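
Sketch 1 (posts 4 and 5): a minimal sketch of the usual fix, assuming SDL2 and the v3/f32 types from the posts. The key change is accumulating every motion event received during the frame with += instead of overwriting with =, and applying the total without scaling by frame time: the relative delta is already a distance the mouse moved, not a rate, so it is frame-rate independent on its own.

    #include <SDL2/SDL.h>

    // Inside the per-frame event poll:
    PlayerInput->RelMousePos = v3(0.0f, 0.0f, 0.0f);  // reset once per frame
    SDL_Event SdlEvent;
    while (SDL_PollEvent(&SdlEvent)) {
        if (SdlEvent.type == SDL_MOUSEMOTION) {
            // += instead of =, so several motion events in one frame
            // (common with vsync on) are not lost.
            PlayerInput->RelMousePos.x += (f32)SdlEvent.motion.xrel;
            PlayerInput->RelMousePos.y += (f32)SdlEvent.motion.yrel;
        }
    }
    // Apply the accumulated delta directly, with no DeltaTime factor.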
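Sketch 2 (posts 6 and 7): a minimal sketch of the full transform setup on a D3D9 fixed-function device; Direct3DDevice is assumed to be an initialized IDirect3DDevice9*. Direct3D's clip-space depth runs 0..1 rather than OpenGL's -1..1, so the near/far pair here is an assumption chosen to bracket z = 0, and setting world and view to identity explicitly rules out a stale transform hiding the geometry.

    #include <d3dx9.h>

    D3DXMATRIX Identity, Projection;
    D3DXMatrixIdentity(&Identity);
    // Bottom-left origin with y up, matching the OpenGL version in the post.
    D3DXMatrixOrthoOffCenterRH(&Projection, 0.0f, 1280.0f, 0.0f, 720.0f, -1.0f, 1.0f);
    Direct3DDevice->SetTransform(D3DTS_WORLD, &Identity);
    Direct3DDevice->SetTransform(D3DTS_VIEW, &Identity);
    Direct3DDevice->SetTransform(D3DTS_PROJECTION, &Projection);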
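Sketch 3 (post 8): a minimal sketch of keeping absolute time in integer ticks and converting only the small per-frame difference to floating point, assuming SDL2's performance counter. The absolute counter value never touches a float, so precision does not degrade the longer the program runs.

    #include <SDL2/SDL.h>

    typedef double f64;

    Uint64 Frequency = SDL_GetPerformanceFrequency();
    Uint64 LastTicks = SDL_GetPerformanceCounter();
    for (;;) {
        Uint64 NowTicks = SDL_GetPerformanceCounter();
        // Only the (small) tick difference is converted to seconds.
        f64 FrameTime = (f64)(NowTicks - LastTicks) / (f64)Frequency;
        LastTicks = NowTicks;
        // ... feed FrameTime into the accumulator loop ...
    }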
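Sketch 4 (post 9): a minimal sketch of using a shader without any VBO. A compatibility-profile context is assumed, since glBegin/glEnd requires one, and ShaderProgram is assumed to be an already compiled and linked GLSL program.

    glUseProgram(ShaderProgram);   // the shader applies to immediate-mode vertices too
    glBegin(GL_TRIANGLES);
    glVertex2f(0.0f, 0.0f);
    glVertex2f(1.0f, 0.0f);
    glVertex2f(0.0f, 1.0f);
    glEnd();
    glUseProgram(0);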
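Sketch 5 (post 10): a minimal sketch of the render-interpolation step from the linked Fix Your Timestep article, which is the part of that technique that removes visible stutter when the render rate and the fixed update rate do not divide evenly. render_state, PreviousState, CurrentState, and the extra RenderGame parameter are hypothetical names, not from the post's code.

    while (Accumulator >= DeltaTime) {
        PreviousState = CurrentState;   // keep the last tick's state for blending
        UpdateGame(&GameMemory);        // advances CurrentState by DeltaTime
        Accumulator -= DeltaTime;
    }
    // Blend the two most recent ticks by how far we are into the next one.
    f64 Alpha = Accumulator / DeltaTime;  // in [0, 1)
    render_state Lerped = CurrentState * Alpha + PreviousState * (1.0 - Alpha);
    RenderGame(&GameMemory, &Lerped);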
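Sketch 6 (post 13): the advice quoted in post 10 points at VBO updates as a first place to look. This is a minimal sketch of buffer "orphaning", one common way to rewrite a VBO every frame without stalling while the GPU is still reading the old contents; Vbo, Size, and Vertices are assumptions.

    glBindBuffer(GL_ARRAY_BUFFER, Vbo);
    // Re-specify the data store with NULL: the driver can hand back fresh
    // memory instead of blocking until the GPU finishes with the old data.
    glBufferData(GL_ARRAY_BUFFER, Size, NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, Size, Vertices);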
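Sketch 7 (post 14): a minimal sketch of the threading shape described, using SDL's thread and mutex API; net_context, packet, ReceivePacket, and PushPacket are hypothetical stand-ins for whatever the linked commit actually uses. The thread blocks on the socket so the main loop never does, and hands finished packets over under a mutex for the game to drain each frame.

    #include <SDL2/SDL.h>

    int NetworkThreadMain(void *Data) {
        net_context *Net = (net_context *)Data;
        while (Net->Running) {
            packet Packet;
            if (ReceivePacket(Net->Socket, &Packet)) {  // blocking recv
                SDL_LockMutex(Net->QueueLock);
                PushPacket(&Net->Queue, &Packet);       // main loop drains this queue
                SDL_UnlockMutex(Net->QueueLock);
            }
        }
        return 0;
    }

    // At startup:
    Net.QueueLock = SDL_CreateMutex();
    SDL_Thread *Thread = SDL_CreateThread(NetworkThreadMain, "network", &Net);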