About sirlemonhead

  1. What method would you recommend for limiting frame rate? Sleeping?
  2. Thanks, I'll play around with that :) Yep, my D3D render list is being generated in those update calls - I have a render list struct array that basically lets me group draw calls and the like, so I can try moving things around... maybe preserve my lists and vertex buffers and only clear them when I actually need to update them? I shouldn't have any issues just redrawing the previous frame if nothing game-wise has updated, right? Yeah, that's a very simplified version of the loop - I have some code running in between (checks to see if streaming music has to be updated etc.) so that's why the odd redundant-looking boolean is there.
  3. hmm, I'm probably flipping an old frame with a new frame when I call present, without updating the back buffer?
  4. Hi, I have a game codebase from the late 90s that I updated to work with Direct3D9 and modern Windows operating systems. What I did originally when updating the renderer was set D3D9's presentation parameter PresentationInterval to D3DPRESENT_INTERVAL_ONE, and this works great. I need to do this to limit the framerate, as the game starts to act oddly at high framerates (bouncing items that should remain static, player movement becomes janky, etc.)

  When I did this I had 60 Hz refresh rates in mind, and it all worked fine. But of course, some users like to override this value in their graphics card's display settings, and/or we're now seeing 144 Hz refresh rates, so this isn't working so well anymore. So ideally what I'd like to do is cap the game updates to 60 Hz and let the game render as much as it needs. Should be OK?

  I've seen the few "fix your timestep" articles, but unfortunately I didn't write this game and tend not to go near the gameplay code, so modifying the underlying methods for how entities and whatnot update their velocities etc. is beyond me.

  Here's what I've been trying, to only update the game every 16 ms and allow rendering as fast as possible - only clear the backbuffer and render a new frame every time the game updates, otherwise just redraw the previous frame:

      uint32_t startTime = timeGetTime();
      while (1)
      {
          CheckWindowsMessages();

          bool doupdate = false;
          uint32_t currentTime = timeGetTime();
          if (currentTime - startTime >= 16)
          {
              D3D9_BeginFrame();
              UpdateGame();
              startTime = timeGetTime(); // currentTime;
              doupdate = true;
          }

          if (doupdate)
          {
              D3D_EndFrame();
              /* FrameCounterHandler() is the game's method for handling timer
                 updates, calculating how much time has passed since it was
                 last called, and updating game variables with this value.
                 Lots of fixed point math stuff lurking in here... */
              FrameCounterHandler();
              doupdate = false;
          }

          D3D9_FlipBuffers();
      }

  This doesn't actually work well in reality, as my 4-year-old computer can run a frame of this game at about 3000 fps when vsync is off. What happens is sort of like vsync tearing, but instead of a single shear in the middle, there are multiple shears all down the screen (evenly spaced apart). I guess what I'm asking is: is there a better way to do what I want, without having to pass any sort of delta time changes to the underlying game code? I might have my logic wrong somewhere here... I'm really clueless on this timer stuff. Thanks!
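  For what it's worth, the usual shape for this is an accumulator-style fixed-timestep loop: sample the clock once per iteration, run game updates in whole 16 ms ticks until the accumulated time is used up, and render every iteration regardless. A minimal sketch of just the timing logic (the FixedStepLoop name, the Pump method, and the counters are illustrative, not from the game's code - the counters stand in for calls to UpdateGame() and the render/present pair):

```cpp
#include <cassert>
#include <cstdint>

// Accumulator-style fixed timestep: logic always advances in TICK_MS
// steps, rendering happens once per loop iteration no matter how many
// (or few) logic ticks ran.
struct FixedStepLoop {
    static constexpr uint32_t TICK_MS = 16;   // ~60 logic updates per second

    uint32_t accumulator = 0;
    uint32_t lastTime;
    uint32_t updatesRun = 0;       // stand-in for UpdateGame() calls
    uint32_t framesRendered = 0;   // stand-in for render + Present calls

    explicit FixedStepLoop(uint32_t now) : lastTime(now) {}

    // Call once per render-loop iteration with the current clock value
    // (timeGetTime() in the original code).
    void Pump(uint32_t now) {
        accumulator += now - lastTime;
        lastTime = now;

        // Run as many fixed updates as the elapsed time demands; on a
        // fast machine this is 0 or 1 per rendered frame, and after a
        // long stall it catches up with several ticks in a row.
        while (accumulator >= TICK_MS) {
            accumulator -= TICK_MS;
            ++updatesRun;            // UpdateGame() would go here
        }
        ++framesRendered;            // redraw / Present would go here
    }
};
```

The leftover remainder in `accumulator` is what the "fix your timestep" articles interpolate with; since the gameplay code here can't take a delta, simply redrawing the last frame on 0-tick iterations is the pragmatic fallback.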
  5. Hi, I've got orthographic projection set up in D3D9 with shaders, and I'm also using some code to convert from pixel values (i.e. 0-1024) to device coordinates (-1.0 to 1.0), taking the current resolution into consideration. Basically, I have a virtual resolution on my game menu of 640x480, and even if the actual D3D device is drawing to a 1280x1024 backbuffer, my code will position and scale all my menu elements correctly.

  Unfortunately, there's one case where I have to draw 3 quads together, all lined up touching each other, to basically create a single seamless image comprised of 3 different textures. If my D3D resolution is set to the same as my virtual resolution (640x480), these draw perfectly. If my resolution is, say, 1280x1024, there are either pixel-wide gaps or overlaps, where I'm guessing floating point imprecision is coming into play and D3D isn't positioning things quite correctly.

  Is there any way around this? Would using an old-style RHW vertex format (with shaders) still have the same issue? What I have is great for positioning individual items such as menu graphics or text characters, as none of them have to be flush up against another texture, so any imprecision would never be noticed. Appreciate any help I can get, thanks!
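  One common workaround for seams like this is to snap each quad edge to the real backbuffer's pixel grid before converting to clip space, so that two quads sharing a virtual-space edge compute bit-identical clip coordinates; D3D9's half-pixel offset can be folded into the same step. A hedged sketch (function names are made up, and the exact offset convention depends on how the rest of the pipeline maps texels to pixels):

```cpp
#include <cassert>
#include <cmath>

// Map a coordinate from a virtual menu space (e.g. 640x480) to D3D9 clip
// space. Snapping to the nearest real backbuffer pixel first guarantees
// adjacent quads land on exactly the same edge. The -0.5f is the D3D9
// half-pixel offset that aligns texel centers with pixel centers.
float VirtualXToClip(float vx, float virtualW, float screenW) {
    float px = std::floor(vx * (screenW / virtualW) + 0.5f); // snap to pixel grid
    return (px - 0.5f) * 2.0f / screenW - 1.0f;              // pixel -> [-1, 1]
}

float VirtualYToClip(float vy, float virtualH, float screenH) {
    float py = std::floor(vy * (screenH / virtualH) + 0.5f);
    return 1.0f - (py - 0.5f) * 2.0f / screenH;              // y is flipped in clip space
}
```

The key property: the right edge of one quad and the left edge of the next, fed the same virtual coordinate (even with a little float noise), both snap to the same integer pixel and therefore the same clip value, so no gap or overlap can open up at scaled resolutions.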
  6. I dunno, but one thing I'd change for the future anyway: don't call ->Release() on your D3D object if Direct3DCreate9() returns NULL.

    Lib theora Player + Fmod|DirectSound

    I always found DirectSound's buffer system a pain for video. I've written my own Theora player and I ended up just using XAudio2, which was a hell of a lot easier to get working. It'll work a lot like OpenAL too... any reason you're sticking with DirectSound?
  8. And if you want to differentiate between the left/right alt/ctrl keys etc.:

      case WM_SYSKEYDOWN:
      {
          // handle left/right alt keys
          if (wParam == VK_MENU)
          {
              if (lParam & (1 << 24)) wParam = VK_RMENU;
              else wParam = VK_LMENU;
          }

          // handle left/right control keys
          if (wParam == VK_CONTROL)
          {
              if (lParam & (1 << 24)) wParam = VK_RCONTROL;
              else wParam = VK_LCONTROL;
          }

          // handle left/right shift keys (the extended-key bit isn't set
          // for shift, so query the key state instead)
          if (wParam == VK_SHIFT)
          {
              if ((GetKeyState(VK_RSHIFT) & 0x8000) /*&& (KeyboardInput[KEY_RIGHTSHIFT] == FALSE)*/)
              {
                  wParam = VK_RSHIFT;
              }
              else if ((GetKeyState(VK_LSHIFT) & 0x8000) /*&& (KeyboardInput[KEY_LEFTSHIFT] == FALSE)*/)
              {
                  wParam = VK_LSHIFT;
              }
          }
          break;
      }
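  The extended-key test (bit 24 of lParam) for alt and ctrl can be factored into a small pure helper, which also makes it easy to unit-test outside a window procedure. The VK_* values below are hard-coded from winuser.h so the sketch compiles without windows.h; in real code just include the header and drop the trailing underscores:

```cpp
#include <cassert>
#include <cstdint>

// Virtual-key codes copied from winuser.h (hard-coded here only so this
// sketch is self-contained).
enum {
    VK_SHIFT_    = 0x10, VK_CONTROL_  = 0x11, VK_MENU_     = 0x12,
    VK_LSHIFT_   = 0xA0, VK_RSHIFT_   = 0xA1,
    VK_LCONTROL_ = 0xA2, VK_RCONTROL_ = 0xA3,
    VK_LMENU_    = 0xA4, VK_RMENU_    = 0xA5,
};

// Resolve a generic VK_MENU/VK_CONTROL wParam into its left/right variant
// using bit 24 of lParam (the "extended key" flag), as in the
// WM_SYSKEYDOWN handler above. Shift is deliberately not handled here
// because it needs GetKeyState() rather than the extended bit.
uint32_t ResolveLeftRight(uint32_t wParam, uint32_t lParam) {
    bool extended = (lParam & (1u << 24)) != 0;
    if (wParam == VK_MENU_)    return extended ? VK_RMENU_    : VK_LMENU_;
    if (wParam == VK_CONTROL_) return extended ? VK_RCONTROL_ : VK_LCONTROL_;
    return wParam; // anything else passes through unchanged
}
```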

    DInput, XInput & WM_CHAR

    You shouldn't use DInput for either mouse or keyboard. Just use Raw Input
  10. OK, I seem to have it working now using SetFloat(). Is there any way to set a single float value with SetValue() though? Does it have to be padded out to 4 floats? How would I do that correctly, if it's possible? Thanks for the help so far guys!
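  For anyone finding this later: D3D9 vertex shader constant registers are float4-sized, so if you do want to push a lone float through SetValue(), one approach is to pad it out to a full register first (SetFloat() does the equivalent bookkeeping for you). A hypothetical sketch - Float4 and PadToRegister are made-up names, not D3DX API:

```cpp
#include <cassert>

// A single float4-sized constant register's worth of data.
struct Float4 { float v[4]; };

// Pad a lone float into a full register: x carries the value, y/z/w are
// zeroed and unused by a shader that only reads the scalar.
Float4 PadToRegister(float value) {
    Float4 r = { { value, 0.0f, 0.0f, 0.0f } };
    return r;
}

// Usage (sketch): constantTable->SetValue(device, handle, r.v, sizeof(r.v));
```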
  11. I use two int values for this shader to do a texture animation effect. The ints I pass in are timer values from the game engine. I should be able to use them as floats without much issue. Should I be declaring all my single-value (i.e. single float) globals in my shaders as float4s then? I'm working with shader model 2.0 here, so I'd like to do the most compatible/correct thing for that.
  12. I must have had a 4 component vector in my head when I wrote that :) Any thoughts on why int doesn't work?
  13. OK whoops, seems I made a schoolboy error :D

      case CONST_MATRIX:
          sizeInBytes = sizeof(float) * 4;
          break;

  A D3DMATRIX ain't only 4 floats in size :) So stuff is drawing correctly now, but anything that needs an int passed in isn't.
  14. Hi, I've been using HLSL vertex shaders with a constant table, then using SetInt() or SetMatrix() etc. to pass my shader constants from my own code to the shader. This worked fine. What I want to do though is pass the variables more like how DirectX 8 did it - by specifying constants using an index value rather than a string name. So what I do when I load each vertex shader is the below:

      // now find out how many constant registers our shader uses
      D3DXCONSTANTTABLE_DESC constantDesc;
      vertexShader.constantTable->GetDesc(&constantDesc);

      // we're going to store handles to each register in our
      // std::vector<D3DXHANDLE>, so make it the right size
      vertexShader.constantsArray.resize(constantDesc.Constants);

      // loop, getting and storing a handle to each shader constant
      for (uint32_t i = 0; i < constantDesc.Constants; i++)
      {
          vertexShader.constantsArray[i] = vertexShader.constantTable->GetConstant(NULL, i);
      }

  which seems to work ok. I don't get any errors produced. Then, I pass the variables:

      d3d.effectSystem->SetVertexShaderConstant(d3d.mainEffect, 0, CONST_MATRIX, &matProjection);

  where 0 is the constant index, CONST_MATRIX is an enum type and matProjection is a D3DXMATRIX. This code gets to a function as below:

      uint32_t sizeInBytes = 0;

      switch (type)
      {
          case CONST_INT:
              sizeInBytes = sizeof(int);
              break;
          case CONST_MATRIX:
              sizeInBytes = sizeof(float) * 4;
              break;
          default:
              LogErrorString("Unknown shader constant type");
              return false;
      }

      LastError = vertexShader.constantTable->SetValue(d3d.lpD3DDevice,
          vertexShader.constantsArray[registerIndex], constantData, sizeInBytes);
      if (FAILED(LastError))
      {
          Con_PrintError("Can't SetValue for vertex shader " + vertexShader.shaderName);
          LogDxError(LastError, __LINE__, __FILE__);
          return false;
      }

  Which again completes successfully with no errors logged. The DX runtime reports no errors either. My problem is that my rendering doesn't work! I just get black screens.

  I've run through the debugger and the correct values seem to get passed to the SetValue() function. I've debugged in PIX and my calls show up as:

      ID3DXConstantTable::SetValue(0x02B03ED8, 0xFBA453B8, 0x00000010, 0)
      ->IDirect3DDevice9::SetVertexShaderConstantF(0, 0x045C30A0, 0)

  which looks like the pointer to the value and the size-in-bytes values aren't making it through to DirectX correctly. Anything obvious I'm missing?
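  If it helps anyone hitting this thread later: one plausible reading of that PIX trace (SetVertexShaderConstantF ending up with a register count of 0) is that the byte counts handed to SetValue() don't cover whole float4 registers. A sketch of register-aligned sizes, using the post's CONST_* naming (this is illustrative, not the original code; whether an int constant really occupies a full float register depends on how it's declared in the shader, and SetInt()/SetFloat() sidestep the question entirely):

```cpp
#include <cassert>

// Constant types mirroring the post's enum (names assumed, not verified
// against the real codebase).
enum ConstType { CONST_INT, CONST_MATRIX };

// Size in bytes for SetValue(), rounded up to whole float4 registers:
// a D3DXMATRIX is 16 floats (4 registers), and even a scalar occupies
// one full register once D3DX writes it into the float constant file.
unsigned int ConstantSizeInBytes(ConstType type) {
    switch (type) {
        case CONST_INT:    return sizeof(float) * 4;   // one float4 register
        case CONST_MATRIX: return sizeof(float) * 16;  // 4x4 matrix = 4 registers
    }
    return 0; // unknown type: caller should treat as an error
}
```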