Nostalgia

Members
  • Content count

    58

Community Reputation

122 Neutral

About Nostalgia

  • Rank
    Member
  1. Trouble with D3D9-CreateDevice()

    Hey! Thanks for the suggestions.

    Quote: m_nAdapter is junk. You create the device on adapter 0, so why get the display mode for a specific, possibly different, adapter?

    I actually use m_nAdapter in both cases - I made it 0 just for clarity and missed one. I've checked it in the debugger, and it's 0 in both places.

    Quote: BackBufferCount of 2. I've never set this. You're not allowed anything except 1 (or 0) when using SWAPEFFECT_COPY, but you're not using that. Not likely the problem.

    Nope. I had already tried it both ways.

    Quote: Depth format D16 isn't allowed with whatever backbuffer format you're using. Use CheckDepthStencilMatch and CheckDeviceFormat, as per the snippet shown in CheckDepthStencilMatch's help. Be prepared to switch to an alternate depth format.

    Interesting. I'll check that out. I'd be quite surprised, as I've been using D16 the whole time without trouble until now.

    Quote: m_hWnd isn't created yet, has no client area, or is minimized.

    Has no client area! I bet that's it...hang on...Brilliant!! I was reading the window size from a previously set registry value, and the data was crap. Thanks for that! -Joe
  2. Hi all. I just had a problem start showing up, and I can't figure out what I did to make it appear. I have an application that worked up until today. I didn't make any changes to the DirectX section of the code, and I can't get enough info out of the DX libs to help me out. I've got the debug libraries installed and the "Debug Output Level" slider all the way to the right. In my output window, I see:

    Direct3D9: (INFO) :======================= Hal SWVP device selected
    Direct3D9: (INFO) :HalDevice Driver style 9
    Direct3D9: :DoneExclusiveMode
    Direct3D9: (ERROR) :Failed to create driver surface
    Direct3D9: (ERROR) :Failed to initialize primary swapchain
    Direct3D9: (ERROR) :Failed to initialize Framework Device. CreateDevice Failed.
    D3D9 Helper: IDirect3D9::CreateDevice failed: D3DERR_DRIVERINTERNALERROR

    Does that mean anything to you guys? Here's the code leading up to that point (error checking removed for clarity - I test all return codes):

    D3DDISPLAYMODE d3ddm;
    m_pD3D->GetAdapterDisplayMode(m_nAdapter, &d3ddm);

    ZeroMemory(&m_d3dppWin, sizeof(m_d3dppWin));
    m_d3dppWin.Windowed = TRUE;
    m_d3dppWin.SwapEffect = D3DSWAPEFFECT_DISCARD;
    m_d3dppWin.BackBufferFormat = d3ddm.Format;
    m_d3dppWin.BackBufferCount = 2;
    m_d3dppWin.EnableAutoDepthStencil = TRUE;
    m_d3dppWin.AutoDepthStencilFormat = D3DFMT_D16;
    m_d3dppWin.hDeviceWindow = m_hWnd;

    m_pD3D->CreateDevice(
        0,
        D3DDEVTYPE_HAL,
        m_hWnd,
        D3DCREATE_SOFTWARE_VERTEXPROCESSING,
        &m_d3dppWin,
        &m_pd3dDevice);

    Thanks for any suggestions. -Joe
  3. Audio/video sync and throttling

    Ok, here it is: Link to other thread This is fascinating and frustrating all at the same time :) -Joe
  4. Actually, to be completely fair, I should run the threaded mode WITHOUT allowing frame drops. So it's now precisely the same chain of execution as the unthreaded model. Ouch. It's painful to watch :) The timers were a bit better, though:

    Synchronous threaded model, 1186 iterations
    Texture lock/copy/unlock: 25.51ms
    BeginScene/EndScene: 1.90ms
    Present: 8.73ms

    Very interesting. More research is necessary. This is the fun part ;) -Joe
  5. Came across an interesting issue today that should make for some good discussion. First, the short version that may save you some reading: is there anything special I need to do to use Direct3D in a threaded programming model? If not, read on.

    Still working on my emulator. First a little history: the original game loop looked like this:

    while (gameIsRunning)
    {
        Game engine runs (emulation logic, sound)
        Game engine generates a video frame to be rendered by filling in an array (let's call it renderArray).
        Texture is locked and renderArray is copied into the texture.
        Texture is rendered to the scene.
        Scene is Present()ed.
    }

    This all happened in one thread of execution. What I wanted to do was de-couple the array-filling from the scene rendering to allow the engine to work while DX was waiting on hardware. So I created a thread that does this:

    while (gameIsRunning)
    {
        wait for FrameReady event to be set
        set BusyEvent
        call DoDisplay() -> Lock texture, copy in renderArray, render, Present.
        unset BusyEvent
    }

    The main code was changed to add a second array (let's call them array1 and array2) so we can fill one and render the other. The main thread of execution now does this:

    fillArray = array1
    while (gameIsRunning)
    {
        Game engine runs (emulation logic, sound)
        Game engine generates a video frame to be rendered by filling in fillArray.
        if BusyEvent is NOT set, we can render this frame
        {
            set renderArray = fillArray
            set FrameReady event
            set fillArray = (fillArray == array1) ? array2 : array1
        }
        else the video display was busy, so drop the frame
    }

    Conceptually, this seems like it should de-couple the engine and the video system. In practice, all of the Direct3D calls seemed to take 10x longer. As a comparison, I left everything the same and just changed the main code to this, so everything is as similar as possible:

    fillArray = array1
    while (gameIsRunning)
    {
        Game engine runs (emulation logic, sound)
        Game engine generates a video frame to be rendered by filling in fillArray.
        set renderArray = fillArray
        call DoDisplay(renderArray) -> Lock texture, copy in renderArray, render, Present.
        set FrameReady event
        set fillArray = (fillArray == array1) ? array2 : array1
    }

    Here's what my high-resolution timers recorded after 20 seconds of emulator time elapsed (all times are averages):

    NON-threaded version - rendered 1186 video frames
    Texture lock/copy/unlock: 0.38ms
    BeginScene/EndScene: 0.04ms
    Present: 6.13ms

    Threaded version - rendered 122 video frames
    Texture lock/copy/unlock: 30.02ms (21.44ms in unlock alone)
    BeginScene/EndScene: 67.72ms (58.88ms in BeginScene)
    Present: 49.45ms

    Times were measured as follows:

    QueryPerformanceCounter(start)
    DxFunctionCall
    QueryPerformanceCounter(end)
    time = (end - start) / timerFreq

    Again, all of the code is precisely the same; the only difference is that in the first one everything's in one thread of execution, while in the second all of the D3D calls are in a separate thread from the engine. I'm open to suggestions on things I could try and/or reasons for this dramatic performance drop. Thanks, -Joe
  6. Audio/video sync and throttling

    Quote: Original post by pjcast - Not really sure why you want video at 60 FPS. Most typical video rates are significantly lower.

    Because this is the rate that was provided by the hardware I'm emulating. When I drop frames to go to, say, 30fps, the video looks wrong and people complain :) I'm getting about 42fps with everything running, and it does look a bit "off".

    Quote: Original post by pjcast - Now, if the video starts to get behind the audio, you should drop some video frames (not render them) - it is much better than trying to drop/add audio frames (inserting blank audio or cutting audio clips out sounds really irritating, while missing a few pictures, or a paused video, is not as bad).

    Ok, this is what I'm trying to do now. I tried to do it by de-coupling the video rendering from the video generation engine. The results were very surprising. I'll be starting a new thread, since I think it warrants its own discussion. I'll link to it from here after I get the thread up. Thanks! -Joe
  7. Audio/video sync and throttling

    Quote: Original post by ApochPiQ - If you haven't done any multithreading it can be a little intimidating at first, but it's a good skill to master in any case, and definitely one of the more elegant ways to solve this kind of problem.

    I certainly have no fear of multithreading - I've been writing threaded programs for over 6 years now - I'm just having some trouble wrapping my mind around the logic of synchronizing 3 asynchronous processes:

    My emulator engine, which produces the audio samples and video frames
    DirectSound buffering, running at 22500Hz
    Direct3D flipping, running at a shade less than 60Hz

    So I was just hoping there were some examples of similar things or best practices that I could derive some ideas from. Thanks, -Joe
  8. Hello everyone! I've been trying to find some resources on syncing audio and video in games and haven't come up with much. As background, I've written an emulator (Nostalgia, an Intellivision emulator) and I'm currently doing a complete re-write of the DirectX video section (it used to be DirectDraw). While I'm at it, I want to work on my syncing of audio and video, since I've always felt it was just OK. Any suggestions on what I could be doing differently are very welcome.

    The emulator's engine is providing sound samples on-the-fly at 22500Hz. It provides video frames at just a hair under 60Hz (something like 59.5fps). So I'm keying off the sound to throttle. I let the whole system run as fast as it can. If everything is running such that it can produce sound samples faster than 22500Hz, I keep pushing sound samples into the DS8 buffer until I reach a point that's N samples ahead of where I should be. I then wait for the play pointer to catch up to N/2 samples ahead of real-time, let the engine go, and repeat.

    This works fine if the video system is fast enough to keep up. The problem is when my D3DDevice->Present() is stuck waiting for a vertical retrace. Then the audio gets behind and I have to move the play pointer back in time some so I don't end up playing stale buffers. This obviously results in choppy sound, and I'm still displaying video frames at less than the 60fps I should be. There must be some happy medium to get this right (perhaps using a separate throttling system?), but I haven't been able to come up with anything on my own. Any resources you fine folks can point me to will be greatly appreciated! Thanks, -Joe
  9. DX9 Backbuffer and Present()

    Quote: Original post by Evil Steve - What CPU and video card do you have, and what resolution are you running this at? 200FPS is pretty low for an "empty" scene.

    It's not an empty scene. I'm doing everything I normally do, except using the PRESENT_IMMEDIATE flag. I'll try turning off everything, render an empty scene, and see how fast that goes later.

    Quote: Original post by Evil Steve - One thing to check is that you're using AdjustWindowRect in windowed mode to create a client area that is the same size as the backbuffer. If you pass 800x600 to CreateWindow, then the window frame will be 800x600, but the client area will only be 792x550 or something similar. That means that when D3D comes to Present(), it has to do a StretchBlt() to blit the backbuffer to the window, instead of a BitBlt(), which would be more efficient.

    I provide zeroes for BackBufferWidth and Height when I create the device in windowed mode. I've checked the values after calling CreateDevice(), and they do match the client size of the window. Whenever the window size is changed, I change the backbuffer size to match the new client area of the window. The texture is stretched onto the backbuffer, but I can't imagine 4 vertices being a tough load for the video card. I've got an ATI Radeon 8500 card, 2GB RAM, 2.4GHz XP machine. -Joe
  10. DX9 Backbuffer and Present()

    This is interesting - I decided to turn off all throttling and see what happens. When I use the D3DPRESENT_INTERVAL_IMMEDIATE flag, my app runs at 202fps in windowed mode. When I do not use the D3DPRESENT_INTERVAL_IMMEDIATE flag, it runs at 54-56fps in windowed mode, and 60 or 70fps in fullscreen mode (depending on what I set FullScreen_RefreshRateInHz to). So I guess the scene is just taking a while to render. Is there anything I can do to speed it up? Here's my render function (I've normalized the device pointer to pDev throughout; the original paste mixed pDev and m_pDevice->m_pd3dDevice):

    pDev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, Color | (0xFF << 24), 1.0f, 0L);

    // Set the texture to be used
    pDev->SetTexture(0, m_ScrTex.m_pTexture);

    // Set the stream source
    pDev->SetStreamSource(0, m_pScrVert->m_VertexBuffer, 0, sizeof(CUSTOMVERTEX));

    // Set the vertex shader
    pDev->SetFVF(D3DFVF_CUSTOMVERTEX);

    // Rendering of scene objects happens here.
    pDev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);

    // End the scene.
    pDev->EndScene();

    // Present the scene.
    if (pDev->Present(NULL, NULL, NULL, NULL) == D3DERR_DEVICELOST)

    m_VertexBuffer contains 4 vertices. The texture is a 256x256 texture. Thanks, -Joe
  11. Window resizing

    Thanks guys. Worked perfectly.

    void DxDisplay::SaveWindowStyles()
    {
        m_dwWndStyle = GetWindowStyle(m_hWnd);
        GetWindowRect(m_hWnd, &m_rc);
    }

    void DxDisplay::RestoreWindowStyles()
    {
        SetWindowLong(m_hWnd, GWL_STYLE, m_dwWndStyle);
        SetWindowPos(m_hWnd, HWND_NOTOPMOST,
                     m_rc.left, m_rc.top,
                     m_rc.right - m_rc.left, m_rc.bottom - m_rc.top,
                     SWP_FRAMECHANGED | SWP_SHOWWINDOW);
    }

    Thanks again, -Joe
  12. DX9 Backbuffer and Present()

    Quote: Original post by janta - Is it like you want your application to move even farther ahead than 2-3 frames?

    No, I don't. But if something is blocking the Present(), I'd much rather have my application drop the frame than sit around and wait. My app is throttled by the sound sample rate, so the video is just along for the ride, drawing frames whenever it can. Any suggestions on how to test whether I'm too far ahead of the back buffers?

    I measured the time each Present() call takes, figuring that if that was the problem, the first 2 calls would be fast and the rest would be slow. To get a baseline, I ran with the _IMMEDIATE flag set. The log looks like this (times in milliseconds):

    17:58:43: Present() 0: 00.726
    17:58:43: Present() 1: 00.581
    17:58:43: Present() 2: 00.806
    17:58:43: Present() 3: 00.495
    17:58:43: Present() 4: 00.689

    Without the _IMMEDIATE flag, we get this:

    18:00:19: Present() 0: 01.054
    18:00:19: Present() 1: 10.127
    18:00:19: Present() 2: 11.821
    18:00:19: Present() 3: 11.852

    Occasionally the time went down:

    18:00:19: Present() 16: 11.695
    18:00:19: Present() 17: 00.495
    18:00:19: Present() 18: 11.570
    18:00:19: Present() 19: 12.991
    18:00:19: Present() 20: 13.008
    18:00:19: Present() 21: 00.501
    18:00:19: Present() 22: 08.983
    18:00:19: Present() 23: 12.450

    The average over 100 frames was 12.7ms. So I'm not sure what's going on here. Thanks for the advice, -Joe
  13. Window resizing

    Ok...so:

    1. WM_SIZE comes in with the new client X/Y.
    2. Adjust my vertices to the new client X/Y.
    3. Set D3DPRESENT_PARAMETERS::BackBufferWidth/Height to match the new client X/Y.
    4. Release all unmanaged interfaces.
    5. Call IDirect3DDevice9::Reset() with the presentation params...

    Success! Thanks! I was trying SetViewport(), but there was no love there.

    This brings up another question: when I go fullscreen I do a similar thing, releasing everything and calling Reset(). When I come back from fullscreen and do another Reset(), it goes right back to windowed mode, and I can switch back and forth all day with no ill effect - UNLESS I had resized the window before swapping to fullscreen. In that case, when I come back to windowed mode, all of the Windows-drawn components of the window (border, buttons, menu) are gone and all I see is my DirectX scene. When I come back from fullscreen mode, I provide Reset() with the same D3DPRESENT_PARAMETERS that I used when either I created the window or I did the last windowed Reset(). Thanks, -Joe
  14. Window resizing

    Hello again :) My application needs to maintain a constant aspect ratio for its screen display. When the application first starts, it gets the client area of the window, calculates the best-fit rectangle at my aspect ratio, and sets the 4 vertices of the texture I'm putting onscreen to those dimensions (using XYZRHW coords). This works a treat. However, when I resize the window, my texture stretches and shrinks with it.

    I tried trapping WM_SIZE messages and recalculating the best-fit rectangle, but that doesn't work, since Direct3D is apparently resizing the drawing surface and making my data invalid. As an example, when my client area is 1016x741, the rectangle I want to display is located from (25,79) to (990,661), and it does display this way initially. When I resize the window to a client area of 612x741, my new rectangle should be from (15,194) to (596,545). But after changing the vertices' values, the texture is "squashed" in the X-direction, only filling about half the screen.

    I noticed when doing GetViewport() that the viewport's size never changes, regardless of the size of the client area of the background window. So is there something I can intercept here to maintain the aspect ratio of my scene? Thanks so much for your help, -Joe
  15. DX9 Backbuffer and Present()

    Quote: Original post by jollyjeffers - Have you read the "Accurately Profiling Direct3D API Calls" document in the SDK help files?

    I have not. But I will now, thanks :)

    Quote: Original post by jollyjeffers - Your timing values could be completely wrong due to the way the command queue works - you're timing how long it takes to add it to the queue, not how long it takes to do the work.

    Well, it was like this:

    QueryPerformanceCounter()
    Present()
    QueryPerformanceCounter()

    12ms when using default flags, 3ms when using _IMMEDIATE. That does at least suggest that Present() is waiting for something. Maybe I was 2-3 frames ahead already?

    Quote: Original post by jollyjeffers - Present() is often used as a throttle - it can't flip the buffers until any pending draw operations are out of the queue, and it'll also stop the application getting too far ahead (usually 2-3 frames ahead is the most you'll be allowed)...

    Since this is an emulator, I'm using the sound engine as a throttle. It gives me samples at a hard 22500 samples/sec, so I can use that for throttling. If I drop a video frame or two, it's no big deal, especially since it generates them at 60fps.

    Quote: Original post by jollyjeffers - Bear in mind you can make good use of parallelism by managing the swap-chain and passing the D3DPRESENT_DONOTWAIT flag and checking for the D3DERR_WASSTILLDRAWING return code. Instead of the 10ms of busy-waiting you could do 10ms of work (AI/Physics etc.) [wink]

    That would be awesome :) The D3DPRESENT_DONOTWAIT flag seems to only pertain to IDirect3DSwapChain9::Present(). I'm using IDirect3DDevice9::Present(). Should I look into using the SwapChain instead? Thanks very much for the helpful reply! -Joe