Archived

This topic is now archived and is closed to further replies.

AndyM

DX8 - cpu tasks during 3D scene redraw


Recommended Posts

I am looking at getting as much speed as possible from older 3D cards (i.e. those without hardware T&L, like the TNT2). Ideally I'd want to process AI etc. for the next game frame while the 3D card was finishing drawing the previous frame's scene, but I don't know much about doing this with DX8. Does the Present function start the actual polygon drawing, or do the DrawPrimitive functions? Do these functions return immediately in DX8, leaving time for other tasks to be done while the 3D card finishes said task, and do older cards even support such asynchronous behaviour? Realistically, will I get much 'free' time for AI tasks while the previous frame is drawing polys? thanks A
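PS: for reference, the kind of frame loop I'm asking about, roughly (a sketch only; g_pDevice stands in for the device pointer):

// Sketch of a typical DX8 frame loop (g_pDevice is a placeholder
// IDirect3DDevice8*). The question: which of these calls actually blocks?
g_pDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                 D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
g_pDevice->BeginScene();
g_pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, numTris); // queued here?
g_pDevice->EndScene();
g_pDevice->Present(NULL, NULL, NULL, NULL);               // ...or waits here?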

What you are looking for is a concept called multi-threading. Yes, graphics take a while to draw, so you can spawn another thread to use the processor while waiting for the render to complete (the AI thread wouldn't need a pause; rather, the render for a given frame should wait for the AI to complete, not the other way around). Upside: great performance gains on machines where you don't use SOFTWARE_VERTEX_PROCESSING. Downside: it is very complex to make sure your threads are both doing useful work simultaneously, and support for HARDWARE_VERTEX_PROCESSING isn't universal (yet). I have seen several projects where the overhead of creating, maintaining, and synchronizing threads actually made them slower than a linear single-threaded solution.

So basically, if you have some task (such as AI) that is causing a bottleneck, that would be a good time to create multiple threads. In order to create that bottleneck, though, it would have to be doing A LOT of work. If you know how to view your code at the assembly level, you can estimate the bottleneck of your AI routine: take the lines of code in it and compare them to the lines of code in your render routine. If there are about twice as many lines of AI code (and I really doubt that, because rendering in DX spits out thousands of lines of assembly code for even a simple scene), then it is likely you could benefit.
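For illustration, the bare bones of that setup look something like this (a sketch only; RunGameAI, RenderScene, and the event names are placeholders, not anything from DX):

#include <windows.h>

// Sketch: run AI for the next frame on a worker thread while the main
// thread renders the current one.
HANDLE g_aiStart = CreateEvent(NULL, FALSE, FALSE, NULL); // auto-reset
HANDLE g_aiDone  = CreateEvent(NULL, FALSE, FALSE, NULL);

DWORD WINAPI AiThreadProc(LPVOID)
{
    for (;;)
    {
        WaitForSingleObject(g_aiStart, INFINITE); // wait for go-ahead
        RunGameAI();                              // placeholder AI update
        SetEvent(g_aiDone);                       // signal completion
    }
}

// Once, at startup:
//   CreateThread(NULL, 0, AiThreadProc, NULL, 0, NULL);
// Each frame on the main thread:
//   SetEvent(g_aiStart);                      // kick off AI for next frame
//   RenderScene();                            // draw current frame meanwhile
//   WaitForSingleObject(g_aiDone, INFINITE);  // sync before swapping state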

Finally, in answer to your question "Will you get free time...?": yes.
In answer to the question that should have been asked "Will it make my game faster?": no.

Brett Lynnes
cheez_keeper@hotmail.com

No. You won't get any "extra" time while your driver is doing stuff (e.g. waiting for a vertical retrace, sending polygons to the card, etc.). The driver is already implemented to run in parallel with the graphics card, so the driver does more than you ever could to make rendering and "everything else" run in parallel.

The only time (on a single-CPU machine) that you'll find any benefit in going multithreaded is if you're doing I/O-intensive tasks (loading the next scene, texture, model, etc.) in the second thread. And even then, I'd only do it if you really know what you're doing.
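If you do go that route, the shape of it is something like this (just a sketch; LoadNextLevelData is a placeholder for your own loader):

#include <windows.h>

// Sketch: stream the next scene's data on a second thread while the
// main thread keeps rendering.
volatile LONG g_loadDone = 0;

DWORD WINAPI LoaderThreadProc(LPVOID param)
{
    LoadNextLevelData((const char*)param);  // disk I/O happens here
    InterlockedExchange(&g_loadDone, 1);    // flag completion
    return 0;
}

// Main thread:
//   CreateThread(NULL, 0, LoaderThreadProc, (LPVOID)"level2.dat", 0, NULL);
//   ...keep rendering; check g_loadDone once per frame...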

If I had my way, I'd have all of you shot!


codeka.com - Just click it.

OK, to clarify: I was not talking about multithreading, just asking whether there is any ordering of D3D functions that could ensure the graphics card/driver does as much in parallel as possible.
But just doing a little test myself (GeForce 2): running a for-loop immediately after calling Present, the frame rate immediately started dropping, suggesting there was little or no parallel processing going on between the CPU and the graphics card.
Perhaps it's more on console architectures or newer 3D cards that parallel processing is apparent.
I just remember people talking about the possibility of doing much of your CPU-intensive tasks while the 3D card was taking care of the latest scene redraw, but perhaps it's a bit of a myth, at least on your average PC.
A

Yeah, nVidia keeps mentioning it in their papers. I don't understand how it works either, though.

The thing I'd need to know is how DirectX handles stalls. Say, if you do a SetRenderState(), will execution halt until the card's buffer (of outstanding drawing commands) is cleared, or will the render state change just be put at the end of the buffer?

If the latter is the case, the best way to do it that I can think of would be:

BeginScene()
[draw scene]
...
[do game logic while the gfx card/DX works through the rendering buffer]
EndScene()
Present()

If render states/transform sets/etc. actually do cause stalls, the best thing I can think of is multiple threads.

Also, if I'm doing things like locking textures or changing the render target, which apparently DO cause stalls, can I launch a small thread just before the lock so I don't waste all those cycles?
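(For dynamic vertex buffers at least, the D3DLOCK_DISCARD flag is supposed to avoid the wait by handing you a buffer the GPU isn't using. A rough sketch, where pVB and MyVertex are placeholders:)

#include <d3d8.h>    // IDirect3DVertexBuffer8, D3DLOCK_DISCARD
#include <string.h>  // memcpy

// Sketch: with a buffer created using D3DUSAGE_DYNAMIC, D3DLOCK_DISCARD
// tells the driver to hand back a buffer the GPU isn't still reading,
// so the CPU doesn't stall on outstanding draws.
BYTE* pData = NULL;
if (SUCCEEDED(pVB->Lock(0, 0, &pData, D3DLOCK_DISCARD)))
{
    memcpy(pData, vertices, numVerts * sizeof(MyVertex));
    pVB->Unlock();
}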

This is screwing with my head...

- JQ
Full Speed Games. Period.

UPDATE
OK, try this:

EndScene();
// do CPU tasks here (I tried a simple for-loop with some maths)
Present();

And amazingly you get parallel processing, at least on my GeForce 2 setup.
Putting my non-graphics code after Present caused a big stall while the scene was drawn (so it could be shown/presented), but putting it prior to Presenting the scene means the card can blast away at polygons while I do other things.
Not sure what happens if both try to access system memory at once, but assuming all textures are in video memory you effectively get some free processing time.
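If you want to check where the wait lands on your own setup, timing each call shows it (rough sketch; g_pDevice and DoCpuWork are placeholders):

#include <windows.h>

// Sketch: time EndScene, the CPU work, and Present separately to see
// which call actually blocks.
LARGE_INTEGER freq, t0, t1, t2, t3;
QueryPerformanceFrequency(&freq);

QueryPerformanceCounter(&t0);
g_pDevice->EndScene();                      // should return almost at once
QueryPerformanceCounter(&t1);
DoCpuWork();                                // AI / game logic placeholder
QueryPerformanceCounter(&t2);
g_pDevice->Present(NULL, NULL, NULL, NULL); // the wait shows up here
QueryPerformanceCounter(&t3);

// Each interval in seconds: (tN - tM) / (double)freq.QuadPart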

Hmm. Strange. As far as I can remember, EndScene() waits for the buffer to clear (according to the SDK docs). Weird. I guess I'll post this to the official Microsoft DirectX newsgroup when I get home; let's hear what they have to say about it.

- JQ
Full Speed Games. Period.

Interesting experiment. I tried it: dropped in a big enough loop to bring me from 85 fps (the max refresh on my monitor) to 21.3 in my current game. It remained very near 21.3 whether the loop was before the EndScene(), after the EndScene(), or after the Present(): no performance gain or loss anywhere. My computer is a little faster, so I assume that is probably why our results are different.

from the docs:
EndScene "is not a synchronous method, so the scene is not guaranteed to have completed rendering when this method returns"

Present waits for the scene to finish rendering before it presents (or swaps if you have a swap chain).

I think the reason your computer showed a performance gain and mine didn't is that your scene was still drawing after the EndScene, while mine was basically finished as soon as the EndScene was hit (I have quite a bit of matrix math at the end of my scene, during which the rendering can catch up on all the verts from the start, I suppose).

From a friend of mine: "To make the best use of automatic multithreading (his term, not mine; it is more of an instruction caching into L1) you should scatter a few instructions in between BeginScene and EndScene for the CPU to work on after it has queued the GPU's cache." Then we argued for several minutes about the gain/loss of putting more instructions between Begin and End, my point being that this just makes the render take longer on computers without GPUs, etc., etc. (and I AM right, of course).
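What he is suggesting looks roughly like this (a sketch; DrawObject and UpdateObjectAI are placeholders):

// Interleave CPU work with queued draw calls so the CPU has something
// to chew on while the GPU drains its command buffer.
g_pDevice->BeginScene();
for (int i = 0; i < numObjects; ++i)
{
    DrawObject(i);      // queues DrawPrimitive calls for object i
    UpdateObjectAI(i);  // CPU work overlapped with the queued draws
}
g_pDevice->EndScene();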

Neither of us has the equipment to disprove the other's argument, because like I mentioned earlier, it made no performance difference where I put my loop.

In conclusion: I guess there is no real point in making separate threads at all with DirectX if you have a newer graphics card (now they tell me), because the system architecture (Athlon XP, anyway) is good at keeping itself busy caching instructions while waiting for graphics interrupts to finish. And this topic is right back to "is it faster on older cards to..."

Brett Lynnes
cheez_keeper@hotmail.com

