Accelerating PC-based gaming?

Started by
12 comments, last by Kylotan 16 years, 6 months ago
Quote: Original post by Kylotan
I really don't think games are a great application for this. There's hardly anything you can effectively run in parallel with the CPU when your system needs to be fully synchronised 60 times a second. Our software just doesn't (currently) suit parallel models.


Forgive my ignorance, as I've got very little experience with game programming, but why would the system need to be fully synchronised sixty times a second? I mean, one core could do all the game-related work, like updating the world, while the second core only renders the data the first core computed, sixty times per second... A certain level of atomicity would still need to be preserved, of course, but I still don't see why full synchronisation would be needed...

Is this possible but not yet done, or are there any significant reasons not to do this?
That probably can't be done without two sets of data, something like a front and back buffer: while the renderer draws the front buffer, the physics works on the back buffer. With only one set of data, you would have situations where the renderer was waiting for the physics to finish... as in all the time, because you can't render the frame correctly until everything has been calculated.
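A minimal sketch of that front/back buffer idea, assuming a copyable WorldState struct and stand-in simulate/draw functions (all names here are hypothetical, not from any engine):

```cpp
#include <thread>
#include <utility>

// Hypothetical world snapshot: entity positions, animation state, etc.
struct WorldState { float playerX = 0.0f; };

// Stand-ins for real game logic and rendering.
void simulate(const WorldState& prev, WorldState& next)
{
    next.playerX = prev.playerX + 0.1f;
}
void draw(const WorldState& s) { (void)s; }

int main()
{
    WorldState buffers[2];
    int readIndex = 0;  // renderer reads the last completed frame here
    int writeIndex = 1; // game logic writes the next frame here

    for (int frame = 0; frame < 60; ++frame) {
        // Both threads read buffers[readIndex]; only the update thread
        // writes, and it writes to the other buffer -- so no locks needed.
        std::thread update([&] { simulate(buffers[readIndex], buffers[writeIndex]); });
        std::thread render([&] { draw(buffers[readIndex]); });
        update.join();
        render.join();

        // One synchronisation point per frame: the freshly simulated
        // buffer becomes the next frame's render input.
        std::swap(readIndex, writeIndex);
    }
}
```

The cost is a second copy of the world plus one frame of latency between simulation and display, which is exactly the front/back-buffer trade-off described above.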

That makes the most sense to me, but there could be better reasons for it.

C++: A Dialog | C++0x Features: Part 1 (lambdas, auto, static_assert), Part 2 (rvalue references), Part 3 (decltype) | Write Games | Fix Your Timestep!

Quote: Original post by FPGA Jim
I was thinking of using a new type of reconfigurable hardware which now has high bandwidth access to the CPU (which in the past was one of the drawbacks with this type of hardware and desktop computing).
High bandwidth isn't particularly important. Any modern hardware bus technology will supply you with enough bits. The problem is one of latency. You need to get the results back very, very quickly. You can afford to slide by a frame or two, but no more than that: at 60 frames per second a frame is only about 16.7 ms. In other words, the results need to be available within 20 ms or so of submission for computation.
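A sketch of what tolerating one frame of latency looks like on the CPU side, using std::async as a stand-in for submitting work to a coprocessor (expensiveCompute and the batch contents are hypothetical):

```cpp
#include <future>
#include <vector>

// Stand-in for work offloaded to the accelerator, e.g. a batch of
// physics or visibility queries.
std::vector<float> expensiveCompute(std::vector<float> input)
{
    for (float& x : input) x *= 2.0f;
    return input;
}

int main()
{
    // Submit frame 0's batch before the loop starts.
    auto pending = std::async(std::launch::async, expensiveCompute,
                              std::vector<float>(1024, 1.0f));

    for (int frame = 1; frame < 60; ++frame) {
        // Consume the previous frame's results: one frame stale
        // (~16.7 ms), which the game can usually tolerate.
        std::vector<float> results = pending.get();

        // Submit this frame's batch. If it isn't done by the time the
        // next frame calls get(), the whole pipeline stalls -- hence
        // the ~20 ms latency budget.
        pending = std::async(std::launch::async, expensiveCompute,
                             std::vector<float>(1024, results[0]));
    }
}
```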
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Quote: Original post by Oxyd
Forgive my ignorance, as I've got very little experience with game programming, but why would the system need to be fully synchronised sixty times a second?


Things don't necessarily need to be, but that's pretty much the way that everything works. It's the tried and tested method that developers are familiar with, and it was optimal for pretty much every platform until relatively recently.

Quote: I mean, one core could do all the game-related work, like updating the world, while the second core only renders the data the first core computed, sixty times per second... A certain level of atomicity would still need to be preserved, of course, but I still don't see why full synchronisation would be needed...


Where's the dividing line between 'a certain level of atomicity' and 'full synchronisation'? Which part of the data can I choose not to lock when going through the world and deciding what to add into the render queue? Do I lock the whole thing, holding up the other thread? Or do I lock and unlock individual items or areas, incurring massive overheads? (Locking is not cheap.)
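As a sketch of those two extremes (the Entity type and the render-queue step are hypothetical):

```cpp
#include <mutex>
#include <vector>

struct Entity
{
    float x = 0, y = 0;
    std::mutex lock; // fine-grained option: one mutex per entity
};

std::vector<Entity> world(10000);
std::mutex worldLock; // coarse-grained option: one mutex for everything

// Option 1: lock the whole world. Consistent snapshot, but the update
// thread is blocked for the entire traversal.
void buildRenderQueueCoarse()
{
    std::lock_guard<std::mutex> hold(worldLock);
    for (Entity& e : world)
    {
        // ... copy e into the render queue ...
    }
}

// Option 2: lock each entity. 10,000 lock/unlock pairs per frame, and
// entities locked later may have moved since the first ones were read,
// so the renderer no longer sees one consistent moment in time.
void buildRenderQueueFine()
{
    for (Entity& e : world)
    {
        std::lock_guard<std::mutex> hold(e.lock);
        // ... copy e into the render queue ...
    }
}

int main()
{
    buildRenderQueueCoarse();
    buildRenderQueueFine();
}
```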

As it stands, rendering isn't the problem: most of that already occurs in parallel, on the GPU. What remains are things like AI and physics, which tend to operate on the complete game world, or a large area of it, and require a consistent view of it to operate. There are documented ways around this, especially for AI, but AI is non-trivial and non-standard, which means there is still some way to go.
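One of those documented ways, sketched: split each AI update into a read-only 'plan' phase that can run across cores without locks, and a separate 'apply' phase, so every agent decides against the same consistent world. The Agent type and the planning step below are hypothetical stand-ins:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical agent: plans against a read-only view of the world,
// stores its decision, and applies it later.
struct Agent
{
    float x = 0.0f;
    float plannedMove = 0.0f;
};

std::vector<Agent> agents(1000);

// Read phase: positions don't change here, so agents can be planned in
// parallel without locks. Each thread writes only its own agents' plans.
void planRange(std::size_t begin, std::size_t end)
{
    for (std::size_t i = begin; i < end; ++i)
        agents[i].plannedMove = 0.1f; // stand-in for real decision-making
}

int main()
{
    for (int frame = 0; frame < 60; ++frame)
    {
        std::thread a(planRange, std::size_t(0), agents.size() / 2);
        std::thread b(planRange, agents.size() / 2, agents.size());
        a.join();
        b.join();

        // Write phase: apply decisions after all planning is done, so no
        // agent ever saw a half-updated world.
        for (Agent& ag : agents)
            ag.x += ag.plannedMove;
    }
}
```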

This topic is closed to new replies.
