high-end graphics engine
Members - Reputation: 178
Posted 03 January 2012 - 08:01 PM
Members - Reputation: 145
Posted 03 January 2012 - 10:02 PM
If anyone has experience with this or knows of a system that can handle these requirements, please let me know. Thanks!
Crossbones+ - Reputation: 8766
Posted 03 January 2012 - 11:18 PM
You will need to sort by shaders, textures, and depth from view. You will need a fast way to update particles (on the GPU is one way) and cull objects from view (unless this is for a very specific purpose).
If the objects all use the same attributes you don’t need to do much shader swapping.
Your CPU will have some work to do as well.
However, my old engine gave me 240,000,000 triangles per second, and that was using the slower fixed-function pipeline on a CPU and GPU that are 2 years old.
With newer cards (many of which are over twice as fast as mine) and with efficient use of shaders, there is no reason to expect you won't be able to get far more than 200,000,000 triangles per second, especially at such a low resolution.
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums
Members - Reputation: 178
Posted 03 January 2012 - 11:25 PM
Assuming the triangle:vertex ratio is 2:1 on average, 16MB/frame is what is needed for 1M triangles.
=> 200 * 16 = ~3.2GB/s bandwidth. This is roughly 1/2 of what PCIe Gen2 is capable of, but this is still a bit on the high side - even though the theory pans out, pulling geometry from system memory doesn't always reach peak performance (not to mention that things like particles, dynamic textures, etc. may also be pulled from system memory and be impacted similarly).
My suggestion would be to keep the raw data on the GPU and use shaders to modify them from a base representation at run-time or incrementally via stream out or UAVs.
Members - Reputation: 130
Posted 04 January 2012 - 01:04 AM
Why would you want a 200MHz frame rate? 30 frames/sec is needed for smooth "movement" - that is about 30MHz. LCD monitors usually use 75MHz; they will never run at 200MHz, even if your program says so (only 37.5% of the frames will actually be displayed).
Members - Reputation: 646
Posted 04 January 2012 - 02:51 AM
I'm assuming that LCDs refresh at 60Hz in the United States. Besides the whole "mega" thing (that's Hz, not MHz), your post is very accurate.
Crossbones+ - Reputation: 2014
Posted 05 January 2012 - 07:19 AM
DVI output at 1024x1024 pixels at a 200Hz frame rate?
1024x1024 pixels at 32bit at 200Hz is about 6.7GBit/s; single-link DVI is specified for 4.9GBit/s.
You might not be able to push that. I suggest you use dual-link DVI or HDMI to overcome that limit.
1M triangles/frame on a 1024x1024 target (about 1M pixels) actually means you have mostly 1-pixel-sized triangles. Is that really needed? There are usually other ways to get that detail (normal maps, for instance) without a noticeable quality loss.