Dynamic frametime LOD


Most games nowadays have some kind of detail system where the user or the game chooses between low, medium and high settings. I see several problems with this. First of all, it's hard to pick a proper static level, since computers vary and even the performance of the same computer varies depending on drivers and so on. Secondly, it's hard to make a proper estimate when scene complexity varies.

I've been thinking of a solution. Each piece of geometry to be rendered has a priority (foreground objects could have high priority, background objects low; distance, an error metric, etc.) and several "meshes" at different LODs (geometry complexity, texture resolution, etc.). Each of these meshes has a "frametime", which is essentially the time it took to render that mesh the last time it was drawn. Now let's say you want a steady 60 fps, which gives a total frame budget of 1/60 s. When rendering the scene, you put all the objects to be rendered in a list, assign each object a "render frametime" budget based on its priority divided by the total priority, and then pick the mesh with the most appropriate frametime. If all of an object's meshes have too long a frametime, the object isn't rendered at all. This way you are always guaranteed the best quality at a steady 60 fps. There are of course a lot of issues left, but I thought I'd hear what you guys think before I keep working on the idea.
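
A minimal sketch of how that budget split and LOD pick could look, assuming the per-LOD frame times have already been measured somewhere (all type and function names here are made up for illustration):

#include <vector>

struct LodMesh {
    int   level;          // 0 = highest detail
    float lastFrameTime;  // seconds this LOD took to draw the last time it was used
};

struct SceneObject {
    float priority;               // higher = more important
    std::vector<LodMesh> lods;    // sorted from highest to lowest detail
    const LodMesh* chosen = nullptr;
};

// Split the frame budget by priority and pick the best LOD that still fits.
void selectLods(std::vector<SceneObject>& objects, float frameBudget /* e.g. 1.0f / 60.0f */)
{
    float totalPriority = 0.0f;
    for (const auto& o : objects) totalPriority += o.priority;
    if (totalPriority <= 0.0f) return;

    for (auto& o : objects) {
        const float budget = frameBudget * (o.priority / totalPriority);
        o.chosen = nullptr;
        for (const auto& lod : o.lods) {        // highest detail first
            if (lod.lastFrameTime <= budget) {  // first LOD that fits its share of the budget
                o.chosen = &lod;
                break;
            }
        }
        // If chosen is still null, even the cheapest LOD is over budget and the object is skipped.
    }
}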

It might be a good idea, but I think that parallelism between the CPU and the GPU breaks it.

I think that when you're drawing something (e.g. calling Draw*Primitive* in DirectX), if the primitive being drawn uses resources which are already set, then the function returns even if the primitive has not finished drawing (sometimes it may even return before the primitive has started drawing). That is the principle of parallelism between the GPU and the CPU.

I might be wrong (I'm not really good on that subject), but I'm pretty sure you can't really measure the rendering time of each object in your scene, except if you render them independently, which is not really feasible :)

Quote:

I might be wrong (I'm not really good on that subject), but I'm pretty sure you can't really measure the rendering time of each object in your scene, except if you render them independently, which is not really feasible :)


I'm pretty sure that both DirectX and OpenGL have functions/queries that let you measure the exact amount of time something takes to be drawn. I don't know exactly how they work or how to use them, but I think you basically issue a query that starts the timer, then, when you're done rendering whatever you want to profile, you issue another query that stops the timer and returns the time it took to render.

Again, I'm not completely sure about that, but if it works the way I think it does, you should be able to use these queries to reliably measure the time something took to render.
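
For what it's worth, here is a rough sketch of how such a measurement could look with OpenGL timer queries (GL 3.3 / ARB_timer_query); drawObject() is a placeholder for whatever actually issues the draw calls, and reading the result immediately like this forces a CPU/GPU sync:

#include <GL/glew.h>
#include <cstdio>

extern void drawObject();   // placeholder for the mesh's draw calls

void timeOneObject()
{
    GLuint query = 0;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    drawObject();
    glEndQuery(GL_TIME_ELAPSED);

    // Asking for the result right away stalls until the GPU is done; a real engine
    // would poll GL_QUERY_RESULT_AVAILABLE a frame or two later instead.
    GLuint64 elapsedNs = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);
    std::printf("draw took %.3f ms\n", elapsedNs / 1.0e6);

    glDeleteQueries(1, &query);
}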


To respond to the original idea: I think it's a good start. It could be difficult to get it all put together, but it could end up working very well. Keep working on it :)

The original Unreal Tournament had a similar system. In the video configuration you could set the "Desired Framerate" setting to whatever you felt most comfortable with. The engine would then increase or reduce scene complexity and effects according to some arbitrary priority measure to meet that framerate.

That works if it can render everything fast enough, but what if it can't and the framerate drops to, say, 50 fps?

One game I know of that has the problem I'm trying to solve is Oblivion. There are many different environments, and the forest areas are especially unstable: you can walk around just fine with a good framerate, but as soon as a lot of trees (more than usual) come into view, the framerate suddenly drops, which is very annoying, even though the game runs at 60+ fps everywhere else.

Quote:
Original post by Dragon_Strike
You say "original Unreal"... why isn't such a system used anymore?


I don't know. It could still be used; I haven't played UT since the original.

Quote:
Original post by KOzymandias
Don't you think they simply choose some predefined lower quality settings, then sleep the necessary amount of time to fill the gap and obtain the desired framerate?


Quote:
From Epic Games' website
Min Desired Framerate: This specifies the frame rate threshold below which Unreal Tournament will start dropping detail - reducing model detail and not drawing optional effects. If this is set higher than your normal frame rate, then you will almost always see reduced graphics, but get the best possible performance. We recommend setting it so that your normal frame rate is several fps above this value, allowing you to typically see all the effects in UT. When your frame rate drops because of a heavy firefight, Min Desired Framerate will kick in to minimize the drop in performance.

Quote:
Original post by KOzymandias
Don't you think they simply choose some predefined lower quality settings, then sleep the necessary amount of time to fill the gap and obtain the desired framerate?


Oh, sorry, I misunderstood you in my reply... just forget what I wrote.

One could do that as well, but it wouldn't be optimal, and it would be hard to conceal the transition, since you would need predefined detail steps which obviously can't be very small. I'm also unsure how you would determine which level to use when rendering the frame: you could only do that after having rendered the frame, and then apply those settings to the next frame, which means one frame might lag and the next one might lag even more or finish too fast. Either way the result won't be optimal.
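
For reference, a minimal sketch of that kind of after-the-fact adjustment (the step size, headroom factor and the single global "quality" value are all made-up assumptions):

// Measure the last frame, then nudge one global quality level for the next frame.
float adjustQuality(float quality, float lastFrameTime, float targetFrameTime)
{
    const float step = 0.05f;                     // assumed step size
    if (lastFrameTime > targetFrameTime)
        quality -= step;                          // last frame lagged: drop detail next frame
    else if (lastFrameTime < 0.9f * targetFrameTime)
        quality += step;                          // plenty of headroom: raise detail next frame
    if (quality < 0.0f) quality = 0.0f;
    if (quality > 1.0f) quality = 1.0f;
    return quality;
}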

To take an example: let's say you've rendered your scene and now you're updating the environment maps. Say you have five environment maps and you want to update them all in real time. With your approach it would go something like this:

frame 10: "update all environment maps" - ouch, that took way too long; we have to make up for that time next frame. LAG!
frame 11: "lower detail on other stuff" - NOT GOOD!
frame 30: "this time we only have to update 3 env maps"
frame 31: "we're fine"

With my suggestion it would go something like this:

frame 10: we don't have much time, so update env map 1 with high detail and the rest with low detail
frame 11: we've got some extra time, so update another env map with high detail and interpolate for a smooth transition
frame 11+: same as above
frame 30: we've got time, so update all 3 with high detail
frame 31: we're fine

This would work well in, for example, a racing game. When the cars go fast you want cool motion blur effects and such, and you don't need to update the environment maps with as much detail, which this handles dynamically. When the car then slows down, the detail of the environment maps automatically increases, and the viewer will think it was always that high quality.

Maybe not the best example...
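
A minimal sketch of that environment-map scheduling, assuming the per-map costs have been measured in earlier frames (the EnvMap struct and the commented-out render calls are made up for illustration):

#include <vector>

struct EnvMap {
    float priority;        // higher = update first
    float highDetailCost;  // measured seconds for a full-resolution update
    float lowDetailCost;   // measured seconds for a cheap update
};

// Spend whatever budget is left this frame on env map updates, highest priority first.
// Returns how many maps received a high-detail update.
int updateEnvMaps(std::vector<EnvMap>& maps, float remainingBudget)
{
    int highDetailUpdates = 0;
    for (auto& m : maps) {                        // assumed sorted by priority, highest first
        if (m.highDetailCost <= remainingBudget) {
            // renderEnvMapHighDetail(m);         // placeholder draw call
            remainingBudget -= m.highDetailCost;
            ++highDetailUpdates;
        } else if (m.lowDetailCost <= remainingBudget) {
            // renderEnvMapLowDetail(m);          // placeholder draw call
            remainingBudget -= m.lowDetailCost;
        }
        // Otherwise keep last frame's result for this map.
    }
    return highDetailUpdates;
}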

You will find it hard to strike a balance between making the algorithm sensitive enough and avoiding visually disturbing constant switching between one LOD and another. I would have this as an option which can be turned on but is off by default.

Yeah, of course, it would take a lot of tweaking. One would need some system to smooth or hide the transition between the LOD levels, and I'm unsure of the best way to do that. I guess right at the transition one could interpolate the LODs for a few frames, but that would probably slow it down quite a bit.
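
One way that interpolation could look is a short cross-fade between the two levels, at the cost of drawing both for a few frames (drawMeshWithAlpha() is a placeholder, and the fade length is an arbitrary assumption):

extern void drawMeshWithAlpha(int lodLevel, float alpha);  // placeholder draw call

struct LodTransition {
    int   fromLevel;
    int   toLevel;
    float t = 0.0f;          // 0..1 blend factor
};

void drawWithTransition(LodTransition& tr, float dt, float fadeSeconds = 0.25f)
{
    tr.t += dt / fadeSeconds;
    if (tr.t >= 1.0f) {
        drawMeshWithAlpha(tr.toLevel, 1.0f);       // transition finished
        return;
    }
    // Draw both levels during the fade; this is the extra cost mentioned above.
    drawMeshWithAlpha(tr.fromLevel, 1.0f - tr.t);
    drawMeshWithAlpha(tr.toLevel,   tr.t);
}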

Regarding adjusting LODs based on frame-time, I would just use some type of worst case test scene and then recommend a quality setting.

The example in Oblivion with lots of trees is usually solved during level design by sticking to a budget for static objects. In a multiplayer game it gets tricky because you can't control things as easily, so I suppose that would be the one time you might want to tweak the LOD.

Eric

This brings me to a related question. Is there a way to make the GPU stall if an operation doesn't take long enough? Then you could do something like:

Tb = 0
Frame 1: Draw scene and env map A, taking at least Tb, but query real frame time Ta
Frame 2: Draw scene and env map B, taking at least Ta, but query real frame time Tb
Repeat

That would give consistent frame times, and there would only be a one-frame lag before the time is adjusted for lower scene complexity, etc. This is really important for other systems in the game, like the physics engine.
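
A CPU-side approximation of that scheme is sketched below; it pads with a sleep after the frame's work rather than stalling the GPU itself, and drawSceneAndEnvMap() is a placeholder for the real rendering:

#include <chrono>
#include <thread>

extern void drawSceneAndEnvMap(int envMapIndex);   // placeholder for the frame's work

using Clock = std::chrono::steady_clock;

// Pad each frame so it takes at least as long as the previous frame's measured time.
void runFrames()
{
    std::chrono::duration<double> previous{0.0};   // the "Tb = 0" from above
    for (int frame = 0; ; ++frame) {
        const auto start = Clock::now();

        drawSceneAndEnvMap(frame % 2);             // alternate env map A / B

        const auto elapsed = std::chrono::duration<double>(Clock::now() - start);
        if (elapsed < previous)
            std::this_thread::sleep_for(previous - elapsed);   // "take at least T"

        previous = elapsed;                        // real frame time, measured before padding
    }
}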

Unfortunately, I don't know of any mechanism to perform that type of stall/timing operation without costly GPU/CPU synchronization.

Eric

