dynamic frametime LOD

12 comments, last by WizardOfOzzz 16 years, 2 months ago
Most games nowadays have some kind of detail system where the user or the computer chooses between low, medium, and high. I find several problems with this. First of all, it's hard to find a proper static level, since computers vary and even the performance of the same computer will vary depending on drivers etc. Secondly, it would be hard to do a proper estimation when scene complexity varies.

I've been thinking of a solution to this. Each piece of geometry to be rendered has a different priority (foreground objects could have high priority, background low; based on distance, an error metric, etc.) and several "meshes" with different LOD (geometry complexity, texture resolution, etc.). Each of these meshes has a "frametime", which essentially is the time it took to render that mesh last time. Now let's say you want a steady 60 fps, which gives a total frametime of 1/60 s. When rendering the scene you would put all the objects to be rendered in a list, assign each object a "render frametime" based on its priority divided by the total priority, and then pick the "mesh" with the most appropriate frametime. If all of an object's meshes have too long a frametime, it won't be rendered at all. This way you are always guaranteed to get the best quality at a steady 60 fps.

There are of course a lot of issues still, but I thought I'd hear what you guys think before I keep working on the idea...
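In code, the selection step might look something like this minimal C++ sketch; all the names and structures here are my own invention, and the per-mesh times are assumed to come from some earlier measurement:

```cpp
#include <vector>

struct LodMesh {
    float lastFrametime;               // seconds this LOD took to render last time
};

struct Object {
    float priority;                    // e.g. from distance or screen-space size
    std::vector<LodMesh> lods;         // sorted from most to least detailed
    const LodMesh* chosen = nullptr;   // stays null if nothing fits (culled)
};

// Split a total frame budget (e.g. 1/60 s) across objects in proportion to
// priority, then pick the most detailed LOD that fits each object's share.
void selectLods(std::vector<Object>& objects, float frameBudget)
{
    float totalPriority = 0.0f;
    for (const Object& o : objects)
        totalPriority += o.priority;

    for (Object& o : objects) {
        const float budget = frameBudget * (o.priority / totalPriority);
        o.chosen = nullptr;
        for (const LodMesh& lod : o.lods) {    // most detailed first
            if (lod.lastFrametime <= budget) { o.chosen = &lod; break; }
        }
    }
}
```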
It might be a good idea, but I think that parallelism between the CPU and the GPU breaks it.

I think that when you're drawing something (e.g. calling Draw*Primitive* in DirectX), if the primitive drawn uses resources which are already set, then the function returns even if the primitive hasn't finished drawing (sometimes it might even return before the primitive has started drawing). This is the principle of parallelism between the GPU and the CPU.

I might be wrong, since I'm not really good on that subject, but I'm pretty sure you can't really measure the rendering time of each object of your scene, except if you render them independently, which is not really feasible :)
Quote:
I might be wrong, since I'm not really good on that subject, but I'm pretty sure you can't really measure the rendering time of each object of your scene, except if you render them independently, which is not really feasible :)


I'm pretty sure that both DirectX and OpenGL have functions/queries that allow you to measure the exact amount of time something takes to be drawn. I don't know exactly how they work or how to use them, but I think you basically issue a query that starts the timer, then when you're done rendering whatever you want to profile, you issue another query that stops the timer and returns the time it took to render.

Again, I'm not completely sure about that, but if it works how I think it does, you should be able to use these queries to reliably measure the time something took to render.
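For what it's worth, modern OpenGL does expose this through timer queries (the ARB_timer_query mechanism, core since GL 3.3). A rough sketch, assuming a suitable context with loaded function pointers, and with drawMesh() standing in for whatever draw calls you want to profile:

```cpp
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
drawMesh();                               // the draw calls being measured
glEndQuery(GL_TIME_ELAPSED);

// Reading the result right away stalls the CPU until the GPU catches up;
// in practice you'd read last frame's query instead, or poll
// GL_QUERY_RESULT_AVAILABLE first.
GLuint64 elapsedNs = 0;
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);
float elapsedMs = elapsedNs / 1.0e6f;
```

This also addresses the parallelism concern above: the query measures GPU time, not the time the Draw call took to return on the CPU.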


To respond to the original idea: I think it's a good start to something. It could be difficult to get it all put together, but it could end up working very well. Keep working on it :)
The original Unreal Tournament had a similar system. In the video configuration, players could set the "Desired Framerate" setting to whatever they felt most comfortable with. The engine would then increase or reduce scene complexity or effects according to some arbitrary priority measurement to meet that framerate.
You say "original Unreal Tournament"... why isn't such a system used anymore?
Don't you think they simply choose some predefined lower-quality settings, then sleep the necessary amount of time to fill the gap and obtain the desired framerate?
That's if it can render everything fast enough. But what if it doesn't render fast enough, let's say only 50 fps?

One game I know of that has the problem I'm trying to resolve is Oblivion. There are many different environments, and the forest part is especially unstable. You can walk around just fine with a good framerate, but as soon as you get a lot of trees (more than usual) in view, the framerate suddenly drops, which is very annoying, even though the game runs at 60+ fps everywhere else in the game.
Quote:Original post by Dragon_Strike
You say "original Unreal Tournament"... why isn't such a system used anymore?


I don't know; it could still be used. I haven't played UT since the original.
Quote:Original post by KOzymandias
Don't you think they simply choose some predefined lower-quality settings, then sleep the necessary amount of time to fill the gap and obtain the desired framerate?


Quote:From Epic Games' website
Min Desired Framerate: This specifies the frame rate threshold below which Unreal Tournament will start dropping detail - reducing model detail and not drawing optional effects. If this is set higher than your normal frame rate, then you will see reduced graphics almost always, but get the best possible performance. We recommend setting it so that your normal frame rate is several fps above this value, allowing you to typically see all the effects in UT. When your frame rate drops because of a heavy firefight, Min Desired Framerate will kick in to minimize the drop in performance.
Quote:Original post by KOzymandias
Don't you think they simply choose some predefined lower-quality settings, then sleep the necessary amount of time to fill the gap and obtain the desired framerate?


Oooh... sorry, I misunderstood you in my reply... just forget what I wrote.

One could do that as well, but it wouldn't be optimal, and it would be hard to conceal the transition, since you'd have to have predefined detail steps, which obviously can't be very small. I'm also unsure how you would determine which level to use when rendering the frame. You could only do that after having rendered the frame, and then apply those settings to the next frame, which means one frame might lag and the next one might lag even more or be too fast. You won't get optimal results either way.
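For concreteness, the naive feedback loop being described might look like this sketch (all names here are hypothetical): the detail level for the next frame is chosen only after the current frame has rendered, which is exactly where the one-frame lag comes from.

```cpp
constexpr int kMaxDetail = 3;       // number of predefined detail steps (assumed)
int g_detailLevel = kMaxDetail;     // 0 = lowest detail

// Called once per frame, after rendering, with measured times in seconds.
void adjustDetail(float lastFrameTime, float targetFrameTime)
{
    if (lastFrameTime > targetFrameTime && g_detailLevel > 0)
        --g_detailLevel;            // last frame lagged: drop one step
    else if (lastFrameTime < 0.9f * targetFrameTime && g_detailLevel < kMaxDetail)
        ++g_detailLevel;            // comfortably fast: raise one step
}
```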

If I take an example: let's say you've rendered your scene and now you're on to updating the environment maps. Let's say you have 5 environment maps and you want to update them all in real time. With your approach it would go something like:

frame 10: "update all environment maps" ouch... that took waaay too long; we've got to make up for that time next frame. LAG!
frame 11: "lower detail on other stuff" NOT GOOD!
frame 30: "this time we only have to update 3 env maps"
frame 31: "we're fine"

With my suggestion it would go something like this (see the sketch after the list):

frame 10: we don't have much time, so update env map 1 with high detail and the rest with low
frame 11: we've got some extra time, so update another env map with high detail and interpolate for a smooth transition
frame 11+: same as above
frame 30: we've got time, update all 3 with high detail
frame 31: we're fine
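As a sketch of that schedule (all names hypothetical, with the per-map costs assumed to come from earlier timer-query measurements):

```cpp
#include <vector>

struct EnvMap {
    float highDetailCost;   // measured time of a full update (seconds)
    float lowDetailCost;    // measured time of a cheap update (seconds)
};

// Spend whatever is left of this frame's budget on environment-map updates,
// most important map first; maps that don't fit are skipped until a cheaper
// frame comes along.
void updateEnvMaps(std::vector<EnvMap>& maps, float remainingBudget)
{
    for (EnvMap& m : maps) {                    // assume sorted by priority
        if (m.highDetailCost <= remainingBudget) {
            remainingBudget -= m.highDetailCost;
            // renderHighDetail(m);             // hypothetical draw call
        } else if (m.lowDetailCost <= remainingBudget) {
            remainingBudget -= m.lowDetailCost;
            // renderLowDetail(m);              // hypothetical draw call
        }
        // otherwise: skip this map for now and retry next frame
    }
}
```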

This would work well in, for example, a racing game. When the cars go fast you want cool motion blur effects and such, and you also don't need to update the environment maps with as much detail, which this handles dynamically. When the car then slows down, it automatically increases the detail of the environment maps, and the viewer will think it was always that high quality.

Maybe not the best example...

