HDRR and OpenGL

Hey, I'm trying to learn OpenGL using a fully programmable pipeline, rather than any fixed-function bits, and I'm trying to get a few things straight in my head about OpenGL, particularly with HDRR.

1. a. Am I right in thinking that HDRR is a rendering technique where lighting is worked out linearly, using high-precision floating point values, and then a tone mapping algorithm of some sort converts the results to 8-bit RGB integers for display on a monitor, optionally applying a bloom effect to values that are clamped to pure white?

b. If so, what support is there in OpenGL for this, in the way of shaders I guess? I mean, what's the 'standard' way of doing HDR rendering? I'm guessing this probably involves framebuffers or something...

c. There seem to be far fewer resources for HDR in OpenGL than in DirectX. More specifically, with DirectX everyone seems to talk about Shader Model 3.0, enabling 32-bit floating point support and all that; with GLSL, is fp32 support implied, or does one have to enable it somehow?

d. Also, are there any stock shaders/libraries for doing things like global illumination, tone mapping and lighting with an arbitrary number of lights?

2. a. DirectX seems to have loads of Shader Model 3.0/4.0 graphics engines that support HDR, like X-Ray used in STALKER, the Source engine for HL2, CryEngine 2 for Crysis... Are there any modern-ish (preferably open source, but not necessarily) engines for OpenGL that support modern rendering techniques like HDRR, soft shadows, global illumination, etc. and have no dependency on the fixed-function pipeline?

b. What versions of OpenGL are roughly equivalent to particular versions of DirectX/Shader Model, if that is even comparable?

c. I understand that DirectX 10 is a big step up from 9.0c; is Longs Peak/Mt Evans intended to be the true competition to DirectX 10, or are the current APIs a good feature match for each other, graphics-wise?

Thanks :)
Personally I'm not too great a fan of HDR (it doesn't really seem to add much versus its cost); perhaps things will look better once we have higher bit-depth displays.
Check out the NVIDIA developer website; there are a couple of examples of HDR using OpenGL.
Yes, floating point textures are supported with OpenGL/GLSL (you will need a card capable of doing it, though).
Quote:Original post by thecwin
c. I understand that DirectX 10 is a big step up from 9.0c; is Longs Peak/Mt Evans intended to be the true competition to DirectX 10, or are the current APIs a good feature match for each other, graphics-wise?

Thanks :)


The current APIs have virtually the same features (if you use extensions in OpenGL).

As it is, you never really have to wait for a new OpenGL version to be released, since the hardware manufacturers (or driver developers) can (and do) extend OpenGL to suit their hardware. (The SM4 feature extensions are supported by all vendors with cards capable of using them.)
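For illustration, here's a minimal C sketch of how you would test for one of those extensions at runtime, assuming a valid GL context is current (loaders like GLEW normally wrap this up for you):

#include <string.h>
#include <GL/gl.h>

/* Returns nonzero if the driver advertises the named extension.
   (Simplified: a robust check would match whole space-separated
   tokens rather than substrings.) */
int has_extension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    return all != NULL && strstr(all, name) != NULL;
}

/* e.g. if (has_extension("GL_ARB_texture_float")) { ... } */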
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
Quote:Original post by thecwin
1. a. Am I right in thinking that HDRR is a rendering technique where lighting is worked out linearly, using high-precision floating point values, and then a tone mapping algorithm of some sort converts the results to 8-bit RGB integers for display on a monitor, optionally applying a bloom effect to values that are clamped to pure white?

Correct.

Quote:
b. If so, what support is there in OpenGL for this, in the way of shaders I guess? I mean, what's the 'standard' way of doing HDR rendering? I'm guessing this probably involves framebuffers or something...

It's independent of shaders, since modern shaders operate at floating point precision anyway. If you're using GLSL, you're on the safe side. Besides that, you need floating point framebuffer objects (or RGBE-encoded 32-bit ones, but that is an advanced technique). All this is pretty standard, and explained in all those HDR tutorials that float around the net.
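To make that concrete, here's a minimal sketch (in C) of creating a floating point render target via EXT_framebuffer_object and ARB_texture_float; the entry points are assumed to have been loaded already (e.g. by GLEW), and width and height stand in for your render resolution:

GLuint fbo, colorTex;

/* Half-float colour buffer: the usual HDR precision/performance
   trade-off. GL_NEAREST is used because linear filtering of float
   textures isn't supported on all cards of this generation. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
    GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* the driver rejected this format; fall back to LDR */
}

/* render the scene into the FBO at floating point precision, then bind
   colorTex and draw a fullscreen quad with a tone mapping shader */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);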

Quote:
c. There seem to be far fewer resources for HDR in OpenGL than in DirectX. More specifically, with DirectX everyone seems to talk about Shader Model 3.0, enabling 32-bit floating point support and all that; with GLSL, is fp32 support implied, or does one have to enable it somehow?

The way to achieve HDR is exactly the same in D3D and in OpenGL. All SM3 features are accessed through GLSL in OpenGL, which implies fp16/32 support (in practice, fp16 is often used for performance reasons). Floating point render textures are available through standard extensions. Essentially, you don't have to worry about shader models in OpenGL.
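As an example of how little ceremony is involved, here's a sketch of a tone mapping fragment shader using a simple Reinhard-style operator (uHdrScene and uExposure are made-up uniform names; you'd compile and link this like any other GLSL shader and draw it over a fullscreen quad):

const char *toneMapFrag =
    "uniform sampler2D uHdrScene;                                \n"
    "uniform float     uExposure;                                \n"
    "void main()                                                 \n"
    "{                                                           \n"
    "    vec3 hdr = texture2D(uHdrScene, gl_TexCoord[0].xy).rgb; \n"
    "    hdr *= uExposure;                                       \n"
    "    /* Reinhard: maps [0, inf) smoothly into [0, 1) */      \n"
    "    vec3 ldr = hdr / (hdr + vec3(1.0));                     \n"
    "    gl_FragColor = vec4(ldr, 1.0);                          \n"
    "}                                                           \n";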

Quote:
b. What versions of OpenGL are roughly equivalent to particular versions of DirectX/Shader Model, if that is even comparable?

There's no direct equivalence. It depends on the available extensions. In practice, almost all features available in D3D9 are available in OpenGL, and vice-versa.

Quote:
c. I understand that DirectX 10 is a big step up from 9.0c; is Longs Peak/Mt Evans intended to be the true competition to DirectX 10, or are the current APIs a good feature match for each other, graphics-wise?

Many D3D10 features are exposed under current OpenGL implementations through extensions (e.g. geometry shaders).

Quote:
Personally I'm not too great a fan of HDR (it doesn't really seem to add much versus its cost); perhaps things will look better once we have higher bit-depth displays.

I completely disagree. This opinion shows that you have never seen a good implementation of realtime HDR. The difference is enormous. HDR support (with good tone mapping) is absolutely vital to obtain photorealistic 3D images.
Quote:I completely disagree. This opinion shows that you have never seen a good implementation of realtime HDR. The difference is enormous. HDR support (with good tone mapping) is absolutely vital to obtain photorealistic 3D images.

Care to point to any demos?
Speaking from a photographer's point of view, I have to agree with Yann L about HDR being vital for photorealistic scenery.

Admittedly, with higher bit-depth/contrast-ratio screens we'll see more of the benefit, but HDR-style techniques have been used for hundreds of years in paintings, and that's what gives paintings that deeper look in comparison to photographs.

Consider the following scenario (it's a bit simplified, but the idea holds): You have a low-intensity light and a sun, but both are displayed at maximum intensity onscreen (255,255,255) because the other objects in the scene are far lower intensity. You have a mildly reflective object... this object would reflect both the sun and the light at the same intensity, even though in reality it'd reflect the sun far more than the low-intensity light. With HDR, you can represent the sun as being far more intense than the lightbulb, use that in all the calculations, and only clamp the final intensities in tone mapping.
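To put some made-up numbers on that scenario (every value here is purely illustrative):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative intensities: the sun is far brighter than the bulb,
       but an 8-bit framebuffer clamps both to 1.0 before shading. */
    const float sun = 50.0f, bulb = 1.2f, reflectivity = 0.3f;

    /* LDR: clamp first, reflect after -- both reflections come out 0.30 */
    printf("LDR: sun %.2f, bulb %.2f\n",
           fminf(sun,  1.0f) * reflectivity,
           fminf(bulb, 1.0f) * reflectivity);

    /* HDR: reflect at full range, clamp (tone map) last --
       the sun's reflection saturates at 1.00, the bulb's stays at 0.36 */
    printf("HDR: sun %.2f, bulb %.2f\n",
           fminf(sun  * reflectivity, 1.0f),
           fminf(bulb * reflectivity, 1.0f));
    return 0;
}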

Plus, gamma, non-linear sampling and all that sort of thing are quite ugly to work with; doing the lighting calculations in linear space is much nicer.

If you look at comparison screenshots, you can see where calculations like this have an impact, for example: http://en.wikipedia.org/wiki/Image:Hl2hdrcomparison.jpg

I'm only starting out with HDR in programming, but I've used it in photography for quite a while. Another thing that has a large impact is motion blur, since higher-intensity lights will leave more of a trail than lower-intensity objects, and as far as I know you can only do that sort of thing with HDR imagery.

http://www.daionet.gr.jp/~masa/rthdribl/
http://www.sync.dk/~khn/research3/index.html

Yes, I'm aware of those links (I have studied HDR quite a lot; in fact, I'm looking at supporting LogLuv32 in my game, and I have done a trial).
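For anyone unfamiliar with these compact encodings: LogLuv32 stores log luminance plus chromaticity in an ordinary 32-bit texel, in the same spirit as the RGBE encoding Yann mentioned above. Here's a sketch of Greg Ward's classic RGBE pack in C, just to show the idea:

#include <math.h>

/* Pack a linear HDR colour into 8-bit RGB plus a shared exponent
   (Greg Ward's RGBE format). The mantissas share the exponent of the
   largest component, trading some precision for a compact 32 bits. */
void rgbe_encode(float r, float g, float b, unsigned char out[4])
{
    float m = fmaxf(r, fmaxf(g, b));
    if (m < 1e-32f) {                          /* effectively black */
        out[0] = out[1] = out[2] = out[3] = 0;
        return;
    }
    int e;
    float scale = frexpf(m, &e) * 256.0f / m;  /* m = frac * 2^e */
    out[0] = (unsigned char)(r * scale);
    out[1] = (unsigned char)(g * scale);
    out[2] = (unsigned char)(b * scale);
    out[3] = (unsigned char)(e + 128);         /* biased exponent */
}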
With regard to what I said ("it doesn't really seem to add much versus its cost"): it does knock performance quite a bit. It's not a cheap addition to make in terms of speed and memory, which may stop you from adding other (more important) things, e.g. unified lighting.

Here's a shot I took from the masa demo:
http://www.zedzeek.com/junk/LDRHDR.jpg
True, the difference is quite big in the reflections (but this is like a best-case scenario for HDR; in a more typical in-game scene the difference is going to be far less than this). Now I ask you honestly: is a 30% drop in performance worth it?

By the way, tone mapping can be done with LDR as well.
I recommend you download the ATI SDK (March 2006); there's an HDR demo plus an article covering various HDR techniques in detail.
while (tired) DrinkCoffee();
Or look at the CryEngine 2 videos. They all have HDR and look amazing for it.

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin
