About silikone


  1. silikone

    Floating point luminosity

    This does perhaps explain how I have seen good results from examples of 10-bit float buffers in action. I assumed it just magically looked good. Since 16-bit has the same exponent range, I guess my original question applies to it as well. Thinking about it for a while, using a display brightness of 300 cd/m2 (what you and Wikipedia mention) as a reference for a linear untampered buffer does seem to be way too low. If the sun were to be equivalent to the highest exponent in a buffer (leaving some mantissa overhead), our 300 cd/m2 display brightness would be represented as 2^15 / (1.6 * 10^9 / 300) = 0.006144f, and 1f would be about 49k cd/m2. If the display brightness were instead represented as the intuitive 1f, it's clear that sunlight would face some severe clipping, but is it too severe?
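A quick sanity check of the arithmetic above. The sun-luminance figure (~1.6 * 10^9 cd/m2), the 300 cd/m2 display white, and the 2^15 top-of-range are the post's own assumptions; everything else follows from them:

```python
# If the sun's luminance sits near the top of a small-float exponent range
# (2^15, leaving mantissa headroom), where does display white land?
SUN_LUMINANCE = 1.6e9       # cd/m^2, approximate luminance of the sun's disc
DISPLAY_WHITE = 300.0       # cd/m^2, typical monitor peak brightness
TOP_OF_RANGE = 2.0 ** 15    # largest power of two in the 5-bit-exponent formats

cd_per_unit = SUN_LUMINANCE / TOP_OF_RANGE        # real-world luminance of 1.0f
display_in_buffer = DISPLAY_WHITE / cd_per_unit   # buffer value for display white

print(f"1.0f in the buffer = {cd_per_unit:.0f} cd/m^2")   # ~49k, as stated
print(f"display white maps to {display_in_buffer:.6f}f")  # 0.006144f
```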
  2. So when using floating point to represent luminosity, one would presumably do so to push the notion of maximum brightness beyond "1", as is the standard in fixed point math. Having no real theoretical limit (other than the technical limit depending on the number of bits used), the question of how those values should correlate with real-world numbers emerges. In the context of an HDR framebuffer, small float formats in particular, one would ideally want a distribution that leverages the characteristics of display technology and human vision. Intuitively, the "1" point should represent the absolute white point of a display, but these of course vary to a high degree, and I doubt that this would offer anything close to an efficient precision distribution of luminosity values that humans are able to discern. I guess the question boils down to "How bright should the 1 value be in an R11G11B10 framebuffer?"
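For reference, here is a rough sketch of the range such a buffer can represent, assuming the usual small-float layout (5 exponent bits with bias 15, 6 mantissa bits for R and G, 5 for B, no sign bit):

```python
# Dynamic range of the R11G11B10_FLOAT layout under the common small-float
# parameters: 5 exponent bits (bias 15), exponent field 31 reserved for Inf/NaN.
EXP_BIAS = 15

def minifloat_max(mantissa_bits):
    # Largest finite value: max usable exponent field is 30
    return 2.0 ** (30 - EXP_BIAS) * (2.0 - 2.0 ** -mantissa_bits)

def minifloat_min_normal():
    # Smallest normal value: exponent field 1
    return 2.0 ** (1 - EXP_BIAS)

print(minifloat_max(6))       # 65024.0 for the 11-bit R/G channels
print(minifloat_max(5))       # 64512.0 for the 10-bit B channel
print(minifloat_min_normal()) # ~6.1e-5
```

So wherever "1" is placed, there are only about 30 powers of two above it and 14 below it (plus denormals) to spend.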
  3. When utilizing S3 compressed textures, how is gamma correction handled? The resulting pixels from the offline tools appear to be mapped to values that should be linearly interpolated in the original sRGB gamma space. It has been suggested that the textures should be decompressed after gamma conversion, however, with the way that the compression is done, this does not seem right. Suppose there is an uncompressed texture with samples that smoothly transition from 0 to 0.5 grey in gamma space within a 4x4 block. Compressing this to DXT1a would map the transitioning samples to that which is ~0.25 after decompression. If the texture is however first converted to linear space before this interpolation takes place, you'd have the explicit color values of 0 and ~0.22, and converting the interpolated ~0.11 back to gamma space would net you more than 0.35, which is far off from the 0.25 one would get from using a tool like nvcompress/nvdecompress.
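The discrepancy is easy to reproduce numerically. This sketch assumes a pure 2.2 power curve and, for simplicity, interpolates halfway between the block endpoints (real DXT1 interpolants sit at 1/3 and 2/3, but the effect is the same):

```python
# Interpolating between block endpoints 0.0 and 0.5 grey, in gamma space
# versus linear space, assuming a pure 2.2 gamma curve.
GAMMA = 2.2

def to_linear(v):
    return v ** GAMMA

def to_gamma(v):
    return v ** (1.0 / GAMMA)

gamma_space = 0.5 * (0.0 + 0.5)  # what the offline DXT tools effectively do
linear_space = to_gamma(0.5 * (to_linear(0.0) + to_linear(0.5)))

print(f"interpolated in gamma space:  {gamma_space:.3f}")   # 0.250
print(f"interpolated in linear space: {linear_space:.3f}")  # ~0.365
```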
  4. This is where one may or may not see latency introduced in order to stay deterministic. If you are smart about it, you could detach the game simulation from the player input and feedback. It is crucial that the mouse feels instant, but gunfire and animations don't have to be instant. Of course, for the best programmers, nothing beats having instant everything.
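One way to sketch that decoupling (all names here are hypothetical): sample the mouse every frame for instant camera feedback, while gameplay commands are buffered and consumed only on fixed, deterministic simulation ticks:

```python
# Minimal fixed-timestep loop: the camera uses the freshest mouse delta each
# frame (instant path), while the deterministic simulation advances in fixed
# ticks and consumes queued commands (deferred path).
TICK = 1.0 / 60.0

class Game:
    def __init__(self):
        self.accumulator = 0.0
        self.sim_ticks = 0
        self.command_queue = []   # input destined for the deterministic sim
        self.camera_yaw = 0.0     # updated immediately, outside the sim

    def frame(self, dt, mouse_dx, commands):
        self.camera_yaw += mouse_dx * 0.01   # instant feedback path
        self.command_queue.extend(commands)  # deferred, deterministic path
        self.accumulator += dt
        while self.accumulator >= TICK:
            self.step(self.command_queue)
            self.command_queue.clear()
            self.accumulator -= TICK

    def step(self, commands):
        self.sim_ticks += 1  # fixed-step gameplay/physics would run here

game = Game()
game.frame(0.034, mouse_dx=5.0, commands=["fire"])  # one 34 ms frame = 2 ticks
print(game.sim_ticks, game.camera_yaw)
```

A slow frame simply runs extra ticks, and the "fire" command lands on a tick boundary, which is where the latency the post mentions comes from.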
  5. Usually, an engine has to strike a balance between these three factors, sacrificing at least one to maximize another. I'm looking for some information on how various engines deal with each of these factors, helping one to make the right choice depending on the requirements of a game. For example, determinism is imperative for physics puzzle games, low-latency is in high demand for twitchy shooters, and multithreading suits large-scale simulations.
  6. Speaking of Rec.709, what power does it approximate? I see talks about it being 2.4, but I always got different results.
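One way to pin down a number: compare the piecewise Rec.709 OETF against a pure power curve at mid-grey. (The oft-quoted 2.4 is the BT.1886 display-side EOTF, not the camera-side OETF itself.) A small sketch:

```python
import math

def rec709_oetf(L):
    # Piecewise Rec.709 opto-electronic transfer function
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

# Effective decode gamma at mid-grey: solve 0.5 == rec709_oetf(0.5) ** g
effective = math.log(0.5) / math.log(rec709_oetf(0.5))
print(f"{effective:.2f}")  # ~1.99, i.e. close to 2.0 rather than 2.4
```

The fitted exponent drifts depending on where you sample, which may explain the different results.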
  7. Million to one with dynamic contrast, right? I haven't ever really paid much attention to that part. Have I been using it without realizing all this time?
  8. So HDR and other such novelties are here to provide us with accurate representations of what reality looks like on our screens, and that's nice. But how is all of this managed in SDR? First there is that 1000:1 contrast ratio on most monitors, and a lot has to be crammed inside of that. Now, this is just clueless speculation of mine, but with the ubiquitous 2.2 gamma standard, the ratio between the brightest white and the darkest grey as represented in a game should be about 200000:1. If we suppose that we are looking at a perfect albedo surface exposed to direct sunlight, and it is exactly equivalent to 100% screen brightness, the same surface should be able to visibly reflect down to 0.5 lux without performing tonemapping, for sunlight is said to be about 100K lux. So, on a typical display, assuming that the game is physically accurate, what you see is about 0.5% of the contrast you would get in the real world, and scaling down would yield what is equivalent to 500 lux on the monitor. Is this logic sound, and is this what actually occurs in game engines?
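The arithmetic checks out, at least under the pure 2.2 power curve assumed above:

```python
# Ratio between code 255 (white) and code 1 (darkest non-black grey) in
# linear light, assuming a pure 2.2 gamma curve over 8-bit codes.
GAMMA = 2.2
ratio = (255 / 1) ** GAMMA
print(f"{ratio:.0f}:1")  # ~197000:1, near the 200000:1 quoted

# If direct sunlight (~100,000 lux) is pinned to code 255, the darkest
# representable step corresponds to roughly half a lux:
darkest_lux = 100_000 / ratio
print(f"{darkest_lux:.2f} lux")
```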
  9. When going from gamma on PC with sRGB to consoles that are designed with TVs in mind, does anything change that the developers should address? I read some pages about the need to re-encode textures when porting to a certain console, but I don't remember the specifics.
  10. silikone

    Axis orientation

    I'm wondering what the general rule is regarding the direction of absolute axes in 3D space. I most commonly see Z being the vertical axis, but I have seen some examples of this not being the case. It of course also depends on the game. If you had six degrees of freedom in a space game, where would you even begin to plot the axes?
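Whatever convention an engine picks, converting between the two common ones is just a fixed rotation. A minimal, hypothetical helper for going from Z-up to Y-up (both right-handed):

```python
# Converting a point from a Z-up, right-handed convention (common in modeling
# tools and some engines) to Y-up, right-handed (common in others).
# A -90 degree rotation about X maps +Z (up) to +Y while staying right-handed.
def z_up_to_y_up(x, y, z):
    return (x, z, -y)

print(z_up_to_y_up(0.0, 0.0, 1.0))  # the old up axis becomes (0.0, 1.0, 0.0)
```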
  11. silikone

    Gamma correction. Is it lossless?

    Woah, I clearly remember seeing banding in that scene myself. I always thought it was a side effect of trying to run it on an Intel HD 3000 (it ran surprisingly well). What are the performance or memory sacrifices made for using either?
  12. I'm interested in the theory of gamma correction in games. Mainly the process and how it ends up looking just right. Firstly, the guides mention that diffuse textures should be stored gamma-corrected and get converted when needed. When this occurs, how are the textures converted? Does it preserve all the details that the original texture had in floating point or something? When it then is time to gamma-correct again, what exactly happens? All I've read is that one simply applies it to the frame buffer, but since linear images lack detail in dark areas, will this not produce banding of some sort? Also, do basically all modern engines use gamma correction? What are some examples of high-profile games that don't?
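Regarding the banding worry: the detail loss appears when linearized color is stored back into 8 bits, which is why the usual approach keeps textures in sRGB and converts at sampling time in higher precision. A small illustration, assuming the standard sRGB transfer function:

```python
# Count how many distinct 8-bit linear codes the bottom 32 sRGB codes map to.
# Pre-linearizing into 8 bits collapses the dark end into a handful of steps.
def srgb_to_linear(v):
    # Standard sRGB EOTF for v in [0, 1]
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

dark_codes = {round(srgb_to_linear(i / 255) * 255) for i in range(32)}
print(len(dark_codes))  # far fewer than 32 distinct values survive
```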