silikone

Members

  • Content count: 7
  • Joined
  • Last visited

Community Reputation

126 Neutral

About silikone

  • Rank: Newbie

Personal Information

  • Interests: Audio, Programming

  1. When utilizing S3 compressed textures, how is gamma correction handled? The resulting pixels from the offline tools appear to be mapped to values that should be linearly interpolated in the original sRGB gamma space. It has been suggested that the textures should be decompressed after gamma conversion; however, given the way the compression works, this does not seem right. Suppose there is an uncompressed texture with samples that smoothly transition from 0 to 0.5 grey in gamma space within a 4x4 block. Compressing this to DXT1a maps the transitioning samples to roughly 0.25 after decompression. If the texture is instead converted to linear space before the interpolation takes place, the explicit endpoint values become 0 and ~0.22, and converting the interpolated ~0.11 back to gamma space nets you more than 0.35, which is far off from the 0.25 one gets from a tool like nvcompress/nvdecompress (see the first sketch after this list).
  2. This is where one may or may not see latency introduced in order to stay deterministic. If you are smart about it, you can detach the game simulation from the player input and feedback (the second sketch after this list shows one way). It is crucial that the mouse feels instant, but gunfire and animations don't have to be. Of course, for the best programmers, nothing beats having everything instant.
  3. Usually, an engine has to strike a balance between these three factors, sacrificing at least one to maximize another. I'm looking for information on how various engines deal with each of them, to help make the right choice depending on a game's requirements. For example, determinism is imperative for physics puzzle games, low latency is in high demand for twitchy shooters, and multithreading suits large-scale simulations.
  4. Speaking of Rec.709, what power does it approximate? I've seen it described as 2.4, but I always got different results (see the third sketch after this list).
  5. A million to one with dynamic contrast, right? I haven't ever really paid much attention to that part. Have I been using it without realizing it all this time?
  6. So HDR and other such novelties are here to provide us with accurate representations of what reality looks like on our screens, and that's nice. But how is all of this managed in SDR? First there is that 1000:1 contrast ratio on most monitors, and a lot has to be crammed inside of that. Now, this is just clueless speculation of mine, but with the ubiquitous 2.2 gamma standard, the ratio between the brightest white and the darkest non-black grey (1/255 in 8 bits) as represented in a game should be about 200000:1. If we suppose that we are looking at a perfect albedo surface exposed to direct sunlight, and it is exactly equivalent to 100% screen brightness, the same surface should be able to visibly reflect down to about 0.5 lux without performing tonemapping, since sunlight is said to be about 100K lux. So, on a typical display, assuming the game is physically accurate, what you see is about 0.5% of the contrast you would get in the real world, and scaling down would yield what is equivalent to 500 lux on the monitor (the fourth sketch after this list runs the numbers). Is this logic sound, and is this what actually occurs in game engines?
  7. When going from sRGB gamma on PC to consoles that are designed with TVs in mind, does anything change that developers should address? I read some pages about the need to re-encode textures when porting to a certain console, but I don't remember the specifics.
  8. Axis orientation

    I'm wondering what the general rule is for the direction of absolute axes in 3D space. I most commonly see Z being the vertical axis, but I have seen some examples where this is not the case. It of course also depends on the game. If you had six degrees of freedom in a space game, where would you even begin to plot the axes?
  9. Gamma correction. Is it lossless?

    Woah, I clearly remember seeing banding in that scene myself. I always thought it was a side effect of trying to run it on an Intel HD 3000 (it ran surprisingly well). What are the performance or memory sacrifices made for using either?
  10. I'm interested in the theory of gamma correction in games, mainly the process and how it ends up looking just right. Firstly, the guides mention that diffuse textures should be stored gamma-corrected and converted when needed. When this occurs, how are the textures converted? Does it preserve all the detail of the original texture, in floating point or something? When it is then time to gamma-correct again, what exactly happens? All I've read is that one simply applies it to the frame buffer, but since linear images lack detail in dark areas, will this not produce banding of some sort? (The last sketch below shows the standard conversions.) Also, do basically all modern engines use gamma correction? What are some examples of high-profile games that don't?
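
To make the arithmetic in the DXT question above (post 1) concrete, here is a minimal C++ sketch comparing the two interpolation orders. It assumes a pure 2.2 power in place of the exact piecewise sRGB curve, which changes the numbers only slightly.

```cpp
#include <cmath>
#include <cstdio>

// Pure 2.2 power stand-in for sRGB (assumption; the piecewise curve differs slightly).
static double toLinear(double g) { return std::pow(g, 2.2); }
static double toGamma(double l)  { return std::pow(l, 1.0 / 2.2); }

int main() {
    const double a = 0.0, b = 0.5;  // the two block endpoint greys, stored in gamma space

    // Interpolating the stored gamma-space values directly, as a block decoder does:
    double midGamma = 0.5 * (a + b);                        // ~0.25

    // Converting to linear first, interpolating, then re-encoding:
    double midLinear = 0.5 * (toLinear(a) + toLinear(b));   // ~0.11
    double reEncoded = toGamma(midLinear);                  // ~0.36

    std::printf("gamma-space midpoint:              %.3f\n", midGamma);
    std::printf("linear-space midpoint, re-encoded: %.3f\n", reEncoded);
}
```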
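
For the input/simulation split in post 2, the sketch below shows one common arrangement under hypothetical names (pollInput, updateCamera, simulateFixedStep, and render are illustrative stubs, not any particular engine's API): mouse look is applied every rendered frame, while the deterministic simulation ticks at a fixed rate and may trail the camera by up to one tick.

```cpp
#include <chrono>

// Hypothetical engine hooks; empty stubs so the sketch compiles.
void pollInput()              {}  // read the latest mouse/keyboard state
void updateCamera()           {}  // apply mouse look immediately, outside the deterministic sim
void simulateFixedStep()      {}  // deterministic game/physics tick (gunfire, animation, etc.)
void render(double /*alpha*/) {}  // draw, interpolating sim state by 'alpha'

int main() {
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 60.0;  // fixed step keeps the simulation deterministic
    double accumulator = 0.0;
    auto previous = clock::now();

    for (;;) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        pollInput();
        updateCamera();              // the mouse path stays instant, every rendered frame

        while (accumulator >= dt) {  // the sim catches up in whole ticks
            simulateFixedStep();
            accumulator -= dt;
        }
        render(accumulator / dt);    // blend between the last two sim states for display
    }
}
```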
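
For the Rec.709 question in post 4, this sketch brute-forces the single pure power that best matches the standard BT.709 OETF; the sample range and least-squares fit are arbitrary choices, so the printed exponent is only a ballpark. The 2.4 figure usually refers to the BT.1886 display EOTF rather than to an inverse of this encoding curve.

```cpp
#include <cmath>
#include <cstdio>

// Rec.709 OETF (scene linear -> code value), per ITU-R BT.709.
static double rec709Oetf(double L) {
    return (L < 0.018) ? 4.5 * L : 1.099 * std::pow(L, 0.45) - 0.099;
}

int main() {
    // Find the pure power V = L^(1/g) that best matches the OETF over [0.01, 1].
    double bestG = 0.0, bestErr = 1e9;
    for (double g = 1.6; g <= 2.6; g += 0.001) {
        double err = 0.0;
        for (double L = 0.01; L <= 1.0; L += 0.01) {
            double d = rec709Oetf(L) - std::pow(L, 1.0 / g);
            err += d * d;
        }
        if (err < bestErr) { bestErr = err; bestG = g; }
    }
    std::printf("best-fit pure power for the Rec.709 OETF: about %.2f\n", bestG);
}
```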
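
The figures in post 6 can be checked in a few lines, using the same round numbers the post does (100,000 lux sunlight, a 1000:1 panel, a pure 2.2 gamma applied to 8-bit values).

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double gamma = 2.2;
    const double sunlightLux = 100000.0;    // rough figure from the post
    const double panelContrast = 1000.0;    // typical SDR monitor, per the post

    // Ratio between full white (255/255) and the darkest non-black grey (1/255)
    // after applying a 2.2 decoding gamma.
    double codeRatio = std::pow(255.0, gamma);
    std::printf("encodable contrast: about %.0f : 1\n", codeRatio);

    // If full white represents direct sunlight, the darkest grey corresponds to:
    std::printf("darkest encodable level: about %.2f lux\n", sunlightLux / codeRatio);

    // Fraction of that range a 1000:1 panel can actually display:
    std::printf("panel covers about %.2f%% of it\n", 100.0 * panelContrast / codeRatio);
}
```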
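
For the texture conversion asked about in post 10, these are the standard sRGB encode/decode formulas. The comments describe the common arrangement (hardware sRGB texture formats decoding at sample time, sRGB render targets encoding on write), which is one answer to where the dark-end precision goes; engines differ in the details.

```cpp
#include <cmath>
#include <cstdio>

// Standard sRGB <-> linear conversions (the piecewise curve, not a pure power).
// With an sRGB texture format the GPU performs the decode when the texture is
// sampled, so the 8-bit data stays gamma-encoded in memory and only becomes
// linear, in float precision, during shading; the encode is applied when
// writing to an sRGB render target.
static float srgbToLinear(float s) {
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

static float linearToSrgb(float l) {
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}

int main() {
    // A dark 8-bit value survives the round trip because the intermediate
    // linear value stays in floating point instead of being re-quantized.
    float s = 10.0f / 255.0f;
    std::printf("%f -> linear %f -> back to sRGB %f\n",
                s, srgbToLinear(s), linearToSrgb(srgbToLinear(s)));
}
```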