silikone

Member
  • Content Count

    15
  • Joined

  • Last visited

Community Reputation

128 Neutral

About silikone

  • Rank
    Member

Personal Information

  • Interests
    Audio
    Programming

  1. silikone

    (Updated) Struggling With Remembering What I've Learned.

    My message wasn't targeted at one person specifically, and I hope you didn't take it as a source of discouragement. If anything, a quintessential autistic mind's retention of details and systematic perception are a boon for programming, and it shows. I'd wager that the proportion of programmers on the spectrum is notably high relative to the total population. To rephrase, skill and will are both important. If one is lacking, the other can compensate. Finding your weaknesses also means finding your strengths.
  2. silikone

    (Updated) Struggling With Remembering What I've Learned.

    I think it's also worth emphasizing the significance of innate general cognitive ability, something you can't really improve through training. I'd wager that the average IQ of successful game programmers is very high, as it is a field with an extreme demand for logical reasoning as well as a sharp memory that can hold a myriad of details. Intelligence is far from everything, of course, but it is nearly as important as having the right mindset, if not more so.
  3. So as I am toying around with lighting shaders, great-looking results can be achieved. However, I struggle to fully grasp the idea behind it. Namely, the microfacet BRDF doesn't line up with how I intuitively understand the process. As expected, the perceived brightness on a surface is highest where the half-vector H aligns with N, but this gets amplified by the 4(N·L)(N·V) denominator as the L and V angles diverge. The implicit geometry term would cancel this out exactly, but something like Smith-Schlick with a low roughness input would not do much in that department, making grazing angles very bright despite there being no Fresnel involved. Multiplying the whole BRDF by NdotL then only partially cancels it out; see the sketch below. Am I missing something, or should a relatively smooth metallic surface indeed have brighter highlights when staring at it with a punctual light near the horizon of said surface?
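
    A quick numeric probe of the question above, assuming a GGX distribution and the k = alpha/2 Schlick remapping (both are assumptions for illustration, not any particular engine's choice). H is pinned to N while L and V slide symmetrically toward the horizon:

        #include <stdio.h>

        /* Schlick's approximation of the Smith masking term for one direction. */
        static float g1_schlick(float ndotx, float k)
        {
            return ndotx / (ndotx * (1.0f - k) + k);
        }

        /* GGX/Trowbridge-Reitz normal distribution. */
        static float d_ggx(float ndoth, float alpha)
        {
            float a2 = alpha * alpha;
            float d  = ndoth * ndoth * (a2 - 1.0f) + 1.0f;
            return a2 / (3.14159265f * d * d);
        }

        int main(void)
        {
            float alpha = 0.1f;      /* low roughness, as in the question   */
            float k     = alpha / 2.0f;
            float ndoth = 1.0f;      /* looking straight down the highlight */

            /* Slide L and V toward the horizon while keeping H = N. */
            for (float ndotv = 0.9f; ndotv > 0.005f; ndotv *= 0.5f) {
                float ndotl = ndotv; /* symmetric configuration */
                float g     = g1_schlick(ndotl, k) * g1_schlick(ndotv, k);
                float spec  = d_ggx(ndoth, alpha) * g / (4.0f * ndotv); /* f * N.L */
                printf("N.V = %.4f  ->  outgoing specular = %.3f\n", ndotv, spec);
            }
            return 0;
        }

    With alpha = 0.1 the printed value keeps climbing until N·V falls to roughly k = 0.05, which matches the intuition in the post: Smith-Schlick only tames the 1/(N·V) growth in the last few degrees before the horizon.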
  4. silikone

    Metallic VS specular reflectance

    I realize that the apparent brightness is very much determined by roughness, but I am thinking of the total specular energy reflected from a material. An engine like Unreal lets you set both metallic and specular in addition to the very ubiquitous roughness parameter. I was thinking: to simplify the workflow, what if the former two slider inputs were combined into one? For dielectrics, the low range would account for the variance in reflectance found in reality, and the highest value would account for zero-albedo, chromatic-specular materials, i.e. pure metal. I am not sure about the BRDF of Unreal, so there could very well be factors that rely on these separate inputs.
  5. So some popular PBR workflows parameterize both metalness and reflectance. Trying to grasp the paradigm myself, I wondered if the latter is really necessary. Since the variance in reflectance of non-metals is pretty low, and the reflectance of metals is always high, what if these parameters were merged? As it currently stands, they ostensibly do the same thing, with the caveat that metalness also affects the specular color, which could perhaps be compensated for. A sketch of such a merged mapping follows below.
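
    A minimal sketch of the merged parameter described in the two posts above, assuming a single reflectivity value in [0,1]: the lower half sweeps the achromatic F0 range of common dielectrics (roughly 0.02 to 0.08), and the upper half blends F0 toward the chromatic base color as for a metal. The 0.5 split and the endpoint values are illustrative assumptions, not any engine's actual mapping:

        typedef struct { float r, g, b; } vec3;

        static float lerpf(float a, float b, float t) { return a + (b - a) * t; }

        /* Map one merged "reflectivity" slider to a specular F0 color. */
        static vec3 f0_from_reflectivity(vec3 base_color, float reflectivity)
        {
            vec3 f0;
            if (reflectivity <= 0.5f) {
                /* dielectric half: achromatic F0 across the real-world range */
                float f = lerpf(0.02f, 0.08f, reflectivity * 2.0f);
                f0.r = f0.g = f0.b = f;
            } else {
                /* metallic half: F0 tends toward the (chromatic) base color;
                   a full implementation would also fade out the diffuse term */
                float t = (reflectivity - 0.5f) * 2.0f;
                f0.r = lerpf(0.08f, base_color.r, t);
                f0.g = lerpf(0.08f, base_color.g, t);
                f0.b = lerpf(0.08f, base_color.b, t);
            }
            return f0;
        }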
  6. So the foolproof way to store information about emission would be to dedicate a full RGB data set to the job, but this is seemingly wasteful, and squeezing everything into a single buffer channel is desirable and indeed a common practice. The thing is that there doesn't seem to be one de facto standard technique to achieve this. A commonly suggested solution is to perform a simple glow * albedo multiplication, but it's not difficult to imagine instances where this strict interdependence would become an impenetrable barrier (an emissive hue that differs from the surface's albedo, for example). What are some other ideas? One variation is sketched below.
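
    One hypothetical variation on the single-channel approach: keep the hue coupled to albedo, but spend the channel on an exponentially coded HDR intensity instead of a plain [0,1] multiplier. The curve and the EMISSIVE_MAX constant are assumptions for illustration, not a standard:

        #include <math.h>

        #define EMISSIVE_MAX 64.0f   /* assumed peak emission multiplier */

        /* Decode a [0,1] buffer channel into an HDR intensity; the curve
           spends more precision on low values, 0 -> 0, 1 -> EMISSIVE_MAX. */
        static float decode_emissive(float channel)
        {
            return powf(EMISSIVE_MAX + 1.0f, channel) - 1.0f;
        }

        /* The hue still comes from albedo (the interdependence the post
           complains about), but intensity no longer saturates at 1. */
        static void emissive_rgb(const float albedo[3], float channel, float out[3])
        {
            float e = decode_emissive(channel);
            out[0] = albedo[0] * e;
            out[1] = albedo[1] * e;
            out[2] = albedo[2] * e;
        }

    Truly independent emissive color still needs extra channels; this only relaxes the intensity half of the coupling.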
  7. silikone

    Floating point luminosity

    This does perhaps explain how I have seen good results from examples of 10-bit float buffers in action. I assumed it just magically looked good. Since 16-bit has the same exponent range, I guess my original question applies to it as well. Thinking about it for a while, using a display brightness of 300 cd/m2 (what you and Wikipedia mention) as a reference for a linear untampered buffer does seem to be way too low. If the sun were to be equivalent to the highest exponent in a buffer (leaving some mantissa overhead), our 300 cd/m2 display brightness would be represented as 2^15 / (1.6 * 10^9 / 300) = 0.006144f, and 1f would be about 49k cd/m2. If the display brightness were instead represented as the intuitive 1f, it's clear that sunlight would face some severe clipping, but is it too severe?
  8. So when using floating point to represent luminosity, one would presumably do so to push the notion of maximum brightness beyond "1", as is the standard in fixed-point math. Having no real theoretical limit (other than the technical limit depending on the number of bits used), the question emerges of how those values should correlate with real-world numbers. In the context of an HDR framebuffer, small float formats in particular, one would ideally want a distribution that leverages the characteristics of display technology and human vision. Intuitively, the "1" point should represent the absolute white point of a display, but these of course vary to a high degree, and I doubt that this would offer anything close to an efficient precision distribution over the luminosity values humans are able to discern. I guess the question boils down to "How bright should the 1 value be in an R11G11B10 framebuffer?" (The reply above works through one anchoring; the sketch below repeats its arithmetic.)
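
    The arithmetic from the reply above as a tiny C program: anchor the sun near the top usable small-float value (2^15, leaving mantissa headroom) and see where a 300 cd/m2 display white lands. The luminance figures are the ones quoted in the thread, not constants from any standard:

        #include <stdio.h>

        int main(void)
        {
            double sun_cdm2     = 1.6e9;                  /* solar disc luminance  */
            double display_cdm2 = 300.0;                  /* assumed display white */
            double buffer_max   = 32768.0;                /* 2^15                  */
            double scale        = buffer_max / sun_cdm2;  /* cd/m2 -> buffer units */

            printf("display white in buffer units: %f\n", display_cdm2 * scale);
            printf("a buffer value of 1.0 equals %.0f cd/m2\n", 1.0 / scale);
            return 0;
        }

    This prints 0.006144 and about 49k cd/m2, matching the figures in the reply.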
  9. When utilizing S3 compressed textures, how is gamma correction handled? The resulting pixels from the offline tools appear to be mapped to values that should be linearly interpolated in the original sRGB gamma space. It has been suggested that the textures should be decompressed after gamma conversion; however, with the way the compression is done, this does not seem right. Suppose there is an uncompressed texture whose samples smoothly transition from 0 to 0.5 grey in gamma space within a 4x4 block. Compressing this to DXT1a maps the transitioning samples to ~0.25 after decompression. If the texture is instead first converted to linear space before this interpolation takes place, you'd have the explicit color values of 0 and ~0.22, and converting the interpolated ~0.11 back to gamma space would net you more than 0.35, which is far off from the 0.25 one would get from using a tool like nvcompress/nvdecompress. The sketch below runs through these numbers.
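
    A numeric check of the example above, assuming a pure 2.2 power curve in place of the exact piecewise sRGB transfer:

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double lo = 0.0, hi = 0.5;            /* block endpoints, gamma space */

            /* interpolate the codes as-is, the way the tools appear to do it */
            double mid_gamma = 0.5 * (lo + hi);   /* 0.25 */

            /* convert to linear first, interpolate, then re-encode */
            double mid_linear = 0.5 * (pow(lo, 2.2) + pow(hi, 2.2));  /* ~0.11 */
            double back       = pow(mid_linear, 1.0 / 2.2);           /* ~0.37 */

            printf("gamma-space midpoint: %.3f\n", mid_gamma);
            printf("linear-space midpoint, re-encoded: %.3f\n", back);
            return 0;
        }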
  10. This is where one may or may not see latency introduced in order to stay deterministic. If you are smart about it, you can detach the game simulation from the player input and feedback: it is crucial that the mouse feels instant, but gunfire and animations don't have to be (a minimal sketch of such a loop follows below). Of course, for the best programmers, nothing beats having instant everything.
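
    A minimal sketch of the decoupling described above, assuming a classic fixed-timestep loop (all names are illustrative): the simulation ticks at a fixed, deterministic rate, while the camera consumes the very latest input every rendered frame.

        #include <stdio.h>

        #define TICK_DT (1.0 / 60.0)   /* deterministic simulation rate */

        static double sim_time = 0.0;

        static void simulate_tick(void)
        {
            /* deterministic gameplay: physics, gunfire, animation state */
            sim_time += TICK_DT;
        }

        static void render_frame(double mouse_yaw)
        {
            /* camera orientation applied outside the simulation, so it
               reflects this frame's input even if the sim tick lags */
            printf("frame: sim_time=%.3f yaw=%.2f\n", sim_time, mouse_yaw);
        }

        int main(void)
        {
            double accumulator = 0.0;
            double frame_dt    = 1.0 / 144.0;   /* pretend display rate */
            double mouse_yaw   = 0.0;

            for (int frame = 0; frame < 5; frame++) {
                mouse_yaw += 0.1;               /* stand-in for polled input */
                accumulator += frame_dt;
                while (accumulator >= TICK_DT) {   /* zero or more sim ticks */
                    simulate_tick();
                    accumulator -= TICK_DT;
                }
                render_frame(mouse_yaw);        /* uses the freshest input */
            }
            return 0;
        }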
  11. Usually, an engine has to strike a balance between these three factors, sacrificing at least one to maximize another. I'm looking for some information on how various engines deal with each of them, to help make the right choice depending on the requirements of a game. For example, determinism is imperative for physics puzzle games, low latency is in high demand for twitchy shooters, and multithreading suits large-scale simulations.
  12. Speaking of Rec.709, what power does it approximate? I see talk about it being 2.4, but I always got different results. (Probed in the sketch below.)
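
    One reason the answers disagree: the Rec.709 OETF is piecewise (a linear toe, then 1.099 * L^0.45 - 0.099), so no single power matches it exactly, and the often-quoted 2.4 usually refers to the BT.1886 reference display EOTF rather than to this curve. The sketch below prints the effective decoding exponent implied at a few stimulus levels, using the standard Rec.709 constants:

        #include <math.h>
        #include <stdio.h>

        /* Rec.709 opto-electronic transfer function (scene linear -> signal). */
        static double rec709_oetf(double l)
        {
            return (l < 0.018) ? 4.5 * l : 1.099 * pow(l, 0.45) - 0.099;
        }

        int main(void)
        {
            double samples[] = { 0.05, 0.1, 0.2, 0.5, 0.9 };
            for (int i = 0; i < 5; i++) {
                double l = samples[i];
                double v = rec709_oetf(l);
                /* solving V^g = L for g gives the locally equivalent gamma */
                printf("L = %.2f  ->  effective gamma %.3f\n", l, log(l) / log(v));
            }
            return 0;
        }

    The printed exponents drift from roughly 1.8 to 2.0 across the range, which is why probing the curve at different points yields different answers.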
  13. Million to one with dynamic contrast, right? I haven't ever really paid much attention to that part. Have I been using it without realizing all this time?
  14. So HDR and other such novelties are here to provide us with accurate representations of what reality looks like on our screens, and that's nice. But how is all of this managed in SDR? First there is that 1000:1 contrast ratio on most monitors, and a lot has to be crammed inside of that. Now, this is just clueless speculation of mine, but with the ubiquitous 2.2 gamma standard, the ratio between the brightest white and the darkest grey as represented in a game should be about 200000:1. If we suppose that we are looking at a perfect-albedo surface exposed to direct sunlight, and it is exactly equivalent to 100% screen brightness, the same surface should be able to visibly reflect down to 0.5 lux without performing tonemapping, for sunlight is said to be about 100K lux. So, on a typical display, assuming that the game is physically accurate, what you see is about 0.5% of the contrast you would get in the real world, and scaling down would yield what is equivalent to 500 lux on the monitor. (The arithmetic is checked in the sketch below.) Is this logic sound, and is this what actually occurs in game engines?
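
    Checking the arithmetic in the post above, assuming a pure 2.2 power curve and 8-bit quantization (the exact sRGB transfer would shift the numbers slightly):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double gamma          = 2.2;
            double code_ratio     = pow(255.0, gamma);  /* ~197000:1 encodable */
            double sun_lux        = 100000.0;           /* direct sunlight     */
            double darkest        = sun_lux / code_ratio;
            double panel_contrast = 1000.0;

            printf("encodable ratio: %.0f:1\n", code_ratio);
            printf("darkest encodable surface: %.2f lux\n", darkest);
            printf("fraction a 1000:1 panel can show: %.2f%%\n",
                   100.0 * panel_contrast / code_ratio);
            /* anchoring the 0.5 lux floor at panel black puts white at ~500 lux */
            printf("white if black sits at the floor: %.0f lux\n",
                   darkest * panel_contrast);
            return 0;
        }

    The outputs land on ~197000:1, ~0.51 lux, ~0.51%, and ~508 lux, i.e. the post's round figures hold up.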
  15. When going from sRGB gamma on PC to consoles that are designed with TVs in mind, does anything change that developers should address? I read some pages about the need to re-encode textures when porting to a certain console, but I don't remember the specifics.