Hodgman

About Hodgman

  • Rank
    Moderator - APIs & Tools

Personal Information

Social

  • Twitter
    @BrookeHodgman
  • Github
    hodgman

  1. Does violence stem from video games

    Not definitely. Speculatively. You could also speculate that he was a violent person and used airsoft and Counter-Strike as a coping strategy to deal with his impulses, and that they've helped him avoid similar incidents elsewhere. But that's just speculation... Be careful not to confuse your intuition with certainty.
  2. If your GPU time is normally 1.66ms and you turn on vsync with a 60Hz display, your GPU frame time will get rounded up to 16.6ms, so the CPU will start getting too far ahead of the GPU, and D3D will block the CPU inside Present for about 15ms more than it used to (roughly 10x longer). What you're describing could be normal/expected. Can you post some timings, your swap chain configuration, and what you expect to happen?
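
    A minimal sketch of how you could measure that Present block on the CPU (assuming a D3D11/DXGI setup; the swapChain parameter and the use of std::chrono are my own illustration, not something from the post):

        // Times how long the CPU is blocked inside Present() when vsync is on.
        #include <chrono>
        #include <cstdio>
        #include <dxgi.h>

        void PresentAndMeasure(IDXGISwapChain* swapChain)
        {
            using Clock = std::chrono::high_resolution_clock;
            const auto before = Clock::now();
            swapChain->Present(1, 0);   // SyncInterval = 1 -> wait for the next vblank
            const auto after = Clock::now();
            const double blockedMs =
                std::chrono::duration<double, std::milli>(after - before).count();
            std::printf("CPU blocked in Present for %.2f ms\n", blockedMs);
        }

    If the GPU finishes its work in ~1.66ms, you'd expect this number to jump to roughly 15ms once vsync caps the frame at 16.6ms.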
  3. There's no reason you can't have vsync and high fps. Can you explain your current setup/configuration of the swap chain, present mode, etc., what you expect to happen, and what is actually happening? Triple buffering doesn't improve framerates in general; it just helps smooth out a jittery framerate / absorb occasional framerate spikes. Do you know what your current GPU time per frame is, and your CPU time per frame not including Present?
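
    For reference, here's a hedged sketch of one possible "three buffers + vsync" configuration using the DXGI flip model (the factory/device/hwnd parameters are assumptions for illustration; the post doesn't prescribe this exact setup):

        #include <d3d11.h>
        #include <dxgi1_2.h>

        HRESULT CreateTripleBufferedSwapChain(IDXGIFactory2* factory, ID3D11Device* device,
                                              HWND hwnd, IDXGISwapChain1** outSwapChain)
        {
            DXGI_SWAP_CHAIN_DESC1 desc = {};
            desc.Width = 0;                                   // 0 = use the window's size
            desc.Height = 0;
            desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            desc.SampleDesc.Count = 1;
            desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            desc.BufferCount = 3;                             // "triple buffering"
            desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;  // flip-model presentation
            return factory->CreateSwapChainForHwnd(device, hwnd, &desc,
                                                   nullptr, nullptr, outSwapChain);
        }

    Presenting with swapChain->Present(1, 0) then gives vsync; the extra buffer just absorbs occasional spikes rather than raising the average framerate.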
  4. Tone Mapping

    If you're using this format, you're declaring that it was authored in sRGB, which actually isn't a gamma curve at all. Gamma 2.2 is approximately equal to sRGB, but they're actually quite different down close to black (sRGB is superior). sRGB is the standard for PC and web graphics / displays and the "default" assumption when you open any image file that doesn't contain gamma/colour-space info, so if you're trying to standardize all your artists' displays to a common standard, sRGB is the obvious choice.

    I would probably only recommend storing colours in a different space if you know that you're not going to use the full black-to-white range and have more than 8-bit precision source data. e.g. Crytek have a workflow that stores the per-channel minimum and range in a constant, and they do a MAD after fetching to retrieve the original data. I can't remember, but they might've used a gamma curve too.

    On a modern PC you probably won't notice the perf difference. The per-pixel pow is probably under a dozen MADs, times the resolution, times overdraw. Maybe 20 or 30 million FLOPs, which a modern GPU can eat up. On the PS3's terrible GPU we had this on our "must do before we can ship" list, and really did notice the change. On some older games we actually used Gamma 2.0 instead of Gamma 2.2 because the math is way cheaper! However, current low-end and current high-end GPUs still differ by something like 10x in FLOPs, so if you're optimizing for the low end you'll still be trying to claw back every clock cycle you can, and saving 20 million FLOPs would be very welcome.
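
    To make the "quite different down close to black" point concrete, here's the standard piecewise sRGB decode next to a plain 2.2 power curve (the snippet is my illustration; the transfer functions themselves are the standard ones):

        #include <cmath>
        #include <cstdio>
        #include <initializer_list>

        float SrgbToLinear(float c)      // exact piecewise sRGB transfer function
        {
            return (c <= 0.04045f) ? c / 12.92f
                                   : std::pow((c + 0.055f) / 1.055f, 2.4f);
        }

        float Gamma22ToLinear(float c)   // simple power-curve approximation
        {
            return std::pow(c, 2.2f);
        }

        int main()
        {
            // The two curves agree well near white but diverge noticeably near black.
            for (float c : { 0.01f, 0.05f, 0.5f, 0.9f })
                std::printf("%.2f -> sRGB %.5f, gamma 2.2 %.5f\n",
                            c, SrgbToLinear(c), Gamma22ToLinear(c));
        }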
  5. Tone Mapping

    For colour assets authored by your artists (which are stored in sRGB space), it gives you free sRGB->Linear decoding when you sample from them, so that you don't need to do your own gamma decoding with manual shader code.
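
    A minimal D3D11 sketch of that "free decode": declare the texture with an _SRGB format and the hardware converts sRGB->Linear whenever the shader samples it (the device/pixel-data parameters here are assumptions for illustration):

        #include <d3d11.h>

        HRESULT CreateSrgbTexture(ID3D11Device* device, const void* pixels,
                                  UINT width, UINT height, ID3D11Texture2D** outTex)
        {
            D3D11_TEXTURE2D_DESC desc = {};
            desc.Width = width;
            desc.Height = height;
            desc.MipLevels = 1;
            desc.ArraySize = 1;
            desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;  // source data is sRGB-encoded
            desc.SampleDesc.Count = 1;
            desc.Usage = D3D11_USAGE_IMMUTABLE;
            desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

            D3D11_SUBRESOURCE_DATA init = {};
            init.pSysMem = pixels;
            init.SysMemPitch = width * 4;                   // 4 bytes per texel

            return device->CreateTexture2D(&desc, &init, outTex);
        }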
  6. Before I start blindly following this rigid ideology, what about cases where swapping is just as expensive as copying?
  7. Forward and Deferred Buffers

    No. To deliberately simplify -- let's say that radiance is what the sensor is measuring, and exposure determines what the maximum measurement could possibly be.

    The sensor isn't really measuring radiance, though -- radiance is energy (emitted or received) per angle, per area, per time. Shutter speed increases the time. Aperture increases the angle. ISO increases the efficiency (the percentage of energy actually captured and not lost as heat, reflected, etc.). Exposure is the combination of these three -- so increasing exposure allows the same radiance input to act over a longer period of time / larger angle, which allows more energy to be delivered to the sensor.

    You can have a very high exposure value (large shutter time, large aperture, large ISO value), but if you don't shine any light (radiance) through the lens, the sensor won't report any values. Likewise, if you shine a constant radiance value through the lens but vary the exposure value, the sensor will report higher or lower energy values.

    tl;dr - it's basically an arbitrary multiplier that you can use to make everything brighter or darker.
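
    If you do want to derive that multiplier from camera settings, one common photographic convention (saturation-based EV100; the constants are an assumption of that convention, not something stated above) looks roughly like this:

        #include <cmath>

        // aperture in f-stops, shutterTime in seconds, iso as the ISO rating.
        float ExposureFromCamera(float aperture, float shutterTime, float iso)
        {
            // EV normalized to ISO 100 for these camera settings.
            const float ev100 = std::log2((aperture * aperture / shutterTime) * (100.0f / iso));
            // Luminance that would just saturate the sensor, inverted to get a multiplier.
            const float maxLuminance = 1.2f * std::pow(2.0f, ev100);
            return 1.0f / maxLuminance;   // multiply scene radiance by this before tonemapping
        }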
  8. Forward and Deferred Buffers

    In real life, it's basically the amount of light that the sensor/film is exposed to in order to create the image. Increasing the aperture (the size of the hole that lets in light) will increase the amount of light coming in, but will also increase the strength of depth-of-field effects / narrow the depth of focus. Increasing the shutter time (the time that the hole is open) will increase the amount of light coming in, but will also increase the strength of motion-blur effects. Increasing the ISO value (the sensitivity of the sensor/film) will increase the amount of light that is captured, but will also increase the strength of film-grain / noise effects.

    Many engines are now trying to model real cameras, so that people trained with real-world tools will be immediately comfortable in-engine, and also so that in-game post-processing effects look more natural / more like cinema. If you're not trying to emulate a real camera though, then "exposure" is just an arbitrary multiplier that you use to rescale the input data as the first step of a tonemapper. You can either pick an exposure value manually for each scene in your game, or use an auto-exposure technique where you look at the average brightness of the lighting buffer (or more commonly, the geometric mean of the luminance in the lighting buffer) and use that average to pick a suitable exposure value where the final image won't be too bright or too dark.
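
    A small CPU-side sketch of that auto-exposure idea -- take the geometric mean of luminance (i.e. the mean of log-luminance), then scale so the average lands on a chosen "key" grey value (the 0.18 key and the epsilon are illustrative assumptions; in practice this usually runs on the GPU via downsampling):

        #include <cmath>
        #include <vector>

        float AutoExposure(const std::vector<float>& luminance, float key = 0.18f)
        {
            double sumLogLum = 0.0;
            for (float lum : luminance)
                sumLogLum += std::log(lum + 1e-6);           // epsilon avoids log(0)
            const float geoMean = static_cast<float>(std::exp(sumLogLum / luminance.size()));
            return key / geoMean;                            // multiply the HDR buffer by this
        }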
  9. Forward and Deferred Buffers

    Half floats were literally invented for HDR. As above, using full floats is unheard of, as doubling your memory bandwidth really will impact performance a lot. Halves go from 0 up to 65504, which is a pretty massive dynamic range. You can exceed the maximum value quite easily with a lot of bright lights, which results in 'inf' being stored in that pixel, so you do need to check for that / clamp just below the maximum. If you need more range, you can move the multiply by 'exposure' out of your tone mapping shader and into every forward/lighting shader instead. As above, this is how people manage to get away with using even 11_11_10_FLOAT.

    They had different flags. Making a resource UAV-compatible may affect the memory layout compared to just SRV-compatible, so you should use the minimum set of flags during resource creation. Likewise, some had mips while others didn't, etc. When allocating from the pool you'd specify size, format, usage, capability, etc.
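
    On the "minimum set of flags" point, a hedged D3D11 sketch (the function name is illustrative): only add UAV capability to a render target when a pass actually needs it, since extra capabilities can affect how the driver lays the resource out:

        #include <d3d11.h>

        D3D11_TEXTURE2D_DESC DescribeHdrTarget(UINT width, UINT height, bool needsUav)
        {
            D3D11_TEXTURE2D_DESC desc = {};
            desc.Width = width;
            desc.Height = height;
            desc.MipLevels = 1;
            desc.ArraySize = 1;
            desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;   // half-float HDR target
            desc.SampleDesc.Count = 1;
            desc.Usage = D3D11_USAGE_DEFAULT;
            desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
            if (needsUav)
                desc.BindFlags |= D3D11_BIND_UNORDERED_ACCESS;  // only when really required
            return desc;
        }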
  10. Forward and Deferred Buffers

    In some of the games I've worked on, we've used a render-target pool. When doing a pass that temporarily needs some buffers, you ask the pool for them (e.g. bloom might request 1x ping-pong at HDR bit-depth, 1x half-resolution at 8-bit depth, and 2x quarter-resolution at 8-bit depth), then when you're finished using them, you return them to the pool so that subsequent rendering passes can possibly reuse them.

    In D3D12 and on consoles, you can actually allocate multiple textures over the top of each other, e.g. maybe 2x 8-bit textures and 1x 16-bit texture in the same bit of memory. You can then start the frame by using the 2x 8-bit textures, and when you're finished with them, mark them as invalid, mark the 16-bit texture as valid, and start using that 16-bit texture for ping-ponging, etc... This kind of thing is not possible on older APIs though.

    This doubles the size of your colour data in the GBuffer. Often GBuffer passes are bottlenecked by memory bandwidth, so you want to keep the GBuffer size as low as possible.

    In some APIs, reading from the swap-chain is disallowed. Make sure that it's allowed in the APIs that you're using. You don't even need a swap-chain depth buffer at all -- the monitor doesn't display depth.
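
    A hedged D3D12 sketch of the "allocate multiple textures over the top of each other" idea: two placed resources share the same heap offset, and an aliasing barrier marks the hand-over mid-frame (the heap and resource descs are assumed to be created elsewhere; this is illustration only, not the pool code from any shipped game):

        #include <d3d12.h>

        void AliasTargets(ID3D12Device* device, ID3D12Heap* heap,
                          const D3D12_RESOURCE_DESC& desc8bit,
                          const D3D12_RESOURCE_DESC& desc16bit,
                          ID3D12GraphicsCommandList* cmdList)
        {
            ID3D12Resource* tex8 = nullptr;
            ID3D12Resource* tex16 = nullptr;

            // Both resources are placed at offset 0 of the same heap, so they overlap.
            device->CreatePlacedResource(heap, 0, &desc8bit,
                                         D3D12_RESOURCE_STATE_RENDER_TARGET,
                                         nullptr, IID_PPV_ARGS(&tex8));
            device->CreatePlacedResource(heap, 0, &desc16bit,
                                         D3D12_RESOURCE_STATE_RENDER_TARGET,
                                         nullptr, IID_PPV_ARGS(&tex16));

            // ... render using tex8 ...

            // When tex8 is no longer needed, an aliasing barrier activates tex16.
            D3D12_RESOURCE_BARRIER barrier = {};
            barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_ALIASING;
            barrier.Aliasing.pResourceBefore = tex8;
            barrier.Aliasing.pResourceAfter = tex16;
            cmdList->ResourceBarrier(1, &barrier);

            // ... render/ping-pong using tex16 ...
        }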
  11. Gamma Correction

    If you use one of the blah_SRGB texture formats, then yeah, the hardware will do sRGB->Linear when you read from them, and also do Linear->sRGB when you write to them. So if you've got sRGB texture maps on your objects, and are copying them into an sRGB GBuffer, the hardware will do [SourceTexture]-sRGB->Linear-[Pixel shader]-Linear->sRGB-[GBuffer]... This seems wasteful, but there are transistors in the HW dedicated to the task, so I don't think it's actually that harmful.

    I'm not sure what coefficient you mean? Any per-material parameters that you're going to put into a cbuffer, but are sourced from an artist's colour picker -- yeah, you can do the sRGB->Linear conversion on the CPU before you place the value into the cbuffer.

    So, the source data you control. You buy all of your artists an sRGB monitor and use a colour calibrator to ensure that they're all fairly well calibrated. Then you know that your source data is sRGB. Internally you store sRGB data in your GBuffer, because there's magic HW support for it, and your source data is sRGB (so storing linear data in your GBuffer would either be lossy, or require a 16-bit channel format...). For final output (the swap chain), you can create this as a *_SRGB texture and get automatic conversion from Linear->sRGB when writing to it... but yeah, this assumes that the user actually has an sRGB / gamma 2.2 monitor.

    If you want to give your users a "gamma" slider, which allows them to see the image as you intended even though they've got a silly gamma 1.8 or gamma 2.4 monitor, then you've got to do a bit of work. Instead of making the swapchain with an *_SRGB format, make a non-SRGB swap chain. This disables the automatic Linear->sRGB conversion when writing data into it. Instead, in all of your shaders that directly write into the swap-chain, implement the gamma encoding yourself in the shader code -- e.g. return pow( result, rcp_outputGamma )
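
    A small sketch of the C++ side of that gamma-slider setup (the struct layout and function names are assumptions for illustration): create the swap chain without the _SRGB suffix so the automatic encode is off, and feed the shader a 1/gamma constant for its final pow():

        #include <dxgi.h>

        struct OutputGammaConstants
        {
            float rcpOutputGamma;   // shader does: pow(result, rcpOutputGamma)
            float padding[3];       // keep the cbuffer 16-byte aligned
        };

        DXGI_FORMAT ChooseSwapChainFormat(bool userAdjustableGamma)
        {
            // _SRGB swap chain: hardware does Linear->sRGB on write (fixed curve).
            // Non-SRGB swap chain: the shader gamma-encodes manually, so the user
            // can pick 1.8 / 2.2 / 2.4 etc. from a slider.
            return userAdjustableGamma ? DXGI_FORMAT_R8G8B8A8_UNORM
                                       : DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
        }

        OutputGammaConstants MakeGammaConstants(float userGamma /* from the slider */)
        {
            return { 1.0f / userGamma, { 0.0f, 0.0f, 0.0f } };
        }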
  12. Gamma Correction

    "Gamma correction" refers to both the encoding and decoding step. To be clear, I find it best to refer to the Linear->sRGB step and the sRGB->Linear step. Typically anything that's authored by eye requires a sRGB->Linear conversion. This is because the artist who created the data was viewing it on an sRGB monitor. So when they looked at 187/255 on their monitor, they saw a value that appeared to them to be about 50% as bright as 255/255, meaning they're directly painting in sRGB format. Moreover, anything that represents a colour that will be viewed by humans benefits from great compression in sRGB format. When storing linear RGB you need more than 10 bits per channel to not perceive any colour banding, while sRGB achieves this with only 8 bits per channel. So yes - base colour needs a sRGB->Linear conversion before doing your lighting calculations. Normal maps definitely do not -- the source data is linear (127 is in the middle), so attempting to decode it when it's not encoded will just bias the normals off to the side Alpha masks, metalness masks, roughness values, probably don't need it... but you can try it to see if artists like the "incorrect" bias. e.g. in some PBR papers you'll see that they choose to expose roughness-squared to their artists instead of roughness, simply because the artists liked that parameter better.
  13. How To Make a Video Game Press Kit?

    If you put the effort into making one there's not much point in hiding it; you may as well make a public link to it on your site. It should be digital -- unless maybe you're meeting someone in person, in which case you could give them a single-page print that also includes a link to the digital files. What constitutes "the press" is quite different now to what it was a decade ago... e.g. getting covered by a major streamer is about the best thing that can happen, and getting covered by a physical magazine is nearly useless... You want to make it as easy as possible for these people to get their hands on a few well-written paragraphs and high-quality screenshots and videos. If you haven't come across it, lots of people use http://dopresskit.com to help create clear/clean press kits.
  14. Mobile Game Studio XP = "Industry"?

    I wouldn't be surprised if the mobile portion of the games industry is actually bigger than console+PC combined. As above, most indies that I know make mobile games rather than console/PC games. The definition of AAA is pretty loose. These days it pretty much means that you spent over $10M on your game... Games like Clash of Clans definitely fit into that category despite looking a whole lot simpler than a console / PC game!