knarkowicz

Member
  • Content Count

    79
  • Joined

  • Last visited

Community Reputation

2406 Excellent

2 Followers

About knarkowicz

  • Rank
    Member

Personal Information

Social

  • Twitter
    knarkowicz
  • Github
    knarkowicz


  1. knarkowicz

    Blending local-global envmaps

    Thanks! The global ones weren't attached to prefabs - they were attached to "levels". Usually there were only 1-3 global envmaps per 250x250m level chunk. I don't remember whether we shipped with any blending at all, or just faded the current global env to black, switched, and faded in the new one. In any case, the idea was to blend globally - just a quick time-based blend, without any kind of world-space position or per-pixel operations (see the sketch below).
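
    A minimal sketch of that time-based crossfade, written as a C++-side weight computation; all names (EnvBlendState, GlobalEnvBlendWeight, blendDuration) are made up for illustration:

    ```cpp
    #include <algorithm>

    struct EnvBlendState {
        int   oldEnv     = -1;   // previously active global envmap index
        int   newEnv     = -1;   // envmap we are fading towards
        float blendStart = 0.0f; // time (seconds) when the switch was triggered
    };

    // Weight of the new envmap in [0, 1]. The shader then does
    // lerp(sample(oldEnv), sample(newEnv), w) - no world-space position or
    // per-pixel logic involved, exactly as described above.
    float GlobalEnvBlendWeight(const EnvBlendState& s, float timeNow, float blendDuration)
    {
        if (s.oldEnv < 0)
            return 1.0f; // no previous env - use the new one directly
        float t = (timeNow - s.blendStart) / blendDuration;
        return std::clamp(t, 0.0f, 1.0f);
    }
    ```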
  2. knarkowicz

    Blending local-global envmaps

    In my solution global envmaps are separated from the local ones. Global ones should capture mostly the sky and generic features, and be used very sparsely (a few per entire level). This way it's enough to blend just two of them for perfect transitions, and faraway reflections will look fine. I actually used this system for Shadow Warrior 2, just with a small twist - probes were generated and cached in real time. If you are interested, you can check out some slides with notes: "Rendering of Shadow Warrior 2".
  3. knarkowicz

    Blending local-global envmaps

    In your specific case I would go with a very simple solution: 1. A set of global probes - the nearest one covers the entire scene. 2. On top of that, blend your local probes. A sketch of the resulting blend weights is below.
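
    A rough sketch of how those blend weights could fit together; the linear distance falloff and all names are assumptions, not the exact method from the thread:

    ```cpp
    #include <cmath>
    #include <vector>

    struct Probe { float x, y, z, radius; }; // local probe with influence radius

    // Weights of the local probes at a shading point; whatever weight is left
    // over falls through to the nearest global probe, which covers everything.
    void BlendWeights(const std::vector<Probe>& locals,
                      float px, float py, float pz,
                      std::vector<float>& localW, float& globalW)
    {
        localW.assign(locals.size(), 0.0f);
        float total = 0.0f;
        for (size_t i = 0; i < locals.size(); ++i) {
            float dx = locals[i].x - px, dy = locals[i].y - py, dz = locals[i].z - pz;
            float d = std::sqrt(dx * dx + dy * dy + dz * dz);
            // 1 at the probe center, fading to 0 at its influence radius.
            localW[i] = std::fmax(0.0f, 1.0f - d / locals[i].radius);
            total += localW[i];
        }
        if (total > 1.0f) { // normalize overlapping locals
            for (float& w : localW) w /= total;
            total = 1.0f;
        }
        globalW = 1.0f - total; // nearest global probe fills the remainder
    }
    ```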
  4. You may be interested in the following GDC 2015 presentation: "Mesh Cutting in Farming Simulator 15", http://www.dtecta.com/files/GDC15_VanDenBergen_Gino_FS15.pdf
  5. knarkowicz

    Technical art vs graphics programming

    Don't worry, I know some technical artists who started in the industry as programmers but later successfully switched to tech art. In every company tech art means something a bit different, but usually you are responsible for some combination of:
    • Scripting inside 3ds Max/Maya/Photoshop/Substance Designer
    • Making procedural stuff inside Houdini
    • Writing special tools for enforcing art rules (e.g. checking which assets are too heavy)
    • Special material shaders (usually using a graph-based editor)
    Apart from those topics, technical artists also know how to do an artist's job. E.g. most of them can do some solid surface modeling inside 3ds Max/Maya and make some textures inside Substance Designer/Painter. As Hodgman mentioned, they don't have to be great artists, but you need to know artists' workflow and mentality very well in order to become the glue between artists and programmers.
  6. knarkowicz

    Automated debug GUI for shader constants

    We have a very simple workflow at work: shader reloading plus a global cbuffer with some debug params. The global cbuffer is attached to every shader and includes debug params which can be controlled via the UI (sliders, color picker, etc.) and console commands. When you want to test something inside a shader, you just replace some variable with a debug var, and then you can toggle changes with a keyboard shortcut (bind a key to a console command) or play with the value using a slider or color picker. A rough sketch of the idea is below.
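
    A minimal sketch of the idea; DebugParams and UpdateDebugCBuffer are made-up stand-ins for engine specifics:

    ```cpp
    #include <cstring>

    // Mirrored as a cbuffer in every shader. UI widgets and console commands
    // write into this struct; inside a shader you temporarily replace a
    // variable with e.g. value0 to tweak it live.
    struct DebugParams {
        float value0, value1, value2, value3; // generic sliders
        float color[4];                        // color picker
        int   toggle0, toggle1;                // key-bound on/off switches
        int   pad[2];                          // keep 16-byte alignment
    };

    // Called once per frame after input is processed; gpuMapping stands in
    // for your engine's mapped constant buffer memory.
    void UpdateDebugCBuffer(const DebugParams& params, void* gpuMapping)
    {
        std::memcpy(gpuMapping, &params, sizeof(params));
    }
    ```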
  7. knarkowicz

    Gamma correction - sanity check

    No. When you read from R8G8B8A8_unorm_srgb, the HW automatically does the sRGB->linear conversion for you. It's a similar case with writes - you get linear->sRGB at the end. Basically, this means you can use either R10G10B10A2_unorm or R8G8B8A8_unorm_srgb as your intermediate main color target without any observable difference when outputting directly to the screen. If you don't need HDR then you don't have to use an fp render target - you can use R8G8B8A8_unorm_srgb or R10G10B10A2_unorm. EDIT: BTW, custom gamma is usually done on top of HW sRGB, as sRGB is more complicated than a simple x^(1/2.2); the exact curves are below.
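
    For reference, a sketch of the piecewise conversions the HW applies on _srgb reads and writes (these follow the sRGB spec; note they are not a plain power curve):

    ```cpp
    #include <cmath>

    // What the HW does on a read from an _srgb format (per channel).
    float SrgbToLinear(float s)
    {
        return (s <= 0.04045f) ? s / 12.92f
                               : std::pow((s + 0.055f) / 1.055f, 2.4f);
    }

    // What the HW does on a write to an _srgb format (per channel).
    float LinearToSrgb(float l)
    {
        return (l <= 0.0031308f) ? l * 12.92f
                                 : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
    }
    ```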
  8. knarkowicz

    Gamma correction - sanity check

    sRGB is linear in the sense of brightness perception, and the sRGB->linear conversion is obviously non-linear. To store linear values you need at least 10 bits of precision, as after the sRGB->linear conversion you need more precision at the bottom of the color range (see the arithmetic below). All this means that if you don't do HDR (or pre-expose inside shaders), then you need to use at least 8888_srgb as your main color target. If you want to expose at the end of the pipeline, then you also need some extra range and precision, and need to use at least 11_11_10_float as your main color target.
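
    A quick back-of-the-envelope check of that precision claim (the numbers follow directly from the sRGB curve):

    ```cpp
    #include <cstdio>

    int main()
    {
        // Darkest nonzero 8-bit sRGB code, decoded to linear (linear segment
        // of the sRGB curve): code 1 -> (1/255) / 12.92.
        float darkestSrgb   = (1.0f / 255.0f) / 12.92f; // ~0.0003
        // Darkest nonzero 8-bit *linear* code: 1/255.
        float darkestLinear = 1.0f / 255.0f;            // ~0.0039
        std::printf("srgb code 1   -> %f linear\n", darkestSrgb);
        std::printf("linear code 1 -> %f linear\n", darkestLinear);
        // An 8-bit linear target can't represent anything between 0 and
        // ~0.0039, i.e. it is ~13x coarser than 8-bit sRGB in the darks -
        // hence the need for 10+ bits when storing linear values.
    }
    ```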
  9. knarkowicz

    Optimizing POM shader texture fetches.

    There are two good ways:
    1. Brute force - read 4 samples in a loop (4 samples per loop iteration is a sweet spot on most GPUs). You may want to finish your iterations with a linear interpolation between the two last samples for better quality.
    2. CSM (Cone Step Mapping) - many fewer samples than brute force, but every sample is slower, as you can do only 1 tfetch per loop iteration and you are fetching "fatter" texels. It also requires a special precomputed texture with height and cone angle, which may be an issue for your asset pipeline.
    In any case, first you should calculate the tex LOD using the GLSL function or by emulating it inside the shader (faster, but requires passing the texture size to the shader). Then derive the number of steps from the tex LOD. Finally, inside the loop, just use the previously calculated tex LOD level. A brute-force sketch follows below.
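
    A sketch of the brute-force march, written as scalar C++ for clarity (on the GPU this is a per-pixel shader loop; a real implementation would unroll ~4 height fetches per iteration, and SampleHeight stands in for a texture fetch at the precomputed LOD). All names and the step-count heuristic are illustrative:

    ```cpp
    #include <algorithm>
    #include <functional>

    using HeightFn = std::function<float(float u, float v)>; // depth in [0, 1]

    // Returns the intersection depth along the ray in [0, 1].
    float ParallaxIntersect(const HeightFn& SampleHeight,
                            float u, float v,   // start UV
                            float du, float dv, // UV offset over the full depth range
                            float texLod)
    {
        // Fewer steps at higher LODs, where the detail is invisible anyway.
        int   numSteps = std::max(8, 64 >> std::max(0, (int)texLod));
        float stepSize = 1.0f / (float)numSteps;

        float prevT = 0.0f;
        float prevD = SampleHeight(u, v);
        for (int i = 1; i <= numSteps; ++i) {
            float t = i * stepSize;
            float d = SampleHeight(u + du * t, v + dv * t);
            if (t >= d) {
                // Crossed the surface: interpolate between the last two
                // samples instead of snapping to the step.
                float before = prevD - prevT; // ray still above the surface
                float after  = t - d;         // ray below the surface
                float w = (before + after > 0.0f) ? before / (before + after) : 0.0f;
                return prevT + stepSize * w;
            }
            prevT = t;
            prevD = d;
        }
        return 1.0f; // no intersection within the depth range
    }
    ```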
  10. knarkowicz

    S3TC and gamma correction

    HW interpolation doesn't work directly on BC data - first the entire BC (S3) 4x4 block is decoded and stored in the texture cache. BC decoding is obviously based on the BC specs, so during BC decoding, interpolation is always done in sRGB space. Next, filtering is done in linear space using the values from the texture cache (on post-DX9 HW). A toy emulation of this order of operations is sketched below.
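
    A toy single-channel emulation of that order of operations (illustrative only; real HW decodes whole 4x4 blocks into the texture cache):

    ```cpp
    #include <cmath>

    float SrgbToLinear(float s)
    {
        return (s <= 0.04045f) ? s / 12.92f
                               : std::pow((s + 0.055f) / 1.055f, 2.4f);
    }

    // Step 1: build a BC1-style 4-entry palette from the two stored endpoints.
    // The endpoints are sRGB values and are interpolated as stored, i.e. the
    // decode-time interpolation happens in sRGB space.
    void BuildPalette(float e0, float e1, float palette[4])
    {
        palette[0] = e0;
        palette[1] = e1;
        palette[2] = (2.0f * e0 + e1) / 3.0f;
        palette[3] = (e0 + 2.0f * e1) / 3.0f;
    }

    // Step 2: bilinear filtering happens after decode, on linearized texels.
    float FilterBilinear(const float texel[4], float fx, float fy)
    {
        float t00 = SrgbToLinear(texel[0]), t10 = SrgbToLinear(texel[1]);
        float t01 = SrgbToLinear(texel[2]), t11 = SrgbToLinear(texel[3]);
        float top = t00 + (t10 - t00) * fx;
        float bot = t01 + (t11 - t01) * fx;
        return top + (bot - top) * fy;
    }
    ```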
  11. knarkowicz

    Tone map theory questions.

    Exposure is just a simple color multiply before the tonemapping. Auto-exposure means automatically calculating this exposure value based on scene average luminance, a histogram, or something else, but it still doesn't change the tonemapping. It works the same way as for a real-world camera - exposure is the manual/auto "brightness setting" and tonemapping is the analog film (see the sketch below). Shameless plug: some time ago I wrote a lengthy post about auto-exposure in games: https://knarkowicz.wordpress.com/2016/01/09/automatic-exposure/
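
    In code form the split looks roughly like this (Reinhard is used only as a stand-in curve; the point is that exposure is a multiply applied before whatever fixed tonemap you use):

    ```cpp
    // Fixed tonemapping curve - the "analog film". Auto-exposure never touches this.
    float Tonemap(float c) { return c / (1.0f + c); } // Reinhard, per channel

    // Exposure is just a multiplier; auto-exposure only changes how
    // `exposure` is computed (avg luminance, histogram, ...), not the curve.
    float FinalColor(float sceneColor, float exposure)
    {
        return Tonemap(sceneColor * exposure);
    }
    ```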
  12. IMHO in this case (mip map generation) the difference between ^2.2 and proper sRGB is negligible, so you shouldn't worry about it. As for NVTT and mip map generation - just do a simple test. Provide some distinctive data in the mips (e.g. one mip level red, the second green, etc.) and check whether NVTT outputs the same texture or generates new mips from the top level.
  13. knarkowicz

    Lightmapping and UVs

    I would recommend generating a second UV set just for lightmaps. This way you can fix light bleeding (cut UV charts at hard edges) and light seams (stitch or use "seamless" UV mapping), control density, tightly pack the generated charts, etc. If you decide to use an external renderer, keep in mind that it needs to support all your custom materials. This may be hard to do with something like 3ds Max. Additionally, it may be hard to extend it to use smarter lightmaps with normal map support (like SH or dir+ambient). Personally, I would go with UVAtlas (+ a brute-force UV chart packer and an object chart bin packer) for UVs, and either a custom GPU lightmapper or a custom CPU lightmapper using Embree for rendering. BTW, some time ago I wrote a lengthy post about my lightmapping implementation (UV gen, packing, rendering) which may interest you: https://knarkowicz.wordpress.com/2014/07/20/lightmapping-in-anomaly-2-mobile/
  14. knarkowicz

    Tonemapping questions

    1. Log/exp are used in order to calculate the geometric mean. Imagine a very dark scene with a single extremely bright pixel: without the log you would get a very high average luminance (see the sketch below). 2. It looks like your skybox simply isn't bright enough compared to the rest of the scene.
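
    A sketch of the log-average (geometric mean) in question; eps is an illustrative guard against log(0):

    ```cpp
    #include <cmath>
    #include <vector>

    float GeometricMeanLuminance(const std::vector<float>& luminance)
    {
        const float eps = 1e-4f; // avoid log(0) on black pixels
        double sumLog = 0.0;
        for (float l : luminance)
            sumLog += std::log(eps + (double)l);
        return (float)std::exp(sumLog / (double)luminance.size());
    }
    // E.g. 999 pixels at 0.01 plus one pixel at 1000: the arithmetic mean is
    // ~1.01, while the geometric mean stays at ~0.01 - the bright outlier
    // barely moves it.
    ```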
  15. With a single pass you need to process every cubemap for every pixel (unless you use a compute shader with tiles). With deferred you process only the required pixels per cubemap, but with some overhead. It can be faster or slower depending on the number of cubemaps; a rough cost model is below.
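
    A back-of-the-envelope cost model for that tradeoff (all names and counts are illustrative):

    ```cpp
    #include <vector>

    struct CubemapVolume { long long coveredPixels; };

    // Single pass: every pixel pays for every cubemap.
    long long SinglePassCost(long long screenPixels, int numCubemaps)
    {
        return screenPixels * numCubemaps;
    }

    // Deferred: each cubemap touches only the pixels it covers, plus some
    // fixed per-volume overhead (draw setup, blending, etc.).
    long long DeferredCost(const std::vector<CubemapVolume>& volumes,
                           long long perVolumeOverhead)
    {
        long long cost = 0;
        for (const auto& v : volumes)
            cost += v.coveredPixels + perVolumeOverhead;
        return cost;
    }
    ```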