LandonJerre

Member
  • Content count

    31
  • Joined

  • Last visited

Community Reputation

1005 Excellent

About LandonJerre

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming

Social

  • Github
    LandonJerre

  1. LandonJerre

    Material and Shader Systems

    The trick is that you can create at runtime those "structs" that are used to update a constant buffer. If you think about it, the hard-coded structs used for this are really simple things: aligned pieces of contiguous memory, where each field is in reality an offset from the beginning address of that piece of memory. Knowing this, and knowing how the constant buffer layout of the shader looks, plus the field names, you can do the whole thing at runtime. Basically you allocate enough memory to hold the CB, then based on the order of the fields and their types, you calculate the offset for each one and store it in a list together with its name. When you want to change a value in the buffer, you look up its name, add the offset to the beginning address of the memory chunk, and then write the value into the buffer at that position. You then "upload" this buffer to the GPU CB, and that's about it. What you have to be careful about is the alignment requirements; if you use a tool to generate your shaders and their CBs, then it has to create a CB layout that respects the packing rules for CBs.
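    A minimal C++ sketch of the idea (the class and its interface are made up for illustration; in practice the offsets would come from shader reflection or from your shader generator):

        #include <cstring>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // A constant buffer described at runtime: a blob of memory
        // plus a name -> offset table instead of a hard-coded struct.
        class ConstantBufferData
        {
        public:
            explicit ConstantBufferData(size_t size) : m_data(size) {}

            // The offsets must already respect the HLSL packing rules
            // (fields can't straddle 16 byte register boundaries).
            void AddField(const std::string& name, size_t offset)
            {
                m_offsets[name] = offset;
            }

            // Look up the field's offset and write the value at
            // beginning address + offset.
            template <typename T>
            void Set(const std::string& name, const T& value)
            {
                std::memcpy(m_data.data() + m_offsets.at(name), &value, sizeof(T));
            }

            const void* Data() const { return m_data.data(); }
            size_t Size() const { return m_data.size(); }

        private:
            std::vector<unsigned char> m_data;
            std::unordered_map<std::string, size_t> m_offsets;
        };

    Usage for a CB containing a float4x4 followed by a float4 would then be ConstantBufferData cb(80); cb.AddField("world", 0); cb.AddField("tint", 64); cb.Set("tint", myTint); and finally memcpy-ing cb.Data() into the mapped GPU buffer.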
  2. Do you use some kind of prefiltered mip chain to look up the color info during resolve, or just the backbuffer? (The slides mention prefiltering to reduce noise; it should help with rougher surfaces, and places where the intersection is further away.) Another thing that would be interesting to try out is to gradually drop ray reuse in cases where the reflection is close to mirror-like. The idea is that in these cases the original ray already contains all the info about the reflection that is needed, so using the neighbouring pixels' data wouldn't contribute to the result in any useful way. (If we were doing some kind of cone tracing to calculate reflections, in these cases the resulting cone would be "ray-like", because the low roughness would result in a low cone angle, and the nearby intersection would make the cone short.) The idea is similar to reflection probe prefiltering, where the top mip level's prefiltering pass can be skipped, because it represents mirror-like reflections. I don't know if this would work/help in this case, but in my mind it makes perfect sense; a rough sketch of what I mean is below. This github repo could help too: https://github.com/cCharkes/StochasticScreenSpaceReflection It's a Unity implementation of the technique; you can find result shots on the Unity forum somewhere too. (I'm planning to implement SSSR as well, this repo is one of the sources I use to plan out my implementation.)
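     A rough C++ sketch of that falloff (entirely hypothetical, the thresholds and the blend would need experimentation):

         #include <algorithm>

         // Weight for neighbour ray reuse, in [0, 1]. Near mirror-like
         // surfaces with nearby intersections fall back towards using
         // only the pixel's own ray; everything else keeps full reuse.
         float ComputeReuseWeight(float roughness, float normalizedHitDistance)
         {
             // Hypothetical thresholds for a "ray-like" reflection cone.
             const float mirrorRoughness = 0.05f; // low roughness -> low cone angle
             const float nearHit = 0.1f;          // close hit -> short cone

             float roughnessFactor = std::clamp(roughness / mirrorRoughness, 0.0f, 1.0f);
             float distanceFactor = std::clamp(normalizedHitDistance / nearHit, 0.0f, 1.0f);

             // Reuse only fades out when the reflection is both sharp and close.
             return std::max(roughnessFactor, distanceFactor);
         }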
  3. LandonJerre

    C++ IDE for Linux

    Try this link. As far as I know the whole Qt package is still free under an LGPL license, they just got exceptionally good at hiding it.
  4. LandonJerre

    PSO Management in practice

    This is kind of what I do too, although I still have separate blend/raster/depth state objects; you have to build a PSO out of them to use them in a draw command. In D3D11 they actually hold the native state objects, and the PSO basically just holds references to them. This would turn around in the planned D3D12 implementation, where the separate state objects would only contain the descriptors, and the PSO would hold the native object. It's kind of a mix of both worlds.
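    As a rough sketch (the type names are made up, not from a real codebase), the D3D11 side looks something like this:

        #include <memory>

        // Hypothetical wrappers: on D3D11 these own the native
        // ID3D11BlendState / ID3D11RasterizerState / ID3D11DepthStencilState;
        // a D3D12 backend would keep only the descriptor structs in them.
        class BlendState;
        class RasterizerState;
        class DepthStencilState;
        class ShaderProgram;

        // On D3D11 the PSO just references the separate state objects;
        // on D3D12 it would additionally own the native ID3D12PipelineState
        // baked from their descriptors.
        struct PipelineStateObject
        {
            std::shared_ptr<BlendState>        blend;
            std::shared_ptr<RasterizerState>   rasterizer;
            std::shared_ptr<DepthStencilState> depthStencil;
            std::shared_ptr<ShaderProgram>     shaders;
        };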
  5. LandonJerre

    Tone Mapping

    RGBA16_FLOAT format for HDR should be more than enough. You should probably be running your post processing on non-multisampled textures. You should resolve or even downsample your g-buffers before post processing for faster access in the shaders. The output render targets should definitely be non-multisampled for post processing. Small addition: if you use physically based sky/sun values, you can easily end up with values in your scene that are bigger than what a half can store. In that case you either need to do some pre-exposure at the end of the lighting/sky passes (the Frostbite PBR course notes mention how they do this), or use 32-bit RTs. (I do the latter at the moment; pre-exposing would be the preferable way to solve this, but I haven't yet had the patience to rewire my frame graph to support it.)
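    The pre-exposure idea in a nutshell, as a minimal sketch (the luminance and exposure numbers are just illustrative):

        #include <cstdio>

        int main()
        {
            // Physically based sun/sky values can reach ~1e5 cd/m^2 or more,
            // while the largest finite half-float value is only 65504.
            const float halfMax = 65504.0f;
            float sunLuminance = 100000.0f; // would become inf in an RGBA16F target

            // Pre-exposure: apply the exposure already at the end of the
            // lighting/sky passes, so stored values stay inside the half range.
            float exposure = 1.0f / 20000.0f;       // hypothetical auto-exposure output
            float stored = sunLuminance * exposure; // 5.0, fits comfortably

            // The tonemapper then consumes the pre-exposed value directly
            // (or divides the exposure back out if it needs absolute luminance).
            std::printf("raw %.1f (half max %.1f), pre-exposed %.3f\n",
                        sunLuminance, halfMax, stored);
        }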
  6. LandonJerre

    Exponential VSM problem

    It is expected behaviour: with VSM (and any other related techniques) you have to render both the occluders and the receivers into the shadow map. If you think about it, when you blur the VSM without the receivers in it (or even if you just sample that map with proper texture filtering), you blur between the occluder depths and the maximum z value, meaning that the blurred values will strongly tend towards max z. Tending towards max z also means that the blurred values will end up "behind" the receiver pretty quickly, and that messes up the whole thing. (This explanation isn't exactly mathematically precise, but hopefully it's enough to illustrate the problem.)
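    A quick numeric sketch with the standard Chebyshev bound shows the effect (the depth values are made up):

        #include <algorithm>
        #include <cstdio>

        // Standard VSM visibility test from the two stored moments.
        float ChebyshevUpperBound(float m1, float m2, float receiverDepth)
        {
            if (receiverDepth <= m1)
                return 1.0f; // treated as fully lit
            float variance = std::max(m2 - m1 * m1, 0.0001f);
            float d = receiverDepth - m1;
            return variance / (variance + d * d);
        }

        int main()
        {
            // Occluder at depth 0.3, receiver at 0.5, far plane at 1.0.
            // Blurring an occluder-only map mixes (0.3, 0.09) with (1.0, 1.0):
            float t = 0.5f; // 50/50 blur, for illustration
            float m1 = (1.0f - t) * 0.30f + t * 1.0f; // 0.65, already "behind" the receiver
            float m2 = (1.0f - t) * 0.09f + t * 1.0f; // 0.545

            // receiverDepth <= m1, so the receiver passes as fully lit
            // even though it sits behind the occluder.
            std::printf("visibility = %f\n", ChebyshevUpperBound(m1, m2, 0.5f));
        }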
  7. LandonJerre

    Diffuse lighting for water shader

    I could be wrong, but from the images you linked it looks like you're using a single color for the entire sky. Before hacking the water lighting calculations, I would try replacing it with something more "interesting", like a proper texture of a clear sky. It should help at least somewhat, because with a homogeneous sky the direction of the reflection vector doesn't really matter, you fetch the same color regardless. With a proper sky texture that won't be the case, because different parts of the sky are different shades of blue.
  8. AFAIK the PS4 Pro has some special hardware extensions to make checkerboarding easier, but the R6 Siege version doesn't depend on any special capabilities. http://www.gdcvault.com/play/1022990/Rendering-Rainbow-Six-Siege The slides say that the resolve pass adds about 1.4ms, but the whole thing still saves around 8-10ms, which is kind of awesome. (Ever since I first saw the slides describing how they do this, I've been toying with the idea of implementing it in my hobby renderer, but the resolve shader's flow graph, and the fact that they admit their implementation was done by trial and error, scare the hell out of me.)
  9. LandonJerre

    setInterval

    The truth is somewhere in between the two. :) setInterval AFAIK doesn't care about how long the passed callback runs, but it won't fire exactly on the given delays either. With the given example, the browser will queue up the passed callback for execution every 150ms, but it will only execute when the execution "thread" doesn't have anything else important to run. (This is why the age-old setTimeout-with-zero-delay hack works: a 0 delay doesn't mean the callback executes immediately, it just means it runs as soon as possible when nothing else is running.) Sidenote: it is perfectly possible to do the second option you described with recursive setTimeouts, e.g.:

        (function func() {
            setTimeout(function () {
                // code
                func(); // schedule the next run only after this one has finished
            }, delay);
        })();
  10. I use RenderDoc (https://renderdoc.org/) for graphics debugging too; it does everything you described and much more. Debugging really large shaders can be a bit of a pain, but other than that I haven't had any significant problems with it.
  11. Try changing the line "i /= base" to something like "i = math.floor(i / base)". ( https://en.wikipedia.org/wiki/Halton_sequence if you look at the pseudocode, it has the floor operation too. )
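     For reference, a complete version in C++ (following the Wikipedia pseudocode; integer division makes the floor implicit):

         #include <cstdio>

         // Returns the i-th element (1-based) of the Halton sequence for the given base.
         double Halton(int i, int base)
         {
             double f = 1.0;
             double r = 0.0;
             while (i > 0)
             {
                 f /= base;
                 r += f * (i % base);
                 i /= base; // integer division == floor(i / base)
             }
             return r;
         }

         int main()
         {
             // Base-2 Halton: 0.5, 0.25, 0.75, 0.125, ...
             for (int i = 1; i <= 4; ++i)
                 std::printf("%f\n", Halton(i, 2));
         }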
  12. LandonJerre

    Soft Particles Vanishing in the Distance

    If your depth buffer isn't linear, then the depth distance between things at a unit distance from each other in the world becomes shorter the further away you get from the camera. (https://developer.nvidia.com/content/depth-precision-visualized the article isn't particularly about this, but the figures illustrate well what happens here.) So effectively your particles get "closer" to other stuff in depth-buffer terms, which triggers the softening the further away from the camera they are. The fastest fix in this case would be linearizing the two depths before you compare them.
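    The linearization itself is a one-liner, assuming a standard D3D-style projection (nearZ/farZ are the camera clip planes):

        // Converts a [0, 1] depth buffer value back to linear view-space depth,
        // assuming a standard D3D-style perspective projection.
        float LinearizeDepth(float d, float nearZ, float farZ)
        {
            return nearZ * farZ / (farZ - d * (farZ - nearZ));
        }

        // In the soft particle shader, compare the linearized depths instead:
        //   float delta = LinearizeDepth(sceneDepth, n, f) - LinearizeDepth(particleDepth, n, f);
        //   float fade  = saturate(delta / softDistance);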
  13. LandonJerre

    Resolution scale, good thing ?

    "They create the render target on the fly if none is available for the asked size, using a render target pool? Is it ok to downscale/upscale using the hardware filtering?" With dynamic scaling you have a minimum and a maximum resolution multiplier, so it's easier to just create a render target that fits the size of the maximum scale, and then render with the viewport set to the actual scale used at the moment (see the sketch below). I used this article (https://software.intel.com/en-us/articles/dynamic-resolution-rendering-article) as a reference to implement this kind of stuff in my framework; it discusses filtering options too.
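    In D3D11 terms the per-frame part is just a viewport change, something like this sketch (the scale value comes from whatever heuristic drives the dynamic resolution):

        #include <d3d11.h>

        // Render into a fixed-size target allocated for the maximum scale,
        // covering only the currently active scaled area with the viewport.
        void SetScaledViewport(ID3D11DeviceContext* context,
                               float maxWidth, float maxHeight, float scale)
        {
            D3D11_VIEWPORT vp = {};
            vp.TopLeftX = 0.0f;
            vp.TopLeftY = 0.0f;
            vp.Width    = maxWidth * scale;  // e.g. 1920 * 0.8
            vp.Height   = maxHeight * scale;
            vp.MinDepth = 0.0f;
            vp.MaxDepth = 1.0f;
            context->RSSetViewports(1, &vp);
        }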
  14. LandonJerre

    Resolution scale, good thing ?

    Well, from an end user perspective it's either never used, or it's a lifesaver. For example, without a resolution scaling option I would've had a hard time playing Mass Effect: Andromeda; to reach a framerate between 30-60 FPS I needed to put everything on low and set the scale to x0.8. (My i7 920 + GTX 670 rig isn't the beast it used to be. :) ) If you think about it, back in the old days we set lower resolutions to get some extra needed perf; resolution scaling achieves about the same effect, but provides finer control over the rendering resolution, and does it while still outputting a backbuffer at the native resolution of the screen. (Not to mention that no matter what the scale value is, you can still render your UI at native resolution, so it doesn't become a blurry unreadable mess.)
  15. LandonJerre

    Is there a inline assembler overhead?

    MSVC only lets you use inline asm for x86; it isn't supported for ARM or x64 targets at all. (https://msdn.microsoft.com/en-us/library/4ks26t93.aspx) So even if you find a case where you absolutely need inline asm, MSVC probably won't cooperate with you.