MJP

Member Since 29 Mar 2007

Posts I've Made

In Topic: Blur compute shader

25 August 2016 - 04:58 PM

In D3D11, all out-of-bounds reads and writes are well-defined for buffers and textures: reading out-of-bounds returns 0, and writing out-of-bounds has no effect. Reading or writing out-of-bounds on thread group shared memory is not defined, however, so be careful not to do that.

 

Note that in D3D12, if you use root SRVs or UAVs there is no bounds checking, so reading or writing out-of-bounds will result in undefined behavior.
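
For illustration, here's a minimal sketch of how a blur kernel can lean on those D3D11 guarantees. The resource names, blur radius, and thread group size are placeholder choices for the sketch, not anything from a particular engine:

Texture2D<float4> InputTexture : register(t0);
RWTexture2D<float4> OutputTexture : register(u0);

static const int Radius = 4;

// Horizontal pass of a simple box blur. Taps that fall outside of
// InputTexture read as 0, and threads whose coordinates fall outside of
// OutputTexture have their writes dropped, so no explicit bounds checks
// are needed. (The out-of-bounds taps contribute black, so pixels near
// the edges will darken slightly.)
[numthreads(64, 1, 1)]
void BlurH(uint3 DispatchID : SV_DispatchThreadID)
{
    float4 sum = 0.0f;
    for(int i = -Radius; i <= Radius; ++i)
        sum += InputTexture[int2(DispatchID.xy) + int2(i, 0)];

    OutputTexture[DispatchID.xy] = sum / (Radius * 2 + 1);
}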


In Topic: [SOLVED] HLSL Syntax Clarification

25 August 2016 - 03:56 PM

So by default, a global variable in HLSL is considered to have the uniform storage class, as if you had put the "uniform" keyword before the variable. The uniform modifier works the same as it does in GLSL: it indicates that the value is constant for the lifetime of the shader program, and that the value is specified by the app code. Global variables are also considered "const" by default, so explicitly marking one as const is a bit superfluous. The "const" modifier basically enforces that the variable is read-only, and generates a compiler error if the shader code tries to assign a value to it. In D3D9 your global uniform variables are mapped to constant registers, which you set from your app code using functions like SetVertexShaderConstantF. In D3D10 and higher, global uniform variables must be placed in contiguous buffers called constant buffers.

 

The only other common use case for global variables is for "truly" const values, where the value is always the same for the program and isn't specified by the app code. To do this you need to use the "static" modifier.
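
To make all of that concrete, here's a small sketch (the variable and buffer names are arbitrary):

// Implicitly both "uniform" and "const": the app code provides the value,
// and the shader can only read it. Outside of an explicit cbuffer, the
// compiler places this in the implicit $Globals constant buffer.
float4 TintColor;

// D3D10+ style: uniform globals grouped into an explicit constant buffer.
cbuffer PerObjectConstants : register(b0)
{
    float4x4 WorldViewProjection;
};

// A "truly" const value: baked into the shader, never set by the app code.
static const float Brightness = 2.0f;

float4 PSMain(float4 Position : SV_Position) : SV_Target
{
    // TintColor = 1.0f; // compile error: uniform globals are read-only
    return TintColor * Brightness;
}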


In Topic: Particles in idTech 666

21 August 2016 - 08:22 PM

I don't know about DOOM specifically, but many engines (including ours) will expose different options for what kind of normals you want. That way you can choose between flat normals, "round" normals, normal maps, etc., depending on what kind of particle system you're authoring. I'm not familiar with the processes that our FX artists use for baking out normal maps, but I can ask them if you're curious.


In Topic: Theory of PBR lighting model [and maths too]

13 August 2016 - 11:36 AM

A. Applicable only for mirror-like surfaces? For a rough surface, the angle of incidence and the angle of reflection will not be the same, will they?


Right. In traditional microfacet theory, a "rough" surface is assumed to be made up of microscopic mirrors that point in different directions. The rougher the surface, the more likely that a given microfacet will not point in the same direction as the surface normal, which means that the reflections become "blurrier" since light does not reflect uniformly off the surface. 
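
If you're curious how that idea gets formalized, the standard microfacet specular BRDF (the Cook-Torrance form) looks like this:

f(\mathbf{l}, \mathbf{v}) = \frac{D(\mathbf{h}) \, F(\mathbf{v}, \mathbf{h}) \, G(\mathbf{l}, \mathbf{v}, \mathbf{h})}{4 \, (\mathbf{n} \cdot \mathbf{l}) \, (\mathbf{n} \cdot \mathbf{v})}

Here D is the normal distribution function, which describes how the microfacet normals are spread out around the surface normal (this is where roughness enters the model), F is the Fresnel reflectance, G is the shadowing-masking term, and \mathbf{h} is the half vector between the light and view directions.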
 

I sense the whole book is about generating rays at the camera, shooting them towards the scene, and letting them propagate, then calculating the final pixel value when a ray makes it back to the camera. The idea is that we see things only when light comes into our eyes? So the ray must come into our camera; the camera is our eyes. Is this the scheme of the whole book?
 
Then I must say it has near-zero practical value to my field, as I will have to implement it on the GPU. I know roughly how vertex and fragment shaders work. I also know how we can do GPGPU work via DirectCompute. But to implement a lighting model I have to understand the model. The Lambertian lighting model is easy, but this PBR is making my life hell  :mellow:


Yes, that particular book is aimed at offline rendering and always uses ray-tracing techniques, primarily path tracing. In that regard it's probably not the best resource for someone looking to learn real-time rendering, although a lot of the concepts are still applicable (or might be in the future). If you're looking to get started right away with real-time techniques, then Real-Time Rendering 3rd edition is probably a better choice, although it's a bit out of date at this point. It covers some of the same ground as the PBR book, but always does so from the perspective of rasterization and real-time performance constraints. If you do decide to switch books, I would recommend coming back to the PBR book at some point once you have a better grasp of the basics: I think you'll find that learning the offline techniques can give you a more holistic understanding of the general problems being solved, and can broaden your horizons a bit. Plus I think it's fun to do a bit of offline work every once in a while, where you don't have to worry about cramming all of your lighting into a few milliseconds. :)
 

The book describes the whole system in its weird language, not even C++ --- ramsey (can't remember the name). I do not want the code. I just want the theory explained in a nice manner so that I can implement it on the engine of my choice, platform of my choice, graphics library of my choice.


It uses what's called "literate programming". The code is all C++, and the complete working project is posted on GitHub. The literate programming system just lets the authors interleave the code with their text, and put little markers into the code that tell you how the fragments combine into a full program.


In Topic: Theory of PBR lighting model [and maths too]

13 August 2016 - 11:16 AM

One thing to be aware of is that physically based rendering isn't just a single technique that you implement, despite what the marketing teams for engines and rendering packages might have you believe. PBR is more of a guiding principle than anything else: it's basically a methodology where you try to craft rendering techniques whose foundations are grounded in our knowledge of physics. This is in contrast to a lot of earlier work in computer graphics, where rendering techniques were created by attempting to roughly match the appearance of real-world materials.

 

If you really want to understand how PBR is applied to things like shading models, then there are a few core concepts that you'll want to understand. Most of them come from the field of optics, which deals with how visible light interacts with the world. Probably the most important concepts are reflection and refraction, which are sort of the two low-level building blocks that combine to create a lot of the distinctive visual phenomena that we try to model in graphics. In addition, you'll want to make sure that you understand the concepts of radiance and irradiance, since they come up frequently when talking about shading models. Real-Time Rendering 3rd edition has a good introduction to these topics, and Naty Hoffman (one of the authors of RTR 3rd edition) has covered some of the same material in his introductions for the Physically Based Shading Course at SIGGRAPH.
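
As a quick anchor for those last two terms: radiance (L) is the light arriving from a single direction, while irradiance (E) is the total light arriving at a surface point from the entire hemisphere above it, so the two are related by an integral (these are the standard radiometry definitions, not anything specific to one book):

E = \int_{\Omega} L_i(\omega) \, (\mathbf{n} \cdot \omega) \, d\omega

where \Omega is the hemisphere around the surface normal \mathbf{n}.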

 

Understanding the above concepts will definitely require some math background, so you may need to brush up on that as well. You'll really want to understand at least basic trigonometry, linear algebra (primarily vector and matrix operations, dot products, cross products, and projections), and some calculus (you'll at least want to understand how a double integral over a sphere or hemisphere works, since those pop up frequently). You might want to try browsing Khan Academy if you feel you're a bit weak on any of these subjects, or perhaps MIT's OpenCourseWare.
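
As a concrete example of the kind of integral I mean: integrating the cosine term over the hemisphere in spherical coordinates evaluates to \pi, which is exactly where the 1/\pi normalization factor in the Lambertian diffuse BRDF comes from:

\int_{0}^{2\pi} \int_{0}^{\pi/2} \cos\theta \, \sin\theta \, d\theta \, d\phi = 2\pi \cdot \tfrac{1}{2} = \pi

The extra \sin\theta is the Jacobian for integrating over directions in spherical coordinates.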

