MJP

Member Since 29 Mar 2007

#5311232 Set vertex position in Domain Shader with Height Map texture

Posted by on 17 September 2016 - 05:01 PM

How the GPU interprets the texture data depends on the DXGI format used for the shader resource view. Specifically, the suffix at the end of the format (UNORM, FLOAT, etc.). You can see a full list and a description for each towards the end of this page (scroll down to "Format Modifiers"). In your case you were likely creating your texture with a UNORM format, which means that the GPU will interpret the 0-255 integer data as a 0.0-1.0 floating point value. 
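If it helps, here's a minimal sketch of what sampling that kind of height map in the domain shader might look like. The HeightMap, LinearSampler, and HeightScale names are just placeholders, and I'm assuming a single-channel UNORM format like DXGI_FORMAT_R8_UNORM:

Texture2D<float> HeightMap : register(t0);
SamplerState LinearSampler : register(s0);

cbuffer TerrainConstants : register(b0)
{
    float HeightScale;   // world-space height that corresponds to a sampled value of 1.0
};

float SampleHeight(float2 uv)
{
    // The hardware converts the stored 0-255 byte to 0.0-1.0 because of the UNORM
    // suffix, so all that's left is rescaling into world units. SampleLevel is used
    // because implicit-gradient sampling isn't available in a domain shader.
    return HeightMap.SampleLevel(LinearSampler, uv, 0.0f) * HeightScale;
}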




#5311139 Set vertex position in Domain Shader with Height Map texture

Posted by on 16 September 2016 - 05:30 PM

You definitely want to do this in the domain shader, not the pixel shader. Have you tried using a debugging tool like RenderDoc to make sure that you've correctly bound the height map texture to the domain shader stage? RenderDoc also supports shader debugging (although not for tessellation shaders unfortunately), which you may find useful.
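For reference, binding it to the domain shader stage looks something like this (heightMapSRV and linearSampler are standing in for your own objects):

ID3D11ShaderResourceView* srvs[] = { heightMapSRV };
context->DSSetShaderResources(0, 1, srvs);

ID3D11SamplerState* samplers[] = { linearSampler };
context->DSSetSamplers(0, 1, samplers);

It's an easy thing to miss, since VSSetShaderResources/PSSetShaderResources won't make the texture visible to the domain shader stage.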




#5310839 How to calculate normals for deferred rendering + skinned mesh?

Posted by on 14 September 2016 - 03:20 PM

You need to apply both transforms:

 

float3 normal = input.Normal;
// Apply the blended bone (skinning) transform first...
normal = mul(normal, (float3x3)SkinTransform);
// ...then the inverse-transpose of the world matrix, which handles non-uniform scale
normal = mul(normal, (float3x3)InverseTransposeWorld);
normal = normalize(normal);



#5310382 Color Correction

Posted by on 11 September 2016 - 05:15 PM

Usually the 3D LUT is generated by "stacking" (combining) a set of color transforms, such that applying the LUT gives you the same result as applying the whole set of transforms to an input color. So if you don't want to use a LUT, you could implement the transforms directly in a pixel or compute shader and apply them to the input color. However, a LUT may be quite a bit cheaper, depending on how many transforms you use. Another nice thing about LUTs is that your engine doesn't necessarily need to know or care about how the LUT is generated. So you might have in-engine UI for generating the LUT (like Source Engine), or have your own external tool that generates the LUT, or you might use an off-the-shelf third-party tool like Fusion, Nuke, or OpenColorIO to generate your LUT (like Uncharted 4).
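Applying the baked LUT in a shader is then just a single 3D texture fetch. A rough sketch (ColorLUT, LinearClampSampler, and LUT_SIZE are illustrative names, and the input color is assumed to already be in the range the LUT was baked for):

Texture3D<float3> ColorLUT : register(t0);
SamplerState LinearClampSampler : register(s0);

static const float LUT_SIZE = 32.0f;

float3 ApplyColorLUT(float3 color)
{
    // Remap so that 0.0 and 1.0 land on the centers of the first and last texels,
    // otherwise the edges of the LUT get interpolated incorrectly.
    float3 uvw = color * ((LUT_SIZE - 1.0f) / LUT_SIZE) + (0.5f / LUT_SIZE);
    return ColorLUT.SampleLevel(LinearClampSampler, uvw, 0.0f);
}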




#5310149 Particles in idTech 666

Posted by on 09 September 2016 - 01:16 PM

Sorry, I forgot to follow up on this. I talked to our lead FX artist, and at our studio they often just generate normal maps directly from their color maps using a tool like CrazyBump. There are a few cases where they will run a simulation in Maya, in which case they may have Maya render out both color and normal maps to use for the particle system.




#5310059 Render terrain with Tessellation when my 2D text disappears

Posted by on 08 September 2016 - 08:00 PM

Are you setting the hull and domain shader back to NULL after you're done using tessellation?
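If not, a couple of calls like these after the terrain draw should clear them (context being your immediate device context); otherwise the stale hull and domain shaders are still active when you draw the text:

context->HSSetShader(nullptr, nullptr, 0);
context->DSSetShader(nullptr, nullptr, 0);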




#5309424 Screenspace Normals - Creation, Normal Maps, and Unpacking

Posted by on 04 September 2016 - 02:30 PM

I know this doesn't address the underlying question, but I thought I'd throw it out there just in case (food for thought). Perhaps you could store the azimuth/altitude of the vector as opposed to the Cartesian coordinates, thereby reducing the dimension to 2?

 

Using spherical coordinates has been brought up several times in the past, and it's included in Aras's comparison. It turns out it's not particularly fast to encode/decode, and the straightforward version also doesn't have a great distribution of precision.
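For the curious, here's roughly what the straightforward version looks like, just to show where the cost comes from (atan2 and acos on encode, sincos on decode). The helper names and the exact storage range are made up:

float2 EncodeSphereMap(float3 n)
{
    // Azimuth in [-pi, pi] and inclination in [0, pi], remapped to [0, 1] for storage
    float phi = atan2(n.y, n.x);
    float theta = acos(clamp(n.z, -1.0f, 1.0f));
    return float2(phi / (2.0f * 3.14159265f) + 0.5f, theta / 3.14159265f);
}

float3 DecodeSphereMap(float2 enc)
{
    float phi = (enc.x - 0.5f) * 2.0f * 3.14159265f;
    float theta = enc.y * 3.14159265f;
    float sinPhi, cosPhi, sinTheta, cosTheta;
    sincos(phi, sinPhi, cosPhi);
    sincos(theta, sinTheta, cosTheta);
    return float3(cosPhi * sinTheta, sinPhi * sinTheta, cosTheta);
}

The precision issue is also visible here: equal steps in the encoded values correspond to very unequal steps in the resulting direction near the poles.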




#5309422 Cannot enable 4X MSAA Anti-Aliasing DirectX11

Posted by on 04 September 2016 - 02:23 PM

Pass the D3D11_CREATE_DEVICE_DEBUG flag when creating your D3D11 device: it will cause the runtime to output warning and error messages when you do something wrong with the API. It's a good idea to always use that for your debugging builds, and fix any issues that it brings up.
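Something like this, assuming the typical D3D11CreateDevice path (error checking omitted):

UINT deviceFlags = 0;
#if defined(_DEBUG)
    // Only enable the debug layer for debug builds, since it adds CPU overhead
    deviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, deviceFlags,
                  nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);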

 

If you'd like, you can also tell the runtime to break into the debugger right when an error occurs so that you know exactly which line of code is causing the problem:

 

ID3D11InfoQueue* infoQueue = nullptr;
device->QueryInterface(__uuidof(ID3D11InfoQueue), reinterpret_cast<void**>(&infoQueue));
if(infoQueue)
{
    infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING, TRUE);
    infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
    infoQueue->Release();
    infoQueue = nullptr;
}



#5307929 Blur compute shader

Posted by on 25 August 2016 - 04:58 PM

In D3D11, all out-of-bounds reads and writes are well-defined for buffers and textures. Reading out-of-bounds returns 0, and writing out-of-bounds has no effect. Reading or writing out-of-bounds on thread group local storage is not defined, so be careful not to do that.

 

Note that in D3D12, if you use root SRVs or UAVs there is no bounds checking, so reading or writing out-of-bounds results in undefined behavior.
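Here's a tiny (deliberately simplistic) compute shader sketch just to make the D3D11 rules concrete; the names are made up and it isn't a real blur kernel:

Texture2D<float4> InputTexture : register(t0);
RWTexture2D<float4> OutputTexture : register(u0);

groupshared float4 SharedSamples[64];

[numthreads(64, 1, 1)]
void BlurCS(uint3 dispatchID : SV_DispatchThreadID, uint groupIndex : SV_GroupIndex)
{
    int2 pixelPos = int2(dispatchID.xy);

    // These loads can run past the edges of the image without explicit clamping:
    // out-of-bounds SRV loads return 0, so edge pixels just darken slightly.
    float4 sum = InputTexture.Load(int3(pixelPos - int2(1, 0), 0)) +
                 InputTexture.Load(int3(pixelPos, 0)) +
                 InputTexture.Load(int3(pixelPos + int2(1, 0), 0));

    // Thread group local storage has no such guarantee: groupIndex is always in
    // [0, 63] here, which is what keeps this access legal.
    SharedSamples[groupIndex] = sum / 3.0f;
    GroupMemoryBarrierWithGroupSync();

    // Out-of-bounds UAV writes are silently dropped, so a partially-filled group
    // along the right edge of the image is also safe.
    OutputTexture[pixelPos] = SharedSamples[groupIndex];
}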




#5305641 Theory of PBR lighting model[and maths too]

Posted by on 13 August 2016 - 11:36 AM

A. Applicable only for mirror-like surfaces. For a rough surface, the angle of incidence and the angle of reflection will not be the same, will they?


Right. In traditional microfacet theory, a "rough" surface is assumed to be made up of microscopic mirrors that point in different directions. The rougher the surface, the more likely that a given microfacet will not point in the same direction as the surface normal, which means that the reflections become "blurrier" since light does not reflect uniformly off the surface. 
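For reference, this is the idea that gets formalized in the standard microfacet specular BRDF (written here in the general Cook-Torrance form, nothing engine-specific):

f_{\mathrm{spec}}(\mathbf{l}, \mathbf{v}) = \frac{D(\mathbf{h}) \, G(\mathbf{l}, \mathbf{v}, \mathbf{h}) \, F(\mathbf{v}, \mathbf{h})}{4 \, (\mathbf{n} \cdot \mathbf{l})(\mathbf{n} \cdot \mathbf{v})}

Here h is the half vector between l and v, D is the distribution of microfacet normals (this is where roughness lives), G accounts for microfacets shadowing and masking each other, and F is the Fresnel reflectance.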
 

I sense the whole book is about generating rays at the camera, shooting them towards the scene, letting them propagate, and calculating the final pixel value when they come back to the camera. The idea is that we see things only when light comes into our eyes? So the ray must come into our camera; the camera is our eye. Is this the scheme of the whole book?
 
Then I must say it has near-zero practical value to my field, as I will have to implement it on the GPU. I know roughly how vertex and fragment shaders work. I also know how we can GPGPU things via DirectCompute. But to implement a lighting model I have to understand the model. The Lambertian lighting model is easy, but this PBR is making my life hell.


Yes, that particular book is aimed at offline rendering and always uses ray-tracing techniques, primarily path tracing. In that regard it's probably not the best resource for someone looking to learn real-time rendering, although a lot of the concepts are still applicable (or might be in the future). If you're looking to get started right away with real-time techniques then Real-Time Rendering 3rd edition is probably a better choice, although it's a bit out of date at this point. It covers some of the same ground as the PBR book, but always does it from the perspective of rasterization and real-time performance constraints. If you do decide to switch books I would recommend coming back to the PBR book at some point once you have a better grasp of the basics: I think you'll find that learning the offline techniques can give you a more holistic understanding of the general problems being solved, and can broaden your horizons a bit. Plus I think it's fun to do a bit of offline work every once in a while, where you don't have to worry about cramming all of your lighting into a few milliseconds. :)
 

The book describes the whole system in its weird language, not even C++ --- ramsey (can't remember the name). I do not want the code. I just want the theory explained in a nice manner so that I can implement it on the engine of my choice, the platform of my choice, the graphics library of my choice.


It uses what's called "literate programming". The code is all C++, and the whole working project is all posted on GitHub. The literate programming language just lets them interleave the code with their text, and put little markers into the code that tell you how the code combines into a full program.




#5305640 Theory of PBR lighting model[and maths too]

Posted by on 13 August 2016 - 11:16 AM

One thing to be aware of is that physically based rendering isn't just a single technique that you implement, despite what the marketing teams for engines and rendering packages might have you believe. PBR is more of a guiding principle than anything else: it's basically a methodology where you try to craft rendering techniques whose foundations are based in our knowledge of physics. This is in opposition to a lot of earlier work in computer graphics, where rendering techniques were created by roughly attempting to match the appearance of real-world materials.

 

If you really want to understand how PBR is applied to things like shading models, then there are a few core concepts that you'll want to understand. Most of them come from the field of optics, which deals with how visible light interacts with the world. Probably the most important concepts are reflection and refraction, which are sort of the two low-level building blocks that combine to create a lot of the distinctive visual phenomena that we try to model in graphics. In addition, you'll want to make sure that you understand the concepts of radiance and irradiance, since they come up frequently when talking about shading models. Real-Time Rendering 3rd edition has a good introduction to these topics, and Naty Hoffman (one of the authors of RTR 3rd edition) has covered some of the same material in his introductions for the Physically Based Shading Course at SIGGRAPH.

 

Understanding the above concepts will definitely require some math background, so you may need to brush up on that as well. You'll really want to understand at least basic trigonometry, linear algebra (primarily vector and matrix operations, dot products, cross products, and projections), and some calculus (you'll at least want to understand how a double integral over a sphere or hemisphere works, since those pop up frequently). You might want to try browsing Khan Academy if you feel you're a bit weak on any of these subjects, or perhaps MIT's OpenCourseWare.
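To give a concrete example of the kind of integral I mean, the reflectance equation that most shading math ultimately comes back to is a double integral over the hemisphere above the surface:

L_o(\mathbf{v}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, L_i(\mathbf{l}) \, (\mathbf{n} \cdot \mathbf{l}) \, d\omega_{\mathbf{l}} = \int_{0}^{2\pi} \int_{0}^{\pi/2} f(\mathbf{l}, \mathbf{v}) \, L_i(\mathbf{l}) \, \cos\theta \, \sin\theta \, d\theta \, d\phi

The \sin\theta shows up because the solid angle measure is d\omega = \sin\theta \, d\theta \, d\phi, which is exactly the kind of detail the calculus background helps with.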




#5305575 Article On Texture (Surface) Formats?

Posted by on 12 August 2016 - 09:22 PM

If your normals are in a cone (don't go all the way to the edge), you can also use "partial derivative normal maps", where you store x/z and y/z, and reconstruct in the shader with normalize(float3(x_z, y_z, 1)).
One advantage of this representation is that you get great results by simply adding together several normal maps (e.g. detail mapping) *before* you decode back into a 3D normal. The alternative of decoding each map and adding the normals together (and renormalising) loses a lot of detail / flattens everything.

 

We use Reoriented Normal Mapping for combining normal maps, but derivative normal maps are a nice alternative if you want to save cycles.
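If you do go the derivative route, the blend itself is about as cheap as it gets. A rough sketch, assuming both maps store (x/z, y/z) in their red and green channels (the function and parameter names are made up):

float3 BlendDerivativeNormals(float2 basePD, float2 detailPD)
{
    // Summing the stored derivatives combines the two surfaces; the 3D normal only
    // gets reconstructed once, at the very end.
    float2 combined = basePD + detailPD;
    return normalize(float3(combined, 1.0f));
}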




#5305154 Article On Texture (Surface) Formats?

Posted by on 10 August 2016 - 01:13 PM

This is a pretty good presentation from a few years back about block compression formats in D3D11: http://download.microsoft.com/download/D/8/D/D8D077EF-CB8C-4CCA-BC24-F2CDE3F91FAA/Block_Compression_Smorgasbord_US.zip. Make sure that you read the MSDN docs as well.

 

For file types, DDS is clunky but can store data for any kind of texture or texture format that's supported by D3D. So you can store cubemaps, texture arrays, mipmaps, BC-compressed formats, etc. That makes it a good choice if you want to store your data in a way that's just about ready to be loaded into your engine, but you don't want to use your own file format. Any of the "usual" image formats like PNG/TGA/etc. will usually require more processing before they can be used as textures, whether for compression, mipmap generation, creating a texture array, etc.
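For example, if you go the DDS route and use the DDSTextureLoader helper from DirectXTK/DirectXTex, loading is basically a one-liner (the file name and device pointer here are placeholders):

#include <DDSTextureLoader.h>

ID3D11Resource* texture = nullptr;
ID3D11ShaderResourceView* srv = nullptr;

// Pulls in mip chains, array slices, cube faces, and BC-compressed data exactly
// as stored in the file, with no extra processing at load time.
HRESULT hr = DirectX::CreateDDSTextureFromFile(device, L"environment.dds", &texture, &srv);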

 

Here's a quick rundown of the available block compression formats in D3D11:

 

  • BC1 - low-quality RGB data with 1-bit alpha, 1/8 compression ratio vs. R8G8B8A8 (use this for color maps with no alpha)
  • BC2 - low-quality RGB data with explicit 4-bit alpha, 1/4 compression ratio vs. R8G8B8A8 (almost nobody uses this format)
  • BC3 - low-quality RGB data with interpolated 8-bit alpha, 1/4 compression ratio vs. R8G8B8A8 (use for color maps with an alpha channel)
  • BC4 - just the alpha channel from BC3 but stored in the red channel, 1/8 compression ratio vs. R8G8B8A8, 1/2 compression ratio vs. R8 (use for monochrome maps)
  • BC5 - two BC4 textures stuck together in the red and green channels, 1/4 compression ratio vs. R8G8B8A8, 1/2 compression ratio vs. R8G8 (use for normal maps)
  • BC6H - floating-point RGB data with no alpha, 1/8 compression ratio vs R16G16B16A16 (use for HDR lightmaps, sky textures, environment maps, etc.)
  • BC7 - high-quality RGB data with optional alpha, 1/4 compression ratio vs R8G8B8A8 (use for any color map that needs quality, but compression can be slow)



#5304948 Dx11 Createtexture From Char*

Posted by on 09 August 2016 - 01:31 PM

You can just look at the tail end of the WIC texture loader from DirectXTex for an example.




#5304519 Conceptual Question On Separate Ps/vs Files

Posted by on 07 August 2016 - 01:37 PM

You can use whatever extension you'd like; the compiler doesn't care. Personally I like to use .hlsl for all files containing shader code, but that's just preference. Putting shared structure and constant buffer definitions in a shared header file is definitely a good idea, since it will ensure that if you change a structure, all of the shaders will see that change.
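As a small illustration of what I mean by a shared header (the file name and contents here are just an example):

// SharedTypes.hlsli

cbuffer PerObjectConstants : register(b0)
{
    float4x4 World;
    float4x4 WorldViewProjection;
};

struct VSOutput
{
    float4 Position : SV_Position;
    float3 Normal   : NORMAL;
    float2 UV       : TEXCOORD0;
};

Both the vertex shader file and the pixel shader file can then #include "SharedTypes.hlsli", so a change to the struct or the constant buffer only has to be made in one place.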





