
#5310149 Particles in idTech 666

Posted by on 09 September 2016 - 01:16 PM

Sorry, I forgot to follow up on this. I talked to our lead FX artist, and at our studio they often just generate normal maps directly from their color maps using a tool like CrazyBump. There are a few cases where they will run a simulation in Maya, in which case they may have Maya render out both color and normal maps to use for the particle system.

#5310059 Render terrain with Tessellation when my 2D text disappears

Posted by on 08 September 2016 - 08:00 PM

Are you setting the hull and domain shader back to NULL after you're done using tessellation?
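
If not, a minimal sketch of what that looks like (assuming context is your immediate ID3D11DeviceContext):

// After drawing the tessellated terrain, unbind the hull and domain shaders
// so that subsequent draws (like the 2D text) go back through the normal
// vertex -> pixel shader pipeline.
context->HSSetShader(nullptr, nullptr, 0);
context->DSSetShader(nullptr, nullptr, 0);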

#5309424 Screenspace Normals - Creation, Normal Maps, and Unpacking

Posted by on 04 September 2016 - 02:30 PM

I know this doesn't address the underlying question, but I thought I'd throw it out there just in case (food for thought). Perhaps you could store the azimuth/altitude of the vector as opposed to the Cartesian coordinates, thereby reducing the dimension to 2?


Using spherical coordinates has been brought up several times in the past, and it's included in Aras's comparison. It turns out it's not particularly fast to encode/decode, and the straightforward version also doesn't have a great distribution of precision.
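
For reference, the straightforward version looks something like this (a rough C++ sketch with made-up vector types, just to show where the cost comes from):

#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

const float Pi = 3.14159265f;

// Unit normal -> (azimuth, altitude), both packed into [0, 1]
Vec2 EncodeSpherical(Vec3 n)
{
    return { std::atan2(n.y, n.x) / (2.0f * Pi) + 0.5f,
             std::acos(n.z) / Pi };
}

Vec3 DecodeSpherical(Vec2 e)
{
    float phi = (e.x - 0.5f) * 2.0f * Pi;
    float theta = e.y * Pi;
    float sinTheta = std::sin(theta);
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), std::cos(theta) };
}

The trig functions are what make it relatively expensive on the GPU, and since equal steps in the two angles bunch up near the poles, the precision isn't spread evenly over the sphere.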

#5309422 Cannot enable 4X MSAA Anti-Aliasing DirectX11

Posted by on 04 September 2016 - 02:23 PM

Pass the D3D11_CREATE_DEVICE_DEBUG flag when creating your D3D11 device: it will cause the runtime to output warning and error messages when you do something wrong with the API. It's a good idea to always use that for your debugging builds, and fix any issues that it brings up.


If you'd like, you can also tell the runtime to break into the debugger right when an error occurs so that you know exactly which line of code is causing the problem:


// This will only succeed if the device was created with D3D11_CREATE_DEVICE_DEBUG
ID3D11InfoQueue* infoQueue = nullptr;
if(SUCCEEDED(device->QueryInterface(__uuidof(ID3D11InfoQueue), reinterpret_cast<void**>(&infoQueue))))
{
    infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING, TRUE);
    infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
    infoQueue->Release();
}

#5307929 Blur compute shader

Posted by on 25 August 2016 - 04:58 PM

In D3D11 all out-of-bounds reads and writes are well-defined for buffers and textures: reading out-of-bounds returns 0, and writing out-of-bounds has no effect. Reading or writing out-of-bounds on thread group local storage is not defined, so be careful not to do that.


Note that in D3D12, if you use root SRVs or UAVs there is no bounds checking, and so reading or writing out-of-bounds is undefined behavior.

#5305641 Theory of PBR lighting model[and maths too]

Posted by on 13 August 2016 - 11:36 AM

A. Applicable only for mirror-like surfaces. For a rough surface, the angle of incidence and the angle of reflection will not be the same, will they?

Right. In traditional microfacet theory, a "rough" surface is assumed to be made up of microscopic mirrors that point in different directions. The rougher the surface, the more likely that a given microfacet will not point in the same direction as the surface normal, which means that the reflections become "blurrier" since light does not reflect uniformly off the surface. 
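
If it helps to see the idea written down, the standard microfacet specular BRDF (the general Cook-Torrance form, not anything engine-specific) is

$$ f(\mathbf{l}, \mathbf{v}) = \frac{D(\mathbf{h}) \, F(\mathbf{v}, \mathbf{h}) \, G(\mathbf{l}, \mathbf{v}, \mathbf{h})}{4 \, (\mathbf{n} \cdot \mathbf{l})(\mathbf{n} \cdot \mathbf{v})}, \qquad \mathbf{h} = \frac{\mathbf{l} + \mathbf{v}}{\lVert \mathbf{l} + \mathbf{v} \rVert} $$

where D is the distribution of microfacet normals (this is where roughness lives), F is the Fresnel reflectance, and G accounts for microfacets shadowing and masking each other. A rougher surface just means D is spread out over a wider range of directions, which is exactly the "blurrier" reflection effect described above.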

I sense the whole book is about generating rays at the camera, shooting them towards the scene, letting them propagate, and calculating the final pixel value when a ray makes it back to the camera. The idea is, we see things only when the light comes into our eyes? So the ray must come into our camera, the camera being our eyes. Is this the scheme of the whole book?
Then I must say it has near zero practical value to my field, as I will have to implement it on the GPU. I know roughly how the vertex/fragment shader pipeline works. I also know how we can GPGPU things via DirectCompute. But to implement a lighting model I have to understand the model. The Lambertian lighting model is easy, but this PBR is making my life hell  :mellow:

Yes, that particular book is aimed at offline rendering and always uses ray-tracing techniques, primarily path tracing. In that regard it's probably not the best resource for someone looking to learn real-time rendering, although a lot of the concepts are still applicable (or might be in the future). If you're looking to get started right away with real-time techniques then Real-Time Rendering 3rd edition is probably a better choice, although it's a bit out of date at this point. It covers some of the same ground as the PBR book, but always does it from the perspective of rasterization and real-time performance constraints. If you do decide to switch books I would recommend coming back to the PBR book at some point once you have a better grasp of the basics: I think you'll find that learning the offline techniques can give you a more holistic understanding of the general problems being solved, and can broaden your horizons a bit. Plus I think it's fun to do a bit of offline work every once in a while, where you don't have to worry about cramming all of your lighting into a few milliseconds. :)

The book describes the whole system in its weird language, not even C++ --- ramsey (can't remember the name). I do not want the code. I just want the theory explained in a nice manner so that I can implement it on the engine of my choice, platform of my choice, graphics library of my choice.

It uses what's called "literate programming". The code is all C++, and the whole working project is all posted on GitHub. The literate programming language just lets them interleave the code with their text, and put little markers into the code that tell you how the code combines into a full program.

#5305640 Theory of PBR lighting model[and maths too]

Posted by on 13 August 2016 - 11:16 AM

One thing to be aware of is that physically based rendering isn't just a single technique that you implement, despite what the marketing teams for engines and rendering packages might have you believe. PBR is more of a guiding principle than anything else: it's basically a methodology where you try to craft rendering techniques whose foundations are based in our knowledge of physics. This is in opposition to a lot of earlier work in computer graphics, where rendering techniques were created by roughly attempting to match the appearance of real-world materials.


If you really want to understand how PBR is applied to things like shading models, then there's a few core concepts that you'll want to understand. Most of them come from the field of optics, which deals with how visible light interacts with the world. Probably the most important concepts are reflection and refraction, which are sort-of the two low-level building blocks that combine to create a lot of the distinctive visual phenomena that we try to model in graphics. In addition, you'll want to make sure that you understand the concepts of radiance and irradiance since they come up frequently when talking about shading models. Real-Time Rendering 3rd edition has a good introduction to these topics, and Naty Hoffman (one of the authors of RTR 3rd edition) has covered some of the same material in his introductions for the Physically Based Shading Course at SIGGRAPH.


Understanding the above concepts will definitely require some math background, so you may need to brush up on that as well. You'll really want to understand at least basic trigonometry, linear algebra (primarily vector and matrix operations, dot products, cross products, and projections), and some calculus (you'll at least want to understand how a double integral over a sphere or hemisphere works, since those pop up frequently). You might want to try browsing Khan Academy if you feel you're a bit weak on any of these subjects, or perhaps MIT's OpenCourseWare.
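
To give a concrete example of the kind of integral I mean: the reflectance equation that shading models plug into is an integral over the hemisphere around the surface normal, which becomes a double integral once you write it out in spherical coordinates:

$$ L_o(\mathbf{v}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, L_i(\mathbf{l}) \, (\mathbf{n} \cdot \mathbf{l}) \, d\omega_l = \int_{0}^{2\pi} \int_{0}^{\pi/2} f(\mathbf{l}, \mathbf{v}) \, L_i(\mathbf{l}) \, \cos\theta \, \sin\theta \, d\theta \, d\phi $$

Here L_i is the incoming radiance from direction l, f is the BRDF, and L_o is the outgoing radiance towards the viewer. If you can read that comfortably, you're in good shape for either book.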

#5305575 Article On Texture (Surface) Formats?

Posted by on 12 August 2016 - 09:22 PM

If your normals are in a cone (don't go all the way to the edge), you can also use "partial derivative normal maps", where you store x/z and y/z, and reconstruct in the shader with normalize(float3(x_z, y_z, 1)).
One advantage of this representation is that you get great results by simply adding together several normal maps (e.g. detail mapping) *before* you decode back into a 3D normal. The alternative of adding together several decoded normal maps (and renormalising) loses a lot of detail / flattens everything.


We use Reoriented Normal Mapping for combining normal maps, but derivative normal maps are a nice alternative if you want to save cycles.
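
For anyone following along, the derivative-map blend is about as cheap as it gets (a rough C++ sketch with made-up vector types; the shader version is the same math):

#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Each input is a decoded partial-derivative normal map sample (x/z, y/z).
// Summing the slopes and rebuilding the normal is the whole blend.
Vec3 BlendDerivativeNormals(Vec2 base, Vec2 detail)
{
    return Normalize({ base.x + detail.x, base.y + detail.y, 1.0f });
}

Reoriented Normal Mapping gives a higher-quality blend (it rotates the detail normal into the basis of the base normal), but it costs a few more ALU operations than this.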

#5305154 Article On Texture (Surface) Formats?

Posted by on 10 August 2016 - 01:13 PM

This is a pretty good presentation from a few years back about block compression formats in D3D11: http://download.microsoft.com/download/D/8/D/D8D077EF-CB8C-4CCA-BC24-F2CDE3F91FAA/Block_Compression_Smorgasbord_US.zip. Make sure that you read the MSDN docs as well.


For file types, DDS is clunky but can store data for any kind of texture or texture format that's supported by D3D. So you can store cubemaps, texture arrays, mipmaps, BC-compressed formats, etc. It's good if you want to store your data in a way that's just about ready to be loaded into your engine, but you don't want to invent your own file format. Any of the "usual" image formats like PNG/TGA/etc. will usually require more processing before they can be used as textures, whether that's compression, mipmap generation, creating a texture array, etc.


Here's a quick rundown of the available block compression formats in D3D11:


  • BC1 - low-quality RGB data with 1-bit alpha, 1/8 compression ratio vs. R8G8B8A8 (use this for color maps with no alpha)
  • BC2 - low-quality RGB data with explicit 4-bit alpha, 1/4 compression ratio vs. R8G8B8A8 (almost nobody uses this format)
  • BC3 - low-quality RGB data with interpolated 8-bit alpha, 1/4 compression ratio vs. R8G8B8A8 (use for color maps with an alpha channel)
  • BC4 - just the alpha channel from BC3 but stored in the red channel, 1/8 compression ratio vs. R8G8B8A8, 1/2 compression ratio vs. R8 (use for monochrome maps)
  • BC5 - two BC4 textures stuck together in the red and green channels, 1/4 compression ratio vs. R8G8B8A8, 1/2 compression ratio vs. R8G8 (use for normal maps)
  • BC6H - floating-point RGB data with no alpha, 1/8 compression ratio vs R16G16B16A16 (use for HDR lightmaps, sky textures, environment maps, etc.)
  • BC7 - high-quality RGB data with optional alpha, 1/4 compression ratio vs R8G8B8A8 (use for any color map that needs quality, but compression can be slow)
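
If you want to sanity-check those ratios, the math is simple: BC1 and BC4 use 8 bytes per 4x4 block of pixels, and all of the other BC formats use 16 bytes per block. A quick sketch of computing the size of a single BC-compressed mip level:

#include <cstdint>

// bytesPerBlock is 8 for BC1/BC4, and 16 for BC2/BC3/BC5/BC6H/BC7
uint32_t BCMipSize(uint32_t width, uint32_t height, uint32_t bytesPerBlock)
{
    uint32_t numBlocksWide = (width + 3) / 4;    // round up to whole 4x4 blocks
    uint32_t numBlocksHigh = (height + 3) / 4;
    return numBlocksWide * numBlocksHigh * bytesPerBlock;
}

So a 1024x1024 BC1 texture is 256 * 256 * 8 bytes = 512 KB, versus 4 MB for the same texture in R8G8B8A8, which is where the 1/8 ratio comes from. Also keep in mind that when you initialize a BC texture, the row pitch is measured in blocks rather than pixels (numBlocksWide * bytesPerBlock).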

#5304948 Dx11 Createtexture From Char*

Posted by on 09 August 2016 - 01:31 PM

You can just look at the tail end of the WIC texture loader from DirectXTex for an example.

#5304519 Conceptual Question On Separate Ps/vs Files

Posted by on 07 August 2016 - 01:37 PM

You can use whatever extension you'd like; the compiler doesn't care. Personally I like to use .hlsl for all files containing shader code, but that's just preference. Putting shared structure and constant buffer definitions in a shared header file is definitely a good idea, since it will ensure that if you change a structure, all of the shaders will see that change.
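
As a rough sketch of what a shared header can look like (the struct and type names here are just made up for illustration), you can use the preprocessor to paper over the type differences between the two languages:

// Shared between C++ and HLSL. The HLSL compiler doesn't define __cplusplus,
// so the typedefs below only kick in when the header is compiled as C++.
#if defined(__cplusplus)
    #include <DirectXMath.h>
    typedef DirectX::XMFLOAT4X4 float4x4;
    typedef DirectX::XMFLOAT4 float4;
    typedef DirectX::XMFLOAT3 float3;
#endif

struct ObjectConstants
{
    float4x4 WorldViewProjection;
    float4 TintColor;
    float3 LightDirection;
    float Padding;    // respect HLSL's 16-byte packing rules on the C++ side
};

Just watch out for the usual gotchas: HLSL's constant buffer packing rules and its default column-major matrix layout both have to be accounted for on the C++ side.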

#5304457 Create Dx Texture

Posted by on 06 August 2016 - 11:22 PM

I've used DirectXTex quite a bit. stb_image is also quite popular, since it's a single header file.
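
If you go the stb_image route, a bare-bones sketch of creating a texture from a loaded image looks something like this (no mipmaps, no sRGB handling, minimal error checking, and the LoadTexture helper name is just made up for the example):

#define STB_IMAGE_IMPLEMENTATION   // in exactly one .cpp file
#include "stb_image.h"
#include <d3d11.h>

ID3D11Texture2D* LoadTexture(ID3D11Device* device, const char* filePath)
{
    int width = 0, height = 0, channels = 0;
    unsigned char* pixels = stbi_load(filePath, &width, &height, &channels, 4);  // force RGBA
    if(pixels == nullptr)
        return nullptr;

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = UINT(width);
    desc.Height = UINT(height);
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    D3D11_SUBRESOURCE_DATA initData = {};
    initData.pSysMem = pixels;
    initData.SysMemPitch = UINT(width) * 4;

    ID3D11Texture2D* texture = nullptr;
    device->CreateTexture2D(&desc, &initData, &texture);

    stbi_image_free(pixels);
    return texture;
}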

#5302864 D3D11_Create_Device_Debug Question

Posted by on 27 July 2016 - 10:24 PM

Let's try and keep it friendly and on-topic here.  :)


To get back to the question being asked...have you tried forcing an error from the debug layer? It should be pretty easy to do this: just bind a texture as both a render target and a shader resource simultaneously, or use some incorrect parameters when creating a resource. You can also tell the debug layer to break into the debugger on an error or warning, which will ensure that you're not somehow missing the message:


ID3D11InfoQueue* infoQueue = nullptr;
DXCall(device->QueryInterface(__uuidof(ID3D11InfoQueue), reinterpret_cast<void**>(&infoQueue)));
infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING, TRUE);
infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
infoQueue->Release();    // don't forget to release the queried interface

#5302223 Directx 11, 11.1, 11.2 Or Directx 12

Posted by on 23 July 2016 - 03:36 PM

So there are two separate concepts here that you need to be aware of: the API, and the supported feature set. The API determines the set of possible D3D interfaces you can use, and the functions on those interfaces. Which API you can use is primarily dictated by the version of Windows that your program is running on, but it can also be dependent on the driver. The feature set tells you which functionality is actually supported by the GPU and its driver. In general, the API version dictates the maximum feature set that can be available to your app. So if you use D3D11.3 instead of D3D11.0, there are more functions and therefore more potential functionality available to you. However, using a newer API doesn't guarantee that the functionality will actually be supported by the hardware. As an example, take GPUs based on Nvidia's Kepler architecture: their drivers support D3D12 if you run on Windows 10; however, if you query the feature level it will report FEATURE_LEVEL_11_0. This means that you can't use features like conservative rasterization, even though the API exposes them.


So to answer your questions in order:


1. You should probably choose your minimum API based on the OS support. If you're okay with Windows 10 only, then you can just target D3D11.3 or D3D12 and that's fine. If you want to run on Windows 7, then you'll need to support D3D11.0 as your minimum. However you can still support different rendering paths by querying the supported API and feature set at runtime. Either way you'll probably need fallback paths if you want to use new functionality like conservative rasterization, because the API doesn't guarantee that the functionality is supported. You need to query for it at runtime to ensure that your GPU can do it. This is true even in D3D12.
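
For example, a rough sketch of picking up the supported feature level when creating an 11-style device (error handling omitted):

D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0 };
D3D_FEATURE_LEVEL supported = D3D_FEATURE_LEVEL_9_1;
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                  requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
                  &device, &supported, &context);
// 'supported' now holds the highest feature level from the list that the
// GPU/driver can handle, which you can use to choose between rendering paths.

One gotcha: older runtimes that don't know about 11_1 will fail the call with E_INVALIDARG if it's in the list, so you may need to retry with just 11_0.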


Regarding 11.3 vs 12: D3D12 is very, very different from D3D11, and generally much harder to use even for relatively simple tasks. I would only go down that route if you think you'll really benefit from the reduced CPU overhead and multithreading capabilities, or if you're looking for an educational experience in keeping up with the latest APIs. And to answer your follow-up question "does 11.3 hardware support 12 as well": there really isn't any such thing as "11.3 hardware". Like I mentioned earlier, 11.3 is just an API, not a mandated feature set. So you can use D3D11.3 to target hardware with FEATURE_LEVEL_11_0, you'll just get runtime failures if you try to use functionality that's not supported.


2. You can QueryInterface at runtime to get one interface version from another. You can either do it in advance and store separate pointers for each version, or you can call it as-needed.


3. Yes, you can still call the old version of those functions. Just remember that the new functionality may not be supported by the hardware/driver, so you need to query for support. In the case of the constant buffer functionality added for 11.1, you can query by calling CheckFeatureSupport with D3D11_FEATURE_D3D11_OPTIONS, and then checking the appropriate members of the returned D3D11_FEATURE_DATA_D3D11_OPTIONS structure.
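
Something along these lines (double-check the member names against the headers, but it should be close to this):

D3D11_FEATURE_DATA_D3D11_OPTIONS options = {};
HRESULT hr = device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS, &options, sizeof(options));
if(SUCCEEDED(hr))
{
    // The 11.1 constant buffer features mentioned above
    BOOL supportsCBOffsetting = options.ConstantBufferOffsetting;
    BOOL supportsCBPartialUpdate = options.ConstantBufferPartialUpdate;
}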

#5302009 How To Suppress Dx9 And Sdk 8 Conflict Warnings?

Posted by on 22 July 2016 - 01:33 PM

See this.