
MJP

Member Since 29 Mar 2007

#5263320 Render GUI elements with projection matrix

Posted by MJP on 23 November 2015 - 02:21 PM

I think that you mean "perspective projection matrix", not "projection matrix" (orthographic is still a type of projection).

If you have a 2D point and you want to know where it would be in 3D, you can "unproject" the 2D point to get the original world-space or view-space coordinate that would produce that 2D position once projected. DirectXMath even has a helper function for doing it. Basically, it converts from window coordinates to normalized device coordinates, transforms by the inverse of the combined world * view * projection matrix, and performs the perspective divide-by-w. If you want the coordinates in world space, pass an identity matrix as the "World" parameter. If you want the coordinates in view space, pass an identity matrix for both the "World" and "View" parameters.
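
That helper is XMVector3Unproject. A rough sketch of using it to get a world-space position (mouseX/mouseY, the viewport size, and the matrices are placeholders you'd pull from your own app; a depth of 0 gives the point on the near plane, 1 the far plane):

#include <DirectXMath.h>
using namespace DirectX;

// Unproject a window-space point back to world space
XMVECTOR windowPos = XMVectorSet(mouseX, mouseY, 0.0f, 1.0f);
XMVECTOR worldPos = XMVector3Unproject(windowPos,
                                       0.0f, 0.0f,                    // viewport top-left
                                       viewportWidth, viewportHeight, // viewport size
                                       0.0f, 1.0f,                    // viewport MinZ/MaxZ
                                       projection, view,
                                       XMMatrixIdentity());           // identity "World" -> world space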


#5262700 Diffuse Light

Posted by MJP on 19 November 2015 - 12:34 AM

I took a quick look, and I didn't notice anything immediately wrong with your code. Are you definitely binding the correct input layout when you're drawing the cube? It's probably worth capturing a frame in RenderDoc or the VS graphics debugger and making sure that all of your state is correctly set up at the time of your draw call.


#5262699 PBR precision issue using GBuffer RGBA16F

Posted by MJP on 19 November 2015 - 12:26 AM

It looks like you're getting precision issues from the normals in your G-Buffer. Fixed-point formats work much better than floating-point formats for normals, so use them if you can (16-bit SNORM formats are convenient for normals, since they store values in the [-1, 1] range). You can also get better precision with less storage by encoding the normals in a special format.
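
To sketch the "special format" idea: one popular choice is octahedral encoding, shown here as plain C++ for illustration (the struct names are mine, and the matching decode just reverses the fold):

#include <cmath>

struct Float2 { float x, y; };
struct Float3 { float x, y, z; };

// Octahedral encoding: map a unit normal to 2 values in [-1, 1], which can
// then be stored in a pair of 16-bit (or even 8-bit) SNORM channels.
Float2 OctEncode(Float3 n)
{
    // Project onto the octahedron |x| + |y| + |z| = 1
    float invL1 = 1.0f / (std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z));
    float u = n.x * invL1;
    float v = n.y * invL1;

    // Fold the lower hemisphere over the diagonals
    if (n.z < 0.0f)
    {
        float foldedU = (1.0f - std::fabs(v)) * (u >= 0.0f ? 1.0f : -1.0f);
        float foldedV = (1.0f - std::fabs(u)) * (v >= 0.0f ? 1.0f : -1.0f);
        u = foldedU;
        v = foldedV;
    }

    return { u, v };
}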


#5262698 Temporal AA

Posted by MJP on 19 November 2015 - 12:21 AM

How do you handle transparent objects correctly for the velocity buffer?


To do it "correctly" you'd need to store multiple layers in both your velocity buffer *and* your history buffer, and then reproject separately per-layer. This is both impractical and expensive.

I don't think that anybody has a good solution for that at the moment. This basically means that you won't reproject transparent pixels correctly, but if you use neighborhood color analysis you can still do a decent job of preventing ghosting.
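
Just to illustrate the neighborhood idea, here's a minimal C++ sketch of history clamping (the Color type and the 3x3 sampling are assumed; real implementations often work in YCoCg space or use variance clipping instead of a plain min/max box):

#include <algorithm>

struct Color { float r, g, b; };

// Clamp the reprojected history sample to the min/max box of the current
// frame's 3x3 neighborhood, so stale history (e.g. from transparents that
// weren't reprojected correctly) can't ghost.
Color ClampHistory(Color history, const Color neighborhood[9])
{
    Color mn = neighborhood[0];
    Color mx = neighborhood[0];
    for (int i = 1; i < 9; ++i)
    {
        mn.r = std::min(mn.r, neighborhood[i].r);
        mn.g = std::min(mn.g, neighborhood[i].g);
        mn.b = std::min(mn.b, neighborhood[i].b);
        mx.r = std::max(mx.r, neighborhood[i].r);
        mx.g = std::max(mx.g, neighborhood[i].g);
        mx.b = std::max(mx.b, neighborhood[i].b);
    }
    return { std::clamp(history.r, mn.r, mx.r),
             std::clamp(history.g, mn.g, mx.g),
             std::clamp(history.b, mn.b, mx.b) };
}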


#5262507 Post-Process HDR or LDR nowadays

Posted by MJP on 17 November 2015 - 07:29 PM

For lens effects like bloom/glare/DOF/motion blur/etc. you really want to use linear HDR if you can. Doing MSAA after tone mapping is possible, but expensive and rather difficult: it essentially requires you to do your post-processing at MSAA resolution, which is both tedious and costly. I'd recommend doing MSAA and/or temporal AA before your post-processing, since this will provide a more temporally stable input for your post-processing chain. This really helps reduce flickering from bloom and DOF.


#5262215 Resident Evil 2 "Color masks"?

Posted by MJP on 16 November 2015 - 12:53 AM

I don't think I'd call it "software rendering", exactly. Basically, the PS1 CPU had an extra coprocessor (the GTE) with some vector instructions that you'd use for transforming/lighting vertices and for setting up and clipping triangles. The "GPU" would then take the primitives, texture them, and write the result out to a framebuffer in VRAM. There's some info on the GTE here, and some info on the GPU here.

Either way, the GPU didn't have a Z buffer, so most PS1 games would sort their polygons by depth. If you watch or play a PS1 game you can usually see this causing popping artifacts all over the place: it's usually really obvious on animated characters in areas like the shoulders, where the limbs intersect the body. The GPU did, however, support a very simple masking system, which I think worked by setting and detecting the MSB of a pixel in the framebuffer. So it's possible that they somehow made use of that to selectively render (or re-render) the background so that it would appear to occlude the player model.


#5261837 Temporal AA

Posted by MJP on 12 November 2015 - 04:16 PM

A long time ago I read that HRAA could be possible with D3D12 and Vulkan because of their low-level APIs. Maybe the mix of techniques can solve these issues?


HRAA relied on two things that are not available across all GPUs: decoupled coverage and color samples for MSAA, and programmable MSAA sample points. The first is called "EQAA" on AMD hardware and "CSAA" on Nvidia hardware, but unfortunately you can't get low-level access to either of them through PC APIs. Nvidia also removed CSAA from their most recent line of GPUs based on Maxwell 2.0. The second part, programmable MSAA sample points, is accessible on certain hardware through vendor extensions. Unfortunately there are currently no vendor extensions for DX12, and it's not part of its supported feature set. As for Vulkan, there's no public spec yet, so I have no idea what it supports.


#5261836 Temporal AA

Posted by MJP on 12 November 2015 - 04:10 PM

Some links for you to read:

http://advances.realtimerendering.com/s2012/CCP/Malan-Dust_514_GI_reflections(Siggraph2012).pptx

http://advances.realtimerendering.com/s2013/Sousa_Graphics_Gems_CryENGINE3.pptx

http://advances.realtimerendering.com/s2014/epic/TemporalAA.pptx

http://advances.realtimerendering.com/s2014/drobot/hraa.pptx

http://advances.realtimerendering.com/s2015/rad_siggraph_advances_2015.pptx

https://github.com/TheRealMJP/MSAAFilter (code + demo app)

If you have specific questions I can try and help out, since I've spent some time on this.


#5261441 Directional lightmapped specular

Posted by MJP on 10 November 2015 - 05:24 PM

The short answer is: yes, you can definitely do better than that. Even if you just want some hacky specular out of a Half-Life 2-style lightmap, I don't think you want to do what he or she was doing in the post that you linked. You'll probably get better results from just treating the lightmap info as 3 directional lights oriented along the basis vectors, and then computing Phong or Blinn-Phong specular from that.

You could also try computing a "dominant direction" from your lightmap info and treating that as a directional light, although this approach can sometimes suffer from interpolation issues where the dominant direction doesn't match up from one texel to another. To compute the dominant direction you could probably just use a weighted average: compute a single luminance value for each of your 3 lightmap values, multiply each basis direction by its luminance, sum the results, and then normalize the result. You can then create a virtual directional light oriented in that dominant direction, and for the color you could look up the irradiance in that direction (exactly the same way that you look up irradiance for diffuse: dot the direction with each basis vector, and multiply the result of that dot product with the lightmap value that corresponds to that basis vector). This is somewhat similar to what Naughty Dog did for The Last of Us, except that they pre-computed the dominant direction and a color for that dominant direction, and combined that with a "flat" ambient term that didn't depend on the direction at all.
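
To make the weighted-average idea concrete, here's a rough C++ sketch (the Rec. 709 luminance weights, the clamp to zero in the lookup, and the function names are my own illustration; in practice this would live in a shader):

#include <cmath>

struct Float3 { float x, y, z; };

// 'basisDirs' are the 3 lightmap basis directions (e.g. the Half-Life 2 basis),
// 'lightmapColors' are the 3 corresponding colors sampled from the lightmap.
Float3 ComputeDominantDirection(const Float3 basisDirs[3], const Float3 lightmapColors[3])
{
    Float3 dominant = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 3; ++i)
    {
        // Weight each basis direction by the luminance of its lightmap color
        float lum = 0.2126f * lightmapColors[i].x +
                    0.7152f * lightmapColors[i].y +
                    0.0722f * lightmapColors[i].z;
        dominant.x += basisDirs[i].x * lum;
        dominant.y += basisDirs[i].y * lum;
        dominant.z += basisDirs[i].z * lum;
    }

    float len = std::sqrt(dominant.x * dominant.x +
                          dominant.y * dominant.y +
                          dominant.z * dominant.z);
    if (len > 0.0f)
    {
        dominant.x /= len;
        dominant.y /= len;
        dominant.z /= len;
    }
    return dominant;
}

// The light color is the irradiance looked up in that direction, exactly like
// the diffuse lookup: dot with each basis vector, and weight the matching color.
Float3 EvaluateLightmap(const Float3& dir, const Float3 basisDirs[3], const Float3 lightmapColors[3])
{
    Float3 result = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 3; ++i)
    {
        float w = std::fmax(dir.x * basisDirs[i].x +
                            dir.y * basisDirs[i].y +
                            dir.z * basisDirs[i].z, 0.0f);
        result.x += lightmapColors[i].x * w;
        result.y += lightmapColors[i].y * w;
        result.z += lightmapColors[i].z * w;
    }
    return result;
}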

If you're interested in achieving a better approximation of environment specular, you'll want to use a technique that at least attempts to approximate the integral of a specular BRDF with the radiance of the environment. One way to do this is to represent your incoming radiance using spherical harmonics, which provide a framework for computing the integral by means of frequency domain convolution. This was the approach taken by Bungie for Halo 3, and was discussed in their SIGGRAPH presentation as well as their course notes.

More recently, we presented some info at SIGGRAPH about our lightmap baking for The Order. Instead of representing incoming radiance with spherical harmonics, we used a fixed-size set of spherical gaussians. With SGs you have analytical formulas for integrating the product of two SGs, so you can compute a specular term as long as you can approximate your BRDF using a gaussian.


#5261082 error using resource barrier from multiple commandlists for same resource

Posted by MJP on 09 November 2015 - 12:35 AM

As I understand it, the different command list types (graphics/direct, compute, and copy) can only deal with resource states that they understand. So in order to transition to or from the pixel shader resource state, you need to use a graphics command list. That means you'll need to transition to and from the UAV state on your graphics command list, rather than on your compute command list.
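
For example, a sketch of bracketing the compute work on the direct command list ('texture' and 'graphicsCmdList' are assumed to exist, and the compute work itself would be synchronized with a fence):

// On the graphics (direct) command list: move the texture out of the
// pixel shader resource state before the compute work writes it as a UAV
D3D12_RESOURCE_BARRIER barrier = {};
barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Transition.pResource = texture;
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_UNORDERED_ACCESS;
graphicsCmdList->ResourceBarrier(1, &barrier);

// ...execute the compute command list...

// ...then transition back on the graphics command list before sampling it
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_UNORDERED_ACCESS;
barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
graphicsCmdList->ResourceBarrier(1, &barrier);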


#5261080 RenderTargetView* error

Posted by MJP on 09 November 2015 - 12:28 AM

The reason that OMSetRenderTargets takes a pointer to a pointer (instead of just a pointer) is that it expects an array of pointers to render target views. This is so that you can set multiple render targets simultaneously with one API call. It's the same for all of the functions that set shader resource views, samplers, constant buffers, and unordered access views. I usually prefer to make an array on the stack to pass into those functions, like this:

ID3D11RenderTargetView* rtViews[] = { it->RenderTargetSelect() };
d3dContext->OMSetRenderTargets(1, rtViews, nullptr);
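
If you wanted to bind multiple render targets at once (say, for a G-Buffer), it's the same pattern with a bigger array (gBufferRTV0/gBufferRTV1/depthStencilView here are hypothetical views):

// Bind two render targets plus a depth-stencil view with a single call
ID3D11RenderTargetView* mrtViews[] = { gBufferRTV0, gBufferRTV1 };
d3dContext->OMSetRenderTargets(2, mrtViews, depthStencilView);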



#5260938 compile & send me exe

Posted by MJP on 07 November 2015 - 04:11 PM

You should read through this: Where's DXERR.LIB?


#5260200 [D3D12] Minimal Tiled Resources implementation

Posted by MJP on 02 November 2015 - 03:36 PM

You should put that request here! https://github.com/Microsoft/DirectX-Graphics-Samples/issues


That's a good idea! I actually went to do that, and noticed that there's a new reserved resources sample that was added 7 days ago.


#5260072 Problem with D3D11_INPUT_ELEMENT_DESC

Posted by MJP on 02 November 2015 - 01:12 AM

Since you're using DXGI_FORMAT_R32G32B32A32_FLOAT, the input assembler is going to interpret your color values as-is. In other words, your vertex shader is going to get values of (255, 0, 0), (0, 255, 0), etc. You probably want to change all of those 255.0f's to 1.0f's.

If you were using DXGI_FORMAT_R8G8B8A8_UNORM, then you would want your RGBA values to be 1-byte unsigned integers from 0 to 255. In that case it would make sense to use values of 255, but for float values it probably doesn't make sense.
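
As a sketch, with a float color format the vertex data and input layout might look like this (the struct and variable names are just placeholders):

// With a full-float color format, the input assembler hands the values to
// the vertex shader unmodified, so colors belong in the [0, 1] range.
struct SimpleVertex
{
    float position[3];
    float color[4];
};

SimpleVertex triangle[] =
{
    { {  0.0f,  0.5f, 0.5f }, { 1.0f, 0.0f, 0.0f, 1.0f } },  // red (1.0f, not 255.0f)
    { {  0.5f, -0.5f, 0.5f }, { 0.0f, 1.0f, 0.0f, 1.0f } },  // green
    { { -0.5f, -0.5f, 0.5f }, { 0.0f, 0.0f, 1.0f, 1.0f } },  // blue
};

D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};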


#5259681 [D3D12] Placed resources

Posted by MJP on 29 October 2015 - 10:53 PM

With committed resources, each resource creates its own heap that contains only that one resource. With placed resources, you create the heap separately, and then specify where a resource is located within that heap. This lets you pack multiple resources into a single heap, potentially even overlapping with each other. The overlapping part is pretty useful, since it can potentially save quite a bit of memory for render targets and other resources that are only needed for a portion of a frame. For example, say you have a low-resolution render target that you render your SSAO into, which later gets applied to your ambient lighting. Once you're done applying the AO, you can re-use that memory for, say, a post-processing render target by placing the latter resource into the same heap at the same memory location.
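
Here's a hypothetical sketch of that SSAO/post-processing example (the resource descs, device, and command list are assumed, and error handling is omitted):

// Query the size/alignment each resource needs within a heap
D3D12_RESOURCE_ALLOCATION_INFO ssaoInfo = device->GetResourceAllocationInfo(0, 1, &ssaoTargetDesc);
D3D12_RESOURCE_ALLOCATION_INFO postInfo = device->GetResourceAllocationInfo(0, 1, &postTargetDesc);

// One heap big enough (and aligned enough) for whichever resource is larger
D3D12_HEAP_DESC heapDesc = {};
heapDesc.SizeInBytes = ssaoInfo.SizeInBytes > postInfo.SizeInBytes ? ssaoInfo.SizeInBytes : postInfo.SizeInBytes;
heapDesc.Alignment = ssaoInfo.Alignment > postInfo.Alignment ? ssaoInfo.Alignment : postInfo.Alignment;
heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES;  // required on resource heap tier 1

ID3D12Heap* heap = nullptr;
device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

// Place both render targets at offset 0, so they alias the same memory
ID3D12Resource* ssaoTarget = nullptr;
ID3D12Resource* postTarget = nullptr;
device->CreatePlacedResource(heap, 0, &ssaoTargetDesc, D3D12_RESOURCE_STATE_RENDER_TARGET,
                             nullptr, IID_PPV_ARGS(&ssaoTarget));
device->CreatePlacedResource(heap, 0, &postTargetDesc, D3D12_RESOURCE_STATE_RENDER_TARGET,
                             nullptr, IID_PPV_ARGS(&postTarget));

// When switching from the SSAO target to the post target, issue an aliasing barrier
D3D12_RESOURCE_BARRIER aliasBarrier = {};
aliasBarrier.Type = D3D12_RESOURCE_BARRIER_TYPE_ALIASING;
aliasBarrier.Aliasing.pResourceBefore = ssaoTarget;
aliasBarrier.Aliasing.pResourceAfter = postTarget;
cmdList->ResourceBarrier(1, &aliasBarrier);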

Some things to watch out for with placed resources:

* Resources have alignment requirements that must be honored when you place them into a heap, and also affect the expected size of a given resource. For buffers the alignment is always 64KB, but for textures it varies from 4KB to 4MB. You can ask the device for the alignment and size of a resource using GetResourceAllocationInfo.

* There are two "resource heap" tiers that determine what kinds of resources can be mixed within a single heap. See the documentation for D3D12_RESOURCE_HEAP_TIER for more info.

* When two placed resources share the same memory within a heap, you need to use aliasing barriers before the memory can be re-used. See the documentation on memory aliasing and CreatePlacedResource for more info.

* Residency can only be controlled on a per-heap basis. This means that you can call Evict/MakeResident on a committed resource, but not on a placed resource. So if you want to evict a placed resource, you have to evict the entire heap. See the docs regarding residency for more info.



