

MJP

Member Since 29 Mar 2007

#5220760 Difference between SDSM and PSSM?

Posted by MJP on 01 April 2015 - 12:19 PM

SDSM works by rasterizing to a depth buffer on the GPU, and then using a compute shader to analyze the depth samples in order to come up with optimal projections for the shadow map splits. This can be much more accurate than using object bounding boxes, especially when you consider that it handles occlusion.

The SDSM paper and demo propose a few different techniques. The simplest one is to just compute the min and max Z values visible to the camera using the depth buffer, which you can then use to compute optimal split distances. It also proposes taking things a step further by transforming every depth buffer position into the local space of the directional light, and then fitting a tight AABB per split.
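
For a rough idea of how the simple version fits together, here's a minimal CPU-side sketch that computes split distances from a reduced depth range, using the common blend of logarithmic and uniform split schemes. It assumes you've already reduced the depth buffer down to minZ/maxZ (for example, read back from the compute shader); numSplits and lambda are just placeholder parameters.

#include <cmath>
#include <cstdint>
#include <vector>

// Blend of logarithmic and uniform split schemes (lambda = 1 is fully logarithmic)
std::vector<float> ComputeSplitDistances(float minZ, float maxZ,
                                         uint32_t numSplits, float lambda)
{
    std::vector<float> splits(numSplits + 1);
    splits[0] = minZ;
    for(uint32_t i = 1; i <= numSplits; ++i)
    {
        float t = float(i) / float(numSplits);
        float logSplit = minZ * std::pow(maxZ / minZ, t);
        float uniformSplit = minZ + (maxZ - minZ) * t;
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniformSplit;
    }
    return splits;
}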




#5220316 Mip-chain generation compute shader

Posted by MJP on 30 March 2015 - 04:56 PM

That's all legal. Read/write hazards are tracked per-subresource, so you can read from one mip level and write to another. I would just make sure that you run the code with the debug layer active: it will emit an error message if you accidentally introduce a read/write hazard.
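
For reference, here's a rough sketch of what I mean, assuming a Texture2D with a full mip chain: the SRV is restricted to mip N and the UAV points at mip N + 1, so the shader only ever reads and writes distinct subresources. texture, format, downsampleCS, and the mip dimensions are placeholders for whatever you already have.

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = mipLevel;    // read from mip N only
srvDesc.Texture2D.MipLevels = 1;

D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format = format;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
uavDesc.Texture2D.MipSlice = mipLevel + 1;       // write to mip N + 1

ID3D11ShaderResourceView* srcSRV = nullptr;
ID3D11UnorderedAccessView* dstUAV = nullptr;
device->CreateShaderResourceView(texture, &srvDesc, &srcSRV);
device->CreateUnorderedAccessView(texture, &uavDesc, &dstUAV);

context->CSSetShader(downsampleCS, nullptr, 0);
context->CSSetShaderResources(0, 1, &srcSRV);
context->CSSetUnorderedAccessViews(0, 1, &dstUAV, nullptr);
context->Dispatch((dstMipWidth + 7) / 8, (dstMipHeight + 7) / 8, 1);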




#5220097 The Atomic Man: Are lockless data structures REALLY worth learning about?

Posted by MJP on 29 March 2015 - 11:30 PM

Just wanted to quickly give a +1 to what Hodgman mentioned in the second half of his post: you can often get better performance *and* have fewer bugs by removing the need for mutable shared resources.




#5220072 Copy 2D array into Texture2D?

Posted by MJP on 29 March 2015 - 08:09 PM

How is it crashing, and where? When you get the crash while running in the debugger, you should be getting some information on the unhandled exception that led to the crash. You should also be able to break into the debugger and find the callstack of the thread that encountered the exception. These two things can give you valuable information for tracking down the crash.




#5220046 Copy 2D array into Texture2D?

Posted by MJP on 29 March 2015 - 05:24 PM

First of all...are you looking to create a static texture that you initialize once and use for a long time? Or are you looking to create a dynamic texture, whose contents are updated frequently with new data from CPU memory? You're setting things up for the latter case, and I just want to make sure that this is what you intended.

Those last 4 lines, where you map the texture and update its contents, are definitely wrong. You're performing a memcpy using a pointer to your D3D11_SUBRESOURCE_DATA struct as your source, and then copying "width" bytes. This isn't copying the right data, and will surely result in the memcpy reading garbage data off the stack. In fact you don't need that D3D11_SUBRESOURCE_DATA struct at all; that's only something you use for initializing a texture with data. You're also using the address of mappedResource.pData as your destination, which means you're passing a pointer to a pointer to your mapped data, rather than a pointer to your mapped data. This will result in your memcpy stomping over your stack, which will result in very bad things happening.

You want something like this:

 

D3D11_TEXTURE2D_DESC desc2;
ZeroMemory(&desc2, sizeof(desc2));
desc2.Width = width;
desc2.Height = height;
desc2.ArraySize = 1;
desc2.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
desc2.Usage = D3D11_USAGE_DYNAMIC;
desc2.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc2.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc2.MipLevels = 1;
desc2.SampleDesc.Count = 1;
desc2.SampleDesc.Quality = 0;
 
device->CreateTexture2D(&desc2, NULL, &textureBuffer);
 
const uint32_t texelSize = 16;  // size of DXGI_FORMAT_R32G32B32A32_FLOAT
 
D3D11_MAPPED_SUBRESOURCE mappedResource;
deviceContext->Map(textureBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
uint8_t* dstData = reinterpret_cast<uint8_t*>(mappedResource.pData);
const uint8_t* srcData = reinterpret_cast<const uint8_t*>(grid);
for(uint32_t i = 0; i < height; ++i)
{
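    // Copy one row at a time: the mapped texture's RowPitch can be larger
    // than the tightly-packed source row (width * texelSize)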
    memcpy(dstData, srcData, width * texelSize);
    dstData += mappedResource.RowPitch;
    srcData += width * texelSize;
}
deviceContext->Unmap(textureBuffer, 0);



#5218986 Documentation for PSGL and GNMX

Posted by MJP on 25 March 2015 - 12:32 AM

The GNM/GNMX docs, like any other console-specific documentation, are not public and are only available to registered developers who have signed NDA's. The same goes for PSGL and PSSL (PSGL was an incomplete OpenGL implementation for the PS3, and PSSL is the official shading language for the PS4).

 

Unfortunately I am covered by said NDA's, so I can't really tell you anything about them specifically. I will just say that in general, console API's can be really nice since they're tailored to exactly one GPU and can therefore expose it completely (although this can sometimes make things a bit harder, since functionality that's normally abstracted away by PC API's has to be managed manually).




#5218553 Particle Z Fighting

Posted by MJP on 23 March 2015 - 01:05 PM

The order in which elements are added to an append buffer is non-deterministic, since it uses a global atomic increment under the hood. If you're using standard alpha blending, you'll need to sort your particles by Z order if you want to get the correct result.
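
If your particles are simulated on the CPU, a simple back-to-front sort by view-space depth before you fill the vertex buffer is enough; for GPU particles living in an append buffer you'd need a GPU sort (a bitonic sort is the usual choice) instead. Here's a minimal CPU-side sketch; Particle, cameraPosition, and cameraForward are just placeholder names.

#include <algorithm>
#include <vector>
#include <DirectXMath.h>

struct Particle
{
    DirectX::XMFLOAT3 Position;
    // ...other per-particle data
};

// Sort back-to-front so standard alpha blending composites in the correct order
void SortParticles(std::vector<Particle>& particles,
                   const DirectX::XMFLOAT3& cameraPosition,
                   const DirectX::XMFLOAT3& cameraForward)
{
    using namespace DirectX;
    auto viewDepth = [&](const Particle& p)
    {
        XMVECTOR toParticle = XMVectorSubtract(XMLoadFloat3(&p.Position),
                                               XMLoadFloat3(&cameraPosition));
        return XMVectorGetX(XMVector3Dot(toParticle, XMLoadFloat3(&cameraForward)));
    };

    std::sort(particles.begin(), particles.end(),
              [&](const Particle& a, const Particle& b)
              {
                  return viewDepth(a) > viewDepth(b);    // furthest first
              });
}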




#5218343 Link shader code to program

Posted by MJP on 22 March 2015 - 04:21 PM

Most rendering engines treat their shaders as data, similar to how you would treat textures or meshes. For rendering API's that support offline compilation (D3D, Metal) this data will consist of pre-compiled bytecode, and for GL the data will be GLSL source code (typically generated from HLSL or another meta-language, with optimizations and pre-processing already applied).
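
As a rough sketch of the "bytecode as data" approach (the file name and helper functions here are just placeholders): load the pre-compiled blob that the offline compiler wrote out, and hand it straight to the device.

#include <d3d11.h>
#include <fstream>
#include <vector>

// Load a pre-compiled shader blob (e.g. a .cso produced offline by fxc)
std::vector<char> LoadShaderBytecode(const char* path)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    std::vector<char> data(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(data.data(), data.size());
    return data;
}

ID3D11PixelShader* CreatePixelShaderFromFile(ID3D11Device* device, const char* path)
{
    std::vector<char> bytecode = LoadShaderBytecode(path);
    ID3D11PixelShader* shader = nullptr;
    device->CreatePixelShader(bytecode.data(), bytecode.size(), nullptr, &shader);
    return shader;
}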

 

For simple projects embedding shaders right in the executable can be convenient, but it doesn't scale up very well. For GL it's also pretty annoying to write and edit your shaders inside of string literals.




#5218189 Couple of questions regarding graphics API?

Posted by MJP on 21 March 2015 - 08:34 PM


The reason that I asked this question is that I want to implement a completely new ray tracing rendering engine instead of making a hybrid (DX with ray tracing) rendering engine. So that's why I wanted to know if I can access the GPU directly (on PC and consoles) instead of going through an API?

 

You absolutely don't need "direct" access to a GPU in order to write a ray tracing engine. Plenty of people have already written excellent ray tracers and path tracers on top of the current PC API's. You can go ahead and look at the docs for AMD GPU's if you'd like: you'll see very quickly that the majority of command buffer packets and registers are for setting up hardware states related to rasterization. For a ray tracer on a GPU you're just going to be using compute shader functionality, and the PC API's aren't going to hold you back too much in that regard.
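
To give a very rough idea of what that looks like in practice, the core of a GPU ray/path tracer on D3D11 is basically a compute dispatch that writes its results into a screen-sized texture through a UAV. rayTraceCS, sceneBufferSRV, and outputUAV are placeholders for resources you'd create yourself.

context->CSSetShader(rayTraceCS, nullptr, 0);
context->CSSetShaderResources(0, 1, &sceneBufferSRV);        // e.g. triangles or a BVH
context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);
context->Dispatch((screenWidth + 7) / 8, (screenHeight + 7) / 8, 1);

// Then copy or composite the output texture into the back buffer as usual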




#5218165 SH directional lights, what am I missing?

Posted by MJP on 21 March 2015 - 05:39 PM

Tasty Texel is pretty much correct: SH can't exactly represent the signal you're trying to approximate, which in this case is a single clamped cosine lobe oriented in the direction of your directional light. If you go to page 8 of this paper you can see a plot of the 3rd/5th order approximation vs. an actual clamped cosine lobe, and you'll see that even for the 3rd and 5th order case there can be significant error. For 2nd order the error will be worse, and that error can result in rather extreme artifacts for lights with very high intensities. An even worse problem with SH (in my opinion) is that you end up with a negative contribution in the direction opposite of your directional light. This means that adding very bright directional lights can essentially "suck the light" out on the opposite side of the sphere, which can look really bad in practice.

By the way, the code you've listed will actually give you the irradiance of your light source. If you're trying to use properly-balanced diffuse and specular BRDF's, then you'll want to make sure that you multiply your irradiance by the Lambertian diffuse BRDF, which is DiffuseAlbedo / Pi. A lot of people will just multiply by diffuse albedo, and you'll end up with diffuse that's too bright by a factor of Pi.
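
In other words, something along these lines (a sketch only; the function and parameter names are made up):

#include <DirectXMath.h>

// Convert irradiance to outgoing diffuse radiance with the Lambertian BRDF
// (DiffuseAlbedo / Pi). Skipping the 1/Pi leaves the diffuse too bright by a factor of Pi.
DirectX::XMVECTOR ComputeDiffuse(DirectX::FXMVECTOR irradiance,
                                 DirectX::FXMVECTOR diffuseAlbedo)
{
    const float Pi = 3.14159265f;
    return DirectX::XMVectorScale(DirectX::XMVectorMultiply(irradiance, diffuseAlbedo),
                                  1.0f / Pi);
}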

 

EDIT: forgot the link!




#5217740 Data structure with bool field. How to set correctly?

Posted by MJP on 19 March 2015 - 02:51 PM

phil_t is correct: the "bool" type in HLSL is 32bit/4 bytes. Same with "int" and "uint". I would suggest that you use types with well-defined sizes for your corresponding C++ struct, so that there's no ambiguity. stdint.h has types such as "uint32_t" and "int32_t" that work well for this purpose.
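
For example, something like this on the C++ side (the cbuffer and its fields are hypothetical, just to show the mapping):

#include <cstdint>

// Mirrors an HLSL cbuffer along the lines of:
//   cbuffer Constants { bool EnableFog; int LightCount; float FogDensity; };
// HLSL bool/int/uint are all 32 bits, so use fixed-size types on the C++ side.
struct Constants
{
    uint32_t EnableFog;     // HLSL bool  -> 4 bytes (0 or 1)
    int32_t  LightCount;    // HLSL int   -> 4 bytes
    float    FogDensity;    // HLSL float -> 4 bytes
    uint32_t Padding;       // keep the struct a multiple of 16 bytes for the constant buffer
};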




#5217498 How 3D engines were made in the 80's and 90's without using any graph...

Posted by MJP on 18 March 2015 - 06:45 PM


I would really love to know how to do that. I need to see some code.

 

For example code for a software rasterizer, just google something like "software rasterizer" or "software rasterizer C++" and you should be able to find plenty of examples, articles, and tutorials.

 

As for writing to a buffer of pixels and getting it on the screen, there's an older library called PixelToaster that can make this quite easy for you.




#5216751 Visual studio 2013 and DirectX.

Posted by MJP on 15 March 2015 - 06:30 PM

Most of the removed functionality can be found in other open libraries, such as DirectXTK though.

 

Indeed. To help with that, many of the MSDN pages for deprecated D3DX functionality indicate that they are deprecated, and link to their replacements in DirectXTK/DirectXTex.

TBH I like the DirectXTK/DirectXTex replacements better than the old D3DX stuff, since they don't use COM interfaces and you can also step through the code.




#5216750 ConstantTable.SetValue problems

Posted by MJP on 15 March 2015 - 06:28 PM

That function isn't changing the shader bytecode, it's modifying the device state. When you have those kinds of constants in a shader, they get mapped to a set of 256 constant registers (named c0-c255) that are supposed to contain the values that the shader can use. Those registers are part of the device and are shared by all shaders of the same type. So the idea is that when you're ready to draw with a particular shader, you bind the shader, set the constant registers to the appropriate values, and then draw. 
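
As a rough sketch of that flow with the D3D9 constant table ("gWorldViewProj" and the draw arguments are just placeholders):

// Reflect the constant table from the compiled shader bytecode
ID3DXConstantTable* constantTable = nullptr;
D3DXGetShaderConstantTable(shaderBytecode, &constantTable);

D3DXHANDLE wvpHandle = constantTable->GetConstantByName(NULL, "gWorldViewProj");

// Bind the shader, fill its constant registers, then draw
device->SetVertexShader(vertexShader);
constantTable->SetMatrix(device, wvpHandle, &worldViewProj);
device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVertices, 0, numTriangles);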




#5216749 ShaderResourceView sRGB format having no effect on sampler reads

Posted by MJP on 15 March 2015 - 06:25 PM

I'm not a user of SharpDX, but the documentation indicates that Texture2D.FromFile is just a wrapper around D3DX11CreateTextureFromFile. One of the quirks of that function is that if you specify an sRGB format, it will assume that the image file is in linear space (not sRGB!), and will thus perform a linear->sRGB conversion on the image data before creating the texture. So basically your texture data gets a linear->sRGB transformation applied to it, and then the texture unit applies an sRGB->linear conversion when you sample it, which ends up giving you a result very similar to what you would get if you never used an sRGB format in the first place.

To tell the loader that the image file is already in sRGB space and that it shouldn't convert anything, you need to set the "Filter" field to "FilterFlags.SRgbIn | FilterFlags.SRgbOut". Or alternatively, you can use "FilterFlags.SRgb" which is the same as using the "In" and "Out" flags together. 
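
For reference, the equivalent native D3DX11 call (which the SharpDX wrapper maps onto) looks roughly like this; the format and file name are placeholders:

#include <D3DX11tex.h>

D3DX11_IMAGE_LOAD_INFO loadInfo;
loadInfo.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
loadInfo.Filter = D3DX11_FILTER_NONE | D3DX11_FILTER_SRGB;     // sRGB in + sRGB out: no conversion on load
loadInfo.MipFilter = D3DX11_FILTER_NONE | D3DX11_FILTER_SRGB;

ID3D11Resource* texture = nullptr;
D3DX11CreateTextureFromFile(device, L"myTexture.png", &loadInfo, nullptr, &texture, nullptr);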





