

MJP

Member Since 29 Mar 2007

#4997351 Questions about physically based shading

Posted by MJP on 04 November 2012 - 06:09 PM

Well, not necessarily at just grazing angles, but at whatever viewing angles cause your specular BRDF to produce a higher intensity. If you're using a physically based BRDF the specular should be quite bright relative to the diffuse contribution, even for materials with a low specular response at F(0). Here's a picture showing a surface with F(0) = 0.05 and a Beckmann NDF at a nearly head-on viewing angle, and another showing the response at a grazing angle:

[Specular.png: specular response at a nearly head-on viewing angle]

[Grazing.png: specular response at a grazing angle]
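
Part of what drives that grazing-angle brightening is the Fresnel term. With Schlick's approximation (the standard approximation, written out here just for reference), reflectance rises from F(0) at normal incidence toward 1.0 at grazing angles:

F(\theta) \approx F(0) + \bigl(1 - F(0)\bigr)\,(1 - \cos\theta)^5

where θ is the angle of incidence.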

Multiplying by a light intensity should be totally fine, as long as you use the same intensity for both your diffuse and your specular.


#4997343 Sorry if this is a stupid question. What would it take to get DirectX on Linux?

Posted by MJP on 04 November 2012 - 05:43 PM

Well, I think this thread has run its course in terms of actually discussing the technical issues of getting DirectX onto Linux. If you guys would like to continue discussing Linux and Windows, feel free to start a new thread in the Lounge.


#4997341 How to use Texture Samplers

Posted by MJP on 04 November 2012 - 05:40 PM

[quote]I think Samplers are part of the Effects framework. I'm not sure why that code even compiles.[/quote]


Yes, declaring sampler states this way requires using the Effects framework. It compiles because the shader compiler also handles compiling effects, so some of the effect syntax is part of HLSL (which is pretty unfortunate, IMO).
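
Without Effects you declare the sampler in HLSL and create the state object from application code. A minimal sketch, assuming D3D11 (the names here are placeholders, and 'device'/'context' are your ID3D11Device and context):

// HLSL side would declare: SamplerState LinearSampler : register(s0);
D3D11_SAMPLER_DESC sampDesc = {};
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MaxAnisotropy = 1;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;

ID3D11SamplerState* samplerState = nullptr;
device->CreateSamplerState(&sampDesc, &samplerState);
context->PSSetSamplers(0, 1, &samplerState); // slot 0 matches register(s0)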


#4996998 Questions about physically based shading

Posted by MJP on 03 November 2012 - 04:35 PM

1. The point of "normalization" with regards to a BRDF is to ensure that the energy being reflected off a surface (radiant exitance) is less than or equal to the amount of energy that's hitting the surface (irradiance). If that inequality doesn't hold you've violated energy conservation, and it will be possible for a surface to "gain" energy. Now keep in mind that this doesn't mean that the energy reflecting off the surface has to be less than or equal to the intensity of an analytical light source, which seems to be what you're thinking. The radiant exitance is calculated by integrating the exit radiance over all possible viewing directions in the hemisphere surrounding a point's surface normal, while the specular reflection you calculate in a shader is just the radiance for one possible viewing direction.

So for instance, if you have a directional light of intensity 1.0 that's pointing straight at a surface, the resulting irradiance is equal to 1.0 * (N dot L), which is 1.0. With a normalized specular term you will actually get values that are way higher than 1.0 for certain viewing directions, while for most viewing directions you will get much lower values. But if the term is properly normalized, you should end up with a value less than or equal to 1.0 if you were to integrate the specular result over all possible viewing directions.
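
In symbols (standard radiometric notation, written out here for reference): for a BRDF f, energy conservation requires that for any light direction l, integrating over all view directions v in the hemisphere Ω around the normal n satisfies

\int_{\Omega} f(\mathbf{l}, \mathbf{v}) \,(\mathbf{n} \cdot \mathbf{v}) \, d\omega_{\mathbf{v}} \le 1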

If you're not familiar with this yet, I would suggest reading the relevant sections of Real-Time Rendering, 3rd edition and Physically-Based Rendering. There's also Principles of Digital Image Synthesis (which is free), which has an overview of radiometry and the relevant integrals in chapter 13.

2. Real metallic surfaces don't have a "diffuse" response to light. Their response is purely specular, although it may require multiple specular lobes to properly approximate the real-world response. To render this sort of material and have it look good, you really need to include specular reflections from the entire environment and not just from analytical light sources; otherwise it will just be black with a few highlights, which doesn't look very metallic. Typically games will use reflection probes stored as cubemaps to approximate environment reflections, perhaps pre-convolved with a kernel that's meant to approximate the specular BRDF.

For low-intensity specular values, you may want to consider rescaling the texture value in your shader. For instance, if you multiply the sampled value by 0.04, then a specular intensity of 0.02 would be represented by a texture value of 0.5 and you'd have a lot of precision available for small variations.


#4996929 Sorry if this is a stupid question. What would it take to get DirectX on Linux?

Posted by MJP on 03 November 2012 - 12:18 PM

[quote]M$ is losing popularity but I doubt it will be Linux that takes the place of Windows if M$ goes under. I love the GNU community to death, but it's a sandbox. End users want completed products. If M$ gives up the desktop OS struggle their place will be taken by some other company that stands on the shoulders of giants (steals GNU work) and charges enormous sums of cash for defective and/or corporate-interest driven products.[/quote]


Normally this is the sort of forum where we can have an adult discussion about a Microsoft product without having to insert a dollar sign into their name. Perhaps you could help keep it that way? Pretty please?


#4996776 error X4502: invalid vs_4_0 output semantic 'SV_TARGET'

Posted by MJP on 02 November 2012 - 11:40 PM

You're compiling the shader with the wrong profile. You're compiling it with vs_4_0, and you want ps_4_0.
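
For reference, the compile call might look like this (a sketch; the filename and the "PSMain" entry point are placeholders):

ID3DBlob* psBytecode = nullptr;
ID3DBlob* errorMessages = nullptr;
HRESULT hr = D3DCompileFromFile(L"MyShader.hlsl", nullptr, nullptr,
                                "PSMain", "ps_4_0", // pixel shader profile, not vs_4_0
                                0, 0, &psBytecode, &errorMessages);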


#4996676 DirectX11 Shader file problem

Posted by MJP on 02 November 2012 - 03:51 PM

Saying your program is "crashing" isn't particularly helpful. If it crashes in the debugger you will have access to plenty of useful information about the crash, such as the current stack trace, the exception code, or the address of the memory it was attempting to access. For debugging D3D, you should also ensure that you create the device with the D3D11_CREATE_DEVICE_DEBUG flag specified for debug builds. This will cause the runtime to output messages when you screw something up.
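
Device creation with that flag might look like this (a sketch, assuming you don't need to request a specific feature level):

UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG; // runtime emits validation messages
#endif
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                               nullptr, 0, D3D11_SDK_VERSION,
                               &device, nullptr, &context);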

Anyway, your problem is that you're also specifying an additional input (color) for your vertex shader, not just an output. Your input layout only has a position element, and when you attempt to draw with a vertex shader that expects position + color, the draw will fail. I was able to figure this out just by looking at your code, but in general it always helps to give more information. And enabling the debug device is just good practice; you will catch a lot of errors with it in the future.
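
A layout matching a position + color vertex shader input might look like this (a sketch; the formats depend on your vertex struct, and 'vsBytecode' is the compiled vertex shader blob):

ID3D11InputLayout* inputLayout = nullptr;
const D3D11_INPUT_ELEMENT_DESC layout[] =
{
    // Per-vertex position, tightly packed at the start of the vertex
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    // Per-vertex color, placed immediately after the position
    { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0,
      D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
device->CreateInputLayout(layout, ARRAYSIZE(layout),
                          vsBytecode->GetBufferPointer(),
                          vsBytecode->GetBufferSize(), &inputLayout);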


#4996458 Do we need anisotropic filtering for MagFilter?

Posted by MJP on 02 November 2012 - 01:54 AM

Anisotropic filtering is only applicable for minification.


#4996430 XNA's Future

Posted by MJP on 01 November 2012 - 10:57 PM

[quote]Actually Mike, XNA 4 did remove XBox 360 features, such as point sprites. It was bullshit reasoning to do so, because the 360 hardware was static. But they abandoned that, because they were busy looking forward to their next half baked idea, WP7.[/quote]


I remember when I was first told about XNA, one of the developers told me that in all of their profiling, just drawing quads was faster than using point sprites on the 360. Not that I'm defending the removal of any features in the name of cross-platform support. PC was the platform that *really* got the shaft when it came to cross-platform stuff, between the eDRAM emulation, the lack of proper depth buffer support, and forcing floating-point textures to use point filtering just because the 360 didn't support it. It was also pretty weird that HiDef essentially required DX10-capable hardware, but couldn't support any DX10-level features.


#4996095 Sorry if this is a stupid question. What would it take to get DirectX on Linux?

Posted by MJP on 31 October 2012 - 11:54 PM

The stuff that goes on between a D3D11 app and the GPU is actually pretty complicated. There's the D3D runtime provided by Microsoft, the user-mode and kernel-mode components of the driver, and a complex display driver model that forms the backbone of DXGI and D3D11. You'd have to somehow replace all of that if you wanted a D3D11 app to run on Linux, which is a monumental task if you're not doing it with the help of the hardware vendors. Intercepting D3D calls and translating them into OpenGL is a lot easier, but less efficient, and even then it's not always a simple mapping between the two APIs.


#4995932 Is DirectX Necessary?

Posted by MJP on 31 October 2012 - 02:30 PM

Probably not necessary, but learning it will definitely make you a more capable programmer. As a gameplay programmer you probably wouldn't interact with graphics very much, but it could definitely happen. And if you do need to do something graphics-related, you'll be a lot better at it if you have at least some background knowledge of how graphics APIs and GPUs work.


#4995572 How do you make an image's transparent areas appear transparent in game?

Posted by MJP on 30 October 2012 - 02:54 PM

You probably want to use a blend state that has alpha blending enabled. When you initialize your program, create a blend state with the following settings:

D3D11_BLEND_DESC blendDesc = {};
blendDesc.AlphaToCoverageEnable = false;
blendDesc.IndependentBlendEnable = true;
for (UINT i = 0; i < 8; ++i)
{
    // Standard "src alpha, inverse src alpha" blending for color,
    // additive blending for the alpha channel
    blendDesc.RenderTarget[i].BlendEnable = true;
    blendDesc.RenderTarget[i].BlendOp = D3D11_BLEND_OP_ADD;
    blendDesc.RenderTarget[i].BlendOpAlpha = D3D11_BLEND_OP_ADD;
    blendDesc.RenderTarget[i].SrcBlend = D3D11_BLEND_SRC_ALPHA;
    blendDesc.RenderTarget[i].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
    blendDesc.RenderTarget[i].SrcBlendAlpha = D3D11_BLEND_ONE;
    blendDesc.RenderTarget[i].DestBlendAlpha = D3D11_BLEND_ONE;
    blendDesc.RenderTarget[i].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
}

ID3D11BlendState* blendState = nullptr;
device->CreateBlendState(&blendDesc, &blendState); // 'device' is your ID3D11Device

Then before you draw, set that blend state on the context using OMSetBlendState.
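
With nullptr for the blend factor and 0xFFFFFFFF for the sample mask (the usual defaults), that call looks like this:

context->OMSetBlendState(blendState, nullptr, 0xFFFFFFFF);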


#4995272 Mapping ID3D11Texture2D problem

Posted by MJP on 29 October 2012 - 08:37 PM

You have to copy the data into the pointer provided by D3D11_MAPPED_SUBRESOURCE, not just copy the pointer itself. This is different from when you initialize a texture with data, where you just give it a pointer. You also need to mind the pitch of the texture, which could be padded due to hardware requirements. Usually you do something like this for a 2D texture:

// Assumes 'texture' was created with D3D11_USAGE_DYNAMIC and
// D3D11_CPU_ACCESS_WRITE, and 'context' is your device context
D3D11_MAPPED_SUBRESOURCE data;
context->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &data);

const uint32_t pixelSize = 12; // size of one DXGI_FORMAT_R32G32B32_FLOAT texel
const uint32_t srcPitch = pixelSize * textureWidth;
uint8_t* textureData = reinterpret_cast<uint8_t*>(data.pData);
const uint8_t* srcData = reinterpret_cast<const uint8_t*>(mBlendMap.data());
for (uint32_t i = 0; i < textureHeight; ++i)
{
    // Copy the texture data for a single row
    memcpy(textureData, srcData, srcPitch);

    // Advance the pointers (RowPitch may be larger than srcPitch due to padding)
    textureData += data.RowPitch;
    srcData += srcPitch;
}

context->Unmap(texture, 0);



#4995171 Creating custom SpriteBatch class in SlimDX. Pseudo-code or design notes ques...

Posted by MJP on 29 October 2012 - 03:48 PM

The ideal implementation depends a bit on what hardware and D3D version you're targeting. Dynamic vertex buffers work fine, but on newer GPUs you can also make use of instancing and geometry shaders to let the CPU do less work.

I know that you said you don't want code, but if you'd like, you can take a look at the SpriteBatch implementation that Shawn Hargreaves made for DirectXTK. It could at least give you an idea of the kinds of optimizations that are possible.


#4995129 Problem Setting Constant Buffers

Posted by MJP on 29 October 2012 - 12:57 PM

XMMATRIX alignment requirements only apply if you're going to use it with SSE instructions. Using it in a constant buffer layout struct isn't really a great idea, but it won't actually cause problems in this case. The only thing you'd have to be aware of is that it will get aligned to a 16-byte boundary in your C++ struct, so you'd have to make sure that it ends up being at the same offset as the variable in your constant buffer.
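
A quick way to sanity-check that is a compile-time assert on the member offsets (a sketch; the struct and offsets here are hypothetical, assuming the DirectXMath headers):

#include <DirectXMath.h>
#include <cstddef>

struct PerObjectConstants
{
    DirectX::XMMATRIX World;          // 16-byte aligned, 64 bytes
    DirectX::XMMATRIX WorldViewProj;  // should sit at offset 64 to match HLSL
};

// Fails to compile if alignment padding pushes the member off the cbuffer offset
static_assert(offsetof(PerObjectConstants, WorldViewProj) == 64,
              "C++ layout doesn't match the HLSL cbuffer layout");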



