

Member Since 13 Nov 2007
Offline Last Active Mar 10 2016 02:45 AM

#5260125 Can't Link Shader Program

Posted by pcmaster on 02 November 2015 - 09:31 AM

This should give you an answer: http://stackoverflow.com/questions/5366416/in-opengl-es-2-0-glsl-where-do-you-need-precision-specifiers


The default precision in fragment and vertex shaders isn't the same :) Also be aware that you aren't using GLSL but the "OpenGL ES Shading Language" (GLSL ES, if you like). Not the same.
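To make the linked answer concrete: in GLSL ES 1.00, float has no default precision in fragment shaders (it defaults to highp in vertex shaders), so a fragment shader using floats won't compile or link without a precision statement. A minimal sketch (hypothetical uniform/varying names):

```glsl
// GLSL ES 1.00 fragment shader. Without this line, float has NO
// default precision here and compilation/linking fails.
precision mediump float;

varying vec2 vTexCoord;
uniform sampler2D uTexture;

void main()
{
    gl_FragColor = texture2D(uTexture, vTexCoord);
}
```

That asymmetry is why a program can appear fine at the vertex stage and still fail to link once the fragment shader is attached.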

#5143487 Is SuperSampling really a bad choice when going deferred ?

Posted by pcmaster on 31 March 2014 - 09:09 AM

You could do a custom adaptive SSAA, similar to what HW MSAA does. That is, render into a 2x2, 3x3 (ha!) or 4x4 bigger target (and perhaps a 1x1 target too, for faster look-up? or just a custom "down-sampled" version?), and before doing any lighting, identify the blocks that need detailed lighting/shading... Performance won't be as good as with HW MSAA, of course :(

#5132575 Port HLSL to x86 Assembly

Posted by pcmaster on 19 February 2014 - 03:31 AM

DX11 WARP runs very well (slowly, but that's not the problem, right?). However, that's not quite what the OP asked, I guess..

#5129317 Power of normal mapping and texture formats?

Posted by pcmaster on 06 February 2014 - 09:06 AM

DXT (DX9) / BC (DX10+) are block compression schemes. They take 4x4 blocks and, instead of representing them with 4x4 = 16 RGB(A) color values, represent them with only 2 degraded endpoint colors (5:6:5 bits instead of 8:8:8) and 16 interpolation indices (2 bits each). DXT1/BC1, for example, will always save you exactly 7/8 of the memory compared to raw R8G8B8A8_UNORM. The cost is quality, obviously.
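The 7/8 figure is easy to check with plain arithmetic (no D3D involved):

```python
# Memory for one 4x4 texel block, uncompressed R8G8B8A8_UNORM:
uncompressed_bytes = 4 * 4 * 4           # 16 texels x 4 bytes = 64 bytes

# Same block in BC1/DXT1: two 16-bit (5:6:5) endpoint colors
# plus a 2-bit interpolation index per texel.
endpoint_bytes = 2 * 2                   # 2 colors x 16 bits = 4 bytes
index_bytes = (16 * 2) // 8              # 16 texels x 2 bits = 4 bytes
bc1_bytes = endpoint_bytes + index_bytes # 8 bytes per block

saving = 1 - bc1_bytes / uncompressed_bytes
print(bc1_bytes, saving)                 # 8 0.875 -> exactly 7/8 saved
```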


The details vary: http://msdn.microsoft.com/en-us/library/windows/desktop/hh308955%28v=vs.85%29.aspx


For your normal maps, for example, you might want to use non-compressed formats. But it all depends on how it looks; it has to look bad enough first before you reach for a higher-quality solution.

#5128389 RWStructuredBuffer read and write ?

Posted by pcmaster on 03 February 2014 - 05:16 AM

And as of D3D 11.1, you can also bind UAVs to ALL the other shader stages (vertex, hull, ...). When I was looking into this, however, I found a great lack of examples of how to do it exactly. I mean C++ and HLSL examples showing SRVs, RTVs and UAVs of all kinds together in a pixel shader. But I guess you'll figure it out from the debug layer error messages :)
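For what it's worth, here is a minimal sketch of the HLSL side of the pixel-shader case (names and the 1280-wide target are made-up assumptions). On the C++ side the UAV would be bound with ID3D11DeviceContext::OMSetRenderTargetsAndUnorderedAccessViews, where the UAV slots start after the render-target slots:

```hlsl
Texture2D<float4>          gAlbedo : register(t0); // SRV
RWStructuredBuffer<float4> gOutput : register(u1); // UAV; u0 overlaps the RTV slot

float4 main(float4 pos : SV_Position) : SV_Target
{
    uint index = (uint)pos.y * 1280 + (uint)pos.x;   // assumed 1280-wide target
    gOutput[index] = gAlbedo.Load(int3(pos.xy, 0));  // side-channel write from the PS
    return float4(0, 0, 0, 0);                       // SV_Target still goes to the RTV
}
```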

#5125330 SwapChain resize questions

Posted by pcmaster on 21 January 2014 - 07:49 AM

No, you misunderstand. IDXGISwapChain::ResizeBuffers only resizes the swap chain's own buffers. It knows nothing about your depth/stencil buffers. It is DXGI, not D3D. You can have dozens of various depth/stencil target buffers and dozens of color targets, but you only have one swap chain (with two buffers, here). OMSetRenderTargets sets a combination of them for subsequent D3D11 rendering commands, and you can (and eventually will) call it with different combinations during a frame. Do you see what it does?


So, you must also resize (meaning release and recreate) all the other textures you're using as render targets together with the swap chain buffers. That is, resize your depth/stencil too! :)
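A rough sketch of the whole resize flow (hypothetical names, error handling omitted; not a complete program):

```cpp
// On WM_SIZE: everything that must match the new backbuffer size is
// released and recreated by hand; ResizeBuffers only touches the swap chain.
void OnResize(UINT width, UINT height)
{
    context->OMSetRenderTargets(0, nullptr, nullptr); // unbind old views first

    backBufferRTV->Release();
    depthStencilView->Release();
    depthStencilTexture->Release();                   // NOT resized by DXGI!

    // 0 / DXGI_FORMAT_UNKNOWN keep the existing buffer count and format.
    swapChain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0);

    // ...recreate the RTV from the new backbuffer, then recreate the
    // depth/stencil texture + view at width x height and re-bind both.
}
```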

#5124998 SwapChain resize questions

Posted by pcmaster on 20 January 2014 - 03:19 AM

Looks like you aren't setting a depth buffer (the depth-stencil view, the final parameter of OMSetRenderTargets) any more. Provided that you've shown us everything.

#5118140 Problem mapping a DXGI_FORMAT_BC3_UNORM texture

Posted by pcmaster on 19 December 2013 - 09:50 AM

First of all, turn on the debug layer (D3D11_CREATE_DEVICE_DEBUG) and see the output for a more specific error!!!


EDIT: I'm getting:

D3D11 ERROR: ID3D11Device::CreateTexture2D: A D3D11_USAGE_DYNAMIC Resource must have MipLevels equal to 1. [ STATE_CREATION ERROR #102: CREATETEXTURE2D_INVALIDMIPLEVELS]

So, you cannot have 2+ mipmap levels in a dynamic resource. Use a DEFAULT-usage resource for that and update the individual mipmaps with UpdateSubresource, I'd say.

#5117345 Is GPU-to-CPU data transfer a performance bottle-neck?

Posted by pcmaster on 16 December 2013 - 10:19 AM

And do you need to use doubles? Computing your v expression with doubles is slower than with floats (the double division will be really slow) and also converting it to float takes some time.


Maybe you could sort on GPU and keep the sorted array on GPU and use it as an indirect parameter to your rendering, perhaps as an index buffer. That way, you wouldn't have to stall at all.

#5116673 Why "double declare" classes? (Lack of better terminology)

Posted by pcmaster on 13 December 2013 - 07:29 AM

Frob, this was the single funniest thing I've read here in weeks :D ROFL

#5114062 Its all about DirectX and OpenGL?

Posted by pcmaster on 03 December 2013 - 09:52 AM

The "western" console uses an enhanced version of their known graphics API and the "oriental" one uses its own, completely new API; nobody can disclose more, I'm afraid :D Dunno about the other "oriental" console. However, the concepts are really the same and the shading languages are extremely similar, so you can wrap it and port it quite easily.

#5106915 Which programmer is responsible?

Posted by pcmaster on 04 November 2013 - 08:05 AM

Yes, the company's main working languages (e.g. C++ plus Lua, or a similar combo; don't start the flamewar :)). Valid for all programmers.

#4981650 Why are most games not using hardware tessellation?

Posted by pcmaster on 19 September 2012 - 06:05 AM

It isn't completely straightforward to implement if you have to account for "non-standard" meshes - e.g. non-quads, too many adjacent faces/edges, etc. (then you need a lot of pre-computation). Other than that, I don't understand its absence either, but Ashaman has a point. Unfortunately :-(

#4971379 f32tof16 confusion

Posted by pcmaster on 20 August 2012 - 02:21 AM

Or you can use f32tof16 to pack two halves into a uint. Like this:
float2 toBeQuantised = float2(333.333, 666.666);
uint half1 = f32tof16(toBeQuantised.x);
uint half2 = f32tof16(toBeQuantised.y);
uint twoHalves = half1 | (half2 << 16);

But this isn't of much sense or use beyond what Kauna already said :-)
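If it helps to see the packing outside HLSL, here is a Python sketch of the same idea, using struct's half-precision format in place of f32tof16 (an assumption: f32tof16 returns the 16-bit pattern in the low bits of a uint, which is what this emulates):

```python
import struct

def f32_to_f16_bits(x: float) -> int:
    """Round a float to half precision and return its 16-bit pattern."""
    return struct.unpack('<H', struct.pack('<e', x))[0]

half1 = f32_to_f16_bits(333.333)
half2 = f32_to_f16_bits(666.666)
two_halves = half1 | (half2 << 16)   # same packing as the HLSL above

# Unpack both halves to check the round trip (half precision loses accuracy):
lo = struct.unpack('<e', struct.pack('<H', two_halves & 0xFFFF))[0]
hi = struct.unpack('<e', struct.pack('<H', two_halves >> 16))[0]
print(lo, hi)   # 333.25 666.5 -- the quantisation error is visible
```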

#4946972 Structured buffer float compression

Posted by pcmaster on 07 June 2012 - 01:13 AM

Hyunkel, yes, I'm familiar with DX11 compute shaders. You don't necessarily need to use a StructuredBuffer UAV; you can use several UAVs as the outputs of your compute shader. So instead of one stream (array) of packed, interleaved struct data, you might have separate streams (arrays) of the individual struct members: instead of 1 RWStructuredBuffer, you'd have 4 RWBuffers as targets of your compute shader. The main disadvantage I see is that you use 4 target slots instead of 1 (at least 8 should always be supported, if I recall correctly). I believe you can have texture/buffer UAVs as well in cs_5_0 (unlike cs_4_1), but I've actually used RWStructuredBuffer just like you.
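To illustrate the split, a sketch of both variants side by side (hypothetical names and member layout; only one variant would be bound at a time):

```hlsl
// Variant A: one interleaved (AoS) output -- a single UAV slot.
struct Particle { float4 pos; float life; };
RWStructuredBuffer<Particle> gParticles : register(u0);

// Variant B: de-interleaved (SoA) outputs -- one UAV slot per member,
// but each member can use a tightly packed typed format of its own.
RWBuffer<float4> gPositions : register(u1);
RWBuffer<float>  gLifetimes : register(u2);

[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID)
{
    float4 pos  = float4(id.x, 0, 0, 1);  // stand-in values
    float  life = 1.0f;

    Particle p; p.pos = pos; p.life = life;
    gParticles[id.x] = p;                 // A: one structured write

    gPositions[id.x] = pos;               // B: two typed writes
    gLifetimes[id.x] = life;
}
```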