About belfegor
Rank: Advanced Member
Content count: 813
Community Reputation: 2835 (Excellent)
Interests: Art
11801 profile views
-
I kinda made some progress. I found their original material and inspected it; I was missing the transform of the reflection vector into tangent space. In Unreal: In my test program: I have a seam error somehow (might be a problem with my mesh exporter), and I could not get a correct result with CLAMP addressing (the image above uses WRAP).

```hlsl
void main_ps(in VERTEX_OUT IN, out PIXEL_OUT OUT)
{
    float3x3 tangentBasis = float3x3(normalize(IN.Tangent),
                                     normalize(IN.Binormal),
                                     normalize(IN.Normal));
    float2 uv = IN.TexCoord0;

    static const float AddR = 8.0f;
    static const float MulR = 0.6f;

    float3 toEye = camPos.xyz - IN.WorldPos.xyz;
    float3 v = normalize(toEye);
    float3 n = normalize(IN.Normal);
    float3 R = reflect(v, n);
    float3 rvec = mul(R, tangentBasis); // world -> tangent space

    float rx = rvec.x;
    float rz = sqrt((rvec.z + AddR) * MulR);
    float xcoord = (rx / rz) + 0.5f;
    float2 coord = float2(xcoord, uv.y);

    float3 lightFalloff = tex2D(lightSamp, coord).rgb;
    float3 lightCol = float3(1.0f, 1.0f, 1.0f);
    OUT.Color = float4(lightCol, lightFalloff.r);
}
```
-
I downloaded Unreal Engine to test it. I can't get it to work even there: I even created the same material as shown in the tutorial:
-
I am interested in this as well. I tried to write HLSL from what is going on in their "material node graph" image:

```hlsl
void main_ps(in VERTEX_OUT IN, out PIXEL_OUT OUT)
{
    static const float AddR = 8.0f;
    static const float MulR = 0.8f;

    float3 vvec = camPos.xyz - IN.WorldPos;
    float3 v = normalize(vvec);
    float3 n = normalize(IN.Normal);
    float3 rvec = reflect(v, n);

    float rx = rvec.x; // was "float3 rx" -- the x component is a scalar
    float rz = sqrt((rvec.z + AddR) * MulR);
    float xcoord = (rx / rz) + 0.5f;
    float2 coord = float2(xcoord, IN.TexCoord0.y);

    float3 lightFalloff = tex2D(lightSamp, coord).rgb;
    float3 lightCol = float3(1.0f, 1.0f, 1.0f);
    OUT.Color = float4(lightCol, lightFalloff.r);
}
```

but I get the wrong result. I copied their image to use as the light texture. If you get it solved, please share.
-
DX11 How to make downsampling with directx 11 ?
belfegor replied to theScore's topic in Graphics and GPU Programming
I am confused as to what to believe is true now. For example, I am looking at MJP's shadow sample project, where he downsamples/scales a texture; there are no "pixel offsets" applied, just a bilinear filter:

quad verts:

```cpp
QuadVertex verts[4] =
{
    { XMFLOAT4( 1,  1, 1, 1), XMFLOAT2(1, 0) },
    { XMFLOAT4( 1, -1, 1, 1), XMFLOAT2(1, 1) },
    { XMFLOAT4(-1, -1, 1, 1), XMFLOAT2(0, 1) },
    { XMFLOAT4(-1,  1, 1, 1), XMFLOAT2(0, 0) }
};
```

quad vertex shader:

```hlsl
VSOutput QuadVS(in VSInput input)
{
    VSOutput output;
    // Just pass it along
    output.PositionCS = input.PositionCS;
    output.TexCoord = input.TexCoord;
    return output;
}

// Uses hw bilinear filtering for upscaling or downscaling
float4 Scale(in PSInput input) : SV_Target
{
    return InputTexture0.Sample(LinearSampler, input.TexCoord);
}
```
-
DX11 How to make downsampling with directx 11 ?
belfegor replied to theScore's topic in Graphics and GPU Programming
I thought we didn't need the pixel offset since DX11? -
"But how can your position be full zero in the first place?" Great question. Because I cleared that texture with 0's. I am now thinking about what a better clear value would be to avoid future problems.
-
That's it.

```hlsl
float3 v = normalize(-(posVS.xyz + 0.0001f.xxx));
```

Thank you very much.
-
@iedoc We don't know how normalize is implemented in HLSL. But still, I thought about that before and even tried this:

```hlsl
if (IsNAN(posVS))
    posVS = float4(1.0f, 1.0f, 1.0f, 1.0f);
if (any(isinf(posVS)))
    posVS = float4(1.0f, 1.0f, 1.0f, 1.0f);
```

The rest of the code is the same, but same results. @galop1n I can't use the native isnan:

warning X3577: value cannot be NaN, isnan() may not be necessary. /Gis may force isnan() to be performed

and I CAN'T set the /Gis option in the VS2013 fxc compiler options.
-
That gives me the same results as my code.
-
After getting "black squares" rendered by my "bloom" post process: So I searched and read this thread, where I think L. Spiro had the same problem. Then I searched my HLSL programs to find where I could have NaNs. I found one problem in my point light shader:

```hlsl
float4 posVS = gbPosition_tex.Sample(sampPointClamp, tc);

// this has no effect!
if (IsNAN(posVS))
    posVS = float4(0.0f, 0.0f, 0.0f, 1.0f);
if (any(isinf(posVS)))
    posVS = float4(0.0f, 0.0f, 0.0f, 1.0f);

... // code skipped, nothing else touches posVS

float3 v = normalize(-posVS.xyz);

// but if i uncomment this there are NO more black squares!
//if (IsNAN(v))
//    v = float3(0.0f, 0.0f, 0.0f);

... // rest of lighting computation
```

Please read the comments in the code. IsNAN is defined like this (I can't use the "native" isnan function because I can't set the /Gis option inside VS2013 for the fxc compiler options):

```hlsl
bool IsNAN(float n)  { return (n < 0.0f || n > 0.0f || n == 0.0f) ? false : true; }
bool IsNAN(float2 v) { return IsNAN(v.x) || IsNAN(v.y); }
bool IsNAN(float3 v) { return IsNAN(v.x) || IsNAN(v.y) || IsNAN(v.z); }
bool IsNAN(float4 v) { return IsNAN(v.x) || IsNAN(v.y) || IsNAN(v.z) || IsNAN(v.w); }
```

wtf is going on?
-
Question about hardware instancing and HLSL semantics
belfegor replied to GuyWithBeard's topic in Graphics and GPU Programming
Can you elaborate on why it is better to use a structured buffer for instance data? Thanks. -
For example, in a deferred renderer, in the pixel shader I would like to write the distance from the camera to the pixel (read from a G-buffer R32F texture) at the current cursor position. What kind of buffers could I use (if any, except render targets), and how do I set those up to be able to access that value on the CPU side? With Google I couldn't find any example code, and I don't know what keywords to search for properly. I tried to write some code, but I am stuck since I don't know the correct "semantic" for the shader-code side.

c++:

```cpp
struct MyStruct
{
    vec4 myData;
};
...
std::memset(&buffDesc, 0, sizeof(buffDesc));
buffDesc.Usage = D3D11_USAGE_DEFAULT;
buffDesc.ByteWidth = sizeof(MyStruct);
buffDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
buffDesc.CPUAccessFlags = 0;
hr = dev->CreateBuffer(&buffDesc, nullptr, &m_writeBuffer);
if (FAILED(hr)) { ... }

std::memset(&buffDesc, 0, sizeof(buffDesc));
buffDesc.Usage = D3D11_USAGE_STAGING;
buffDesc.ByteWidth = sizeof(MyStruct);
buffDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
buffDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
hr = dev->CreateBuffer(&buffDesc, nullptr, &m_readBuffer);
if (FAILED(hr)) { ... }

... // after drawing
Context->CopyResource(m_readBuffer, m_writeBuffer);

MyStruct s;
D3D11_MAPPED_SUBRESOURCE ms;
Context->Map(m_readBuffer, 0, D3D11_MAP_READ, 0, &ms);
s.myData = *(vec4*)ms.pData;
Context->Unmap(m_readBuffer, 0);
... // use myData
```

hlsl:

```hlsl
??? MyStruct : register(???)
{
    float4 myData;
};

void PixelShader(...)
{
    ...
    myData = something;
    ...
}
```
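For the shader side, a hedged sketch of one common approach (not a confirmed answer): a constant-buffer-style declaration cannot be written from a shader; the usual route is a structured buffer bound as a UAV. On the C++ side that would mean D3D11_BIND_UNORDERED_ACCESS plus D3D11_RESOURCE_MISC_BUFFER_STRUCTURED (with StructureByteStride set) on the default-usage buffer, BindFlags = 0 on the staging copy (staging resources may not have bind flags), and binding the UAV with OMSetRenderTargetsAndUnorderedAccessViews. The cursorPos constant, the helper call, and the u1 slot below are assumptions for illustration:

```hlsl
// Hypothetical sketch: structured buffer written from the pixel shader.
// Slot u1 because UAV slots below the render-target count are reserved
// for the bound render targets.
struct MyStruct
{
    float4 myData;
};
RWStructuredBuffer<MyStruct> outputBuffer : register(u1);

cbuffer PickCB : register(b1)
{
    int2 cursorPos; // current cursor position in pixels (assumed)
};

float4 PS(float4 svPos : SV_Position, float2 tc : TEXCOORD0) : SV_Target
{
    float dist = SampleDistanceFromGBuffer(tc); // placeholder for the existing G-buffer read
    if (all(int2(svPos.xy) == cursorPos))       // only the texel under the cursor writes
        outputBuffer[0].myData = float4(dist, 0.0f, 0.0f, 0.0f);
    return float4(0.0f, 0.0f, 0.0f, 0.0f);
}
```

After the draw, CopyResource into the staging buffer and Map it as in the C++ snippet above.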
-
I was using DX9 for a long time. Before, I could use a depth/stencil buffer with a render target of a smaller size without a problem. Now I get this error:

D3D11 ERROR: ID3D11DeviceContext::OMSetRenderTargets: The RenderTargetView at slot 0 is not compatable with the DepthStencilView. DepthStencilViews may only be used with RenderTargetViews if the effective dimensions of the Views are equal, as well as the Resource types, multisample count, and multisample quality. The RenderTargetView at slot 0 has (w:640,h:360,as:1), while the Resource is a Texture2D with (mc:1,mq:0). The DepthStencilView has (w:1280,h:720,as:1), while the Resource is a Texture2D with (mc:1,mq:0). D3D11_RESOURCE_MISC_TEXTURECUBE factors into the Resource type, unless GetFeatureLevel() returns D3D_FEATURE_LEVEL_10_1 or greater. [ STATE_SETTING ERROR #388: OMSETRENDERTARGETS_INVALIDVIEW]

I thought I could reuse the same DS. Such a waste if I have to create a DS for every RT that doesn't match in size.
-
view frustum culling particle effects?
belfegor replied to Anddos's topic in Graphics and GPU Programming
Check each point and grab the min/max; the center is (min + max) * 0.5. Better yet, approximate the size so you don't need to recalculate every frame.
-
Use CreateTexture to create a new texture, then use GetSurfaceLevel to grab its surface, then use UpdateSurface to copy to it from your surface. Though I don't understand why you even load a surface from a file instead of a texture?