petitrabbit

Member
  • Content count

    7
  • Joined

  • Last visited

Community Reputation

3 Neutral

1 Follower

About petitrabbit

  • Rank
    Newbie

Personal Information

  • Role
    Programmer
  • Interests
    Programming

  1. Sounds like you're trying to build a color histogram (if I'm not misunderstanding your question). Since WebGL doesn't have compute shaders, retrieving the texels on the CPU (using glReadPixels) and building the histogram on the CPU side is probably the easiest way to solve your problem. You'll have to iterate over each byte to count how many colors there are in your image in any case.
  2. petitrabbit

    Removing Youtube Tutorials

    I don't think the whole subscription thing is a good idea. Even if your video tutorials are really good, there will always be free tutorials somewhere else on the internet, and most of the time people would rather read a free, poorly written article than spend money on decent content. Moving the content to your own website and maybe asking for donations (either Patreon or a simple PayPal button) would be better. You should also watermark your content if you think people will re-upload your work.
  3. As far as I know, memory transaction ordering is handled by the API itself, which is why you don't need memory barriers on the CPU side.
  4. petitrabbit

    Physically Accurate Material Layering

    Right now I'm using a simple texture scaling (basically MyTexture.Sample( Sampler, ( uvCoordinates + Offset ) * Scale )), which looks wrong. Procedural noise might be a little bit costly if done directly on the GPU? I was thinking about generating a random noise kernel on the CPU once (at application initialization), uploading it to the GPU as a texture, and sampling that noise to randomize the texture coordinates (a rough sketch of that idea follows at the end of this post). I guess it would be worth checking whether a texture sample costs more or less than procedural GPU noise. Thanks for the link, definitely worth checking out.

    I have noticed a huge mistake in my normal mapping (I swapped the binormal and the tangent in my tangent-to-world-space matrix), which explains why the flake normals seemed strange. I guess the material still needs some tweaking (it doesn't reach my expectations yet), but at least it's closer than the previous version I posted. I haven't fixed IBL yet, which is why the in-game screenshot doesn't match the editor one. As a small test, I tried carbon fiber rendering with clear coat, which doesn't look too bad compared to modern rendering engines (I guess).

    EDIT: I've just noticed I haven't posted the shading code yet. So here it is:

    float3 DoShading( in float3 L, in LightSurfaceInfos surface )
    {
        const float3 H = normalize( surface.V + L );
        float LoH = saturate( dot( L, H ) );
        float NoH = saturate( dot( surface.N, H ) );

        float clearCoatRoughness = 1.0f - surface.ClearCoatGlossiness;
        float clearCoatLinearRoughness = max( 0.01f, ( clearCoatRoughness * clearCoatRoughness ) );

        // Clear Coat Specular BRDF (returns (distribution * fresnel * visibility))
        float3 clearCoatSpecular = ComputeClearCoatSpecular( NoH, LoH, clearCoatLinearRoughness );

        // IOR = 1.5 -> F0 = 0.04
        static const float ClearCoatF0 = 0.04;
        static const float ClearCoatIOR = 1.5;
        static const float ClearCoatRefractionIndex = 1.0 / ClearCoatIOR;

        // NOTE: clearCoatFresnel was used below but never defined in the original post;
        // assuming a Schlick Fresnel at the coat interface, scaled by the coat amount.
        float clearCoatFresnel = Fresnel_Schlick( ClearCoatF0, 1.0f, LoH ).r * surface.ClearCoat;

        float3 RefractedL = refract( -L, -H, ClearCoatRefractionIndex );
        float3 RefractedV = refract( -surface.V, -H, ClearCoatRefractionIndex );
        float3 RefractedH = normalize( RefractedV + RefractedL );

        float NoL = saturate( dot( surface.N, RefractedL ) ) + 1e-5f;
        float NoV = saturate( dot( surface.N, RefractedV ) ) + 1e-5f;
        NoH = saturate( dot( surface.N, RefractedH ) );
        LoH = saturate( dot( RefractedL, RefractedH ) );

        float3 layerAbsorption = ( 1.0 - surface.ClearCoat ) + surface.Albedo * ( surface.ClearCoat * ( 1.0 / surface.FresnelColor ) );
        float layerReflectionFactor = GetReflectionFactor( NoV, NoL, surface.LinearRoughness );
        float3 layerAttenuation = ( ( 1.0f - clearCoatFresnel ) * layerReflectionFactor ) * layerAbsorption;

        // Diffuse BRDF
        float diffuse = Diffuse_Disney( NoV, NoL, LoH, surface.DisneyRoughness ) * INV_PI;

        // Specular BRDF
        float3 fresnel = Fresnel_Schlick( surface.FresnelColor, surface.F90, LoH );
        float visibility = Visibility_SmithGGXCorrelated( NoV, NoL, surface.LinearRoughness );
        float distribution = Distribution_GGX( NoH, surface.LinearRoughness );
        float3 baseSpecular = distribution * fresnel * visibility * INV_PI;

        float3 specular = clearCoatSpecular + baseSpecular * layerAttenuation;
        return ( ( diffuse * surface.Albedo ) + specular );
    }
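    For reference, here is a rough sketch of the noise-kernel idea above (only an illustration, not engine code: the texture name, register slots and jitter strength are placeholders I made up). The flake UVs get jittered with a single fetch from a small pre-generated noise texture:

    // Hedged sketch: FlakesNoiseTexture is assumed to be a small random texture
    // generated once on the CPU at init time and uploaded to the GPU.
    Texture2D    FlakesNoiseTexture : register( t8 );   // placeholder slot
    SamplerState WrapSampler        : register( s0 );   // placeholder sampler

    float2 JitterFlakeUV( float2 uvCoordinates, float2 Offset, float Scale )
    {
        // One cheap texture fetch instead of evaluating procedural noise on the GPU.
        float2 noise = FlakesNoiseTexture.Sample( WrapSampler, uvCoordinates * Scale ).rg;

        // Remap [0, 1] noise to a small signed offset and add it on top of the
        // regular offset/scale already applied to the flake normal map.
        float2 jitter = ( noise * 2.0f - 1.0f ) * 0.05f;   // arbitrary jitter strength
        return ( uvCoordinates + Offset + jitter ) * Scale;
    }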
  5. petitrabbit

    Physically Accurate Material Layering

    Sure. I switched to a skybox instead of the atmosphere I was previously using, since my reflection probe capture is half-broken right now (which broke the IBL as well).

    Without flecks normal map:

    With flecks normal map:

    I'm still not satisfied with the metal flake rendering (probably because my normal map is not that great, or maybe there is something wrong in my shading function...). I will update the thread if I find anything.
  6. petitrabbit

    Physically Accurate Material Layering

    Thanks for the detailed explanations! I've modified my shading function based on your post, and it seems everything works correctly (except the absorption, which I haven't looked into yet).

    Regarding the specular reflections: if you consider a regular IBL-only setup (no SSR), is it technically correct to evaluate the IBL specular cubemap twice (once for the metallic layer and once for the clear coat)? Since IBL is supposed to act as an ambient term, it seems odd to me... (your typical Forward+ light evaluation, nothing fancy here)

    [...]
    surfaceLighting.rgb += evaluateIBLDiffuse( surface.V, surface.N, surface.R, surface.Roughness, surface.NoV ) * surface.Albedo;

    // Compute Specular Reflections for the ClearCoat Layer
    // IOR = 1.5 -> F0 = 0.04
    float ClearCoatF0 = 0.04f;
    float ClearCoatRoughness = 1.0f - surface.ClearCoatGlossiness;
    float ClearCoatLinearRoughness = ClearCoatRoughness * ClearCoatRoughness;

    float ClearCoatNoV = dot( surface.V, surface.ClearCoatOrangePeel );
    float ClearCoatFresnel = Fresnel_Schlick( ClearCoatF0, 1.0f, ClearCoatNoV ).r * surface.ClearCoat;
    float LightTransmitAmt = ( 1.0f - ClearCoatFresnel );

    float3 ClearCoatSpecular = evaluateIBLSpecular( ClearCoatF0, ( 1.0f - surface.ClearCoat ), surface.ClearCoatOrangePeel, ClearCoatNoV, ClearCoatLinearRoughness, ClearCoatRoughness, dot( surface.N, surface.V ) );

    // Compute Specular Reflections for the layer below
    float3 MetallicSpecular = evaluateIBLSpecular( surface.FresnelColor, surface.F90, surface.N, surface.R, surface.LinearRoughness, surface.Roughness, surface.NoV )
                            * computeSpecOcclusion( surface.NoV, surface.AmbientOcclusion, surface.LinearRoughness );

    surfaceLighting.rgb += ( ClearCoatSpecular + MetallicSpecular * LightTransmitAmt );
    return surfaceLighting;

    // https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf
    float3 evaluateIBLSpecular( float3 f0, float f90, in float3 N, in float3 R, in float linearRoughness, in float roughness, float NoV )
    {
        float3 dominantR = getSpecularDominantDir( N, R, roughness );

        // Rebuild the function
        // L . D . ( f0.Gv.(1-Fc) + Gv.Fc ) . cosTheta / ( 4 . NdotL . NdotV )
        NoV = max( NoV, 0.5f / DFG_TEXTURE_SIZE );

        float mipLevel = linearRoughnessToMipLevel( linearRoughness, DFG_MIP_COUNT );
        float3 preLD = iblSpecularTest.SampleLevel( GeometrySampler, fix_cube_lookup( dominantR ), mipLevel ).rgb;

        // Sample the pre-integrated DFG
        // Fc = (1-H.L)^5
        // PreIntegratedDFG.r = Gv.(1-Fc)
        // PreIntegratedDFG.g = Gv.Fc
        float2 preDFG = iblBRDF.SampleLevel( GeometrySampler, float2( NoV, roughness ), 0 ).xy;

        // LD . ( f0.Gv.(1-Fc) + Gv.Fc.f90 )
        return preLD * ( f0 * preDFG.x + f90 * preDFG.y );
    }
  7. A lot of progress has been made in the area of complex, multilayered material rendering (e.g. patina, lacquered wood, car paint). From what I understand, GPUs nowadays have enough compute power to handle multilayered BRDFs in real time, which should technically allow physically accurate layer simulation (by technically accurate I mean accurate light attenuation per layer, energy conservation for the whole material, ...) in real time. A minimal sketch of what I mean by that is at the end of this post.

    So far, I've tried to build my own material layering system (offline shader generation, supports 4 layers, ...). It works OK, but it's nowhere close to being physically realistic, since my blending function between layers (based on The Order: 1886 paper http://www.gdcvault.com/play/1020162/Crafting-a-Next-Gen-Material, page 81) is a naive linear interpolation of each input:

    void BlendLayers( inout MaterialLayer baseLayer, in MaterialLayer topLayer )
    {
        baseLayer.BaseColor = lerp( baseLayer.BaseColor, topLayer.BaseColor * topLayer.DiffuseContribution, topLayer.BlendMask );
        baseLayer.BaseColor = saturate( baseLayer.BaseColor );

        baseLayer.Reflectance = lerp( baseLayer.Reflectance, topLayer.Reflectance * topLayer.SpecularContribution, topLayer.BlendMask );
        baseLayer.Reflectance = saturate( baseLayer.Reflectance );

        baseLayer.Normal = BlendNormals( baseLayer.Normal, topLayer.Normal * topLayer.NormalMapContribution );

        baseLayer.Roughness = lerp( baseLayer.Roughness, topLayer.Roughness * topLayer.SpecularContribution, topLayer.BlendMask );
        baseLayer.Roughness = clamp( baseLayer.Roughness, 1e-3f, 1.0f );

        baseLayer.Metalness = lerp( baseLayer.Metalness, topLayer.Metalness * topLayer.SpecularContribution, topLayer.BlendMask );
        baseLayer.Metalness = clamp( baseLayer.Metalness, 1e-3f, 1.0f );

        baseLayer.AmbientOcclusion = lerp( baseLayer.AmbientOcclusion, topLayer.AmbientOcclusion, topLayer.BlendMask );
        baseLayer.AmbientOcclusion = saturate( baseLayer.AmbientOcclusion );

        baseLayer.Emissivity = lerp( baseLayer.Emissivity, topLayer.Emissivity, topLayer.BlendMask );
    }

    float4 EntryPointPS( psData_t VertexStage ) : SV_TARGET
    {
        float3 N = normalize( VertexStage.normal );

        MaterialLayer BaseLayer;
        float4 LightContribution = float4( 0, 0, 0, 1 );

        MaterialLayer TestLayer1 = GetTestLayer1Layer( VertexStage, N );
        BaseLayer = TestLayer1;

        MaterialLayer TestLayer2 = GetTestLayer2Layer( VertexStage, N );
        BlendLayers( BaseLayer, TestLayer2 );

        LightContribution += ComputeLighting( VertexStage.positionWS.xyz, VertexStage.position.xyz, VertexStage.depth, SHADING_MODEL_LIT, BaseLayer );
        return LightContribution;
    }

    I'm looking to build a car paint material based on this system, split into 3 layers (primer coat, metal flakes and clear coat, as described in the Forza Motorsport 5 session at GDC: https://www.gdcvault.com/play/1020556/Lighting-and-Materials-in-Forza). With my current system, the whole thing looks... lame (and too plastic-ish to be believable). I had a look at the COD:IW paper (https://www.activision.com/cdn/research/s2017_pbs_multilayered_slides_final.pdf), but their approach seems quite complicated (details start at page 29). UE4 handles multi-layering easily, so I guess there is a way to simulate layers cheaply without ending up with NaNs and artifacts all over the place.
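    Below is a minimal sketch of the per-layer attenuation / energy-conservation idea mentioned above (my own illustration, not code from the system described in this post; the function and parameter names are made up). The point is that the base layer only receives whatever energy the clear coat did not already reflect, so the two lobes can never sum to more than the incoming light:

    // Hedged sketch: compose a clear coat lobe over a base layer for one light.
    // coatSpecularNoFresnel is the coat's distribution * visibility term (no Fresnel),
    // baseBRDF is the base layer's full diffuse + specular response.
    float3 ComposeClearCoatOverBase( float3 baseBRDF, float3 coatSpecularNoFresnel, float coatAmount, float LoH )
    {
        static const float ClearCoatF0 = 0.04f;   // IOR = 1.5 -> F0 = 0.04, as in the posts above

        // Schlick Fresnel at the coat interface, scaled by how much coat is present.
        float Fc = ( ClearCoatF0 + ( 1.0f - ClearCoatF0 ) * pow( 1.0f - LoH, 5.0f ) ) * coatAmount;

        // The coat reflects a fraction Fc of the light; only (1 - Fc) reaches the base
        // layer, so the material stays energy conserving instead of being a plain lerp.
        return coatSpecularNoFresnel * Fc + baseBRDF * ( 1.0f - Fc );
    }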