
WFP
GDNet+ Basic · Member
Content count: 111
Community Reputation: 2773 (Excellent)
Twitter: @willp_tweets
6450 profile views
  1. I won't have time to do a full install and all that, but if you can grab a capture with RenderDoc (or a few captures with different deficiencies showing), I might be able to step through the shader that way and see what could be going on. I can't promise when I'll have a lot of focused time to sit down with it, but I'll try to get to it as soon as I can.
  2. Admittedly, I've done very little with stereo rendering, but perhaps this offset needs to be accounted for in your vertex shader?

     float4 stereo = StereoParams.Load(0);
     float separation = stereo.x * (leftEye ? -1 : 1);
     float convergence = stereo.y;
     viewPosition.x += separation * convergence * inverseProj._m00;

     The view ray you create in the vertex shader may need to be adjusted similarly to align correctly. Just kinda guessing at the moment.
  3. For thoroughness's sake, would you mind switching them back to non-negated versions and seeing if that helps? Could you also test with the original rayLength code? Does the game use a right- or left-handed coordinate system?
  4. Is there a particular reason you're negating these values?

     traceScreenSpaceRay(-rayOriginVS, -rayDirectionVS
  5. Crap! Sorry again for the delay, let's see if we can get you sorted out. If this is a full-screen pass, I would suggest trying the position reconstruction from my other post and seeing how that works out for you. Also - are you sure that the water surface is written to the depth buffer? If the game draws it as a transparent surface, it may disable depth writes when drawing it. That would mean you're actually using the land surface underneath the water as the reflection start point. I would first try using the projected view ray I mentioned and see if that gets you anywhere:

     o10 = mul(modelView, v0.xyzw); // viewPosition
     o10 = float4(o10.xy / o10.z, 1.0f, o10.w);

     If you're doing the SSR as part of the water shader itself, i.e. at the same time as drawing the transformed water geometry, you should be able to calculate the view-space position and pass it through to the pixel shader, then use that value directly instead of combining it with a linearized depth value. Let me know if any of that helps.
  6. At first glance, this seems incorrect to me:

     float3 rayOriginVS = viewPosition * linearizeDepth(depth);

     Can you tell me what value you're storing in viewPosition? In my implementation, that line describes a ray from the camera projected all the way to the far clip plane. In the vertex shader:

     // project the view-space position to the far plane
     vertexOut.viewRay = float3(posV.xy / posV.z, 1.0f);

     Using that multiplied by linearized depth lets you reconstruct the view-space position in the pixel shader, which is what you need for rayOriginVS. MJP's entire series is a great resource on reconstructing position from depth, but the last post, https://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/, is closest to what I use for most cases.

     I don't own/haven't played the game, so this is strictly a guess, but to me it looks like the rocks (as well as the boat and the fisherman in it) are billboarded sprites. If that's the case, there's a good chance they aren't even being written to the depth buffer, which means there would be nothing for the ray-tracing step to detect. Are you able to pull a linearized version of the depth buffer that shows what is and isn't actually stored in it? That could be really helpful for debugging.

     P.S. I think you might be the same person that was asking about this (Dirt 3 mod + 3DMigoto) on my blog post on glossy reflections. Sorry I never got back to you - to be completely honest, I got really busy with a few things around then and forgot over time.
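     To illustrate the view-ray idea numerically, here is a small Python sketch (purely illustrative - the values and helper names are made up, not from the game or the shaders above) showing that scaling a far-plane-projected ray by the linear (view-space z) depth recovers the original view-space position:

     ```python
     import numpy as np

     def make_view_ray(pos_v):
         # Vertex shader step: project the view-space position onto the z = 1
         # plane. Every point along the same line of sight maps to this ray.
         return np.array([pos_v[0] / pos_v[2], pos_v[1] / pos_v[2], 1.0])

     def reconstruct_position(view_ray, linear_depth):
         # Pixel shader step: linear_depth is the view-space z stored for
         # this pixel; scaling the ray by it lands back on the surface point.
         return view_ray * linear_depth

     # A hypothetical view-space point on some geometry.
     pos_v = np.array([2.0, -1.5, 10.0])

     ray = make_view_ray(pos_v)
     reconstructed = reconstruct_position(ray, pos_v[2])

     print(np.allclose(reconstructed, pos_v))  # the original position is recovered
     ```

     The same reasoning explains why the rayOriginVS line quoted above only works if viewPosition is that projected ray rather than the full view-space position.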
  7. On mobile right now, but briefly - the HLSL compiler will spit out the same intermediate bytecode regardless of what your system specs are. When your application calls a D3D function to load the shader from the pre-compiled bytecode, it will be compiled again in a vendor-specific way before actually being used. So to answer your question - you can compile a shader on one machine with a dedicated GPU and run it on another machine with an integrated GPU just fine.
  8. Passes are typically run sequentially. If you ran 100 passes, the instructions would be executed serially just like any others. Looking at it at a bit of a finer grain - the CPU instructions will be executed serially, and the program will move along. At some point in the future, the commands that the CPU generates and sends to the GPU will be executed in the order sent. Your program should be no more or less prone to freezing while processing a loop of 100 render passes than it would be in a loop of anything else.
  9. Yep, that's exactly it. When a technique (for example, a blur) has multiple passes, a pass is typically just a separate draw call with either a new shader, updated data, or both.
  10. The Gaussian blur is separable, meaning it can be done in one direction, then the other, and have the same result as if you did a kernel with all surrounding points included. In other words, for a 3x3 blur, you can do an X pass with 3 taps (the middle sample and one on either side) and a Y pass with 3 taps (the middle sample and one above and below) for a total of 6 taps. If you did it in one pass, you would need to sample all nine points, i.e. the center point, the left and right, the above and below, and all four corners - those are values you get for "free" when you break it up into two passes. For a small blur width like the above example, it might be faster to use one pass, but for larger kernels, you can see that the number of samples in the single pass starts to add up. For a 7x7 kernel, you would need 49 taps total for a single pass, but only 14 if you break it into separate passes.
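      The equivalence is easy to verify numerically. Here is a small Python sketch (illustrative only - the naive convolution helper is just for the demo, not production blur code) showing that an X pass followed by a Y pass with a 3-tap 1D kernel reproduces the 9-tap 2D kernel exactly:

      ```python
      import numpy as np
      from itertools import product

      # A 1D Gaussian-like kernel; the full 2D kernel is its outer product
      # with itself, which is exactly what separability exploits.
      k1d = np.array([0.25, 0.5, 0.25])
      k2d = np.outer(k1d, k1d)  # 3x3, 9 taps

      rng = np.random.default_rng(0)
      img = rng.random((16, 16))

      def convolve2d(image, kernel):
          # Naive 'valid' 2D convolution, enough to demonstrate the point.
          kh, kw = kernel.shape
          h, w = image.shape
          out = np.zeros((h - kh + 1, w - kw + 1))
          for y, x in product(range(out.shape[0]), range(out.shape[1])):
              out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
          return out

      # One pass with the 9-tap 2D kernel...
      single_pass = convolve2d(img, k2d)
      # ...equals an X pass then a Y pass with the 3-tap 1D kernel (6 taps total).
      two_pass = convolve2d(convolve2d(img, k1d[np.newaxis, :]),
                            k1d[:, np.newaxis])

      print(np.allclose(single_pass, two_pass))  # True
      ```

      The tap counts scale the same way for bigger kernels: NxN costs N*N taps in one pass but only 2*N split into two.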
  11. According to the documentation (link below), yes, it is a necessity. There is some inheritance for bundles, but between direct command lists the pipeline state is reset. https://msdn.microsoft.com/en-us/library/windows/desktop/dn899196(v=vs.85).aspx#Graphics_pipeline_state_inheritance
  12. Each separate command list should be seen as having its own completely separate state and setup. You should call it from each individually. You'll also need to set your descriptor heaps, etc. (even if they're the same) on each.
  13. Command lists will execute in the order they're submitted, so in your example there's no need for a fence between the two. See the section "Executing Command Lists" in the following link: https://msdn.microsoft.com/en-us/library/windows/desktop/dn899124(v=vs.85).aspx

      Edit: In particular, the part that applies to your question is "Applications can submit command lists to any command queue from multiple threads. The runtime will perform the work of serializing these requests in the order of submission."
  14. Hi all, great work so far in this thread. I've been implementing this technique over the past little while as I've had time, and thought I would dump what I've got so far in case it helps move the discussion along. A few caveats:

      1. This is my first big foray into volume rendering, so I'm all but positive that some of the math is off.
      2. The atmospheric scattering below the clouds is completely fake and just trial-and-error values (you'll see in the shader what I'm doing).
      3. Ambient light is also just trial-and-error values, nothing fancy.
      4. My noise doesn't tile well - I get around this by using a mirrored sampler, but the downside is that the repeating pattern becomes clear pretty quickly if you look at it from the right distance.
      5. No temporal reprojection (yet), so performance is about what you'd expect based on the articles.

      I think items 2 and 3 can be somewhat addressed by looking into the way Uncharted did some of their scattering (mip fog - basically mip map the sky box and use that as a fog lookup based on distance). FreneticPony mentioned this in post 40. I'm hoping some of you can help me with item 1 through code review. Suggestions for item 4 are also welcome, of course.
      Here's what I've cooked up so far:

      #include "../../ConstantBuffers/PerFrame.hlsli"
      #include "../../Utils/DepthUtils.hlsli"
      #include "../../Constants.hlsli"

      Texture3D baseShapeLookup : register(t0);
      Texture3D erosionLookup : register(t1);
      Texture2D weatherLookup : register(t2);
      Texture2D depthTexture : register(t3);

      SamplerState texSampler : register(s0);

      cbuffer cbVolumetricClouds : register(b0)
      {
          float3 cb_lightDirection; // light direction world space
          float cb_groundRadius; // meters - for a ground/lower atmosphere only version, this could be smaller
          float3 cb_sunColor; // color of sun light
          uint cb_baseShapeTextureBottomMipLevel;
          float cb_cloudVolumeStartHeight; // meters - height above ground level
          float cb_cloudVolumeHeight; // meters - height above ground level
          float cb_cloudSpeed;
          float cb_cloudTopOffset;
          float3 cb_windDirection;
          uint cb_erosionTextureBottomMipLevel;
          float3 cb_weatherTexMod; // scale(x), offset(y, z)
          float cb_windStrength;
      };

      static const float VOLUME_END_HEIGHT = cb_cloudVolumeStartHeight + cb_cloudVolumeHeight;
      // planet center (world space)
      static const float3 PLANET_CENTER = float3(0.0f, -cb_groundRadius - 100.0f, 0.0f); // TODO revisit - 100.0f offset is to match planet sky settings
      // radius from the planet center to the bottom of the cloud volume
      static const float PLANET_CENTER_TO_LOWER_CLOUD_RADIUS = cb_groundRadius + cb_cloudVolumeStartHeight;
      // radius from the planet center to the top of the cloud volume
      static const float PLANET_CENTER_TO_UPPER_CLOUD_RADIUS = cb_groundRadius + VOLUME_END_HEIGHT;
      static const float CLOUD_SCALE = 1.0f / VOLUME_END_HEIGHT;
      static const float3 WEATHER_TEX_MOD = float3(1.0f / (VOLUME_END_HEIGHT * cb_weatherTexMod.x), cb_weatherTexMod.y, cb_weatherTexMod.z);
      static const float2 WEATHER_TEX_MOVE_SPEED = float2(cb_windStrength * cb_windDirection.x, cb_windStrength * cb_windDirection.z); // this is modded by app run time
      // samples based on shell thickness between inner and outer volume
      static const uint2 SAMPLE_RANGE = uint2(64u, 128u);
      static const float4 STRATUS_GRADIENT = float4(0.02f, 0.05f, 0.09f, 0.11f);
      static const float4 STRATOCUMULUS_GRADIENT = float4(0.02f, 0.2f, 0.48f, 0.625f);
      static const float4 CUMULUS_GRADIENT = float4(0.01f, 0.0625f, 0.78f, 1.0f); // these fractions would need to be altered if cumulonimbus are added to the same pass

      /**
       * Perform a ray-sphere intersection test.
       * Returns the number of intersections in the direction of the ray (excludes intersections behind the ray origin), between 0 and 2.
       * In the case of more than one intersection, the nearest point will be returned in t.x.
       *
       * http://www.scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-sphere-intersection
       */
      uint intersectRaySphere(
          float3 rayOrigin,
          float3 rayDir, // must be normalized
          float3 sphereCenter,
          float sphereRadius,
          out float2 t)
      {
          float3 l = rayOrigin - sphereCenter;
          float a = 1.0f; // dot(rayDir, rayDir) where rayDir is normalized
          float b = 2.0f * dot(rayDir, l);
          float c = dot(l, l) - sphereRadius * sphereRadius;
          float discriminant = b * b - 4.0f * a * c;
          if(discriminant < 0.0f)
          {
              t.x = t.y = 0.0f;
              return 0u;
          }
          else if(abs(discriminant) - 0.00005f <= 0.0f)
          {
              t.x = t.y = -0.5f * b / a;
              return 1u;
          }
          else
          {
              float q = b > 0.0f ?
                  -0.5f * (b + sqrt(discriminant)) :
                  -0.5f * (b - sqrt(discriminant));
              float h1 = q / a;
              float h2 = c / q;
              t.x = min(h1, h2);
              t.y = max(h1, h2);
              if(t.x < 0.0f)
              {
                  t.x = t.y;
                  if(t.x < 0.0f)
                  {
                      return 0u;
                  }
                  return 1u;
              }
              return 2u;
          }
      }

      float remap(
          float value,
          float oldMin,
          float oldMax,
          float newMin,
          float newMax)
      {
          return newMin + (value - oldMin) / (oldMax - oldMin) * (newMax - newMin);
      }

      float3 sampleWeather(float3 pos)
      {
          return weatherLookup.SampleLevel(texSampler, pos.xz * WEATHER_TEX_MOD.x + WEATHER_TEX_MOD.yz + (WEATHER_TEX_MOVE_SPEED * cb_appRunTime), 0.0f).rgb;
      }

      float getCoverage(float3 weatherData)
      {
          return weatherData.r;
      }

      float getPrecipitation(float3 weatherData)
      {
          return weatherData.g;
      }

      float getCloudType(float3 weatherData)
      {
          // weather b channel tells the cloud type 0.0 = stratus, 0.5 = stratocumulus, 1.0 = cumulus
          return weatherData.b;
      }

      float heightFraction(float3 pos)
      {
          return saturate((distance(pos, PLANET_CENTER) - PLANET_CENTER_TO_LOWER_CLOUD_RADIUS) / cb_cloudVolumeHeight);
      }

      float4 mixGradients(float cloudType)
      {
          float stratus = 1.0f - saturate(cloudType * 2.0f);
          float stratocumulus = 1.0f - abs(cloudType - 0.5f) * 2.0f;
          float cumulus = saturate(cloudType - 0.5f) * 2.0f;
          return STRATUS_GRADIENT * stratus + STRATOCUMULUS_GRADIENT * stratocumulus + CUMULUS_GRADIENT * cumulus;
      }

      float densityHeightGradient(
          float heightFrac,
          float cloudType)
      {
          float4 cloudGradient = mixGradients(cloudType);
          return smoothstep(cloudGradient.x, cloudGradient.y, heightFrac) - smoothstep(cloudGradient.z, cloudGradient.w, heightFrac);
      }

      float sampleCloudDensity(
          float3 pos,
          float3 weatherData,
          float heightFrac,
          float lod)
      {
          pos += heightFrac * cb_windDirection * cb_cloudTopOffset;
          pos += (cb_windDirection + float3(0.0f, -0.25f, 0.0f)) * cb_cloudSpeed * (cb_appRunTime/* * 25.0f*/); // the * 25.0f is just for testing to make the effect obvious
          pos *= CLOUD_SCALE;

          float4 lowFreqNoise = baseShapeLookup.SampleLevel(texSampler, pos, lerp(0.0f, cb_baseShapeTextureBottomMipLevel, lod));
          float lowFreqFBM = (lowFreqNoise.g * 0.625f) + (lowFreqNoise.b * 0.25f) + (lowFreqNoise.a * 0.125f);
          float baseCloud = remap(
              lowFreqNoise.r,
              -(1.0f - lowFreqFBM), 1.0f, // gets about the same results just using -lowFreqFBM
              0.0f, 1.0f);

          float densityGradient = densityHeightGradient(heightFrac, getCloudType(weatherData));
          baseCloud *= densityGradient;

          float cloudCoverage = getCoverage(weatherData);
          float baseCloudWithCoverage = remap(
              baseCloud,
              1.0f - cloudCoverage, 1.0f,
              0.0f, 1.0f);
          baseCloudWithCoverage *= cloudCoverage;

          //// TODO add curl noise
          //// pos += curlNoise.xy * (1.0f - heightFrac);

          float3 highFreqNoise = erosionLookup.SampleLevel(texSampler, pos * 0.1f, lerp(0.0f, cb_erosionTextureBottomMipLevel, lod)).rgb;
          float highFreqFBM = (highFreqNoise.r * 0.625f) + (highFreqNoise.g * 0.25f) + (highFreqNoise.b * 0.125f);
          float highFreqNoiseModifier = lerp(highFreqFBM, 1.0f - highFreqFBM, saturate(heightFrac * 10.0f));

          baseCloudWithCoverage = remap(
              baseCloudWithCoverage,
              highFreqNoiseModifier * 0.2f, 1.0f,
              0.0f, 1.0f);

          return saturate(baseCloudWithCoverage);
      }

      struct VertexOut
      {
          float4 posH : SV_POSITION;
          float3 viewRay : VIEWRAY;
          float2 tex : TEXCOORD;
      };

      // random vectors on the unit sphere
      static const float3 RANDOM_VECTORS[] =
      {
          float3( 0.38051305f,  0.92453449f, -0.02111345f),
          float3(-0.50625799f, -0.03590792f, -0.86163418f),
          float3(-0.32509218f, -0.94557439f,  0.01428793f),
          float3( 0.09026238f, -0.27376545f,  0.95755165f),
          float3( 0.28128598f,  0.42443639f, -0.86065785f),
          float3(-0.16852403f,  0.14748697f,  0.97460106f)
      };

      static const uint LIGHT_RAY_ITERATIONS = 6u;
      static const float RCP_LIGHT_RAY_ITERATIONS = 1.0f / float(LIGHT_RAY_ITERATIONS);

      float beerLambert(float sampleDensity, float precipitation)
      {
          return exp(-sampleDensity * precipitation);
      }

      float powder(float sampleDensity, float lightDotEye)
      {
          float powd = 1.0f - exp(-sampleDensity * 2.0f);
          return lerp(
              1.0f,
              powd,
              saturate((-lightDotEye * 0.5f) + 0.5f)); // [-1,1]->[0,1]
      }

      float henyeyGreenstein(
          float lightDotEye,
          float g)
      {
          float g2 = g * g;
          return ((1.0f - g2) / pow((1.0f + g2 - 2.0f * g * lightDotEye), 1.5f)) * 0.25f;
      }

      float lightEnergy(
          float lightDotEye,
          float densitySample,
          float originalDensity,
          float precipitation)
      {
          return 2.0f * beerLambert(densitySample, precipitation) * powder(originalDensity, lightDotEye) * lerp(henyeyGreenstein(lightDotEye, 0.8f), henyeyGreenstein(lightDotEye, -0.5f), 0.5f);
      }

      // TODO get from cb values - has to change as time of day changes
      float3 ambientLight(float heightFrac)
      {
          return lerp(
              float3(0.5f, 0.67f, 0.82f),
              float3(1.0f, 1.0f, 1.0f),
              heightFrac);
      }

      float sampleCloudDensityAlongCone(
          float3 startPos,
          float stepSize,
          float lightDotEye,
          float originalDensity)
      {
          float3 lightStep = stepSize * -cb_lightDirection;
          float3 pos = startPos;
          float coneRadius = 1.0f;
          float coneStep = RCP_LIGHT_RAY_ITERATIONS;
          float densityAlongCone = 0.0f;
          float lod = 0.0f;
          float lodStride = RCP_LIGHT_RAY_ITERATIONS;
          float3 weatherData = 0.0f;
          float rcpThickness = 1.0f / (stepSize * LIGHT_RAY_ITERATIONS);
          float density = 0.0f;

          for(uint i = 0u; i < LIGHT_RAY_ITERATIONS; ++i)
          {
              float3 conePos = pos + coneRadius * RANDOM_VECTORS[i] * float(i + 1u);
              float heightFrac = heightFraction(conePos);
              if(heightFrac <= 1.0f)
              {
                  weatherData = sampleWeather(conePos);
                  float cloudDensity = sampleCloudDensity(
                      conePos,
                      weatherData,
                      heightFrac,
                      lod);
                  if(cloudDensity > 0.0f)
                  {
                      density += cloudDensity;
                      float transmittance = 1.0f - (density * rcpThickness);
                      densityAlongCone += (cloudDensity * transmittance);
                  }
              }
              pos += lightStep;
              coneRadius += coneStep;
              lod += lodStride;
          }

          // take additional step at large distance away for shadowing from other clouds
          pos = pos + (lightStep * 8.0f);
          weatherData = sampleWeather(pos);
          float heightFrac = heightFraction(pos);
          if(heightFrac <= 1.0f)
          {
              float cloudDensity = sampleCloudDensity(
                  pos,
                  weatherData,
                  heightFrac,
                  0.8f);
              // no need to branch here since density variable is no longer used after this
              density += cloudDensity;
              float transmittance = 1.0f - saturate(density * rcpThickness);
              densityAlongCone += (cloudDensity * transmittance);
          }

          return saturate(lightEnergy(
              lightDotEye,
              densityAlongCone,
              originalDensity,
              lerp(1.0f, 2.0f, getPrecipitation(weatherData))));
      }

      float4 traceClouds(
          float3 viewDirW, // world space view direction
          float3 startPos, // world space start position
          float3 endPos) // world space end position
      {
          float3 dir = endPos - startPos;
          float thickness = length(dir);
          float rcpThickness = 1.0f / thickness;
          uint sampleCount = lerp(SAMPLE_RANGE.x, SAMPLE_RANGE.y, saturate((thickness - cb_cloudVolumeHeight) / cb_cloudVolumeHeight));

          float stepSize = thickness / float(sampleCount);
          dir /= thickness;
          float3 posStep = stepSize * dir;

          float lightDotEye = -dot(cb_lightDirection, viewDirW);
          float3 pos = startPos;
          float3 weatherData = 0.0f;
          float4 result = 0.0f;
          float density = 0.0f;

          for(uint i = 0u; i < sampleCount; ++i)
          {
              float heightFrac = heightFraction(pos);
              weatherData = sampleWeather(pos);
              float cloudDensity = sampleCloudDensity(
                  pos,
                  weatherData,
                  heightFrac,
                  0.0f);

              if(cloudDensity > 0.0f)
              {
                  density += cloudDensity;
                  float transmittance = 1.0f - (density * rcpThickness);
                  float lightDensity = sampleCloudDensityAlongCone(
                      pos,
                      stepSize,
                      lightDotEye,
                      cloudDensity);

                  float3 ambientBadApprox = ambientLight(heightFrac) * min(1.0f, length(cb_sunColor.rgb * 0.0125f)) * transmittance;
                  float4 source = float4((cb_sunColor.rgb * lightDensity) + ambientBadApprox/*+ ambientLight(heightFrac)*/, cloudDensity * transmittance); // TODO enable ambient when added to constant buffer
                  source.rgb *= source.a;
                  result = (1.0f - result.a) * source + result;

                  if(result.a >= 1.0f) break;
              }

              pos += posStep;
          }

          // experimental fog - may not be needed if clouds are drawn before atmosphere - would have to draw sun by itself, then clouds, then atmosphere
          // fogAmt = 0 to disable
          float fogAmt = 1.0f - exp(-distance(startPos, cb_eyePositionW) * 0.00001f);
          float3 fogColor = float3(0.3f, 0.4f, 0.45f) * length(cb_sunColor.rgb * 0.125f) * 0.8f;
          float3 sunColor = normalize(cb_sunColor.rgb) * 4.0f * length(cb_sunColor.rgb * 0.125f);
          fogColor = lerp(fogColor, sunColor, pow(saturate(lightDotEye), 8.0f));

          return float4(clamp(lerp(result.rgb, fogColor, fogAmt), 0.0f, 1000.0f), saturate(result.a));
      }

      float4 main(VertexOut pIn) : SV_TARGET
      {
          int3 loadIndices = int3(pIn.posH.xy, 0);
          float zwDepth = depthTexture.Load(loadIndices).r;
          bool depthPresent = zwDepth < 1.0f;
          float depth = linearizeDepth(zwDepth);
          float3 posV = pIn.viewRay * depth;
          float3 posW = mul(float4(posV, 1.0f), cb_inverseViewMatrix).xyz;
          float3 viewDirW = normalize(posW - cb_eyePositionW);

          // find nearest planet surface point
          float2 ph = 0.0f;
          uint planetHits = intersectRaySphere(
              cb_eyePositionW,
              viewDirW,
              PLANET_CENTER,
              cb_groundRadius,
              ph);

          // find nearest inner shell point
          float2 ih = 0.0f;
          uint innerShellHits = intersectRaySphere(
              cb_eyePositionW,
              viewDirW,
              PLANET_CENTER,
              PLANET_CENTER_TO_LOWER_CLOUD_RADIUS,
              ih);

          // find nearest outer shell point
          float2 oh = 0.0f;
          uint outerShellHits = intersectRaySphere(
              cb_eyePositionW,
              viewDirW,
              PLANET_CENTER,
              PLANET_CENTER_TO_UPPER_CLOUD_RADIUS,
              oh);

          // world space ray intersections
          float3 planetHitSpot = cb_eyePositionW + (viewDirW * ph.x);
          float3 innerShellHit = cb_eyePositionW + (viewDirW * ih.x);
          float3 outerShellHit = cb_eyePositionW + (viewDirW * oh.x);

          // eye radius from planet center
          float eyeRadius = distance(cb_eyePositionW, PLANET_CENTER);

          if(eyeRadius < PLANET_CENTER_TO_LOWER_CLOUD_RADIUS) // under inner shell
          {
              // exit if there's something in front of the start of the cloud volume
              if((depthPresent && (distance(posW, cb_eyePositionW) < distance(innerShellHit, cb_eyePositionW))) ||
                 planetHits > 0u) // shell hits are guaranteed, but the ground may be occluding the cloud layer
              {
                  return float4(0.0f, 0.0f, 0.0f, 0.0f);
              }
              return traceClouds(
                  viewDirW,
                  innerShellHit,
                  outerShellHit);
          }
          else if(eyeRadius > PLANET_CENTER_TO_UPPER_CLOUD_RADIUS) // over outer shell
          {
              // possibilities are
              // 1) enter outer shell, leave inner shell
              // 2) enter outer shell, leave outer shell
              float3 firstShellHit = outerShellHit;
              if(outerShellHits == 0u ||
                 depthPresent && (distance(posW, cb_eyePositionW) < distance(firstShellHit, cb_eyePositionW)))
              {
                  return float4(0.0f, 0.0f, 0.0f, 0.0f);
              }
              float3 secondShellHit = outerShellHits == 2u && innerShellHits == 0u ?
                  cb_eyePositionW + (viewDirW * oh.y) :
                  innerShellHit;
              return traceClouds(
                  viewDirW,
                  firstShellHit,
                  secondShellHit);
          }
          else // between shells
          {
              /*
               * From a practical viewpoint (properly scaled planet, atmosphere, etc.)
               * only one shell will be hit.
               * Start position is always eye position.
               */
              float3 shellHit = innerShellHits > 0u ? innerShellHit : outerShellHit;
              // if there's something in the depth buffer that's closer, that's the end point
              if(depthPresent && (distance(posW, cb_eyePositionW) < distance(shellHit, cb_eyePositionW)))
              {
                  shellHit = posW;
              }
              return traceClouds(
                  viewDirW,
                  cb_eyePositionW,
                  shellHit);
          }
      }

      Again, I'm sure some of the math and maybe even the general approach is off a bit, so please feel free to offer suggestions for improvement. One thing I wish was better is that the clouds don't seem to taper well as they rise with my current implementation. There may be some ideas from this thread I can use to help with that. I also need to get curl noise in, but I'm more worried about getting the general solution working well first. If there's anything I can help out with, let me know. [sharedmedia=gallery:albums:1149]

      Edit: Huge thank you to Andrew and the team at Guerrilla Games for their publications and willingness to help other developers work through implementing a solution.
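      As a sanity check on the shell-intersection math, here is a small Python port of the intersectRaySphere routine (illustrative only - a line-for-line translation, not the shader itself) exercised against cases that are easy to verify by hand:

      ```python
      import math

      def intersect_ray_sphere(ray_origin, ray_dir, sphere_center, sphere_radius):
          """Python port of the HLSL intersectRaySphere; ray_dir must be normalized.

          Returns (hit_count, t_near, t_far), counting only intersections in
          front of the ray origin; for a ray starting inside the sphere, only
          the exit point counts.
          """
          l = [o - c for o, c in zip(ray_origin, sphere_center)]
          a = 1.0  # dot(ray_dir, ray_dir) for a normalized direction
          b = 2.0 * sum(d * li for d, li in zip(ray_dir, l))
          c = sum(li * li for li in l) - sphere_radius * sphere_radius
          discriminant = b * b - 4.0 * a * c
          if discriminant < 0.0:
              return 0, 0.0, 0.0  # ray misses the sphere entirely
          if abs(discriminant) <= 0.00005:
              t = -0.5 * b / a    # grazing hit: both roots coincide
              return 1, t, t
          # Numerically stable quadratic solution (avoids catastrophic cancellation).
          q = -0.5 * (b + math.sqrt(discriminant)) if b > 0.0 else \
              -0.5 * (b - math.sqrt(discriminant))
          h1, h2 = q / a, c / q
          t_near, t_far = min(h1, h2), max(h1, h2)
          if t_near < 0.0:
              t_near = t_far
              if t_near < 0.0:
                  return 0, 0.0, 0.0  # sphere entirely behind the ray
              return 1, t_near, t_far
          return 2, t_near, t_far

      # Ray at the origin looking down +z at a unit sphere centered 5 units away:
      # it should enter at t = 4 and exit at t = 6.
      hits, t_near, t_far = intersect_ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
      print(hits, t_near, t_far)  # 2 4.0 6.0

      # Ray starting at the center of that sphere reports only the forward (exit) hit.
      hits_inside, t_exit, _ = intersect_ray_sphere((0, 0, 5), (0, 0, 1), (0, 0, 5), 1.0)
      print(hits_inside, t_exit)  # 1 1.0
      ```

      The inside-the-sphere case is the one the main() shell logic leans on when the eye sits between the inner and outer cloud shells, so it's worth confirming the function behaves as the comments claim.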
  15. From the album Volumetric Clouds