Ah! So I was mistaken in my first analysis; it was only showing me the first texture I'd loaded! Thanks mhagain, your response led me to the function D3D11CalcSubresource, which in turn made me realize that the problem wasn't in the portion of my code where I mapped, memcpy'd, and unmapped (after I updated it per your response to specify the subresource). It was in the portion after that, where I created the shader resource view and copied from my texture set up as D3D11_USAGE_STAGING to the one set up as D3D11_USAGE_DEFAULT: I was only copying the first subresource.
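For anyone else who trips over this: the index D3D11CalcSubresource returns is just mipSlice + arraySlice * mipLevels, so a staging-to-default copy has to touch every index, not only 0. A small Python mirror of that formula (the names here are mine, just for illustration):

```python
def calc_subresource(mip_slice, array_slice, mip_levels):
    # Mirrors D3D11CalcSubresource: subresources are laid out
    # mip-major within each array slice.
    return mip_slice + array_slice * mip_levels

# Copying a Texture2DArray with 6 slices and 4 mips means 24
# CopySubresourceRegion calls, one per subresource index:
indices = [calc_subresource(m, a, 4) for a in range(6) for m in range(4)]
```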
Posted by Funkymunky
on 24 November 2013 - 02:49 PM
the movement relative to the eye, before the modulo bit resets the positions of the vertices
The modulo should be the only thing moving the vertices. Aside from the standard ModelViewProjection matrix work, nothing should be translating relative to the eye: the camera moves, and the vertices only change position when the modulus changes.
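In other words, each level snaps its grid origin to its own cell size. A quick sketch of that snapping (the function name is mine, not from my actual code):

```python
import math

def snapped_origin(eye_x, level, base_cell=1.0):
    # Each LOD level snaps its grid origin to its own cell size, so
    # vertices only jump when the eye crosses one of that level's cells.
    step = base_cell * (2 ** level)
    return math.floor(eye_x / step) * step

# Moving the eye within a cell changes nothing...
a = snapped_origin(3.7, 2)   # -> 0.0
# ...crossing the cell boundary jumps the origin by one whole cell.
b = snapped_origin(4.1, 2)   # -> 4.0
```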
The grid cells need to be odd because each level of detail has twice the number of vertices as the next level, and they need to line up. Like this:
That inner level has 4 vertices along its edge. See how that hits the middle of a quad in the next LOD? By extending it to 5 vertices, the LOD edges always line up.
All grids are 2^n - 1 vertices on a side. The reason for this number is that it's generally preferred for textures to have power-of-two dimensions (2^n), but since the vertex counts need to be odd, the vertices only sample 255 of the 256 texels available.
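Using the 4-versus-5-vertex example above, the alignment is easy to check numerically (a toy sketch, names mine):

```python
def edge(n_verts, spacing):
    # Vertex positions along one edge of a grid level.
    return [i * spacing for i in range(n_verts)]

fine_even = edge(4, 1.0)   # even count: edge length 3
fine_odd  = edge(5, 1.0)   # odd count:  edge length 4
coarse    = edge(3, 2.0)   # next LOD out: half the vertex density

# With an odd count, every coarse vertex lands exactly on a fine vertex:
aligned = all(p in fine_odd for p in coarse)
# With an even count, the fine edge ends (at 3.0) in the middle of a
# coarse quad, creating the mismatch shown in the picture:
misaligned = fine_even[-1] not in coarse
```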
Pay attention to the outer levels as the camera moves. See how they only "move" when the camera has moved far enough for the modulus to index the next pixel? They don't slide around with every change in eye position.
Posted by Funkymunky
on 23 November 2013 - 05:52 PM
Actually I misspoke: I meant to say that I use two vertices from the outer LOD and one from the inner. The whole point of them is to get rid of the T-junction by drawing a triangle there.
I'm still not understanding what you mean when you say that your layers "slide". In my implementation, the LOD levels move in discrete steps, such that one pixel always corresponds to one vertex. As you fly about the scene, the levels in the grid only move (and shift around with the L-shaped strip scheme) once the camera has moved far enough within the LOD level.
So the degenerates stay in the exact same place as you move until the camera passes a threshold; then the LOD levels shift, and the degenerates shift with them to accommodate the new boundaries.
Posted by Funkymunky
on 23 November 2013 - 01:35 PM
So you're talking about doing the degenerate triangles that marry the LOD levels, right? I'm not sure why you say that "these in-between rings scale a bit". They shouldn't scale at all. The way I did it was to create the rings as static vertex buffers that use the Y vertex component to represent the LOD level they were a part of. So each triangle is comprised of two vertices from the inner LOD and one from the outer; the inner ones have a Y component of 0 and the outer ones have a Y component of 1. This lets me pick which LOD texture to fetch the height from in the vertex shader.
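Roughly, the CPU-side encoding looks like this (a Python sketch of the idea; the names are mine, not from my actual buffers):

```python
# Each stitch-triangle vertex stores (x, lod_flag, z): a lod_flag of
# 0.0 means "sample this level's heightmap", 1.0 means "sample the
# next coarser level's heightmap".
def stitch_triangle(a, b, outer):
    ax, az = a
    bx, bz = b
    ox, oz = outer
    return [(ax, 0.0, az), (bx, 0.0, bz), (ox, 1.0, oz)]

tri = stitch_triangle((0.0, 0.0), (1.0, 0.0), (0.5, -1.0))
# In the vertex shader, the Y component selects the texture, e.g.:
#   height = (v.y < 0.5) ? sample(innerLOD) : sample(outerLOD)
```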
Posted by Funkymunky
on 21 November 2013 - 02:26 PM
I was banging my head against this as well and just solved it. So there are four issues really.
First of all, that image you're comparing against is squashed horizontally. The actual transmittance table is a 256x64 texture, and should look like this:
Second, your texture is upside down. I know because I generated the same one at first. Swap the Y coordinate.
Third, your colors look washed out because you are using an HM value of 12. It should be 1.2.
And fourth, in the Bruneton code he defines "TRANSMITTANCE_NON_LINEAR" (in the common.glsl file). You are using the code from the else clause, which indeed results in the curve you are seeing. With the code from the #ifdef TRANSMITTANCE_NON_LINEAR block, you get the correct curve. The difference is in the last two lines of your getTransmittanceRMu function; instead of this:
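As I recall Bruneton's common.glsl (worth verifying against the actual source, since I'm quoting from memory), the two branches map the texture coordinates to (r, mu) like this, transliterated to Python for illustration with his radii in kilometers:

```python
import math

Rg, Rt = 6360.0, 6420.0  # ground / atmosphere-top radii, as I recall them

def r_mu_linear(u_r, u_mu):
    # The #else branch: plain linear parameterization.
    r = Rg + u_r * (Rt - Rg)
    mu = -0.15 + u_mu * (1.0 + 0.15)
    return r, mu

def r_mu_nonlinear(u_r, u_mu):
    # The TRANSMITTANCE_NON_LINEAR branch: concentrates precision
    # near the ground (r) and near the horizon (mu).
    r = Rg + (u_r * u_r) * (Rt - Rg)
    mu = -0.15 + math.tan(1.5 * u_mu) / math.tan(1.5) * (1.0 + 0.15)
    return r, mu
```

Both branches agree at the texture edges; the nonlinear one just redistributes the samples in between, which is where the shape of the curve comes from.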
I'm assuming that you mean that it's a predefined set of vertices with the heights already applied, and that you're going to be moving to a shader that offsets the vertices instead? Well, how are you planning to offset the vertices in the shader? With a texture and a vertex-texture-fetch in the vertex shader? Or are you going to offset by means of noise functions in the vertex shader?
If it's by texture, then I'd say you could read back that value from the texture rather than from the depth buffer. If it's by noise functions, then you'd probably have to do it via a sampling of the depth buffer.
OR, if using a texture for the heightmap, you could set up a post-processing pipeline and include an "overlay" texture which represents any user interaction. This way you could adjust the size of the "brush" that offsets the terrain. You'd do an RTT that combines the current heightmap with the overlay texture (which varies depending on if the user clicked, brush selection, etc.) to render to another heightmap. Then you ping pong back and forth between the heightmaps so that you can update the values seen by the vertex shader.
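The read/combine/write/swap pattern is simple; here's a minimal CPU-side sketch of it (Python, all names mine; on the GPU each step would be an RTT pass):

```python
def apply_brush(heightmap, overlay):
    # One "RTT pass": combine the current heightmap with the brush overlay.
    return [h + o for h, o in zip(heightmap, overlay)]

buf_a = [0.0, 0.0, 0.0, 0.0]   # heightmap currently sampled by the VS
buf_b = [0.0] * 4              # render target for the combine pass
brush = [0.0, 0.5, 0.5, 0.0]   # overlay from the user's click

# Read from one buffer, write the other, then swap, so the vertex
# shader always samples a fully written heightmap.
src, dst = buf_a, buf_b
dst[:] = apply_brush(src, brush)
src, dst = dst, src            # ping-pong for the next edit
```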
I have my sky dome, I have my terrain, I have my shaders all set up. The one thing that eludes me, after reading all this stuff, is the scale I'm supposed to be working with. As I'm doing all this from ground level within the atmosphere, I'm using the 's' parameter simply as the depth from the vertex to the camera.
What is the range of this value supposed to be? All the other parameters (Rayleigh and Mie constants, the 'g' parameter, etc.) are set up as constant values, which implies that 's' has to fall in a specific range for the atmospheric scattering effect to be correct.
I seem to have gleaned that I want a radius ratio of 1:1.025, i.e. if the planet radius is 10.0 then the radius of the atmosphere should be 10.25. But are 10 and 10.25 the actual numbers I'm supposed to use? When I stand on the planet's surface and look straight up, should my 's' parameter be 0.25? Because I'm getting either a black sky or an enormous sun, and I can't find the right range that gives me the realistic sky I want.
For reference, I'm using numbers like betaR = vec3(0.00000695, 0.0000118, 0.0000244), betaM = vec3(0.00000125, 0.00000125, 0.00000125), and g = -0.93.
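To pin down what I expect 's' to be, here's the ray-sphere math I'm assuming it should follow (a Python sanity check; the helper name is mine):

```python
import math

def path_length(Rg, Rt, zenith_angle):
    # Distance from a viewer on the surface (radius Rg) to the top of
    # the atmosphere (radius Rt), along a ray at the given zenith angle.
    c = math.cos(zenith_angle)
    s = math.sin(zenith_angle)
    return math.sqrt(Rt * Rt - Rg * Rg * s * s) - Rg * c

# Looking straight up: exactly the atmosphere thickness.
path_length(10.0, 10.25, 0.0)   # -> 0.25
```

If this is right, 's' is 0.25 at the zenith and grows toward the horizon (2.25 at a zenith angle of pi/2 with these radii), so it can't just be clamped to one constant range.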