DX11 - Volume Rendering - Article Misunderstanding

Started by
22 comments, last by Migi0027 10 years, 7 months ago

Hi guys! Again...

So I'm trying to follow an article/tutorial on volume rendering from GraphicsRunner (Great!) [ http://graphicsrunner.blogspot.dk/2009/01/volume-rendering-101.html ].

But there is one part I don't understand:

We could always calculate the intersection of the ray from the eye to the current pixel position with the cube by performing a ray-cube intersection in the shader. But a better and faster way to do this is to render the positions of the front and back facing triangles of the cube to textures. This easily gives us the starting and end positions of the ray, and in the shader we simply sample the textures to find the sampling ray.

So I understand that I have to render the front- and back-face positions to individual textures, which I'm doing and it looks fine. But does he mean rendering the positions in view + projection space, or rendering them as a texture that gets sampled onto the cube using the cube's own texture coordinates?

Thanks, as always.

-MIGI0027

FastCall22: "I want to make the distinction that my laptop is a whore-box that connects to different network"

Blog about... stuff (GDNet, WordPress): www.gamedev.net/blog/1882-the-cuboid-zone/, cuboidzone.wordpress.com/


And if it's the second case, how would I accomplish it?


Just to confirm something: my back and front textures look the same as the ones here (scroll down a bit):

http://graphicsrunner.blogspot.dk/2009/01/volume-rendering-101.html


One step further, the re-projection is working correctly...

It's amazing how far I am from the original question.


If anyone has had the same problem, for the love of god, what did I do wrong? (Still testing)

2dkh6ae.png


I read that post a while back, and what he is doing is storing the end points of a segment that would be created by passing a ray through the volume, and then using those end points to do the ray marching through the volume. So what you need to do is look at his comparison function and figure out how he performs the iterative step.

In my volume rendering implementation in Hieroglyph 3, I use the texture space coordinates and step through the texture that way. If you wanted to do that, you would find the 3D texture coordinate at the pixel location for the front and back faces, and store them accordingly. Then you can simply step from the front value to the back value and do a texture lookup at each step, looking for the intersection with the isosurface.

Still testing and changing.

If useful, here is the shader:


cbuffer ConstantObjectBuffer : register (b0)
{
	matrix worldMatrix;

	float3 StepSize;
	float Iterations;

	float4 ScaleFactor;
};

#define Side 2

cbuffer ConstantFrameBuffer : register (b1)
{
	
	matrix viewMatrix;
	matrix projectionMatrix;

	float3 eyepos;
	float cppad;

	float4 lightvec;
	float4 lightcol;

	float FogStart;
	float FogEnd;
	float2 __space;

	float3 FogColor;
	float shadows;

	float SpecularIntensity;
	float3 pad3;
	float4 SpecularColor;
}

//***************************************************//
//                 VERTEX SHADER                     //
//***************************************************//

struct VOut
{
    float4 position : SV_POSITION;
    float3 texC		: TEXCOORD0;
    float4 pos		: TEXCOORD1;
    float2 texcoord : TEXCOORD2;
    float3 normal   : NORM;
};

struct GlobalIn
{
	float4 position : POSITION;
	float4 normal : NORMAL;
	float2 texcoord : TEXCOORD;
	float4 tangent : TANGENT;
};
	
**CE_RESERVED_SHADER[INPUTS]**

Texture3D t_VolData : register(t0);
Texture2D t_TransFront : register(t1);
Texture2D t_TransBack : register(t2);

SamplerState ss;

VOut VShader(GlobalIn input)
{
    VOut output;

    input.position.w = 1.0f;
	output.texcoord = input.texcoord;

	// Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

	output.texC = input.position.xyz; // texC is float3; take .xyz explicitly
    output.pos = output.position;
    output.normal = mul(float4(input.normal.xyz, 0), worldMatrix).xyz;
	
    return output;
}

//***************************************************//
//                 PIXEL SHADER                      //
//***************************************************//

struct POut
{
	float4 Diffuse  : SV_Target0;
	float4 Position : SV_Target1;
	float4 Depth    : SV_Target2;
	float4 Normals  : SV_Target3;
	float4 Lighting : SV_Target4;
};

// Functions
float4 GetVRaycast(VOut input)
{
	//calculate projective texture coordinates
    //used to project the front and back position textures onto the cube
    float2 texC = input.pos.xy / input.pos.w;
	texC.x =  0.5f*texC.x + 0.5f; 
	texC.y = -0.5f*texC.y + 0.5f;  
 
    float3 front = t_TransFront.Sample(ss, texC).xyz;
    float3 back = t_TransBack.Sample(ss, texC).xyz;
 
    float3 dir = normalize(back - front);
    float4 pos = float4(front, 0);
 
    float4 dst = float4(0, 0, 0, 0);
    float4 src = 0;
 
    float value = 0;
 
    float3 Step = dir * StepSize;
 
    for(int i = 0; i < (int)Iterations; i++)
    {
        // Texture3D.Sample takes a float3 coordinate
        value = t_VolData.Sample(ss, pos.xyz).r;
             
        src = (float4)value;
        src.a *= .5f; //reduce the alpha to have a more transparent result 
         
        //Front to back blending
        // dst.rgb = dst.rgb + (1 - dst.a) * src.a * src.rgb
        // dst.a   = dst.a   + (1 - dst.a) * src.a     
        src.rgb *= src.a;
        dst = (1.0f - dst.a)*src + dst;     
     
        //break from the loop when alpha gets high enough
        if(dst.a >= .95f)
            break; 
     
        //advance the current position
        pos.xyz += Step;
     
        //break if the position is greater than <1, 1, 1>
        if(pos.x > 1.0f || pos.y > 1.0f || pos.z > 1.0f)
            break;
    }
 
    return dst;
}

POut PShader(VOut input)
{
	POut output;

	// Depth
	output.Depth = float4(0, 0, 0, 1.0f);

	// Normals
	output.Normals = float4(normalize(input.normal), 1);
	output.Position = float4(0, 0, 0, 1);
	output.Lighting = float4(1, 1, 1, 1);

	output.Diffuse = GetVRaycast(input);

	return output;
}

Thanks!

-MIGI0027


Still on my epic quest!

Btw, found this awesome paper on volume rendering (only needed the part about 3d textures), http://vplab.snu.ac.kr/lectures/11-1/comp_appl/05_Texture_Based_Volume_Rendering_Lab.pdf


What is going on?

2ezojdj.png


Can somebody confirm that this is an OK way of loading a volume texture:


// Will be filled and returned
	ID3D11ShaderResourceView* pSRV = NULL;
 
	// Build the texture header descriptor
	D3D11_TEXTURE3D_DESC descTex;
	descTex.Width = width;
	descTex.Height = height;
	descTex.Depth = depth;
	descTex.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	descTex.Usage = D3D11_USAGE_DEFAULT;
	descTex.BindFlags = D3D11_BIND_SHADER_RESOURCE;
	descTex.CPUAccessFlags = 0;
	descTex.MipLevels = 1;
	descTex.MiscFlags = 0; // GENERATE_MIPS needs a full mip chain + render-target bind; not wanted here

	// Load Data into Memory
	const int size = height*width*depth;

	// Initialize memory
    unsigned int* pVolume = new unsigned int[size];

	// Load into memory
	FILE* pFile = fopen( filePath.c_str(), "rb" );
	if (!pFile)
	{
		delete[] pVolume;
		return NULL;
	}
	fread(pVolume, sizeof(unsigned int), size, pFile);
    fclose(pFile);
 
	// Resource data descriptor; pitches are byte counts (4 bytes per R8G8B8A8 texel)
	D3D11_SUBRESOURCE_DATA data;
	data.pSysMem = pVolume;
	data.SysMemPitch = width * sizeof(unsigned int);
	data.SysMemSlicePitch = width * height * sizeof(unsigned int);
 
	// Create the 3d texture from data
	ID3D11Texture3D* pTexture = NULL;
	HV( pDevice->CreateTexture3D( &descTex, &data, &pTexture ));
 
	// Create resource view descriptor
	D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
	ZeroMemory( &srvDesc, sizeof(srvDesc) );
	srvDesc.Format = descTex.Format;
	srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE3D;
	srvDesc.Texture3D.MostDetailedMip = 0;
	srvDesc.Texture3D.MipLevels = descTex.MipLevels; // a mip count, not a misc flag

	// Create the shader resource view
	HV( pDevice->CreateShaderResourceView( pTexture, &srvDesc, &pSRV ));
 
	// The view holds its own reference; release the texture and free the CPU copy
	pTexture->Release();
	delete[] pVolume;

	return pSRV;


This topic is closed to new replies.
