Volume texture for clouds

Started by
7 comments, last by ongamex92 9 years, 11 months ago

Hi everyone, and please forgive me if I lack some fundamental knowledge on this topic; graphics programming has always been a passion of mine whenever I get the chance to pursue it.

What I'm ultimately trying to do is draw some clouds in a 3D scene. I'm experimenting with Perlin noise in the pixel shader, using a UVW coordinate to generate the values.

I set the pixel shader only on a cube, which is supposed to become the clouds, but I guess my understanding of volume textures isn't clear, because it only renders the sides and never shows the inside:

[attachment=21399:1.png]

[attachment=21400:2.png]

Is this a viable technique for clouds, or is there a better/right way to do it? If it is indeed practical, what do I need to change to render the whole thing, including the interior pixels/texels?

Many thanks!


There is no "inside": the texture is only drawn on the mesh surface if you use meshes.

What you want is raymarching (google it).

As said above... use raymarching.

EDIT: Basic Explanation here: http://http.download.nvidia.com/developer/presentations/2005/GDC/Sponsored_Day/GDC_2005_VolumeRenderingForGames.pdf

I'm currently building a simple demo that uses the pixel-shader volume ray marching approach.

Basically, if you draw a cube with vertices located between (0,0,0) and (1,1,1) with some transform:

Use normalize(vertexPositionWorldSpace - camPosWorldSpace) as the marching direction (pointing from the camera into the volume).

Compute the marching direction in UVW space (usually it's the same vector).

Compute the ray marching step size.

Start sampling at the current pixel's UVW and keep stepping until the sampling location falls outside the [0, 1] range. Accumulate the sampled opacity/transparency into the final pixel color.

Pseudo-code:


float4 PSMain() : SV_TARGET
{
   float accumVal = 0;
   float3 samplingUVW = interpolatedVS_UVW;

   while (InUVWBounds(samplingUVW))
   {
      float sampledValue = Sample(samplingUVW);

      // fold the new sample into the running total
      accumVal = accumulateFunction(accumVal, sampledValue);
      samplingUVW += someStepSize * marchingDirUVWSpace;
   }
   return color(accumVal);
}


Guys, thanks so much for pointing me in the right direction. I've gone through the paper and roughly understand the concept, but not well enough to do my own implementation, so I'm just using the NVIDIA code. I've made some progress, I think, but not enough:

[attachment=21438:1.png]

I'm using the exact code from here: http://developer.download.nvidia.com/SDK/9.5/Samples/MEDIA/HLSL/volume.fx, just adapted to my D3D11 setup.

1 - Regarding boxMin and boxMax, do we have to account for the world transformation of the cube? At the moment I'm drawing the cube at the origin, so it shouldn't affect the result.

2 - I'm using D3D11, and by default all matrices are transposed before being sent to the shaders. For the view inverse, however, I'm sending it untransposed, because the results are messed up if it is transposed (and based on debugging, the correct values arrive without the transpose).

So, any idea what I'm doing wrong? I'm just drawing a cube and running the VS and PS from that code on it. This is my shader code for reference:

VS:


// Assumed declarations (not shown in the original post):
cbuffer Transforms
{
	float4x4 model;
	float4x4 view;
	float4x4 projection;
	float4x4 viewInv;
};

struct Ray
{
	float3 o;	// origin
	float3 d;	// direction
};

struct VertexShaderInput
{
	float3 pos : POSITION;
	float3 uvw : TEXCOORD0;
};

struct VertexShaderOutput
{
	float4 pos : SV_POSITION;
	float3 uvw : TEXCOORD0;
	Ray eyeray : TEXCOORD1;
};

static const float foclen = 2500.0f;

void RayMarchVS(inout float3 pos : POSITION,
		in float4 texcoord : TEXCOORD0,
		out Ray eyeray : TEXCOORD1)
{
	// calculate world-space eye ray
	// origin
	eyeray.o = mul(float4(0, 0, 0, 1), viewInv).xyz;
	float2 viewport = { 640, 480 };
	// direction
	eyeray.d.xy = ((texcoord.xy * 2.0) - 1.0) * viewport;
	eyeray.d.y = -eyeray.d.y;	// flip y axis
	eyeray.d.z = foclen;

	eyeray.d = mul(eyeray.d, (float3x3)viewInv);
}

VertexShaderOutput main(VertexShaderInput input)
{
	VertexShaderOutput output;

	float4 pos = float4(input.pos, 1.0f);

	// Transform the vertex position into projected space.
	pos = mul(pos, model);
	pos = mul(pos, view);
	pos = mul(pos, projection);
	output.pos = pos;
	RayMarchVS(input.pos, pos, output.eyeray);
	output.uvw = input.uvw;
	return output;
}


PS:



static const float brightness = 25.0f;
static const float3 boxMin = { -1.0, -1.0, -1.0 };
static const float3 boxMax = { 1.0, 1.0, 1.0 };

// IntersectBox() and volume() come from the NVIDIA volume.fx sample.

// Pixel shader
float4 RayMarchPS(Ray eyeray : TEXCOORD0, uniform int steps = 30) : SV_TARGET
{
	eyeray.d = normalize(eyeray.d);

	// calculate ray intersection with the bounding box
	float tnear, tfar;
	bool hit = IntersectBox(eyeray, boxMin, boxMax, tnear, tfar);
	if (!hit) discard;

	// calculate intersection points
	float3 Pnear = eyeray.o + eyeray.d * tnear;
	float3 Pfar = eyeray.o + eyeray.d * tfar;

	// map box world coords to texture coords
	Pnear = Pnear * 0.5 + 0.5;
	Pfar = Pfar * 0.5 + 0.5;

	// march along the ray back-to-front, accumulating color
	float4 c = 0;
	float3 Pstep = (Pnear - Pfar) / (steps - 1);
	float3 P = Pfar;
	for (int i = 0; i < steps; i++) {
		float4 s = volume(P);
		c = (1.0 - s.a) * c + s.a * s;	// "over" compositing
		P += Pstep;
	}
	c /= steps;
	c *= brightness;

	return c;
}

float4 main(VertexShaderOutput input) : SV_TARGET
{
	return RayMarchPS(input.eyeray);
}
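For completeness, IntersectBox() isn't shown in the post above; in the NVIDIA sample it is a standard slab test. A sketch of that test, assuming the same Ray struct with origin o and direction d, looks roughly like this:

```hlsl
bool IntersectBox(Ray r, float3 boxMin, float3 boxMax,
                  out float tnear, out float tfar)
{
    // intersect the ray against the six bounding-box planes (slab method)
    float3 invR = 1.0 / r.d;
    float3 tbot = invR * (boxMin - r.o);
    float3 ttop = invR * (boxMax - r.o);

    // per-axis entry and exit distances
    float3 tmin = min(ttop, tbot);
    float3 tmax = max(ttop, tbot);

    // the ray is inside the box between the largest entry and smallest exit
    tnear = max(max(tmin.x, tmin.y), tmin.z);
    tfar  = min(min(tmax.x, tmax.y), tmax.z);

    return tfar > tnear;
}
```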


"however for view inverse I'm sending it normally because if it is transposed the results are messed up"

For proper conversion, you should use the transpose of the inverse, not the inverse itself. You calculate the inverse as a row-major matrix (CPU side) and upload it to the shader. The shader (GPU side) reads that data as a column-major matrix, which is, in fact, the transpose of what you stored. All done auto-magically.
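If the transpose bookkeeping gets confusing, one alternative is to mark just that matrix as row_major in HLSL, so the CPU-side row-major inverse can be uploaded as-is. This is a sketch; the cbuffer name and layout are an assumption, matching the variable names used in the shaders above:

```hlsl
cbuffer Transforms
{
    float4x4 model;              // default column_major: transpose on upload
    float4x4 view;
    float4x4 projection;
    row_major float4x4 viewInv;  // uploaded untransposed, read correctly
};
```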


Well, the paper implements a somewhat different rendering idea, but the ray marching algorithm is the same.

The approach described in the paper works like a post-process.

I've drawn a picture that may explain these things better:

http://imgur.com/laPDGE5


Currently I cannot share my code (it's nothing special).

About the matrix issues: I don't know your CPU-side math, so I can't help there.

Thanks, imoogiBG! I followed your description and code for the VS, and now it is working correctly.

The only thing left now is fine-tuning the volume function.
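For anyone reading later: volume() is the density lookup that still needs tuning at this point. A minimal hypothetical version, assuming a Texture3D filled with precomputed Perlin/fBm noise and bound as noiseTex/noiseSampler (names invented for this sketch), could look like:

```hlsl
Texture3D    noiseTex;      // assumed: 3D texture of precomputed noise
SamplerState noiseSampler;  // assumed: trilinear sampler

float4 volume(float3 uvw)
{
    // fade density toward the box faces so the cloud doesn't end in hard edges
    float3 edge    = min(uvw, 1.0 - uvw);
    float  falloff = saturate(min(min(edge.x, edge.y), edge.z) * 4.0);

    float density = noiseTex.Sample(noiseSampler, uvw).r * falloff;

    // white cloud, density in alpha
    return float4(1.0, 1.0, 1.0, density);
}
```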

http://www.seas.upenn.edu/~cis565/LECTURES/VolumeRendering.ppt might help, starting around slide 8.

https://www.shadertoy.com/view/XslGRr (well worth a look)

