Volume texture for clouds


8 replies to this topic

#1 Noplace   Members   -  Reputation: 144


Posted 08 May 2014 - 04:37 AM

Hi everyone, and please forgive me if I lack fundamental knowledge regarding this issue; graphics programming has always been a passion of mine whenever I get the chance to pursue it.

 

What I'm ultimately trying to do is draw some clouds in a 3D scene. I'm experimenting with Perlin noise in the pixel shader, using a UVW coordinate to generate the values.

 

I apply the pixel shader to a cube that is supposed to become the clouds, but I guess my understanding of volume textures isn't clear, because it only renders the sides and never shows the inside:

 

[Attached screenshots: 1.png and 2.png]

 

 

Is this a viable technique for clouds, or is there a better/correct way to do it? If it is indeed practical, what do I need to change to render the whole volume, including the interior pixels/texels?

 

Many thanks!




#2 n3Xus   Members   -  Reputation: 704


Posted 08 May 2014 - 07:02 AM

There is no "inside": with meshes, the texture is only drawn on the mesh surface.

 

What you want is raymarching (google it).



#3 imoogiBG   Members   -  Reputation: 1183


Posted 08 May 2014 - 08:26 AM

As said above... use raymarching.

 

EDIT: Basic Explanation here: http://http.download.nvidia.com/developer/presentations/2005/GDC/Sponsored_Day/GDC_2005_VolumeRenderingForGames.pdf

 

I'm currently building a simple demo that uses the pixel-shader volume ray marching approach.

 

Basically, if you draw a cube with vertices located between (0,0,0) and (1,1,1) with some transform,
use normalize(camPosWorldSpace - VertexPositionWorldSpace) as the marching direction.

 

Compute the marching direction in UVW space (usually the same vector).

Compute the ray marching step size.

Start sampling from the current pixel's UVW until the current sampling location is greater than 1 or smaller than 0. Accumulate the sampled opacity/transparency for the final pixel color.

 

pseudo:

PSMain()
{
   float accumVal = 0;
   float3 samplingUVW = interpolatedVS_UVW;

   while (InUVWBounds(samplingUVW))
   {
      float sampledValue = Sample(samplingUVW);

      accumVal = accumulateFunction(accumVal, sampledValue);
      samplingUVW += someStepSize * marchingDirUVWSpace;
   }
   return color(accumVal);
}
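For anyone who wants to play with the loop outside a shader, here's a rough CPU-side sketch of the same idea in Python. The density function and step size here are made up for illustration; they stand in for the volume texture sample and a real step computation:

```python
def sample_volume(p):
    # Hypothetical density: a soft sphere centered in the unit cube.
    d = sum((c - 0.5) ** 2 for c in p) ** 0.5
    return max(0.0, 0.35 - d)

def in_uvw_bounds(p):
    return all(0.0 <= c <= 1.0 for c in p)

def ray_march(start_uvw, dir_uvw, step_size=0.01):
    """Accumulate density along a ray while it stays inside [0,1]^3."""
    p = list(start_uvw)
    accum = 0.0
    while in_uvw_bounds(p):
        accum += sample_volume(p) * step_size  # simple additive accumulation
        p = [c + step_size * d for c, d in zip(p, dir_uvw)]
    return accum
```

A ray entering the front face and marching straight through the center accumulates a positive value; a ray grazing an empty corner accumulates nothing.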

Edited by imoogiBG, 08 May 2014 - 02:48 PM.


#4 fanwars   Members   -  Reputation: 995


Posted 09 May 2014 - 04:28 AM

e: mixed posts, please ignore.


Edited by fanwars, 09 May 2014 - 05:10 AM.


#5 Noplace   Members   -  Reputation: 144


Posted 11 May 2014 - 12:15 AM

Guys, thanks so much for pointing me in the right direction. I've gone through the paper and roughly understand the concept, but not enough to do my own implementation, so I'm just using the NVIDIA code. I've made some progress, I think, but not enough:

 

[Attached screenshot: 1.png]

 

I'm using the exact code from here: http://developer.download.nvidia.com/SDK/9.5/Samples/MEDIA/HLSL/volume.fx, just converted to HLSL.

 

1 - Regarding boxMin and boxMax: do we have to account for the cube's world transformation? At the moment I'm just drawing the cube at the origin, so it shouldn't affect the result.

 

2 - I'm using D3D11, and by default all matrices are transposed before being sent to the shaders. For the view inverse, however, I'm sending it untransposed, because the results are messed up if it is transposed (and based on debugging, the correct values arrive without transposing it).

 

So, any idea what I'm doing wrong? I'm just drawing a cube and running the VS and PS from that code on it. Here is my shader code for reference:

 

VS:

struct VertexShaderInput
{
	float3 pos : POSITION;
	float3 uvw : TEXCOORD0;
};


static const float foclen = 2500.0f;

void RayMarchVS(inout float3 pos : POSITION,
				in float4 texcoord : TEXCOORD0,
				out Ray eyeray : TEXCOORD1)
{
	// calculate world space eye ray
	// origin
	eyeray.o = mul(float4(0, 0, 0, 1), viewInv);
	float2 viewport = { 640, 480 };
	// direction
	eyeray.d.xy = ((texcoord.xy * 2.0) - 1.0) * viewport;
	eyeray.d.y = -eyeray.d.y;	// flip y axis
	eyeray.d.z = foclen;

	eyeray.d = mul(eyeray.d, (float3x3) viewInv);
}


VertexShaderOutput main(VertexShaderInput input)
{
	VertexShaderOutput output;

	float4 pos = float4(input.pos, 1.0f);

	// Transform the vertex position into projected space.
	pos = mul(pos, model);
	pos = mul(pos, view);
	pos = mul(pos, projection);
	output.pos = pos;
	RayMarchVS(input.pos, pos, output.eyeray);
	output.uvw = input.uvw;
	return output;
}


PS:


static const float brightness = 25.0f;
static const float3 boxMin = { -1.0, -1.0, -1.0 };
static const float3 boxMax = { 1.0, 1.0, 1.0 };

// Pixel shader
float4 RayMarchPS(Ray eyeray : TEXCOORD0, uniform int steps=30) : SV_TARGET
{
	eyeray.d = normalize(eyeray.d);

	// calculate ray intersection with bounding box
	float tnear, tfar;
	bool hit = IntersectBox(eyeray, boxMin, boxMax, tnear, tfar);
	if (!hit) discard;

	// calculate intersection points
	float3 Pnear = eyeray.o + eyeray.d*tnear;
	float3 Pfar = eyeray.o + eyeray.d*tfar;
		
	// map box world coords to texture coords
	Pnear = Pnear*0.5 + 0.5;
	Pfar = Pfar*0.5 + 0.5;
	
	// march along ray, accumulating color
	float4 c = 0;
	float3 Pstep = (Pnear - Pfar) / (steps-1);
	float3 P = Pfar;
	// this compiles to a real loop in PS3.0:
	for(int i=0; i<steps; i++) {		
		float4 s = volume(P);
		c = (1.0-s.a)*c + s.a*s;
		P += Pstep;
	}
	c /= steps;
	c *= brightness;

//	return hit;
//	return tfar - tnear;
	return c;
}

float4 main(VertexShaderOutput input) : SV_TARGET
{

  return RayMarchPS(input.eyeray);

}
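As a side note, IntersectBox isn't included in the snippet above; in the NVIDIA sample it's the usual slab test against the bounding box. A rough CPU-side sketch of that test in Python, with names of my own choosing rather than the sample's:

```python
def intersect_box(ray_o, ray_d, box_min, box_max):
    """Slab-method ray/AABB intersection.
    Returns (hit, tnear, tfar) for points ray_o + t * ray_d."""
    tnear, tfar = float("-inf"), float("inf")
    for o, d, lo, hi in zip(ray_o, ray_d, box_min, box_max):
        if abs(d) < 1e-9:
            # Ray parallel to this slab: must already lie inside it.
            if o < lo or o > hi:
                return False, 0.0, 0.0
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        tnear, tfar = max(tnear, t1), min(tfar, t2)
    hit = tfar >= tnear and tfar >= 0.0
    return hit, tnear, tfar
```

A ray fired from (0, 0, -5) down +z into the unit-radius box enters at t=4 and exits at t=6; a ray offset outside the box reports a miss.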



#6 Buckeye   Crossbones+   -  Reputation: 4902


Posted 11 May 2014 - 10:27 AM


Quoting Noplace:
"however for view inverse I'm sending it normally because if it is transposed the results are messed up"

For proper conversion, one should use the transpose of the inverse, not the inverse. You calculate the inverse as a row-major matrix (CPU side) and stuff it into the shader. The shader (GPU side) uses that value as a column-major matrix, which is, in fact, the transpose of what you stored. All done auto-magically.
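To see the identity at work numerically, here's a tiny pure-Python illustration with a 2x2 matrix (toy helpers, nothing to do with the actual D3D code): reading row-major storage as column-major is exactly a transpose, and the transpose of the inverse equals the inverse of the transpose.

```python
def transpose(m):
    # Reading row-major storage as column-major yields exactly this.
    return [[m[j][i] for j in range(len(m))] for i in range(len(m[0]))]

def inverse2(m):
    # Closed-form inverse of a 2x2 matrix [[a, b], [c, d]].
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# A non-symmetric example matrix.
m = [[2.0, 1.0], [0.0, 3.0]]

# What the GPU "sees" if you upload inverse(m) without pre-transposing:
seen_by_gpu = transpose(inverse2(m))
```

So uploading the inverse untransposed means the shader effectively works with transpose(inverse(m)), which is the same matrix as inverse(transpose(m)).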


Edited by Buckeye, 11 May 2014 - 10:29 AM.



#7 imoogiBG   Members   -  Reputation: 1183


Posted 11 May 2014 - 04:13 PM

Well, the paper implements a somewhat different rendering idea, but the ray marching algorithm is the same.
The approach described in the paper works like a post-process.
I've drawn a picture that may explain these things better.

 

http://imgur.com/laPDGE5


 

Currently I can't share my code (it's nothing special).

 

About the matrix issues: I don't know your CPU-side math, so I can't help.


Edited by imoogiBG, 11 May 2014 - 04:16 PM.


#8 Noplace   Members   -  Reputation: 144


Posted 11 May 2014 - 10:18 PM

Thanks imoogiBG! I followed your description and code for the VS, and now it's working correctly.

 

The only thing left now is to fine-tune the volume function.
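For a volume function, a common starting point is fractal (fBm) noise. Here's a minimal value-noise sketch in Python, purely illustrative — the hash, octave counts, and threshold are all made-up parameters, and a real shader would use Perlin/simplex noise in HLSL:

```python
import math

def _hash3(ix, iy, iz):
    # Cheap integer hash -> pseudo-random value in [0, 1).
    h = (ix * 374761393 + iy * 668265263 + iz * 2147483647) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def _lerp(a, b, t):
    return a + (b - a) * t

def value_noise(x, y, z):
    """Trilinearly interpolated lattice noise in [0, 1)."""
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - ix, y - iy, z - iz
    # Smoothstep the fractional parts for softer blending.
    fx, fy, fz = (t * t * (3 - 2 * t) for t in (fx, fy, fz))
    c = [[[_hash3(ix + dx, iy + dy, iz + dz) for dz in (0, 1)]
          for dy in (0, 1)] for dx in (0, 1)]
    x0 = _lerp(_lerp(c[0][0][0], c[0][0][1], fz), _lerp(c[0][1][0], c[0][1][1], fz), fy)
    x1 = _lerp(_lerp(c[1][0][0], c[1][0][1], fz), _lerp(c[1][1][0], c[1][1][1], fz), fy)
    return _lerp(x0, x1, fx)

def cloud_density(p, octaves=4):
    """fBm: sum of noise octaves; thresholding carves out empty sky."""
    amp, freq, total = 0.5, 4.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise(p[0] * freq, p[1] * freq, p[2] * freq)
        amp *= 0.5
        freq *= 2.0
    return max(0.0, total - 0.4)
```

The threshold subtraction is what makes distinct cloud blobs instead of uniform haze; raising it thins the clouds out.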



#9 imoogiBG   Members   -  Reputation: 1183


Posted 12 May 2014 - 12:15 AM

http://www.seas.upenn.edu/~cis565/LECTURES/VolumeRendering.ppt might help, around slide 8.

 

https://www.shadertoy.com/view/XslGRr <-----


Edited by imoogiBG, 12 May 2014 - 02:50 AM.




