Visualising 2D Fluids (passing data to the GPU)


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

8 replies to this topic

#1 Haegr   Members   -  Reputation: 125


Posted 10 March 2014 - 03:52 PM

Hello there. I have recently followed Jos Stam's paper on fluids, "Stable Fluids", and extended it to 3D, but I am having trouble actually rendering it.

 

I am not sure what the best practice is for passing my arrays to the graphics card, in either 2D or 3D. Rendering the fluid in 2D should be as simple as displaying the density field, at least according to Stam, while for 3D I'll need volume rendering.
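For the 2D case, one minimal, API-agnostic sketch is to convert the density field into 8-bit grayscale pixels and upload those to a texture drawn on a fullscreen quad (the function name and the clamp-to-[0,1] scaling are my assumptions, not from the thread):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Convert a row-major 2D density field into 8-bit grayscale pixels,
// ready to upload to a texture and display as a fullscreen quad.
std::vector<uint8_t> densityToGrayscale(const std::vector<float>& density,
                                        int width, int height)
{
    std::vector<uint8_t> pixels(static_cast<size_t>(width) * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            // Densities can exceed 1.0 near sources, so clamp before scaling.
            float d = std::clamp(density[y * width + x], 0.0f, 1.0f);
            pixels[y * width + x] = static_cast<uint8_t>(d * 255.0f);
        }
    return pixels;
}
```
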

 

I've been searching for a while but none of my Google searches actually seem to be getting me anywhere.

 

Thanks.




#2 eppo   Crossbones+   -  Reputation: 2621


Posted 12 March 2014 - 04:21 AM

Have you looked into iso-surfacing algorithms like Marching Cubes?



#3 Haegr   Members   -  Reputation: 125


Posted 12 March 2014 - 04:35 AM

Would any iso-surfacing algorithm work? I am looking to render smoke and fire, so I was unsure whether iso-surfacing suits those. My understanding of iso-surfacing is very limited right now, and I've only seen it used for water, not fire and smoke.

 

I probably should have specified that I was looking to do smoke and fire.

 

Another issue that has cropped up is actually passing my CPU data to the GPU. I am not sure how to pass a dynamically sized array from the CPU to the GPU, though some answers on Stack Overflow might be leading me down the right path.
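Whatever the API, texture uploads generally expect one flat, contiguous block of memory, so a dynamically sized 3D grid is usually flattened first. A hedged sketch of that layout (plain C++; the `Grid3D` name and its helpers are mine):

```cpp
#include <cstddef>
#include <vector>

// Flat, contiguous storage for a dynamically sized 3D grid - the layout
// a 3D texture upload expects (x varies fastest, z slowest).
struct Grid3D {
    int w, h, d;
    std::vector<float> data;

    Grid3D(int w_, int h_, int d_)
        : w(w_), h(h_), d(d_),
          data(static_cast<size_t>(w_) * h_ * d_, 0.0f) {}

    // Linear index of cell (x, y, z) in the flat array.
    size_t index(int x, int y, int z) const {
        return (static_cast<size_t>(z) * h + y) * w + x;
    }

    float&       at(int x, int y, int z)       { return data[index(x, y, z)]; }
    const float& at(int x, int y, int z) const { return data[index(x, y, z)]; }
};
```

`data.data()` then gives a single pointer suitable for an upload call, with `w * sizeof(float)` as the row size.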



#4 eppo   Crossbones+   -  Reputation: 2621


Posted 12 March 2014 - 05:16 AM

You get a very obvious transition between the space inside and outside the volume, but perhaps you could do some post-process blurring and distortion, as is done in volumetric cloud rendering.



#5 Haegr   Members   -  Reputation: 125


Posted 12 March 2014 - 08:20 AM

I think the technique is called "volumetric splatting" and it seems to be the best approach for clouds. Thanks very much.

 

Somewhat related: any idea how to pass a 2D or 3D array of floats to the GPU? I've read they should be passed as 2D/3D textures, but the CPU-manipulated texture I have only accepts ints, not floats, and I am not sure how to make it accept floats instead.
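In D3D11, the texel type is fixed by the DXGI format chosen at creation time; `DXGI_FORMAT_R32_FLOAT` gives one 32-bit float per texel. A configuration sketch for a CPU-writable 2D density texture (the flag choices and variable names here are assumptions, not from the thread):

```cpp
D3D11_TEXTURE2D_DESC desc = {};
desc.Width          = gridWidth;              // your fluid grid dimensions
desc.Height         = gridHeight;
desc.MipLevels      = 1;
desc.ArraySize      = 1;
desc.Format         = DXGI_FORMAT_R32_FLOAT;  // one 32-bit float per texel
desc.SampleDesc     = {1, 0};                 // no multisampling
desc.Usage          = D3D11_USAGE_DYNAMIC;    // CPU write, GPU read
desc.BindFlags      = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

ID3D11Texture2D* densityTex = nullptr;
HRESULT hr = device->CreateTexture2D(&desc, nullptr, &densityTex);
```

For 3D, `D3D11_TEXTURE3D_DESC` and `CreateTexture3D` take the same `Format`; in HLSL the resource is then declared `Texture2D<float>` (or `Texture3D<float>`).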



#6 eppo   Crossbones+   -  Reputation: 2621


Posted 12 March 2014 - 09:14 AM

What rendering API do you use?

 

In D3D you can read floats from a typed buffer, declared in HLSL as: Buffer<float> aBuffer;



#7 Haegr   Members   -  Reputation: 125


Posted 12 March 2014 - 02:51 PM

I'm using DX11. I'm not entirely sure how to put floats into the buffer; could you point me to a code sample? The only way I know is the following, and that's just with ints.

D3D11_MAPPED_SUBRESOURCE mappedResource;
result = deviceContext->Map(m_renderTargetTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if(result != S_OK)
{
	return false;
}

UCHAR* pTexels = (UCHAR*)mappedResource.pData;
//For 3D do the same but add "depthStart" as mappedResource.DepthPitch * depth
for( UINT row = 0; row < textureDesc.Height; row++ )
{
	// Byte offset of this row (RowPitch is given in bytes)
	UINT rowStart = row * mappedResource.RowPitch;
	for( UINT col = 0; col < textureDesc.Width; col++ )
	{
		// Column offset: column index * 4 bytes (one byte per R,G,B,A channel)
		UINT colStart = col * 4;
		if( row > (textureDesc.Height*0.4) && row < (textureDesc.Height*0.6) )
		{
			pTexels[rowStart + colStart + 0] = 255; // Red
			pTexels[rowStart + colStart + 1] = 0; // Green
			pTexels[rowStart + colStart + 2] = 0; // Blue
			pTexels[rowStart + colStart + 3] = 255; // Alpha
		}
		else
		{
			pTexels[rowStart + colStart + 0] = 0; // Red
			pTexels[rowStart + colStart + 1] = 255; // Green
			pTexels[rowStart + colStart + 2] = 0; // Blue
			pTexels[rowStart + colStart + 3] = 255; // Alpha
		}
	}
}

deviceContext->Unmap(m_renderTargetTexture, 0);

Edited by Haegr, 12 March 2014 - 03:16 PM.


#8 eppo   Crossbones+   -  Reputation: 2621


Posted 13 March 2014 - 05:10 AM

You simply cast the mapped resource to the same type as the view that describes it.

 

float* elementsOnline = (float*)mappedResource.pData;

 

Or:

 

memcpy(mappedResource.pData, elements, sizeof(float) * elementsCount);


Edited by eppo, 13 March 2014 - 05:12 AM.


#9 Haegr   Members   -  Reputation: 125


Posted 13 March 2014 - 10:42 AM

Ah right, I did that at first. I assume that any data I upload this way means that if I were to save the texture to a file, it would not look as it should, because the data is not in the format the image expects?








