Visualising 2D Fluids (passing data to the GPU)

7 comments, last by Haegr 10 years, 1 month ago

Hello there. I have recently followed Jos Stam's paper on fluids, Stable Fluids, and extended the solver to 3D, but I am having issues actually rendering it.

I am not sure what the best practice is for actually passing my arrays to the graphics card, for either 2D or 3D. Rendering the fluid in 2D should be as simple as displaying the density field, at least according to Stam, while for 3D I'll need volume rendering.

I've been searching for a while but none of my Google searches actually seem to be getting me anywhere.

Thanks.


Have you looked into iso-surfacing algorithms like Marching Cubes?

Would any iso-surfacing algorithm work? I am looking to render smoke and fire, so I was unsure whether iso-surfacing would suit them. My understanding of iso-surfacing is very limited right now, and I've only seen it used for water, not fire or smoke.

I probably should have specified that I was looking to do smoke and fire.

Another issue that has cropped up is actually passing my CPU data to the GPU. I am not sure how to pass a dynamically sized array from the CPU to the GPU, though some answers on Stack Overflow might be leading me down the right path.

You get a very obvious transition between the space inside and outside the volume, but perhaps you could do some post-process blurring and distortion, as is done with volumetric cloud rendering.

I think the technique is called "volumetric splatting" and it seems to be the best approach for clouds. Thanks very much.

Somewhat related: any idea how to pass a 2D or 3D array of floats to the GPU? I've read they should be passed as 2D/3D textures, but the CPU-mapped texture I have only takes ints, and I am not sure how to make it accept floats instead.

What rendering API do you use?

In D3D you can read floats from a typed buffer bound through a shader resource view, declared in HLSL as: Buffer<float> aBuffer;

I'm using DX11. I'm not entirely sure how to put floats into the buffer; could you point me to a code sample? The only way I know how to do it is the following, and that's just with ints.


D3D11_MAPPED_SUBRESOURCE mappedResource;
HRESULT result = deviceContext->Map(m_renderTargetTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if(FAILED(result))
{
	return false;
}

UCHAR* pTexels = (UCHAR*)mappedResource.pData;
//For 3D, do the same but offset each slice by mappedResource.DepthPitch * depth
for( UINT row = 0; row < textureDesc.Height; row++ )
{
	//Byte offset of this row: row index * row pitch (RowPitch is in bytes)
	UINT rowStart = row * mappedResource.RowPitch;
	for( UINT col = 0; col < textureDesc.Width; col++ )
	{
		//Byte offset within the row: column index * number of channels (r,g,b,a)
		UINT colStart = col * 4;
		//Colour the middle band of rows red (row runs over the height, not the width)
		if( row > (textureDesc.Height*0.4) && row < (textureDesc.Height*0.6) )
		{
			pTexels[rowStart + colStart + 0] = 255; // Red
			pTexels[rowStart + colStart + 1] = 0; // Green
			pTexels[rowStart + colStart + 2] = 0; // Blue
			pTexels[rowStart + colStart + 3] = 255; // Alpha
		}
		else
		{
			pTexels[rowStart + colStart + 0] = 0; // Red
			pTexels[rowStart + colStart + 1] = 255; // Green
			pTexels[rowStart + colStart + 2] = 0; // Blue
			pTexels[rowStart + colStart + 3] = 255; // Alpha
		}
	}
}

deviceContext->Unmap(m_renderTargetTexture, 0);

You simply cast the mapped pointer to the element type of the view that describes it.

float* elementsOnline = (float*)mappedResource.pData;

Or:

memcpy(mappedResource.pData, elements, sizeof(float) * elementsCount);

Ah right. I did that at first. I assume that if I write data this way and then save the texture to a file, it won't look as it should, because the float data isn't in the format the image writer expects?

This topic is closed to new replies.
