Tordin

dx10 - Texture2D::Map() - E_INVALIDARG (can't get it working)


Hi! I am working on a heightmap function, which means I need to load an image file, read all the pixel values, and then apply them to the up axis of the terrain. My problem is that when I try to map the Texture2D, I get an E_INVALIDARG result, so I never get the data I need to read the pixel values. I can't get my head around this, so please give me a push in the right direction! By the way, here is the code of d00m!
void Terrain::generateByHeightMap(LPCWSTR targetFile, ID3D10Device *gDev)
{
    // Initialize D3D10 device...
    HRESULT hr;
    D3DX10_IMAGE_LOAD_INFO loadInfo;
    D3D10_TEXTURE2D_DESC desc;
    D3D10_MAPPED_TEXTURE2D mappedTexture;
    ID3D10Texture2D *pTexture2D;
    ID3D10Resource *pTexture = NULL;
    ZeroMemory(&desc, sizeof(desc));

    hr = D3DX10CreateTextureFromFile(gDev, targetFile, NULL, NULL, &pTexture, NULL);
    if(FAILED(hr))
        MessageBox(NULL, L"Can't load heightmap!", L"Error", MB_ICONERROR);

    D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc;
    D3D10_RESOURCE_DIMENSION type;
    ZeroMemory(&type, sizeof(type));
    pTexture->GetType(&type);
    switch(type)
    {
    case D3D10_RESOURCE_DIMENSION_TEXTURE2D:
        {
            pTexture2D = (ID3D10Texture2D*)pTexture;
            pTexture2D->GetDesc(&desc);
            desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
        }
    }

    hr = pTexture2D->Map(D3D10CalcSubresource(0, 1, 1), D3D10_MAP_READ, 0, &mappedTexture);
    pTexture2D->GetDesc(&desc);

    const UINT WIDTH = maxU;
    const UINT HEIGHT = maxV;

    int count = 0;
    FLOAT* pTexels = (FLOAT*)mappedTexture.pData;
    for(UINT row = 0; row < HEIGHT; row++)
    {
        UINT rowStart = row;
        for(UINT col = 0; col < WIDTH; col++)
        {
            float height = 0.0f;
            height += pTexels[rowStart + col*4 + 0]; // Red
            height += pTexels[rowStart + col*4 + 1]; // Green
            height += pTexels[rowStart + col*4 + 2]; // Blue
            height += pTexels[rowStart + col*4 + 3]; // Alpha
            map[count].pos.y = height;
            count++;
        }
    }
    pTexture2D->Unmap(D3D10CalcSubresource(0, 0, 0));
    terrain->SetVertexData(0, map);
    terrain->CommitToDevice();
}

Any ideas, tutorials or info would be helpful! Or even better, if you find the error.

Firstly, all the code dealing with the desc variable is redundant. You don't actually do anything with that information, and the line desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ; doesn't change your ability to access the texture from the CPU. All you've done is modify a field in a local variable.

Secondly, your usage of D3D10CalcSubresource is likely incorrect. From the documentation:
Quote:
inline UINT D3D10CalcSubresource(
UINT MipSlice,
UINT ArraySlice,
UINT MipLevels
);


Parameters
MipSlice
[in] A zero-based index into an array of subtextures; 0 indicates the first, most detailed subtexture (or mipmap level).
ArraySlice
[in] The zero-based index of the first texture to use (in an array of textures).
MipLevels
[in] Number of mipmap levels (or subtextures) to use.


The indices are zero based. That means that unless your texture is a texture array, you should be passing 0 as the second argument instead of 1. Your call to Unmap also doesn't match your call to Map - the arguments to D3D10CalcSubresource are different.

Thirdly, if you use D3DX10CreateTextureFromFile with NULL as the third parameter, you're not guaranteed to get back a texture which allows read access. You should pass in a D3DX10_IMAGE_LOAD_INFO structure and explicitly specify the D3D10_CPU_ACCESS_READ flag to ensure that you have CPU read access. Consult the D3DX10_IMAGE_LOAD_INFO documentation for more information.

The CalcSubresource arguments were set to (0, 0, 1) at first, but I have changed them around while testing. And secondly, when I fill the loadInfo with read access, width, height and some other stuff, CreateTextureFromFile fails.

This is why I am so confused with this: I have pasted the code from the SDK, but it still won't work.

You should not cast the resource to a texture2d. It's best to use QueryInterface() on the resource and ask for the texture2D interface.

Once you do that, get the texture2D description from your loaded texture and post the values of the fields here. That will provide enough information to indicate what is happening and why your code is failing.

Okay, here is the description.

Width = 32;
Height = 32;
MipLevels = 6;
ArraySize = 1;
Format = DXGI_FORMAT_R8G8B8A8_UNORM;
Usage = D3D10_USAGE_DEFAULT;
BindFlags = 8;
CPUAccessFlags = 0;
MiscFlags = 0;

cheers!

So BindFlags = D3D10_BIND_SHADER_RESOURCE, which is fine.

You have to have CPUAccessFlags = D3D10_CPU_ACCESS_READ = 0x20000L; in order to map the resource for reading.

What were the values of D3DX10_IMAGE_LOAD_INFO that you tried to use when loading the texture?

I did use NULL, but I have been trying most of the parameters with the loadInfo struct.
That's why I am so confused: however much I change the code, it still results in a problem!

Just as Sc4Freak said, you'll need to provide your own D3DX10_IMAGE_LOAD_INFO in order to get read access to the resource.

You can use the D3DX10_DEFAULT value to keep the original information for fields you don't want to change.

You have to modify the bind, usage, and access values in order to get CPU read capability.

Try:

Width = D3DX10_DEFAULT;
Height = D3DX10_DEFAULT;
Depth = D3DX10_DEFAULT;
FirstMipLevel = D3DX10_DEFAULT;
MipLevels = D3DX10_DEFAULT;
ArraySize = D3DX10_DEFAULT;
Usage = D3D10_USAGE_STAGING; // ** required in order to read the data
Format = D3DX10_DEFAULT;
BindFlags = 0; // ** you cannot bind a staging resource that is readable by cpu
CPUAccessFlags = D3D10_CPU_ACCESS_READ;
MiscFlags = D3DX10_DEFAULT;
Filter = D3DX10_FILTER_NONE;
MipFilter = D3DX10_FILTER_NONE;
pSrcInfo = NULL;


Nope, it's still returning E_INVALIDARG, and I did exactly as you said.

here is the entire code:

// Initialize D3D10 device...
HRESULT hr;
D3DX10_IMAGE_LOAD_INFO loadInfo;
D3D10_TEXTURE2D_DESC desc;
D3D10_MAPPED_TEXTURE2D mappedTexture;
ID3D10Texture2D *pTexture2D;
ID3D10Resource *pTexture = NULL;

ZeroMemory(&loadInfo, sizeof(D3DX10_IMAGE_LOAD_INFO));
loadInfo.Width = D3DX10_DEFAULT;
loadInfo.Height = D3DX10_DEFAULT;
loadInfo.Depth = D3DX10_DEFAULT;
loadInfo.FirstMipLevel = D3DX10_DEFAULT;
loadInfo.MipLevels = D3DX10_DEFAULT;
loadInfo.Usage = D3D10_USAGE_STAGING;
loadInfo.BindFlags = 0;
loadInfo.CpuAccessFlags = D3D10_CPU_ACCESS_READ;
loadInfo.MiscFlags = D3DX10_DEFAULT;
loadInfo.Filter = D3DX10_FILTER_NONE;
loadInfo.MipFilter = D3DX10_FILTER_NONE;
loadInfo.pSrcInfo = NULL;

hr = D3DX10CreateTextureFromFile(gDev, targetFile, NULL, NULL, &pTexture, NULL);
if(FAILED(hr))
    MessageBox(NULL, L"Can't load heightmap!", L"Error", MB_ICONERROR);

D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc;
D3D10_RESOURCE_DIMENSION type;
//ZeroMemory(&type, sizeof(type));
pTexture->GetType(&type);
switch(type)
{
case D3D10_RESOURCE_DIMENSION_TEXTURE2D:
    {
        pTexture->QueryInterface(__uuidof(ID3D10Texture2D), (LPVOID*)&pTexture2D);
        pTexture->Release();
        pTexture2D->GetDesc(&desc);
    }
}

hr = pTexture2D->Map(D3D10CalcSubresource(0, 0, 1), D3D10_MAP_READ, 0, &mappedTexture);
pTexture2D->GetDesc(&desc);

const UINT WIDTH = maxU;
const UINT HEIGHT = maxV;

int count = 0;
FLOAT* pTexels = (FLOAT*)mappedTexture.pData;
for(UINT row = 0; row < HEIGHT; row++)
{
    UINT rowStart = row;
    for(UINT col = 0; col < WIDTH; col++)
    {
        float height = 0.0f;
        height += pTexels[rowStart + col*4 + 0]; // Red
        height += pTexels[rowStart + col*4 + 1]; // Green
        height += pTexels[rowStart + col*4 + 2]; // Blue
        height += pTexels[rowStart + col*4 + 3]; // Alpha
        map[count].pos.y = height;
        count++;
    }
}
pTexture2D->Unmap(D3D10CalcSubresource(0, 0, 1));
terrain->SetVertexData(0, map);
terrain->CommitToDevice();

[Edited by - Tordin on July 24, 2009 1:48:11 PM]

You need to pass your loadInfo to the function call. You need:

hr = D3DX10CreateTextureFromFile( gDev, targetFile, &loadInfo, NULL, &pTexture, NULL );

Without using the loadInfo structure you aren't modifying anything.

Whoops, I missed including that. But when I did, I instead got a D3DERR_INVALIDCALL, or some -2000045 value, which I looked up in the DirectX error lookup tool.

Any new ideas?
