shultays

Is it possible to read a pixel from a render target?


Yes, you can. For Direct3D 9 you call LockRect() on an IDirect3DSurface9*, which gives you a pointer to the surface's memory. Then, depending on the format of the surface, you look 'into' the memory at the correct location using the returned pointer and pitch.
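
For illustration, a minimal sketch of that kind of read, assuming an A8R8G8B8 surface pSurface that was created lockable and that x and y are the texel coordinates you want (error handling mostly omitted):

D3DLOCKED_RECT lr;
RECT pixelRect = { x, y, x + 1, y + 1 };   // lock only the 1x1 region of interest
if (SUCCEEDED(pSurface->LockRect(&lr, &pixelRect, D3DLOCK_READONLY)))
{
    // With a 1x1 lock, pBits points straight at the requested texel.
    D3DCOLOR colour = *reinterpret_cast<const D3DCOLOR*>(lr.pBits);
    pSurface->UnlockRect();
    // ... use 'colour' ...
}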

Wow, I was rendering onto textures by locking them and changing them pixel by pixel, and that can cause FPS drops for large paintings. I thought locking render targets was impossible (I don't know why, I probably read something like that somewhere or misunderstood something).

My project requires reading from textures, so I thought render targets were not an option :/

It's not as cut and dried as "locking and unlocking will be slow"; it depends on how the resources you are locking and unlocking have been created, i.e. where they live in memory, their usage flags, and so on. May I ask why you are rendering to the textures using so much locking? It sounds fishy.

Quote:
Original post by Dave
It's not as cut and dried as "locking and unlocking will be slow"; it depends on how the resources you are locking and unlocking have been created, i.e. where they live in memory, their usage flags, and so on. May I ask why you are rendering to the textures using so much locking? It sounds fishy.


Because I didn't know render targets are lockable; I am pretty sure I read something like that somewhere. I will use render targets for rendering onto textures now.

I am implementing a painting system in my project: you can paint the terrain using your paint gun. But I also need to know whether I am stepping on paint or not, so I must be able to read from my texture too.

Microsoft says this (link):

"Textures placed in the D3DPOOL_DEFAULT pool cannot be locked unless they are dynamic textures or they are private, FOURCC, driver formats"

Render targets require D3DPOOL_DEFAULT. I guess FOURCC means the A8R8G8B8 format? So I can lock an A8R8G8B8 texture even if it is a render target?

And also, if I use a texture as a render target, it shouldn't cost too much, right? At least much less than locking the texture and rendering on it. For reading, I am locking the texture and reading a single pixel; that shouldn't become a problem either if I specify my dirty rect as one pixel and make the lock read-only, right?

A FOURCC format is a format defined by four letters, such as 'ATIC'.

You can create lockable render targets using CreateRenderTarget. However, as far as I remember, you can't create textures which are both render targets and lockable.
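
As a rough sketch of what that CreateRenderTarget call looks like (device, width and height are assumed to exist, and the format is just an example):

IDirect3DSurface9* pLockableRT = NULL;
device->CreateRenderTarget(
    width, height,
    D3DFMT_A8R8G8B8,
    D3DMULTISAMPLE_NONE, 0,
    TRUE,                  // Lockable = TRUE, which render-target textures don't allow
    &pLockableRT, NULL);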

You can go the GetRenderTargetData way, but I'd suggest that you try to do everything on the graphics card, for example drawing on the texture by rendering onto it instead of locking and changing the data. I didn't understand what you need to change if you're "stepping on paint", so I can't say how difficult this might be to implement.
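
And a minimal sketch of the "render onto it instead of locking" idea; paintTexture, device and DrawPaintSplat() are assumed/hypothetical names, and BeginScene/EndScene plus state setup are omitted:

IDirect3DSurface9* pPaintSurface = NULL;
IDirect3DSurface9* pOldRT = NULL;

paintTexture->GetSurfaceLevel(0, &pPaintSurface);   // surface of the render-target texture
device->GetRenderTarget(0, &pOldRT);                // remember the current target

device->SetRenderTarget(0, pPaintSurface);
DrawPaintSplat();                                   // hypothetical: draw the new paint splat

device->SetRenderTarget(0, pOldRT);                 // restore the original target
pOldRT->Release();
pPaintSurface->Release();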

Quote:
Original post by Adam_42
You can use GetRenderTargetData() to read it with the CPU.


Quote:
Original post by ET3D
A FOURCC format is a format defined by four letters, such as 'ATIC'.

You can create lockable render targets using CreateRenderTarget. However, as far as I remember, you can't create textures which are both render targets and lockable.

You can go the GetRenderTargetData way, but I'd suggest that you try to do everything on the graphics card, for example drawing on the texture by rendering onto it instead of locking and changing the data. I didn't understand what you need to change if you're "stepping on paint", so I can't say how difficult this might be to implement.


Thanks for the replies.

I will try to use render targets and the GPU for rendering onto textures, and GetRenderTargetData for reading from them. Can you explain how I should use GetRenderTargetData? Do I need two textures now, one as the render target and another used as a surface in the GetRenderTargetData call? So I render to the render target and, after rendering, call GetRenderTargetData with those two as input and read from the surface?

And also, sorry about my English, it is not my native language; I couldn't explain well why I need this.

In my game (it is called Pigment, btw :D) players can paint the terrain with their team colors using paint guns. There are two teams (blue and red); both are trying to capture a flag and return it to their base, but they can only move the flag while they are standing on terrain painted in their team's color.



Like in this example, the red player can move freely on red ground (which was painted by him), but when he leaves the red area he drops the flag.

For that I need to be able to both read from and write to textures (each model has a texture for storing painted areas).

You need a surface in D3DPOOL_DEFAULT to render to, and another one in D3DPOOL_SYSTEMMEM to copy the render target data into and lock.


// Assumed members of the enclosing class:
//   IDirect3DDevice9* device;  IDirect3DTexture9* tex;
//   IDirect3DSurface9 *surface_rt, *surface;  D3DLOCKED_RECT lr;  void* bits;

void createRenderTarget(D3DFORMAT format, int width, int height, bool lockable = false)
{
    // Render-target textures must live in D3DPOOL_DEFAULT.
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET, format, D3DPOOL_DEFAULT, &tex, NULL);
    tex->GetSurfaceLevel(0, &surface_rt);

    if (lockable)
    {
        // System-memory surface that the render target is copied into for locking.
        device->CreateOffscreenPlainSurface(width, height, format, D3DPOOL_SYSTEMMEM, &surface, NULL);
    }
}

void fetchData()
{
    // Copy the GPU render target into the system-memory surface.
    device->GetRenderTargetData(surface_rt, surface);
}

template<class T>
void lock(RECT *rect, DWORD flags)
{
    surface->LockRect(&lr, rect, flags);
    bits = (T*)lr.pBits;
}

void unlock()
{
    surface->UnlockRect();
}




Is there a reason why you need to be using the GPU for this sort of calculation? Why are you not creating a separate buffer in regular system memory and working off that? Depending on the number of teams you could probably get away with a big array of bytes (i.e. one byte per 'texel') and avoid the massive hassle of lockable render target support (which varies from card to card, btw), video memory transfers, etc. You're placing a lot of unnecessary constraints on your implementation, methinks :)
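
As a rough illustration of that suggestion (the names, mask resolution and team encoding here are made up, assuming one byte per mask cell mapped onto the paint texture at some fixed ratio):

#include <vector>

// Hypothetical CPU-side paint mask kept in step with the GPU texture.
// One byte per cell: 0 = unpainted, 1 = red, 2 = blue.
static const int MASK_W = 256;
static const int MASK_H = 256;             // may be lower-res than the texture
std::vector<unsigned char> paintMask(MASK_W * MASK_H, 0);

// Record a splat in the mask at the same time as it is drawn on the GPU.
void markPainted(int mx, int my, unsigned char team)
{
    paintMask[my * MASK_W + mx] = team;
}

// Cheap CPU-side query: is this cell painted in the given team's colour?
bool isPainted(int mx, int my, unsigned char team)
{
    return paintMask[my * MASK_W + mx] == team;
}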

Quote:
Original post by scope
you need a surface in D3DPOOL_DEFAULT to render to and another one in D3DPOOL_SYSTEMMEM to copy the render target data to and lock.

*** Source Snippet Removed ***


Thank you!

Quote:
Original post by InvalidPointer
Is there a reason why you need to be using the GPU for this sort of calculation? Why are you not creating a separate buffer in regular system memory and working off that? Depending on the number of teams you could probably get away with a big array of bytes (i.e. one byte per 'texel') and avoid the massive hassle of lockable render target support (which varies from card to card, btw), video memory transfers, etc. You're placing a lot of unnecessary constraints on your implementation, methinks :)


I don't get it; what I am doing is not just a calculation. I am actually painting the models' textures in the game, and I also need to be able to find which parts of the models are painted.

Quote:
Original post by shultays
Quote:
Original post by InvalidPointer
Is there a reason why you need to be using the GPU for this sort of calculation? Why are you not creating a separate buffer in regular system memory and working off that? Depending on the number of teams you could probably get away with a big array of bytes (i.e. one byte per 'texel') and avoid the massive hassle of lockable render target support (which varies from card to card, btw), video memory transfers, etc. You're placing a lot of unnecessary constraints on your implementation, methinks :)


I don't get it; what I am doing is not just a calculation. I am actually painting the models' textures in the game, and I also need to be able to find which parts of the models are painted.

Sure, I understand that. What I'm suggesting is that you keep and maintain a separate record of what is painted CPU-side (theoretically at a lower resolution, unless you require really high-precision control) and keep it updated in tandem with the GPU representation. While there's obviously a bit of extra work involved, odds are good that you'll get better performance anyway, as you avoid the stalling/pipeline-bubble headaches associated with doing frequent video memory transfers every frame. While the approach you suggested sounds elegant at first, I suspect it may end up being more trouble than it's worth once you actually start using it in practice. Reading render targets back isn't really what graphics hardware was designed for; algorithms that run "entirely on the GPU", with no readback, are considered heavily advantageous for a reason.

Sorry I'm late to follow up. You can do what you want without reading any texture data back to the CPU by using an occlusion query. I have a sample that uses an occlusion query to count texels which meet a certain condition. It shouldn't be difficult to make this fit your needs by limiting the drawing (and therefore the counting) to the area of the texture that the character is standing over (instead of using a rectangle that covers the entire texture). You can then read the query result to see if the character is over the relevant colour.
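
For reference, a rough sketch of that occlusion-query approach in D3D9; DrawFootprintQuad() and the colour-rejecting shader behind it are hypothetical, and in practice you would read the result a frame later rather than spin on it:

IDirect3DQuery9* query = NULL;
device->CreateQuery(D3DQUERYTYPE_OCCLUSION, &query);

query->Issue(D3DISSUE_BEGIN);
DrawFootprintQuad();       // draw only the area under the character; the shader/alpha
                           // test rejects texels that aren't the team's colour
query->Issue(D3DISSUE_END);

DWORD matchingPixels = 0;
while (query->GetData(&matchingPixels, sizeof(DWORD), D3DGETDATA_FLUSH) == S_FALSE)
{
    // Spinning here only for clarity of the sketch.
}
bool onOwnPaint = (matchingPixels > 0);
query->Release();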
