Please help me!
I've managed to get pixel shaders working in DirectX 9, but I don't know how to add samplers to my pixel shaders!
I want to create a blur shader, but I don't know how to get the back buffer into the pixel shader so that I can sample around the TEXCOORDs!
I know how to write the shader, but I don't know how to attach the sampler to it. Do you use a surface, or does it only work with textures?
I've heard that you need to give the pixel shader a texture containing the back buffer, but how do you copy the data from the back buffer into an empty texture? I'm so confused.
I'm not asking anyone to program for me, but could someone give me a general idea of how to create a basic post-processing pixel shader in DirectX 9, or point me to some learning resources?
I'll appreciate any help I can get, thanks!
DirectX 9: don't know how to add samplers to pixel shaders
No, it does not work like that. Pixel-shader output always goes to the current render target (the back buffer by default). The back buffer is not a texture; it's a buffer. You choose where output goes with SetRenderTarget. If the render target you set comes from a texture, that texture is filled for you as you draw — no need to copy anything.
So it isn't possible to render PS output to separate render states other than the back buffer? What if I want to render to multiple different states, and then combine them all into the back buffer later?
When you create your device, you usually create a "back buffer" with it. This "back buffer" is a "render-target".
You can also create your own textures that are also "render-targets".
As mentioned above, the SetRenderTarget function determines which render-target you are currently drawing to.
You can use SetRenderTarget to start drawing to your own texture.
Then you can use SetRenderTarget to start drawing somewhere else, and at the same time use SetTexture to bind the first texture to one of your sampler slots, so you can read the first texture and output it to the new render-target.
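Put together, the sequence above looks roughly like this. This is a sketch of the render-to-texture flow, not working code: `rtTexture`, `rtSurface`, `width`, `height`, and the fullscreen-quad draw are placeholders, and HRESULT checking is omitted.

```cpp
// 1) Create a texture that can be used as a render-target.
IDirect3DTexture9* rtTexture = NULL;
d3ddev->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &rtTexture, NULL);

IDirect3DSurface9* rtSurface = NULL;
rtTexture->GetSurfaceLevel(0, &rtSurface);

// 2) Draw the scene into the texture.
d3ddev->SetRenderTarget(0, rtSurface);
// ... BeginScene / draw the scene / EndScene ...

// 3) Switch back to the back buffer and read the texture through a sampler.
IDirect3DSurface9* backbuffer = NULL;
d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backbuffer);
d3ddev->SetRenderTarget(0, backbuffer);
d3ddev->SetTexture(0, rtTexture);        // sampler s0 in the pixel shader
d3ddev->SetPixelShader(blur_shader);
// ... BeginScene / draw a fullscreen quad / EndScene ...

backbuffer->Release();                   // GetBackBuffer AddRef'd this surface
```

Note that D3DUSAGE_RENDERTARGET textures must live in D3DPOOL_DEFAULT, which means they are lost on a device reset and must be recreated.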
Ah, I see :] However, would creating a lot of render states slow down my program significantly? If my render states have to be textures that are 1920 x 1080 pixels, won't that be bad for the graphics card? Like, how bad would ten of them every loop be for the average consumer, do you think?
n.b. you mean "Render-targets" -- "render states" are any value that affects the current drawing operation, such as which shader/textures/vertex-buffers/render-targets have been set, or if depth-testing or alpha-blending are enabled.
A 1920*1080 RGBA 8-bit-per-channel texture is about 8 MiB, so you can easily use up a lot of your GPU's RAM if you make too many.
A lot of games reduce their memory requirements by "ping-ponging" between two render-targets.
1) Render to A
2) Render to B (reading from A)
3) Render to A (reading from B)
etc
The processing (time) cost of these kinds of post-processing effects depends on the total number of pixels drawn (whether they're in the same two targets over and over again, or whether they're in a series of unique targets).
Certain post-processing effects, such as blurring, can be done at a lower resolution so that you're drawing fewer pixels, e.g.
1) Render to A (full res) -- draw scene
2) Render to B (half res) -- read A
3) Render to C (half res) -- read B and blur horizontally
4) Render to B (half res) -- read C and blur vertically
5) Render to A (full res) -- read B
OK, well now I have another problem.
I know how to create surfaces and render to them, but the depth doesn't seem to transfer.
I have the depth buffer enabled, but my 3D objects are being drawn on top of each other. What do I do?
Here's my code:
Credit goes to DirectXTutorial.com for helping me out.
shaderbuff->GetSurfaceLevel(0, &blur_surface1);
// A render-target surface does not bring its own depth buffer: create a
// depth-stencil surface matching the render-target's dimensions
// (CreateDepthStencilSurface) and bind it here with SetDepthStencilSurface,
// otherwise depth testing on the new target can misbehave.
d3ddev->SetRenderTarget(0, blur_surface1);
d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_ARGB(1, 0, 0, 0), 1.0f, 0);
d3ddev->Clear(0, NULL, D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
d3ddev->BeginScene();
d3ddev->SetFVF(CUSTOMFVF);
if(playing == 1)
{
    D3DXMATRIX matView;
    D3DXMatrixLookAtLH(&matView,
                       &D3DXVECTOR3(x, y + 1, z + 10),
                       &D3DXVECTOR3(cos(yaw) + x, y + 1, sin(yaw) + z + 10),
                       &D3DXVECTOR3(0.0f, 1.0f, 0.0f));
    d3ddev->SetTransform(D3DTS_VIEW, &matView);

    D3DXMATRIX matProjection;
    D3DXMatrixPerspectiveFovLH(&matProjection,
                               D3DXToRadian(45),
                               (FLOAT)SCREEN_WIDTH / (FLOAT)SCREEN_HEIGHT,
                               0.1f,   // near plane must be > 0, or depth buffering breaks
                               100.0f);
    d3ddev->SetTransform(D3DTS_PROJECTION, &matProjection);

    D3DXMATRIX mattranslate;
    for(int i = 0; i < 16; i++)
    {
        for(int j = 0; j < 16; j++)
        {
            D3DXMatrixTranslation(&mattranslate, i * 3.0f, 0.0f, j * 3.0f);
            d3ddev->SetTransform(D3DTS_WORLD, &mattranslate);
            d3ddev->SetStreamSource(0, v_buffer, 0, sizeof(CUSTOMVERTEX));
            d3ddev->SetIndices(i_buffer);
            d3ddev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 4, 0, 2);
        }
    }
}
d3ddev->EndScene();

d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backbuffer);
d3ddev->SetRenderTarget(0, backbuffer);
d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);   // Clear takes a float depth and DWORD stencil, not NULL
d3ddev->Clear(0, NULL, D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
d3ddev->BeginScene();
//d3ddev->SetPixelShader(blur_shader);
d3ddev->SetTexture(0, texture1);
d3ddev->StretchRect(blur_surface1, &rect, backbuffer, &rect2, D3DTEXF_NONE);
if(playing == 0)
{
    for(UINT i = 0; i < objectlist.size(); i++)
    {
        objectlist[i].Render();
    }
}
d3ddev->EndScene();
d3ddev->Present(NULL, NULL, NULL, NULL);
d3ddev->SetTexture(0, NULL);
backbuffer->Release();   // GetBackBuffer AddRef'd this surface
Credit goes to DirectXTutorial.com for helping me out.
This topic is closed to new replies.