Hi, I'm on Rastertek tutorial 42 (soft shadows), which uses a blur shader and runs extremely slowly.
He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it.
The way he does it is:
1. Project the objects in the scene to a render target using the depth shader.
2. Draw black and white shadows on another render target using those depth textures.
3. Blur the black/white shadow texture produced in step 2 by
a) rendering it to a smaller texture
b) vertical / horizontal blurring that texture
c) rendering it back to a bigger texture again.
4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity.
So this uses a ton of render textures, and I just added more than one light, which multiplies the number of render textures required.
Is there any easy way I can optimize the super-expensive blur shader that wouldn't require a whole new, complicated system?
Like combining some of these render textures into one, for example?
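To illustrate what I mean by combining them: e.g. packing each light's shadow texture side by side into one atlas and offsetting the UVs per light. A rough CPU sketch of just the math (the names are mine, not from the tutorial):

```cpp
#include <cassert>

// Hypothetical: pack N per-light shadow maps as columns in one atlas texture,
// and compute the sub-rectangle a given light renders into / samples from.
struct Rect { float x, y, w, h; };  // normalized [0,1] atlas coordinates

Rect lightSubRect(int lightIndex, int lightCount) {
    float w = 1.0f / float(lightCount);          // one column per light
    return Rect{ w * float(lightIndex), 0.0f, w, 1.0f };
}

// Remap a shadow-map UV into the given light's slice of the atlas.
void atlasUV(Rect r, float u, float v, float& au, float& av) {
    au = r.x + u * r.w;
    av = r.y + v * r.h;
}
```

That way one render target (and one blur pass) could serve several lights, if it's feasible at all.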
If you know of any easy way that doesn't require too many changes, please let me know. I already had a really hard time understanding how this works, so a super complicated change would be beyond my capacity. Thanks.
*For reference, here is my repo, in which I have simplified his tutorial and added an additional light.
I have never quite been a master of the D3D9 blend modes. I know the basic stuff, but I've been trying for a while to get a multiply/add blending mode. The best I can figure out is mult2x, but that isn't quite what I want. Basically, I wonder if there is a way to multiply by any color darker than 0.5 and add by any color lighter than that? I don't know, maybe this system is too limited...
After implementing skinning with a compute shader, I want to implement skinning with the vertex shader stream-out method to compare performance.
The following thread is a discussion about it.
Here's the recommended setup:
Use a pass-through geometry shader (point -> point), set up the stream-out stage, and set the topology to point list.
Draw the whole buffer with context->Draw(). This gives a 1:1 mapping of the vertices.
Later, bind the stream-out buffer as a vertex buffer and bind the index buffer of the original mesh.
Draw with DrawIndexed as you would with the original mesh (or whatever draw call you had).
I understand why a point list is used as input: with the normal vertex topology as input, the output would be a stream of expanded, per-primitive vertices that would blow up the vertex buffer. I assume an index buffer would then be needless?
But how can you transform position and normal in one step when feeding the pseudo vertex/geometry shader with a point list?
In my vertex shader I first calculate the resulting transform matrix from the bone indices (4) and weights (4), and then transform position and normal with that same resulting matrix.
Do I have to run 2 passes, one for transforming the position and one for transforming the normal?
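To illustrate the single pass I mean, here is a CPU sketch with made-up types (the real version would read the bone palette inside the shader): blend the four bone matrices by weight once, then transform the position with the full affine matrix and the normal with just its rotation part.

```cpp
#include <cassert>
#include <cmath>

// One-pass skinning sketch: one blended matrix transforms both attributes.
struct Mat3x4 { float m[3][4]; };  // rows = rotation columns | translation
struct Vec3 { float x, y, z; };

Mat3x4 blendBones(const Mat3x4 bones[4], const float w[4]) {
    Mat3x4 r = {};
    for (int b = 0; b < 4; ++b)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 4; ++j)
                r.m[i][j] += w[b] * bones[b].m[i][j];
    return r;
}

Vec3 transformPoint(const Mat3x4& m, Vec3 p) {   // rotate + translate (position)
    return { m.m[0][0]*p.x + m.m[0][1]*p.y + m.m[0][2]*p.z + m.m[0][3],
             m.m[1][0]*p.x + m.m[1][1]*p.y + m.m[1][2]*p.z + m.m[1][3],
             m.m[2][0]*p.x + m.m[2][1]*p.y + m.m[2][2]*p.z + m.m[2][3] };
}

Vec3 transformDir(const Mat3x4& m, Vec3 n) {     // rotation only (normal)
    return { m.m[0][0]*n.x + m.m[0][1]*n.y + m.m[0][2]*n.z,
             m.m[1][0]*n.x + m.m[1][1]*n.y + m.m[1][2]*n.z,
             m.m[2][0]*n.x + m.m[2][1]*n.y + m.m[2][2]*n.z };
}
```

Both outputs come from the same blended matrix, so I'd hope one stream-out pass can write position and normal side by side in the output vertex.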
I think it could be done better?
Thanks for any help.
I am new to DirectX. I just followed some tutorials online and started to program. It was going well until I ran into this problem of loading my own 3D models from 3ds Max, exported as .x, which is supported by DirectX. I am using C++ on Visual Studio 2010 with DirectX 9. I really tried to find help on the net, but I couldn't find anything that solves my problem, and I don't know where exactly the problem is. I ran most of the samples and examples and they all worked well. Can anyone give me a hint or a solution for my problem?
Thanks in advance!