Vertex shader to pixel shader - way of sharing info?

8 comments, last by _the_phantom_ 9 years, 11 months ago

Hello everyone,

I am using DX9 and I'm having trouble optimizing some SM 3.0 shaders. I have filled every slot available for passing data from the vertex shader to the pixel shader, and I don't know how to arrange things so that the operations that are only needed in the pixel shader actually get done there... in the pixel shader.

Is there some instruction like SetStreamSource, but for the pixel shader, so that the same data streamed to the vertex shader could be fed to the pixel shader directly? Or any other way of reusing in the pixel shader the same info streamed into the vertex shader (skinning info, or even instancing info) without consuming the few valuable slots DX9 has?

So, having reached this point after burning a fair amount of time on it, I think I need some ideas. Please let them be something other than "move everything to DX11", because at the moment that isn't a possible option, hahaha.

Thanks in advance.

The output you return from the vertex shader is the input to the pixel shader.
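For concreteness, a minimal vs_3_0/ps_3_0 sketch (the names here are illustrative, not from the thread): whatever the vertex shader writes into its output semantics is what the pixel shader receives, interpolated across the triangle.

float4x4 gWorldViewProj; // assumed to be set by the application as a vertex shader constant

struct VS_OUTPUT
{
    float4 pos : POSITION;  // consumed by the rasterizer, not readable in the pixel shader
    float2 uv  : TEXCOORD0; // one of the eight TEXCOORD interpolators
    float4 col : COLOR0;    // one of the two COLOR interpolators
};

VS_OUTPUT VSMain(float4 inPos : POSITION, float2 inUV : TEXCOORD0, float4 inCol : COLOR0)
{
    VS_OUTPUT o;
    o.pos = mul(inPos, gWorldViewProj);
    o.uv  = inUV;
    o.col = inCol;
    return o;
}

float4 PSMain(float2 uv : TEXCOORD0, float4 col : COLOR0) : COLOR0
{
    // The only per-vertex data visible here is what arrived through the interpolators above.
    return col;
}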

Haha, well, I knew that from the beginning. What I am asking is whether there is any way to declare variables that both the vertex and pixel shaders can read directly, without pushing that data through the tiny COLORn/TEXCOORDn pipe, which in DX9 is two slots for color and eight for texture coordinates.

The point would be to move a lot of the calculations currently done in the vertex shader over to the pixel shader, and so on...

There are several things I want this for. One is a motion blur effect I have implemented, which needs the current and the previous screen position. I use skinning and instancing, so the current world matrix has to be calculated in the vertex shader, but the last position would ideally be computed in the pixel shader.

So to calculate that previous world matrix there, the skinning and instancing info must be passed through what seems to be the only route available: the COLORn/TEXCOORDn interpolators.

To me that seems like a real waste of precious interpolator slots, so my question is: is there any other way to do it?

Thank you.

The slots are there for communication from vertex shader to pixel shader: that's the way you pass info between them.

You need to get this "waste" mentality out of your head. It's not a "waste" if you're using them for a purpose. You need to pass the info, the slots are for passing the info, so just use (not "waste", "use") the slots.



So to calculate that previous world matrix there, the skinning and instancing info must be passed through what seems to be the only route available: the COLORn/TEXCOORDn interpolators.

Can't you just pass the result of that (i.e. the "lastPosition") to the pixel shader? That's a lot less data than the matrix, skinning info, etc...
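As a rough sketch of that suggestion (the constant names such as gPrevWorldViewProj are hypothetical): do the matrix work for both frames in the vertex shader and hand the pixel shader only the two clip-space positions it needs for the velocity. In the real skinned/instanced case those matrices would be built per vertex from the bone/instance data, but only two float4 results cross the interpolators.

float4x4 gCurrWorldViewProj; // hypothetical: current-frame transform
float4x4 gPrevWorldViewProj; // hypothetical: previous-frame transform

struct VS_OUTPUT
{
    float4 pos      : POSITION;
    float4 currClip : TEXCOORD0; // POSITION isn't readable in ps_3_0, so duplicate it
    float4 prevClip : TEXCOORD1; // previous-frame clip-space position
};

VS_OUTPUT VSMain(float4 inPos : POSITION)
{
    VS_OUTPUT o;
    o.pos      = mul(inPos, gCurrWorldViewProj);
    o.currClip = o.pos;
    o.prevClip = mul(inPos, gPrevWorldViewProj); // all the heavy matrix work stays in the VS
    return o;
}

float4 PSMain(float4 currClip : TEXCOORD0, float4 prevClip : TEXCOORD1) : COLOR0
{
    // Per-pixel perspective divide, then a screen-space velocity for the blur.
    float2 curr = currClip.xy / currClip.w;
    float2 prev = prevClip.xy / prevClip.w;
    return float4(curr - prev, 0.0f, 1.0f);
}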


To me that seems like a real waste of precious interpolator slots, so my question is: is there any other way to do it?

Well the other way to get data into a pixel shader is via the textures :-). So, thinking outside the box here, you can put the data you need in a texture. Obviously this data can't come from the vertex shader, it must have been written to in a previous draw call. But your vertex shader could possibly pass a texture coordinate to the pixel shader that tells it which part of the texture to sample from to get more data. That's a pretty complicated proposition though.
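A minimal sketch of that idea, assuming the extra data has already been written into a texture (here called gDataTex; the names are made up) by the application or an earlier pass:

float4x4  gWorldViewProj;
sampler2D gDataTex; // assumed: a texture filled with the extra per-instance/per-bone data

struct VS_OUTPUT
{
    float4 pos    : POSITION;
    float2 dataUV : TEXCOORD0; // only tells the PS *where* to look, not the data itself
};

VS_OUTPUT VSMain(float4 inPos : POSITION, float2 inDataUV : TEXCOORD0)
{
    VS_OUTPUT o;
    o.pos    = mul(inPos, gWorldViewProj);
    o.dataUV = inDataUV; // e.g. an index converted to texel-centre coordinates on the CPU
    return o;
}

float4 PSMain(float2 dataUV : TEXCOORD0) : COLOR0
{
    // Fetch the extra data from the texture instead of pushing it through interpolators.
    // Point sampling is assumed so each texel comes back exactly as stored.
    float4 extraData = tex2D(gDataTex, dataUV);
    return extraData;
}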

The slots are there for communication from vertex shader to pixel shader: that's the way you pass info between them.

You need to get this "waste" mentality out of your head. It's not a "waste" if you're using them for a purpose. You need to pass the info, the slots are for passing the info, so just use (not "waste", "use") the slots.

Re: I will try it out. The thing is, there should be a way of binding vertex stream data to the pixel shader the same way it is bound to the vertex shader; it should be shared memory at the same cost, because it is exactly the same information. Thank you.

Can't you just pass the result of that (i.e. the "lastPosition") to the pixel shader? That's a lot less data than the matrix, skinning info, etc...

Re: Yes, that's what I am doing right now, but there are many faces that end up culled and still have to:

1. Acquire the previous world matrix via skinning + instancing.

2. Then build the world-view matrix from it to obtain the previous position.

Well the other way to get data into a pixel shader is via the textures :-). So, thinking outside the box here, you can put the data you need in a texture. Obviously this data can't come from the vertex shader, it must have been written to in a previous draw call. But your vertex shader could possibly pass a texture coordinate to the pixel shader that tells it which part of the texture to sample from to get more data. That's a pretty complicated proposition though.

Re: That is the only solution I had come up with too. The skinning is already done through textures, so I will try passing the indices to the pixel shader. I called it a waste because I don't understand why the data can't just be streamed in, as I do for the vertex shader with BLENDINDICES + BLENDWEIGHT... is that really not possible?

Thanks so much to everyone who answered this post. I am at the same point as before, but at least I don't feel quite so alone.

The problem you are running into comes down to the details of DX9-era hardware: the vertex and pixel shaders tended to be implemented in different silicon, with different abilities and, above all, different interconnects to memory. For the longest time DX9 hardware had no connection between the vertex shaders and the texture sampling hardware, for example.

In this case the pixel shader has access to three sources of information:
- textures
- data from the preceding vertex shader stage
- constant data "in registers"

Those are your 'data in' paths, and the only ones.
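Put as a ps_3_0 sketch (names are illustrative), those three paths look like this:

float4    gTint    : register(c0); // constant data "in registers", set with SetPixelShaderConstantF
sampler2D gDiffuse : register(s0); // a texture bound to a sampler stage

float4 PSMain(float2 uv  : TEXCOORD0,  // data interpolated from the vertex shader
              float4 col : COLOR0) : COLOR0
{
    return tex2D(gDiffuse, uv) * col * gTint;
}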

The only way to get a more complex memory subsystem is to break from hardware with those limits, which means breaking from the API; D3D11 and OpenGL 4.x allow you to attach many more streams of input data to the various shader stages (and output too, with 11.1 allowing output buffers on all stages and OpenGL 4.x having the same functionality).
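For comparison, a small D3D11 / Shader Model 5 sketch (the names are hypothetical) of the kind of extra input path being described: an arbitrary buffer bound straight to the pixel shader as a shader resource view.

// SM 5.0: buffers and textures can be bound to any shader stage as SRVs.
StructuredBuffer<float4x4> gPrevBoneMatrices : register(t0); // hypothetical per-bone data

float4 PSMain(float4 svPos : SV_Position,
              nointerpolation uint boneIndex : BONEINDEX) : SV_Target
{
    float4x4 prevBone = gPrevBoneMatrices[boneIndex]; // read structured data directly in the PS
    return float4(prevBone[3].xyz, 1.0f);             // just to show the data is usable here
}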

Thank you for that bit of history, phantom; for me at least it has been quite enlightening.

Also, I would like to apologize if my English has made anyone's eyes bleed.

I will keep going with DX9, knowing that this "era" was still a rather dark one in the relationship between graphics pipeline programmers and hardware makers, hahaha.

Welcome to the new DX11 era; I want to get there fast :). Maybe the jump there from DX9, skipping DX10, will be worth it? I am still supporting DX9 only because of compatibility issues.

Avoiding DX10 is the best plan; DX11 works on the same hardware (via feature levels that expose what the hardware can do) and is a better API in general, so there is no need to worry about the existence of DX10 at this point.

