Modify pixel position

Started by
5 comments, last by CryZe 11 years, 6 months ago
Hi,

I'm in the process of writing a shader using HLSL and I stumbled upon a dilemma.
The purpose of the shader is to create a splitscreen view by shifting the pixel position.
So you have a pixel at position (x,y) and it needs to be translated to (x',y') and (x'',y''), which should simulate a splitscreen.

In theory this should be possible, but since I'm planning to use it as a post-processing (PP) effect (the shader should only affect the pixels, not the vertices), the fragment shader can't modify the pixel position (except for the depth value), since x and y are fixed after the rasterizer.

The dilemma is whether I should drop the idea of the PP effect and make it a complete shader, meaning I could use the vertex shader as well to give me more options for modifying the pixel position before the rasterizer. Or is there another option which I overlooked?

Thanks in advance

Forcecast
You would need the source to duplicate and a target, and you would just do something similar to how texture wrapping works in the pixel shader (drawing to a texture and then putting that on a quad, duplicated using wrapping, might be faster and/or easier though).

Can you use the modulus function in shaders?

So you have the pixels 0.0-1.0 in the target, the source has 0.0-0.5, and you get the pixel to copy by doing something like source = targetPixelPos % 0.5
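A minimal HLSL sketch of that wrap/modulus idea, written as a post-process pixel shader; the texture, sampler, and input struct names here are assumptions, not part of the original shader:

```hlsl
// Duplicates the left half of the source across the full target by
// wrapping the x coordinate with fmod, as described above.
Texture2D sceneTex : register(t0);        // assumed name
SamplerState sceneSampler : register(s0); // assumed name

struct PSInput
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

float4 PSMain(PSInput input) : SV_Target
{
    // Map the target's uv.x in [0,1] back into the source's [0,0.5] range.
    float2 srcUv = float2(fmod(input.uv.x, 0.5), input.uv.y);
    return sceneTex.Sample(sceneSampler, srcUv);
}
```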

o3o

This is something that is not usually done using shaders.

Instead you just set the viewport to the split-screen area and update the scissor rectangle accordingly. (At least with OpenGL; I guess it's pretty much the same with D3D, though)

Afterwards you just render using the standard shaders.

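A sketch of that viewport/scissor approach in OpenGL; `width`, `height`, `drawScene`, and the two camera variables are assumed to exist in the application:

```cpp
// Classic split-screen with no special shaders: restrict rendering to
// each half with the viewport and scissor rectangle, then draw the
// scene once per half.
glEnable(GL_SCISSOR_TEST);

// Left half
glViewport(0, 0, width / 2, height);
glScissor (0, 0, width / 2, height);
drawScene(leftCamera);   // hypothetical helper

// Right half
glViewport(width / 2, 0, width / 2, height);
glScissor (width / 2, 0, width / 2, height);
drawScene(rightCamera);  // hypothetical helper

glDisable(GL_SCISSOR_TEST);
```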
Thank you both for your input.

I understand that using a render target is the way to go for splitscreen but for my case this doesn't apply.
The pixels need to be remapped spherically, both on the left and the right part of the screen.

Initially I used the word splitscreen to cover the principle, but it doesn't apply the way you would expect.
The viewport doesn't get divided; physically there should only be one buffer that the data gets drawn onto.

Thanks in advance

Forcecast
Like you said in your first post, the xy is fixed. You can change where you sample from though.

You can render your scene in the normal way to a render target, then do a second pass that samples from it in whatever way you want.
If you really need to 'scatter' data then your best bet is probably a compute shader.
He could still do it in a pixel shader by writing the results to an Unordered Access View at the modified positions instead of to a Render Target. I would recommend a gather approach instead of a scatter approach, though, as scatter might require synchronisation of the individual threads, while all threads are independent in a gather-based approach.
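A sketch of the recommended gather approach as an HLSL second pass: every output pixel decides which source pixel it needs and samples it, so no synchronisation is required. The spherical remap below is only a placeholder, and `sceneTex`, `linearSampler`, and the half-screen folding are assumptions:

```hlsl
Texture2D sceneTex : register(t0);          // first pass's render target
SamplerState linearSampler : register(s0);  // assumed name

float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // Fold the full-screen uv into one half; 'side' records left/right
    // in case the two halves need different mappings.
    float side = step(0.5, uv.x);                 // 0 = left, 1 = right
    float2 halfUv = float2(frac(uv.x * 2.0), uv.y);

    // Placeholder spherical distortion around the half's centre.
    float2 centered = halfUv * 2.0 - 1.0;
    float r = length(centered);
    float2 warped = (r > 0.0) ? centered * (sin(r) / r) : centered;
    float2 srcUv = saturate(warped * 0.5 + 0.5);

    return sceneTex.Sample(linearSampler, srcUv);
}
```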

