# Anaglyph DirectX 9


## Recommended Posts

Hi everyone!

I've been trying to implement stereoscopic support for my game and have managed to get it working fine in a side-by-side fashion, but I'm quite unsure how to render in anaglyph. From my understanding, one way to do this is to render one eye and then render the second eye on top, but only letting the reds blend through. I'm not sure if this is the best way to handle it, but I was wondering if anyone had any advice on how this is done.

Is there a way to set the render state to only blend reds? (I understand this might be quite costly.)

Or is there a way to render 2 buffers and then blend them at the end but only the reds on the second layer?

Is this something that would be done with a Stencil Buffer?

David

##### Share on other sites
It's probably not best to just take the "reds" from one image and the "cyans" from the other. For example, if a car is red, it would be visible in one eye and invisible in the other. This would be very confusing to the player/viewer. It's best to keep the two images as similar as possible.

If you're hoping to keep the player/viewer's eyes intact, the best solution is to desaturate the colour to greyscale and then multiply each image's colour with the filter colour (red or cyan).

You can do this in D3D9, by rendering the scene for the left eye to a render target - then rendering the scene for the right eye to a different render target.

Finally, you render a fullscreen quad to the backbuffer - and in the pixel shader:
- sample the left image and right image
- desaturate the colour values to greyscale
- multiply the new colour values with the filter colours (red and cyan)
- then add them together and output the colour

Note: due to the desaturation, this will lose ALL colour information - i.e. you won't be able to tell apart green objects from yellow objects. But it will reduce eye strain considerably and provide the most comfortable viewing experience.
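The per-pixel arithmetic of that combine pass can be sketched outside HLSL. Here is a small Python model of it; the function name, the Rec. 601 greyscale weights, and the exact filter colours are my illustrative assumptions, not taken from the post:

```python
def greyscale_anaglyph_pixel(left_rgb, right_rgb):
    """Models the combine-pass pixel shader: desaturate each eye's
    colour to greyscale, tint with that eye's filter colour, then add."""
    def luminance(rgb):
        # Rec. 601 luma weights (an assumption; any greyscale formula works)
        r, g, b = rgb
        return 0.299 * r + 0.587 * g + 0.114 * b

    red_filter = (1.0, 0.0, 0.0)   # left eye
    cyan_filter = (0.0, 1.0, 1.0)  # right eye

    l = luminance(left_rgb)
    r = luminance(right_rgb)
    # Tint each greyscale value with its filter, then sum the two eyes,
    # clamping each channel to 1.0.
    return tuple(min(1.0, l * f_l + r * f_r)
                 for f_l, f_r in zip(red_filter, cyan_filter))

# A pure green and a pure yellow object produce different greys, but both
# eyes agree on the intensity, so there is no retinal rivalry.
print(greyscale_anaglyph_pixel((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))
print(greyscale_anaglyph_pixel((1.0, 1.0, 0.0), (1.0, 1.0, 0.0)))
```

Because both eyes receive the same intensity for every object, the two images stay as similar as possible, which is exactly the goal stated above.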

I implemented this for my final university project a while back, feel free to take a look: http://dl.dropbox.co...issertation.pdf (page 26 onwards, more importantly, Appendix E/F on the last page)

Hope this helps.

##### Share on other sites
I've done this before, this is the way I did it:

1. Render each eye separately to textures (one texture for each eye)
2. Render one eye to the screen using a screenquad and a simple pixel shader, for example:
```hlsl
float4 PS_anaglyphRed(float2 iTex0 : TEXCOORD0) : COLOR
{
    float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
    return float4(tex.r, 0, 0, 1.0f);
}
```
3. Render the second texture using the same screen quad and a different shader:
```hlsl
float4 PS_anaglyphCyan(float2 iTex0 : TEXCOORD0) : COLOR
{
    float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
    return float4(0, tex.g, tex.b, 0.5f);
}
```

A few important notes:
- This requires alpha blending (the second quad has 0.5 alpha because it's rendered over the first one).
- It could of course be done in a single rendering step, using a single pixel shader that reads and combines both textures; I needed to do it separately for other reasons.

I should also mention that there are multiple anaglyph types and you need to filter the proper colours corresponding to your glasses. The most common are red-blue, red-green, red-cyan (my example), and green-magenta.
http://en.wikipedia..../Anaglyph_image

Anaglyph has some obvious problems, for example glasses with a red eye will "destroy" red objects in the scene. I also tested the green-magenta ones and they're much better.
Black & white (shades of grey) is better for the eyes, but of course you lose all colours, so it may be completely unacceptable.

EDIT: I realised that I took the code from an old version of my application and the code isn't really perfect - read my post #13 in this thread please. (The second shader doesn't need 0.5 alpha, and the blending operation should be D3DBLEND_ONE, D3DBLEND_ONE.)
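To see why the corrected blending works, here is a small Python model of the two filtered passes and the additive (D3DBLEND_ONE, D3DBLEND_ONE) combine; the function names are mine, and this only models the per-channel maths, not D3D itself:

```python
def additive_blend(dst, src):
    """Models D3DBLEND_ONE / D3DBLEND_ONE: result = dst + src, clamped."""
    return tuple(min(1.0, d + s) for d, s in zip(dst, src))

def red_pass(rgb):
    # PS_anaglyphRed: keep only the red channel
    return (rgb[0], 0.0, 0.0)

def cyan_pass(rgb):
    # PS_anaglyphCyan: keep only the green and blue channels
    return (0.0, rgb[1], rgb[2])

pixel = (0.98, 0.39, 0.20)
# Rendering the red pass, then additively blending the cyan pass on top,
# reconstructs the original colour exactly (for identical eye images).
result = additive_blend(red_pass(pixel), cyan_pass(pixel))
print(result)  # (0.98, 0.39, 0.2)
```

With additive blending the two passes never overlap in any channel, so no brightness is lost, which is why the 0.5 alpha from the original shader is unnecessary.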

##### Share on other sites
Thank you both for your advice, it's been very useful! I'll do some research into pixel shaders as I haven't worked with them before.

Thanks Again!

David

##### Share on other sites
I think you don't need to learn pixel shaders for this. You can do it this way (I hope I'm not missing something):

1. Prepare two screen quads with colors per vertex, the first quad having colors RGBA = (1.0f, 0, 0, 1.0f) and the second one (0, 1.0f, 1.0f, 0.5f).
2. Render each eye separately to textures (one texture for each eye).
3. Render one eye to the screen using the first screenquad and the first texture, with modulation of texture color and vertex color (I think that's the default fixed-function pipeline setting. If not, it can be set easily by SetRenderState or SetTextureStageState).
4. Render the second quad with the second texture, again modulation plus you need alpha blending this time (the second quad is half-transparent).

That should IMHO do the job.

##### Share on other sites
Thanks everyone for your input here. I have a question regarding the blending: by just setting the alpha to 0.5, wouldn't that darken the image, since only half the colour values would blend through?

Thanks
David

##### Share on other sites

> Thanks everyone for your input here. I have a question regarding the blending: by just setting the alpha to 0.5, wouldn't that darken the image, since only half the colour values would blend through?

Nope, it won't darken the result - don't forget that you are blending 50 % of the second image with the "full" first image. The result is 50 % of the first + 50 % of the second, which gives "100 %".

If you have the proper blending operation:
```cpp
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
```
then the blending equation for each color channel is
result = color_on_screen * (1 - rendered_alpha) + rendered_color * rendered_alpha
and in our case
result = color_on_screen * (1 - 0.5) + rendered_color * 0.5
which is
result = color_on_screen * 0.5 + rendered_color * 0.5

Imagine you have two identical images. Let's take one example pixel with colour RGB (250, 100, 50).
When you render the first quad, the pixel on the screen will be 250,100,50.
Now you render the second one and by the calculations you'll get 125+125, 50+50, 25+25 which is 250,100,50. Not darkened at all.

An important note: the first quad must be fully opaque, or you must disable blending before rendering it. If you rendered the first quad with 50 % alpha as well, it would blend with the empty backbuffer, which would probably be black (depending on how you clear the buffer), and that WOULD darken the result.
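The arithmetic above can be checked with a quick sketch; this Python snippet (names are mine) just models the SRCALPHA/INVSRCALPHA blend equation, using the 0-255 example values from this post:

```python
def alpha_blend(dst, src, src_alpha):
    """Models D3DBLEND_SRCALPHA / D3DBLEND_INVSRCALPHA:
    result = dst * (1 - alpha) + src * alpha, per channel."""
    return tuple(d * (1.0 - src_alpha) + s * src_alpha
                 for d, s in zip(dst, src))

pixel = (250, 100, 50)
# First quad rendered fully opaque, second identical quad at 50 % alpha:
print(alpha_blend(pixel, pixel, 0.5))  # (250.0, 100.0, 50.0) - not darkened
```

Note this relies on the two blended colours being identical; the 50 % lost from the top image is exactly replaced by the 50 % kept from the bottom one.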

##### Share on other sites
Maybe I'm not understanding how the blends work, but if the texture below has 0 for both green and blue, then won't it be like you're blending against black for those colours?
Let's say the first pixel is RGB(1.0f, 1.0f, 1.0f). We remove G and B and are left with RGB(1.0f, 0.0f, 0.0f). Then we blend this with 50% RGB(0.0f, 1.0f, 1.0f). Wouldn't this result in RGB(0.5f, 0.5f, 0.5f)?

Thanks for your help here!

David
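This concern is valid for channel-filtered colours; a quick Python check of both blend modes (again just modelling the equations, not D3D itself) shows why the edit in the earlier post recommends additive D3DBLEND_ONE, D3DBLEND_ONE instead of 0.5 alpha:

```python
def srcalpha_blend(dst, src, a):
    # D3DBLEND_SRCALPHA / D3DBLEND_INVSRCALPHA: dst * (1 - a) + src * a
    return tuple(d * (1.0 - a) + s * a for d, s in zip(dst, src))

def additive_blend(dst, src):
    # D3DBLEND_ONE / D3DBLEND_ONE: dst + src, clamped to 1.0
    return tuple(min(1.0, d + s) for d, s in zip(dst, src))

red_filtered = (1.0, 0.0, 0.0)   # white pixel after the red-only pass
cyan_filtered = (0.0, 1.0, 1.0)  # white pixel after the cyan-only pass

print(srcalpha_blend(red_filtered, cyan_filtered, 0.5))  # (0.5, 0.5, 0.5) - darkened
print(additive_blend(red_filtered, cyan_filtered))       # (1.0, 1.0, 1.0) - full white
```

So the 50 % alpha really does darken filtered images (each channel is non-zero in only one of the two passes), and the additive mode restores full brightness.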

##### Share on other sites
OK, that makes sense now.

Thanks Again!

David

##### Share on other sites

> 2. Render one eye to the screen using a screenquad and a simple pixel shader, for example:
>
> ```hlsl
> float4 PS_anaglyphRed(float2 iTex0 : TEXCOORD0) : COLOR
> {
>     float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
>     return float4(tex.r, 0, 0, 1.0f);
> }
> ```
>
> 3. Render the second texture using the same screen quad and a different shader:
>
> ```hlsl
> float4 PS_anaglyphCyan(float2 iTex0 : TEXCOORD0) : COLOR
> {
>     float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
>     return float4(0, tex.g, tex.b, 0.5f);
> }
> ```

While this will appear to work at first, after you've tested a few scenes you'll notice the images will look wrong - for example, a red teapot will be clearly visible in one eye, and invisible in the other. Hence:

> Anaglyph has some obvious problems, for example glasses with a red eye will "destroy" red objects in the scene.

It's not the glasses that are the problem here; it's that each image needs to be transformed to match its eye's filter.

If one eye uses filter A and the other uses filter B, it's best to make sure the first image uses filter A and the second uses filter B.

This might be a better way:

```hlsl
float4 PS_anaglyphRed(float2 iTex0 : TEXCOORD0) : COLOR
{
    float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
    float lum = dot(tex, float3(0.299f, 0.587f, 0.114f)); // desaturate to greyscale
    tex = float3(lum, lum, lum);
    tex *= float3(1.0f, 0, 0); // red filter
    return float4(tex.rgb, 1.0f);
}
```

Then render the second texture using the same screen quad and a different shader:

```hlsl
float4 PS_anaglyphCyan(float2 iTex0 : TEXCOORD0) : COLOR
{
    float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
    float lum = dot(tex, float3(0.299f, 0.587f, 0.114f)); // desaturate to greyscale
    tex = float3(lum, lum, lum);
    tex *= float3(0.7f, 1.0f, 0); // cyan filter
    return float4(tex.rgb, 1.0f);
}
```

For an anaglyph solution which doesn't cause retinal rivalry (differences between the two images confusing the brain), desaturating to greyscale and then colour filtering is the safest bet. Unfortunately, as far as I'm aware, you can't desaturate through blending - at least, not easily, and not in a single pass. So you might need to use render targets / render textures and pixel shaders.
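The red-teapot example can be made concrete with a quick numeric sketch in Python (the Rec. 601 greyscale weights are a standard assumption, not from the post):

```python
def luminance(rgb):
    # Rec. 601 greyscale weights (an assumption; any luma formula works)
    return 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]

red_teapot = (1.0, 0.0, 0.0)

# Naive channel filtering: the two eyes see wildly different intensities.
left_naive = red_teapot[0]                       # red eye: full brightness
right_naive = max(red_teapot[1], red_teapot[2])  # cyan eye: nothing -> rivalry

# Desaturate first: both eyes see the same intensity, only tinted differently.
grey = luminance(red_teapot)
left_safe = grey   # red eye: grey tinted by the red filter
right_safe = grey  # cyan eye: the same grey tinted by the cyan filter

print(left_naive, right_naive)  # 1.0 0.0
print(left_safe, right_safe)    # 0.299 0.299
```

The greyscale path sacrifices hue, but both eyes always agree on brightness, which is the property that prevents retinal rivalry.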
