
Anaglyph DirectX 9



#1 thekiwimaddog   Members   -  Reputation: 154


Posted 15 April 2012 - 10:36 AM

Hi everyone!

I've been trying to implement stereoscopic support for my game and have managed to get this working fine in a side-by-side fashion, but I'm quite unsure how to render in anaglyph. From my understanding, one way to do this is to render one eye and then render the second eye on top, letting only the reds blend through. I'm not sure if this is the best way to handle it, but I was wondering if anyone had any advice on how this is done.

Is there a way to set the render state to blend only the reds? (I understand this might be quite costly.)

Or is there a way to render two buffers and then blend them at the end, taking only the reds from the second layer?

Is this something that would be done with a Stencil Buffer?
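
For the render-state idea, this is roughly what I had in mind - restricting colour writes while drawing the second eye (just a guess at the states, I haven't tried it):

// Render the first eye normally, writing all channels.
device->SetRenderState(D3DRS_COLORWRITEENABLE,
	D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
	D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
// ... draw the scene for eye one ...

// Then render the second eye with only the red channel writable.
device->SetRenderState(D3DRS_COLORWRITEENABLE, D3DCOLORWRITEENABLE_RED);
// ... draw the scene for eye two ...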

Thanks in advance!

David


#2 MajorTom   Members   -  Reputation: 659


Posted 15 April 2012 - 12:24 PM

It's probably not best to just take the "reds" from one image and the "cyans" from the other. For example, if a car is red, this would make it visible in one eye and not visible in the other. This would be very confusing to the player/viewer. It's best to keep the images as similar as possible.

If you're hoping to keep the player/viewer's eyes intact, the best solution is to saturate the colour and then multiply each image's colour with the filter colour (red or cyan).

You can do this in D3D9, by rendering the scene for the left eye to a render target - then rendering the scene for the right eye to a different render target.

Finally, you render a fullscreen quad to the backbuffer - and in the pixel shader:
- sample the left image and right image
- saturate the colour values
- multiply the new colour values with the filter colours (red and cyan)
- then add them together and output the colour
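
Setting up the two render targets in D3D9 might look roughly like this (a sketch - sizes, formats and variable names are placeholders):

IDirect3DTexture9 *leftTex = NULL, *rightTex = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
	D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &leftTex, NULL);
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
	D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &rightTex, NULL);

IDirect3DSurface9 *leftSurf = NULL;
leftTex->GetSurfaceLevel(0, &leftSurf);
device->SetRenderTarget(0, leftSurf);
// ... render the scene from the left-eye camera, then repeat for the right ...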

Note: due to the saturation, this will lose ALL colour information - i.e. you won't be able to tell apart green objects from yellow objects. But it will reduce eye strain considerably and provide the most comfortable viewing experience.

I implemented this for my final university project a while back, feel free to take a look: http://dl.dropbox.co...issertation.pdf (page 26 onwards, more importantly, Appendix E/F on the last page)

Hope this helps.

Saving the world, one semi-colon at a time.


#3 Tom KQT   Members   -  Reputation: 1348


Posted 16 April 2012 - 01:44 AM

I've done this before, this is the way I did it:

1. Render each eye separately to textures (one texture for each eye)
2. Render one eye to the screen using a screenquad and a simple pixel shader, for example:
float4 PS_anaglyphRed(float2 iTex0 : TEXCOORD0) : COLOR
{
	float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
	return float4(tex.r, 0, 0, 1.0f);
}
3. Render the second texture using the same screen quad and a different shader:
float4 PS_anaglyphCyan(float2 iTex0 : TEXCOORD0) : COLOR
{
	float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
	return float4(0, tex.g, tex.b, 0.5f);
}

Few important notes:
- This requires alpha blending (the second quad has 0.5 alpha because it's rendered over the first one).
- It could of course be done in a single rendering step, using a single pixel shader that reads and combines both textures (a sketch of this is below). I needed to do it separately for other reasons.

I should also mention that there are multiple anaglyph types, and you need to filter the proper colors corresponding to your glasses. The most common are red-blue, red-green, red-cyan (my example), and green-magenta.
http://en.wikipedia..../Anaglyph_image

Anaglyph has some obvious problems; for example, glasses with a red eye will "destroy" red objects in the scene. I also tested the green-magenta ones and they're much better.
Black & white (shades of gray) is better for the eyes, but of course you lose all colors, so it may be completely unacceptable.

EDIT - I realised that I took the code from an old version of my application and the code isn't really perfect; please read my post #13 in this thread.
(The second shader doesn't need 0.5 alpha, and the blending operation should be D3DBLEND_ONE, D3DBLEND_ONE.)
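
For reference, the single-pass variant mentioned above would read both textures in one shader and skip blending altogether. A minimal sketch, assuming a second sampler for the right eye:

float4 PS_anaglyphCombined(float2 iTex0 : TEXCOORD0) : COLOR
{
	// Red from the left eye, green and blue from the right eye.
	float3 left = tex2D(texSamplerLeft, iTex0).rgb;
	float3 right = tex2D(texSamplerRight, iTex0).rgb;
	return float4(left.r, right.g, right.b, 1.0f);
}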

#4 thekiwimaddog   Members   -  Reputation: 154


Posted 16 April 2012 - 12:25 PM

Thank you both for your advice; it's been very useful! I'll do some research into pixel shaders as I haven't worked with them before.

Thanks Again!

David

#5 Tom KQT   Members   -  Reputation: 1348


Posted 17 April 2012 - 03:11 AM

I don't think you need to learn pixel shaders for this. You can do it this way (I hope I'm not missing something):

1. Prepare two screen quads with colors per vertex, the first quad having colors RGBA = (1.0f, 0, 0, 1.0f) and the second one (0, 1.0f, 1.0f, 0.5f).
2. Render each eye separately to textures (one texture for each eye).
3. Render one eye to the screen using the first screenquad and the first texture, with modulation of texture color and vertex color (I think that's the default fixed-function pipeline setting. If not, it can be set easily by SetRenderState or SetTextureStageState).
4. Render the second quad with the second texture, again modulation plus you need alpha blending this time (the second quad is half-transparent).

That should IMHO do the job.
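
EDIT - per the correction in post #13 below, the second quad should actually be added on top with D3DBLEND_ONE/D3DBLEND_ONE rather than 0.5 alpha. The state setup would be roughly this (a sketch, draw calls omitted):

// Modulate the texture colour with the per-vertex filter colour.
device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
// ... draw the first quad (vertex colour 1.0f, 0, 0) ...

device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
// ... draw the second quad (vertex colour 0, 1.0f, 1.0f) ...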

#6 thekiwimaddog   Members   -  Reputation: 154


Posted 17 April 2012 - 11:06 AM

Thanks everyone for your input. I have a question regarding the blending: by just setting the alpha to 0.5, wouldn't that darken the image, given that only half the colour values would blend through?

Thanks
David

#7 Tom KQT   Members   -  Reputation: 1348


Posted 17 April 2012 - 11:15 AM

Thanks everyone for your input. I have a question regarding the blending: by just setting the alpha to 0.5, wouldn't that darken the image, given that only half the colour values would blend through?

Thanks
David

Nope, it won't darken the result - don't forget that you are blending 50 % of the second image with the "full" first image. The result is 50 % of the first + 50 % of the second, which gives "100 %".

If you have the proper blending operation:
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
then the blending equation for each color channel is
result = color_on_screen * (1 - rendered_alpha) + rendered_color * rendered_alpha
and in our case
result = color_on_screen * (1 - 0.5) + rendered_color * 0.5
which is
result = color_on_screen * 0.5 + rendered_color * 0.5

Imagine you have two identical images. Let's take one example pixel with color RGB 250,100,50.
When you render the first quad, the pixel on the screen will be 250,100,50.
Now you render the second one and by the calculations you'll get 125+125, 50+50, 25+25 which is 250,100,50. Not darkened at all.

An important note: The first quad must be fully opaque, or you must disable blending before rendering it. If you rendered the first quad with 50 % alpha too, it would blend with the empty backbuffer, which would probably be black (depends on how you clear the buffer), and that WOULD darken the result.

#8 thekiwimaddog   Members   -  Reputation: 154


Posted 17 April 2012 - 11:36 AM

Maybe I'm not understanding how the blends work, but if the texture below has 0 for both green and blue, then won't it be like you're blending against black for those colours?
Let's say the first pixel is RGB(1.0f, 1.0f, 1.0f). We remove G and B and are left with RGB(1.0f, 0.0f, 0.0f). Then we blend this with 50% of RGB(0.0f, 1.0f, 1.0f). Wouldn't this result in RGB(0.5f, 0.5f, 0.5f)?

Thanks For You Help Here!

David

#9 thekiwimaddog   Members   -  Reputation: 154


Posted 17 April 2012 - 12:00 PM

OK, that makes sense now.

Thanks Again!

David

#10 MajorTom   Members   -  Reputation: 659


Posted 17 April 2012 - 02:06 PM

2. Render one eye to the screen using a screenquad and a simple pixel shader, for example:

float4 PS_anaglyphRed(float2 iTex0 : TEXCOORD0) : COLOR
{
	float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
	return float4(tex.r, 0, 0, 1.0f);
}
3. Render the second texture using the same screen quad and a different shader:
float4 PS_anaglyphCyan(float2 iTex0 : TEXCOORD0) : COLOR
{
	float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
	return float4(0, tex.g, tex.b, 0.5f);
}

While this will appear to work at first, after you've tested a few scenes you'll notice the images will look wrong - for example, a red teapot will be clearly visible in one eye, and invisible in the other. Hence:

Anaglyph has some obvious problems, for example glasses with a red eye will "destroy" red objects in the scene.

It's not the glasses that are the problem here; it's how the image is transformed to match the filter.

If one eye uses filter A and the other uses filter B, it's best to make sure the first image uses filter A and the second uses filter B.

This might be a better way:
float4 PS_anaglyphRed(float2 iTex0 : TEXCOORD0) : COLOR
{
	float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
    tex = saturate(tex);
    tex *= float3( 1.0f, 0, 0 ); // red filter
	return float4( tex.rgb, 1.0f);
}
Render the second texture using the same screen quad and a different shader:
float4 PS_anaglyphCyan(float2 iTex0 : TEXCOORD0) : COLOR
{
	float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
    tex = saturate(tex);
    tex *= float3( 0.7f, 1.0f, 0 ); // cyan filter
	return float4( tex.rgb, 1.0f);
}

For an anaglyph solution which doesn't cause retinal rivalry (differences between the two images confusing the brain), saturation and then colour filtering is the safest bet. Unfortunately, as far as I'm aware, you can't saturate through blending - at least, not easily, and not in a single pass. So, you might need to use render targets / render textures and pixel shaders.

Saving the world, one semi-colon at a time.


#11 thekiwimaddog   Members   -  Reputation: 154


Posted 18 April 2012 - 04:11 AM

Nope, it won't darken the result - don't forget that you are blending 50 % of the second image with the "full" first image. The result is 50 % of the first + 50 % of the second, which gives "100 %".

If you have the proper blending operation:
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
then the blending equation for each color channel is
result = color_on_screen * (1 - rendered_alpha) + rendered_color * rendered_alpha
and in our case
result = color_on_screen * (1 - 0.5) + rendered_color * 0.5
which is
result = color_on_screen * 0.5 + rendered_color * 0.5

Imagine you have two identical images. Let's take one example pixel with color RGB 250,100,50.
When you render the first quad, the pixel on the screen will be 250,100,50.
Now you render the second one and by the calculations you'll get 125+125, 50+50, 25+25 which is 250,100,50. Not darkened at all.

An important note: The first quad must be fully opaque, or you must disable blending before rendering it. If you rendered the first quad with 50 % alpha too, it would blend with the empty backbuffer, which would probably be black (depends on how you clear the buffer), and that WOULD darken the result.


OK, I feel stupid bringing this up again, considering I said I understood what you were saying, but I've tested this and I am getting a darker image.

From what you've written here, you say that the image behind is fully opaque. However, it is missing the red channel, so isn't it a little different? If we look at the red channel in isolation: if the original pixel is 255, the back image would be 0 and the front image would be 255, so 0 * 0.5 + 255 * 0.5 = 127.5.

Or is this not the case?

Thanks
David

#12 Tom KQT   Members   -  Reputation: 1348


Posted 18 April 2012 - 08:03 AM

While this will appear to work at first, after you've tested a few scenes you'll notice the images will look wrong - for example, a red teapot will be clearly visible in one eye, and invisible in the other. Hence:

Yes, that's given by how the filtering works.

It's not the glasses that are the problem here; it's how the image is transformed to match the filter.

But the filtering of the image is determined by the glasses, by the color filters used on them.

If one eye uses filter A and the other uses filter B, it's best to make sure the first image uses filter A and the second uses filter B.

I wouldn't say it's best to make sure it's done this way, I'd say it's rather necessary. And IMHO also natural and logical.

This might be a better way:
... code ....

I'm sorry but I don't get your point. Your code is exactly the same as mine (well, not the same, but the result will be the same).

Let's look at the differences:
tex = saturate(tex);
That line won't do anything. The saturate HLSL function clamps the argument into 0-1. When applied to float4, it simply clamps each channel separately. And as we are applying it on a texture which already has values only in this range (0-1), there won't be any effect at all.

tex *= float3( 1.0f, 0, 0 );
return float4( tex.rgb, 1.0f);
That will have IMHO the very same effect as my code, which is simpler:
return float4(tex.r, 0, 0, 1.0f);


So the only real difference is that you have 0.7, 1.0, 0.0 as the second filter while I have 1.0, 1.0, 0.0.

#13 Tom KQT   Members   -  Reputation: 1348


Posted 18 April 2012 - 08:26 AM

OK, I feel stupid bringing this up again, considering I said I understood what you were saying, but I've tested this and I am getting a darker image.

From what you've written here, you say that the image behind is fully opaque. However, it is missing the red channel, so isn't it a little different? If we look at the red channel in isolation: if the original pixel is 255, the back image would be 0 and the front image would be 255, so 0 * 0.5 + 255 * 0.5 = 127.5.

Or is this not the case?

Thanks
David

No need to feel stupid, I should ;-)
You are right. In my example I didn't apply the color filters, I was talking only about blending two images 50 % + 50 %.
When you do apply the filters, however, you must add the resulting images together 100 % + 100 %. The filtering naturally darkens the particular image because it removes one or even two channels (sets them to black).

So if we again take my example color RGB 250,100,50 - it would be:
one eye 250, 100, 50 -> after filtering 250, 0, 0
second eye 250, 100, 50 -> after filtering 0, 100, 50
And when you add those together, you get 250, 100, 50 which is the original color (this is the case when both eyes see the same color at the same pixel, for example when you are looking at a large single-colored wall).

This can still be done without shaders, with blending, but you need this setting:
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
And the alpha of both quads can be 1.0 - alpha doesn't matter here, the blending operation is to add both quads (the colors) together, regardless of alpha values.

The same would apply to my previous code using shaders; the only changes would be 1.0 alpha in both cases and a different blending operation (which wasn't included in the code anyway). I took the code from the wrong version of my app, and it's been quite a long time, so I forgot about it.

I'm really sorry. It's great that you didn't just blindly use it, but thought about it yourself ;-)
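
To put it together, the corrected pass order would be something like this (illustrative - DrawFullscreenQuad stands in for whatever quad-drawing code you use):

// Pass 1: left eye, red filter, no blending.
device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
// pixel shader outputs float4(tex.r, 0, 0, 1.0f)
DrawFullscreenQuad(leftEyeTexture);

// Pass 2: right eye, cyan filter, added on top.
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
// pixel shader outputs float4(0, tex.g, tex.b, 1.0f)
DrawFullscreenQuad(rightEyeTexture);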

#14 thekiwimaddog   Members   -  Reputation: 154


Posted 18 April 2012 - 10:42 AM

Ahh, thanks that worked!

Thanks everyone for your time - I really appreciate all of your help!

I'm getting a few render glitches for some reason but you can see it working!

[Screenshot: the game rendered in anaglyph 3D]


Thanks Again!
David

#15 thekiwimaddog   Members   -  Reputation: 154


Posted 18 April 2012 - 01:50 PM

OK, these glitches I'm getting seem to be related to anti-aliasing. I've been reading, and many people say this doesn't work with multiple render targets. Is this just for when you are rendering to two targets at once, or does that include what I'm doing - switching targets but only rendering to one at a time?

Thanks
David

#16 MajorTom   Members   -  Reputation: 659


Posted 18 April 2012 - 04:28 PM


While this will appear to work at first, after you've tested a few scenes you'll notice the images will look wrong - for example, a red teapot will be clearly visible in one eye, and invisible in the other. Hence:

Yes, that's given by how the filtering works.


Not necessarily - not if the image is transformed into a single colour space (i.e. greyscale) and then has the colour filter applied.


This might be a better way:
... code ....

I'm sorry but I don't get your point. Your code is exactly the same as mine (well, not the same, but the result will be the same).


Yep, my bad, I was half asleep when I posted that. What I meant to say was: you should calculate the brightness of the colour / transform to greyscale - and then apply the colour filter. This stops red objects disappearing and (obviously) loses the colour information, but it does make it much easier on the eyes. When rendering a 3D image, you should do all you can to stop an object appearing in one eye and not the other - unless the object is supposed to be blindingly close...
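
Something along these lines, perhaps - a sketch based on the earlier shaders (the luminance weights are the standard ones; any sensible greyscale conversion will do):

float4 PS_anaglyphRedGrey(float2 iTex0 : TEXCOORD0) : COLOR
{
	float3 tex = tex2D(texSamplerLinear, iTex0).rgb;
	// Transform to greyscale (brightness), then apply the colour filter.
	float grey = dot(tex, float3(0.299f, 0.587f, 0.114f));
	return float4(grey * float3(1.0f, 0, 0), 1.0f); // use (0, 1.0f, 1.0f) for the cyan eye
}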

Saving the world, one semi-colon at a time.


#17 Tom KQT   Members   -  Reputation: 1348


Posted 19 April 2012 - 12:49 AM

thekiwimaddog - it seems you are doing something like Guitar Hero, where I think (never really played it myself) colors are very important. How does it look in the 3D glasses? Are you able to tell the colors apart easily? I cannot test it on your screenshot, I've got only green-magenta glasses with me right now.

Yep, my bad, I was half asleep when I posted that. What I meant to say was: you should calculate the brightness of the colour / transform to greyscale - and then apply the colour filter. This stops red objects disappearing, but loses the colour information (obviously), but it does make it much more easy on the eyes. When rendering a 3D image, you should do all you can to stop an object appearing in one eye, and not the other - unless the object is supposed to be blindingly close...

No problem, what's important is that I finally see what you meant. Yes, using grayscale is better for the 3D impression, but it depends on the specific use whether grayscale is acceptable or not. Color anaglyph has more or less serious problems with some colors (especially clear red), but everything can also be fine and you get color information. Grayscale is generally more pleasant for your eyes, but you will have no idea what colors are in the scene.

#18 thekiwimaddog   Members   -  Reputation: 154


Posted 19 April 2012 - 01:08 AM

The colours are quite important in the game and I can make out the colours fairly well. The Red and Blue notes are obviously the hardest to see but they are spaced apart so it's not likely that you would mistake one for the other. However, I've been thinking about changing the shape of the notes for each colour to make it a little easier to recognise. I could then have an option to saturate the colours and replace the notes accordingly.

So does anyone have any ideas on how I should handle AA when changing render targets like this?

Thanks
David

#19 Tom KQT   Members   -  Reputation: 1348


Posted 19 April 2012 - 01:49 AM

So does anyone have any ideas on how I should handle AA when changing render targets like this?

Thanks
David

What kind of glitches do you mean? I don't see anything bad on the screenshot.
Can you describe your rendering order? I don't mean code, just tell me how you are doing it. And especially - where is the problem ;)

#20 thekiwimaddog   Members   -  Reputation: 154


Posted 19 April 2012 - 03:06 AM

I am using some draw-order tricks to create some of the layers in the game, but you can see that the keys on the board are not taking the Z-buffer into consideration at all for some reason. It seems odd that this only happens when AA is enabled.

With AA:
[Screenshot: keys drawn ignoring the Z-buffer]

Without AA:
[Screenshot: the same scene rendered correctly]
Do you have to treat the Z-buffer differently when rendering to a texture?
Do I need to setup the AA settings per render target?
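
From what I've read, maybe I need to create a multisampled surface with CreateRenderTarget and then StretchRect it into the texture afterwards? Something like this (just guessing):

IDirect3DSurface9 *msaaSurf = NULL;
device->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
	D3DMULTISAMPLE_4_SAMPLES, 0, FALSE, &msaaSurf, NULL);
device->SetRenderTarget(0, msaaSurf);
// ... render one eye (presumably the depth-stencil surface needs a
// matching multisample type too?) ...

// Resolve the multisampled surface into the plain eye texture.
IDirect3DSurface9 *texSurf = NULL;
eyeTexture->GetSurfaceLevel(0, &texSurf);
device->StretchRect(msaaSurf, NULL, texSurf, NULL, D3DTEXF_NONE);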

Thanks!
David



