dr4cula

Render To Texture: WTF results


Hello,

 

I've been trying to get rendering to a texture to work and have run into a completely weird result. Here's what I mean: http://tinypic.com/view.php?pic=28hecfr&s=5#.UoKc0_nwmt8

 

In the small window on the bottom left you can see the result of my render-to-texture operation: the texture receives a nice little cube. However, when I apply this texture to a quad and move my camera around, the picture distorts. I immediately thought my matrices were wrong, but as you can see, a full cube renders fine right next to the textured quad: http://tinypic.com/view.php?pic=1z36j5i&s=5#.UoKdO_nwmt8

 

To add even more confusion to the problem, when I remove my lighting calculations just to see pure texture colors, like so:

//float4 color = CalculatePhongShading(input.posCamSpace, input.normal, input.texCoord);
float4 color = texture_.Sample(texSampler_, input.texCoord);
return color;
This happens (I had to change the clear color to white for the render-to-texture pass): http://tinypic.com/view.php?pic=jttb8j&s=5#.UoKe3_nwmt8
 
I tried placing objects behind the quad and it turns out I can see through the "black" cube on the texture, as if it's been cut out or something. What on Earth is going on?
 
I'd post code, but I'm honestly not sure which part of it could even cause this. Since the see-through "black" effect appears in another scene as well, I'm guessing I'm setting up the RTV wrong, so I'll post that:
 
void SysD3D::CreateRenderTargetView(std::string name, int width, int height, ShaderType type) {
    // set up a texture
    D3D11_TEXTURE2D_DESC texDesc;
    texDesc.Width = width;
    texDesc.Height = height;
    texDesc.MipLevels = 1;
    texDesc.ArraySize = 1;
    texDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    texDesc.SampleDesc.Count = 1;
    texDesc.SampleDesc.Quality = 0;
    texDesc.Usage = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
    texDesc.CPUAccessFlags = 0;
    texDesc.MiscFlags = 0;

    // create the texture
    ID3D11Texture2D* texBlank;
    HRESULT blankTexCreation = p_device_->CreateTexture2D(&texDesc, NULL, &texBlank);

    if(FAILED(blankTexCreation)) {
        MessageBox(NULL, L"Failed creating a blank texture for a RTV.", L"Error", 0);
        return;
    }

    // set up the render target view
    D3D11_RENDER_TARGET_VIEW_DESC rtvDesc;
    rtvDesc.Format = texDesc.Format;
    rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
    rtvDesc.Texture2D.MipSlice = 0;

    // create the render target view
    ID3D11RenderTargetView* rtv;
    HRESULT rtvCreation = p_device_->CreateRenderTargetView(texBlank, &rtvDesc, &rtv);

    if(FAILED(rtvCreation)) {
        MessageBox(NULL, L"Failed creating a render target view.", L"Error", 0);
        SAFE_RELEASE(texBlank); // don't leak the texture on the error path
        return;
    }

    // sort out internal referencing
    RenderTargetView renderTV;
    renderTV.name_ = name;
    renderTV.p_renderTargetView_ = rtv;
    renderTV.type_ = type;

    renderTargetViews_.push_back(renderTV);

    // set up the shader resource view
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
    srvDesc.Format = texDesc.Format;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MostDetailedMip = 0;
    srvDesc.Texture2D.MipLevels = 1;

    // create the shader resource view
    ID3D11ShaderResourceView* srv;
    HRESULT srvCreation = p_device_->CreateShaderResourceView(texBlank, &srvDesc, &srv);

    if(FAILED(srvCreation)) {
        MessageBox(NULL, L"Failed creating a shader resource view for a RTV texture.", L"Error", 0);
        SAFE_RELEASE(texBlank); // don't leak the texture on the error path
        return;
    }

    // sort out internal referencing
    ShaderResourceView shaderRV;
    shaderRV.name_ = name;
    shaderRV.p_shaderResourceView_ = srv;
    shaderRV.shaderType_ = type;

    shaderResourceViews_.push_back(shaderRV);

    // cleanup: the views hold their own references to the texture
    SAFE_RELEASE(texBlank);
}
 
And here's how I set the render target:
void SysD3D::SetRenderTargetView(std::string name) {
    for(std::vector<RenderTargetView>::iterator it = renderTargetViews_.begin(); it != renderTargetViews_.end(); ++it) {
        if(it->name_ == name) {
            p_deviceContext_->OMSetRenderTargets(1, &(it->p_renderTargetView_), p_defaultDepthStencilView_);
        }
    }
}
 
The default depth-stencil is compatible with the RTV, I've checked for errors with the DirectX debugger. 
 
And here's my clear:
 
void SysD3D::Clear(float color[4], std::string renderTargetView) {
    ID3D11RenderTargetView* rtv = NULL;
    for(std::vector<RenderTargetView>::iterator it = renderTargetViews_.begin(); it != renderTargetViews_.end(); ++it) {
        if(it->name_ == renderTargetView) {
            rtv = it->p_renderTargetView_;
        }
    }

    if(rtv == NULL) {
        MessageBox(NULL, L"Failed finding the specified render target view for clearing.", L"Error", 0);
        return;
    }

    p_deviceContext_->ClearRenderTargetView(rtv, color);
    p_deviceContext_->ClearDepthStencilView(p_defaultDepthStencilView_, D3D11_CLEAR_DEPTH, 1.0f, 0);
}
 
Since StencilEnable = false, I'm clearing the depth only (I tried D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL as well, but that didn't work either).
 
I'm really confused. How can the geometry render correctly while the texture gets warped? And why does the render-to-texture result break once I disable lighting? Any help would be greatly appreciated.
 
Thanks in advance!


Typically, if you can 'see through' an area that is black, it is due to alpha blending being enabled. You can prove whether this is the case by clearing the render-to-texture target to black but with an alpha value of 1.0: that area will become opaque if the source of the transparency is alpha-related.

 

What does your cube from the first image look like when the render-to-texture clear color is white? At first glance, it seems to me that the texture coordinates on the cube probably aren't what you really want, but that is just a first guess.


Hiya,

  Make sure you're setting the viewport correctly for the render target's dimensions when rendering to a texture (i.e. deviceContext->RSSetViewports).

 

n!


Typically, if you can 'see through' an area that is black, it is due to alpha blending being enabled. You can prove whether this is the case by clearing the render-to-texture target to black but with an alpha value of 1.0: that area will become opaque if the source of the transparency is alpha-related.
 
What does your cube from the first image look like when the render-to-texture clear color is white? At first glance, it seems to me that the texture coordinates on the cube probably aren't what you really want, but that is just a first guess.

 
Thanks for your quick reply!
 
I suspected it had something to do with transparency as well, but I've turned blending off for this particular scene. However, it turned out that the alpha written to the texture was 0, so when I did this:
 
 
color = texture_.Sample(texSampler_, input.texCoord);
color.a = 1.0f;
I could see the texture without any holes or anything like that. One problem solved (well, kind of: I'm still not sure why the alpha would be set to 0 when rendering the scene to the texture).
 
I had accidentally used the wrong coordinate system when specifying the quad's texture coordinates, which is why they appeared flipped. However, the other problem, the texture distortion, is still there. In the following pictures the camera is positioned at the center of the white quad, looking along the negative z-axis (the quad is not rendered in the first render pass; it's rendered at the end of the second render pass).
 
In this one, the cube looks like it's leaning downwards: http://tinypic.com/view.php?pic=2r59qwy&s=5#.UoOE1vnwmt8
And in this one, the cube looks like it's squished (viewed from the side of the quad): http://tinypic.com/view.php?pic=2hxpykx&s=5#.UoOF4vnwmt8
 
It seems like the way the rendered texture appears when applied to the quad depends on the camera's view matrix. The texture in the bottom-left window looks good, but when I apply it to the quad it just looks... weird. The aspect ratio of the quad is the same as the texture's, i.e. 4:3. Any thoughts?
 
Thanks in advance once more!


Hiya,
  Make sure you're setting the viewport correctly for the render target's dimensions when rendering to a texture (i.e. deviceContext->RSSetViewports).
 
n!

 
Thanks for your reply! I'm rendering at the same resolution as the window dimensions, i.e. the viewport remains the same.


The alpha value that ends up in the scene is either your clear color's alpha (which I think you mentioned is 0) or the result of a drawing operation, in which case the color (including alpha) comes from the pixel shader output. So unless you clear to alpha = 1, you should expect the value to be zero anywhere you didn't render something. You can enable/disable alpha blending via the output merger's blend state; blending is definitely still turned on if you get transparent pixels where alpha is 0!


In this one, the cube looks like it's leaning downwards: http://tinypic.com/view.php?pic=2r59qwy&s=5#.UoOE1vnwmt8
And in this one, the cube looks like it's squished (viewed from the side of the quad): http://tinypic.com/view.php?pic=2hxpykx&s=5#.UoOF4vnwmt8

Those two images actually look correct to me. Remember what you are doing here: you render the scene into one image, then you take that image and draw it in another coordinate space. This is akin to having a television in your room and moving around, changing the angle you view it from. Even though the television picture isn't changing, its apparent shape from the different vantage points will change and distort.


Those two images actually look correct to me. Remember what you are doing here: you render the scene into one image, then you take that image and draw it in another coordinate space. This is akin to having a television in your room and moving around, changing the angle you view it from. Even though the television picture isn't changing, its apparent shape from the different vantage points will change and distort.


Thanks for your reply once again! Hm, right... To be honest, I was thinking that perhaps it should look distorted, but I just really, really wanted it not to. I'm trying to implement a basic mirror by putting the camera at the center of the mirror, looking along the normal's direction (it's a flat mirror). The result in the debug window is exactly the one I'm looking for (except the texture coordinates for the mirror need to be flipped along the u-axis). Any suggestions as to how I would go about this?

For a mirror you'll want to render the scene from a camera facing in the direction of your view reflected about the mirror normal; rendering along the normal's direction won't be correct. Then you have to project the vertices of the mirror into the reflection camera's screen space to get the correct texture coordinates when you render the mirror with the reflection texture. I hope that basic explanation helps some; I wish I could explain more, but I'm on my phone!


For a mirror you'll want to render the scene from a camera facing in the direction of your view reflected about the mirror normal; rendering along the normal's direction won't be correct. Then you have to project the vertices of the mirror into the reflection camera's screen space to get the correct texture coordinates when you render the mirror with the reflection texture. I hope that basic explanation helps some; I wish I could explain more, but I'm on my phone!


Thanks for your reply! I understand the reflected-camera part, but I'm not quite sure how I would go about projecting the vertices. I'd appreciate it if you could explain that a bit more.

Thanks in advance!


First, let me be clearer about rendering from the reflected view direction: it is not enough to simply render from a camera facing along the reflected view direction; what you need to do is actually reflect your camera about the mirror plane and render from that camera. The distinction is subtle but important: the reflected camera will have a different "handedness" (i.e. if your original camera was such that positive z was forward, positive y was up and positive x was right, then your reflected camera might be such that positive z is forward, y is up and x is LEFT).

 

Anyway, once you've rendered your scene from the reflected camera into your reflection texture, then you want to render the mirror using that reflection texture. The trick now is to get the right texture coordinates when you render the mirror object. You'll end up with a shader that looks something like this:

float4x4 modelMatrix;
float4x4 viewMatrix;
float4x4 reflectedViewMatrix;
float4x4 projMatrix;

struct VS_INPUT {
    float4 pos : POSITION;
};

struct VS_OUTPUT {
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD;
};

VS_OUTPUT VS(const VS_INPUT input)
{
    VS_OUTPUT output;

    // transform your verts like normal
    output.pos = mul(mul(mul(input.pos, modelMatrix), viewMatrix), projMatrix);
    
    // get the projected uv coordinates
    float4 reflected_projection = mul(mul(mul(input.pos, modelMatrix), reflectedViewMatrix), projMatrix);
    float2 uv = reflected_projection.xy / reflected_projection.w;  // perspective divide: clip space -> NDC in [-1, 1]
    output.uv = 0.5 * (uv + 1.0);    // map [-1, 1] to [0, 1]
    output.uv.y = 1.0 - output.uv.y; // D3D's texture v axis runs top-down while NDC y points up, so flip v
    // note: strictly the perspective divide belongs in the pixel shader (pass
    // reflected_projection through and divide per pixel) -- dividing per vertex
    // and interpolating the result is not the same as interpolating and then dividing

    return output;
}

I have the shader using reflectedViewMatrix when calculating the projected uv coordinates for clarity, but in reality you shouldn't need it: the mirror's vertices lie on the mirror plane, and a point on the plane is its own reflection, so they project to the same coordinates with the plain viewMatrix.

 

Also, as always, this code was written off the top of my head and may have some bugs, but I think the general idea should give you a good start.
