
## Recommended Posts

I was trying to find out how to render a cube map, so I was looking at the HDRFormats sample, which renders one. I don't understand why the sample uses the back buffer dimensions to adjust the transformed geometry:

```cpp
// Map texels to pixels
float fHighW = -1.0f - (1.0f/(float)pBackBufferSurfaceDesc->Width);
float fHighH = -1.0f - (1.0f/(float)pBackBufferSurfaceDesc->Height);
float fLowW  =  1.0f + (1.0f/(float)pBackBufferSurfaceDesc->Width);
float fLowH  =  1.0f + (1.0f/(float)pBackBufferSurfaceDesc->Height);

pVertex[0].pos = D3DXVECTOR4(fLowW,  fLowH,  1.0f, 1.0f);
pVertex[1].pos = D3DXVECTOR4(fLowW,  fHighH, 1.0f, 1.0f);
pVertex[2].pos = D3DXVECTOR4(fHighW, fLowH,  1.0f, 1.0f);
pVertex[3].pos = D3DXVECTOR4(fHighW, fHighH, 1.0f, 1.0f);
```

In the sample's vertex shader, each vertex position is transformed by the inverse world-view-projection matrix, and the result is used in the pixel shader to look up the cube map color. I don't understand what the back buffer dimensions have to do with it; can someone explain? If you need to look at all the code, it's in skybox.cpp under the HDRFormats sample. Thanks for your help.

[Edited by - DrGUI on March 29, 2005 5:43:29 AM]

Majorly edited

Bump?
