texel alignment issue
Hey all,
First off, a quick question: is it possible to map texels to pixels when dealing with transformed geometry (not pre-transformed geometry)?
If so, does anyone have an example of how I would do this (a proper world-view-projection matrix?)
or perhaps a general explanation?
thanks =)
It's possible if you create a perfect ortho projection matrix, basically "emulating" the pre-transformed vertices. You do have to apply the same -0.5f, -0.5f offset as well. The world and view matrices are set to identity; the projection is an OrthoOffCenter type.
I've found that, due to rounding errors, I was actually better off doing the transformations through the matrices myself (ProcessVertices) and AFTER that realigning the positions to exact 0.5f values. Again, I only used that method for screen-aligned GUI stuff.
That almost completely rebuilds the pre-transformed mode with added overhead. The only advantage I'd see is that you can use lights on that geometry.
I'm not too sure exactly what it is you're getting at. If you set the World, View and Projection matrices to identity, you'd effectively be drawing Pre-Transformed vertices, with the extra overhead of doing 3 matrix multiplications.
Could you be a bit more specific? Are shaders an option? I have a VS written that does something similar to drawing Pre-Transformed vertices, along with a few extra operations. If you're interested, I'll gladly post it.
Hope this helps.
Endurion seems to be close.
I've set up a situation like this.
D3DXMATRIX mProj;

// prepare the matrices
D3DXMatrixScaling(&objScl, frame->tex->imgWidth, frame->tex->imgHeight, 1);
D3DXMatrixOrthoOffCenterRH(&mProj, 0.5f, 1024, 768, 0.5f, 0, 1);
D3DXMatrixIdentity(&entity->wvp);

// multiply the matrices
D3DXMatrixMultiply(&entity->wvp, &objScl, &entity->wvp);
D3DXMatrixMultiply(&entity->wvp, &entity->wvp, &mProj);

// set the shader properties
entityShader->effect->SetMatrix(entityShader->hWorldViewProj, &entity->wvp);
entityShader->effect->SetTexture(entityShader->hFrame, entity->animation->sheet->frames[(int)entity->row][(int)entity->col]->tex->tex);
entityShader->effect->SetVector(entityShader->hMaterialDiffuse, &D3DXVECTOR4(1, 1, 1, 1));
entityShader->commitChanges();

// draw
dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 4, 0, 2);
the vertex data is a single quad in a static vertex buffer that is 1x1 in size,
it is scaled to the proper size based on how large the texture is.
the shader code looks like this:
float4x4 worldViewProj : worldViewProjection;

float4 lightColor : Diffuse
<
    string UIName = "Diffuse Light Color";
    string Object = "DirectionalLight";
> = {1.0f, 1.0f, 1.0f, 1.0f};

float4 lightAmbient : Ambient
<
    string UIWidget = "Ambient Light Color";
    string Space = "material";
> = {0.0f, 0.0f, 0.0f, 0.0f};

float4 materialDiffuse : Diffuse
<
    string UIWidget = "Surface Color";
    string Space = "material";
> = {1.0f, 1.0f, 1.0f, 1.0f};

//------------------------------------
texture frame
<
    string ResourceName = "default_color.dds";
>;

//------------------------------------
struct vertexInput {
    float4 pos  : POSITION;
    float2 tex0 : TEXCOORD0;
};

struct vertexOutput {
    float4 pos  : POSITION;
    float2 tex0 : TEXCOORD0;
};

//------------------------------------
vertexOutput VShader(vertexInput IN) {
    vertexOutput OUT;
    OUT.pos  = mul(IN.pos, worldViewProj);
    OUT.tex0 = IN.tex0;
    return OUT;
}

//------------------------------------
sampler frameSampler = sampler_state {
    texture = <frame>;
    AddressU  = CLAMP;
    AddressV  = CLAMP;
    MINFILTER = LINEAR;
    MAGFILTER = LINEAR;
};

//-----------------------------------
float4 PShader(vertexOutput IN) : COLOR {
    float4 ambColor   = materialDiffuse * lightAmbient;
    float4 diffColor  = materialDiffuse * lightColor;
    float4 frameColor = tex2D(frameSampler, IN.tex0);
    return frameColor * diffColor;
}

//-----------------------------------
technique renderMenuItem {
    pass p0 {
        VertexShader = compile vs_1_1 VShader();
        PixelShader  = compile ps_1_1 PShader();
    }
}
the result is close, very close, but the image is still a bit blurry =/
any ideas?
thanks =D
okay I got it fixed.
The issue was not in my math calculations (though in the projection matrix I should have used (0.5, 1024.5, 768.5, 0.5)). The real problem came from how I was loading my images: I was using the default texture filter, which stretched them to fill the entire texture. I am now using no filter instead, which leaves the excess space colored black with 100% alpha.
so all is well, thanks for the help guys =)