3D DX8 - Texel to pixel mapping


Blast from the past: while checking my different renderers I've come upon a problem in a Direct3D 8 renderer which, I'm pretty sure, worked fine a while ago.

Basically I'm drawing a 2D quad with pretransformed vertices, but the first triangle's texture coordinates seem off. I'm offsetting the coordinates by (-0.5, -0.5) as per spec (in D3D8/9, pixel centers sit at integer coordinates while texel centers sit half a texel in, so a pretransformed quad needs the half-pixel shift to map texels 1:1 to pixels), and there's not much more to it. The same part works fine with D3D11 using shaders (well, duh). Anyhow, I'm still curious, as the DX8 renderer is still my final fallback and is currently still supported quite well.

Anybody see any glaring mistake?

The code in question is below. The texture is a 16x16 pixel region cut out of a 256x256 texture; the following values are used. FWIW I'm running on Windows 10 with an AMD Radeon 5450.

 

iX = 22
iY = 222
The texture rect within the full texture is (144,0) with a size of 16x16
The box is drawn scaled up to a size of 40x40 pixels
m_DirectTexelMappingOffset is a 2D vector with the values (-0.5, -0.5)

I've set min-, mag- and mip-mapping filter to nearest.
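
For reference, the four UV pairs are derived from the sub-rect roughly like this (a minimal sketch assuming the usual rect-to-UV mapping, so the actual derivation in my code may differ; the fTU/fTV names match the snippet further down):

// How the UV pairs cover the 16x16 sub-rect at (144,0) of the 256x256 texture.
const float texW = 256.0f, texH = 256.0f;
const float rectX = 144.0f, rectY = 0.0f, rectW = 16.0f, rectH = 16.0f;

float fTU1 = rectX / texW;             // left   = 0.5625 (top-left vertex)
float fTV1 = rectY / texH;             // top    = 0.0
float fTU2 = ( rectX + rectW ) / texW; // right  = 0.625  (top-right vertex)
float fTV2 = fTV1;
float fTU3 = fTU1;                     //                 (bottom-left vertex)
float fTV3 = ( rectY + rectH ) / texH; // bottom = 0.0625
float fTU4 = fTU2;                     //                 (bottom-right vertex)
float fTV4 = fTV3;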

struct CUSTOMVERTEX
{
  D3DXVECTOR3   position; // The position
  float         fRHW;
  D3DCOLOR      color;    // The color
  float         fTU,
                fTV;
};

CUSTOMVERTEX  vertData[4];

float         fRHW = 1.0f;
GR::tVector   ptPos( (float)iX, (float)iY, fZ );
GR::tVector   ptSize( (float)iWidth, (float)iHeight, 0.0f );

m_pd3dDevice->SetVertexShader( D3DFVF_XYZRHW | D3DFVF_DIFFUSE | D3DFVF_TEX1 );
  
vertData[0].position.x = ptPos.x + m_DirectTexelMappingOffset.x;
vertData[0].position.y = ptPos.y + m_DirectTexelMappingOffset.y;
vertData[0].position.z = (float)ptPos.z;
vertData[0].fRHW       = fRHW;
vertData[0].color      = dwColor1;
vertData[0].fTU        = fTU1;
vertData[0].fTV        = fTV1;

vertData[1].position.x = ptPos.x + ptSize.x + m_DirectTexelMappingOffset.x;
vertData[1].position.y = ptPos.y            + m_DirectTexelMappingOffset.y;
vertData[1].position.z = (float)ptPos.z;
vertData[1].fRHW       = fRHW;
vertData[1].color      = dwColor2;
vertData[1].fTU        = fTU2;
vertData[1].fTV        = fTV2;

vertData[2].position.x = ptPos.x            + m_DirectTexelMappingOffset.x;
vertData[2].position.y = ptPos.y + ptSize.y + m_DirectTexelMappingOffset.y;
vertData[2].position.z = (float)ptPos.z;
vertData[2].fRHW       = fRHW;
vertData[2].color      = dwColor3;
vertData[2].fTU        = fTU3;
vertData[2].fTV        = fTV3;

vertData[3].position.x = ptPos.x + ptSize.x + m_DirectTexelMappingOffset.x;
vertData[3].position.y = ptPos.y + ptSize.y + m_DirectTexelMappingOffset.y;
vertData[3].position.z = (float)ptPos.z;
vertData[3].fRHW       = fRHW;
vertData[3].color      = dwColor4;
vertData[3].fTU        = fTU4;
vertData[3].fTV        = fTV4;
  
m_pd3dDevice->DrawPrimitiveUP(
                D3DPT_TRIANGLESTRIP,
                2,
                vertData,
                sizeof( vertData[0] ) );
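
For completeness, the "nearest" filtering mentioned above is set along these lines (a sketch; note that in D3D8 the filters are texture stage states, not sampler states as in D3D9 and later):

// Point ("nearest") filtering for texture stage 0 in D3D8.
m_pd3dDevice->SetTextureStageState( 0, D3DTSS_MINFILTER, D3DTEXF_POINT );
m_pd3dDevice->SetTextureStageState( 0, D3DTSS_MAGFILTER, D3DTEXF_POINT );
m_pd3dDevice->SetTextureStageState( 0, D3DTSS_MIPFILTER, D3DTEXF_POINT );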
 

God, I hate this borked message editor, it's so not user friendly, and it really loves to f*ck up code formatting.


D3D9 was released in January 2003... I don't see anything standing out in your code snippet, mind you. But honestly, you need to drop support for this; it's not worth the time of day to maintain. If you release anything to the public you will have a maintenance nightmare on your hands, mostly in the form of support calls and complaints from infected, out-of-date OS installs. And if you claim to support their OS, users will ridicule your software on public forums and leave negative reviews.



  • Similar Content

    • By PhillipHamlyn
      Hi
      I have a procedurally generated tiled landscape, and want to apply 'regional' information to the tiles at runtime: forests, roads, pretty much anything that could be defined as a 'region'. Up until now I've done this by creating a mesh defining the 'region' on the CPU and interrogating that mesh during landscape tile generation; I then add the regional information to the landscape tile via a series of per-vertex boolean properties. For each landscape tile vertex I do a ray-mesh intersect into the 'region' mesh and get some value from that mesh.

      For example my landscape vertex could be;
      struct Vtx { Vector3 Position; bool IsForest; bool IsRoad; bool IsRiver; } I would then have a region mesh defining a forest, another defining rivers etc. When generating my landscape veretexes I do an intersect check on the various 'region' meshes to see what kind of landscape that vertex falls within.

      My ray-mesh intersect code isn't particularly fast, and there may be many 'region' meshes to interrogate, so I want to see if I can move this work onto the GPU: when I create a set of tile vertices I would call a compute (or other) shader, pass the region mesh to it, and interrogate that mesh inside the shader. The output would be a buffer where all the landscape-vertex boolean values have been filled in.

      The way I see this being done is to pass two RWStructuredBuffers to a compute shader, one containing the landscape vertices and the other containing some definition of the region mesh (possibly the region might consist of two buffers, one with positions and one with indices). The compute shader would do a ray-mesh intersect check for each landscape vertex and set the boolean flags on a corresponding output buffer.

      In theory this is a parallelisable operation (no landscape vertex relies on another for its values), but I've not seen any examples of a ray-mesh intersect being done in a compute shader, so I'm wondering if my approach is wrong and the reason I've not seen any examples is that no-one does it that way (a rough sketch of what I have in mind follows below). If anyone can comment on:
      Is this a really bad idea?
      If no-one does it that way, does everyone use a Texture to define this kind of 'region' information? If so, given I've only got a small number of possible region types, what texture format would be appropriate, as 32 bits seems really wasteful?
      Is there a common alternative approach to adding information to a basic height-mapped tile system that would perform well for runtime-generated tiles?
      Thanks
      Phillip
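
      A minimal HLSL sketch of the idea, assuming a Möller-Trumbore triangle test, a downward ray per vertex, and the booleans packed into a bit-flag field; all names here are hypothetical:

      // Hypothetical sketch: classify each landscape vertex against one region
      // mesh by casting a ray straight down and brute-forcing every triangle.
      struct LandscapeVtx { float3 Position; uint Flags; }; // bools packed as bits

      StructuredBuffer<float3>         RegionPositions : register( t0 );
      StructuredBuffer<uint3>          RegionIndices   : register( t1 );
      RWStructuredBuffer<LandscapeVtx> Landscape       : register( u0 );

      cbuffer Params : register( b0 )
      {
        uint VertexCount;
        uint TriangleCount;
        uint RegionFlagBit; // which bit this region sets, e.g. 1 = forest
      };

      // Moller-Trumbore ray/triangle intersection.
      bool RayHitsTriangle( float3 orig, float3 dir, float3 v0, float3 v1, float3 v2 )
      {
        float3 e1 = v1 - v0;
        float3 e2 = v2 - v0;
        float3 p  = cross( dir, e2 );
        float  det = dot( e1, p );
        if ( abs( det ) < 1e-6f )
          return false;                       // ray parallel to triangle
        float  invDet = 1.0f / det;
        float3 tv = orig - v0;
        float  u  = dot( tv, p ) * invDet;
        if ( ( u < 0.0f ) || ( u > 1.0f ) )
          return false;
        float3 q = cross( tv, e1 );
        float  v = dot( dir, q ) * invDet;
        if ( ( v < 0.0f ) || ( u + v > 1.0f ) )
          return false;
        return dot( e2, q ) * invDet >= 0.0f; // hit in front of the origin
      }

      [numthreads( 64, 1, 1 )]
      void CSMain( uint3 id : SV_DispatchThreadID )
      {
        if ( id.x >= VertexCount )
          return;
        LandscapeVtx vtx = Landscape[id.x];
        float3 rayOrigin = vtx.Position + float3( 0.0f, 1000.0f, 0.0f );
        float3 rayDir    = float3( 0.0f, -1.0f, 0.0f );
        for ( uint i = 0; i < TriangleCount; ++i )
        {
          uint3 tri = RegionIndices[i];
          if ( RayHitsTriangle( rayOrigin, rayDir,
                                RegionPositions[tri.x],
                                RegionPositions[tri.y],
                                RegionPositions[tri.z] ) )
          {
            vtx.Flags |= RegionFlagBit;
            break;
          }
        }
        Landscape[id.x] = vtx;
      }

      Brute-forcing every triangle per vertex is O(vertices x triangles), so many or large region meshes would need some acceleration structure on top; and regarding the texture question above, an 8-bit format such as R8_UINT could hold up to eight region flags without the 32-bit waste.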
    • By GytisDev
      Hello,
      without going into any details, I am looking for articles, blogs, or general advice about city-building and RTS games. I tried to search for these on my own, but would like to see your input as well. I want to make a very simple version of a game like Banished or Kingdoms and Castles, where I would be able to place two types of buildings, make farms, and cut trees for resources while controlling a single worker. I have some problems understanding how these games work in the back-end: how various data about the map and objects can be stored, how grids work, how a work system is implemented (e.g. a little cube (human) walks to a tree and cuts it), and so on. I am also pretty confident in my programming capabilities for such a game. Sorry if I make any mistakes, English is not my native language.
      Thank you in advance.