Pixel Shader texel lookup

In a pixel shader, is it possible to retrieve a colour at an arbitrary texel from a texture that is passed to the shader as a constant?

What I'm thinking about doing is blending 2 (or 3) textures in a shader using only one texture coordinate. The reason for this is to enable me to store a terrain chunk using less memory and to render it using fewer transforms. So if I only have one texture coordinate for 2 or 3 textures, how do I blend them at different resolutions? I pass in a constant factor (per terrain chunk) for each of the other one or two textures, which is used in the pixel shader to retrieve the different coordinates. I'm not sure if this method is already used, and I'm new to vertex and pixel shaders, but I wondered if it could be done and whether it might be an interesting option for compressing larger static terrains in memory.

So:
- 1 low-res detail texture stretched over, say, 3 x 3 terrain chunks (512x512), including the alpha map
- 1 high-res detail texture stretched over 1 terrain chunk (512x512)

When the terrain chunk is rendered, a factor of 3 is passed in to the pixel shader along with the low-res texture. Only the high-res detail texture coordinates are stored, and the low-res coordinates are calculated in the shader by sampling the passed-in texture at x * factor, y * factor.

If the x and y coordinates of the pixel are available to the pixel shader, could the pixel shader do its own texel colour lookup and remove the need to store any texture coordinates in the vertex altogether?

Thanks for any help, apologies if this is badly worded.
A pixel shader does have the ability to sample a texture at arbitrary locations. So your idea of having a single texture coordinate and then multiplying by a scaling factor to get a texture coordinate for your low-res texture is perfectly valid, and you shouldn't have any trouble implementing it.
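Something along these lines should work; here's a rough HLSL sketch (the sampler names and the LowResScaleOffset constant are just illustrative, and the actual blend is up to you):

sampler2D HighResTex : register(s0);
sampler2D LowResTex  : register(s1);
// xy = scale, zw = offset mapping this chunk's uv into the low-res texture
float4 LowResScaleOffset;

float4 BlendPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Sample the high-res texture with the stored coordinate...
    float4 hi = tex2D(HighResTex, uv);
    // ...and the low-res texture at a rescaled coordinate
    float4 lo = tex2D(LowResTex, uv * LowResScaleOffset.xy + LowResScaleOffset.zw);
    // Blend however suits your terrain, e.g. using the low-res alpha map
    return lerp(hi, lo, lo.a);
}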

As for the "x and y coordinates" of the pixel, which coordinates exactly are you talking about? Object-space? World-space? View-space? Screen-space? For the most part, the pixel shader will only have whatever coordinates your vertex shader passes on to it. However, if you can pass one of these on to the pixel shader and calculate your texture coordinate from that value, then no, you don't need to store texture coordinates in your vertices. Your shader code can specify whatever value you want as the texture coordinate, so you can derive it from any information you have access to, or that you can calculate, in the shader.
Thanks for the response, that's good news.

Regarding the x & y coords, I was playing with the idea that if you had a regular grid of vertices which represented a terrain chunk, and you had the _object space_ x & _Z_ coordinates available in the pixel shader, you would not need to store any texture coordinates as you could just use the x & z ones (scaled to 0.0 - 1.0 obviously).

The one static terrain chunk would obviously need to be transformed and drawn for each visible terrain chunk, with the Y height of each vertex sampled from some kind of heightmap in the vertex shader. I'm not sure if I'm putting my idea across too well, so apologies if not. I think this is similar to how geoclipmapping works.

One further question, I read recently that it's faster to carry out the transform and lighting using vertex and pixel shader operations instead of doing it in the T&L part of the pipeline. Is this a valid option and if so, how would you tell the device to not transform and light the vertices?

Thanks again
Quote:Original post by RobMaddison
Thanks for the response, that's good news.

Regarding the x & y coords, I was playing with the idea that if you had a regular grid of vertices which represented a terrain chunk, and you had the _object space_ x & _Z_ coordinates available in the pixel shader, you would not need to store any texture coordinates as you could just use the x & z ones (scaled to 0.0 - 1.0 obviously).



That sounds perfectly fine, you'll be able to just pass the object-space coordinates right through to your pixel shader. Of course, since this is effectively a planar projection, if you just normalize x and z to the 0.0 - 1.0 range you'll get the common problem that your textures will be stretched and distorted on terrain areas where the change in y is relatively steep.
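For what it's worth, the vertex shader side of that idea can be as simple as this sketch (ChunkSize is an assumed constant giving the world-space extent of one square chunk):

float4x4 WorldViewProj;
float ChunkSize;   // assumed world-space extent of one square chunk

struct VSOut
{
    float4 pos : POSITION;
    float2 uv  : TEXCOORD0;
};

VSOut TerrainVS(float4 objPos : POSITION)
{
    VSOut o;
    o.pos = mul(objPos, WorldViewProj);
    // Derive the texture coordinate from object-space x/z instead of
    // storing it per-vertex; maps [0, ChunkSize] onto [0, 1]
    o.uv = objPos.xz / ChunkSize;
    return o;
}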

Quote:Original post by RobMaddison
The one static terrain chunk would obviously need to be transformed and drawn for each visible terrain chunk and the Y heights of each sampled from some kind of heightmap in the vertex shader. I'm not sure if I'm putting my idea across too well, so apologies if not. I think this is similar to how geoclipmapping works.


If you go this route, be careful, as it requires vertex texture fetch. In general this tends to be quite slow, and support is limited to SM3.0 GPUs from Nvidia and SM4.0 GPUs from both ATI and Nvidia. Some GPUs are also picky about which texture formats you can sample in the vertex shader, and may require you to use fp32 formats. Is your terrain going to be dynamic?
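If you do try it, the fetch itself looks something like this in HLSL. Note that you have to use tex2Dlod in a vertex shader, since no screen-space derivatives are available for mip selection (HeightMap and HeightScale are illustrative names, and the heightmap is assumed to be an fp32 format such as D3DFMT_R32F):

sampler2D HeightMap : register(s0);
float4x4 WorldViewProj;
float HeightScale;   // assumed world-space height range
float ChunkSize;     // assumed world-space extent of one chunk

float4 HeightVS(float4 objPos : POSITION) : POSITION
{
    // Derive the heightmap coordinate from object-space x/z, as above
    float2 uv = objPos.xz / ChunkSize;
    // tex2Dlod is mandatory in a vertex shader; sample the top mip level
    objPos.y = tex2Dlod(HeightMap, float4(uv, 0, 0)).r * HeightScale;
    return mul(objPos, WorldViewProj);
}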


Quote:Original post by RobMaddison
One further question, I read recently that it's faster to carry out the transform and lighting using vertex and pixel shader operations instead of doing it in the T&L part of the pipeline. Is this a valid option and if so, how would you tell the device to not transform and light the vertices?

Thanks again


Well, on most modern GPUs using fixed-function certainly isn't going to be any faster, since that stuff is just emulated with shaders anyway. So for simple cases you could probably optimize better in your own shaders and get better performance.

As for disabling the fixed-function operations, this would depend on the API. I know very little about OpenGL, but in D3D9 the fixed-function pipeline is only active if you have no shaders bound to the device. If there are shaders bound, those will be used for all transformation and pixel processing. In D3D10 the fixed-function pipeline is gone altogether.
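In D3D9 terms, as soon as you call IDirect3DDevice9::SetVertexShader with something like the sketch below, the fixed-function transform is bypassed entirely. This is roughly the minimum a vertex shader needs to do to replace it (lighting would be layered on top in the vertex or pixel shader):

float4x4 WorldViewProj;   // set from the application

float4 TransformVS(float4 pos : POSITION) : POSITION
{
    // The one thing fixed-function T&L always did: project the vertex
    return mul(pos, WorldViewProj);
}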
Quote:Original post by MJP
As for disabling the fixed-function operations, this would depend on the API. I know very little about OpenGL, but in D3D9 the fixed-function pipeline is only active if you have no shaders bound to the device. If there are shaders bound, those will be used for all transformation and pixel processing.
OpenGL is exactly the same.


Thanks for the responses guys, that's really helpful information.
