Last Attacker

Rendering a landscape from a Texture


Hi, I've been looking through docs, Google, etc. all day trying to find out how to do this, and only got a partial answer.

Basically, what I want to do is send a texture to the video card in RGB (or FLOAT, it doesn't matter) form, then have the video card use that texture to render a heightmapped landscape. I know the new shader stuff has a vertex texture fetch feature that does exactly what I want, but apparently you can also do this by using (if I can call them this) before-shader extensions. In other words, I would like to accomplish this WITHOUT using shaders.

I figured I can create a buffer on my video card (yes, I have an nVidia 6-series card) and copy the texture into it (I know how to do that [wink]). Then I'd manipulate the texture on the card, expanding each pixel into three float coordinates, since one float per pixel is not enough for rendering a heightmap (don't know how to do that [sad]). Then I'd tell the video card to draw that.

Why would I want to do it from a texture? A heightmap texture is a third the size of its vertex equivalent, so it uploads to the video card faster. Also, you can apply texture compression to make it even better!

Does anyone know how this can be done, even roughly (i.e. drawing a heightmap from a texture already in video memory)? Thanks

What you could do is the following:

1. Load the heightmap texture. It's best if it is greyscale, with black being low.

2. Create a plane with X*Y polygons. I think the easiest way is to make the plane a multiple of the texture's size. What I mean is, if the texture is 256x256, the plane is assigned a step size (the spacing between vertices) that multiplies that size. A step size of 10, say, would give a 2560x2560-unit plane with 10 units between each vertex.

3. For each pixel in the heightmap, set the corresponding vertex's height (i.e. pixel (0, 0) maps to the vertex at position (0, 0, 0)) to a displacement strength multiplied by the average of the pixel's R, G and B values (since the image is greyscale, the average is just its brightness).
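Step 3 can be sketched in C++ like this (a rough illustration, assuming the heightmap is a tightly packed RGB byte array; `Vec3` and `buildVertices` are made-up names, not part of any API):

```cpp
#include <cstddef>
#include <vector>

// One output vertex position (x, y = height, z).
struct Vec3 { float x, y, z; };

// Convert a tightly packed RGB heightmap into a grid of vertex positions.
// step     : world units between neighbouring vertices
// strength : displacement multiplier for the height
std::vector<Vec3> buildVertices(const unsigned char* rgb,
                                int width, int height,
                                float step, float strength)
{
    std::vector<Vec3> verts;
    verts.reserve(static_cast<std::size_t>(width) * height);
    for (int z = 0; z < height; ++z) {
        for (int x = 0; x < width; ++x) {
            const unsigned char* p =
                rgb + (static_cast<std::size_t>(z) * width + x) * 3;
            // Average the three channels (greyscale) and normalise to [0, 1].
            float grey = (p[0] + p[1] + p[2]) / (3.0f * 255.0f);
            verts.push_back({ x * step, grey * strength, z * step });
        }
    }
    return verts;
}
```

A white pixel then ends up at the full displacement strength and a black pixel at height zero.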

I don't think there are any cards that can actually "produce" vertices. This may change with the upcoming geometry shader, but for the most part I think you're limited to vertex and pixel manipulation.

Heightmap is the way to go.

(Don't quote me on this, I haven't done much shader work)

OK, the thing is, I know how to make a client-side (OpenGL-based) version of the whole heightmap thing. Sorry, my question probably wasn't clear enough.

I would like to tell OpenGL to render a heightmapped landscape from a texture that is already in video card memory. I don't want to keep sending glVertex3f() commands from the CPU to the GPU every frame (via OpenGL). So I need some OpenGL commands that can interpret the texture as a heightmap for landscape rendering. Imagine the performance increase one could expect for landscape rendering.
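For reference, one shader-free way to avoid per-frame glVertex3f() calls is to build the vertex and index arrays from the heightmap once on the CPU, upload them with glBufferData() into VBOs, and then render each frame with a single glDrawElements() call. The index list for a width x height vertex grid is plain CPU work; a sketch (`buildGridIndices` is an illustrative name, not a GL call):

```cpp
#include <cstddef>
#include <vector>

// Build triangle indices for a grid of width*height vertices,
// two triangles per grid cell, suitable for
// glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, ...).
std::vector<unsigned int> buildGridIndices(int width, int height)
{
    std::vector<unsigned int> idx;
    idx.reserve(static_cast<std::size_t>(width - 1) * (height - 1) * 6);
    for (int z = 0; z < height - 1; ++z) {
        for (int x = 0; x < width - 1; ++x) {
            unsigned int tl = z * width + x;   // top-left corner of the cell
            unsigned int tr = tl + 1;          // top-right
            unsigned int bl = tl + width;      // bottom-left
            unsigned int br = bl + 1;          // bottom-right
            idx.insert(idx.end(), { tl, bl, tr,  tr, bl, br });
        }
    }
    return idx;
}
```

The vertex positions themselves would come from the texture data as described earlier in the thread; after the one-time upload, the per-frame CPU cost is just the draw call.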

I hope it's clearer now [wink].

I've got a quick question:

how do I read the colors at point (x, y) from a GL texture object?

That depends on how you store the image data (pixel data). For example, in my app I have a Texture class with a dynamic array of unsigned chars (bytes, 0-255) that stores each pixel's color (RGB or RGBA depending on bit depth).

There are some useful tutorials on loading TGA, JPG, BMP and other common formats lying around the net. I recommend TGA or PNG.
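With a byte array like that, reading the color at (x, y) is just index arithmetic. A sketch, assuming a tightly packed RGB image with rows stored top to bottom (`pixelAt` is a made-up helper name):

```cpp
#include <cstddef>

// Fetch one channel (0 = R, 1 = G, 2 = B) of the pixel at (x, y)
// from a tightly packed RGB image with rows stored top to bottom.
unsigned char pixelAt(const unsigned char* rgb, int width,
                      int x, int y, int channel)
{
    return rgb[(static_cast<std::size_t>(y) * width + x) * 3 + channel];
}
```

For RGBA images the stride would be 4 instead of 3, and some loaders store rows bottom to top, so check your loader.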

I *can* load it and assign it to a texture, but I need to access a single pixel. Do you know how I should do that?

I use the code from the Beginning OpenGL book (http://glbook.gamedev.net/boglgp/) and load it from a TGA file.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, heightMap->GetWidth(), heightMap->GetHeight(), 0, heightMap->GetImageFormat(), GL_UNSIGNED_BYTE, heightMap->GetImage());


Did this help, or should I paste the file loading code?

If not, how would you get the red color in, for example, your program? Some pseudocode would be helpful.

Update: is glGetTexImage a good way to do it? Will I be able to read back one pixel, split it into color channels and then check their values?

[Edited by - EliteWarriorManTis on July 22, 2006 6:41:13 PM]

I am by no means an expert, but I've read that the RAW format basically stores each pixel as raw bytes, packed like 0xRRGGBBAA or something of that sort. Then you can simply iterate through the byte[] array read from the RAW file. I don't know any of the details, though, so I'd say [google] the RAW format.
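If the pixels really are packed as 32-bit 0xRRGGBBAA words, splitting one into channels is a matter of shifts and masks (illustrative only; the actual byte order depends on the loader):

```cpp
#include <cstdint>

struct RGBA { std::uint8_t r, g, b, a; };

// Unpack a pixel stored as a 0xRRGGBBAA word into separate channels.
RGBA unpack(std::uint32_t p)
{
    return { static_cast<std::uint8_t>(p >> 24),          // red
             static_cast<std::uint8_t>((p >> 16) & 0xFF), // green
             static_cast<std::uint8_t>((p >> 8) & 0xFF),  // blue
             static_cast<std::uint8_t>(p & 0xFF) };       // alpha
}
```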
