MARS_999

Question on RTT texture creation


I am unsure of this, but if I do this:

float *texture = new float[size * size * 4];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size, size, 0, GL_RGBA, GL_FLOAT, texture);

vs.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size, size, 0, GL_RGBA, GL_FLOAT, NULL);



will the image quality be any different? What's the difference between the two methods? This is for creating render-to-texture (RTT) textures. Thanks for clearing up my confusion.

The final parameter, pixels, is the data from which the image is created (e.g., if it's a picture of a dog, your final texture will be of that dog). If this argument is a valid pointer, the function interprets the memory it points to as the image data. If it is NULL, the function only allocates texture memory for an image of the supplied width, height, etc., without initializing it.
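A minimal sketch of the two uses (the variable names and the LoadTGA loader below are just placeholders for illustration, not from the original post):

// 1) Upload existing pixel data, e.g. loaded from an image file:
unsigned char *pixels = LoadTGA("dog.tga", &width, &height); // hypothetical loader
glBindTexture(GL_TEXTURE_2D, texFromFile);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// 2) Allocate storage only; the contents will be filled later by
//    rendering into the texture (render-to-texture):
glBindTexture(GL_TEXTURE_2D, texForRTT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);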

Thanks Liam, but I already know how to use the function to set up an image I load from, e.g., .tga files. What I am wondering is: for RTT textures, does it matter whether you pass NULL, and would you get better image quality if you created the texture as a float type rather than an unsigned byte type?

float *p = new float[x * y * 4];
//vs.
unsigned char *p = new unsigned char[x * y * 4];


Hope this clears my question up. ;)

You will get the most precision by uploading textures in the format in which they are stored in your file. Converting them yourself to another format, only to have the OpenGL driver convert them again to the graphics card's internal format, is a sure way to lose quality. So the best plan here is to keep the format of the original file.
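For example (a sketch, assuming your .tga loader hands you 8-bit-per-channel RGBA data in data, width and height): pass it straight through as GL_UNSIGNED_BYTE rather than expanding it to float first:

// data, width and height come from the .tga loader (8-bit RGBA)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);
// Expanding data to float before the upload gains nothing: the driver
// still converts to the internal format (GL_RGBA8 here), so precision
// is bounded by the 8-bit source either way.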

Quote:
Original post by ToohrVyk
You will get the most precision by uploading textures in the format in which they are stored in your file. Converting them yourself to another format, only to have the OpenGL driver convert them again to the graphics card's internal format, is a sure way to lose quality. So the best plan here is to keep the format of the original file.


Ok, I will make a note of that. But what about when you don't load a texture from a file, e.g. shadowmaps, where you just pass NULL? How can one get better image quality, besides moving the depth buffer to 24-bit, which ATI doesn't support (only 16)?

The type of an empty array will make no difference to the texture storage (glTexImage2D takes a void pointer anyway); the format and type arguments (the second- and third-to-last) only describe the source data you pass in:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size, size, 0, GL_RGBA, GL_FLOAT, texture);
However, that has no effect on the internal format of the texture: OpenGL converts the incoming data to whatever internal format it stores, and with the generic GL_RGBA here the driver simply picks its preferred representation anyway.
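So for render targets, the internal format argument (the third one) is where you ask for more precision. A sketch of the idea, assuming the relevant extensions are available (the 16-bit float color format comes from GL_ARB_texture_float, and the driver is free to fall back to less precision than requested):

// 8-bit-per-channel color target (the usual default):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// 16-bit float color target (needs GL_ARB_texture_float):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, size, size, 0,
             GL_RGBA, GL_FLOAT, NULL);

// 24-bit depth target for shadow mapping (the driver may silently
// give you less than requested):
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, size, size, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

// In all three cases the data pointer is NULL: the contents come from
// rendering, so only the internal format affects the quality you get.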
