MrWereWolf

Why doesn't NVidia increase the number of texture units?



I was wondering... why doesn't NVidia increase the number of texture units in their video cards? NVidia cards have supported 4 texture units since the GF3, which came out about 3 years ago, and the number hasn't changed since. I don't think it would be that difficult to implement, given that 4 textures are already supported. And when you need more than 4 textures per triangle, it would still be faster than multipassing. The interesting thing is that ATI cards have supported 8 texture units since the Radeon 8500 (I think), which also came out a long time ago. Isn't there a rivalry between these two companies? I don't get it.

//Mr.WereWolf//

Quote:
Original post by Oxyacetylene
Eh? I've got a GeForce FX 5200 in my laptop, and I'm sure it supports 8 texture units.


Through the programmable pipeline, it supports 16 texture image units and 8 texture coordinate sets. See here for reports from a wide range of hardware: http://www.delphi3d.net/hardware/listreports.php
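If you have a GL context current, you can query these limits yourself. A minimal sketch, assuming headers new enough to define the ARB enums (glGetIntegerv itself is core GL, so nothing needs to be loaded just to read the numbers):

#include <GL/gl.h>
#include <GL/glext.h>
#include <stdio.h>

/* Requires a current GL context. */
void print_texture_limits(void)
{
    GLint image_units = 0, coord_sets = 0, ff_units = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS_ARB, &image_units); /* fragment samplers: 16 on GeForce FX */
    glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &coord_sets);       /* texcoord sets: 8 on GeForce FX */
    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &ff_units);              /* fixed-function units: typically 4 */
    printf("%d image units, %d coord sets, %d fixed-function units\n",
           (int)image_units, (int)coord_sets, (int)ff_units);
}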

I've just tested it in my application, and it definitely supports 8 textures in the fixed-function pipeline, as opposed to 4.

(This is using just one set of texture coordinates, though.)

EDIT: Actually, strike that. It might be doing it in software; my frame rate goes through the floor when I use more than 4.
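For reference, roughly what that test looks like. This is just a sketch, assuming ARB_multitexture and an array of already-created texture objects; the helper name is made up, and on Windows glActiveTextureARB has to be fetched through wglGetProcAddress or an extension loader:

/* Hypothetical helper: enable the first n fixed-function texture units,
   clamped to whatever limit the driver reports. */
void enable_n_texture_units(const GLuint *textures, int n)
{
    GLint max_units = 0;
    int i;

    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &max_units);
    if (n > max_units)
        n = max_units;              /* never exceed the reported limit */

    for (i = 0; i < n; ++i) {
        glActiveTextureARB(GL_TEXTURE0_ARB + i);
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        glEnable(GL_TEXTURE_2D);    /* fixed function: switch the unit on */
    }
    glActiveTextureARB(GL_TEXTURE0_ARB);
}

Note that even when the query reports 8 units, everything past 4 may still be emulated, which would explain the frame-rate drop.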

Anything that uses more than 4 textures sounds complicated enough to be a shader's job, so it's fair enough that the fixed-function pipeline can only use 4. I think these days they assume any complicated rendering will use shaders. Why do you want so many texture units anyway (in fixed function, that is)?

But then again, you can always multi-pass.
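For example, a minimal two-pass sketch; draw_geometry() is a placeholder for whatever submits your mesh:

/* Two passes: base texture first, then the detail texture is modulated
   onto the pixels the first pass left visible. */
void draw_multipass(GLuint base_tex, GLuint detail_tex)
{
    /* Pass 1: base texture with normal depth writes. */
    glBindTexture(GL_TEXTURE_2D, base_tex);
    glDisable(GL_BLEND);
    draw_geometry();

    /* Pass 2: framebuffer * detail.  GL_EQUAL re-touches exactly the
       visible pixels without writing new depth values. */
    glBindTexture(GL_TEXTURE_2D, detail_tex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);   /* dst * src = modulate */
    glDepthFunc(GL_EQUAL);
    glDepthMask(GL_FALSE);
    draw_geometry();

    /* Restore state. */
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);
}

The price is that you transform and rasterize the geometry once per pass, which is exactly why packing everything into one multitextured pass is usually faster.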

-Twixn-


I only checked some video-card info sites, and they all report just 4 texture units.

But if you say it can handle more, I believe you.

//Mr.WereWolf//

Quote:
Original post by MrWereWolf

I only checked some video-card info sites, and they all report just 4 texture units.

But if you say it can handle more, I believe you.

//Mr.WereWolf//


It can handle more using the programmable pipeline. Supporting more in the fixed-function pipeline doesn't really make sense.
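For example, with GLSL you can point eight sampler uniforms at eight texture image units from the C side. A rough sketch using the ARB_shader_objects API; the uniform names "tex0".."tex7" are made up and must match whatever your shader declares:

#include <stdio.h>

/* Hypothetical: bind textures[0..7] to units 0..7 and wire each unit to
   the matching sampler uniform in an already-linked program object. */
void bind_eight_samplers(GLhandleARB program, const GLuint *textures)
{
    char name[8];
    int i;

    glUseProgramObjectARB(program);
    for (i = 0; i < 8; ++i) {
        glActiveTextureARB(GL_TEXTURE0_ARB + i);
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        sprintf(name, "tex%d", i);   /* "tex0" .. "tex7" */
        glUniform1iARB(glGetUniformLocationARB(program, name), i);
    }
}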

GLint max_combined_texture_image_units, max_texture_image_units;
GLint max_vertex_texture_image_units, max_texture_units;

glGetIntegerv( GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS_ARB, &max_combined_texture_image_units ); /* all shader stages combined, min 2 */
glGetIntegerv( GL_MAX_TEXTURE_IMAGE_UNITS_ARB, &max_texture_image_units );                   /* fragment shader samplers, min 2 */
glGetIntegerv( GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB, &max_vertex_texture_image_units );     /* vertex shader samplers, min 0 */
glGetIntegerv( GL_MAX_TEXTURE_UNITS, &max_texture_units );                                   /* classic fixed-function units */

