remigius

ATI vertex texture fetch in HLSL


Hello, I was just wondering: has there been any change in the current ATI SM3 hardware lineup, in the sense that it now allows for (efficient) vertex texture fetches in HLSL? The original topic on this (including my "piece of bullshit" comparison) was also posted in this forum, so I thought I'd ask around here again [smile]

I've *finally* decided to get myself a GeForce 7800 to try to implement GPU GeoClipmaps using these vertex textures, as described in the GPU Gems 2 book, but without proper support on ATI hardware I can't very well use it in anything but a tech demo. Well, not without a backup codepath anyway.

Would anyone happen to have some (firsthand) experience to offer about using vertex textures in HLSL, preferably on cards from both vendors? Thanks in advance!
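Just so we're talking about the same thing, this is roughly the kind of vertex shader I have in mind for the clipmap rendering - a minimal SM3 vertex texture fetch sketch, with placeholder names for the sampler and constants:

```hlsl
// Heightmap for the clipmap level. On GeForce 6/7 a vertex texture must be
// D3DFMT_R32F or D3DFMT_A32B32G32R32F and isn't filtered, so it is point
// sampled here.
sampler2D HeightmapSampler : register(s0);

float4x4 WorldViewProj;   // set from the application
float    HeightScale;

struct VS_INPUT
{
    float2 GridPos : POSITION0;   // flat (x, z) grid vertex
    float2 UV      : TEXCOORD0;   // lookup coords into the heightmap
};

struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float2 UV       : TEXCOORD0;
};

// Compile with vs_3_0; vertex texture fetch isn't available below SM3.
VS_OUTPUT DisplaceVS(VS_INPUT input)
{
    VS_OUTPUT output;

    // Vertex texture fetch: tex2Dlod with an explicit mip level, since
    // there are no gradients in the vertex stage.
    float height = tex2Dlod(HeightmapSampler, float4(input.UV, 0, 0)).r;

    float4 worldPos = float4(input.GridPos.x, height * HeightScale,
                             input.GridPos.y, 1.0f);
    output.Position = mul(worldPos, WorldViewProj);
    output.UV       = input.UV;
    return output;
}
```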

I'm not aware of anything having changed since the thread you mentioned. ATI don't support proper vertex texturing via the API - you have to use R2VB instead. Basically means you've gotta implement two paths, one for Gf6/Gf7 and another for X1k's.
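If it helps, the shader half of an R2VB path looks something like the sketch below: a pixel shader that writes one vertex worth of position data per pixel into a floating-point render target, which the extension then lets you rebind as a vertex stream. The names here are made up, and the API-side plumbing (the FOURCC capability check and the stream rerouting) isn't shown:

```hlsl
// R2VB sketch: each pixel of an A32B32G32R32F render target becomes one
// vertex, which is later bound with SetStreamSource as if it were a
// vertex buffer. Illustrative names only.
sampler2D HeightmapSampler : register(s0);

float2 GridOrigin;    // world-space origin of this terrain block
float2 GridExtent;    // world-space size the block covers
float  HeightScale;

float4 BuildVerticesPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Compute the displaced position here, in the pixel shader, where
    // ATI hardware has no trouble sampling textures.
    float  height = tex2D(HeightmapSampler, uv).r;
    float2 xz     = GridOrigin + uv * GridExtent;

    // The real terrain draw call then reads this as a POSITION stream.
    return float4(xz.x, height * HeightScale, xz.y, 1.0f);
}
```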

Jack

*sigh* Thanks Jack

Ah well, I was already thinking about implementing the GeoClipmaps on the CPU as well for compatibility with SM2 hardware, so I guess that'll have to do for ATI 'SM3' hardware too. I haven't got a clue how to use R2VB in Managed DirectX and I'm not inclined to waste my time on finding out how or even if it's implemented [headshake]

I'm wasting enough time as is :p

I don't know if there's any public info on SM4 specs available yet, but will NVidia's 'proper' way of doing vertex textures become the minimum for SM4 hardware? Even an educated guess would be very welcome, if available.

See this post about managed R2VB.

R2VB is available on all SM2 ATI cards and up, so if SM2 cards are the lowest you're supporting, it's worth trying an R2VB path.

As for SM4 docs, the D3D10 docs are in the last 3 SDKs, and fully describe what D3D10 hardware will support.

Quote:
Original post by remigius
I don't know if there's any public info on SM4 specs available yet, but will NVidia's 'proper' way of doing vertex textures become the minimum for SM4 hardware? Even an educated guess would be very welcome, if available.
As Eyal said, the D3D10 documentation should give you a pretty good idea.

Resources are substantially more flexible in how and where they can be used in the D3D10 pipeline. I'm not aware of a specific "vertex texturing" or "render to vertex buffer" feature - more that they would be specific uses of a more generalised technology.

hth
Jack

Quote:
See this post about managed R2VB.


Hmm, I feel quite stupid about missing that one, thanks for pointing it out.

I just checked out the source and it looks pretty neat, but on my X850 I got a few artifacts with polygons seemingly not getting rendered at times. Guess I'll have to take that up with Acid2 [wink]

Quote:
As for SM4 docs, the D3D10 docs are in the last 3 SDKs, and fully describe what D3D10 hardware will support.


Edited: Ok, never mind, I really should head off to bed... With all the stream rerouting on SM4 hardware, vertex texturing obviously seems to be just a specific application of this general concept (and perhaps even redundant?).

In any case, thanks and good night :)

Quote:
Original post by jollyjeffers
I'm not aware of anything having changed since the thread you mentioned. ATI don't support proper vertex texturing via the API - you have to use R2VB instead. Basically means you've gotta implement two paths, one for Gf6/Gf7 and another for X1k's.

Jack

Popular myth about R2VB: It's only for X1k cards.

Reality: It is for all cards since the Radeon 9500.

Just thought I'd lend my two cents.

Quote:
Original post by Cypher19
Popular myth about R2VB: It's only for X1k cards.

Reality: It is for all cards since the Radeon 9500.

Just thought I'd lend my two cents.

Why didn't ATI release their nifty extension way back then? If they had actually built up a little support, maybe it would have become really widespread (even getting standardized into D3D and onto other manufacturers' hardware).

Quote:
Original post by remigius
Edited: Ok, never mind, I really should head off to bed... With all the stream rerouting on SM4 hardware, vertex texturing obviously seems to be just a specific application of this general concept (and perhaps even redundant?).


Vertex texturing might still be used. After all, especially on cards that use the same pipelines for vertex and pixel shaders (which ATI's upcoming hardware is supposed to do), you should be able to get fast texture access in vertex shaders.

Still, I suspect that a lot of algorithms will use streams and stream output instead, or direct buffer accesses (even constant buffer accesses, since constant buffers in D3D10 support 4096 constants, enough, for example, for the 64x64 values used by the geo clipmaps algorithm).
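For example, the constant buffer route might look something like the D3D10-style sketch below (untested, obviously, since there's no hardware yet, and all names are placeholders). One 64x64 clipmap block fits exactly in the 4096 constants of a single constant buffer, and both the grid position and the height are derived from the vertex index, so there's no vertex texture fetch at all:

```hlsl
// SM4 sketch: a 64x64 clipmap block stored in a constant buffer,
// rendered without a vertex buffer or a vertex texture.
cbuffer ClipmapBlock : register(b0)
{
    float4 Heights[4096];     // 64 * 64 height samples, one per constant
};

cbuffer PerBlock : register(b1)
{
    float4x4 WorldViewProj;
    float2   GridOrigin;      // world-space origin of the block
    float    GridStep;        // spacing between grid vertices
    float    HeightScale;
};

float4 ClipmapVS(uint vertexId : SV_VertexID) : SV_Position
{
    // Derive the grid coordinates and look up the height by index.
    uint ix = vertexId % 64;
    uint iz = vertexId / 64;

    float  height = Heights[vertexId].x * HeightScale;
    float2 xz     = GridOrigin + float2(ix, iz) * GridStep;

    return mul(float4(xz.x, height, xz.y, 1.0f), WorldViewProj);
}
```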

Quote:
Why didn't ATI release their nifty extension way back then? If they had actually built up a little support, maybe it would have become really widespread (even getting standardized into D3D and onto other manufacturers' hardware).


Maybe they found it was possible while conducting research on D3D10 features (and realized that their older chips already met the hardware requirements), or as part of the planned Uberbuffer support for OpenGL? For that matter, why did it take until the X1k launch for ATI to bring out adaptive AA, and until the Xx00 series to add temporal AA, even though both of those work on the Radeon 9x00 cores?
