ATI vertex texture fetch in HLSL


Hello, I was just wondering: has there been any change in the current ATI SM3 hardware line-up, in the sense that it now allows for (efficient) vertex texture fetches in HLSL? The original topic on this (including my "piece of bullshit" comparison) was also posted in this forum, so I thought I'd ask around here again [smile] I've *finally* decided to get myself a GeForce 7800 to try to implement GPU GeoClipmaps using these vertex textures, as described in the GPU Gems 2 book, but without proper support on ATI hardware I can't very well use it in anything but a tech demo. Well, not without a backup codepath anyway. Would anyone happen to have some (firsthand) experience to offer about using vertex textures in HLSL, preferably on cards from both vendors? Thanks in advance!
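
For reference, this is roughly the kind of fetch I mean - just a minimal sketch with made-up names, and as far as I know GeForce 6/7 only accept 32-bit float textures with point sampling in the vertex stage:

sampler2D HeightMap : register(s0);   // height map bound to vertex texture sampler 0

float4x4 WorldViewProj;
float    HeightScale;

struct VS_IN  { float2 gridPos : POSITION;  float2 uv : TEXCOORD0; };
struct VS_OUT { float4 pos     : POSITION;  float2 uv : TEXCOORD0; };

// vs_3_0: tex2Dlod is the only sampling instruction available here,
// since there are no derivatives in the vertex stage.
VS_OUT main(VS_IN input)
{
    VS_OUT o;
    float height = tex2Dlod(HeightMap, float4(input.uv, 0, 0)).r;
    float4 world = float4(input.gridPos.x, height * HeightScale, input.gridPos.y, 1);
    o.pos = mul(world, WorldViewProj);
    o.uv  = input.uv;
    return o;
}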

I'm not aware of anything having changed since the thread you mentioned. ATI don't support proper vertex texturing via the API - you have to use R2VB instead. Basically means you've gotta implement two paths, one for Gf6/Gf7 and another for X1k's.

Jack

*sigh* Thanks Jack

Ah well, I was already thinking about implementing the GeoClipmaps on the CPU as well, for compatibility with SM2 hardware, so I guess that'll have to do for ATI 'SM3' hardware too. I haven't got a clue how to use R2VB in Managed DirectX, and I'm not inclined to waste my time finding out how, or even whether, it's implemented [headshake]

I'm wasting enough time as is :p

I don't know if there's any public info on SM4 specs available yet, but will NVidia's 'proper' way of doing vertex textures become the minimum for SM4 hardware? Even an educated guess would be very welcome, if available.

See this post about managed R2VB.

R2VB is available on all SM2 ATI cards and up, so if SM2 cards are the lowest you're supporting, it's worth trying an R2VB path.
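
Conceptually, the shader half of an R2VB path is just a pixel shader that writes per-vertex data into a float render target, which the driver extension then lets you rebind as a vertex stream. A rough sketch (illustrative names only - the API-side setup is vendor-specific and not shown here):

// Full-screen pass: one texel of the render target = one output vertex.
sampler2D HeightMap : register(s0);

float GridSpacing;
float HeightScale;

float4 main(float2 uv : TEXCOORD0) : COLOR0
{
    float  height = tex2D(HeightMap, uv).r;
    float3 pos    = float3(uv.x * GridSpacing, height * HeightScale, uv.y * GridSpacing);
    // Written into a float render target (e.g. A32B32G32R32F) that is later
    // bound as a vertex stream through the R2VB extension.
    return float4(pos, 1.0f);
}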

As for SM4 docs, the D3D10 docs are in the last 3 SDKs, and fully describe what D3D10 hardware will support.

Quote:
Original post by remigius
I don't know if there's any public info on SM4 specs available yet, but will NVidia's 'proper' way of doing vertex textures become the minimum for SM4 hardware? Even an educated guess would be very welcome, if available.
As Eyal said, the D3D10 documentation should give you a pretty good idea.

Resources are substantially more flexible in how and where they can be used in the D3D10 pipeline. I'm not aware of a specific "vertex texturing" or "render to vertex buffer" feature - more that they would be specific uses of a more generalised technology.
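
To give a rough idea - this is purely speculative on my part, going by the preview docs, and the names are made up - a plain buffer read from the vertex stage would cover the vertex-texturing case:

Buffer<float> HeightBuffer;      // height data bound as a shader resource

cbuffer PerFrame
{
    float4x4 WorldViewProj;
    float    HeightScale;
};

// SV_VertexID picks the height for this vertex straight out of the buffer;
// no sampler or dedicated "vertex texture" path involved.
float4 VS(float2 gridPos : POSITION, uint vertexId : SV_VertexID) : SV_Position
{
    float height = HeightBuffer.Load(vertexId) * HeightScale;
    return mul(float4(gridPos.x, height, gridPos.y, 1), WorldViewProj);
}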

hth
Jack

Quote:
See this post about managed R2VB.


Hmm, I feel quite stupid about missing that one, thanks for pointing it out.

I just checked out the source and it looks pretty neat, but on my X850 I got a few artifacts with polygons seemingly not getting rendered at times. Guess I'll have to take that up with Acid2 [wink]

Quote:
As for SM4 docs, the D3D10 docs are in the last 3 SDKs, and fully describe what D3D10 hardware will support.


Edited: Ok, never mind, I really should head off to bed... With all the stream rerouting on SM4 hardware, vertex texturing obviously seems to be just a specific application of this general concept (and perhaps even redundant?).

In any case, thanks and good night :)

Quote:
Original post by jollyjeffers
I'm not aware of anything having changed since the thread you mentioned. ATI don't support proper vertex texturing via the API - you have to use R2VB instead. Basically means you've gotta implement two paths, one for Gf6/Gf7 and another for X1k's.

Jack

Popular myth about R2VB: It's only for X1k cards.

Reality: It is for all cards since the Radeon 9500.

Just thought I'd lend my two cents.

Quote:
Original post by Cypher19
Popular myth about R2VB: It's only for X1k cards.

Reality: It is for all cards since the Radeon 9500.

Just thought I'd lend my two cents.

Why didn't ATI release their nifty extension way back then? If they had actually built up a little support, maybe it would have become really widespread (even becoming standardized into D3D and supported on other manufacturers' hardware).

Quote:
Original post by remigius
Edited: Ok, never mind, I really should head off to bed... With all the stream rerouting on SM4 hardware, vertex texturing obviously seems to be just a specific application of this general concept (and perhaps even redundant?).


Vertex texturing might still be used. After all, especially on cards that use the same pipelines for vertex and pixel shaders (as ATI's upcoming hardware is supposed to do), you should be able to get fast texture access in vertex shaders.

Still, I suppose that a lot of algorithms will use streams and stream output instead, or direct buffer accesses (even constant buffer accesses, since constant buffers in D3D10 support 4096 constants, enough for example for the 64x64 values used by the geo clipmaps algorithm).
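
To give an idea (a purely speculative sketch, with made-up names): a 64x64 clipmap level fits comfortably in a single constant buffer and could be indexed straight from the vertex shader:

cbuffer ClipmapLevel
{
    // 64x64 = 4096 heights, packed four per float4 constant (1024 constants),
    // well within the limit mentioned above.
    float4 Heights[1024];
};

float FetchHeight(uint2 gridPos)
{
    uint   index = gridPos.y * 64 + gridPos.x;
    float4 quad  = Heights[index / 4];
    uint   c     = index % 4;
    return (c == 0) ? quad.x :
           (c == 1) ? quad.y :
           (c == 2) ? quad.z : quad.w;
}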

Quote:
Why didn't ATI release their nifty extension way back then? If they had actually built up a little support, maybe it would have become really widespread (even becoming standardized into D3D and supported on other manufacturers' hardware).


Maybe they found it was possible while conducting research on D3D10 features (and realized that the hardware requirements were already met by their older chips?), or as part of the planned Uberbuffer support for OGL? Alternatively, consider how long it took ATI to bring out adaptive AA (only when the X1k was released) and temporal AA (only when the Xx00 series was launched), even though both of those work on the Radeon 9x00 cores.

Good morning,

I was googling this some more and I found this interesting article comparing the two techniques (VT & R2VB). They also seem to have dated the advent of R2VB wrong, but other than that it cleared things up a bit.

ATI supposedly released 12 demos on R2VB quite recently (on March 30 2006, according to the article), probably as a countermove against the NVidia/Havok publications. If R2VB really has been around since the Radeon 9500, ATI really needs to work on its PR and its relations with MS to get these things into the SMx specs. As with instancing, which has also been supported since the Radeon 9500, they should have mentioned (and explained?) that in their product specifications instead of the meaningless hyper-turbo-smart rubbish that seems to clutter those pages now (replying to registered developer applications would help too ;).

Anyway, one last question on the problem at hand. Am I right that pre-SM3 NVidia cards won't support R2VB? If so, which route would you pick? Ideally I should of course try to implement CPU, R2VB and vertex texture paths, but I don't think it'd be realistic to create and maintain all three, regardless of whether I actually have the necessary skills to do so [smile]

R2VB has been in the drivers only since 05.9, so it's rather new. Probably introduced with the X1x00 generation to provide an alternative to VT and because it's more useful with a 32 bit float pixel pipeline. Why put the docs out just now? I have no idea. NVIDIA is a lot better at pushing its hardware features at developers than ATI is.

IMO R2VB has nothing to do with Havok/NVIDIA. I've read that ATI will release its own physics API, which will be more efficient than can be done over D3D.

I think you're right that NVIDIA won't support R2VB. They could, but likely won't.

Quote:
Original post by ET3D
IMO R2VB has nothing to do with Havok/NVIDIA. I've read that ATI will release its own physics API, which will be more efficient than can be done over D3D.

That would be cool, but hopefully it's not an enormous hack like the instancing one and the entire R2VB 'API'. I guess that was pretty much the only way to fit it in, though, seeing how they really couldn't change D3D at all.

Sounds like a whole new API, from the information I've read.

Quote:
GPGPU.org news:
ATI has also announced preliminary plans to enable GPGPU development by publishing a detailed spec and a thin abstraction interface for programming the new GPUs.


Referencing:
Quote:
ExtremeTech Article:
The third future project at ATI is dramatically improved support for the GPGPU scene.
...
ATI plans to remedy that by publishing a detailed spec and even a thin "close to the metal" abstraction layer for these coders, so it can get away from using DirectX and OpenGL as an interface to the cards. Those are fine graphics APIs, but they're less than optimal for general purpose computing.


Can't find anything official from ATI though.

Quote:
Original post by jollyjeffers
Quote:
GPGPU.org news:
ATI has also announced preliminary plans to enable GPGPU development by publishing a detailed spec and a thin abstraction interface for programming the new GPUs.

I think this is very cool. At GDC the guy from Microsoft said it's impressive that ATI lets developers program shaders at microcode level on the Xbox 360. NVIDIA didn't let people get so close to the metal in the original Xbox.

So I guess that ATI is trying to bring this kind of power to the PC. Things like command buffers, more direct control of memory... Take away the overhead and limitations of Direct3D (though possibly at the expense of some more management work).

The only problem I see with this plan is D3D10. Unless ATI releases their new API soon, there won't be much point in it. Even if they do release it soon, there probably won't be that much point in using it for D3D10 cards. (I can still see an ATI API being a little better than D3D10 for accessing their chips, but not nearly as significantly as for D3D9.)

Quote:
Original post by ET3D
I think this is very cool. At GDC the guy from Microsoft said it's impressive that ATI lets developers program shaders at microcode level on the Xbox 360. NVIDIA didn't let people get so close to the metal in the original Xbox.


That's a stupid comment: Microsoft are the ones responsible for the design of the software on the Xbox; Nvidia (and ATI) only provide the hardware (and, in ATI's case, not even the actual silicon).

LeGreg

Quote:
Original post by LeGreg
Quote:
Original post by ET3D
I think this is very cool. At GDC the guy from Microsoft said it's impressive that ATI lets developers program shaders at microcode level on the Xbox 360. NVIDIA didn't let people get so close to the metal in the original Xbox.

That's a stupid comment: Microsoft are the ones responsible for the design of the software on the Xbox; Nvidia (and ATI) only provide the hardware (and, in ATI's case, not even the actual silicon).

This isn't true at all. IHVs contribute a lot on the software side of things with their drivers. They have lots of custom shader optimizations, custom extensions, etc. Of course, on the Xbox platform you are a lot closer to the actual hardware, but you still have a lot of development from ATI going into it. What Eyal was referring to are special opcodes, implemented specifically by ATI, that you can use in shaders.

LeGreg, think of shader assembly that's actual assembly language. The D3D assembly language is more like MSIL or Java bytecode. It's a low level language that's then compiled into the actual assembly language of the chip.

Quote:
Original post by ET3D
Think of shader assembly that's actual assembly language. The D3D assembly language is more like MSIL or Java bytecode. It's a low level language that's then compiled into the actual assembly language of the chip.


ET3D, I'm just saying that Microsoft did the whole thing on the Xbox, deciding what to expose and what not to. Nvidia only lent them the hardware documentation and sold them the chips. This is first-hand info, not hearsay :)

LeGreg

Quote:
Original post by LeGreg
Quote:
Original post by ET3D
Think of shader assembly that's actual assembly language. The D3D assembly language is more like MSIL or Java bytecode. It's a low level language that's then compiled into the actual assembly language of the chip.


ET3D, I'm just saying that Microsoft did the whole thing on the Xbox, deciding what to expose and what not to. Nvidia only lent them the hardware documentation and sold them the chips. This is first-hand info, not hearsay :)


My understanding from talking to 360 developers (I'm not one myself [sad]) is that it's different this time around - and I think that's what Eyal was trying to point out. ATI != Nvidia [wink]

Jack
