ATI vertex texture fetch in HLSL

Good morning,

I was googling for this some more and found an interesting article comparing the two techniques (vertex texturing and R2VB). They also made the (apparent) mistake of dating the advent of R2VB wrong, but other than that it cleared things up a bit.

ATI supposedly released 12 demos of R2VB quite recently (on March 30, 2006, according to the article), probably as a countermove against the NVIDIA/Havok publications. If R2VB really has been around since the Radeon 9500, ATI really needs to work on its PR and its relations with MS to get these things into the SMx specs. As with instancing, which has also been supported since the Radeon 9500, they should have mentioned (and explained?) it in their product specifications instead of the meaningless hyper-turbo-smart rubbish that seems to clutter those pages now (also, replying to registered developer applications would help too ;).

Anyway, one last question on the problem at hand. Am I right that pre-SM3 NVIDIA cards won't support R2VB? If so, which route would you pick? Ideally I should of course implement CPU, R2VB and vertex texture paths, but I don't think it'd be realistic to create and maintain all three, regardless of whether I actually have the necessary skills to do so [smile]
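
For reference, the vertex texture path in SM 3.0 HLSL looks roughly like the sketch below. The names are illustrative; the two things to remember are that vertex shaders have to use tex2Dlod with an explicit LOD (there are no gradients in the vertex pipeline), and that NVIDIA's SM3 cards only accept specific float formats (R32F and A32B32G32R32F) as vertex textures.

// Minimal SM 3.0 vertex texture fetch sketch; compile as vs_3_0.
// DisplacementMap and WorldViewProj are illustrative names.
sampler2D DisplacementMap : register(s0);
float4x4 WorldViewProj;

struct VS_IN
{
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};

float4 DisplaceVS(VS_IN input) : POSITION
{
    // No gradients in the vertex pipeline, so the LOD is explicit (w = 0).
    float height = tex2Dlod(DisplacementMap, float4(input.TexCoord, 0, 0)).r;

    float4 displaced = input.Position;
    displaced.y += height; // simple displacement along Y

    return mul(displaced, WorldViewProj);
}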
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
R2VB has only been in the drivers since Catalyst 5.9, so it's rather new. It was probably introduced with the X1x00 generation to provide an alternative to vertex texturing, and because it's more useful with a 32-bit float pixel pipeline. Why put the docs out just now? I have no idea. NVIDIA is a lot better than ATI at pushing its hardware features at developers.
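
For comparison, with R2VB the per-vertex work moves into a pixel shader that writes into a float4 render target, which the ATI driver then lets you rebind as a vertex buffer through its FOURCC extension. Below is a minimal sketch of that pass with illustrative names and a made-up displacement scheme; the actual rebinding is done host-side in D3D9 code, not in the shader.

// R2VB pass sketch: each texel of the float4 render target becomes
// one vertex position once the target is rebound as a vertex buffer.
// BasePositions, HeightMap and Time are illustrative names.
sampler2D BasePositions : register(s0); // original vertex positions, one per texel
sampler2D HeightMap     : register(s1);
float Time;

float4 R2VBPass(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 pos    = tex2D(BasePositions, texCoord);
    float  height = tex2D(HeightMap, texCoord).r;

    // Animate the displacement; this runs in the pixel pipeline,
    // which on ATI's parts is where the fast float math lives.
    pos.y += height * sin(Time + texCoord.x * 10.0);

    return pos; // written to the render target = the new vertex data
}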

IMO R2VB has nothing to do with Havok/NVIDIA. I've read that ATI will release its own physics API, which will be more efficient than what can be done through D3D.

I think you're right that NVIDIA won't support R2VB. They could, but likely won't.
Quote:Original post by ET3D
IMO R2VB has nothing to do with Havok/NVIDIA. I've read that ATI will release its own physics API, which will be more efficient than what can be done through D3D.

That would be cool, but hopefully it's not an enormous hack like the instancing one and the entire R2VB 'API'. I guess that was pretty much the only way to fit it in, though, seeing how they really couldn't change D3D at all.
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
Sounds like a whole new API from the information I've read...

Quote:GPGPU.org news:
ATI has also announced preliminary plans to enable GPGPU development by publishing a detailed spec and a thin abstraction interface for programming the new GPUs.


Referencing:
Quote:ExtremeTech Article:
The third future project at ATI is dramatically improved support for the GPGPU scene.
...
ATI plans to remedy that by publishing a detailed spec and even a thin "close to the metal" abstraction layer for these coders, so it can get away from using DirectX and OpenGL as an interface to the cards. Those are fine graphics APIs, but they're less than optimal for general purpose computing.


Can't find anything official from ATI though.

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Quote:Original post by jollyjeffers
Quote:GPGPU.org news:
ATI has also announced preliminary plans to enable GPGPU development by publishing a detailed spec and a thin abstraction interface for programming the new GPUs.

I think this is very cool. At GDC the guy from Microsoft said it's impressive that ATI lets developers program shaders at microcode level on the Xbox 360. NVIDIA didn't let people get so close to the metal in the original Xbox.

So I guess that ATI is trying to bring this kind of power to the PC: things like command buffers and more direct control over memory. Take away the overhead and limitations of Direct3D (though possibly at the expense of some more management work).

The only problem I see with this plan is D3D10. Unless ATI releases their new API soon, there won't be much point to it. Even if they do release it soon, there probably won't be much point in using it with D3D10 cards. (I can still see an ATI API being a little better than D3D10 for accessing their chips, but not nearly as significantly as for D3D9.)
Quote:Original post by ET3D
I think this is very cool. At GDC the guy from Microsoft said it's impressive that ATI lets developers program shaders at microcode level on the Xbox 360. NVIDIA didn't let people get so close to the metal in the original Xbox.


This is a silly comment: Microsoft is the one responsible for the design of the software on the Xbox; NVIDIA (and ATI) only bring the hardware (and, in ATI's case, not even the actual silicon chip).

LeGreg
Quote:Original post by LeGreg
Quote:Original post by ET3D
I think this is very cool. At GDC the guy from Microsoft said it's impressive that ATI lets developers program shaders at microcode level on the Xbox 360. NVIDIA didn't let people get so close to the metal in the original Xbox.

This is a silly comment: Microsoft is the one responsible for the design of the software on the Xbox; NVIDIA (and ATI) only bring the hardware (and, in ATI's case, not even the actual silicon chip).

This isn't true at all. IHVs contribute a lot on the software side of things with their drivers: they have lots of custom shader optimizations, custom extensions, etc. Of course, on the Xbox platform you are a lot closer to the actual hardware, but you still have a lot of development from ATI going into it. What Eyal was referring to are special opcodes that you can use in shaders, which ATI has implemented specially.
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
LeGreg, think of shader assembly that is actual assembly language. The D3D assembly language is more like MSIL or Java bytecode: a low-level language that is then compiled into the actual assembly language of the chip.
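
To make the analogy concrete, here's an illustrative sketch; the HLSL line and the register assignments are made up for illustration.

// HLSL source:
//     float ndotl = saturate(dot(normal, lightDir));
//
// D3D shader assembly emitted by the HLSL compiler -- the portable,
// bytecode-like level that D3D9 accepts:
//     dp3_sat r0.x, v1, c4
//
// From there the driver recompiles the instruction into the GPU's own
// microcode, which differs per vendor and per chip generation. That
// microcode level is what ATI reportedly exposes on the Xbox 360.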
Quote:Original post by ET3D
Think of shader assembly that is actual assembly language. The D3D assembly language is more like MSIL or Java bytecode: a low-level language that is then compiled into the actual assembly language of the chip.


ET3D, I'm just saying that Microsoft did the whole thing on the Xbox, deciding what to expose and what not to expose. NVIDIA only lent them the hardware documentation and sold them the chips. This is first-hand info, not hearsay :)

LeGreg
Quote:Original post by LeGreg
Quote:Original post by ET3D
Think of shader assembly that is actual assembly language. The D3D assembly language is more like MSIL or Java bytecode: a low-level language that is then compiled into the actual assembly language of the chip.


ET3D, I'm just saying that Microsoft did the whole thing on the Xbox, deciding what to expose and what not to expose. NVIDIA only lent them the hardware documentation and sold them the chips. This is first-hand info, not hearsay :)


My understanding from talking to 360 developers (I'm not one myself [sad]) is that it's different this time around, and I think that's what Eyal was trying to point out. ATI != NVIDIA [wink]

Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]
