Future of shaders

I think that hand-coding vertex programs will, in future, be reserved for the ones which need to run very quickly. You can compare it to Assembler vs. C/C++: you use C++ to write the bulk of your code, then use Assembler for the pieces which need particular optimization. In the same way, you can use Cg and GLslang to write the majority of your vertex programs, and hand-code the specific ones which need to be quick. Of course, this depends entirely on the quality of the optimization performed by the Cg and GLslang compilers, which is likely to be very high. If it is, you'll never need to hand-code vertex programs again...
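
To make the comparison concrete, here is the same trivial transform-and-colour vertex program written both ways: a minimal, illustrative sketch, first hand-coded against ARB_vertex_program, then in GLslang using its built-in state variables (neither is taken from any particular engine):

    !!ARBvp1.0
    # hand-coded: transform the vertex by the modelview-projection matrix
    PARAM mvp[4] = { state.matrix.mvp };
    TEMP pos;
    DP4 pos.x, mvp[0], vertex.position;
    DP4 pos.y, mvp[1], vertex.position;
    DP4 pos.z, mvp[2], vertex.position;
    DP4 pos.w, mvp[3], vertex.position;
    MOV result.position, pos;
    # pass the vertex colour through untouched
    MOV result.color, vertex.color;
    END

    // the same program in GLslang
    void main()
    {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        gl_FrontColor = gl_Color;
    }

The high-level version says what it means in two lines; the assembler version is what you would drop down to if the compiler's output for a hot shader turned out not to be fast enough.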


Windows 95 - 32 bit extensions and a graphical shell for a 16 bit patch
to an 8 bit operating system originally coded for a 4 bit microprocessor,
written by a 2 bit company that can't stand 1 bit of competition.


[edited by - iNsAn1tY on February 20, 2004 6:49:23 AM]

As someone on the opengl.org forums said, it's a bit like writing assembler for a chip you know nothing about: sure, you know the instruction set, but that's it; you know nothing about the cache system, the pipelining, or how it handles out-of-order execution.
The guys at ATI and NVIDIA writing the backend compilers for Cg and GLSL DO know this information, so I'm more inclined to trust the speed of a shader generated by them than one written by hand (well, maybe not at the moment, but soon, certainly).
Also, factor in that what works well on one card might not on another, and you have a good reason not to use assembler.

That said, it's certainly worth at least being aware of the assembler way of doing things, simply because the backends still aren't really up to the job; but give it a year (or less) and this might well have changed.

One of the cool things about OGLSL is that it compiles your shader at runtime, so the driver can take full advantage of whatever features and instructions the current hardware offers. So even if you know everything there is to know about today's hardware and write a perfect asm-level shader right now, that same shader might very well be suboptimal on future hardware generations, whereas an OGLSL shader will likely perform much better.
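
For anyone who hasn't used it yet, runtime compilation looks roughly like the sketch below. It assumes the ARB_shader_objects / ARB_vertex_shader entry points have already been obtained through the usual extension-loading mechanism, and it trims error handling down to the compile status check:

    /* compile a GLslang vertex shader at runtime; the driver's compiler
       targets whatever hardware the program is actually running on */
    const GLcharARB *src =
        "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }";

    GLhandleARB shader = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
    glShaderSourceARB(shader, 1, &src, NULL);
    glCompileShaderARB(shader);

    /* ask the driver whether its compiler accepted the source */
    GLint compiled;
    glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);

    GLhandleARB program = glCreateProgramObjectARB();
    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);
    glUseProgramObjectARB(program);  /* the shader now replaces fixed function */

Because the compile happens on the user's machine, a newer driver on newer hardware is free to schedule the same source completely differently.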

Share this post


Link to post
Share on other sites
Now we just have to get good driver support from all vendors. And when that comes - LET'S ROCK

I was frustrated at never having learned assembly, due to the fact that shaders used it. Then HLSL and Cg came along, which partly solved my problem, but optimizations still had to be made by hand. Now GLSL is coming and... well... it's soooo great, from what I've read about it.

ATI finally put up a sample app. Check it out, everyone: Right here

[edited by - klajntib on February 20, 2004 8:04:15 PM]

I don't think ASM-style fragment/vertex programs are hard to write at all. I learned everything I know from the ARB whitepapers and a few examples, and I used what I learned to write a per-pixel lighting program from scratch (see the sketch below).
Granted, Cg, HLSL, GLslang, etc. make writing shaders faster, but I disagree that C-style shaders were created because ASM-style shaders were too difficult to learn.
I had no prior experience with ASM before I started writing shaders.
I too wish they were more widely supported. I know only 3 people who can run the shader code that I write :-/
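
For the curious, a per-pixel diffuse term in ARB_fragment_program can be as short as the following sketch. This is illustrative rather than the program mentioned above, and it assumes the vertex stage has placed the surface normal in texcoord[0] and the light vector in texcoord[1]:

    !!ARBfp1.0
    TEMP n, l, ndotl;
    # renormalize the interpolated surface normal
    DP3 n.w, fragment.texcoord[0], fragment.texcoord[0];
    RSQ n.w, n.w;
    MUL n.xyz, fragment.texcoord[0], n.w;
    # renormalize the interpolated light vector
    DP3 l.w, fragment.texcoord[1], fragment.texcoord[1];
    RSQ l.w, l.w;
    MUL l.xyz, fragment.texcoord[1], l.w;
    # clamped N.L, modulated by the interpolated vertex colour
    DP3_SAT ndotl, n, l;
    MUL result.color, ndotl, fragment.color;
    END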

High-level shading languages are good for writing long shaders, for quick prototyping, and for bringing you closer to your problem space. Maybe it's still true that the GeForce FX has problems running ARB shaders; if so, I think glslang might outrun them, because NVIDIA will optimize it for their hardware better than a generic ARB shader. You can also use NVIDIA's asm shaders, which are fast, but then they only run on NVIDIA cards. So yeah, I think the next-gen cards coming out in April will make glslang feasible.
