discman1028

Learning about the old days - GPUs


I feel somewhat unfortunate not to have been born in the old days. I didn't grow up learning about successive graphics generations. Having fallen right into the world of extremely flexible shaders, the limits (instruction count, texture fetches) just seem to keep expanding beyond my (and most hobbyists') needs. I have done well reading and playing catch-up, but I still want to take a trip back in time. I like learning A before B, so I am trying to maximize performance and graphics possibilities (shadows, reflections, etc.) using solely the fixed-function pipeline. I feel I learn more about the hardware that way, and about how things came to be. It is working so far. I'm working with DX9.0c and an NVIDIA GeForce FX 5900 (SM 2.0).

So that is where my question comes in: is it safe to say that as long as I don't touch shaders/FX effects in D3D, I'm only using fixed-function functionality? (Even if the fixed-function pipeline is already emulated in vertex/pixel shaders on more recent hardware (since when, DX8? anyone know?), that is not important to me.) I want to avoid going back in time with respect to hardware/software (DX7 API + GPU).

Thanks.

Quote:
Original post by discman1028
So that is where my question comes in: is it safe to say that as long as I don't touch shaders/FX effects in D3D, I'm only using fixed-function functionality?

Nothing is "safe" in the game industry... nor anywhere else in programming, not only in graphics programming.
However, it is somewhat "safe" to say that.
If you access Direct3D directly (not through an engine, and perhaps through the D3DX utility library) without using shaders, you're most likely using fixed-function rather than programmable functionality.
There are also some flags to set on vertex buffers (and at device creation) which determine whether vertices are transformed in hardware or in software. I don't know the exact DX9 behaviour (I use 3D engines these days), but in the DX7 days that meant using hardware T&L or the CPU. Take into account that hardware T&L is NOT a shader; it is considered fixed functionality.
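
To make that concrete in DX9 terms, here's a minimal sketch; the helper name and the fallback order are just my example, but the creation flags are the real ones:

#include <d3d9.h>

// Sketch: hardware vs. software vertex processing at device creation.
// Hardware T&L here is still fixed function, not a shader.
IDirect3DDevice9* CreateFixedFunctionDevice(IDirect3D9* d3d, HWND hWnd,
                                            D3DPRESENT_PARAMETERS& pp)
{
    IDirect3DDevice9* device = NULL;

    // Prefer transforming and lighting vertices on the GPU (hardware T&L)...
    HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                                   D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                   &pp, &device);
    if (FAILED(hr))
    {
        // ...and fall back to transforming them on the CPU if the card can't.
        hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                               D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                               &pp, &device);
    }

    // (With D3DCREATE_MIXED_VERTEXPROCESSING, individual vertex buffers can
    //  also opt into CPU processing by passing D3DUSAGE_SOFTWAREPROCESSING
    //  to CreateVertexBuffer.)
    return FAILED(hr) ? NULL : device;
}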

Quote:

I want to avoid going back in time with respect to hardware/software (DX7 API + GPU).

Shame, because you could really benefit from it (given what you say you want to learn).
I also noticed that many (good) samples from DX7 were removed in the DX 9.0 SDK, and I'm still wondering why (probably because they didn't look "next-genish").

Cheers
Dark Sylinc

Quote:
Original post by Matias Goldberg
I also noticed that many (good) samples from DX7 were removed in the DX 9.0 SDK, and I'm still wondering why (probably because they didn't look "next-genish").


More likely because everyone is running away from the FFP (the horribly restrictive system that it is) and embracing shaders. As such, including outdated information just serves to bloat the SDK and confuse people. It's also pointless as technology moves forward: learning the old hacks, because that's what they were, is far from useful; learning the new hacks is a much better use of your time.

Quote:
Original post by phantom
learning the new hacks is a much better use of your time


Well, case in point: when I learned about multitexturing, I thought it was interesting. Now that you can do whatever you want in shaders, it seems moot to learn about multitexturing, but I suspect some of the per-pass shader limitations are connected (in the hardware-capability sense) to the max-stages limitations in the D3D caps from back in the day.

So, back in the day, the limitations could be correlated with the hardware units. Now it's harder to understand why limitations are in place without that prior working knowledge. Being a hardware guy, I like to understand more than just an API.

I can say that I am familiar with graphics techniques and hacks... but I am not so familiar with why they are necessary from a hardware standpoint, at least not to a point that satisfies me. Looking at the chronology (Direct3D's evolution) has proved a great way to learn why things are the way they are.

Probably the best way to summarize it is an analogy. A lot of students are dropped into a CS curriculum with no more than a sentence or two describing why OO programming is the way to go these days. Then they go ahead and learn OO, never trying the alternatives. Similarly, I feel that I have not experimented thoroughly with graphics methods. There were all these tricks back in the day for doing perspective texture mapping and shadows... once these things are fully supported in hardware, turning them on becomes (closer to) the flip of a switch. Where's the learning there? Gotta go back in time. :) Or work on a software renderer of your own, another venture that I have found educational.
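
For reference, the stage limits I mean are right there in the D3D9 caps; a quick sketch (the helper name is made up, the fields are real):

#include <cstdio>
#include <d3d9.h>

// Sketch: query the fixed-function texture stage limits the driver reports.
void PrintStageLimits(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    // Blend stages the fixed-function pipeline exposes...
    std::printf("MaxTextureBlendStages:   %lu\n",
                (unsigned long)caps.MaxTextureBlendStages);
    // ...and how many textures can actually be bound at once.
    std::printf("MaxSimultaneousTextures: %lu\n",
                (unsigned long)caps.MaxSimultaneousTextures);
}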

Quote:
Original post by phantom
learning the new hacks is a much better use of your time


Amen to that.

Quote:
Original post by discman1028

Well, case in point: when I learned about multitexturing, I thought it was interesting. Now that you can do whatever you want in shaders, it seems moot to learn about multitexturing, but I suspect some of the per-pass shader limitations are connected (in the hardware-capability sense) to the max-stages limitations in the D3D caps from back in the day. (...)


While reading your post, I remembered a guy once happily claiming that he had made "one of the biggest HDR hacks in history" (he wasn't actually serious about that claim) by using modulate 4x blending. When I read his "hack" I thought: he's sort of right. Of course there's no way to compare HDR to a modulate 4x effect. HDR can produce much better results, and it is designed to improve the overall quality of the rendered image by dynamically adjusting it depending on the exposure, while modulate 4x is static by definition and was never intended for that (not even close).

However, if you look at the "HDR Cube Map" example included in the DX 9.0c SDK, turning HDR on and off shows a big difference and makes you think how good it looks (because the light looks very weak otherwise). But if you think about it a bit, modulate 4x (or 2x) would do the trick and achieve a similar effect (very useful for low-end hardware). Of course, modulate 4x is a hack, and since it is LDR it could produce artifacts.

The conclusion is not that modulate 4x is a replacement for HDR (which, I repeat, it's not), but rather how many other tools we have that our imagination can put to use, tools that may have been forgotten or just skipped. The example may be a bit outdated, since nowadays many people have cards capable of doing HDR at reasonable frame rates, but that wasn't true when HDR was new. (And when modulate 4x came out, not many cards were able to do it, while now we can even do it in shaders [lol])
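
For anyone who never touched the fixed-function combiners, modulate 2x/4x is just a texture stage state; a rough sketch of the classic over-bright lightmap setup (the helper name is mine, the states are standard D3D9):

#include <d3d9.h>

// Sketch: base texture in stage 0, lightmap in stage 1, result scaled by 4.
void SetupLightmapModulate4x(IDirect3DDevice9* device)
{
    // Stage 0: base texture modulated by the vertex (diffuse) colour.
    device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

    // Stage 1: multiply in the lightmap and scale the result by 4, which is
    // the "modulate 4x" over-brightening (D3DTOP_MODULATE2X scales by 2).
    device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE4X);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
}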

But I have to agree with phantom: learning old stuff might be a waste of time, and confusing. For example, the DX7 SDK recommends keeping the vertex count at around 2000... [lol] (at that time, vertex transformations were a bottleneck).

Quote:

everyone is running away from the FFP (the horribly restrictive system that it is) and embracing shaders (...)

It is horribly restrictive, but during development it is really nice to just write "EnableFog(True)" or "SceneBlend(add)" instead of writing shader code when we quickly want to see some result.
discman1028: but in the end, you'll want to know what's inside "EnableFog(True)" and shaders are very handy for that.
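
Something like this is roughly what I imagine hiding behind "EnableFog(True)" in fixed function (the wrapper itself is hypothetical, but the render states are the real D3D9 ones):

#include <d3d9.h>

// Hypothetical EnableFog(true): classic linear vertex fog in fixed function.
void EnableFog(IDirect3DDevice9* device, bool enable)
{
    device->SetRenderState(D3DRS_FOGENABLE, enable ? TRUE : FALSE);
    if (!enable)
        return;

    float fogStart = 50.0f;
    float fogEnd   = 300.0f;

    device->SetRenderState(D3DRS_FOGCOLOR, D3DCOLOR_XRGB(128, 128, 160));
    device->SetRenderState(D3DRS_FOGVERTEXMODE, D3DFOG_LINEAR);
    // Render states take DWORDs, so float parameters are passed bit-for-bit.
    device->SetRenderState(D3DRS_FOGSTART, *reinterpret_cast<DWORD*>(&fogStart));
    device->SetRenderState(D3DRS_FOGEND,   *reinterpret_cast<DWORD*>(&fogEnd));
}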

Quote:
Original post by Matias Goldberg
discman1028: but in the end, you'll want to know what's inside "EnableFog(True)" and shaders are very handy for that.


How does having shader hardware help me learn about what the hardware fog unit does?

Quote:
Original post by discman1028
How does having shader hardware help me learn about what the hardware fog unit does?

Because there is no "hardware fog unit". There hasn't been for years. Even before vertex programs were directly exposed to users, there still wasn't. It just didn't make much sense to have quite so much special purpose functionality on the same chip. Instead, GPU manufacturers implemented it using something much more similar to vertex programs than to the user-friendly-ized "glFog". So if you really are interested in "learning more about the hardware", you'll run from fixed-function and you won't look back.
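
To put it another way, per-vertex fog is just a few multiplies and a clamp, exactly the kind of work that maps onto vertex-program-like hardware; a rough sketch of classic linear fog:

// Sketch of classic linear per-vertex fog, the kind of computation a vertex
// program (or the vertex-program-like hardware behind the fixed-function
// path) performs per vertex. 'viewDistance' is the vertex's distance from
// the eye.
float LinearFogFactor(float viewDistance, float fogStart, float fogEnd)
{
    // D3D convention: 1.0 = unfogged, 0.0 = fully fogged.
    float f = (fogEnd - viewDistance) / (fogEnd - fogStart);
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return f;
}
// The pipeline then blends: finalColor = f * litColor + (1 - f) * fogColor.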

Quote:
Original post by Sneftel
Because there is no "hardware fog unit". There hasn't been for years.
It's my understanding that NVIDIA had a dedicated hardware fog unit in their pipeline at least as late as the GeForce 6 series (NV4x), although I can't remember why.

That would be weird, since it's such trivial per-vertex functionality. Still, I guess I don't have any direct evidence that it isn't. s/fog/multitexturing, then. [smile]

Quote:
Original post by Promit
Quote:
Original post by Sneftel
Because there is no "hardware fog unit". There hasn't been for years.
It's my understanding that NVIDIA had a dedicated hardware fog unit in their pipeline at least as late as the GeForce 6 series (NV4x), although I can't remember why.


Quote:

That would be weird, since it's such trivial per-vertex functionality. Still, I guess I don't have any direct evidence that it isn't. s/fog/multitexturing, then.


Weird, yes. But there are (at least) three types of fogging: pixel table fog, vertex fog, and volumetric fog.

Vertex fog is what we've been discussing. Since it was done on the CPU before vertex shaders existed, almost 99% of Direct3D-capable cards supported it.

I don't know exactly how pixel table fog works; it's card- and driver-specific. But even an old NVIDIA Vanta (older than the TNT!) supported it.

And volumetric fog is an advanced technique (implemented with shaders); an implementation is described in this paper (note the whitepaper is from NVIDIA). However, it requires a GeForce FX or higher, although any SM 2.0-compatible card will do.
Perhaps that's what that recollection is referring to.
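
If you're curious which of these paths a given card claims to support, the raster caps expose it; a quick sketch (the helper name is made up):

#include <cstdio>
#include <d3d9.h>

// Sketch: ask the card which fog paths it claims to support.
void ReportFogCaps(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    std::printf("vertex fog:      %s\n",
                (caps.RasterCaps & D3DPRASTERCAPS_FOGVERTEX) ? "yes" : "no");
    std::printf("pixel table fog: %s\n",
                (caps.RasterCaps & D3DPRASTERCAPS_FOGTABLE)  ? "yes" : "no");
    std::printf("range-based fog: %s\n",
                (caps.RasterCaps & D3DPRASTERCAPS_FOGRANGE)  ? "yes" : "no");
    std::printf("w-based fog:     %s\n",
                (caps.RasterCaps & D3DPRASTERCAPS_WFOG)      ? "yes" : "no");
}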

Cheers
Dark Sylinc

I don't think there's much to learn from the fixed function pipeline. A lot of it was derived arbitrarily, or from old SGI hardware via GL. The number of texture stages, for example, doesn't mean a thing: fixed-function hardware never supported that many stages, and neither did pre-SM2 programmable hardware, while SM2 hardware can do a lot more. So you can't really learn anything from it.

If you're interested in what the fixed functionality does, ATI's FixedFuncShader sample should be a good place to look.

Quote:
Original post by Sneftel
Quote:
Original post by discman1028
How does having shader hardware help me learn about what the hardware fog unit does?

Because there is no "hardware fog unit". There hasn't been for years. Even before vertex programs were directly exposed to users, there still wasn't. It just didn't make much sense to have quite so much special purpose functionality on the same chip. Instead, GPU manufacturers implemented it using something much more similar to vertex programs than to the user-friendly-ized "glFog".


Can someone elaborate more on this? What did glFog() do before the days of this "something similar to vertex programs", if anything? And is there a concrete definition somewhere of what the "something similar to vertex programs" is?

Quote:
Original post by ET3D
If you're interested in what the fixed functionality does, ATI's FixedFuncShader sample should be a good place to look.


Thanks, this was a perfect reference for me. It's great that you can do a diff to visualize how the fixed-function calls and the shader implementations differ. What's interesting to me is why and where they would differ at all. The PDF has a small blurb hinting at a reason:

Quote:
Due to what we believe are numerical precision issues (e.g., dword vs. float4), we often get small errors in our shader. The “green” errors can be turned on and off from the View menu or by pressing D.


Though those small "green" errors don't seem to be FF-vs-shader related; the FF-vs-shader diff shows up in "yellow" (before you press "D"). They (ATI) don't expound at all on why FF and shaders would differ. Hmmmm. Any ideas?

Quote:
Original post by discman1028
Though those small "green" errors don't seem to be FF-vs-shader related; the FF-vs-shader diff shows up in "yellow" (before you press "D"). They (ATI) don't expound at all on why FF and shaders would differ. Hmmmm. Any ideas?


In retrospect, it's possibly because "only portions of the FF pipeline are implemented in HLSL". And/or it's probably just small differences like the order of operations in the shader and fused multiply-adds causing slightly different accuracy, etc.
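
For example (just my own illustration of the fused multiply-add point, nothing from the ATI sample): an FMA rounds once while a separate multiply and add round twice, which is enough to change the last bits of the result.

#include <cmath>
#include <cstdio>

int main()
{
    // Values chosen so that a*b + c differs between one fused rounding and a
    // multiply followed by an add (two roundings).
    float a = 1.0f + 1.0f / 4096.0f;    // 1 + 2^-12 (exact in float)
    float b = a;
    float c = -(1.0f + 1.0f / 2048.0f); // -(1 + 2^-11) (exact in float)

    float product  = a * b;              // a*b rounded to float first...
    float separate = product + c;        // ...then the sum rounded: gives 0
    float fused    = std::fmaf(a, b, c); // one rounding of exact a*b + c: 2^-24

    std::printf("mul then add: %.10g\n", separate);
    std::printf("fused (fma):  %.10g\n", fused);
    return 0;
}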

