Re-Learn DX10?

Posted by blanky

Hey guys, I've been wanting to ask this question for a while; it might've already been discussed, but I found no reference to it, sorry. If I learn DX9 (D3D), would I have to relearn DX10 when it comes out? What I mean is, would it have a completely different API, or would it be the same (obviously with add-ons)? I'm sorry if I don't state this clearly; please ask me to elaborate if I didn't. This is a reason why I want to use OpenGL instead, but then again I'm not very fond of OpenGL's extensions.

DirectX 10 won't be in your hands for a very long time. Anyway, as a software developer you will have to learn new languages and APIs for the rest of your life (the most valuable thing you can learn is how to learn; then you'll be infinitely adaptable). Only the people working on DX10 will be able to tell you exactly how much the API will change, but if you accept the fact that, as a software developer, you'll have to frequently learn new things, you'll be able to handle it when it comes. That said, they're planning some big changes, but that's not to say your DX9 experience won't be useful when DX10 comes around.

Learning DX9 will help you out with DX10 *a lot*. There are changes in DX10, but they can all be taken in stride if you already know the fundamentals. I recommend taking a look at some intro DX9 tutorials and getting familiar with the API. Then, when the beta for DX10 comes out, get it and start looking at that stuff. It shouldn't be too long at all.

If you don't really know too much about graphics programming, now would be a perfect time to pick up the fundamentals (the underlying math and techniques). Remember that in D3D10 there will be no FFP, so you will have to write all of your own shaders, which means you need the necessary background for that, too.
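
To make that concrete, here is a minimal sketch (D3D9-style HLSL; every name here is my own invention, not from the thread) of the kind of shader pair you end up writing yourself once the FFP is gone - just the transform and texture lookup the fixed pipeline used to do for you:

float4x4 g_matWorldViewProj;    // set from the application each frame
texture  g_tex;
sampler  g_sampler = sampler_state { Texture = <g_tex>; };

struct VS_OUT
{
    float4 pos : POSITION;
    float2 uv  : TEXCOORD0;
};

VS_OUT VSMain(float4 pos : POSITION, float2 uv : TEXCOORD0)
{
    VS_OUT o;
    o.pos = mul(pos, g_matWorldViewProj);   // what the FFP transform stage did
    o.uv  = uv;
    return o;
}

float4 PSMain(VS_OUT i) : COLOR0
{
    return tex2D(g_sampler, i.uv);          // what the default texture stage did
}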

Quote:
if you accept the fact that, as a software developer, you'll have to frequently learn new things, you'll be able to handle it

That's some good advice there. DirectX 10 is a Windows Vista-only component, so you won't see it in a 'final' form until Vista is released (currently 2H-06). You've got *plenty* of time to learn DX9. Learning DX9 will not be a waste of time.

Quote:
Remember that in D3D10 there will be no FFP, so you will have to write all of your own shaders, which means you need the necessary background for that, too.

If you're learning D3D9, you really want to focus on the programmable pipeline. If there is one implementation skill that you'll be able to take from D3D9->D3D10, it'll be the concepts/methods of the programmable pipeline [smile]

Quote:
when the beta for DX10 comes out, get it and start looking at that stuff. It shouldn't be too long at all.

A relevant quote from Chuck Walbourn:
Quote:
The runtime components will be inbox for Windows Vista Beta 2. The Direct3D 10 documentation, samples, headers, libs, reference rasterizer, etc will be entering public beta in a DirectX SDK release in sync with Beta 2.

Which basically leaves the question as to when Windows Vista Beta 2 is out [wink]. I'll refrain from commenting, but I've seen enough talk online that seems to be fairly logical/'accurate'.

hth
Jack

Thanks guys. Of course I understand learning new APIs; I was just curious whether the API would change all that much (functions, etc.). But you're right, there's lots of time anyway.

Quote:
Original post by jollyjeffers
Quote:
Remember that in D3D10 there will be no FFP, so you will have to write all of your own shaders, which means you need the necessary background for that, too.

If you're learning D3D9, you really want to focus on the programmable pipeline. If there is one implementation skill that you'll be able to take from D3D9->D3D10, it'll be the concepts/methods of the programmable pipeline [smile]

I'd be a little wary of that logic. Not that there's anything wrong with the programmable pipeline, but I've noticed a fair number of people coming up who don't understand what the programmable pipeline is, because they don't understand what preceded it and what precisely it replaces. I'm not saying you should spend a huge amount of time working with fixed function, but get to know it as well.

Quote:
Original post by Promit
Quote:
Original post by jollyjeffers
Quote:
Remember that in D3D10 there will be no FFP, so you will have to write all of your own shaders, which means you need the necessary background for that, too.

If you're learning D3D9, you really want to focus on the programmable pipeline. If there is one implementation skill that you'll be able to take from D3D9->D3D10, it'll be the concepts/methods of the programmable pipeline [smile]

I'd be a little wary of that logic. Not that there's anything wrong with the programmable pipeline, but I've noticed a fair number of people coming up who don't understand what the programmable pipeline is, because they don't understand what preceded it and what precisely it replaces. I'm not saying you should spend a huge amount of time working with fixed function, but get to know it as well.

If you don't intend to be using D3D9 for a long time, I'm not quite sure why one would learn a deprecated technology such as the FFP. Many of the challenges people have with the programmable pipeline are the result of not understanding fundamental concepts of graphics programming. The FFP allows for this lack of knowledge, which is exactly where people get into trouble. For example, learning how to use the FFP to enable fog isn't going to help you implement a fog shader.
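
To illustrate that point, here is a hedged sketch (parameter names are my own) of the math behind the linear fog that D3DRS_FOGSTART/D3DRS_FOGEND configure in the FFP - the part the render states never forced you to learn:

float3 g_eyePosWorld;
float  g_fogStart;
float  g_fogEnd;
float3 g_fogColor;

float3 ApplyLinearFog(float3 litColor, float3 posWorld)
{
    float dist = distance(posWorld, g_eyePosWorld);
    // Fog factor: 1.0 before fogStart (no fog), 0.0 beyond fogEnd (full fog).
    float f = saturate((g_fogEnd - dist) / (g_fogEnd - g_fogStart));
    return lerp(g_fogColor, litColor, f);
}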

I don't think that the PP is necessarily replacing the FFP, because for a while now the FFP has existed more or less as a library of shaders that D3D just set up for you. Of course, the API for this library wasn't the best, as it was layered directly into the rest of the API (a holdover from when programmable shaders weren't available).

Is it 100% confirmed they are going to cut out the FFP? If it is, I hope there will be some alternative API rolled in to replace the FFP...

Shaders are all nice and dandy, but there is so much functionality stowed away in the FFP which is going to require additional effort to code in shaders (anything having to do with multiple texture stages, alpha operations, etc.). As an alternative, more powerful API for the pipeline, shaders are just fine, but if they're going to be the complete replacement for the FFP, that needs more work... A new API shouldn't make things more complex by taking away levels of abstraction, IMHO.

The FFP, for example, works out of the box, while you'll have to code your own shader just to get started with DX10, at least from the sound of it. That's not too much work, but a number of simple default shaders that perform the same functionality as commonly used FFP settings would really be nice... especially for beginners who want to focus on getting things rendered in the first place, not on exactly how to render them.

But since...

Quote:
...the FFP has existed more or less as a library of shaders that D3D just set up for you.


...I guess they'll at least release these FFP shaders for us to get started with. Come to think of it, how will DX10 support non-DX9 (< ps2) hardware? Guess not at all then, right?

[edit]

Seems I got worried again for nothing, going by the PDC talk on ZBuffer. Since at least Vista will still ship with DX9, we can still use it, right? ZMan also said in the article that no DX10 hardware exists yet, so I reckon it's going to be 1 or 2 years before it gets really accepted and used, no?

Guest Anonymous Poster

Hi,

The biggest changes in DX came in the transition from DX5-DX7 to DX8. Since DX8, the changes have been more about refining and tuning little things. The differences between DX8 and DX9 aren't so big anymore.

Whenever a new DX has become available, I have upgraded the DX part of my code, and it has been worth it.

Cheers
Thanks for your feedback; I'm now also very much interested in this :)

However, I read that DX10 will completely break backward compatibility with DX9 (introducing the geometry shader and numerous other features), so I guess we're looking at something similar to the DX5-DX7 to DX8 shift. Would anyone happen to have some good links on what is going to change with DX10 exactly, so I can read it firsthand? :)

On a related note, I just came across Shadergen for OpenGL, which converts OpenGL FFP settings into a shader for you, complete with compatibility checks against OpenGL versions and video drivers. Does anyone know if something similar exists for DirectX / HLSL? It looks like a great tool, even more so given DX10's reported lack of the FFP.

Just to reiterate and express my views on the subject.
Learning an API is trivial; what is important is that you understand the fundamentals, such as the pipeline and how everything ties together, plus some hardware knowledge. With that knowledge you can extend to and use different APIs and still be comfortable.

The reason I am saying this is the following...
When coding anything, you need to understand what you are coding. Take, for example, a simple Direct3D app. You know the rendering pipeline goes in the following order:

Stage #1 -> Input Vertex data...
Stage #2 -> Transform and Lighting
Stage #3 -> Clipping/ Culling/ Rasterization
Stage #4 -> Pixel processing
Stage #5 -> Testing (Alpha, depth, stencil...)
Stage #6 -> Output to FrameBuffer

When you are comfortable with this knowledge and how each stage interacts with the next, you can easily see that at the start of your app (excluding initializing the rendering device, etc.) you need to store your data in a vertex buffer or a storage buffer. You can then use a vertex shader in the vertex processing stage to do transform and lighting, and so on.
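
As a rough sketch (assumed names, D3D9-style HLSL), stage #2 above could look like this as a vertex shader - the transform plus a single directional diffuse light:

float4x4 g_world;
float4x4 g_worldViewProj;
float3   g_lightDirWorld;   // unit vector pointing towards the light
float4   g_lightColor;

struct VS_OUT
{
    float4 pos   : POSITION;
    float4 color : COLOR0;
};

VS_OUT VSMain(float4 pos : POSITION, float3 normal : NORMAL)
{
    VS_OUT o;
    o.pos = mul(pos, g_worldViewProj);                           // transform
    float3 n = normalize(mul(normal, (float3x3)g_world));        // world-space normal
    o.color = g_lightColor * saturate(dot(n, g_lightDirWorld));  // lighting
    return o;
}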

PS: It's always good to study the theory first and then look at the implementation; things work out well when you understand what you want to do. And when it comes to learning a new API, you basically just bridge/extend your knowledge and plug in new parts of the pipeline, like geometry shaders and input semantics...

I hope this helps.
Take care.

Quote:
Original post by remigius
However, I read that DX10 will completely break backward compatibility with DX9 (introducing the geometry shader and numerous other features)
As a general concept, introducing new features never breaks backwards compatibility - it's removing old features that does that.

Quote:
Would anyone happen to have some good links on what is going to change with DX10 exactly, so I can read it firsthand? :)
Seek out the Meltdown 2005 slides, or the PDC stuff. They contain a decent overview of what's changing.

In short summary: learning DX9 with a focus on the programmable pipeline (instead of the FFP) will stand you in good stead for DX10. Names of functions will change, but the important stuff - the way the API gets used - will be familiar. (Learning an API shouldn't mean memorising the function names anyway.)

Quote:
Original post by superpig
As a general concept, introducing new features never breaks backwards compatibility - it's removing old features that does that.


For good measure, I'll submit that the accursed Java foreach loop almost did. In regard to platform changes, new features CAN render old code obsolete or dysfunctional when the behaviour of statements/API calls changes, failing to replicate the old behaviour under the new features/approaches.

This might well be the case with the introduction of geometry shaders, as this fundamentally alters the pipeline. It shouldn't have to break backward compatibility, but it just might. Just to expand your general concept :)

Quote:
In short summary: learning DX9 with a focus on the programmable pipeline (instead of the FFP) will stand you in good stead for DX10. Names of functions will change, but the important stuff - the way the API gets used - will be familiar.


I agree, but the current PP is still not as 'fleshed out' as the FFP. When doing some texture blending, I can use the texture stages in the FFP without any additional effort, but with the PP I'll have to write a new shader from scratch (or am I missing something here?). It's inconvenient, but it's not my main problem with solely focusing on the PP.

The FFP serves as a layer of abstraction between the application and the graphics pipeline. Setting lights, fog or texture states on a device will perform some documented behaviour, which is easy to understand and predictable. When you're working directly with shaders, you don't know what's going on inside the shader until you take a look at it, which makes it harder to understand, predict and re-use than the FFP.

It may be a purist's discussion, but I feel this is a downside to the pure-PP approach. This might be where SAS compliant fx files come in, but information on this is a bit sketchy from what I've seen... Or maybe I didn't look in the right place and someone also has a link for that? :)

With regard to the push for use of the PP - in all fairness it's not a new concept. We've had 4 (5?) years of being able to use both FF and PP, and it was always a matter of time before the PP became the dominant feature.

For absolute beginners to graphics theory I can see why the FF might be useful - you can, within reason, get a long way without knowing much about the inner workings of graphics algorithms. However, this is probably a bad thing - knowing how something works can be quite important (at the very least, a useful skill) and hiding it all away can introduce other problems.

My favourite example of this is the texture cascade / fixed function texture blending. I *hate* that with a passion. It effectively masks a simple tree of possibilities and, when written down, isn't that complicated... but I find it ridiculously slow (development time, not runtime!) to work with. Hardware support for it is fairly good, but it's far from perfect - so you still have to be careful using any obscure components in your code. OTOH you can usually express exactly what you want very concisely, very clearly, using a pixel shader. You write it how you want to, in a way that makes more sense than setting 10-20 render states and leaving the implementation details hidden.

Quote:
When you're working directly with shaders, you don't know what's going on inside the shader until you take a look at it, which makes it harder to understand, predict and re-use than the FFP.

Strictly my opinion of course, but I completely disagree with this statement. I got pointed to this article yesterday, which shows a way to build lighting models from fundamental component functions. Using the effect interface you can then wrap it all up with a single technique name: OrenNayarWithBlinnPhongSpecular. You call SetTechnique() in your code and you know what you're going to get. That strikes me as much more elegant than all the configuration and potential problems you get via the render state system.
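
In that spirit, here is a rough sketch of the composition idea in effect-file HLSL (all names are mine, and I'm using plain Lambert diffuse for brevity where the article's Oren-Nayar term would slot in):

float3 LambertDiffuse(float3 n, float3 l)
{
    return saturate(dot(n, l)).xxx;
}

float3 BlinnPhongSpecular(float3 n, float3 l, float3 v, float power)
{
    float3 h = normalize(l + v);                 // half vector
    return pow(saturate(dot(n, h)), power).xxx;
}

float4 PSMain(float3 n : TEXCOORD0, float3 l : TEXCOORD1, float3 v : TEXCOORD2) : COLOR0
{
    float3 N = normalize(n), L = normalize(l), V = normalize(v);
    // Compose the lighting model from the component functions above.
    return float4(LambertDiffuse(N, L) + BlinnPhongSpecular(N, L, V, 32.0f), 1.0f);
}

technique LambertWithBlinnPhongSpecular
{
    pass P0
    {
        PixelShader = compile ps_2_0 PSMain();
    }
}

Swapping the diffuse or specular term then just means composing a different pixel shader and exposing it under a new technique name.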


One thing to bear in mind, though, is that Microsoft are aware that a lot of people are fairly familiar with the FF method, and given the huge differences in D3D10 they're bound to include some stuff to help people migrate. It's simply not in their interest to kick everyone back to square one [smile]

Quote:
Since at least Vista will still ship with DX9, we can still use it, right?

You'll still be able to run applications compiled against previous versions of DirectX. Microsoft would be shooting themselves in the foot if they culled that much backwards compatibility. Whilst it might not be the preferred route, developing for the current 9.0c under WinXP should yield an executable that is compatible with current OSes as well as Vista. That could well be the best choice for some people.
Quote:
ZMan also said in the article that no DX10 hardware exists yet, so I reckon it's going to be 1 or 2 years before it gets really accepted and used, no?

Yeah, there's always a transition period. There's even DX9.L, the Vista-specific version of DirectX 9, to consider. As for the DX10 hardware - I'd expect some DX10 parts to be available at the time of release, or very soon after it. New OSes tend to drive new investment in hardware, so for the money alone it's good for IHVs to have something ready for us to spend our money on [grin]

hth
Jack

It is in the interest of both Microsoft and the IHVs to have hardware available when a major product that makes extensive use of it is released. I would be surprised if there WASN'T hardware available when Vista launches.

As far as the FFP goes, I think it boils down to what your background is. If you have a graphics theory background, then shaders are much more understandable because you can just type in the math equations you already know. If your background is not in graphics, then the FFP masks away a lot of details that you would normally have to worry about. The downside is that the FFP then becomes a black box, in which you don't really understand what's going on underneath.

For example, say you wanted to blend two textures together based on the alpha value of a material. To do this in a shader, you would just do:

vTextureColor0 * fMaterialAlpha + vTextureColor1 * (1.0f - fMaterialAlpha)

This is easy to see conceptually. But doing it using the FFP requires around 5 or 6 SetTextureStageState calls with a ton of different parameters to select the appropriate texture, channel, blending mode, and alpha mode.
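
For comparison, the same blend as a complete ps_2_0 pixel shader (a sketch with made-up names):

sampler2D Tex0 : register(s0);
sampler2D Tex1 : register(s1);
float fMaterialAlpha;       // set from the app, e.g. as a shader constant

float4 PSBlend(float2 uv : TEXCOORD0) : COLOR0
{
    float4 c0 = tex2D(Tex0, uv);
    float4 c1 = tex2D(Tex1, uv);
    // The whole cascade of stage states collapses into one readable line:
    return c0 * fMaterialAlpha + c1 * (1.0f - fMaterialAlpha);
}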

One of the reasons for the move to a programmable pipeline was that the moment you start to do anything even remotely complicated in FFP, you easily get mired down in a sea of renderstate settings that takes an expert to configure properly. Not only does the developer need to know the conceptual operation they want to do, but they also need to understand how to translate that into a ton of renderstate settings to get it working.

Seeing that DirectX was originally, and still is, intended mainly for graphics professionals, it makes sense for Microsoft to let developers specify their rendering techniques in the way that is most natural to them: graphics theory. The DirectX docs are meant to show professional developers how to implement the graphics techniques they know from a theoretical point of view in the DirectX API. They aren't really meant to teach graphics programming to beginners, although there are some sections of the SDK docs that do cover a few of the basics.

neneboricua
