Quat

d3d12 Features?

This topic is 1396 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

I haven't found any announcement for D3D12, but I am wondering what new features might come. I'm guessing a version of D3D12 will be used on the next Xbox, just like a version of D3D9 is used on the current Xbox. What new hardware features would you like to see in D3D12-capable hardware?

I haven't heard any info either, but I would personally like to see a programmable rasterizer stage. It may be a little far-fetched, but it would be cool to be able to programmatically control how a primitive is rasterized, which would open the door to special rendering modes that could be used for hardware-based irregular shadow mapping, paraboloid projections, etc. It would be fun to play with, but I doubt it would be practical given how heavily parallelized the current rasterizer is.

The other obvious wish is to have unordered access views accessible from the complete set of programmable stages. I can't think of a use off the top of my head, but it would make for a pretty flexible pipeline if you had scatter write access from the whole pipeline.

What are the features that you are looking for?

I was going to mention wants along the lines of more flexible/general write access and use of buffers, but after trying to be specific I realised I don't know the current limitations well enough to do so. However, I can state one specific wish:

InterlockedAdd() on float types as well as uint and int types.

Also:

HLSL recursion.

HLSL classes, with methods. You gave us programmable shaders. Thank you. Now let us program them properly please.


I'm pretty strongly against the idea of shader languages trying to become more like C++. IMO HLSL and Cg are perfect for their intended use, which is intensely-optimized kernels with lots of heavy math and zero side effects. Once you start throwing in OOP abstractions and supporting C++ features like operators and constructors, you really start to deviate from that, and performance can suffer. It's because of this that I have a negative opinion of CUDA, which can waste tons of performance just copying data from register to register so that it enforces proper C++ semantics. You can avoid it, of course, by carefully writing your code, but I would much rather have a language and compiler that are tailored towards optimal performance for common GPU workloads.

Recursion is a bit of a similar problem, because it requires GPU shader units to support a real stack, which can cause all kinds of problems with the way GPUs schedule threads and handle divergence.

I can second this opinion. It only takes a moment to remember the drastically different assembly output a minor change to a dynamic loop could produce in SM3.0 to realize that shaders still need too much fine-tuning for high-level abstractions to work well. I think eventually it will become feasible, but I don't see it happening any time soon.

classes with methods eh? You mean like this?


Right. My bad, I could have sworn I read somewhere, just today before making that post, that there was no such thing in HLSL. :/ Must've been an out-of-date page, which I can't find now. O.o Thanks for the heads up. (I will need to try some of that stuff out to understand how it works, though. For example, talking about interfaces and classes on the same page, and about how one has to initialize interfaces CPU-side, confuses me.)

[quote]
Also, "program them properly"? What on earth does that mean? Frankly, classes are a very poor method to describe shaders, which are, at their heart, functional operations.
[/quote]

Not being facetious, but what it usually means? If, for example, you wanted to code a shader that was a lot more involved than what can be adequately described with a few user-implemented functions and structures, you might wish you could use OOP. Some highly parallelisable physics simulations are a good example of that. One of my shaders right now has 7 defines, 6 structs, 2 cbuffers, 5 buffers, 1 static array, and 10 functions, and those numbers will increase by a lot as the shader develops. I don't know if you think that's a lot for a shader or not, but IMO it's messy, it will get messier, and OOP would help.

Imagine if you were to code something similar in C++, for example, and made all of these things globals. Sure, it could work, and sure, it could be fast, but from what I've seen most developers would strongly recommend OOP, even if only as good practice. This is essentially what I feel I've had to do in my shader due to (I think) not using OOP (but perhaps it's only due to inexperience with HLSL?). It basically feels like spaghetti code.

[quote]
So much so that classes ... are recommended to be avoided if you want decent performance.
[/quote]

Well, that's part of what I would like to see change then. I make no claims to know how that would be done, or whether it's even possible. This topic asked for what we would like to see from future versions of DirectX; I don't think it asked only for expert opinions. But it does puzzle me. No one says that OOP necessarily hurts CPU performance; in fact, OOP and performance are usually cited as orthogonal. I think I've been told before that it's due to the different architectures of the CPU and GPU, probably even in this thread, but understanding that is a bit beyond me at the moment.


[quote]
[quote]
Also, "program them properly"? What on earth does that mean? Frankly, classes are a very poor method to describe shaders, which are, at their heart, functional operations.
[/quote]
Not being facetious but, what it usually means?
[/quote]

Properly implies 'correct', so what you are saying is that OOP = correct programming, which frankly is wrong and a very bad mindset to get into.
(Also, OOP has nothing to do with classes; it's a paradigm, is all.)

It's perfectly possible to write clean code without using OOP, you just have to write... clean code.


[quote]
Imagine if you were to code something similar in C++ for example and made all of these things globals. Sure it could work, and sure it could be fast, but from what I've seen most developers would strongly recommend OOP, even if only for good practice. This is essentially what I feel I've had to do in my shader due to (I think) not using OOP (but perhaps its only due to inexperience with HLSL?). It basically feels like "spaghetti code".
[/quote]

While they might appear to be globals, the 'global' data is more like per-thread references to global data; it is nothing more than supplying a thread with a bunch of pointers in C++ and saying "and don't do it wrong". (And this only applies to UAVs you can write.)

If your code is spaghetti then that is your doing :)

[quote]
But it does puzzle me. No one says that OOP necessarily hurts CPU performance, in fact OOP and performance are usually cited as orthogonal?
[/quote]

No, OOP can certainly hurt performance; often the performance loss is acceptable because of the increase in programming speed. Amusingly, graphics programming is one area where giving up OOP in exchange for better performance is becoming more common.

So, in short:
- OOP is not 'correct programming'
- OOP is not right for all situations and shouldn't be applied as such.

The next Xbox will use Direct3D 12? I was guessing it would use Direct3D 11. Any info, guys?


[quote]
The next XBox will use Direct3D 12? I was guessing it will use Direct3D 11. Any info guys?
[/quote]


Considering the current consoles are still expected to be the norm for five more years, I don't see why D3D12 or 13 couldn't be possible, other than the fact that Nintendo and Sony have already announced that they are developing their new consoles, meaning they will be using current hardware and not future hardware.
