
The future of graphics APIs



#1 Rld_   Members   -  Reputation: 1520


Posted 15 July 2013 - 06:46 AM

Hey people,

 

I am in the (pre)process of making a 3D engine/framework to fool around in. This is mainly a learning exercise for me at the moment, but I do want to keep an eye on the future. Most likely this thread will be more of a "rubber ducking" thing rather than a "help me choose" thread, but I value well-argued opinions.

 

Now, one of the bigger choices I am facing is whether to use OpenGL or DirectX, and I am not entirely sure which one would be the better pick with the future in mind, while also learning new things.

 

The thing is, I made a couple of 3D frameworks for smaller assignments and used ("modern") OpenGL for them, so I already know a good deal about it. But that also means there is less left for me to learn there, in contrast to DirectX, which I was only able to scratch the surface of in the past with D3D9.

 

One of the things that I am wondering about however is the future of both of these APIs.

 

From sources I am not sure I am allowed to mention, I heard that NVidia is experimenting with ditching the graphics APIs as a whole and instead letting us program the graphics card directly. I am not sure how valid this is; it could also be something that was misinterpreted.

 

I also noticed that some companies (like Valve) have started switching to OpenGL for the sake of multiplatform compatibility, and I don't really see DirectX going multiplatform anytime soon (if ever).

 

Also, with the exception of the Xbox consoles, I think most consoles use OpenGL or something similar to it.

 

What do you think the future has in store for us? I did google a bit, but I can't really find any good articles on the direction of graphics APIs for the (near) future.

 

What would you choose to do if you were chasing a career in graphics programming? Expand your knowledge to a broader spectrum, or go more in depth with what you already know?




#2 Waterlimon   Crossbones+   -  Reputation: 2638


Posted 15 July 2013 - 09:12 AM

The APIs are getting closer and closer to how the hardware works, so there is not much room for differences at the API level. This means they will expose roughly the same functionality, just with different names for certain concepts and differences in how you use them.

 

That leaves pretty much two things to consider:

-Support across different platforms

-Architecture (ease of use)

 

The first is a plus for OpenGL, the second for DirectX, since the state machine approach of OpenGL is apparently broken and makes some things not as nice as in DirectX. I'm not sure whether that is heading in a better direction (it likely is), but for that reason you might want to at least try DirectX to see how things are implemented there.
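
To make the contrast concrete, here is a minimal C++ sketch, assuming a current GL context on one side and an existing ID3D11Device/ID3D11DeviceContext on the other; everything beyond the API calls themselves is illustrative:

// OpenGL: a global state machine -- later calls implicitly act on whatever is bound.
glBindBuffer(GL_ARRAY_BUFFER, vbo);                        // mutate global binding
glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW); // acts on the bound buffer
glEnable(GL_BLEND);                                        // more global toggles
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Direct3D 11: bake state into an immutable object once, then just set it.
D3D11_BLEND_DESC desc = {};
desc.RenderTarget[0].BlendEnable           = TRUE;
desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
ID3D11BlendState* blendState = nullptr;
device->CreateBlendState(&desc, &blendState);              // validated once, up front
context->OMSetBlendState(blendState, nullptr, 0xffffffff);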

 

There are also things like GPGPU computing, which you might want to use alongside the graphics API. There might be differences between DirectX and OpenGL in how the graphics API interfaces with whatever GPGPU API you use.


o3o


#3 MarkS   Prime Members   -  Reputation: 887


Posted 15 July 2013 - 09:20 AM

From sources I am not sure I am allowed to mention, I heard that NVidia is experimenting with ditching the graphics APIs as a whole and instead letting us program the graphics card directly. I am not sure how valid this is; it could also be something that was misinterpreted.


This I find questionable. There is so much going on behind the scenes that the only way for this to be possible would be for nVidia to write their own API for their hardware. I wouldn't put it past them, and they have already done something similar with CUDA, but I doubt it.

What would you choose to do if you were chasing a career in graphics programming? Expand your knowledge to a broader spectrum, or go more in depth with what you already know?


If I could go back and do this all over again, I would cut back the time spent learning OpenGL and add time learning DirectX. Both APIs have their merits, but I only know OpenGL, and that puts me at a distinct disadvantage. If I were to make a career out of this, I would want as broad a skill set as possible.

#4 Hodgman   Moderators   -  Reputation: 31818


Posted 15 July 2013 - 09:45 AM

From sources I am not sure I am allowed to mention, I heard that NVidia is experimenting with ditching the graphics APIs as a whole and instead letting us program the graphics card directly. I am not sure how valid this is; it could also be something that was misinterpreted.

This was the state of affairs before Glide/GL/DX/etc arrived on the scene. It was a nightmare for developers.
Sure, we'd all love to pare back GL/D3D a bit these days so they're a thinner abstraction, but going back to a driver free-for-all would be terrible.
 
Interpreting this statement another way though, your source could be implying that with the arrival of OpenCL/CUDA/DirectCompute, the GPU hardware is becoming more and more open in how it can be used, rather than forcing us to follow the traditional pipeline specified by GL/D3D. That sentiment is definitely true -- GPUs have certainly become "GPGPUs" or "compute devices" that are extremely flexible.
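
For a sense of what that flexibility looks like in practice, a minimal DirectCompute sketch in C++ -- assuming an existing device/context and a UAV wrapping a buffer of floats; compile-error handling is omitted:

// Bypassing the fixed graphics pipeline entirely: no vertices, no rasterizer.
const char* csSource =
    "RWStructuredBuffer<float> data : register(u0);         \n"
    "[numthreads(64, 1, 1)]                                 \n"
    "void main(uint3 id : SV_DispatchThreadID)              \n"
    "{                                                      \n"
    "    data[id.x] = data[id.x] * 2.0f; // any computation \n"
    "}                                                      \n";

ID3DBlob* blob = nullptr;
D3DCompile(csSource, strlen(csSource), nullptr, nullptr, nullptr,
           "main", "cs_5_0", 0, 0, &blob, nullptr);

ID3D11ComputeShader* cs = nullptr;
device->CreateComputeShader(blob->GetBufferPointer(), blob->GetBufferSize(),
                            nullptr, &cs);

context->CSSetShader(cs, nullptr, 0);
context->CSSetUnorderedAccessViews(0, 1, &uav, nullptr); // uav wraps the buffer
context->Dispatch(elementCount / 64, 1, 1);              // launch the thread grid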
 

I also noticed that some companies (like Valve) have started switching to OpenGL for the sake of multiplatform compatibility, and I don't really see DirectX going multiplatform anytime soon (if ever).

IMHO GL's "portability" is a poisoned chalice. Every different driver contains a different OpenGL implementation. Even on a single OS, like Windows 7, you've got half a dozen (or more) different OpenGL implementations that you will need to test your game on, and possibly make changes for. Khronos makes the spec, but they don't actively enforce 100% compliance with it.

Doing professional QA for a GL engine on Windows/Linux (or any engine on Mobile/Web) is a complete nightmare. You need a tonne of different devices/GPUs and a lot of time.

I made a personal decision (other opinions will vary!) to simply choose D3D on Windows because I found the maintenance costs to be lower, due to its behaviour being dictated (and tested/validated/enforced) by Microsoft.
On MacOS, the situation is similar with Apple being a benevolent dictator ruling over GL, ensuring it's implemented properly, so GL isn't as flaky there.
On Linux, there is no benevolent dictator. D3D9 can be mostly emulated in Wine, and GL support is entirely up to the driver.
On mobile, GLES support varies wildly from device to device. You'll really want to test your code on every single different device... :(
On web, your user's browser might have WebGL in some capacity, or flash, probably, maybe. But web has always been a compatibility/porting nightmare.
On consoles, you've probably only got the choice of using some proprietary API that's different to all of the above.
So personally, I choose to use multiple APIs -- the most natural/stable one for each platform: D3D9, D3D11, GL3, GLES2, etc.
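
A sketch of how that multi-API approach is typically structured -- a thin interface with one backend per platform. The interface and all names here are invented for illustration:

// Game code talks only to this interface; each backend wraps the
// most natural/stable API for its platform.
struct MeshHandle   { int id; };
struct ShaderHandle { int id; };

class IRenderDevice {
public:
    virtual ~IRenderDevice() = default;
    virtual MeshHandle   CreateMesh(const void* verts, size_t bytes) = 0;
    virtual ShaderHandle CreateShader(const char* source) = 0;
    virtual void         Draw(MeshHandle mesh, ShaderHandle shader) = 0;
    virtual void         Present() = 0;
};

// One implementation per target; everything above this line never changes.
class D3D11Device : public IRenderDevice { /* wraps ID3D11Device/Context */ };
class GL3Device   : public IRenderDevice { /* wraps core-profile GL3 */ };
class GLES2Device : public IRenderDevice { /* wraps GLES2 on mobile */ };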
 

Also, with the exception of the Xbox consoles, I think most consoles use OpenGL or something similar to it.

No -- unless by "similar to OpenGL" you mean that they have a plain-C interface, or unless you'd also say that D3D is similar to OpenGL (which it kinda is) :D
The exception is the PS3, which provides its own native graphics API (which everyone should be using), and also provides a crappy wrapper around it called PSGL, which makes it look similar to GL.
 

What would you choose to do if you were chasing a career in graphics programming? Expand your knowledge to a broader spectrum, or go more in depth with what you already know?

Both :P
I jumped back and forth between D3D and GL at different points in time, and the more you do it, the more they both feel the same. Graphics programming should eventually be about what you make with the APIs, not the irrelevant details of how you use them. I'd much prefer to hire a graphics programmer who can implement different types of lighting systems (forward/deferred/tiled/clustered/etc), post-processing effects, materials, special effects, and so on, on any single API at all, rather than someone who knows all the APIs inside out but can't demonstrate practical use of them.
The knowledge of how to achieve some effect on top of an API is portable to all APIs of the same class (e.g. D3D9 vs GL2, D3D11 vs GL4, etc), and it is a much broader and deeper skill set. Learning the nuts and bolts of a particular API's interface is more rigid, structured learning, which any graphics programmer can do in time. Once you've learned one API, picking up another is pretty straightforward.

Edited by Hodgman, 15 July 2013 - 10:05 AM.


#5 Rld_   Members   -  Reputation: 1520


Posted 15 July 2013 - 10:40 AM


The first is a plus for OpenGL, the second for DirectX, since the state machine approach of OpenGL is apparently broken and makes some things not as nice as in DirectX. I'm not sure whether that is heading in a better direction (it likely is), but for that reason you might want to at least try DirectX to see how things are implemented there.

 

I have heard counter-arguments, though, that OpenGL in its current state is more robust. Can you clarify?

 


No -- unless by "similar to OpenGL" you mean that they have a plain-C interface, or unless you'd also say that D3D is similar to OpenGL (which it kinda is)
The exception is the PS3, which provides its own native graphics API (which everyone should be using), and also provides a crappy wrapper around it called PSGL, which makes it look similar to GL.

 

Well, I have actually only programmed on the PSP and PS3, and in general it felt pretty much the same as OpenGL; that's what I meant :)

 


Interpreting this statement another way though, your source could be implying that with the arrival of OpenCL/CUDA/DirectCompute, the GPU hardware is becoming more and more open in how it can be used, rather than forcing us to follow the traditional pipeline specified by GL/D3D. That sentiment is definitely true -- GPUs have certainly become "GPGPUs" or "compute devices" that are extremely flexible.

 

I think that is most likely the case indeed. I'm not from the age of driver-free programming, but I can imagine there will always be a layer in between, or else a commercial entity will most likely create one.



#6 Ravyne   GDNet+   -  Reputation: 8155


Posted 15 July 2013 - 10:50 AM

Both major APIs are converging towards the hardware, and the hardware from competing vendors is converging towards similar logical programming models, if not physical architecture. Beyond that, I think the trend will be to push programmability down into more stages of the pipeline -- truly programmable tessellation seems a shoo-in at some point, and lots of smart people have said programmable texture sampling would be nice (although that gets rather hard to implement in hardware). Currently, the most modern GPUs are very optimized for compute but still clearly graphics-first; I think in the future GPUs will flip this around and compute will really be the first-class citizen, but they'll keep the necessary fixed-function components (texture sampling, ROPs, etc.) to maintain leading graphics performance.

 

The only really divergent thing we know is going to happen is that nVidia is going to start putting ARM CPU cores in their GPUs. That'll probably have a lot of interesting applications that people have yet to think of.



#7 mhagain   Crossbones+   -  Reputation: 8277


Posted 15 July 2013 - 11:20 AM

Up to a year ago I would have unreservedly recommended D3D -- it's a cleaner API, drivers are more robust, tools and support are better, and behaviour is more consistent across different hardware/driver combos. Nowadays I'm not so sure. I would have hoped that MS had learned their lesson from Vista -- locking D3D versions to Windows versions is not a good idea -- but it seems that they haven't. None of that takes away from the fact that D3D9 and 11 are still best-in-class for their generations, and even with the new XBox being 11.2, it's a safe bet that the majority of titles will still target vanilla 11 in the PC space for at least the next few years.

 

OpenGL's portability is not as big a deal as it's often made out to be. Even if you don't give a damn about portability, you can still hit ~95% of the PC target market with D3D (assuming the latest Steam survey is representative). Unless you're going for a very specific, very specialized target audience where you know for a fact that the figures are different, don't even bother worrying about it.

 

The really big deal with OpenGL is that it allows new hardware features to be accessed without also requiring an OS upgrade. You can also drop in handling for them without having to rewrite the rest of your renderer. That of course needs to be balanced against the driver situation (a safe bet is to just copy whatever id Software titles do, as you can be certain that at least is well supported) and the mess of vendor-specific extensions (which can be seriously detrimental to hardware portability).
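
As a concrete illustration of that mechanism, a minimal Windows-flavoured C++ sketch that checks the extension string and loads the entry point at runtime. The extension and function names are real (GL_ARB_texture_storage / glTexStorage2D); the surrounding logic is illustrative:

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

// New hardware feature picked up from the driver -- no OS upgrade needed.
typedef void (APIENTRY *PFNGLTEXSTORAGE2D)(GLenum, GLsizei, GLenum, GLsizei, GLsizei);
static PFNGLTEXSTORAGE2D pglTexStorage2D = nullptr;

static bool HasExtension(const char* name)
{
    // Pre-GL3-style query; a production check should match whole tokens,
    // since strstr() can false-positive on extension-name prefixes.
    const char* exts = (const char*)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != nullptr;
}

static void InitTextureStorage()
{
    if (HasExtension("GL_ARB_texture_storage"))
        pglTexStorage2D = (PFNGLTEXSTORAGE2D)wglGetProcAddress("glTexStorage2D");
    // If pglTexStorage2D is still null, keep using the plain glTexImage2D path.
}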

 

Longer term it is always most beneficial to know both, of course.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#8 Krypt0n   Crossbones+   -  Reputation: 2672


Posted 15 July 2013 - 11:26 AM

Firstly, that's not a rant, but my point of view :)

 

 

From sources I am not sure I am allowed to mention, I heard that NVidia is experimenting with ditching the graphics APIs as a whole and instead letting us program the graphics card directly. I am not sure how valid this is; it could also be something that was misinterpreted.

This was the state of affairs before Glide/GL/DX/etc arrived on the scene. It was a nightmare for developers.

 

Secondly, I see it the other way around: it was unified, nice, and easy to develop for before the various APIs arrived. You wrote your software in C, and everything, from game logic to sound mixing to rasterizing triangles and blending pixels, was just one unified code base.

You could write a triangle rasterizer in combination with a voxel (heightmap) tracer, like Comanche or Outcast. All you cared about was one unified code base.

 

Then those APIs came up, which started a real nightmare from a programmer's point of view (no matter whether a software or HW API, aka command buffer). It was the only way to get access to the speed of rasterization HW; those first versions, like S3's, were not even faster than CPUs, but they ran in parallel, so you could do other work and in total it was faster. With Glide, GL, and DX you then completely lost your freedom to the devil.

Back then, you worried about how many triangles or vertices or sprites you could render; now it has turned into how many API calls you can make! Imagine how absurd that actually is: with low-level access on consoles with 10-year-old HW, you can manage 10 times the draw calls of the newest PC HW, just due to API limitations. And those are not getting better, but worse, every time. DX was already about twice as slow as OpenGL on Windows XP (that's what NVidia claimed in some presentation). Vista+DX10 was supposed to speed things up by introducing state objects, but DX9 and DX10 games showed that DX9 actually ran fast, for the simple reason that DX9 games sparsely updated just the few needed states, while a state object always changes all the states, even if 90% of it is equal to the previous one (drivers don't do state guarding; it's not their job). With DX11 compute you got another slowdown: vendors recommend not switching more than twice a frame between compute and 3D rendering, as this is a pipeline reconfiguration -- they have to flush, halt, reconfigure, and restart the pipe. (It's less bad nowadays, but at DX11's launch that was the case.)

 

We can now handle more lights than we can handle objects on PC (without instancing), and if you look at games like Crysis from 2007 and compare them to current games, you will see the same number of draw calls, just due to API limitations.
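
For reference, a D3D11-flavoured C++ sketch of the bottleneck being described, plus the usual instancing mitigation; buffer and scene setup are omitted and the names are invented:

// N objects => N constant-buffer updates + N draw calls. The GPU could
// rasterize all of this easily; the cost is CPU-side driver overhead per call.
for (const Object& obj : scene)
{
    context->UpdateSubresource(cbPerObject, 0, nullptr, &obj.worldMatrix, 0, 0);
    context->DrawIndexed(obj.indexCount, obj.firstIndex, 0);
}

// The classic mitigation: one call draws N copies, with per-instance data
// streamed from a second vertex buffer instead of N constant-buffer updates.
context->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);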

 

 

GPU vendors try to sell APIs as the solution to big incompatibility problems, but that's really just marketing. Look at CPUs: you can run 30-year-old code on current CPUs, and you can recompile your C/C++ code for x86, ARM, MIPS, or PowerPC and mostly 'just run' it.

 

Programming GPUs 'without an API' doesn't mean you write your command buffer at the HW level; that's not the point -- that was the bad start APIs got off to. Writing for GPUs would mean that you create your C++ code (or whatever language you prefer that compiles), compile it to an intermediate language (e.g. LLVM opcode), and execute it on some part of the GPU. That part would start 'jobs' that do what you intended to do.

Similar to the culling code DICE runs on compute, but for everything. You could transform all objects with simple C++ code, apply skinning, run water simulation. You could draw triangles or trace heightmap voxels; if you wanted, you could use path tracing or simply draw 2D sprites for particles, without any API calls from your desktop application to the GPU!

 

Nowadays even NVidia and ATI are starting to be unhappy with the 'API', which actually rather means they want other APIs, but MS is just not updating as frequently as back then. The industry just does not care; most games would still run nicely on DX9, and the current consoles ARE DX9-level HW.

 

So anyone who wants a truly future-proof API should write 99% of the engine in OpenCL/CUDA. (I recommend the Intel SDK; you can profile, debug, etc. in Visual Studio, just like normal C++.) You can push 100k draw calls @ 60Hz if you like, you can keep DCT-compressed textures on the GPU and decompress tiles of them on demand if you want. You can bake lightmaps on demand if you like (like Quake 1 did with surface caches), you can implement some sub-d, you can do occlusion culling while drawing, on the GPU, with zero latency, and you can filter Ptex textures without hacking around to get proper filtering on borders.
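
A minimal sketch of that "everything is a compute job" style: a vertex-transform pass expressed as an OpenCL kernel rather than a fixed vertex-shader stage. It assumes an already-created context, queue, and buffers; error checking is omitted:

#include <CL/cl.h>

// Row-major 4x4 transform of a vertex buffer, as a plain compute job.
static const char* src =
    "__kernel void transform_verts(__global const float4* in,     \n"
    "                              __global float4* out,          \n"
    "                              __global const float16* world) \n"
    "{                                                            \n"
    "    size_t i = get_global_id(0);                             \n"
    "    float4 v = in[i];                                        \n"
    "    float16 m = *world;                                      \n"
    "    out[i] = (float4)(dot(m.s0123, v), dot(m.s4567, v),      \n"
    "                      dot(m.s89ab, v), dot(m.scdef, v));     \n"
    "}                                                            \n";

void RunTransform(cl_context ctx, cl_command_queue queue, cl_device_id dev,
                  cl_mem inBuf, cl_mem outBuf, cl_mem worldBuf, size_t count)
{
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel kern = clCreateKernel(prog, "transform_verts", nullptr);
    clSetKernelArg(kern, 0, sizeof(cl_mem), &inBuf);
    clSetKernelArg(kern, 1, sizeof(cl_mem), &outBuf);
    clSetKernelArg(kern, 2, sizeof(cl_mem), &worldBuf);
    clEnqueueNDRangeKernel(queue, kern, 1, nullptr, &count, nullptr,
                           0, nullptr, nullptr);   // one work-item per vertex
}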

 

 

 

And let's not even ask "isn't it slow?". Vertex shaders ran slower than fixed-function TnL, and proper pixel shaders (see GeForce FX) were also ridiculously slow compared to 'hard-wired pixel pipelines'. You could fit 3540 Voodoo Graphics chips into your GTX 680 transistor budget, rasterizing 10+ billion triangles/s, 10x what the GTX 680 can do. Of course, that's naive and useless math, just like comparing a pure GPGPU pipeline with some hard-wired triangle rasterizers.

 

 



#9 Chris_F   Members   -  Reputation: 2461


Posted 15 July 2013 - 03:05 PM

I'm really not happy with either API. :(

 

Direct3D 10 and 11 have been pretty solid (more so than OpenGL IMO), but I really don't trust where Microsoft has been heading, and it seems worse to me every year (e.g. not making the latest graphics API available on all their popular operating systems). It has always been a problem for me that Direct3D is tied to a single company and OS, and now it seems it may be increasingly tied to specific versions of that OS (artificial scarcity, anyone?).

 

The fact that OpenGL is, well, open and available on a wide variety of platforms is great. That's a HUGE advantage over Direct3D. Unfortunately, I think the design of OpenGL is shit, to be honest, and then you combine that with the poor quality of the various drivers. I would love to see a new version of OpenGL (OGL 5 maybe?) that isn't based on a new set of hardware features, but is instead a ground-up redesign of the API, with no focus on making it compatible with previous versions and instead a focus on making things work well. Maybe they could start by copying Direct3D 11 and then improve from there? :D I can dream.

 

What are you to do? Those are really your only two options if you want accelerated 3D today, and that is a damn shame, I think. What I would really love to see is for companies like AMD and Nvidia to open up their hardware specs and drivers. Maybe then it would be easier for competitive drivers or even competitive APIs to emerge. Maybe there will soon be a massive shift in CPU architecture: instead of a handful of heavy-weight cores you'll have hundreds of light-weight cores. It would basically be like a GPU, only more freely programmable (no more drivers!), and at that point you could implement OpenGL or an alternate graphics API entirely in software. Again, I can dream.



#10 phantom   Moderators   -  Reputation: 7563


Posted 15 July 2013 - 03:44 PM

I would love to see a new version of OpenGL (OGL 5 maybe?) that isn't based on a new set of hardware features, but is instead a ground-up redesign of the API, with no focus on making it compatible with previous versions and instead a focus on making things work well. Maybe they could start by copying Direct3D 11 and then improve from there? :D I can dream.


They tried that; it was known as Longs Peak, and it got scrapped in favour of GL3.0, to much uproar.
Both NV and AMD were firmly behind it, but it got killed about 6 months before GL3.0's release for reasons that were never really explained.

(Many people blamed CAD companies at the time, but I heard from an IHV employee who was working on the LP spec that it wasn't them -- my personal theory is that Apple and/or Blizzard put the boot in, as Apple probably had no desire to redo their API and Blizzard wanted cross-platform coverage with the latest features... but that's just my opinion.)

#11 swiftcoder   Senior Moderators   -  Reputation: 10364


Posted 15 July 2013 - 07:09 PM


What do you think the future has in store for us?

OpenGL ES.

 

The install base (smart phones, tablets, embedded devices and WebGL) already dwarfs the install base of either DirectX or desktop GL. Plus, through judicious use of libraries, it runs anywhere DirectX does.

 

Does it offer all the latest and greatest features? Not yet, but it's catching up. And how many games actually require DX11, anyway?


Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#12 Chris_F   Members   -  Reputation: 2461


Posted 15 July 2013 - 07:12 PM


And how many games actually require DX11, anyway?

 

A lot of the best looking ones. :D



#13 swiftcoder   Senior Moderators   -  Reputation: 10364


Posted 15 July 2013 - 07:22 PM

A lot of the best looking ones. :D

They can use DX11 features, sure. But how many games actually require DX11?

 

The only one I am currently aware of is Crysis 3.




#14 MJP   Moderators   -  Reputation: 11761


Posted 16 July 2013 - 12:07 AM

 

A lot of the best looking ones. :D

They can use DX11 features, sure. But how many games actually require DX11?

 

The only one I am currently aware of is Crysis 3.

 

Crysis 3 doesn't actually require DX11-level functionality; it can run on DX10 hardware.

However, I suspect that once PS4/XB1 games start coming out you'll see a lot of multiplat games with PC versions that require DX11-level functionality, since that's what's available on the consoles. That's exactly what happened with PS3/XB360 and SM3.0 hardware.

Anyway, I think the best improvement you could get from a GPU-oriented API would be to get rid of the "bind to shader registers" model. Modern GPUs don't work like that anymore, and are capable of being much more generic in terms of accessing memory. It would be cool to have access to this flexibility on PC hardware, and also to eliminate some of the ridiculous driver overhead.
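
For context, a short C++ sketch of the register-binding model being criticized, as seen from the application side; the calls are standard D3D11 and the variable names are invented. The closing comment describes the hoped-for bindless model, not any PC API that existed at the time:

// Every resource the shader reads must first be routed to a numbered
// slot by the CPU, for every draw that changes materials:
context->PSSetShaderResources(0, 1, &albedoSRV);  // -> register(t0) in HLSL
context->PSSetShaderResources(1, 1, &normalSRV);  // -> register(t1)
context->PSSetSamplers(0, 1, &linearSampler);     // -> register(s0)
context->PSSetConstantBuffers(0, 1, &materialCB); // -> register(b0)
context->DrawIndexed(indexCount, 0, 0);

// On hardware that can fetch from arbitrary memory, a "bindless" model would
// let shaders hold resource handles directly (e.g. in a buffer), eliminating
// these per-draw binding calls and much of the driver overhead behind them.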


Edited by MJP, 16 July 2013 - 12:10 AM.


#15 Ingenu   Members   -  Reputation: 930


Posted 16 July 2013 - 02:53 AM

I did write a simple design for a graphics API on Beyond3D: http://forum.beyond3d.com/showthread.php?t=63565

Unfortunately no-one has publicly commented on it (it's nowhere near finished, though) :(

 

I really want something way simpler and way more empowering than what we have. Anyone who knows how GPUs work can see that OpenGL/D3D are major obstacles to using them efficiently. Unfortunately some of that is due to OS stability/security :(

(Also, if you have worked on current-gen consoles, you probably know of smaller APIs.)


-* So many things to do, so little time to spend. *-

#16 mhagain   Crossbones+   -  Reputation: 8277


Posted 16 July 2013 - 03:10 PM

Well, one of the main reasons APIs exist in the first place is the huge variety of hardware. A common API that abstracts those differences is absolutely essential; otherwise you end up writing a different rendering back end for every piece of graphics hardware you want to support, and hoping it doesn't explode when vendors bring out a new generation. API overhead is a tradeoff you make in order to get that, and -- of course -- consoles don't have this problem because all XBox 360s (for example) have the same hardware. That's just not an apples-to-apples comparison.

 

Of course NVIDIA would like you to code directly to their hardware; that would give them a potential performance and quality advantage over their competitors. It's their own benefit that they're really thinking of here. AMD made similar noises a couple of years back too.

 

Regarding the potential for quality -- all current APIs are already there, and have been for some time. John Carmack noted this back in April 2000: http://floodyberry.com/carmack/johnc_plan_2000.html#d20000429

 

 

Mark Peercy of SGI has shown, quite surprisingly, that all Renderman surface shaders can be decomposed into multi-pass graphics operations if two extensions are provided over basic OpenGL: the existing pixel texture extension, which allows dependent texture lookups (matrox already supports a form of this, and most vendors will over the next year), and signed, floating point colors through the graphics pipeline. It also makes heavy use of the existing, but rarely optimized, copyTexSubImage2D functionality for temporaries.

 

The truth is that everything else is just gravy.  You get cleaner, more efficient, higher performing ways of doing things, but contrary to what marketing departments would like you to think, there's absolutely nothing you can do in D3D11 that you can't also do in D3D9 (or even OpenGL ARB assembly shaders).  You will be doing it differently, for sure, and it may not be viable for real-time with the older APIs, but the capability remains there.
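
As one concrete example of that capability argument, a C++ sketch of doing "compute" on D3D9-class hardware via the classic render-to-texture ping-pong trick. The device methods are real D3D9; the kernel pixel shader and the DrawFullScreenQuad helper are assumed to exist:

// Before compute shaders: arbitrary GPU computation as a full-screen pass.
// The pixel shader is the "kernel"; each output pixel plays the role of a thread.
void RunKernelPass(IDirect3DDevice9* device, IDirect3DTexture9* texIn,
                   IDirect3DTexture9* texOut, IDirect3DPixelShader9* kernelPS)
{
    IDirect3DSurface9* target = nullptr;
    texOut->GetSurfaceLevel(0, &target);

    device->SetRenderTarget(0, target);   // output buffer = render target
    device->SetTexture(0, texIn);         // input buffer  = texture
    device->SetPixelShader(kernelPS);     // the "compute kernel"
    DrawFullScreenQuad(device);           // assumed helper drawing two triangles

    target->Release();
}
// Swap texIn/texOut and call again for iterative (ping-pong) algorithms --
// clumsier and often slower than real compute, but the capability is there.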




#17 Titan.   Members   -  Reputation: 159


Posted 17 July 2013 - 03:37 AM


The truth is that everything else is just gravy.  You get cleaner, more efficient, higher performing ways of doing things, but contrary to what marketing departments would like you to think, there's absolutely nothing you can do in D3D11 that you can't also do in D3D9 (or even OpenGL ARB assembly shaders).  You will be doing it differently, for sure, and it may not be viable for real-time with the older APIs, but the capability remains there.

That's a ridiculous statement. Of course you can "do everything with older APIs, just not in real time". Just like you could compute everything on the CPU, write the output into a texture, and render that; or you could even "compute" everything with pen and paper and then push buttons to display the output -- if you don't make any mistakes, you'll get the same output.

That's the point of new APIs: to make things easier and faster, and faster means new algorithms become viable for real-time simulation.

You think you are thinking outside the box while you are reinventing the wheel.


Edited by Titan., 17 July 2013 - 03:39 AM.


#18 cowsarenotevil   Crossbones+   -  Reputation: 2102


Posted 17 July 2013 - 04:08 AM

 


The truth is that everything else is just gravy.  You get cleaner, more efficient, higher performing ways of doing things, but contrary to what marketing departments would like you to think, there's absolutely nothing you can do in D3D11 that you can't also do in D3D9 (or even OpenGL ARB assembly shaders).  You will be doing it differently, for sure, and it may not be viable for real-time with the older APIs, but the capability remains there.

That's a ridiculous statement. Of course you can "do everything with older APIs, just not in real time". Just like you could compute everything on the CPU, write the output into a texture, and render that; or you could even "compute" everything with pen and paper and then push buttons to display the output -- if you don't make any mistakes, you'll get the same output.

That's the point of new APIs: to make things easier and faster, and faster means new algorithms become viable for real-time simulation.

You think you are thinking outside the box while you are reinventing the wheel.

 

 

On the contrary, I think he makes a point that has practical value, whereas you're taking what he says to a literal, logical extreme for pretty much no reason and not offering any useful information at all.

The truth is, most "D3D11" features can be implemented (or at least approximated) in a perfectly practical way in D3D9 and the like, whereas computing everything on the CPU probably isn't practical at all.


-~-The Cow of Darkness-~-

#19 Titan.   Members   -  Reputation: 159


Posted 17 July 2013 - 05:12 AM

You want to keep it practical? Well, which functionality could you really implement on DX9 that wouldn't completely ruin performance? Tessellation with geometry shaders? And how would you do instancing?

And if this evolution didn't bring anything "meaningful", which one really did, since the first programmable graphics pipeline?

 

btw, I do think computing everything on the CPU would be practical (if not efficient); that's somewhat what this thread is about, the future of the APIs: give more freedom and let us program the hardware directly... getting closer to how CPU programming works.



#20 mhagain   Crossbones+   -  Reputation: 8277


Posted 17 July 2013 - 05:44 PM

 


The truth is that everything else is just gravy.  You get cleaner, more efficient, higher performing ways of doing things, but contrary to what marketing departments would like you to think, there's absolutely nothing you can do in D3D11 that you can't also do in D3D9 (or even OpenGL ARB assembly shaders).  You will be doing it differently, for sure, and it may not be viable for real-time with the older APIs, but the capability remains there.

That's a ridiculous statement. Of course you can "do everything with older APIs, just not in real time". Just like you could compute everything on the CPU, write the output into a texture, and render that; or you could even "compute" everything with pen and paper and then push buttons to display the output -- if you don't make any mistakes, you'll get the same output.

That's the point of new APIs: to make things easier and faster, and faster means new algorithms become viable for real-time simulation.

You think you are thinking outside the box while you are reinventing the wheel.

 

 

I don't believe it's a ridiculous statement, but I do understand where you're coming from.

 

The statement was made with reference to claims, often made, that D3D11 can somehow offer better image quality, higher-quality effects, etc. Those claims are baloney; as soon as you have the ability to do arbitrary computations on the GPU, and sufficient precision for storage of temporaries as required, you can effectively do anything. Yes, new APIs offer better ways of doing it via new capabilities (actually they don't; new hardware offers the better way, and the API just exposes that way to the programmer); yes, you can likewise do the very same in software, but saying that just reinforces the point, and I believe it's a valid point. API evolution is no longer a major determining factor in image quality; the capability of an API (and the underlying hardware) to achieve that level of image quality at reasonable framerates is where the majority of evolution has been focussed recently.

 

I think we're actually saying the same thing but coming from different directions here.






