

phantom

Member Since 15 Dec 2001
Online Last Active Today, 06:37 AM

#5235905 Is OpenGL enough or should I also support DirectX?

Posted by phantom on 20 June 2015 - 01:39 PM

"Will be" means the future. Microsoft constantly obsoletes its own APIs as well, so really it is a poor decision for anyone who is not part of a big company to bother with using it.

You will be able to use Vulkan on Windows systems, so really there will be no point to DirectX any more.


Everyone obsoletes APIs, however D3D sticks around - DX9 was here for a long time, DX11 was announced 7 years ago, and I would expect DX12 to have a pretty long lifetime as well as the primary Windows graphics interface.

Yes, you will be able to use Vulkan, just no one knows when.
D3D12 will be here in just over a month.
Vulkan is still MIA when it comes to anything public and was last reported as 'in flux'.

So, you can start developing with an API which works (D3D11), or an API which has docs, beta drivers, and good performance reports (D3D12), or you can wait around for Vulkan, which is being developed by the same group who mismanaged OpenGL for over a decade.

Personally I'd go with D3D, because say what you like about MS, they get shit done, and with the last 3 graphics APIs they have done well (D3D10 was a good, if doomed, API) and by all accounts D3D12 is another sound result.
Khronos and the ARB (who went on to form the Khronos group for graphics), on the other hand, have historically not managed very well and are showing worrying signs of lapsing back into the old habits which plagued OpenGL.


#5235903 Is OpenGL enough or should I also support DirectX?

Posted by phantom on 20 June 2015 - 01:29 PM

Slight thread clean-up; let's keep it on topic, not call other people names, and not drag unrelated topics into this, thanks.

I might start handing out warnings to people otherwise...


#5235631 Is OpenGL enough or should I also support DirectX?

Posted by phantom on 19 June 2015 - 02:16 AM

You can use it with GL too; it's only a container format and the data payload is the same whatever graphics API you are using.


Yes, true; the downside is that outside of Windows you'll need a third-party library to load .dds files, or it's a case of rolling your own. It's a feature-rich format which I wouldn't fancy writing a parser for myself. Best to stick to Windows for that IMHO and leverage that particular advantage...


Yeah, but if you are using OpenGL then chances are you shouldn't be using the D3DX libs anyway, even on Windows, if only for consistency's sake - and as others have pointed out there are a few libs out there which already read .dds structured files (also the header isn't that hard to parse for the common stuff).

Ultimately, however, I'd tend towards a custom header in front of the payload with just the details you need in it, which can include a custom 'type' ID; once you have parsed the DDS header during the data pipeline you can translate the various blobs into a simpler, game-ready format. However that is overkill when starting out, which is why I didn't originally suggest it.
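To make the custom-header idea concrete, here is a minimal sketch of what such an engine-specific header could look like. All field names and the magic value are invented for illustration; the point is simply a fixed-size block in front of the game-ready payload so the runtime never has to parse DDS itself.

```cpp
#include <cstdint>

// Hypothetical game-ready texture header, written by the data pipeline
// after it has parsed the source DDS file. Every name here is an
// assumption for illustration, not a real format.
struct GameTexHeader {
    uint32_t magic;        // file identifier, e.g. 0x47544558 ('GTEX')
    uint32_t typeId;       // custom 'type' ID assigned by the pipeline
    uint32_t width;
    uint32_t height;
    uint32_t mipCount;
    uint32_t format;       // engine-internal format enum, translated from DDS
    uint64_t payloadBytes; // size of the raw texel data that follows
};

// At load time the game reads this fixed-size header and then streams
// payloadBytes straight to the graphics API - no DDS parsing needed.
```

The win is that the expensive, flexible parsing happens once offline, and the runtime loader becomes a single read plus an upload.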


#5235568 Is OpenGL enough or should I also support DirectX?

Posted by phantom on 18 June 2015 - 04:06 PM

Not just that, but in DirectX there is the .dds format, which can hold texture arrays, texture cubes, mipmaps, and all sorts of other stuff in one convenient container format. For DirectX, definitely investigate it...


You can use it with GL too; it's only a container format and the data payload is the same whatever graphics API you are using.

Really, .dds should be your first stop when loading game-ready textures; it can handle everything your graphics card can, after all.
PNG and TGA are source images and shouldn't be anywhere near your final game.


#5233887 Skydome

Posted by phantom on 09 June 2015 - 01:53 PM

Also, to eliminate any intersection with scene geometry, I don't even worry about the actual geometry/size of the skydome, I just disable depth writing when I draw it, so that everything that's drawn after the skydome will overlap it by default, even though the skydome is really just almost like a hat the camera is wearing around with a 1-unit radius.


You also burn fill rate/pixel shader instructions you don't need to, which depending on the scene could be quite a waste.

Example: you are in a town; you can see a section of sky, but the majority of your view is covered by buildings - most of your sky drawing was a waste.

Sky box/sphere/plane/whatever last.
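A minimal sketch of the "sky last" submission order, with invented names (`DrawCall`, `orderForSubmission` are assumptions for illustration). The renderer submits opaque scene geometry first, then the sky with depth writes disabled (`glDepthMask(GL_FALSE)` in GL terms), so sky pixels are only shaded where nothing else was drawn:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// One queued draw; isSky marks the sky box/sphere/plane draw(s).
struct DrawCall {
    std::string name;
    bool isSky;
};

// Reorder a frame's draw calls so every sky draw comes after the scene,
// preserving the relative order within each group. Before the sky draws
// are issued, the renderer would disable depth writes.
std::vector<DrawCall> orderForSubmission(std::vector<DrawCall> calls) {
    std::stable_partition(calls.begin(), calls.end(),
                          [](const DrawCall& c) { return !c.isSky; });
    return calls;
}
```

With early depth testing, any sky pixel already covered by scene geometry is rejected before its pixel shader runs, which is exactly the fill-rate saving described above.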


#5232581 skinned mesh triangle count

Posted by phantom on 03 June 2015 - 10:53 AM

all tests used fixed function pure device non-indexed hw vertex blending - no shaders, no effects. i've only measured a 2-3% difference in speed between all skinning methods, with non-indexed fixed function pure device hw vertex blending the slowest, and indexed HLSL the fastest. so HLSL vs HW vertex blending might increase the triangle budget for a given LOD by less than 5% at best. IE i can have an additional 750 tris on my 15K tri mesh and run at the same speed w/ HLSL - ooh and ahh! i'm SO impressed! yeah right.


You do realise that 'fixed function' hardware hasn't existed in GPUs for a good 10 or more years, right?
EVERYTHING you do on a GPU since around 2002 has been using shaders under the hood; in fact that small performance difference could be down to you using indexed triangles instead of non-indexed triangles, which is a bit of a no-brainer.

In short your sarcastic "I'm so impressed" is nothing more than a statement against your own misunderstanding of how to do things...

And you continue to ask bad questions; with no details about target platforms or performance levels, this whole question and the performance metrics you are waffling about are meaningless... the fact you got anyone to reply to this thread at all is amazing given the total lack of information.


#5229902 win32 cpu render bottleneck

Posted by phantom on 19 May 2015 - 02:46 PM

Yeah, I think we are done here; the OP is clearly of no mind to learn anything and I, for one, am bored of the rambling run on posts.

Xoxos; if you can come up with a coherent post/topic which gets to the point without the rambling then feel free to try and start this again. However the first sign of the attitude you've shown here and that thread will also be locked.

To be clear, that covers: rambling posts; things about your life we don't need to know or care about; unrelated observations (VST programming); claims that people are 'keeping knowledge/information from [me]'; and referring to people as 'kids' in the rude and dismissive manner you have shown thus far.

Also, learn some patience; 3 hours is no time for a reply, and posting 3 or 4 times in a row (off topic, no less) will also not get you anywhere.


#5227193 Unreal Engine UMG System

Posted by phantom on 04 May 2015 - 12:29 PM

False equivalence; YouTube, Facebook, etc. have complicated algorithms and a lot of data so that they can scan uploaded files to check whether they infringe on copyright or not.

UE4 lacks both the code and the data to try and make that call, so UE4 cannot tell whether you made something or not.


#5226814 Unreal Engine UMG System

Posted by phantom on 02 May 2015 - 03:07 AM

UE4 doesn't know you didn't make it, how could it?
If it isn't importing then the format can't be right; check the log output as that might have more information.


#5225265 Steam's compensated modding policy

Posted by phantom on 24 April 2015 - 12:26 PM

I have a great deal of respect towards Valve but they are a greedy bunch.
Not only does a digital copy of a game cost more than a physical copy but only 25% goes to the mod authors?


As Josh Petrie says, the percentages are set by the developer; I believe Valve only take a 10% cut of the pie which is a good 20% less than the cut they tend to take for a full game.

The publisher/developer cut can apparently be reduced to zero if someone so chooses.


#5224909 Unity 5 or Unreal engine 4

Posted by phantom on 22 April 2015 - 02:47 PM

However, given that Unity has been around for some time and UE4 has only been publicly released for a year now, the stats are also going to be skewed in favour of Unity.

There isn't a game which could be made in one that couldn't be made in the other; how easy it would be is another matter, same with the various feature sets you get, but if you can make <game type> with one then it's a certainty you can make <game type> with the other.


#5223631 Why high level languages are slow

Posted by phantom on 16 April 2015 - 04:21 AM

But the language informs the implementation - the way the C# language and its associated runtime function requires compromises which, when you write idiomatic code that does not fight the language, result in a slower level of performance. These are language design choices directly impacting things.

This is run time performance pure and simple and, unless you are going to start waving Pro-Language Flags around, no reasonable person can argue otherwise because the nature of the language removes the control of layout and placement.

You can argue good vs bad development all you like; in the context of this discussion it doesn't matter - it matters even less when the good-vs-bad argument is always defensive and falls back to the trope of "I've seen bad C++ code and good C# code which contradicts this, so it must be wrong", because it is not wrong.

More to the point, the continued refrain of "use the best tool for the job" isn't required either; neither the author nor the people in this thread have argued otherwise, so the constant repetition of this line feels like a defensive 'must not upset anyone' whine more than anything else.

This thread isn't required.
The discussion here isn't productive.
Any honest user of a language would have looked at this for what it is - a comparison in a specific situation - nodded and got on with their lives.

Instead we have two pages of people trying to defend a language from points which were never made and conclusions which were never drawn, to... what? Feel good about using it? Feel like 'all languages are equal'? Not upset some precious flower who might feel bad because their language of choice has a flaw which can't be bypassed without some excessive thinking?

Ugh...


#5223509 Why high level languages are slow

Posted by phantom on 15 April 2015 - 01:49 PM

And in practice, if you use an array of reference types and pre-allocate all the objects at the same time, they tend to end up sequential in heap memory anyway - so that does mitigate some of the cache performance issues.


But do they?
Do they ALWAYS?
Do you have any guarantee of this?
What hoops do you have to jump through to make sure? (Pre-allocate and initialise everything. Never replace. Always copy in. Never grow. I'm guessing at least those constraints, to maybe get this.)

Which is the point of the article/blog; you are already fighting the language and the GC to try and maybe get a certain layout, perhaps.

C++; vector<foo>(size) - job done.

Now, for many, many problems that isn't an issue, but it is important to know that it could be an issue, that you have no guarantees, and that even if you do get your layout you could well be hurting anyway because your structures might be bigger than you think (I couldn't tell you the memory layout of a .Net object, so I couldn't say 100%) and come with more access cost (reference vs direct) and other things which, in the situation discussed, will make things slow.

(There was an article, I've lost the link to it now, which talked about efficiency, big-O, and vector vs list and their performance impacts. The long and the short of it was that for a million items, for sorted insertion followed by iteration, vector was ALWAYS faster than list by a pretty large margin. Pre-allocation of elements was done. A few other tricks were pulled. But as the size increased, vector continued to outperform list every time. Cache is king.)
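The shape of that benchmark can be sketched as follows (function names are mine, not from the article; this illustrates the access patterns being compared, not the original code). The vector pays an O(n) element shift per insert but touches contiguous memory; the list moves nothing but walks nodes, and every hop is a potential cache miss:

```cpp
#include <algorithm>
#include <list>
#include <vector>

// Sorted insertion into a contiguous vector: binary search for the slot,
// then shift the tail down. Cache-friendly despite the O(n) move.
void insertSortedVec(std::vector<int>& v, int x) {
    v.insert(std::upper_bound(v.begin(), v.end(), x), x);
}

// Sorted insertion into a linked list: linear pointer walk to find the
// slot. No elements move, but each node hop may be a cache miss.
void insertSortedList(std::list<int>& l, int x) {
    auto it = l.begin();
    while (it != l.end() && *it <= x) ++it;
    l.insert(it, x);
}
```

Both produce the same sorted sequence; the performance gap the article reported comes entirely from the memory access pattern.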
 
 

I simply think that people who try to paint the entire language (or the entire swath of "high level" languages, whatever that means) as "slow" because "it doesn't do this specific thing as fast as this other language" are rather... misinformed. Or at the very least trying to start an argument. But hey, we now have this thread, so I guess they succeeded (well, this has been more a discussion than an argument)


The painting, however, was done with context, in a particular, set situation of a memory-bound operation. In that situation C#/.Net is slow.

This is a fact. It simply has too much working against it.

And that's ok, because anyone reading it with an ounce of 'best tool for the job' will read that, nod, and then continue to use it if it is the best tool for the job.

It might look like I'm arguing C++'s corner vs C# like some rabid fanboy but I'm not.
I think C# and the .Net family of languages are great. If I have a job I think suits them then I'll reach for them right away; hell the only reason I don't reach for F# is because I've not had enough use of it in anger to get a complete handle on the language.

But if I'm doing high performance memory traffic heavy code then you'd better believe I'm reaching for C++ because it simply gives you better control and in that situation is faster.
(OK, to be fair, if the work can be heavily parallelised then I'm probably reaching for some form of compute shader and a GPU but you get my point.)

Trying to argue that this isn't "true" or isn't fair because your language of choice happens to be a problem in the situation pointed out... well... *shrugs*


#5223431 Why high level languages are slow

Posted by phantom on 15 April 2015 - 08:47 AM

I don't recall him saying he hates anything; he was just pointing out why languages like C# and Java, with their design choices, are slower than a language like C++, which pushes more control back to the developer.
More to the point, he points out that for many people this either doesn't matter or isn't a problem.

And there is no getting around the fact he is right.

C#, with its heap-for-all-the-things and various other choices, will cause you cache misses.
GCs can cause horrible problems with cache lines and unpredictable runtime issues.

He also calls out the .Net Native stuff and points out that while it will help on the instruction side, it won't help with memory layout, and memory latency is a horrible problem which isn't getting better.

I also take issue with the 'it would take you a few extra years to write faster code in C++' stat you just pulled out of thin air; more so given the cases he points out (cache issues, memory layout) where C++ naturally lets you write faster code much, much more easily. ("My code is going slow; hey, how about instead of a vector<Foo*> I use a vector<Foo> instead..." - good luck doing that as easily in .Net.)

End of the day, languages have their pros and cons; I see nothing 'wrong' in what he said, nor did he conclude that 'high level languages are bad', just that you should be aware of these things and why they are as they are.
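The `vector<Foo*>` vs `vector<Foo>` point above can be made concrete; `Foo` here is a stand-in type of my own, not from the thread. The pointer version gives each element its own heap allocation at a potentially scattered address; the value version keeps every element in one contiguous block, exactly `sizeof(Foo)` apart:

```cpp
#include <cstddef>
#include <vector>

// Stand-in element type for illustration.
struct Foo { float x, y, z; };

// vector<Foo*>: iterating chases a pointer per element, each landing at a
// separately allocated (and possibly scattered) heap address.
// vector<Foo>:  iterating walks one contiguous block linearly - the
// one-line layout change C++ gives you for free.
std::ptrdiff_t elementStrideBytes() {
    std::vector<Foo> contiguous(2);
    return reinterpret_cast<const char*>(&contiguous[1]) -
           reinterpret_cast<const char*>(&contiguous[0]);
}
```

That contiguity is guaranteed by the standard for std::vector, which is exactly the guarantee the GC-managed heap does not give you.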


#5223099 Vulkan is Next-Gen OpenGL

Posted by phantom on 14 April 2015 - 01:56 AM

I'm willing to bet we see the first implementations/drivers for Windows around Siggraph time, as that tends to be when OpenGL stuff gets vomited out and these are the same people.

The good news is I would expect both NV and AMD to have working drivers at the same time, as Vulkan is based on Mantle, so AMD have it easy and NV should just be throwing resources at it to make it work.



