

Member Since 15 Dec 2001

#5229902 win32 cpu render bottleneck

Posted by phantom on 19 May 2015 - 02:46 PM

Yeah, I think we are done here; the OP is clearly of no mind to learn anything and I, for one, am bored of the rambling, run-on posts.

Xoxos: if you can come up with a coherent post/topic which gets to the point without the rambling then feel free to try to start this again. However, at the first sign of the attitude you've shown here, that thread will also be locked.

To be clear, that means: rambling posts; things about your life we don't need to know or care about; unrelated observations (VST programming); claims that people are 'keeping knowledge/information from [me]'; and referring to people as 'kids' in the rude, dismissive manner you have shown thus far.

Also, learn some patience; three hours is no time to wait for a reply, and posting three or four times in a row (off topic, no less) will not get you anywhere either.

#5227193 Unreal Engine UMG System

Posted by phantom on 04 May 2015 - 12:29 PM

False equivalence; YouTube, Facebook, etc. have complicated algorithms and a lot of data, which is what lets them scan uploaded files to check whether they infringe copyright.

UE4 lacks both the code and the data to make that call, so it cannot tell whether you made something or not.

#5226814 Unreal Engine UMG System

Posted by phantom on 02 May 2015 - 03:07 AM

UE4 doesn't know you didn't make it, how could it?
If it isn't importing then the format can't be right; check the log output as that might have more information.

#5225265 Steam's compensated modding policy

Posted by phantom on 24 April 2015 - 12:26 PM

I have a great deal of respect towards Valve but they are a greedy bunch.
Not only does a digital copy of a game cost more than a physical copy but only 25% goes to the mod authors?

As Josh Petrie says, the percentages are set by the developer; I believe Valve only take a 10% cut of the pie which is a good 20% less than the cut they tend to take for a full game.

The publisher/developer cut can apparently be reduced to zero if someone so chooses.

#5224909 Unity 5 or Unreal engine 4

Posted by phantom on 22 April 2015 - 02:47 PM

However, given that Unity has been around for some time and UE4 has only been publicly released for a year now, the stats are also going to be skewed in favour of Unity.

There isn't a game which can be made in one that couldn't be made in the other; how easy it would be is another matter, as are the various feature sets you get, but if you can make <game type> with one then it's a certainty you can make <game type> with the other.

#5223631 Why high level languages are slow

Posted by phantom on 16 April 2015 - 04:21 AM

But the language informs the implementation - the way the C# language and its associated runtime function requires compromises which, when you write idiomatic code that does not fight the language, result in slower performance. These are language design choices directly impacting things.

This is run-time performance, pure and simple, and, unless you are going to start waving pro-language flags around, no reasonable person can argue otherwise, because the nature of the language removes control of layout and placement.

You can argue good vs bad development all you like; in the context of this discussion it doesn't matter - it matters even less when the good-vs-bad argument is always defensive and falls back to the trope of "I've seen bad C++ code and good C# code which contradicts this, so it must be wrong", because it is not wrong.

More to the point, the continued refrain of "use the best tool for the job" isn't required either; neither the author nor the people in this thread have argued otherwise, so the constant repetition of this line feels like a defensive 'must not upset anyone' whine more than anything else.

This thread isn't required.
The discussion here isn't productive.
Any honest user of a language would have looked at this for what it is - a comparison in a specific situation - nodded and got on with their lives.

Instead we have two pages of people trying to defend a language from points which were never made and conclusions which were never drawn, to... what? To feel good about using it? To feel like 'all languages are equal'? To not upset some precious flower who might feel bad because their language of choice has a flaw which can't be bypassed without some serious thought?


#5223509 Why high level languages are slow

Posted by phantom on 15 April 2015 - 01:49 PM

And in practice, if you use an array of reference types and pre-allocate all the objects at the same time, they tend to end up sequential in heap memory anyway - so that does mitigate some of the cache performance issues.

But do they?
Do they ALWAYS?
Do you have any guarantee for this?
What hoops do you have to jump through to make sure? (Pre-allocate and initialise everything. Never replace. Always copy in. Never grow. I'm guessing at least those constraints to maybe get this.)

Which is the point of the article/blog; you are already fighting the language and the GC to try and maybe get a certain layout, perhaps.

C++; vector<foo>(size) - job done.

Now, for many, many problems that isn't an issue, but it is important to know that it could be an issue, that you have no guarantees, and that even if you do get your layout you could still be hurting, because your structures might be bigger than you think (I couldn't tell you the memory layout of a .Net object, so I couldn't say 100%) and come with more access cost (reference vs direct) and other things which, in the situation discussed, will make things slow.

(There was an article, I've lost the link to it now, which talked about efficiency, big-O, and vector vs list and their performance impacts. The long and short of it was that for a million items, for sorted insertion followed by iteration, vector was ALWAYS faster than list by a pretty large margin. Pre-allocation of elements was done, and a few other tricks were pulled, but as the list size increased vector continued to outperform list the whole time. Cache is king.)

I simply think that people who try to paint the entire language (or the entire swath of "high level" languages, whatever that means) as "slow" because "it doesn't do this specific thing as fast as this other language" are rather... misinformed. Or at the very least trying to start an argument. But hey, we now have this thread, so I guess they succeeded (well, this has been more a discussion than an argument)

The painting, however, was done with context: a particular, defined situation of a memory-bound operation. In that situation C#/.Net is slow.

This is a fact. It simply has too much working against it.

And that's ok, because anyone reading it with an ounce of 'best tool for the job' will read that, nod, and then continue to use it if it is the best tool for the job.

It might look like I'm arguing C++'s corner vs C# like some rabid fanboy but I'm not.
I think C# and the .Net family of languages are great. If I have a job I think suits them then I'll reach for them right away; hell the only reason I don't reach for F# is because I've not had enough use of it in anger to get a complete handle on the language.

But if I'm doing high performance memory traffic heavy code then you'd better believe I'm reaching for C++ because it simply gives you better control and in that situation is faster.
(OK, to be fair, if the work can be heavily parallelised then I'm probably reaching for some form of compute shader and a GPU but you get my point.)

Trying to argue that this isn't "true", or isn't fair, because your language of choice happens to have a problem in the situation pointed out... well... *shrugs*

#5223431 Why high level languages are slow

Posted by phantom on 15 April 2015 - 08:47 AM

I don't recall him saying he hates anything; he was just pointing out why languages like C# and Java, with their design choices, are slower than a language like C++, which pushes more control back to the developer.
More to the point, he points out that many people either don't care or find it isn't a problem.

And there is no getting around the fact he is right.

C# with its heap-for-all-the-things and various other features will cause you cache misses.
GCs can cause horrible problems with cache lines and unpredictable runtime issues.

Also, he calls out the .Net Native stuff and points out that while it'll help with instructions it won't help with the layout of memory, and memory latency is a horrible problem which isn't getting better.

I also take issue with the 'it would take you a few extra years to write faster code in C++' bullstat you just pulled out of thin air; more so given that in the case he points out (cache issues, memory layout) C++ naturally lets you write faster code much more easily. ("My code is going slow; hey, how about instead of a vector<Foo*> I use a vector<Foo>..." - good luck doing that as easily in .Net.)

End of the day, languages have their pros and cons; I see nothing 'wrong' in what he said, nor did he conclude that 'high level languages are bad' - just that you should be aware of these things and why they are as they are.

#5223099 Vulkan is Next-Gen OpenGL

Posted by phantom on 14 April 2015 - 01:56 AM

I'm willing to bet we see the first implementations/drivers for Windows around Siggraph time, as that tends to be when OpenGL stuff gets vomited out and these are the same people.

The good news is I would expect both NV and AMD to have working drivers at the same time; as Vulkan is Mantle, AMD have it easy, and NV should just be throwing resources at it to make it work.

#5222992 Alternatives to global variables / passing down references through deep "...

Posted by phantom on 13 April 2015 - 01:15 PM

One thing worth pointing out is that, in practice, with a well-designed system I've never once encountered the 'deep call tree' problem with regards to passing things around. That's not to say I haven't seen people get 'bothered' by having to pass things down two levels and instead butcher designs to add a g_Renderer because they didn't want to do it... but frankly I put that down to lazy thinking.

I would argue that Cozzie has hit the nail on the head with the mentioning of the SRP with regards to what the classes are doing.

Take the 'player' example given. It controls the thing it renders, when it renders, and the sound it makes, and it even has some idea about 'shooting' - maybe fine for a simple application, but things like this very quickly become an interconnected nightmare.

Many things can be solved by decoupling and abstraction.
Take rendering, for example: instead of the 'player' telling the renderer 'draw this', a 'renderable' is created as things are created and handed off to the renderer. The higher-level logic never talks to the renderer again, but it might touch the handle to the renderable to flip internal state which the renderer can act upon.

Sound is a similar thing; you might have a handle to a sound, but instead of instructing the sound system directly to 'play a sound', a message might be broadcast via the message system to say 'play this handle'. This decouples you from knowing whether the sound system even exists AND allows other things to 'hook in' and listen for events.

Shooting is another area to decouple; instead of the player 'shooting', the player might well have a weapon to which the 'shoot' logic is handed off. That in turn could have some effect handles to which it sends a message to say 'hey, do this'.

At best the 'player' in these situations acts as a container and coordinator; it might not even have the internal logic to do things like listen to input - instead an 'input controller' might be assigned, which could be local input, an AI input, or even a remote player driving the same logic and actions.

The point I'm trying to drive at is that, in many cases, when people think they 'need everything everywhere' or 'have to pass it down loads of layers', the truth of the matter is that the design is often at fault. Whether that is down to a lack of experience is another matter, but across a number of (AAA) games and a few engines which have avoided globals in favour of locally passed-in parameters, I've never seen either of those situations play out with a good design. (And in one case the replacement of a 'global' system with one passed down locally even simplified the codebase and made life easier for all concerned.)

#5221583 Looking for step my step guide for visual studio

Posted by phantom on 06 April 2015 - 03:31 AM

I think you need to look at existing engines again; UE4 doesn't mandate that (heck, if it did how do you think you'd have AI opponents in games?) and I dare say Unity and CryEngine don't either - all of the major engines would let you assign an AI component to things you've spawned and let them control the movement, you just have to hook it up.

Just because a bunch of games have taken the select-unit-and-click method doesn't mean engines don't have the ability to do what you want; hell, I dare say if you had access to the source for SC2 or Grey Goo you could modify them to do just what you want.

#5221271 What are your opinions on DX12/Vulkan/Mantle?

Posted by phantom on 04 April 2015 - 02:38 AM

But then I don't have a 1:1 mapping with old and new API.

And this is a good thing; you can't take a system you designed to get around the problems of an old API and map it to a new API where the problems no longer exist and expect it to be optimal.

Sure, you can make Vulkan/DX12 work like DX11/GL3/4-era APIs, but you'll end up reproducing driver work and generally make things less than optimal. (Ran into this problem a while back: we designed a new renderer layer based too closely on DX11, so the PS4 version was a nightmare. If we had waited a bit to see the new hardware, chances are it would have leaned more towards the PS4 way of doing things, with DX11 made to emulate that in some way.)

Right now it is like you've worked out a great way to cook burgers, and you can cook burgers really well, but now people want beer and you are trying to work out how to apply your grill to serve them beer without doing any extra work.

This is also a great argument against abstraction layers which map closely to one API, as they don't react well to massive API changes.

Honestly, if you can, I'd take some time to strip your abstraction layers and rebuild so that the Vulkan/DX12 way of doing things is your primary abstraction and make OpenGL work with that instead. If nothing else the old way of doing things IS going away, even if it takes a little while, so getting on top of it sooner rather than later is probably a good move.

If you can't do that, because you have a game to ship or something, then I'd forget DX12 support for now, finish what you need to finish and then go back and rebuild.

End of the day, an abstraction based around current DX11/OpenGL limitations and functionality will be broken and slow going forward; you are probably better off redesigning with the new method in mind.

#5219552 Google Play Alpha Testing

Posted by phantom on 27 March 2015 - 03:30 AM

OK, so just to keep this topic updated for any future reference:

Got someone to create a private Google Community with at least my account in it.
Having joined that community, I added it to the Alpha Testers group.
Upon visiting the URL you are given to become an alpha tester, I was able to see a page letting me join the test.

At this point I'm waiting for Something to happen so that I can in fact download the app - apparently this can take a few hours? (wut?)

The account in the group remains the same account the app was published with - the fact that it doesn't "just work" remains frankly insane.

#5219266 Why didn't somebody tell me?

Posted by phantom on 26 March 2015 - 03:45 AM

Better than trying to throw Kosh to the wind...

#5219138 Google Play Alpha Testing

Posted by phantom on 25 March 2015 - 01:46 PM

Yeah, it looks like having some kind of group is the only way; I'd rant about how dumb it is but frankly I'm tired of moaning about how bad Android is...