phantom

Member Since 15 Dec 2001

#5027394 [C#/C++]Multithreading

Posted by phantom on 30 January 2013 - 06:39 PM

Yeah, that thing is a pain... our old build system was based on Python, which basically meant all the build 'setup' was single threaded (that stage worked out a dependency graph; quick on small asset counts, but as the assets increased a comedy wait time crept in before building even started), and while the external tools ran outside Python, the way it was designed meant you had to spin up 100s of threads just to launch them and wait for them to finish... (the GIL is released while waiting on an external process.)
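
(For illustration, a toy C++ sketch of that launch-and-wait pattern - one task per external tool, all waited on concurrently; std::system stands in for the real tool launcher and the job list is made up:)

#include <cstdlib>
#include <future>
#include <string>
#include <vector>

// Kick off each external tool invocation on its own thread, then block
// until they have all finished - the pattern described above.
int main()
{
    std::vector<std::string> jobs = { "tool a.dat", "tool b.dat", "tool c.dat" };
    std::vector<std::future<int>> results;
    for (const auto& cmd : jobs)
        results.push_back(std::async(std::launch::async,
            [cmd] { return std::system(cmd.c_str()); }));
    for (auto& r : results)
        r.get(); // wait for each tool to complete
    return 0;
}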

 

Fortunately this became a big enough problem that a C#/.Net re-write was allowed \o/

#5027356 [C#/C++]Multithreading

Posted by phantom on 30 January 2013 - 04:48 PM

He is correct; multi-threading is not simple... or, to put it better: "multi-threading so that things don't risk crashing or other problems, while still maintaining performance, is not simple".

This is the usual case of gamers, having heard about something, demanding it without really thinking about what's involved in a move like this when you are trying to build on an ever-expanding ten-year-old game which was originally designed and built back when multi-core systems were not the norm.

Could they do it?
Yes, given time all things are possible... but it'll take a lot of time and a lot of pain (and probably a few dodgy patches along the way) to do so.
Based on the comments in the thread about in-game lag, things have certainly improved server-side over the years (I seem to recall the early lag they talked about from when I played around launch; jump into a system with a fair few people around the gate and things started to stall a bit...)


#5027190 Branch or two different shaders for "help" objects.

Posted by phantom on 30 January 2013 - 09:13 AM

Do not use dynamic branching (if statement) in your shader, it will only slow the retail version down.

This is not strictly correct as it depends on hardware generation and the coherency of the branching and the code being branched around.

You have to think in terms of groups of threads.
If all the threads in a group take the 'if' or 'else' branch then your overhead will be minimal as the other code will not be run.
If some of the threads take one branch and some take the other then you'll end up executing both paths and the results/execution lanes get masked by the hardware to give you the correct result.

HOWEVER this can still result in a speed up if used correctly, even on DX9 hardware, if you take into account how threads are batched.
For example, on a console game we had a system which used a mesh with a texture on it to define a road; the road surface had bump maps etc. on it and used a reasonably costly shader.
The texture itself was made up of three areas:
- solid colour where alpha = 1
- a wavy boundary area (pixel alpha blended between 0 and 1)
- solid colour where alpha = 0

Because of this some groups of pixels had all alpha 0, some all alpha 1, and some a mixture.
As the boundary was a thin section and the 'else' path was one instruction, introducing a branch on the diffuse alpha value let large numbers of pixels skip all the complicated processing, and the overall effect was a large speed-up.

Now, while in the OP's case I wouldn't use a branch, there is no reason to avoid them completely - you just have to think about how they are going to branch and whether the overhead is acceptable.
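
To make the batching point concrete, here's a toy C++ simulation of it (the group size, instruction costs and alpha layout are made-up numbers, not the real console data): pixels are processed in fixed-size groups, a group only pays for a path if at least one of its pixels takes it, and only the thin boundary groups pay for both.

#include <cstdio>
#include <vector>

int main()
{
    const int groupSize = 64;      // pixels per hardware batch (illustrative)
    const int expensiveCost = 100; // instruction count of the full road shader
    const int cheapCost = 1;       // the one-instruction 'else' path

    // Fake road alpha: solid 1.0, a thin blended boundary, then solid 0.0.
    std::vector<float> alpha(4096, 1.0f);
    for (int i = 2000; i < 2100; ++i) alpha[i] = 0.5f;
    for (int i = 2100; i < 4096; ++i) alpha[i] = 0.0f;

    long withBranch = 0, withoutBranch = 0;
    for (size_t base = 0; base < alpha.size(); base += groupSize)
    {
        bool anyExpensive = false, anyCheap = false;
        for (int i = 0; i < groupSize; ++i)
        {
            if (alpha[base + i] > 0.0f) anyExpensive = true;
            else                        anyCheap = true;
        }
        // A divergent group runs both paths (lanes masked); a coherent
        // group runs only the path it took.
        if (anyExpensive) withBranch += expensiveCost;
        if (anyCheap)     withBranch += cheapCost;
        withoutBranch += expensiveCost; // no branch: every pixel runs it all
    }
    std::printf("with branch: %ld, without: %ld\n", withBranch, withoutBranch);
    return 0;
}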


#5027005 C++ how to avoid calling destructor

Posted by phantom on 29 January 2013 - 06:05 PM

Personally, I always use pointers in container templates. This avoids unnecessary memory copies and constructor/destructor calls.
I'm a bit of a memory nazi though, so I prefer doing the new and delete myself.


Unfortunately your paranoia over copying means that you are pretty much dropping performance all over the place, as between the dereferencing and the cache misses you are giving the CPU a hard time.

Memory is SLOW; if you can, you ALWAYS want to keep things as close together as possible when processing, to take advantage of pre-fetching and caching of data. Stalling the CPU for many cycles while it wanders off to find the data you've just asked for is not a productive use of anyone's time.

Edit: Part of me thinks that everyone should be made to program on a system with limited cache and an in-order architecture, just so they can appreciate how many sins an OoO processor covers up...
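
As a minimal sketch of the difference (Particle and the sums are illustrative): iterating values stored contiguously is one linear sweep the pre-fetcher can stay ahead of, while a container of pointers turns every access into a chase to a separate heap allocation:

#include <memory>
#include <vector>

struct Particle { float x, y, z, energy; };

// Contiguous storage: elements sit next to each other, so iteration is
// a linear sweep over memory - cache- and pre-fetch-friendly.
float totalEnergyByValue(const std::vector<Particle>& ps)
{
    float total = 0.0f;
    for (const Particle& p : ps) total += p.energy;
    return total;
}

// Pointer storage: each element is its own heap allocation, so every
// access dereferences to a potentially far-away cache line.
float totalEnergyByPointer(const std::vector<std::unique_ptr<Particle>>& ps)
{
    float total = 0.0f;
    for (const auto& p : ps) total += p->energy;
    return total;
}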


#5021604 Is Clustered Forward Shading worth implementing?

Posted by phantom on 14 January 2013 - 05:17 PM

Not really; deferred might have solved some problems with regards to lights, but it brought with it a whole host of others: memory bandwidth, AA issues, problems integrating different BRDFs, transparency, and other things which required various hoops to be jumped through.

Going forward, hybrid solutions are likely to become the norm - such as AMD's Leo demo, which mixes deferred aspects with a forward rendering pass for the real geometry and gets around pretty much all of those problems (but brings its own compromises).

The point is: all rendering has trade-offs, and you'll find plenty of "advanced" engines which use various rendering methods - hell, the last game I worked on was all forward lit, using baked lighting and SH light probes, because it was the only way we were going to hit 60fps on the consoles.

Edit: also, a good, advanced engine WON'T force you to take one rendering path; it will let the game code decide (the engine powering the aforementioned game can support deferred as well as forward, at least...)


#5021136 VS2010 problem with multiple declaration in for loop

Posted by phantom on 13 January 2013 - 11:32 AM

You could always use the standard library algorithms to side-step the mistakes :)

 

// Erase-remove idiom: remove_if shifts the matching elements to the
// end of the container, then erase chops them off.
m_bindings.erase(
    std::remove_if(std::begin(m_bindings), std::end(m_bindings),
                   [&binding](const Binding& b) {
                       return (b.device == binding.device) && (b.diKey == binding.diKey);
                   }),
    std::end(m_bindings));

#5021114 GLSL: Unique shader and Data Corruption!

Posted by phantom on 13 January 2013 - 09:58 AM

GPUs tend to compute both sides of an 'if' statement (true and false cases) then after it has done this it picks the side that matches the 'if' statement outcome.

 

This depends very much on the GPU in question; anything from the last few years will have true branching support and will only execute the code required by the outcome of the conditional compare for the executing work group.

 

That means that if all the threads in your work group evaluate to 'true' it will ONLY take the true path; if 'false', then only the false path.

Now, if some threads are true and some are false then it will execute both paths, BUT the GPU will mask out the threads which aren't required to execute, so they have no impact on the running of the other threads.

#5020584 Does Microsoft purposely slow down OpenGL?

Posted by phantom on 11 January 2013 - 08:18 PM

But today there is no question that DirectX 11 is the clear winner. This is why even Sony® (competitor of Microsoft®) uses this API for PlayStation 4 (with just a few modifications).

 

Sony is using DirectX 11 for the PS4? Is that rumour or unreleased insider knowledge? If the latter, don't risk breaking any NDAs.

 

The only reports I can find on the matter are this rumor (from eight months ago), which was later corrected by another rumor to say the PS4 will be running OpenGL natively.

 

It isn't and it isn't.

 

I think I can say that without the NDA Ninjas breaking my door down anyway...

#5020581 Does Microsoft purposely slow down OpenGL?

Posted by phantom on 11 January 2013 - 08:12 PM

(in reply to some other post, not quoting since the forum screws up my posts anyway and it's a pain to fix every time)
I don't quite see where the info that Microsoft purposely slowed down OpenGL for Vista came from; I was under the impression that the OpenGL->D3D wrapper they added in Vista was only supposed to replace the insanely slow OpenGL software renderer they had in older Windows versions. (So if anything they made OpenGL without proper drivers faster.)

 

There were (and I'm working from memory here, I admit) a couple of aspects to it. One was real, with regards to how OpenGL frame buffers would compose with the D3D-driven desktop and windows; however, that one did get sorted out once MS gave a little on it, with some pressure from the IHVs.

 

The other is, as you say, the apparent OpenGL->D3D layering, which many took to mean (without bothering to look into it, just looking at a slide) that OpenGL would sit on D3D; what it REALLY meant was that MS was planning to provide an OpenGL 1.4 implementation based on D3D (I'm not sure they ever did in the end, at that.)

(At the time this was going down I was using OpenGL; I heard the above, did a 'ffs...', and then, once I looked at the details, realised the panic was rubbish in this regard...)

 

With regards to MS 'slowing down' OpenGL: many, many years ago they were on the ARB (pre-2003, I think?) so they had the opportunity to do so with regards to the spec, but they didn't have to. Back then the ARB was an infighting mess - a running conflict between the interests of ATi, NVidia, Intel, SGI & 3DLabs - so getting anything done was a nightmare, which is why nothing got done; GL2.0 was the first casualty in that war and Longs Peak was the most recent, even after they all started to get along...


#5020402 Does Microsoft purposely slow down OpenGL?

Posted by phantom on 11 January 2013 - 12:20 PM

It looks like GL 4.3 & ES 3.0 are finally heading in the right direction, and MS went off the rails with the (again!) decision to make DX 11.1 Windows 8 only.
OpenGL|ES is certainly much saner than OpenGL; in the mobile world it's a good thing indeed.

OpenGL I still consider 'broken' while the bind-to-edit model still exists - it's just too easy to introduce bugs and unexpected behaviour (take VAOs: bind a VAO, then bind another buffer, and BAM! your original VAO has changed unexpectedly). Don't get me wrong, OpenGL is improving and needs to, because without a strong API to counter it D3D will continue to slow down and coast a bit, but bind-to-edit is just so weak compared to immutable objects and the explicit edit model of D3D.
(Which I consider annoying, as there are at least two features of GL (multi-draw indirect and AMD's PRT extension) which I'd like to play with, but every time I think about using GL it makes me sad :( )
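
To illustrate that VAO trap (a sketch, not a complete program - it assumes a live GL context and loaded entry points, and the handles are made up):

GLuint vao, meshIndices, otherIndices;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &meshIndices);
glGenBuffers(1, &otherIndices);

glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, meshIndices); // recorded into the VAO

// ...much later, unrelated code forgets the VAO is still bound...
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, otherIndices);
// BAM: the VAO's element buffer binding is now otherIndices, and the
// next draw with 'vao' silently reads the wrong indices; unbinding the
// VAO first (glBindVertexArray(0)) would have avoided the edit.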

As for DX11.1 - some of it is coming back to Win7 as they need it for IE10 support; I can't recall which bits off the top of my head, however, nor whether the interesting bits are among them.


#5020384 Does Microsoft purposely slow down OpenGL?

Posted by phantom on 11 January 2013 - 11:28 AM


Why you should use OpenGL and not DirectX - Interesting blog post on the subject.

 

"Intresting" and mostly biased, rubbish and wrong.

 

I made a couple of blog posts on here taking the article apart - basically the guy doesn't like DX, has a rose-tinted view of OpenGL, and feels there is a vast conspiracy to Keep OpenGL Down... which is rubbish.

 

Even the 'zomg! faster draw calls!' point he made is a non-event; on DX9 with 'small' draw calls it was a problem but DX10 and DX11 have since removed it and 'small' draw calls are so far from the norm it isn't worth caring about.

 

(And as someone who was using OpenGL from ~99 until 2008 I have a certain perspective; heck some of the older members might recall me defending aspects of 'GL before the Longs Peak screw up, which is when I said 'bye' to using GL and went to the saner DX10 and now DX11 land...)

#5020041 Lines of Coding Per Day

Posted by phantom on 10 January 2013 - 03:56 PM

The amount of code written in a day is meaningless, regardless of whether you ask for the least or the most written - frankly, any good engineer will spend more of their time thinking than writing code.

 

I've had tasks where I've spent 2 days just looking at where I'm going to add code and thinking about what I'm going to do without writing a single line.

 

On the flip side I've had tasks where I've chewed through loads of C# code in a day for throw away tools to process data needed for other tasks.

 

Heck, lines of code vary per language too.

#5018311 DDS Texture Compression

Posted by phantom on 06 January 2013 - 03:10 PM

sampling a compressed texture uses 1/4 or 1/6 of the memory bandwidth.

 

It also allows the cache to hold more texels at any given moment, as they take up less space.
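
As a rough worked example (a 1024x1024 texture; sizes follow from the block formats' fixed rates): RGBA8 is 4 bytes per texel, BC1/DXT1 is 0.5 (8 bytes per 4x4 block) and BC3/DXT5 is 1 (16 bytes per block), so the same cache line holds 4-8x more texels:

#include <cstdio>

int main()
{
    const double texels = 1024.0 * 1024.0; // one 1024x1024 texture
    const double mb = 1024.0 * 1024.0;
    const double rgba8 = texels * 4.0;     // 4 bytes per texel
    const double bc1   = texels * 0.5;     // DXT1: 8 bytes per 4x4 block
    const double bc3   = texels * 1.0;     // DXT5: 16 bytes per 4x4 block
    std::printf("RGBA8 %.1f MB | BC1 %.2f MB (%.0fx smaller) | BC3 %.1f MB (%.0fx smaller)\n",
                rgba8 / mb, bc1 / mb, rgba8 / bc1, bc3 / mb, rgba8 / bc3);
    return 0;
}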

#5017831 Camera Zooming In and Out

Posted by phantom on 05 January 2013 - 12:44 PM

To get a zoom effect you can adjust the field of view used in your projection matrix calculation.
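
A minimal sketch of that (hand-rolled, column-major, GL-style clip space - the function and names are illustrative): narrowing the vertical FOV zooms in, widening it zooms out.

#include <cmath>

// Build a perspective projection from a vertical field of view (radians).
// Zoom by dividing the base FOV: perspective(m, baseFov / zoom, ...).
void perspective(float out[16], float fovY, float aspect, float zNear, float zFar)
{
    const float f = 1.0f / std::tan(fovY * 0.5f);
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;
    out[5]  = f;
    out[10] = (zFar + zNear) / (zNear - zFar);
    out[11] = -1.0f; // column-major: row 3, column 2
    out[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}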

#5017505 Skybox, render order and clipping thoughts

Posted by phantom on 04 January 2013 - 03:22 PM

1. I wouldn't render the skybox first; it's going to be a waste of processing, as based on that scene most of the skybox simply isn't going to be visible once the rest of the scene is drawn. Depending on how complex your sky shader is, that could soon add up.

(Also, don't try to do a z-write as a 'z-clear' replacement; the hardware uses z-clear to reinitialise internal structures etc. and to make sure early z-rejection is enabled. Writing a z-pass instead of a clear could be counter-productive here.)

 

2. Drawing the sky last also solves your clipping-into-the-sky issue, as the sky is always drawn behind other objects regardless of their depth. More importantly, you can't just 'push the far plane out further', as this will reduce the accuracy of the z-buffer and you'll start to get issues such as z-fighting.
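
A common way to get the 'sky last, always behind' behaviour (a GL-flavoured sketch; drawScene and drawSkybox are illustrative stand-ins): force the sky to the far plane in the vertex shader and relax the depth test to GL_LEQUAL so it passes wherever nothing else was drawn.

glDepthFunc(GL_LESS);
drawScene();            // opaque geometry fills the depth buffer as normal

glDepthFunc(GL_LEQUAL); // cleared depth is 1.0, so the sky must pass at ==
glDepthMask(GL_FALSE);  // the sky has no need to write depth
drawSkybox();           // vertex shader emits pos.xyww so depth lands at 1.0
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);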