cmpt

Scrum methodology

This topic is 498 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

4 hours ago, swiftcoder said:

A binary search of < 100 or so case statements, on a value which is already in a register... might as well be free, compared to the cache miss you may incur with the virtual function dispatch.

Doesn't conditional branching (conditional jump commands) kill predictive execution?

Of course virtual functions have their own disadvantages. But I always assumed an abundance of conditional jumps has its performance problems as well. Certainly when there are 100 of them.

11 minutes ago, SillyCow said:

Doesn't conditional branching (conditional jump commands) kill predictive execution?

There are two kinds of jumps - near and far. A far jump flushes the pipeline. Any non-inlined function call uses two far jumps - one to enter the function and another to exit it. Conditional branching prevents inlining, so it has no performance advantage over vcalls, especially when the called method has enough work to do. Also, it is not that important: neither branches nor vcalls have a significant influence. Usually it takes no more than 5% of execution time and generally cannot be reduced.

4 minutes ago, Oberon_Command said:

Just because the support is there doesn't mean it's useful.

It's not useful without hardware support. But hardware support on consumer-class accelerators was added 7 years ago with DX11. Even with that cut-down hardware support, you can first rotate and project the geometry and only then tessellate it, and get exact normals; that can be cheaper than rotating the same quantity of vertices and normals that you have after tessellation. It also significantly reduces memory and bus usage, which is the bottleneck of GPUs, and allows continuous LODs.

5 hours ago, Oberon_Command said:

replaced by stuff like tessellating geometry shaders that are probably faster

Really, the tessellation shader was intended to draw NURBS on consumer-class accelerators. DX11 also added new primitive types for it. Prior to the tessellation shader, CADs had no other way to draw NURBS on consumer-class hardware without software tessellation to triangles. Really, a triangle is a NURBS of first degree. The higher the degree, the smoother the surface you can get from fewer vertices.

5 hours ago, Oberon_Command said:

Polygons are still "good enough," anyway

Not for me.

13 minutes ago, Oberon_Command said:

There's also cache utilization to consider.

On the CPU, just keep your vtables in L2 and don't worry. Also, in some processing cases a very useful trick is applicable: you can make the virtual call once per pool of objects of the same class instead of once per object. It can be used where each call does a tiny amount of computation and the order of processing is not significant. With heavy computation in the virtual method, the call overhead is not significant anyway.

Also, virtual calls are usually used where performance is not critical. For example, in collision detection/prediction it is critical to quickly exclude pairs that definitely cannot collide. So it makes sense to first check the trajectory hyper-capsules, and only for objects whose trajectory hyper-capsules intersect, check the precise colliders in the time segment where the hyper-capsules intersect. But only the precise colliders, which involve heavy computation, require double-dispatched vcalls; the hyper-capsules require no vcalls at all.

For simulation of game logic like health packs, gunfire modes, UI and other switches, mapping user input to virtual controls, etc., dispatch cost is not significant at all, but using data-driven composite objects significantly increases flexibility. It has also become fashionable to use an entity architecture. But pure entities cause endless headaches when reasoning about objects, because an object never knows exactly what it is. I'm using classical OOP with composite objects that have predefined slots for the sparse parts that do the actual work - something like OOP combined with entities. Thanks to Borland's sources I began using it 25 years ago, and I usually have an architecture that lets me work with objects on a "setup/load object - put it into the model - forget" scheme. So why should I follow the fashion now, if the evangelists will soon declare pure entities problematic and go searching for something like what I've been using since my first university year?

26 minutes ago, Oberon_Command said:

mispredicted branches do come with a performance cost on modern hardware due to deep pipelined CPU architectures

Any far jump resets the pipeline. So simulating dynamic polymorphism with ad-hoc polymorphism, which requires unbounded code size, causes even more slowdowns due to code-cache misses, and an endless implementation headache due to the combinatorial explosion of variants that dynamic polymorphism does not have. Just ask the evangelists of ad-hoc simulation why in their papers they never set up a second polymorphic type in the benchmark code, and never use heavy calculations with multiple DLL function calls inside methods, as is usual in practice. It's because with a second type the compiler cannot eliminate the vjump and table lookup, and there would be no significant difference from dynamic dispatch.

Really, branch selection is computation too. Any computation has a theoretical minimum of memory accesses and CPU usage that cannot be reduced in any case. For example, if you need to solve a quadratic equation, you cannot avoid computing the square root in the general case. Same here. They just show you the Vieta's-formulas case instead of the general case in their benchmarks. So concentrate on improving algorithm performance and flexibility. That will give much more performance in both development and runtime.

 

5 hours ago, Oberon_Command said:

I'm not sure what you mean by "if-if-if", but running code that doesn't add value is obviously bad for performance

Just read about PID regulators, for example. They have a very simple formula with 3 gains; by adjusting them you can get P, I, D, PD, ID, or PID regulation without any branching. This principle is the key to many fast data-driven tricks. Using it will significantly simplify auto-aiming, autopiloting, and any automation modeling, give you incredible control over aiming accuracy and robustness, and, by the way, give as much realism as is ever possible. Simply because it is the core of automatic control theory, and anything with "auto" in its name uses it - even the Sidewinder, Apollo, and Raptor.

6 hours ago, Oberon_Command said:

True, but if you can implement something without polymorphism, why wouldn't you,

If I can implement something without polymorphism, it will never need to change. Also, it will most likely be calculated on the GPU.

6 hours ago, Oberon_Command said:

Perhaps if you're curious about why NURBS isn't used in the major game engines, you should make a post in the graphics forum

Because I know exactly why. Try to find a modeller.

6 hours ago, Oberon_Command said:

Having "dead data" (as in, data to configure stuff that you aren't using) in memory surely doesn't help with that.

You need that data anyway. And I keep it in the minimal possible size, concentrated in one place. You propose to spread it around the code, losing any control over its size, integrity, etc., and by the way losing any flexibility.

On 7/7/2018 at 7:15 AM, mr_tawan said:

However, I think ... majority of this discussion is actually off-topic. I guess, @cmpt, if you want to know more about SCRUM and stuffs, it's probably better to open another separate thread (or, better, I'd ask an admin to branch this topic out).

I tried branching further (after the "what is AAA" bit sprouted and got transplanted elsewhere) but it's too difficult to determine which posts to split off, and what to title that split. I agree that there's a lot here that is not about scrum, but I'm darned if I can figure out what it IS about! 

16 hours ago, Fulcrum.013 said:
22 hours ago, Oberon_Command said:

Perhaps if you're curious about why NURBS isn't used in the major game engines...

Because I know exactly why. Try to find a modeller.

I'm curious, have you done much NURBS modelling yourself?

NURBS-based modelling was a pain in the neck back in the CAD systems of the 90s (I was loosely involved with yacht design software of that era, which was all NURBS-based), and it hasn't become any less of a pain since. They are still necessary for some areas of CAD work requiring mathematically smooth surface definitions (to drive CNC machines and the like), but they have been thoroughly obsoleted for most art use-cases by subdivision surfaces. Subdivision surfaces tend to be cheaper to evaluate, and a lot easier for the human in the loop to reason about.

29 minutes ago, swiftcoder said:

NURBS-based modelling

This thread has gone way off track from the original question about scrum. Splitting is not an option anymore because so many threads weave in and out. Closing thread. Start new threads to discuss NURBS and whatever else anybody wants to discuss (including scrum). 



This topic is now closed to further replies.
