

Shannon Barber

Member Since 23 Jun 2000
Last Active Apr 07 2014 11:11 PM
-----

#5120626 Is OBB always the minimum bounding box?

Posted by Shannon Barber on 02 January 2014 - 12:45 AM

No it's not guaranteed to be minimal; it's only guaranteed to be "oriented".

You could force/guarantee yours are minimal ...

 

An AABB/MBB is easy because that's just per-axis projections (linear algebra). A minimal OBB is difficult because that's a genuine minimization problem (calculus/optimization over all possible orientations, not just projection).
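To make the "just projections" point concrete, here is a minimal sketch (the `Vec3`/`aabb_of` names are illustrative, not from the thread): an AABB falls out of per-axis min/max, with no search over orientations.

```cpp
#include <algorithm>
#include <cassert>

struct Vec3 { double x, y, z; };
struct AABB { Vec3 lo, hi; };

// Project every point onto each axis and keep the min/max --
// plain linear algebra, no optimization needed.
AABB aabb_of(const Vec3* pts, int n) {
    AABB b{pts[0], pts[0]};
    for (int i = 1; i < n; ++i) {
        b.lo.x = std::min(b.lo.x, pts[i].x); b.hi.x = std::max(b.hi.x, pts[i].x);
        b.lo.y = std::min(b.lo.y, pts[i].y); b.hi.y = std::max(b.hi.y, pts[i].y);
        b.lo.z = std::min(b.lo.z, pts[i].z); b.hi.z = std::max(b.hi.z, pts[i].z);
    }
    return b;
}
```

A minimum-volume OBB, by contrast, has to consider every orientation of the box, which is why it is so much harder.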




#5120625 When to use polymorphism

Posted by Shannon Barber on 02 January 2014 - 12:27 AM


In which cases should I use polymorphism?

 

When the behavior of the class must change at run-time.

 

If you know you will only ever have dogs & cats, then you can keep a list of dogs separate from a list of cats.

 

If you know you will have dogs, cats, and "some other animals" then you need polymorphism. It is designed to handle that ambiguity.

 

 

The usage of polymorphism has evolved into an Interface / Implementation pairing. You design a completely abstract interface, which is a contract of what the implementation must do. It is more than function signatures, it is a set of guaranteed behavior. (This is not how you must or should use it, that's just how it is commonly done. C++ gives you some more choices.)
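A minimal sketch of that interface/implementation pairing (the `Animal`/`Dog`/`Cat` names are illustrative, picking up the dogs-and-cats example above): the abstract base is the contract, and one container handles "some other animals" without knowing their concrete types.

```cpp
#include <memory>
#include <string>
#include <vector>

// Pure abstract interface: a contract of behavior, not just signatures.
struct Animal {
    virtual ~Animal() = default;
    virtual std::string speak() const = 0; // contract: returns this animal's sound
};

// Implementations fulfill the contract; callers never need the concrete type.
struct Dog : Animal { std::string speak() const override { return "woof"; } };
struct Cat : Animal { std::string speak() const override { return "meow"; } };

// One function handles dogs, cats, and "some other animals" alike.
std::string chorus(const std::vector<std::unique_ptr<Animal>>& zoo) {
    std::string out;
    for (const auto& a : zoo) out += a->speak() + " ";
    return out;
}
```

The behavior is chosen at run-time through the vtable, which is exactly the "behavior must change at run-time" case above.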




#5119967 Easy-to-use Version Control on Windows? Needs to be able to easily ignore cer...

Posted by Shannon Barber on 30 December 2013 - 12:13 AM

Mercurial strikes the best balance: it is simple to use and it is a modern source-control tool.

 

You can use TortoiseHg which is very similar to TortoiseSvn so you'll have a low learning curve there.

 

git is a bit more powerful than Mercurial and a bit more complicated to use, but they have very similar core features.

 

I don't see much reason to use any version-control tool other than Mercurial or git. If you have a shared/mapped Windows drive then you have what you need to get started.




#5078854 Casting between different sized integer references

Posted by Shannon Barber on 19 July 2013 - 12:00 AM

Interestingly, GCC 4.8 does not even warn about that code of yours although it's arguably in violation of the standard which says "A reference shall be initialized to refer to a valid object or function" (8.3.2) with "valid" being the important bit.

 

Since originalInt is not of a type that the new reference type can accommodate, it isn't a valid object (well, originalInt itself is a valid object, but the result of the cast that the reference is initialized with isn't). You would think this would be obvious to the compiler too, but maybe it's because of the cast operation. The compiler probably assumes "the programmer said cast, so he knows what he's doing".

He cast a POD to a POD, so it's valid as long as memory (size) constraints are honored, and they are here since the destination type is smaller.
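An illustrative reconstruction of the kind of cast under discussion (the thread's actual code isn't shown here, and `low_half` is an invented name): binding a smaller-typed reference to a larger POD object stays within the object's storage, but which bytes you see depends on endianness, and it sidesteps the usual aliasing guarantees.

```cpp
#include <cstdint>

// Returns a reference aliasing part of the 32-bit int's storage.
// On a little-endian machine this is the low-order 16 bits.
// The target type is smaller, so no out-of-bounds memory is touched.
int16_t& low_half(int32_t& originalInt) {
    return reinterpret_cast<int16_t&>(originalInt);
}
```

Whether this is "valid" in the strict standardese sense is exactly what the thread is arguing about; in practice compilers accept it because the cast signals intent.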




#5078852 Should game objects render themselves, or should an object manager render them?

Posted by Shannon Barber on 18 July 2013 - 11:50 PM

That depends on what "render themselves" means.

If you mean putting OGL/D3D calls right into an object::render method then no, that's a terrible idea.

That makes it too easy to break the graphics when you add a new object into the game.

I do it this way for simple things where performance is a non-issue and I'm in a hurry.

e.g. prototyping a replacement for Simulink. (It's 2D drawing; the most complex call in the graphics is turning on anti-aliased line drawing.)

 

If you mean putting abstracted graphics routines into the object then that's sub-optimal for performance but might be easy to code with.

 

To maximize performance, I believe you need to submit the graphics to the GPU in batches so that you keep the GPU & CPU both working simultaneously, and to maximize performance of the GPU you want to minimize rendering state-changes.

 

Suppose you were making an RTS and 50 of the things on the screen were the same base tank model. You don't want to change all the states for every tank over and over again. It'd probably be better to draw all the faces that share the same textures on all the tanks at the same time, then switch to the next texture. Given the small number of pixels the tanks would actually use up, it'd probably be just as fast to draw 50 tanks this way as 2 or 3 the previous way.

 

Some things, like mirror effects, have to be drawn last, so you can't even draw them correctly if you attempt to draw them "in line" as you traverse your spatial sorting. You have to queue them for later. Before 'custom shaders' were the norm, we'd write 'shader'-based rendering engines: graphics code that we called a "shader". Different parts of the various models would pick which shader drew that part of the model.

I would cull objects with a sphere tree, then submit their shaders to the renderer, which hash-sorted them based on their priority (a constant that is unique to and part of the shader). The priorities were carefully picked to minimize renderer state changes and also guarantee the correct execution order for things like mirrors. With constant custom shaders this lets you draw everything that uses the same shader all together, then switch to the next shader. This presumes it is unlikely that you would use the same textures with different shaders; if you're changing the shader, you're probably changing textures as well. So I sorted by shader first, texture second.

 

When I say "sort" I do not mean an O(n²) sort. I mean something like a hash sort or a priority heap.
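The sort-by-shader-then-texture idea above can be sketched as a sort key (the `DrawCall` layout and field names are invented for illustration): packing the shader priority into the high bits means an ordinary integer sort yields shader-major, texture-minor draw order.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawCall {
    uint16_t shaderPriority; // unique constant per shader; chosen to enforce
                             // execution order (mirrors sort last)
    uint16_t textureId;
    // ... mesh data, transforms, etc.

    // Shader in the high bits, texture in the low bits:
    // sorting by this key groups by shader first, then texture.
    uint32_t key() const { return (uint32_t(shaderPriority) << 16) | textureId; }
};

void sortQueue(std::vector<DrawCall>& q) {
    // O(n log n) here for brevity; a hash bucket or priority heap
    // achieves the same grouping, as the post suggests.
    std::sort(q.begin(), q.end(),
              [](const DrawCall& a, const DrawCall& b) { return a.key() < b.key(); });
}
```

Walking the sorted queue, you only change shader state when the high bits change and texture state when the low bits change, which minimizes GPU state switches.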




#5078841 Proper usage of branching in [hg]mercurial

Posted by Shannon Barber on 18 July 2013 - 11:02 PM

If you are not maintaining releases and do not have many people working on the code-base, there is no reason to branch - especially not with a modern tool like hg or git.

In hg or git every commit is like a micro-branch, so you are already benefiting in a small way from branching technology.

On small projects at home I never branch. At work we have to maintain a shitload of branches.

 

If you do a release then you should tag that commit (so you can find it easily); then, if and when you need to fix a bug, you can create a branch to support that release and ship 1.0.1, 1.0.2, etc.

If the project is large and stable and you want to develop a big new feature, then you can create a 'development branch' to keep the experimental work on that branch until you deem it high enough quality to merge to the mainline.

 

If you have lots of developers simultaneously developing features then you can create 'feature branches' to keep each feature's development from interfering with the others as they break things.

 

If you had to fix a bug in the 1.0 release then odds are it's still in the mainline code so you'd want to merge that bug fix back to the mainline.

 

If you make development branches then you want to periodically merge the mainline into the development branch first, to work out compatibility issues on the development branch (maybe other people have merged big development branches in the meantime). Once you have a recent main→branch merge, you do a branch→main merge (this ensures the merge to main is easy).

 

When you're doing small feature branches you might just go-for-it and merge the feature back to the trunk (and if you have trouble, then abort the merge and merge the trunk to the feature branch).

 

I never use the command line for merge management; you really want to see what you are doing.

With hg, merging a branch is not really any different from merging two check-ins on the trunk.




#5048129 User versus System include, quotes and brackets etc.

Posted by Shannon Barber on 29 March 2013 - 03:24 PM

Technically there doesn't have to be a stdio or stdlib file. The compiler is allowed to supply them internally (or any way the implementer desires).

 

So

#include "stdio"

is pedantically incorrect and unportable code as it requires the file "stdio" to exist.

#include <stdio>

is how the compiler is told to find the system header called 'stdio'.

 

In practice every C/C++ compiler I know of uses files for the standard headers so it ends up not mattering much.

 

Not every compiler will even honor the system folders vs current folder rule. Many require you to give them an explicit list of include directories and they always search them in order.
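A small sketch of the lookup difference in practice, assuming a typical compiler where the quoted form falls back to the system search when no local file matches (which is why both lines below find a standard header):

```cpp
#include "stdio.h"   // quoted: searches the including file's directory first,
                     // then falls back to the same search the bracketed form uses
#include <stdlib.h>  // bracketed: implementation-defined system paths only

// printf returns the number of characters written, including the newline.
int print_greeting(void) {
    return printf("found both headers\n");
}
```

Note this uses "stdio.h" (the real header name); the post's point stands that a quoted `"stdio"` with no extension pedantically demands a file by that exact name.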




#5048127 How many classes does a software engineer is expected to work with?

Posted by Shannon Barber on 29 March 2013 - 03:21 PM

The minimum necessary to implement the system correctly.




#5038526 Game: Health regeneration?

Posted by Shannon Barber on 02 March 2013 - 03:19 PM

With computers today it's probably faster to use floating-point than do all the shifting required for fixed-point numbers.

Just be aware that you should never use == or != with floating-point numbers, as results are almost always off by a little bit.
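A minimal tolerance comparison, since == on floats bites exactly as warned (0.1 + 0.2 is not exactly 0.3 in binary floating point); the function name and epsilon are illustrative choices:

```cpp
#include <algorithm>
#include <cmath>

// Compare floats within a tolerance instead of using == directly.
// The tolerance is scaled by the magnitudes so large values still compare sanely.
bool nearly_equal(double a, double b, double eps = 1e-9) {
    return std::fabs(a - b) <= eps * std::max({1.0, std::fabs(a), std::fabs(b)});
}
```

For game health values, an even simpler fixed absolute tolerance is usually fine.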




#5037867 Odd usage of friend keyword in C++ class.

Posted by Shannon Barber on 28 February 2013 - 08:48 PM

I would expect different translation units to implement 'class event' differently.

 

It's a zero-overhead hook.

 

This is the kind of code you get when someone gives the green light to C++ but then says but no virtual functions!!!

 

 

Is there a chance that foo is a template class?




#5017957 How to build a 3D Engine?

Posted by Shannon Barber on 05 January 2013 - 06:30 PM

I suggest starting with the NeHe tutorials. Once you understand the basics of 3D, then move on to shaders.

 

If you are more interested in getting a game together than learning about 3D graphics, then use an engine like Ogre.




#5012314 Design Patterns, which ones should I be using?

Posted by Shannon Barber on 18 December 2012 - 09:32 PM

Small-team + generic-as-possible = dumb.
Back to the drawing-board.

Design patterns are not even an "in sight" problem.


#5012309 Animating a wheel.

Posted by Shannon Barber on 18 December 2012 - 09:23 PM

That's a "bad path". Separate the physics from the graphics and make the graphics slave to the physics.
As-is this sounds like you are relying on the graphics calculations to achieve the spin-result.
This will result in non-deterministic behavior due to a variable calculation step-size (it'll be whatever the spurious free-running rate is... almost random).

Typically you pick a fixed rate to run the physics at. This makes the physics (much more) stable and repeatable.
If you update the graphics more frequently (you don't have to) then you interpolate, or predict & correct.

So write the physics part that takes an impulse and translates it into an initial rotational velocity and then tweak your spin-down.
Then make the rendering pass take the current angle from the physics engine and calculate a rotation matrix to apply to the wheel.

You create 'synthetic time' and update it at the physics step. Each step you run a pass of the physics engine.
When you have time (are caught up to real-time) you can render a frame. If you get all the work done you can sleep until real-time reaches the next physics step in synthetic-time.

*The primary problem is the inconsistent step size, not rounding issues.
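The synthetic-time loop above can be sketched as follows (the `Wheel` fields, step rate, and spin-down factor are invented for illustration): physics advances in fixed steps until it catches up to real time, so the step size never depends on the frame rate.

```cpp
const double kDt = 1.0 / 120.0;  // fixed physics step, in seconds

struct Wheel {
    double angle = 0.0;
    double velocity = 10.0;  // initial rotational velocity from the impulse
};

// One fixed-size physics step: deterministic, so the same inputs
// always produce the same spin-down.
void physics_step(Wheel& w) {
    w.velocity *= 0.999;          // spin-down; tweak to taste
    w.angle += w.velocity * kDt;
}

// Advance synthetic time until it reaches realElapsed; returns steps taken.
// After this returns, the renderer reads w.angle (optionally interpolating)
// and builds its rotation matrix from it.
int catch_up(Wheel& w, double& synthetic, double realElapsed) {
    int steps = 0;
    while (synthetic < realElapsed) {
        physics_step(w);
        synthetic += kDt;
        ++steps;
    }
    return steps;
}
```

In the real loop you'd measure `realElapsed` from a monotonic clock each frame and sleep when synthetic time is ahead, exactly as described above.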


#5012297 Is C++ too complex?

Posted by Shannon Barber on 18 December 2012 - 09:05 PM

The problem with C++ is not the complexity.
There are many ways that C# and Java are more complex than C++ is.

The problem with C++ is that it is a mess.


#5012293 Alternative to JSON and XML?

Posted by Shannon Barber on 18 December 2012 - 09:03 PM

Wanted to second/third YAML.
Otherwise you're back to .ini files or roll-your-own.

PS LiquidXML is an awesome tool for XML schema creation. I used it through beta to release.
LiquidXML would be most useful with C# or maybe Java; I don't think it'd help much with C code.
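For comparison, a small settings file in YAML (the keys here are invented for illustration) stays nearly as terse as an .ini file while nesting cleanly, with no closing tags:

```yaml
# Hypothetical game settings: indentation expresses structure,
# so there is no tag or brace overhead.
window:
  width: 1280
  height: 720
  fullscreen: false
controls:
  jump: space
  fire: [mouse1, ctrl]
```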



