
irreversible

Member Since 10 Nov 2005
Offline Last Active Yesterday, 02:59 PM

#5290948 What is the best way to update a Quadtree/Octree?

Posted by irreversible on 10 May 2016 - 05:50 AM

You could use a 1D or 2D hash for the lowest level of detail and index into a cache in which you've allocated your QT/OT nodes directly.

Not only does maintaining a semi-duplicate linear data structure (for each level) help with simple updates, it is also much more ergonomic in terms of memory management, and most updates/neighbor lookups are bound to be MUCH more cache friendly.

Personally I feel like linear grids are almost always the way to go. If you do need coarse control, it makes more sense to mentally adjust how you think about a quadtree and treat it as a set of tiered linear arrays instead.
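A minimal sketch of the tiered-linear-array idea - the Node payload, sizes and names are made up for illustration; the point is only that a cell at any level is reached with plain integer math instead of a tree walk:

#include <cstdint>
#include <vector>

struct Node { /* whatever payload the tree stores */ };

// One flat array per level; level 0 is the root, the last level is the finest.
struct TieredQuadtree
{
    std::vector<std::vector<Node>> levels;
    float worldSize = 1024.0f;   // assumed square world, coordinates in [0, worldSize)

    void init(uint32_t levelCount)
    {
        levels.resize(levelCount);
        for (uint32_t i = 0; i < levelCount; ++i)
            levels[i].resize((1u << i) * (1u << i)); // 2^i x 2^i cells per level
    }

    // Index the cell containing (x, y) at a given level - no tree traversal.
    Node& at(uint32_t level, float x, float y)
    {
        const uint32_t dim  = 1u << level;
        const float    cell = worldSize / float(dim);
        const uint32_t cx   = uint32_t(x / cell);
        const uint32_t cy   = uint32_t(y / cell);
        return levels[level][cy * dim + cx];
    }
};

Same-level neighbor lookups then become simple ±1 offsets on the cell coordinates, which is where most of the cache friendliness comes from.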


#5289533 Do you ever get this?

Posted by irreversible on 01 May 2016 - 01:59 AM


So yeah - do you ever get this?

 
Not so similar, but... here it is anyway
 
I've had to write algorithms for triangles of different shapes lately, from normal to weird.
 
Every time I thought I had written my algorithm to cover all the weirdly shaped triangles, another, weirder one appeared and broke the code.
And then after a while super weird triangles began to show up that broke the code - triangles that are effectively a single line, because you can draw one straight line through all 3 points.
 
I guess I probably hadn't read my books well enough, because I think veterans have standard ways of dealing with these weird shapes :lol:

I discovered the plentiful nuances of computational geometry firsthand when I wrote my triangulator. It took me months to iron out all the kinks and handle all special cases, and I'm sure there are still a few around that I simply haven't come across :).

A triangulator now seems like the perfect school assignment for students who think they know better. There's always that one special T-junction, degenerate vertex or empty triangle they didn't think of :)
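For reference, the "effectively a single line" case above is a zero-area (collinear) triangle, and it can be caught with a simple cross-product test - a minimal sketch, assuming 2D points and a hand-picked epsilon:

#include <cmath>

struct Vec2 { double x, y; };

// Returns true if the triangle (a, b, c) is degenerate, i.e. its points are
// (nearly) collinear and it has (nearly) zero area.
bool isDegenerate(const Vec2& a, const Vec2& b, const Vec2& c, double eps = 1e-12)
{
    // Twice the signed area of the triangle (2D cross product).
    const double area2 = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    return std::fabs(area2) < eps;
}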


#5289382 Do you ever get this?

Posted by irreversible on 30 April 2016 - 12:19 AM

Every now and then I come across the need to refactor or properly integrate some older code. I usually encapsulate my experimental stuff fairly well, so this is mostly reduced to just checking the code, updating some data structures and adjusting the logic from a sandbox to a production grade environment.

 

Every now and then things get weird, though.

 

Occasionally (and by occasionally I mean "on numerous occasions") I discover a very obvious bug in the old code that completely breaks the new code in a way that makes perfect sense. However, for some reason it used to work before without ever failing. Not once. This is usually code that has not been thoroughly tested (or rather hasn't been tested in the real world) and tends to look like a mess. Although there are occasions when these kinds of curiosities also show up in fairly well tested - or sometimes the most thoroughly tested - code, as shown below.

 

Right now I'm in the process of integrating some fairly involved navmesh generation stuff I wrote about two years ago, and when I ran it with exactly the same data set I originally wrote and debugged it against, it fell flat on its face. After stepping through it I found three pretty obvious mistakes, including the following line:

 

{
    uint32 cx = v->x;
    for (;;)
    {
        grid[(v->y / iBinSize) * gw + (cx / iBinSize)]->vertices.push_back(v);
        if (cx <= v->next->x)
        {
            break;
        }
        cx -= iBinSize;
    }
}

 

This will crash if cx becomes negative (or in this case wraps around, since it's unsigned), because it is used to index a grid element. And for a given dataset it should crash every single time. Yet somehow, through some bit of sheer dark magic, it didn't in the past, under seemingly identical circumstances.
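For the record, the failure mode here is plain unsigned wraparound - a tiny self-contained illustration (uint32_t standing in for the engine's uint32):

#include <cstdint>
#include <cstdio>

int main()
{
    // uint32_t never goes negative: subtracting past zero wraps around,
    // which is exactly what turns (cx / iBinSize) into a wild grid index.
    uint32_t cx = 3;
    cx -= 16;                       // wraps to 4294967283 instead of -13
    std::printf("%u\n", cx);

    // A loop like the one above therefore needs a guard before the
    // subtraction, e.g.:  if (cx < binSize) break;  cx -= binSize;
    return 0;
}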

 

Similar oddities I've come across are not creating a GL texture, yet through some backwards circumstance the same image is loaded in a completely unrelated part of the engine and for some reason everything works "correctly" for a VERY long time; or stuff like not initializing variables, very obvious overflows, etc.

 

The biggest one I can think of was a tracer related issue in my previous code, which extended way back to 2005 when my code wasn't thread-aware. The tracer essentially used a static local to store parts of the stack, which at one point, as my code became increasingly more multithreaded, started to cause extremely rare random crashes, which didn't really show up in the debugger. So it took me about seven years before I uncovered this monstrosity.

 

All this makes me wonder - what other demons am I burying in my code right now that I will completely forget and will come back to haunt me in several years...

 

So yeah - do you ever get this?




#5288457 a mechanism to fill in for templated overloads

Posted by irreversible on 24 April 2016 - 11:09 AM

Okay, here's one way to do it. I'm guessing it's not the most elegant method, but it does work. In short, the idea is to get rid of the push_arg() specializations inside the class and set up a layer of indirection to pass the arguments to the correct overload. I reckon a snippet will be more illustrative.

 

scriptengine_base.h

// forward declare an abstract type translator
template <typename T>
void push_arg(IN T arg, IN scriptengine* eng);
 
 
class scriptengine {
   protected:
     // recursion terminator for the variadic expansion below
     void push_arg() { }

     // do as before, but this time the per-type push_arg() member functions are missing;
     // each single argument is forwarded explicitly (::) to the global translators
     template <typename T, typename ...Args>
     void push_arg(IN T arg, IN Args ...args)
       { ::push_arg(arg, this); push_arg(args...); } 
};
 

The overloaded engine class remains the same. The changes are instead added to the application code as translator specializations:

template <>
void push_arg<IActor*>(IN IActor* arg, IN scriptengine* eng)
{
    luaengine* luaEng = static_cast<luaengine*>(eng);
    luaEng->push_arg(arg);
}
 
template <>
void push_arg<const char*>(IN const char* arg, IN scriptengine* eng)
{
    luaengine* luaEng = static_cast<luaengine*>(eng);
    luaEng->push_arg(arg);
} 
 
template <>
void push_arg<float>(IN float arg, IN scriptengine* eng)
{
    luaengine* luaEng = static_cast<luaengine*>(eng);
    luaEng->push_arg(arg);
}
 
// ... and so forth for all supported data types

It's actually really simple, but in all honesty I'd been pondering on and off for several weeks how to pull it off.
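For context, a hypothetical call site might look roughly like this - call_script() and the argument list are made up; the point is only that each argument gets routed through the matching global push_arg<T> translator for its type:

// Somewhere in application code, with a luaengine instance at hand:
void fire_script_event(luaengine& lua, IActor* actor, float damage)
{
    // Hypothetical entry point that internally fans the arguments out via
    // push_arg(args...); each one ends up in the specialization for its type,
    // which downcasts the engine and pushes the value to Lua.
    lua.call_script("on_actor_damaged", actor, damage, "melee");
}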

 

Working with templates is, more often than not, like dancing tango on a melting raft made of ice. With a polar bear. There's usually a solution, but first you have to do something about the polar bear.




#5287158 Anyone here a self-taught graphics programmer?

Posted by irreversible on 16 April 2016 - 02:43 AM

I'm also pretty much all self-taught. I started toying with QBasic just about when it was on its way out, but then arbitrarily picked up this book, started feeling my way around my first IDE (the Borland one) and moved on to more serious stuff. I remember arrays and the order of logic operators being strangely elusive when I was just starting out :).

 

Eventually I created my first window in WinAPI and thought "now that I have a window, the rest is easy-peasy". Boy, was I wrong  :P .

 

Like many, I took up OpenGL with the help of NeHe's tutorials and continued reading books, all in my spare time, without really giving up a social life.

 

Eventually I found myself facing the same choice everyone faces when they graduate from high school. With no clear idea of what I wanted to do, I enrolled in IT, with an emphasis on software development. But then two strange things happened: as I took more and more courses, it turned out I knew too much for the school I went to, but too little to actually spearhead a sizeable project on my own; what was far more concerning was the feeling that I didn't want to labor away for someone else's dreams. So I graduated and re-enrolled in a completely different field, all while I started structuring my code into something a bit more cogent, which eventually turned into something that started to smell and feel like an engine. As a hobbyist, I find it surprisingly difficult to keep myself focused on a single objective, so pretty much all parts of my game engine are in very active development. Although, after years and years of rewriting and trying out different things, many of them are really starting to fall into place. Which feels awesome. :)

 

As time goes on, I do feel the sting of a lack of further formal education and the absence of the discipline of a professional work place in the back of my head. But I'm still not old enough to give up my dreams :).

 

So yeah - I don't have anything like L. Spiro's video to show for it, but what I can say with confidence is that I've studied and built a lot of stuff that I'm personally really, really proud of. And again, that feels awesome :cool:




#5287156 Computing an optimized mesh from a number of spheres?

Posted by irreversible on 16 April 2016 - 02:10 AM

Incidentally - if what you're looking for is a blob-like approximation of your spheres, not an exact union, and you're only dealing with spheres, then it might not hurt to look into metaballs.
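For the curious, the heart of a metaball surface is just a scalar field summed over the spheres and compared against a threshold - a minimal sketch with an arbitrary inverse-square falloff (polygonizing the resulting isosurface is a separate step):

#include <cmath>
#include <vector>

struct Sphere { float x, y, z, r; };

// Classic inverse-square falloff: the surface is the set of points where
// field(p) equals some threshold (e.g. 1.0f).
float field(const std::vector<Sphere>& spheres, float px, float py, float pz)
{
    float sum = 0.0f;
    for (const Sphere& s : spheres)
    {
        const float dx = px - s.x, dy = py - s.y, dz = pz - s.z;
        const float d2 = dx * dx + dy * dy + dz * dz + 1e-6f; // avoid division by zero
        sum += (s.r * s.r) / d2;
    }
    return sum;
}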




#5287153 Computing an optimized mesh from a number of spheres?

Posted by irreversible on 16 April 2016 - 02:03 AM

Hey.

 

I have this idea of making a cool terrain system. I haven't been able to formulate a good Google search for it... The idea is to define a number of intersecting spheres that together form a solid mesh. However, I'm having trouble figuring out how to convert such a group of spheres to a solid mesh. So, my question is:

 

Given two or more intersecting spheres, how would I compute a solid triangle mesh from them without producing any triangles that end up inside the mesh?

 

Bonus question: The same question as above, but this time it also needs to support "anti-spheres" which instead of forming terrain cut holes in existing meshes.

 

 

I would like to avoid a voxel-based solution due to performance and memory reasons...

 

 

Boolean operations are indeed what you're looking for. However, my advice is to offload CSG ops to either an external library (which you'll have to google yourself) or a CAD program as a preprocessing step, both of which will have far more robust implementations than what you can whip up on your own in a reasonable amount of time. CSG has a lot of edge cases, and care must be taken to avoid degeneracies and precision issues. I know this because I've gone down the other road and actually implemented my own CSG routine  :ph34r: .

 

Now, if you do find yourself itching to do this on your own, then prepare for A LOT of testing and wrinkling your nose as your edge cases refuse to work :P . Of the three primary approaches - the stencil buffer approach, which gives you no resulting geometry; the BSP-tree based approach, which blows up quickly in terms of speed and memory footprint as you give it more complex objects; and the polygon-based approach - you'll want to look into the last one, which is the most stable and can handle large meshes, as painstakingly outlined in this paper by Laidlaw back in 1986. I do believe I encountered one mistake in the paper, though. However, I can't recall what it was and I don't have access to my code right now.

 

The one thing I love about Laidlaw's polygon-based approach is that operations on each mesh are atomic and can thus be batched off to different threads. That is, for all operation types you're essentially looking at the following scheme: 1) calculate the intersection of A with B; 2) calculate the intersection of B with A; 3) perform a simple discard on the data that you do not need (depending on which operation you're performing) and merge as needed. Steps 1 and 2 are completely separable and can be performed in parallel based only on the input meshes. Step 3 requires synchronization, but it's a cheap one. The bad thing is that you probably won't be able to use your existing mesh data structures for it. Good luck :).
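A rough sketch of that batching idea - clip_against() and merge_pieces() are hypothetical stand-ins for the real per-mesh work, not anything taken from Laidlaw's paper:

#include <future>

struct Mesh          { /* triangles, adjacency, ... */ };
struct ClippedPieces { /* polygons of one mesh, split and classified against the other */ };

// Hypothetical helpers with stub bodies, purely for illustration.
ClippedPieces clip_against(const Mesh& subject, const Mesh& tool) { return {}; }
Mesh          merge_pieces(const ClippedPieces& a, const ClippedPieces& b) { return {}; }

Mesh csg_op(const Mesh& a, const Mesh& b)
{
    // Steps 1 and 2 only read the input meshes, so they can run in parallel.
    auto piecesA = std::async(std::launch::async, clip_against, std::cref(a), std::cref(b));
    auto piecesB = std::async(std::launch::async, clip_against, std::cref(b), std::cref(a));

    // Step 3: the cheap, synchronized part - discard what the chosen operation
    // doesn't need and merge the rest into the output mesh.
    return merge_pieces(piecesA.get(), piecesB.get());
}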




#5287088 Is inheritance evil?

Posted by irreversible on 15 April 2016 - 01:43 PM

The key question facing any problem is: "what are the right tools to solve this?"

You don't kill a fly with an elephant gun. But sometimes. Just sometimes. You find yourself up against an elephant.


PS - it's a goddamn metaphor.


#5286933 c++ should nested switch statements be avoided?

Posted by irreversible on 14 April 2016 - 04:16 PM

Just because one can, does not necessarily mean one should.


#5286186 Question just to clarify my game structure

Posted by irreversible on 10 April 2016 - 02:41 PM

For starters, there's no "correct" way to do most things. As a game developer there are three things you want:

 

  • stable code (meaning no crashes)
  • [decently] fast code (meaning it hits your FPS target)
  • and good gameplay (meaning your game is responsive and fun to play)

How you go about it is in many cases irrelevant. Unreal Engine is one of the more popular game engines right now - just look at the treatment it gets when you brush away the flashy topsoil. Despite its popularity, it's not their way or the highway. And it never will be :).

 

That being said, what you really want to achieve is for your code to do what you want it to do. If it does that and the above three points are covered, then move on. Come back to it if you feel it's not quite what you want or you get a better idea how to do it. If the code doesn't do what you need it to, then analyze it further and try to figure out why it doesn't do that. Notice that the stress here is on what you want it to do :). That's because it's your game and the question you posed cannot really be answered by anyone else.

 

I could make remarks about the redundant comment and the superfluous else case, or the fact that you call your player a rectangle (which might be perfectly okay depending on what your game is - especially if your player is a rectangle), or the typo on the line "if(k == e.VK_J && damageDetected = true)" (assignment instead of comparison). But at the end of the day none of that would be answering your question :).




#5285792 Should getters and setters be avoided?

Posted by irreversible on 08 April 2016 - 07:45 AM

Like Samoth said, while in many (or in my experience, most) cases it results in bloat and overengineering, using getters and setters is, in some cases, necessary. I personally use them only ever so occasionally, when I am absolutely sure that is the type of encapsulation I need. Basic getters and setters do not provide thread safety, they do not result in cleaner code, and in my view they do not offer much protection against accidentally writing into a raw member variable. Unless... you have a reason to restrict access to those member variables, that is.

 

A somewhat more interesting scenario is providing a separate accessor structure, which then friends only those classes that should have access to your base structure. Eg:

struct base {
   private:
     friend struct base_accessor;

     int a;
};

struct base_accessor {
    private:
      friend struct interface1;
      friend struct interface2;

      // kept private on purpose: only the friended interfaces can call these
      inline void set_a(base& b, int a) const { b.a = a; }
      inline int get_a(base& b) const { return b.a; }
};

Now only interface1 and interface2 have access to base::a, and they have to go through an instance of base_accessor to get it. My engine code uses this to keep certain components from accidentally writing into a class, or to limit accessibility to a given thread. Also, this way you can separate the write interface from the read interface by defining a base_writer and a base_reader, should you need to.
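To make the flow concrete, a friended interface might use it roughly like this (a sketch only - where the accessor instance lives is up to you):

struct interface1 {
    void bump(base& b)
    {
        base_accessor acc;              // friended, so the private accessors are reachable
        acc.set_a(b, acc.get_a(b) + 1); // any non-friended class would fail to compile here
    }
};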

 

Also, getters and setters are invaluable when you're maintaining an intermediate variable, which might be updated only when one of its component variables changes. If you do use them on raw variables, you should, at the very least, declare them inline and make the getter const so you keep const correctness.
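The intermediate-variable case in code, roughly - a lazily recomputed length cached behind setters (names and the specific caching policy are made up):

#include <cmath>

class segment {
public:
    // Setters invalidate the cached value instead of recomputing it eagerly.
    inline void set_start(float x, float y) { sx = x; sy = y; dirty = true; }
    inline void set_end(float x, float y)   { ex = x; ey = y; dirty = true; }

    // The getter recomputes only when a component variable has actually changed.
    inline float length() const
    {
        if (dirty)
        {
            cached = std::sqrt((ex - sx) * (ex - sx) + (ey - sy) * (ey - sy));
            dirty  = false;
        }
        return cached;
    }

private:
    float sx = 0, sy = 0, ex = 0, ey = 0;
    mutable float cached = 0;
    mutable bool  dirty  = true;
};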

 



#5285577 Input Polling - Low Latency API

Posted by irreversible on 07 April 2016 - 07:40 AM

 

this is the single keyboard input routine i use in all my games:
 
 
// returns 1 if vkey is pressed, else returns 0
int Zkeypressed(int vkey)
{
    SHORT a;
    a = GetAsyncKeyState(vkey);
    if (a & 0x8000) return 1;
    return 0;
}
 
 
As I recall, GetAsyncKeyState is powered by hardware interrupts (i.e. it reads key state at a very low level from a data structure maintained by the OS and updated whenever a keyboard hardware interrupt occurs), so no latency.
 
I typically poll at 15 Hz minimum. That's about as slow as you can go and still be sufficiently responsive.

 

 

This might be enough for slow games, but if you need to catch double-clicks or fast input with precise timing in general (eg for an FPS or an RTS), then you're looking at sub-10 millisecond intervals, which you can't trap with low-frequency polling. Even 60 Hz translates to 16.6 ms per tick, which will cause your game to lack responsiveness. This is doubly problematic if you need to respond to key-up events.

 

Respond to WM_CHAR/WM_KEYDOWN/WM_LBUTTONDOWN and their key-up counterparts instead.
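For completeness, a minimal sketch of trapping those messages in the window procedure, assuming a standard Win32 message loop - the OnKey()/OnMouseButton() handlers are hypothetical, and GetMessageTime() is used so the logic thread can still order events precisely:

#include <windows.h>

// Hypothetical handlers living in the game's input layer.
void OnKey(int vkey, bool down, long timeMs);
void OnMouseButton(int button, bool down, long timeMs);

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
        case WM_KEYDOWN:
            // wParam is the virtual key code; GetMessageTime() gives the time
            // the event was posted, independent of when the game loop runs.
            OnKey((int)wParam, true, GetMessageTime());
            return 0;
        case WM_KEYUP:
            OnKey((int)wParam, false, GetMessageTime());
            return 0;
        case WM_LBUTTONDOWN:
            OnMouseButton(0, true, GetMessageTime());
            return 0;
        case WM_LBUTTONUP:
            OnMouseButton(0, false, GetMessageTime());
            return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}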

 

That being said, if you don't need that level of precision, synchronization between the input and logic threads can be bothersome, and GetAsyncKeyState() lets you avoid it.




#5283533 Naming conventions, software documentation and version control

Posted by irreversible on 26 March 2016 - 04:07 AM

I used to roll with a variant of the Hungarian Notation, but then I realized I live in the future where we have IDEs that show me this stuff and I changed my style to something more easily readable.

 

 

On a separate note, though - over time I've fallen in love with namespaces. Converting a function like ConvertImageFormat() to image::convert_format() is better encapsulated, less ambiguous, more intuitive and in many cases faster to type with auto-complete. Namespaces disambiguate things, so you can keep your names short while selectively enabling shorthand access to whatever functionality you need.

 

For instance, my vector class functionality is kept in namespaces like math::vec3_ops::, math::vec2_ops::, etc, which are populated with your usual normalize(), length() and similar functions. This forms a clear separation between the different types of operations performed on vectors, planes, etc. So the following becomes possible:

namespace math {
   namespace vec3_ops {
      real length(const vec3& v) { ... }
   }

   namespace vec2_ops {
      real length(const vec2& v) { ... }
   }

   namespace vec {
      using namespace ::math::vec2_ops;
      using namespace ::math::vec3_ops;
   }
}


void somefunc(const math::vec2& v2, const math::vec3& v3)
{
   real len0 = math::vec::length(v2);
   real len1 = math::vec::length(v3);
}


// Or if I'm using a lot of math functionality and I know I'm not mixing my code with something like GLM,
// I can do:

using namespace math;
using namespace math::vec;

void somefunc(const vec2& v2, const vec3& v3)
{
   real len0 = length(v2);
   real len1 = length(v3);
}

 



#5283099 How can I locate a memory leak?

Posted by irreversible on 24 March 2016 - 04:39 AM

I didn't read through the entire thread, but I'll chime in on how I'm doing it.

 

If you're looking for a more hardcore solution, then implementing your own memory manager is the most flexible way to go (I heartily suggest this, although it might not be worth it, depending on your project).

 

My manager pools allocations per thread at fixed page sizes (larger allocations get their own page). To help locate leaks and overruns, I'm:

 

  • managing my own trace functions, which can step through the allocation table and tell me exactly how much was allocated and where it was allocated. Keeping track of this data is priceless, yet pretty much hassle-free AND free performance-wise. I'm hiding all this by allocating via two simple macros: New() and NewArray(). You can achieve the same result by overriding new and delete, but either way you're going to have to implement some sort of record pool, which has a pretty fixed complexity cost in a multithreaded environment (eg you can't really cheat and the cost is high either way, since it's difficult to make it transparent using TLS; plus, you have to take care of other miscellanea, like not recursively generating new entries, etc).
  • all allocations are managed via smart pointers, which guarantee cleanup to a large degree (although you can always dangle a pointer if you try hard enough). This also opens up avenues for more optimizations (eg pooling small/local allocations separately to minimize time spent on fragmentation map management), although that's probably overthinking it in most cases.
  • all my allocations have implicit alignment that I no longer need to worry about. Memory has been cheap for a decade now, so wasting a couple of dozen bytes per allocation is fine.
  • I'm not using a linked list like most if not all OS pools do, but instead a set of linear allocation tables (separated into pages), where each allocation record has a fixed size. This allows lookup in linear time even when fragmentation is high. Moreover, I can quickly locate a suitable place for an allocation by keeping track of the largest available free segment, the first and last free address and how much total memory is available in each page. Then again, keeping fragmentation low is something you can help on the application end. This alone results in almost constant time allocation in a vast majority of cases. Deallocation is always constant time.
  • and finally, this is the best part: by overallocating each object (which is already implicit in many cases due to forced alignment) and filling the padding with a magic value, I can perform over- and underflow (writing out of bounds) diagnostics. Suppose at one point memory becomes corrupt. I can't pinpoint where it happens, but I can break while debugging and see where it is already corrupt. By looking at the stack trace I can narrow down where the corruption might be from. I can then insert a fairly simple CheckStack() call every few lines, which checks the magic values in all memory pages - if it detects a byte with a non-magic value within overallocated space, I'll know with almost 100% certainty where the write violation occurred. Then it's only a matter of looking at the code. This alone has saved me likely tens if not hundreds of hours of debugging time over the past couple of years (a toy version of this trick is sketched after these bullets).

 

  • bonus: I always know exactly how much memory I've allocated and I can visualize this data in linear time. All I have to do is make sure to never use raw allocations, which sadly forced me to write my own STL allocator and smart pointer wrappers (which sounds easy, but is not, if you have this kind of a memory manager).
  • bonus 2: it's trivial to enable or disable any behavior depending on whether I want to milk every last bit of performance out of the manager or not.
  • caveat: since the allocation record pool is fixed-size and can therefore maintain only so many entries, balancing the size of the memory pool and record pool becomes a matter of heuristics: many small allocations can overflow the record pool, leaving a part of the memory pool empty; large allocations, in turn, can fill out the memory pool, but leave the record pool unusable. This necessarily results in wasted memory. However, by setting the size of each page to something non-trivial, yet non-offending (eg 4-10 MB and balancing the size of the record pool accordingly to something like a few hundred kB), it's possible to effectively "distribute" the load between what the OS provides me with and my own manager, since any allocations larger than the fixed page size (4-10 MB in this example) get their very own specialized page, which is essentially a custom-wrapped raw allocation.
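The toy version of the padding trick mentioned above - the magic value, padding size and the idea of a CheckStack()-style sweep come from the description; guarded_alloc()/check_guards() and the constants are made up for illustration:

#include <cstdint>
#include <cstring>

constexpr uint8_t kMagic   = 0xFD;  // value written into the padding
constexpr size_t  kPadding = 16;    // over-allocation on each side

// Allocate size bytes with guard bands before and after the user region.
// (Freeing would be: delete[] (static_cast<uint8_t*>(user) - kPadding);)
void* guarded_alloc(size_t size)
{
    uint8_t* raw = new uint8_t[size + 2 * kPadding];
    std::memset(raw, kMagic, kPadding);                     // front guard
    std::memset(raw + kPadding + size, kMagic, kPadding);   // back guard
    return raw + kPadding;
}

// The equivalent of a CheckStack() sweep for a single allocation: returns
// false if either guard band has been stomped by an out-of-bounds write.
bool check_guards(void* user, size_t size)
{
    uint8_t* raw = static_cast<uint8_t*>(user) - kPadding;
    for (size_t i = 0; i < kPadding; ++i)
        if (raw[i] != kMagic || raw[kPadding + size + i] != kMagic)
            return false;
    return true;
}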

 

As a sidenote: writing the memory manager took me a few days. Testing it took several months. Making it reliably work with STL took many headaches.




#5281852 casting double* to float*

Posted by irreversible on 18 March 2016 - 07:59 AM

I'm with L. Spiro on this. I don't feel there is a place for any C-style cast in C++. It's far too easy to do something really bad (like casting a float* to a double*) by accident. If a static_cast won't do and that is in any way unexpected to you, you can think the issue through thoroughly and decide whether the whole thing is really a good idea (in most cases, it is not). A C cast will just do its thing.
I also find it significantly easier to spot C++ casts (and their intention) in code I'm reading.
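A quick illustration of the kind of accident meant above - the C cast compiles silently, while the C++ casts either refuse to compile or make the intent impossible to miss:

void cast_example()
{
    float  f  = 1.0f;
    float* pf = &f;

    double* pd1 = (double*)pf;                    // compiles silently, reinterprets the pointer
    // double* pd2 = static_cast<double*>(pf);    // error: unrelated types, caught at compile time
    double* pd3 = reinterpret_cast<double*>(pf);  // compiles, but the intent is right there in the code

    double oops = *pd1;                           // undefined behaviour: reads past the float
    (void)pd3; (void)oops;
}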

 
Some studios actually ban C++ style casts because they are not as concise as C style.

I think I just found my new favorite coding guideline from that FAQ.

If you believe a warning to be spurious, it may be acceptable to turn it off under certain circumstances. Before you do please talk to someone who knows about these things.





