
irreversible

Member Since 10 Nov 2005

#5296064 what was the first video game?

Posted by irreversible on 11 June 2016 - 07:22 AM

A must watch for any video game enthusiast:

https://m.youtube.com/watch?v=QyjyWUrHsFc


#5295291 Identifying polygons of one mesh inside another mesh?

Posted by irreversible on 06 June 2016 - 10:43 AM

Generally you'd need to do a bunch of vertex-in-mesh tests, which can be of wildly different cost (though not necessarily complexity) depending on what kind of geometry you're dealing with (e.g. whether it is concave or convex). If I had to do what you outlined myself and I was using something like PhysX, I'd generate low resolution collision meshes that are "good enough" and let the physics API do the tests for me (hopefully on the GPU). PhysX returns points of intersection, which you could then use to work out the offending faces and conduct more thorough tests. Alternatively you could roll your own BVH of convex collision meshes to quickly approximate which faces are likely the offending ones and then do more elaborate tests.
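
For reference, here's a minimal sketch of the basic building block for those vertex-in-mesh tests in the convex case; Vec3, Plane and the epsilon are illustrative, not from any particular API. A point is inside a convex mesh if it lies behind every one of its face planes.

#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // outward normal n; points p on the plane satisfy dot(n, p) + d == 0

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool pointInConvexMesh(const Vec3& p, const std::vector<Plane>& faces, float eps = 1e-5f)
{
    for (const Plane& f : faces)
        if (dot(f.n, p) + f.d > eps)   // in front of any face plane => outside
            return false;
    return true;
}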

 

That being said, if I understand your description correctly, what you mean by clipping is essentially Z-fighting. In that case (and I don't know the first thing about the format you're working with), have you tried good old polygon offsetting, or simply scaling up the clothing mesh by a small margin? :) If this is what's going on, then removing some faces will not fix your problem, because your clothing and body geometries probably do not have perfectly overlapping faces, so you would either punch holes in your body mesh or leave faces that are still partially Z-fighting the clothing.

 

Depending on a couple of factors, such as the type of clothing and the hardware you're targeting, a somewhat more general approach is to render your character model in two passes: first the base mesh and then the clothing mesh, either with depth testing disabled or, if reading from the depth buffer is not an option, by emulating a depth bias in the shader (e.g. if what your favorite API provides out of the box is not sufficient).
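
Here's a rough OpenGL-flavoured sketch of that two-pass idea combined with a fixed-function polygon offset; drawMesh(), bodyMesh and clothingMesh are hypothetical stand-ins for your own renderer. The clothing pass is pulled slightly toward the camera so it wins the depth test instead of Z-fighting the body underneath.

void drawCharacter()
{
    drawMesh(bodyMesh);                    // pass 1: base body, normal depth test

    glEnable(GL_POLYGON_OFFSET_FILL);      // pass 2: clothing, biased toward the viewer
    glPolygonOffset(-1.0f, -1.0f);
    drawMesh(clothingMesh);
    glDisable(GL_POLYGON_OFFSET_FILL);
}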




#5295206 Shape interpolation?

Posted by irreversible on 06 June 2016 - 12:22 AM

Are you absolutely sure you need to do this with actual geometry? How complex is the topology?

 

Morphing in 2D is a lot easier. If you can get away with a screen space transform, you'd be much better off doing that and crossfading the final few frames to fully transition to the target mesh. Morphing still requires identification of features as a preparation step, so you'd have to extract either a silhouette or dominant features (e.g. the eyes, nose and mouth in the case of a monster) via projection. That being said, I can't imagine morphing alone being viable if your transition is slow and there are substantial changes in lighting, or if the shapes in question are wildly different and/or complex.

 

By the way, I wouldn't discount some form of a screen space/texturing cheat (pdf, ~4.5 MB) as inferior, in particular if it is combined with actual topological changes.

 

 

 

Does anyone know of a 3D modelling program or open-source library that can take two arbitrarily shaped 3D meshes, that don't necessarily have the same vertex counts, and transform one mesh into the other, without adding or removing vertices?

 

This is impossible without creating very fine-tuned simplifications of both the source and target meshes. Matching their energy (pdf file, ~2 MB) is not necessarily trivial, and as far as I know targeting a specific number of faces can be done, but it will potentially result in a very slow simplification step or in an unevenly simplified mesh.
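
To put that in perspective: once a one-to-one vertex correspondence does exist, the blend itself is the trivial part; it's establishing that correspondence (simplification, feature matching, etc.) that is hard. A minimal sketch of the easy part, assuming an illustrative Vec3 type and matching vertex counts:

#include <vector>

struct Vec3 { float x, y, z; };

void morph(const std::vector<Vec3>& src, const std::vector<Vec3>& dst,
           float t, std::vector<Vec3>& out)
{
    out.resize(src.size());               // src and dst must be the same size
    for (size_t i = 0; i < src.size(); ++i)
    {
        out[i].x = src[i].x + (dst[i].x - src[i].x) * t;   // t in [0, 1]
        out[i].y = src[i].y + (dst[i].y - src[i].y) * t;
        out[i].z = src[i].z + (dst[i].z - src[i].z) * t;
    }
}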

 

 

As an alternative option you might consider a skeletal approach, which relies on a small number of intermediate ("shared") bind pose configurations that are easy to blend between and easy to transition to.

 

Also consider glitch effects to hide the change. Things like sucking the source mesh into a point while growing the target mesh. When it comes to transitions, it's all about speed and collateral fidelity. Can you add smoke or dust to hide the transition? Can you put the meshes in motion to disguise the switch? Some screen shake? The transformers in The Transformers don't really have the amount of gadgetry you can see on screen - they just use perspective, motion and the environment to hide the fact that almost none of the morphing process makes any sense.




#5290948 What is the best way to update a Quadtree/Octree?

Posted by irreversible on 10 May 2016 - 05:50 AM

You could use a 1D or 2D hash for the lowest level of detail and index a cache in which you've allocated your QT/OT nodes directly.

Not only does maintaining a semi-duplicate linear data structure (for each level) help with simple updates, it is much more ergonomic in terms of memory management and most updates/neighbor lookups are bound to be MUCH more cache friendly.

Personally I feel like linear grids are almost always the way to go. If you do need coarse control, it makes more sense to mentally adjust how you think about a quadtree and treat it as a set of tiered linear arrays instead.
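
Here's a minimal sketch of the "tiered linear arrays" idea; the Node contents and the cell sizing are illustrative. The finest level is just a flat array indexed by a trivial 2D hash of the position, so updates and neighbor lookups stay cache friendly.

#include <vector>

struct Node { /* whatever a quadtree cell stores */ };

struct FlatLevel
{
    int   cellsPerSide = 0;     // e.g. 1 << depth
    float cellSize     = 1.0f;  // worldSize / cellsPerSide
    std::vector<Node> nodes;    // cellsPerSide * cellsPerSide entries

    Node& at(float x, float y)
    {
        int cx = static_cast<int>(x / cellSize);
        int cy = static_cast<int>(y / cellSize);
        return nodes[cy * cellsPerSide + cx];   // neighbors are just +/-1 on cx/cy
    }
};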


#5289533 Do you ever get this?

Posted by irreversible on 01 May 2016 - 01:59 AM

When

So yeah - do you ever get this?

 
Not so similar, but... here it is anyway
 
I've had to write algorithms for triangles of different shapes lately, from normal to weird
 
Every time I thought I had written my algorithm to cover all the weirdly shaped triangles, another, weirder one appeared and broke the code.
And then after a while super weird triangles began to appear that broke the code - triangles that are effectively a single line, because you can draw one straight line through all 3 points
 
I guess I probably hadn't read my books well enough, because I think veterans have standard ways of dealing with these weird shapes :lol:

I discovered the plentiful nuances of computational geometry firsthand when I wrote my triangulator. It took me months to iron out all the kinks and handle all special cases, and I'm sure there are still a few around that I simply haven't come across :).

A triangulator now seems like the perfect school assignment for students who think they know better. There's always that one special T junction, degenerate vertex or empty triangle they can't think of :)
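
Incidentally, those "single line" triangles are the zero-area (collinear) case. Here's a sketch of the usual test for them, with an epsilon to also catch nearly degenerate ones; the Point type is an assumed 2D type, not from any particular library.

#include <cmath>

struct Point { double x, y; };

bool isDegenerate(const Point& a, const Point& b, const Point& c, double eps = 1e-12)
{
    // Twice the signed area of the triangle; (near) zero means the points are collinear.
    double area2 = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    return std::fabs(area2) < eps;
}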


#5289382 Do you ever get this?

Posted by irreversible on 30 April 2016 - 12:19 AM

Every now and then I come across the need to refactor or properly integrate some older code. I usually encapsulate my experimental stuff fairly well, so this is mostly reduced to just checking the code, updating some data structures and adjusting the logic from a sandbox to a production grade environment.

 

Every now and then things get weird, though.

 

Occasionally (and by occasionally I mean "on numerous occasions") I discover a very obvious bug in the old code that completely breaks the new code in a way that makes perfect sense. However, for some reason it used to work before without ever failing. Not once. This is usually code that has not been thoroughly tested (or rather hasn't been tested in the real world) and tends to look like a mess. Although there are occasions when these kinds of curiosities also show up in fairly well tested (or sometimes the most thoroughly tested) code, as shown below.

 

Right now I'm in the process of integrating some fairly involved navmesh generation stuff I wrote about two years ago, and when I ran it with exactly the same data set I originally wrote and debugged it with, it fell over. After stepping through it I found three pretty obvious mistakes, including the following loop:

 

{
    uint32 cx = v->x;
    for (;;)
    {
        grid[(v->y / iBinSize) * gw + (cx / iBinSize)]->vertices.push_back(v);
        if (cx <= v->next->x)
            break;
        cx -= iBinSize;
    }
}

 

This will crash if cx goes below zero (or rather, being unsigned, wraps around to a huge value), because it is used to index a grid element. And for a given dataset it should crash every single time. Yet somehow, through some bit of sheer dark magic, it didn't in the past. Under seemingly identical circumstances.
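
For anyone who hasn't been bitten by this yet, here's a tiny self-contained illustration of the wrap-around (not the original code, just the arithmetic): an unsigned 32-bit value can't go negative, so it wraps to a huge number, which then gets used as an array index.

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t cx = 3;
    cx -= 10;                    // "negative" wraps to 4294967289
    std::printf("%u\n", cx);     // prints 4294967289 - a catastrophic grid index
    return 0;
}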

 

Similar oddities I've come across include forgetting to create a GL texture, yet through some backwards circumstance the same image is loaded in a completely unrelated part of the engine and everything works "correctly" for a VERY long time; or things like uninitialized variables, very obvious overflows, etc.

 

The biggest one I can think of was a tracer related issue in my previous code, which extended way back to 2005 when my code wasn't thread-aware. The tracer essentially used a static local to store parts of the stack, which at one point, as my code became increasingly more multithreaded, started to cause extremely rare random crashes, which didn't really show up in the debugger. So it took me about seven years before I uncovered this monstrosity.

 

All this makes me wonder - what other demons am I burying in my code right now that I will completely forget and will come back to haunt me in several years...

 

So yeah - do you ever get this?




#5288457 a mechanism to fill in for templated overloads

Posted by irreversible on 24 April 2016 - 11:09 AM

Okay, here's one way to do it. I'm guessing it's not the most elegant method, but it does work. In short, the idea is to get rid of the push_arg() specializations inside member functions and set up a layer of indirection that passes the arguments to the correct overload. I reckon a snippet will be more illustrative.

 

scriptengine_base.h

// forward-declare an abstract type translator
template <typename T>
void push_arg(IN T arg, IN scriptengine* eng);
 
 
class scriptengine {
   protected:
     // do as before, but this time the per-type push_arg() member overloads are
     // missing; the single-argument case is forwarded to the global translator
     // (qualified with :: so this member template doesn't hide it)
     template <typename T, typename ...Args>
     void push_arg(IN T arg, IN Args ...args)
       { ::push_arg(arg, this); push_arg(args...); }

     void push_arg() {} // terminates the recursion once the pack is empty
};
 

The overloaded engine class remains the same. The changes are instead added to the application code as translator specializations:

template <>
void push_arg<IActor*>(IN IActor* arg, IN scriptengine* eng)
{
    luaengine* luaEng = static_cast<luaengine*>(eng);
    luaEng->push_arg(arg);
}
 
template <>
void push_arg<const char*>(IN const char* arg, IN scriptengine* eng)
{
    luaengine* luaEng = static_cast<luaengine*>(eng);
    luaEng->push_arg(arg);
} 
 
template <>
void push_arg<float>(IN float arg, IN scriptengine* eng)
{
    luaengine* luaEng = static_cast<luaengine*>(eng);
    luaEng->push_arg(arg);
}
 
// ... and so forth for all supported data types

It's actually really simple, but in all honesty I'd been pondering on and off for several weeks how to pull it off.
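
For completeness, here's roughly where this ends up being used. Everything below is a purely hypothetical sketch - call(), begin_call() and end_call() are invented names, not part of the snippets above - showing the flow: a variadic entry point on the engine expands its arguments through push_arg(), each argument hits the global translator, and the translator downcasts to luaengine and calls the concrete overload.

// hypothetical member of scriptengine
template <typename ...Args>
void call(const char* function, IN Args ...args)
{
    begin_call(function);       // hypothetical: look up / prepare the script function
    push_arg(args...);          // each argument is routed through its translator
    end_call(sizeof...(args));  // hypothetical: invoke with the pushed arguments
}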

 

Working with templates is, more often than not, like dancing a tango on a melting raft made of ice. With a polar bear. There's usually a solution, but first you have to do something about the polar bear.




#5287158 Anyone here a self-taught graphics programmer?

Posted by irreversible on 16 April 2016 - 02:43 AM

I'm also pretty much all self-taught. I started toying with QBasic just about when it was on its way out, but then arbitrarily picked up this book, felt my way around my first IDE (the Borland one) and started learning more serious stuff. I remember arrays and the order of logic operators being strangely elusive when I was just starting out :).

 

Eventually I created my first window in WinAPI and thought "now that I have a window, the rest is easy-peasy". Boy, was I wrong  :P .

 

Like many, I took up OpenGL with the help of NeHe's tutorials and continued reading books, all in my spare time, without really giving up a social life.

 

Eventually I found myself facing the same choice all people face when they graduate from high school. With no clear idea of what I wanted to do, I enrolled in IT, with an emphasis on software development. But then two strange things happened: as I took more and more courses, it turned out I knew too much for the school I went to, but too little to actually spearhead a sizeable project on my own; what was far more concerning was the feeling that I didn't want to labor away for someone else's dreams. So I graduated and enrolled in a completely different field, all while I started structuring my code into something a bit more cogent, which eventually turned into something that started to smell and feel like an engine. As a hobbyist, I find it surprisingly difficult to keep myself focused on a single objective, so pretty much all parts of my game engine are in very active development. Although, after years and years of rewriting and trying out different things, many of them are really starting to fall into place. Which feels awesome. :)

 

As time goes on, I do feel the sting of a lack of further formal education and the absence of the discipline of a professional work place in the back of my head. But I'm still not old enough to give up my dreams :).

 

So yeah - I don't have anything like L. Spiro's video to show for it, but what I can say with confidence is that I've studied and built a lot of stuff that I'm personally really really proud of. And again, that feels awesome:cool:




#5287156 Computing an optimized mesh from a number of spheres?

Posted by irreversible on 16 April 2016 - 02:10 AM

Incidentally - if what you're looking for is a blob-like approximation of your spheres, not an exact union, and you're only dealing with spheres, then it might not hurt to look into metaballs.
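
For the record, the classic metaball formulation is very simple; the sketch below is illustrative (the falloff choice and names aren't from any particular library). Every sphere contributes a falloff term to a scalar field, and the blobby surface is wherever the summed field crosses a threshold (extracted with marching cubes or similar).

#include <vector>

struct Sphere { float cx, cy, cz, radius; };

float fieldAt(const std::vector<Sphere>& spheres, float x, float y, float z)
{
    float field = 0.0f;
    for (const Sphere& s : spheres)
    {
        float dx = x - s.cx, dy = y - s.cy, dz = z - s.cz;
        float d2 = dx * dx + dy * dy + dz * dz + 1e-6f;   // avoid division by zero
        field += (s.radius * s.radius) / d2;              // simple inverse-square falloff
    }
    return field;   // the surface is the isocontour, e.g. fieldAt(...) == 1.0f
}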




#5287153 Computing an optimized mesh from a number of spheres?

Posted by irreversible on 16 April 2016 - 02:03 AM

Hey.

 

I have this idea of making a cool terrain system. I haven't been able to formulate a good Google search for it... The idea is to define a number of intersecting spheres that together form a solid mesh. However, I'm having trouble figuring out how to convert such a group of spheres to a solid mesh. So, my question is:

 

Given two or more intersecting spheres, how would I compute a solid triangle mesh from them without producing any triangles that end up inside the mesh?

 

Bonus question: The same question as above, but this time it also needs to support "anti-spheres" which instead of forming terrain cut holes in existing meshes.

 

 

I would like to avoid a voxel-based solution due to performance and memory reasons...

 

 

Boolean operations are indeed what you're looking for. However, my advice is to offload CSG ops to either an external library (which you'll have to google yourself) or a CAD program as a preprocessing step, both of which will have far more robust implementations than what you can whip up on your own in a reasonable amount of time. CSG has a lot of edge cases, and care must be taken to avoid degeneracies and precision issues. I know this because I've gone down the other road and actually implemented my own CSG routine  :ph34r: .

 

Now, if you do find yourself itching to do this on your own, then prepare for A LOT of testing and wrinkling your nose as your edge cases fail :P . Of the three primary approaches (the stencil buffer approach, which gives you no resulting geometry; the BSP-tree based approach, which blows up quickly in terms of speed and memory footprint as you feed it more complex objects; and the polygon clipping approach), you'll want to look into the last one, which is the most stable and can handle large meshes, as painstakingly outlined in this paper by Laidlaw back in 1986. I do believe I encountered one mistake in the paper, though. However, I can't recall what it was and I don't have access to my code right now.

 

The one thing I love about Laidlaw's polygon-based approach is that the operations on each mesh are atomic and can thus be batched off to different threads. That is, for all operation types you're essentially looking at the following scheme: 1) clip and classify the polygons of A against B; 2) clip and classify the polygons of B against A; 3) perform a simple discard of the data you do not need (depending on which operation you're performing) and merge as needed. Steps 1 and 2 are completely separable and can be performed in parallel based only on the input meshes. Step 3 requires synchronization, but it's cheap. The bad thing is that you probably won't be able to use your existing mesh data structures for it. Good luck :).
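
Here's a hedged sketch of that batching idea. Mesh, CsgOp, clipAndClassify() and mergeAndDiscard() are hypothetical placeholders rather than Laidlaw's actual API; the point is simply that steps 1 and 2 only read the input meshes, so they can run on separate threads, and only the cheap step 3 has to wait for both.

#include <future>

struct Mesh { /* polygons, adjacency, classification flags, etc. */ };
enum class CsgOp { Union, Intersection, Difference };

Mesh clipAndClassify(const Mesh& subject, const Mesh& clip);    // steps 1 and 2, per mesh
Mesh mergeAndDiscard(const Mesh& a, const Mesh& b, CsgOp op);   // step 3

Mesh csgOperation(const Mesh& a, const Mesh& b, CsgOp op)
{
    auto aAgainstB = std::async(std::launch::async, [&] { return clipAndClassify(a, b); });
    auto bAgainstA = std::async(std::launch::async, [&] { return clipAndClassify(b, a); });

    return mergeAndDiscard(aAgainstB.get(), bAgainstA.get(), op);
}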




#5287088 Is inheritance evil?

Posted by irreversible on 15 April 2016 - 01:43 PM

The key question when facing any problem is: "what are the right tools to solve this?"

You don't kill a fly with an elephant gun. But sometimes. Just sometimes. You find yourself up against an elephant.


PS - it's a goddamn metaphor.


#5286933 c++ should nested switch statements be avoided?

Posted by irreversible on 14 April 2016 - 04:16 PM

Just because one can, does not necessarily mean one should.


#5286186 Question just to clarify my game structure

Posted by irreversible on 10 April 2016 - 02:41 PM

For starters, there's no "correct" way to do most things. As a game developer there are three things you want:

 

  • stable code (meaning no crashes)
  • [decently] fast code (meaning it hits your FPS target)
  • and good gameplay (meaning your game is responsive and fun to play)

How you go about it is in many cases irrelevant. Unreal Engine is one of the more popular game engines right now, and just look at the treatment it gets when you brush away the flashy topsoil. Despite its popularity, it's not their way or the highway. And it will never be :).

 

That being said, what you really want to achieve is for your code to do what you want it to do. If it does that and the above three points are covered, then move on. Come back to it if you feel it's not quite what you want or you get a better idea how to do it. If the code doesn't do what you need it to, then analyze it further and try to figure out why it doesn't do that. Notice that the stress here is on what you want it to do :). That's because it's your game and the question you posed cannot really be answered by anyone else.

 

I could make remarks about the redundant comment, the superfluous else case, the fact that you call your player a rectangle (which might be perfectly okay depending on what your game is - especially if your player is a rectangle), or the fact that you have a typo on the line "if(k == e.VK_J && damageDetected = true)" (= where == was meant). But at the end of the day none of that would be answering your question :).




#5285792 Should getters and setters be avoided?

Posted by irreversible on 08 April 2016 - 07:45 AM

Like Samoth said, while in many (or, in my experience, most) cases it results in bloat and overengineering, using getters and setters is, in some cases, necessary. I personally use them only occasionally, when I am absolutely sure that is the type of encapsulation I need. Basic getters and setters do not provide thread safety, they do not result in cleaner code, and in my view they do not provide much protection against accidentally writing into a raw member variable. Unless... you have a reason to classify access to these member variables, that is.

 

A somewhat more interesting scenario is providing a separate accessor structure, which then befriends only those classes that should have access to your base structure. E.g.:

struct base {
   private:
     friend struct base_accessor;

     int a;
};

struct base_accessor {
   private:
     friend struct interface1;
     friend struct interface2;

     // private on purpose: only the friends above can call these
     inline void set_a(base& b, int a) const { b.a = a; }
     inline int get_a(base& b) const { return b.a; }
};

Now only interface1 and interface2 have access to base::a, and they need to go through an instance of base_accessor to get it. My engine code uses this to keep certain components from accidentally writing into a class, or to limit accessibility to a given thread. Also, this way you can separate the write interface from the read interface by defining a base_writer and a base_reader, should you need to.

 

Also, getters and setters are invaluable when you're maintaining an intermediate variable that only needs to be updated when one of its component variables changes. If you do use them on raw variables, you should, at the very least, declare them inline and make the getter const so you keep const correctness.
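
A small sketch of that intermediate-variable case (the Transform type and the matrix layout here are illustrative): the setters flag the cached matrix as dirty, and the getter rebuilds it lazily, only when something actually changed.

struct Transform
{
    void setPosition(float x, float y, float z) { px = x; py = y; pz = z; dirty = true; }
    void setScale(float s)                      { scale = s; dirty = true; }

    const float* getMatrix() const
    {
        if (dirty) { rebuild(); dirty = false; }
        return matrix;
    }

private:
    void rebuild() const
    {
        // compose a column-major scale + translation matrix from the components
        for (int i = 0; i < 16; ++i) matrix[i] = 0.0f;
        matrix[0] = matrix[5] = matrix[10] = scale;
        matrix[12] = px; matrix[13] = py; matrix[14] = pz; matrix[15] = 1.0f;
    }

    float px = 0, py = 0, pz = 0, scale = 1;
    mutable float matrix[16] = {};
    mutable bool  dirty = true;
};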

 



#5285577 Input Polling - Low Latency API

Posted by irreversible on 07 April 2016 - 07:40 AM

 

this is the single keyboard input routine i use in all my games:
 
 
// returns 1 if vkey is pressed, else returns 0
int Zkeypressed(int vkey)
{
    SHORT a = GetAsyncKeyState(vkey);
    if (a & 0x8000) return 1;
    return 0;
}
 
 
As I recall, GetAsyncKeyState is powered by hardware interrupts (i.e. it reads key state at a very low level from a data structure maintained by the OS and updated whenever a keyboard hardware interrupt occurs), so no latency.
 
I typically poll at 15 Hz minimum. It's about as slow as you can go and still be sufficiently responsive.

 

 

This might be enough for slow games, but if you need to catch double-clicks or fast input with precise timing in general (e.g. for an FPS or an RTS), then you're looking at sub-10-millisecond intervals, which you can't reliably trap with low-frequency polling. Even 60 Hz translates to 16.6 ms per tick, which can make your game feel unresponsive. This is doubly problematic if you need to respond to key up events, since a quick press and release can both fall between two polls and be missed entirely.

 

Respond to WM_CHAR/WM_KEYDOWN/WM_LBUTTONDOWN and their up versions instead.
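
Here's a minimal Win32-style sketch of that message-based route; keyState is just an illustrative buffer, use whatever your input system expects. Because key transitions arrive as events, a quick press and release can't slip between two logic ticks.

#include <windows.h>

static bool keyState[256] = {};

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
        case WM_KEYDOWN:     keyState[wParam & 0xFF] = true;  return 0;
        case WM_KEYUP:       keyState[wParam & 0xFF] = false; return 0;
        case WM_LBUTTONDOWN: /* record the press, e.g. with GetMessageTime() */ return 0;
        case WM_DESTROY:     PostQuitMessage(0); return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}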

 

That being said, unless you need that kind of precision, the synchronization between the input and logic threads that message-based input requires can be bothersome, and GetAsyncKeyState() avoids it.





