In my system horizontal merging is a lot more important than vertical because the visibility system is essentially 2D. Every object has an upper and a lower bound so they can be included/excluded in the visibility calculations as the observer moves up and down, but the construction of the line-of-sight geometry is all done in the X-Z-plane (Y is up in my engine).
So I came up with this algorithm (dunno if this is an established way of doing things, but it is working fine for me):
Get a list of objects that we consider for merging, called the "main list" below
Pick the first object in the list, called "reference box", or "ref box" below
Check if the shape of the object is a box; if not, we cannot merge it, so just submit it as-is
If the shape is a box, we will try to merge it with other boxes
Get a new list, called the "merge list", containing all objects except the ref box
Iterate over all objects in the merge list and process those that are boxes, sorted by distance to the ref box (closest first)
For each box, check if the upper and lower bounds are the same as the ref box's; if not, we cannot merge
If the bounds match the ref box, calculate 2D convex hulls for both shapes in the X-Z-plane, then calculate the areas of the convex hulls
Calculate the convex hull of the combined shape, then calculate the area of that convex hull
Check if the sum of the original areas of the convex hulls roughly equals the new combined area
If the areas are roughly equal (i.e. the combined hull adds no significant dead space), we can merge the boxes together
Continue with the rest of the objects in the merge list, attempting to merge each one with the ever-growing ref box
Remove all processed objects from the main list
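The core merge test from the steps above can be sketched as follows. This is not my exact implementation, just a minimal illustration of the idea: compute 2D convex hulls (here via Andrew's monotone chain) in the X-Z plane, and allow the merge only when the combined hull's area roughly equals the sum of the individual hull areas. The names (`P2`, `canMerge`) and the tolerance value are made up for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct P2 { double x, z; };  // footprint point in the X-Z plane

static double cross(const P2& o, const P2& a, const P2& b) {
    return (a.x - o.x) * (b.z - o.z) - (a.z - o.z) * (b.x - o.x);
}

// Andrew's monotone chain; returns the hull in counter-clockwise order.
std::vector<P2> convexHull(std::vector<P2> pts) {
    std::sort(pts.begin(), pts.end(), [](const P2& a, const P2& b) {
        return a.x < b.x || (a.x == b.x && a.z < b.z);
    });
    std::vector<P2> h(2 * pts.size());
    size_t k = 0;
    for (size_t i = 0; i < pts.size(); ++i) {                 // lower hull
        while (k >= 2 && cross(h[k - 2], h[k - 1], pts[i]) <= 0) --k;
        h[k++] = pts[i];
    }
    for (size_t i = pts.size() - 1, t = k + 1; i > 0; --i) {  // upper hull
        while (k >= t && cross(h[k - 2], h[k - 1], pts[i - 1]) <= 0) --k;
        h[k++] = pts[i - 1];
    }
    h.resize(k - 1);
    return h;
}

// Shoelace formula for the area of the hull polygon.
double hullArea(const std::vector<P2>& h) {
    double a = 0.0;
    for (size_t i = 0; i < h.size(); ++i) {
        const P2& p = h[i];
        const P2& q = h[(i + 1) % h.size()];
        a += p.x * q.z - q.x * p.z;
    }
    return std::fabs(a) * 0.5;
}

// Mergeable when combining the footprints adds no significant "dead" area.
bool canMerge(const std::vector<P2>& a, const std::vector<P2>& b,
              double tolerance = 1e-3) {
    std::vector<P2> both(a);
    both.insert(both.end(), b.begin(), b.end());
    double separate = hullArea(convexHull(a)) + hullArea(convexHull(b));
    double combined = hullArea(convexHull(both));
    return std::fabs(combined - separate) <= tolerance * combined;
}
```

Two adjacent unit squares pass this test (1 + 1 equals the 2×1 combined hull), while two squares with a gap between them fail, because the combined hull covers the empty space in between.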
Using this technique I managed to nicely merge most of the geometry in my test level. Check out the picture below:
The colored boxes represent box objects. The colors are selected from a small palette, so the same color sometimes appears on adjacent objects. From the image you can see that the algorithm works with both world-space axis-aligned objects and rotated ones. You can also see (on the wall of boxes in the lower left corner) that the algorithm does not merge objects vertically.
In the test level above the number of objects decreased from 266 to 45.
Also, it's not really good forum etiquette to remove your questions and replace them with "[solved]". That way no-one else can benefit from the question and answer later on. When your issue is solved, reply to the thread letting other people know what the problem was. Then try to learn from the experience.
The complete navmesh generation is in Sample_SoloMesh::handleBuild(), but you don't necessarily need to do all steps 1-7. Step 2 is concerned with creating the voxel height field. Search for duDebugDrawHeightfieldSolid() to see how the demo app draws the voxels.
Not exactly sure what you mean by squares, but Recast does indeed generate a voxel mold out of the geometry as an intermediate step when generating a nav mesh. I haven't used the voxel mold directly, but I don't see why that wouldn't be possible. Try out the Recast demo with some of your geometry to see if the results look like something you could use. The demo app can draw the voxels for you.
What exactly don't you understand about the generation phase? The samples that come with Recast are very well documented. Check out Sample_SoloMesh for example.
It does not allow you to mess with the loopback interface though, so you basically have to have two boxes (or two physical network interfaces). Some people build this sort of thing right into their network layer. That way you don't need any separate software at all.
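Building the simulation into your own network layer can be as simple as delaying outgoing packets in a queue. A rough sketch of the idea, with all names (`DelayedSender`, `flush`) invented for illustration:

```cpp
#include <cstdint>
#include <deque>
#include <string>
#include <vector>

// Instead of writing to the socket immediately, every outgoing packet is
// stamped with a due time and queued; a per-tick flush releases the packets
// whose artificial latency has elapsed. FIFO order preserves packet ordering.
class DelayedSender {
public:
    explicit DelayedSender(std::uint64_t delayMs) : mDelayMs(delayMs) {}

    // Queue the packet with an artificial delay instead of sending it now.
    void send(const std::string& packet, std::uint64_t nowMs) {
        mQueue.push_back({nowMs + mDelayMs, packet});
    }

    // Call once per network tick; returns the packets that are now due,
    // in send order, ready to be written to the real socket.
    std::vector<std::string> flush(std::uint64_t nowMs) {
        std::vector<std::string> due;
        while (!mQueue.empty() && mQueue.front().dueMs <= nowMs) {
            due.push_back(mQueue.front().data);
            mQueue.pop_front();
        }
        return due;
    }

private:
    struct Pending { std::uint64_t dueMs; std::string data; };
    std::uint64_t mDelayMs;
    std::deque<Pending> mQueue;
};
```

The same hook is a natural place to add simulated packet loss or jitter, since every packet already passes through it.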
You can certainly do this. The rendering API has no idea where the data originally came from (model file, network stream or even defined at runtime).
You just need a way to define each vertex (a vertex buffer) and some way to define the ordering the vertices into triangles (an index buffer) and fill those with the data you want. For texturing, just make sure your vertices have UV coordinates and set the texture you want to use. The exact details of this depend on the engine/framework you are using and on the shaders used.
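To make that concrete, here is a small sketch of defining mesh data entirely at runtime: a textured quad as a vertex array plus an index array. The struct layout and the `buildQuad` helper are assumptions for the example; the real layout must match whatever vertex declaration your shaders expect.

```cpp
#include <cstdint>
#include <vector>

// One vertex: position plus UV coordinates for texturing.
struct Vertex {
    float x, y, z;   // position
    float u, v;      // texture coordinates
};

// The raw data you would upload into a vertex buffer and an index buffer.
struct MeshData {
    std::vector<Vertex>        vertices;
    std::vector<std::uint16_t> indices;
};

// A unit quad in the X-Y plane: four vertices, two triangles,
// UVs covering the whole texture.
MeshData buildQuad() {
    MeshData m;
    m.vertices = {
        {0.0f, 0.0f, 0.0f, 0.0f, 1.0f},
        {1.0f, 0.0f, 0.0f, 1.0f, 1.0f},
        {1.0f, 1.0f, 0.0f, 1.0f, 0.0f},
        {0.0f, 1.0f, 0.0f, 0.0f, 0.0f},
    };
    m.indices = {0, 1, 2, 0, 2, 3};  // two triangles sharing an edge
    return m;
}
```

From the API's point of view this data is indistinguishable from data loaded out of a model file.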
In theory you could even generate the data completely on the GPU using e.g. the geometry shader, but let's not go there. My point is just that the CPU does not even need to know about your meshes.
I have a similar system set up. I use fastdelegates for hooking up executable functions to console commands (you can just as easily use std::function if you want). A very simple system would look like:
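A minimal sketch of such a registry, using std::function in place of fastdelegates (the class and method names other than the mRegisteredCommands map are invented for the example):

```cpp
#include <functional>
#include <map>
#include <string>

class Console {
public:
    // Hook a callback up to a command name.
    void registerCommand(const std::string& name, std::function<void()> fn) {
        mRegisteredCommands[name] = std::move(fn);
    }

    // After parsing, look up the command and run its callback, if any.
    bool execute(const std::string& name) {
        auto it = mRegisteredCommands.find(name);
        if (it == mRegisteredCommands.end())
            return false;  // unknown command
        it->second();
        return true;
    }

private:
    std::map<std::string, std::function<void()>> mRegisteredCommands;
};
```

Usage is just `console.registerCommand("quit", []{ /* ... */ });` followed by `console.execute("quit");` once the input has been parsed.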
After parsing a command you just look up the correct callback in the mRegisteredCommands map and execute it. You can even support passing arguments to the callbacks. If you use fastdelegates you can save all mementos to the same map regardless of the number and types of arguments. You just need a way to recreate the proper callback before doing the actual call.
Another way (and the way I am doing it) is to have all registered functions be FastDelegate0&lt;void&gt;s, i.e. parameterless functions, but allow (and indeed require) the functions that are registered with arguments to pop them off a temp storage stack that is filled by the console during the command-parsing phase.
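The argument-stack variant might look roughly like this (again with std::function standing in for fastdelegates, and all names illustrative): the console pushes the parsed argument tokens onto a stack, and the parameterless callback pops what it needs.

```cpp
#include <functional>
#include <map>
#include <stack>
#include <string>
#include <vector>

class Console {
public:
    void registerCommand(const std::string& name, std::function<void()> fn) {
        mCommands[name] = std::move(fn);
    }

    // Push the parsed argument tokens, then invoke the parameterless
    // callback, which pops whatever arguments it expects off the stack.
    void execute(const std::string& name,
                 const std::vector<std::string>& args) {
        for (auto it = args.rbegin(); it != args.rend(); ++it)
            mArgs.push(*it);                 // first argument ends up on top
        auto found = mCommands.find(name);
        if (found != mCommands.end())
            found->second();
        while (!mArgs.empty()) mArgs.pop();  // discard leftovers
    }

    // Called by the registered functions to consume their arguments.
    std::string popArg() {
        std::string a = mArgs.top();
        mArgs.pop();
        return a;
    }

private:
    std::map<std::string, std::function<void()>> mCommands;
    std::stack<std::string> mArgs;  // filled during command parsing
};
```

The upside is that every entry in the map has the same type regardless of its real argument list; the downside is that argument count and type errors only surface at runtime.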
You should never use the term “dev kit” to refer to anything but specialized hardware and related development tools
Well, Wikipedia says the following about SDKs: "A software development kit (SDK or "devkit") is typically a set of software development tools that allows the creation of applications for a certain software package, software framework, hardware platform, computer system, video game console, operating system, or similar development platform." I don't agree that devkit necessarily means hardware.
Anyway, that's beside the point. Like Nypyren said, sounds like you are better off using an API such as WinForms (or WPF) for what you have in mind. The database connection is easy enough with an ADO.NET connector, and you can still hook into the GPU if you want to use the hardware for bitmap manipulation with something like SharpDX. As for rendering, in my level editor I am drawing using DirectX into a WinForms panel and it works like a charm. It can then obviously be used like any other WinForms control.