I would advise against having an input component that reads straight off the input hardware. It makes it unnecessarily difficult to switch the object that is controlled by the player. For example, let's say you have a character that walks over to a car, gets in, and drives off. At some point you want to switch the controlled object from the character to the car, possibly removing the character object at the same time. While this is certainly possible with an input component, you have to be very careful: if the car and the character both exist for a frame or two, they may both read the same key press and each do something that only a single object should ever do at any point in time.
Another example is being able to control an enemy character during development. Being able to switch to another character by clicking on them is priceless when testing certain gameplay features.
The solution to this is to have an additional layer between the input hardware and the objects, called a controller. In my engine each controller can have zero or one object, called its "avatar", which is the object that controller currently controls. I believe Unreal calls this "possessing". Each human player that joins a game gets their own controller (which can also handle things like key remapping that game objects really should not care about).
Communication between the controller and the object is done through messages. This means that you can make a common movement interface for both your player character and enemies and simply switch the avatar object of the controller. The movement will work the same for both. If the player character can jump but the enemies can't, they just won't react to the jump message.
Regarding AI characters, I am not really sure having their behavior be driven through a controller is necessary. In networked games it makes sense, but for single player games I usually have a component that drives their internal AI logic. If the object becomes the avatar of a player controller, the AI component simply switches off.
In my system horizontal merging is a lot more important than vertical because the visibility system is essentially 2D. Every object has an upper and a lower bound so they can be included/excluded in the visibility calculations as the observer moves up and down, but the construction of the line-of-sight geometry is all done in the X-Z-plane (Y is up in my engine).
So I came up with this algorithm (I don't know if this is an established way of doing things, but it is working fine for me):
Get a list of objects that we consider for merging, called the "main list" below
Pick the first object in the list, called "reference box", or "ref box" below
Check if the shape of the object is a box; if not, we cannot merge it, so just submit it as-is
If the shape is a box, we will try to merge it with other boxes
Get a new list, called the "merge list" containing all objects except the ref box
Iterate over the merge list and process those objects that are boxes, sorted by distance to the ref box (closest first)
For each box, check if the upper and lower bounds are the same as the ref box, if not then we cannot merge
If the bounds match the ref box, calculate 2D convex hulls for both shapes in the X-Z-plane, then calculate the areas of the convex hulls
Calculate the convex hull of the combined shape, then calculate the area of that convex hull
Check if the sum of the original convex hull areas is roughly equal to the combined hull area
If the areas are roughly equal, we can merge the boxes together
Continue with the rest of the objects in the merge list, attempting to merge each with the ever-growing ref box
Remove all processed objects from the main list
Using this technique I managed to nicely merge most of the geometry in my test level. Check out the picture below:
The colored boxes represent box objects. The colors are selected from a list of a few colors, so sometimes the same color appears on adjacent objects. From the image you can see that the algorithm works with both world-space axis-aligned objects as well as rotated ones. You can also see (on the wall of boxes in the lower left corner) that the algorithm does not merge objects vertically.
In the test level above the number of objects decreased from 266 to 45.
Also, it's not really good forum etiquette to remove your questions and replace them with "[solved]". That way no one else can benefit from the question and answer later on. When your issue is solved, reply to the thread letting other people know what the problem was. Then try to learn from the experience.
The complete navmesh generation is in Sample_SoloMesh::handleBuild(), but you don't necessarily need to do all steps 1-7. Step 2 is concerned with creating the voxel height field. Search for duDebugDrawHeightfieldSolid() to see how the demo app draws the voxels.
Not exactly sure what you mean by squares, but Recast does indeed generate a voxel mold out of the geometry as an intermediate step when generating a navmesh. I haven't used the voxel mold directly, but I don't see why that wouldn't be possible. Try out the Recast demo with some of your geometry to see if the results look like something you could use. The demo app can draw the voxels for you.
What exactly don't you understand about the generation phase? The samples that come with Recast are very well documented. Check out Sample_SoloMesh for example.
It does not allow you to mess with the loopback interface though, so you basically have to have two boxes (or two physical network interfaces). Some people build this sort of thing right into their network layer. That way you don't need any separate software at all.
You can certainly do this. The rendering API has no idea where the data originally came from (model file, network stream or even defined at runtime).
You just need a way to define each vertex (a vertex buffer) and some way to define the ordering the vertices into triangles (an index buffer) and fill those with the data you want. For texturing, just make sure your vertices have UV coordinates and set the texture you want to use. The exact details of this depend on the engine/framework you are using and on the shaders used.
In theory you could even generate the data completely on the GPU using e.g. the geometry shader, but let's not go there. My point is just that the CPU does not even need to know about your meshes.
I have a similar system set up. I use fastdelegates for hooking up executable functions to console commands (you could just as easily use std::function if you want). A very simple system would look like:
After parsing a command you just look up the correct callback in the mRegisteredCommands map and execute it. You can even support passing arguments to the callbacks. If you use fastdelegates you can save all mementos to the same map regardless of the number and types of arguments. You just need a way to recreate the proper callback before doing the actual call.
Another way (and the way I am doing it) is to have all registered functions be FastDelegate0&lt;void&gt;s, i.e. parameterless functions, while allowing (and indeed requiring) the functions that are registered with arguments to pop them off a temp storage stack that the console fills during the command parsing phase.