I suppose I don't have to give you the thousands of obvious examples of what is not possible due to the laws of physics, but I digress. In more realistic terms, "anything is possible" means anything is possible given an adaptive environment. One can create an MMORPG, given either an enormous amount of time or the will to build a huge team. If your requirements are strictly "alone, in the next few months", then you'd be mistaken to believe it's possible. I'll give you that you can and will finish it at some point if you never lose patience, but oh well. Go ahead and try it; if you actually succeed, I'd be glad to say I was mistaken. And don't misinterpret my intentions: I'm not trying to discourage you, but rather to point you in a direction that is much more likely to have a positive outcome. If you are already disappointed by some random people saying it can't be done, how would you feel if you came to that conclusion yourself after countless hours of work?
So I want to make an MMORPG. What do you guys think or recommend?
Don't. Simply, don't. Unless you've got a good chunk of fellow programmers who also have very good experience with programming games (by the way, has there ever been an MMORPG in Java?). And to clarify, since I might appear less than helpful: there is nothing wrong with programming an online game for, say, 32-64 players, but MMO stands for enormous numbers of people, we're talking 1,000-2,000 and more at a time, and handling that is just way too much to chew on for anything less than a mid-scale game company. If that is what you want, I refer you to my first "don't". Otherwise, stock up your experience in networking, threading, etc., and I'm pretty sure you should write at least one small-scale, 2-player online game before you seriously tackle anything bigger (unless you want it to become a big ugly mess, which I believe you don't).
It looks like each agent in the game can have its own BT, which it updates every frame. But what if many agents share the same behaviour (so each has its own copy of the same tree)? Isn't it a good option to have only one instance of the tree, with each agent passing a pointer to itself into the tree during update? This way the decision/behaviour logic will be the same, but it will also depend on the current owner's state, because each node will have access to "owner->blackboard->some_data" etc.
Just to add, from a general code point of view this is called the Flyweight pattern, and it makes the most sense when there is a huge memory overhead and sharing instances helps reduce it. I don't have much experience with AI, but generally this seems applicable.
For your first game project? Forget it. Just write a game, get familiar with the key concepts, and spend time learning the libraries you will need regardless (DirectX, OpenGL) if you really plan on writing a game engine at some point. If you plan on using some libraries to make things easier for you, like SharpDX for graphics or SFML for input etc., then start by using them in your first game projects and get familiar with them. I wouldn't totally agree with "Don't write a game engine" in the way that sentence is often put, but I do have to confirm that writing one as your first project is just ludicrous. The reason is that your first game's code will suck. It will; there is no way around it. Writing an engine means writing code that is meant to last and be reused, and if you don't know what you are doing you'll end up writing an engine so convoluted and horribly designed that writing everything from scratch would be easier than using it in future projects.
So now that you've written a few games and grown more and more comfortable with coding and game programming concepts, if you are still willing to write your own engine, be aware: it's not a task that will make things easier and faster for you. It will always take more time to write your own rendering architecture and graphics facilities than to use an existing solution (like Ogre3D, Irrlicht, etc.; I'm not even talking about Unity or the like). If you really just want to write games, either continue to code them, probably reusing some of your old code, or switch to something like Unity/Unreal/CryEngine. I'm not saying that the results of you writing a game engine might not be practical, but you have to see it as an in-depth learning experience, not something that will save you time on making games. If time saving is your goal, again, screw the idea of making your own engine. It's never going to pay off. By the time you've gotten your engine to a point where it's almost finished, even if you'd spent 8 hours a day just coding on it, the technology you used will already be outdated and you'd have to start over.
If you still want to make a game engine, because it doesn't make you uncomfortable that you are basically wasting your time doing so, then I can only give you one piece of advice, sort of like you've already been told: don't just write an engine, write a game to support it. Ideally, I'd divide your work into two layers: the engine layer and the game layer. Treat the engine layer with care, spend some time thinking about its design, and keep in mind it's the code you want to keep using for multiple projects. Don't hesitate to write code in that layer somewhat quickly to get it working, but always make sure to come back and refactor it. Now the game, whatever you are working on at the same time, you should treat as just that: a one-time, throw-away project. This is the only real advantage I can see in writing your own engine alongside a game: you can hack the gameplay/game logic together as dirtily as you want, because you've already put every bit of code that will be reused into the engine. So on the game's side, just write code as fast and ugly as you please; once you're finished, only the engine you've written alongside it is going to be carried on.
Which brings me to my final important point in this approach: always start coding from the game's side. Don't go "I think I could use that feature at some time, I'm going to put it in the engine"; instead, think about what feature your game is going to need next, and then about whether it is supposed to be reusable. This of course depends on what your goal is: do you want an RTS-specific engine, or something more general like Unity? In the former case there'd probably be more code that goes into the engine, while in the latter most genre-specific code would stay in the game. Keep in mind, anyway, that you can always use the game to experiment and move code that has proven to work over into the engine if you want to.
Just some thoughts off the top of my head, based on my own experience coding an engine; hopefully some of it helps.
Yeah, those are fake like all the rest; you'd probably download a virus and whatnot (whoever is brave enough to try it, be my guest). Just do a 5-minute Google search for "is there an xbox360 emulator" and see for yourself.
EDIT: Now, to be fair, according to some very recent posts there are actually some working emulators around at this time, but they won't play any games, just some homebrew stuff, and honestly I wouldn't try any download from a site that claims their emulator can do otherwise.
By the way, push_back won't make a copy either if you provide a move constructor and pass an rvalue:
Hint(Hint &&) = default;
That is still more wasteful than using emplace_back, since first a Hint instance is created and then the move constructor is called for the Hint instance inside the vector; emplace_back calls the constructor only once, directly for the element in the vector. Especially here, since all members are POD except for the std::string, having a move constructor won't help much; a move constructor normally only pays off when the class makes custom memory allocations. Not that the difference is a big deal here anyway, but I just wanted to add this.
Am I the only one who finds DX9 not all that extremely different from DX11? It could be because I started with shaders right away when learning DX9, and don't get me wrong, I do like DX11's interface much more since it's way cleaner, but the key concepts remain. Individual states have been replaced by state objects, cbuffers instead of constant registers, and so on, but honestly, if you have learned DX9 without the FFP, it shouldn't be too hard to switch to DX11, and you'd already have learned a lot of the concepts you need for either API...
You could reduce some of the passes used for blurring by using a different kind of blur that does vertical and horizontal blurring in one pass. I don't remember its exact name, and I don't know how it would affect visual quality, but here it is:
So the sampler state that should be used is normally controlled by the shader, and not by the texture, apart from some global what-kind-of-sampling-all-textures-should-have sampler state?
Yes, a shader/effect should logically decide what type of sampler it expects. It might seem useful to delegate this to the texture instead, but when you think about it, it doesn't make sense. Aside from application-wide settings like the multisample level, the effect will always expect a certain sampler setting. For example, one effect might want to sample a texture with the address mode set to clamp to avoid artifacts at the edges; therefore all textures passed to that effect must be sampled with clamp, and therefore the only logical place to put the sampler state is the effect.
I guess I have to sort the billboards by depth now. Am I correct about this, or is there another way?
For pictures that consist only of solid (1.0f alpha) and completely transparent (0.0f alpha) pixels, you can clip(image.a > 0.0001f ? 1 : -1) in the pixel shader (referred to as "alpha testing") and continue using the z-buffer. In this case you don't need to enable alpha blending at all. Otherwise, if you're not using additive/subtractive blending, you have little choice but to do sorting.
This takes new image * new image alpha + background * (1 - new image alpha). I'm not too sure the constants are named 100% correctly; just look for the correct equivalent. The one thing that was incorrect in your example is therefore the blend factor on the background (dest): with ONE, the background color is always weighted at 100% in the output.
The only thing I do with subobjects now is cull them against the frustum right before rendering. Basically I'm not using the render queue at all.
To add to what L.Spireo said, in my engine each subset is its own instance submitted to the render queue. A render queue is actually a very low-level concept, at least in the sense that you don't want to hand it something like a mesh and have it process the mesh's subsets. IMHO, as far as the render queue is concerned there shouldn't even exist such a thing as a mesh; instead, a mesh submits a render instance with states for setting its vertex and index buffers, input layout, etc., so handling subsets happens a layer higher up.