1) We still have Game States as individual objects
2) Each Game State has its own list of Entities (probably some kind of "EntityManager")
3) Game States have their own initialization logic: they can create their own entities during init. This logic runs when the Game State is created or when it becomes active. We have a GameStateManager object to manage the Game States' life cycle
4) Entity Systems belong to the Game (not the Game States) but they only process those Entities that are contained by the currently active Game State
Please share your thoughts
I think this is reasonable. I assume by "game states", you're talking about things like, main menu, lobbies, actual game play, etc... If you have separate lists of entities, you might possibly want separate instances of each system too. It really depends on your needs. For instance, think about how the rendering system(s) will work when transitioning between game states (can you have more than one game state active at a time?).
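To make the proposed split concrete, here is a minimal sketch (in Python, with invented names) of systems that belong to the Game but only process the active state's entity list:

```python
# Hypothetical sketch: systems live on the Game, but each update they
# only process the entity list owned by the currently active state.

class MovementSystem:
    def update(self, entities, dt):
        for e in entities:
            if "position" in e and "velocity" in e:
                e["position"] += e["velocity"] * dt

class GameState:
    def __init__(self, name):
        self.name = name
        self.entities = []  # this state's own EntityManager, simplified to a list

class Game:
    def __init__(self):
        self.systems = [MovementSystem()]  # systems belong to the Game
        self.active_state = None

    def update(self, dt):
        # Systems only ever see the active state's entities.
        for system in self.systems:
            system.update(self.active_state.entities, dt)
```

If more than one state can be active at a time (e.g. a pause overlay on top of gameplay), `update` would instead iterate over a stack of active states.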
in all of these examples, the "state" had its own render, input, and update methods.
That seems a bit strange to me. Why would the render methods differ? At any rate, if the OP is using an ECS, I wouldn't expect the game state to have render or input methods - that functionality should be handled by the render and input systems (which are the same for all game states) and the entities that belong to that game state.
Consider the full-on approach where you have an entity for everything and game objects are bags of entities. So a character running around the game would have ...
SkinnedMeshEntity, WeaponEntity, HealthEntity, BackpackEntity, etc. Each of those would have its own bag of entities. (TransformEntity, CollisionBoxEntity, ......)
Most of those would be components, not entities (e.g. HealthComponent, TransformComponent, CollisionComponent).
The weapon needs bullets, so it has a reference to a BulletMagazineEntity. A skinned mesh needs an animation entity. A character needs a target to shoot at, which is an entity.... you see where this is going?
Yes. This is one good reason why code should not exist in components. Components can be related by the fact that they are attached to the same entity, and the systems (with the code) can reason over them.
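A tiny sketch of that principle (component and system names are made up): components are plain data records, and the system carries the code, relating components only through the entity they're attached to.

```python
# Components are pure data; the system does the reasoning.
from dataclasses import dataclass

@dataclass
class Health:
    current: int
    maximum: int

@dataclass
class Regeneration:
    per_tick: int

class RegenSystem:
    def update(self, entities):
        # Any entity carrying both Health and Regeneration gets healed,
        # capped at its maximum. Each entity here is a dict keyed by
        # component type.
        for components in entities:
            health = components.get(Health)
            regen = components.get(Regeneration)
            if health and regen:
                health.current = min(health.maximum,
                                     health.current + regen.per_tick)
```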
or an open world type game.
If you are thinking of the latter, I would avoid ECS like the plague. The problems when you start streaming out objects that are referenced by entities can stall development completely. You have an object (a bag of entities) that is leaving the game area, but objects still in the game area hold references to it. Next update they try to extract information from an object that doesn't exist anymore, with random and probably fatal results.
This problem has nothing whatsoever to do with ECS. You would have to solve exactly the same issues if you were using a more traditional OOP architecture (or whatever). "some game objects are streamed out of the world and other active game objects may have references to them". If anything, using an ECS (or any data-oriented framework) would make this more straightforward because you generally have more knowledge on where your data is. For example, a "TargetingComponent" would ideally be re-usable for any game object that needs to target another - so your code that reasons over what happens when parts of the world stream in/out only needs to look through TargetingComponents, rather than this information being hidden behind some "OOP wall" on an object.
In any serious project, I build a performance measurement system into the game. I categorize sections of the draw and update loops and can pop up a display in game that shows how long each section took, or any spikes that took place recently.
From there, I can get a better idea of what to look at: if I need to use a CPU profiler to get more info about a section of game code, or if I should use a GPU profiler (e.g. Intel GPA) to see what's up.
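The idea of categorized loop timings can be sketched like this (all names invented; a real implementation would also render the numbers in-game):

```python
# Wrap each section of the update/draw loop, record how long it took,
# and remember the worst duration seen for spike detection.
import time

class FrameProfiler:
    def __init__(self):
        self.timings = {}  # section name -> last duration in seconds
        self.spikes = {}   # section name -> worst duration seen

    def section(self, name):
        profiler = self

        class _Section:
            def __enter__(self):
                self.start = time.perf_counter()

            def __exit__(self, *exc):
                elapsed = time.perf_counter() - self.start
                profiler.timings[name] = elapsed
                profiler.spikes[name] = max(profiler.spikes.get(name, 0.0),
                                            elapsed)

        return _Section()
```

Usage would look like `with profiler.section("physics"): run_physics()` inside the game loop, with the display reading from `timings` and `spikes`.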
As has been explained several times already in this thread, this is just the way it is. When you triangulate quads and interpolate between 3 points, you will have "artifacts" that depend on how you triangulate it.
There are various mitigations:
- Sample the normals from a texture in your pixel shader. That way you'll be interpolating between 4 points, and not 3.
You need to implement some kind of parent/child hierarchy. This could be done with Transform components on entities. e.g. an entity could have a Transform component, and that Transform component contains some reference to the parent entity (or the Transform on the parent entity).
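A minimal sketch of that Transform idea (2D, names invented): each Transform may reference its parent's Transform, and the world position is resolved by walking up the chain.

```python
# Transform component holding a local offset plus an optional reference
# to the parent entity's Transform.

class Transform:
    def __init__(self, local_x, local_y, parent=None):
        self.local_x = local_x
        self.local_y = local_y
        self.parent = parent  # parent Transform, or None for a root

    def world_position(self):
        # Accumulate offsets up the parent chain (translation only;
        # a full version would compose rotation/scale matrices).
        if self.parent is None:
            return (self.local_x, self.local_y)
        px, py = self.parent.world_position()
        return (px + self.local_x, py + self.local_y)
```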
Elephant seems like an old framework that hasn't been updated in years. It also seems to put code logic in the components themselves, as opposed to systems that operate over components (as is more common these days). You might want to check out Artemis, or something based on Artemis. I'm sure there are examples using it that have solutions to visual parent-child hierarchies.
All species of Pokemon (Nidorino, Gengar etc) are instances of base Pokemon class. The base class is the blueprint containing variables like the index number, sprite filename, height, weight, base stats etc.
Class Species inherits Class Pokemon
Species that inherit this class fill in these variables and become the base species definition. (All Gengars have 130 Sp.Attack and Nidorinos have 55 Sp.Attack)
I don't understand the need for the inheritance here. Is the sole purpose of the "Gengar" class for instance, to just set the base stats? Instead I think there should just be one Pokemon class, and its stats can be initialized from some text file/data file for the species that has the needed info.
Inheritance should be used to modify the behavior of something, but I don't see the need for that. All Pokemon behave the same - their behavior is just defined from their stats, moves, abilities, etc...
For types, I would use some kind of bitmask. In C#, this could be an enum with the Flags attribute (since a poke can be multiple types). Then have a table somewhere (this is just a 2 dimensional array) that defines the interactions between any two types.
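The flags-plus-table idea sketched in Python (the C# version would use an enum with the Flags attribute; the multipliers here are illustrative, not the real chart):

```python
# Type as a bitmask so a pokemon can be dual-typed, plus a lookup table
# of attacking-type vs defending-type multipliers.
from enum import IntFlag, auto

class Type(IntFlag):
    NORMAL = auto()
    FIRE = auto()
    WATER = auto()
    GRASS = auto()

# effectiveness[attacking][defending] -> damage multiplier
effectiveness = {
    Type.FIRE:  {Type.GRASS: 2.0, Type.WATER: 0.5, Type.FIRE: 0.5},
    Type.WATER: {Type.FIRE: 2.0, Type.GRASS: 0.5, Type.WATER: 0.5},
}

def multiplier(attack_type, defender_types):
    # A dual-typed defender multiplies both contributions together;
    # missing entries default to neutral (1.0).
    result = 1.0
    for t in Type:
        if t & defender_types:
            result *= effectiveness.get(attack_type, {}).get(t, 1.0)
    return result
```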
Abilities are probably one of the more complex things to handle, sure. You'll need to map out all the various ways all the abilities can influence battle and then come up with some kind of system to make that work. Ideally it would be good if you can find the right abstractions such that abilities can be completely defined by a set of data. But it's possible that some of them are so unique that they might need some methods called on them to offer custom control over game state.
As for formes, the most obvious thing I can think of now is to figure out what belongs to "pokemon" (e.g. name, level, IVs, etc...) and what belongs to "formes" (e.g. base stats, sprite, etc...), and follow that abstraction. So a "pokemon" would have 1 or more "formes" as part of its definition (the vast majority would just have one forme). I can't offer any immediate suggestions on where to put the logic to change formes, because I can't offhand think of all the ways formes can be changed.
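A rough sketch of that pokemon/forme split (field names are illustrative, stats made up): per-forme data hangs off a list on the species definition, and an individual pokemon tracks which forme is current.

```python
# Species-level data holds one or more formes; an individual pokemon
# holds what is truly per-individual (level, current forme, etc.).
from dataclasses import dataclass

@dataclass
class Forme:
    name: str
    base_stats: dict
    sprite: str

@dataclass
class Species:
    name: str
    formes: list  # one or more Forme entries; most species have one

@dataclass
class Pokemon:
    species: Species
    level: int
    current_forme: int = 0  # index into species.formes

    def base_stats(self):
        return self.species.formes[self.current_forme].base_stats
```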
In this case if I lose my stencil buffer as soon as I set the rendertarget, how can stencil buffering be in any way useful at all? Would I have to create the stencil and perform the comparison test all inside one draw call?
Not inside one draw call, but in several draw calls without switching render targets. You generally draw stuff to set the stencil bits how you want (often disabling color writes), and then draw the "actual stuff" that draws based on the stencil comparison.
I think (not sure) that in raw DirectX in PCs you can manage the visual buffer and depth/stencil buffer separately, but they are tied together in XNA (presumably because of this limitation in the Xbox).
For example, I *think* what I'm doing is comparing the stencil buffer of a rendertarget to a simple reference value. Is this correct?
Yes. Or simply writing to the stencil buffer.
So for a very simple use case, maybe you want to draw a circle and have a square cut out of it. So you would first draw the square and have your DepthStencilState set up to set the stencil to 1*, say (GraphicsDevice.Clear will clear it to 0 by default, unless you specify otherwise). Since you don't want to actually see the square, you would also have disabled color writes by setting ColorWriteChannels to None in the BlendState you use. So now you have nothing visible, but you have a square in the stencil buffer where the values are 1.
Next, you would draw the circle (without changing rendertargets) with your DepthStencilState set up to only draw where there is a 1 in the stencil buffer. So the circle will draw everywhere except where the square is.
Here's an XNA stencil buffer example I found. I don't know if it will be helpful, but it should show you how to set up the DepthStencilState to get the functionality you want:
*Note that when setting stencil values, the stencil is set depending on whether a pixel is output, even if that pixel is transparent. So if you are setting irregular shapes by drawing sprites, you generally want to be using alpha-testing (which discards transparent pixels), not alpha blending.
I want to have lighting on my Planets, but basicEffect.lightingenabled = true won't work (I guess it's because it's not a Model),
As long as you're using the effect to draw, it doesn't matter if it's a "Model" class or not. Just make sure your icosphere vertices have the necessary information - specifically, they need a normal component in order to have 3d lighting work.
Seems like it's even more straightforward if you want the z-axis involved too. Then your arrow indicator is just a regular 3d object that you render along with the rest of your 3d objects (or if you want it "always on top", then render it separately after clearing the depth buffer).
You just need to figure out the world matrix to apply the rotation to your arrow. You basically already have this: RotationBetweenVectors returns a quaternion and you can get a rotation matrix from that. One vector will be the "base" direction your arrow model points in, and the other will be some vector that points from an arbitrary world position* to the world position of the offscreen object.
Then use your regular view and projection matrices when drawing your arrow.
* The only subtlety then is choosing the world position for your arrow.
if gEngine is not allocated, then calling ::Create is just going to crash.
Well, the C++ standard might say that it is undefined behavior (I'm not sure), but it shouldn't crash since ::Create (in its current form) doesn't reference any object state. Of course, there's no reason to do it this way, as you pointed out.
You seem to emphasize that cEngine and cInput are singletons as if that's a good thing. What benefit does it give you? (hint: none).
The huge benefit of not handling this class as a system, but as a segregated singleton class, is that the user can simply create a new state, create some entities in it, and easily handle input in the virtual Update() function of that state.
It seems like you've chosen this design because you want to have a bunch of entity logic in your "states", correct? If your game logic instead was all in the systems and/or on scripts attached to the entities, then you wouldn't need this. Input could be expressed as components that map the actual input to some "action", and then another component that maps "action" to some code to run/script/etc (and then a system that operates over this). I'm not saying that's necessarily a better way (it might be overkill for your scenario), but it would be more flexible.
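A toy version of that input-as-data idea (all names invented): one component maps raw input to an "action" name, another maps actions to code, and a single system wires them together.

```python
# Input handled entirely through components and one system, so no state
# class needs its own input logic.

class InputMap:
    def __init__(self, bindings):
        self.bindings = bindings  # e.g. {"KEY_SPACE": "jump"}

class ActionHandlers:
    def __init__(self, handlers):
        self.handlers = handlers  # e.g. {"jump": some_callable}

class InputSystem:
    def update(self, entities, pressed_keys):
        # Each entity here is a dict keyed by component type.
        for components in entities:
            imap = components.get(InputMap)
            acts = components.get(ActionHandlers)
            if not (imap and acts):
                continue
            for key in pressed_keys:
                action = imap.bindings.get(key)
                if action and action in acts.handlers:
                    acts.handlers[action]()
```

Rebinding keys then becomes a data change on InputMap, and the same system serves every state.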