I'm refactoring my code from a messy OO design to an Entity/Component/System design (hereafter referred to as ECS), and I'm trying to figure out the best way to convert part of the OO code. I feel I'm slowly getting my head around the idea of ECS, but it's fairly new to me, so I'm finding component design a bit tricky.
In my existing OO code I have a class called CreatureStats which contains stats such as Food, Water and Energy levels. Since they always go together I've created a single component to store those three values. I've also created a system which processes all entities that have that component, and does things like consume food and water and regenerate energy etc. That part seems okay to me.
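To show what I mean, here's roughly the shape of that component/system split (a Python sketch for clarity; all the names and numbers are illustrative, not real code from my engine):

```python
class Entity:
    """Minimal entity: just a bag of components keyed by type."""
    def __init__(self):
        self.components = {}

    def add(self, component):
        self.components[type(component)] = component

    def get(self, component_type):
        return self.components.get(component_type)


class CreatureStatsComponent:
    """Pure data, no behaviour."""
    def __init__(self, food=100.0, water=100.0, energy=100.0):
        self.food = food
        self.water = water
        self.energy = energy


class CreatureStatsSystem:
    """Processes every entity that has a CreatureStatsComponent."""
    def update(self, entities, dt):
        for entity in entities:
            stats = entity.get(CreatureStatsComponent)
            if stats is None:
                continue
            stats.food = max(0.0, stats.food - 1.0 * dt)    # consume food
            stats.water = max(0.0, stats.water - 2.0 * dt)  # consume water
            if stats.food > 0 and stats.water > 0:
                # regenerate energy while fed and hydrated
                stats.energy = min(100.0, stats.energy + 5.0 * dt)
```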
The part I'm having trouble with is what I call "Ailments". They are basically buffs or debuffs that persist over time, such as food poisoning, or being on fire. While active they affect things like energy regen and health. In my OO implementation, the CreatureStats object contained an Ailment list, which I'd simply add items to as they happened, process any active ones in CreatureStats.Tick(), and then remove any expired ones. In the ECS world, I can't quite figure out the best place for them to live.
So far I've created an AilmentComponent, which contains the list mentioned earlier, but it doesn't seem right, because it feels like the Ailments are trying to do stuff that should be implemented in a System. In fact, the code around Ailments has gotten messier than the original OO implementation.
An idea I had was that since only one instance of each ailment type is allowed at any given moment, I could just make them components themselves, so we'd have FoodPoisoningAilmentComponent (or similar), and then just dynamically add or remove the ailments from the entity as required. Then a system would be responsible for handling the effects of any ailment components.
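To make the idea concrete, here's a rough sketch of what I'm imagining (Python pseudocode; the ailment names, durations and effect values are made up for the example):

```python
class Entity:
    """Minimal entity: a bag of components keyed by type."""
    def __init__(self):
        self.components = {}

    def add(self, component):
        self.components[type(component)] = component

    def get(self, component_type):
        return self.components.get(component_type)

    def remove(self, component_type):
        self.components.pop(component_type, None)


class FoodPoisoningAilment:
    def __init__(self, duration=30.0):
        self.remaining = duration
        self.energy_regen_modifier = -0.5  # halves energy regen while active


class OnFireAilment:
    def __init__(self, duration=5.0):
        self.remaining = duration
        self.damage_per_second = 10.0


class AilmentSystem:
    """Ticks down every ailment component and removes the expired ones.
    Applying the actual effects would be done here too (or by the stats
    system reading the modifiers)."""
    AILMENT_TYPES = (FoodPoisoningAilment, OnFireAilment)

    def update(self, entities, dt):
        for entity in entities:
            for ailment_type in self.AILMENT_TYPES:
                ailment = entity.get(ailment_type)
                if ailment is None:
                    continue
                ailment.remaining -= dt
                if ailment.remaining <= 0:
                    entity.remove(ailment_type)  # expired
```

Since each ailment type is its own component, the "only one instance at a time" rule falls out for free: adding a second FoodPoisoningAilment just replaces the first.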
Does anyone with ECS experience have any comments on this idea?
The more I think about it the more I like the "Ailments as components" idea, but I think my inexperience with ECS is giving me doubts related to the number of component types that will result. Most examples on the web are pretty simple and usually end up with about a dozen different components. If I followed the idea that every component should be atomic, I easily foresee having 100 or more components for my engine. Is that excessive?
So I'm a few years into my part-time game project and have been learning HLSL and DirectX as I go. I'm pretty confident in my ability to write HLSL shaders to do pretty much everything I need currently but have a question about the general architecture of shaders and how they should fit together.
For example, I have a terrain shader that handles some funky texturing, tile-based lighting and shadow mapping, as well as some special effects like highlighting an individual tile or drawing special construction lines.
I also have an entity shader that I use for all the objects in my game and can also do Blinn-Phong shading.
I also have many other special-purpose shaders, such as a water shader, a fire shader, and so on.
I've realised that I now want to add the shadow mapping code to the entity shader, as I originally only implemented it in the terrain shader. But then it occurred to me that I'll probably want that feature in some of the other shaders down the track.
So my first question is: should I be implementing a given feature (e.g. shadow mapping) multiple times, once for every shader that needs it, or should I be trying to create an "it-does-everything" shader? And as a follow-up, could I be writing my shaders to be more modular? I'm currently not using the ability to have multiple techniques or passes, as I don't follow how they should be used.
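For what it's worth, one direction I've been mulling over is pulling a shared feature like shadow mapping into an include file that each effect reuses (a rough sketch only; the names, bias value and signature are made up, not my actual code):

```hlsl
// ShadowMapping.fxh -- shared helper, #include'd by any effect that needs it.
sampler2D ShadowMapSampler;

float ComputeShadowFactor(float4 lightSpacePos)
{
    // Project into shadow-map UV space (flip Y for texture coordinates).
    float2 uv = lightSpacePos.xy / lightSpacePos.w * float2(0.5, -0.5) + 0.5;
    float depth = lightSpacePos.z / lightSpacePos.w;
    float stored = tex2D(ShadowMapSampler, uv).r;
    // Small bias to avoid shadow acne; darken if occluded.
    return (depth - 0.001 > stored) ? 0.3 : 1.0;
}
```

The terrain and entity effects would then both `#include "ShadowMapping.fxh"` and multiply their lit colour by `ComputeShadowFactor(...)`, rather than each carrying its own copy of the code.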
I've been doing a bit of DirectX 9 under Delphi (I normally dabble in SlimDX), and have created a 3DS file loader. While testing it I noted that the mesh I'm displaying has "highlights" along some of the face edges, as demonstrated in the following image. Note that despite the mesh being made up of triangles, none of the diagonals are highlighted, only the orthogonal edges.
[image removed]
I've tried my renderer with about 10 different 3ds files, and even created some meshes by hand in code rather than loading them through the 3ds file loader, but I always get the same result. The renderer has to work on older hardware that doesn't support pixel/vertex shaders, so I'm using the fixed-function pipeline. I've also created a simple shader to test on my dev system (which does support shaders) to see whether the fixed-function pipeline is the cause; the image looks slightly better, but still has issues.
The way I'm calculating the vertex normals is to sum all the face normals that share the vertex and then normalise the result. I googled different approaches, and tried a technique that weights each face normal by the angle of the triangle at the shared vertex, to reduce the bias caused by large angles versus small angles at the vertex, but that didn't seem to make a difference.
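The angle-weighted averaging I tried is essentially this (a Python sketch of the maths; my actual code is Delphi, and the helper names here are just for the example):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def normalize(v):
    length = math.sqrt(dot(v, v))
    return (v[0]/length, v[1]/length, v[2]/length)

def vertex_normals(vertices, triangles):
    """Angle-weighted vertex normals: each face normal contributes in
    proportion to the angle that face subtends at the vertex."""
    normals = [(0.0, 0.0, 0.0)] * len(vertices)
    for i0, i1, i2 in triangles:
        v0, v1, v2 = vertices[i0], vertices[i1], vertices[i2]
        face_n = normalize(cross(sub(v1, v0), sub(v2, v0)))
        for apex, a, b in ((i0, i1, i2), (i1, i2, i0), (i2, i0, i1)):
            e1 = normalize(sub(vertices[a], vertices[apex]))
            e2 = normalize(sub(vertices[b], vertices[apex]))
            # clamp to guard against floating-point drift outside [-1, 1]
            angle = math.acos(max(-1.0, min(1.0, dot(e1, e2))))
            n = normals[apex]
            normals[apex] = (n[0] + face_n[0] * angle,
                             n[1] + face_n[1] * angle,
                             n[2] + face_n[2] * angle)
    return [normalize(n) for n in normals]
```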
I'm working on a game project in SlimDX + DX9 at the moment where I'm using the RawInput API to get keyboard input as this seems to be the "recommended" way.
For most of the functionality I've required up to now this has been sufficient. However, I'm now extending my windowing system to include a textbox control, and I'm having trouble translating the keyboard values coming back from the KeyboardInputEventArgs into the ASCII characters that I need to put into the Textbox.
The Key member has no concept of upper and lower case, and I've thought about handling that myself through testing if the shift key is pressed or capslock is on. I got that working, but I've run into an issue with other keys, such as the number keys across the top of the keyboard.
In Key these are returned as D0 through D9, and it's occurred to me that interpreting these values manually for the shift state is going to be a nightmare, since keyboard layouts differ by country. A simple example: shift-3 is "#" on my keyboard, but in the UK I believe it's a pound symbol.
I've looked into various ways of converting the Key values to ASCII, such as by using unmanaged calls like MapVirtualKey(), but haven't had much success.
The only thing that has worked thus far is adding a KeyPress event handler to the RenderForm, which provides a KeyChar value, which is exactly what I want. But that approach is going to be very messy, as I'll still have to handle other keypresses for extended keys like the cursors or Ctrl-C/Ctrl-V through the RawInput mechanism or the KeyUp/KeyDown events.
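The split I'm imagining looks something like this (a Python sketch of the routing logic only; the class and method names are invented for the example, not my actual UI code):

```python
class Textbox:
    def __init__(self):
        self.text = ""

    def insert_char(self, ch):
        # Printable characters arrive already translated by the OS
        # (shift state, caps lock and keyboard layout all applied).
        if ch.isprintable():
            self.text += ch

    def handle_command(self, key):
        # Non-character keys come through the separate raw key-event path.
        if key == "BACKSPACE":
            self.text = self.text[:-1]


class InputRouter:
    """Routes translated characters and raw key events to the focused control."""
    def __init__(self, focused):
        self.focused = focused

    def on_key_press(self, ch):
        # analogous to the RenderForm KeyPress event (KeyChar)
        self.focused.insert_char(ch)

    def on_key_down(self, key):
        # analogous to RawInput / KeyUp / KeyDown
        self.focused.handle_command(key)
```

The idea being that character input and command keys are two separate streams feeding the same focused control, rather than one mechanism trying to do both jobs.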
I should mention that I've completely rolled my own Windowing/UI system, so there aren't any standard windows controls anywhere to be seen, except the main RenderForm. (I've seen a few suggestions around that make use of the standard controls as part of the solution).
My guess is that the KeyPress event handler is going to be the way to go, but figured I'd throw the question out there and see if the community has any ideas.
Hi guys, I'm having a weird issue when I attempt to use textures with power-of-2 widths of less than 16 pixels. I'm building a UI system, which is why they're so small.
I've written some code to produce a 4 byte per pixel byte array containing a magenta/green checkerboard pattern for testing purposes. I then load that byte array into a texture and draw it onto a quad. If I specify a size such as 16x16, the texturing works exactly as I'd expect. However, if I go to 8x8, the texture data is squished into the upper half of the texture and the bottom half of the textured quad is black (with occasional randomly coloured pixels). If I go to 16x8, it textures exactly as I'd expect, as does 16x4, 16x2 and 16x1. However, any power of 2 value less than 16 for the width produces strange results.
In the attached image, everything inside the white border is the texture in question. In this example I'm using an 8x8 texture with a 4x4 pixel checkerboard.
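The test-pattern generation is roughly this (a Python sketch of the logic; my real code is C#/SlimDX, and the function name is just for the example):

```python
def checkerboard(width, height, cell=4):
    """Build a width x height magenta/green checkerboard as a tightly
    packed 4-bytes-per-pixel (BGRA) byte array."""
    magenta = bytes((255, 0, 255, 255))  # B, G, R, A
    green = bytes((0, 255, 0, 255))
    data = bytearray()
    for y in range(height):
        for x in range(width):
            colour = magenta if (x // cell + y // cell) % 2 == 0 else green
            data += colour
    return bytes(data)
```

I then lock the texture and copy this array in as one contiguous block, which assumes the locked surface's row stride is exactly width * 4 bytes.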