Community Reputation

182 Neutral

About Subotron

  • Rank
    Advanced Member
  1. For an RTS-like game I'm currently working on, I have to add outlines around units (and other entities), for a number of purposes:
     * Units have to be made more visible against the background
     * To show whether units are friendly, neutral or enemy (different color outline)
     * To show selected units (yet another color)
     * To show units that are occluded by other units/buildings
     Outlines will be shown for (pretty much) every entity (unit, building, etc.) in the scene. Even if a unit is in front of a building, or behind a building (partial overlap), both the unit and building outline should still show. I am NOT so much after an edge detection/rendering algorithm, since these can show edges inside the meshes as well. This is not intended as a cartoon-rendering effect that outlines everything; it's intended to show just the outermost edges of the mesh. I am basically after this: [url]http://www.flipcode....Outlining.shtml[/url] However, I'm using a Direct3D-based engine, and the wireframe trick doesn't work for me since I can't set line thickness the way OpenGL allows. Basically I want high-quality silhouettes with a constant width (not depending on how far away an object is) that just go around the outside of the object, not doing anything with inner edges. I'm thinking a post-processor seems like the best choice. However, I'm not sure how to go about doing this. Maybe a Sobel filter, but what should I then take as input? If I use a depth map, inner edges will also be drawn. Maybe I could solve this by combining it with a stencil test, like in the Flipcode article I linked, but this might get very expensive for something as trivial as outlines. I'm also not sure whether a stencil test could cope with the partial-overlap requirement. Can anyone suggest a good approach to achieve the desired effect, taking into account mainly efficiency and outline quality? Support for relatively old hardware is a plus, but not the main focus. I'm using DirectX 9.
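One direction worth considering, sketched here on the CPU as an assumption rather than a known-good recipe: instead of edge-detecting the depth map, render each outline category (friendly, enemy, selected, occluded) into a flat binary mask in an offscreen target, then mark as outline any empty pixel next to a filled one. Because the mask is constant inside each object, inner edges can never appear. In practice the loop below would live in a post-process pixel shader; all names are hypothetical.

```cpp
#include <vector>

// CPU sketch of the mask-based outline pass: mask[y*w+x] is 1 where an
// object of this category was rendered, 0 elsewhere. An empty pixel with
// any filled 8-neighbour becomes an outline pixel, giving a fixed-width
// screen-space silhouette regardless of object distance.
std::vector<int> outlineFromMask(const std::vector<int>& mask, int w, int h)
{
    std::vector<int> outline(mask.size(), 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            if (mask[y * w + x] != 0) continue;      // only empty pixels can become outline
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (mask[ny * w + nx] != 0) outline[y * w + x] = 1;
                }
        }
    return outline;
}
```

Rendering the mask pass with depth testing disabled would also satisfy the "show outlines of occluded units" requirement, since the silhouette then survives occlusion.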
  2. Pathfinding in a space-based RTS game

    @AIDaveMark: Thanks, I found the URL myself as well and was already studying it, but it's nice to mention it in this topic in case other people end up here. @ApochPiQ: Thanks for the explanations. I've already implemented a few of these behaviours: seek (= follow, with or without arrival, depending on whether you are moving to a desired static position or chasing an enemy ship), and flocking (separation, alignment, cohesion). What is unclear to me is how Grouping and Separation differ from cohesion and separation, respectively, as used in flocking. From your explanation I can't really deduce whether they are separate types of behaviour or the same thing (maybe with different parameters). When you talk about Grouping, you mention a problem that I haven't solved yet: when I have multiple units selected and make them move to a desired location, they don't end up in a relaxed state, because flocking (or even basic collision response) prevents them from reaching the exact target location, so they keep moving toward it, trying to push each other away (sorry, I love applying semantics to emergent behaviour).
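The seek-with-arrival behaviour described above can be sketched in a few lines: steer toward the target at full speed, but ramp the desired speed down linearly inside a slow-down radius so the unit settles at the destination instead of endlessly pushing toward it. This is a generic illustration of the technique, not the poster's actual code; all names and the linear ramp are assumptions.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Returns the desired velocity for a unit seeking a target. Outside
// slowRadius it moves at maxSpeed; inside, speed scales down linearly
// so the unit can come to rest at the target ("arrival").
Vec2 seekWithArrival(Vec2 pos, Vec2 target, float maxSpeed, float slowRadius)
{
    Vec2 toTarget { target.x - pos.x, target.y - pos.y };
    float dist = std::sqrt(toTarget.x * toTarget.x + toTarget.y * toTarget.y);
    if (dist < 1e-6f) return Vec2{0.0f, 0.0f};           // already at the target
    float speed = maxSpeed;
    if (dist < slowRadius)
        speed = maxSpeed * dist / slowRadius;            // arrival ramp
    return Vec2{ toTarget.x / dist * speed, toTarget.y / dist * speed };
}
```

A common fix for the "units never relax" problem is exactly this ramp plus a dead zone: once a unit is within some small radius of its (per-unit, offset) goal, zero its seek force entirely so separation forces stop fighting it.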
  3. Pathfinding in a space-based RTS game

    Thanks for your replies, they are very useful. @ApochPiQ: I have been researching some more and have indeed decided to go with steering mechanisms like flocking. I'm reading everything I can on this, and am implementing it. So far, I've only got cohesion working, which was very straightforward. A (slightly) detailed explanation of the steering forces you mentioned would be of great help to me. So far, any papers I've found on the techniques leave a lot to the imagination. I also think influence maps would be the best solution in this scenario. Truly blocked areas are uncommon, and the C-shape you mentioned is very unlikely to occur for any objects other than ships, which are already taken care of by the aforementioned steering mechanisms (at least for friendly units; I'm not sure yet how to react to enemy units, but to be honest I think it might be best to tackle that problem later). I think I have an idea of how influence maps would work, but again I wouldn't mind you explaining the concept a bit. I really like the reactive system, instead of using a predictive system that seems overkill. I'm going to have to see if it will be easy to implement this in the client-server architecture, to see how much work will need to be done on the server to achieve synchronization, but at least for offline purposes it seems to be exactly what I was intuitively looking for. Our game will use a fog-of-war system that only takes into account entities that are close to your own planets/ships, so going with a reactive system makes sense. @Katie: I've gotten some good ideas from your post as well. Right now it seems I'll be going in a direction where not everything you said will be applicable anymore, but there are some hints in your post that I find interesting, such as 'small' units giving way to bigger ones. I'd also not thought of using my quadtree for keeping track of this info, which is something I will definitely keep in mind!
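For readers landing here, the influence-map idea mentioned above can be sketched roughly like this: each unit stamps a distance-falloff value into a coarse grid, and steering then prefers low-influence cells. The grid resolution, linear falloff, and all names below are my own assumptions about the suggestion, not a prescribed design.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal 2D influence map: units call addInfluence() around their cell;
// a pathfinder or steering layer reads at() to bias movement away from
// (or toward) crowded cells. Falloff is linear with distance.
struct InfluenceMap
{
    int w, h;
    std::vector<float> cells;

    InfluenceMap(int w_, int h_) : w(w_), h(h_), cells(w_ * h_, 0.0f) {}

    void addInfluence(int cx, int cy, float strength, int radius)
    {
        for (int y = std::max(0, cy - radius); y <= std::min(h - 1, cy + radius); ++y)
            for (int x = std::max(0, cx - radius); x <= std::min(w - 1, cx + radius); ++x)
            {
                float d = std::sqrt(float((x - cx) * (x - cx) + (y - cy) * (y - cy)));
                if (d <= radius)
                    cells[y * w + x] += strength * (1.0f - d / radius);
            }
    }

    float at(int x, int y) const { return cells[y * w + x]; }
};
```

Since everything in the described game moves, the map would be cheap to rebuild (or decay) every few frames rather than kept incrementally exact, which also fits the reactive philosophy discussed above.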
  4. Hi, In a game I'm developing, I am hitting the stage where I need to implement pathfinding. I've been doing research on different methods of pathfinding, to see what would work best in my situation. However, most, if not all, articles I found assume different game types with problems that aren't present in mine. I have a feeling I can get away with a fairly simple method of pathfinding, and don't need a lot of the overhead traditional pathfinding methods introduce. This topic is intended to discuss theory, while bearing in mind implementation details such as ease of implementation and performance. [b]Description of the game/problem:[/b] This game is essentially an RTS, rendered in 3D but with all the game logic in 2D. (I think this is called 2.5D?) Meaning, space ships, planets, etc. move over the X,Y-plane. Meshes are 3D and the camera is fully free in 3 dimensions. For pathfinding, I will thus be working in 2D. The game takes place online only. The client application receives all the world, ship and other data from the server, and constructs the world out of it dynamically. The universe is divided into solar systems, and the player will only know about the solar systems near to him. So basically, there's no advance knowledge when the game is started, so I can't pre-fabricate navigation layers, waypoint graphs, or anything. It all needs to be done in real time. Every object that needs to be taken into account will have an X,Y-position and a radius (spherical collision). The player can click on a location in space with one or multiple units selected, in order to make the units move to that position. Thus, a unit will set a destination. The server will be notified and has to be able to figure out the path the unit is moving along, to predict its position at a given time, until the destination is reached.
The first issue occurs here: Should the server be able to calculate the path the unit will take, or should this be done client-side only, after which the necessary data points are all sent to the server? The issue arises because the server has full knowledge of any (enemy) units that might get in the way of a path, yet the player might not know about these ships (and if that's the case, we don't want to take those ships into account, because it would give the player information that he shouldn't have). Stepping away from the client-server problem, the real issue why I created this topic is the pathfinding algorithm itself. I've studied AI for a few years, and I've read a number of pathfinding articles recently, but I feel most of the solutions are overkill or just not well suited, given the relative simplicity we can assume (spherical objects, a relatively small number of objects in the game world), and on the other hand the dynamic nature of the world and everything in it (everything moves: planets, units, and other entities). Most algorithms use waypoints (in the A* sense of the term) or a navigation layer, neither of which I think is particularly suitable for this game. So I threw everything I knew about pathfinding overboard and decided to see if I could design a system that made sense for this particular game. [b]What I've come up with [/b]Not yet taking into account moving multiple units at once (which will introduce formations, taking into account different unit properties such as velocity and turn speed, and other variables that make matters more complicated), the problem seems easy. Given a position and a destination, just calculate the straight line between the two. Scan along this line for collisions, taking into account the radius of the ship that's moving and the radius of objects that are checked for collision.
If a collision is detected, project the position (not taking into account radius) of the collider onto the traceline, create a waypoint there, and move the waypoint away from the collider's position just far enough so the moving unit can go around it. Check the new line segments for new collisions, and keep repeating until we've got a proper trajectory. Follow this trajectory (which could be sent to the server as a number of destinations, so the server doesn't have to do its own pathfinding). A number of things aren't taken into account in this simple scenario. First of all, we don't have any predictive value. All that can be done is check that there are no obstacles in the way at the time when the path is calculated. Since all obstacles are dynamic, chances are that obstacles will appear down the line later on. This will be especially true when fleets are engaged in battle, where there could be as many as fifty-or-so ships flying around quite near each other. So, I'd somehow have to construct a mapping that not only describes the position of a given unit, but also the time at which the unit is in the given position. In itself not a hard task, but scanning this list for collisions when creating a path from a to b might become complicated and slow if not done properly. At the very least I'll need to use some kind of hash mapping to make this feasible. Second, when two objects are planning a path, and run into each other at some point, we wouldn't want both objects to alter their route with the logic above, since their alterations will often make them run into each other again. How will I determine which object deviates from its ideal path, or will they both do it? Will one object slow down to a stop to let the other pass? (Note that this is very costly in space; the ships accelerate slowly and ideally always keep moving.) Third, objects like planets, although dynamic, should obviously never stray from their defined paths.
A planet isn't moving fast, but in an ongoing world, there will be occasions when a planet is about to collide with a ship that's just sitting there, and the ship should see this collision coming and move out of the way autonomously. Fourth, since units are not completely free in their movement (they can't slow down or get to full velocity instantly, nor can they turn an infinite amount in finite time), when additional waypoints are introduced, the whole trajectory of the unit will have to be smoothed out to make sure the unit can actually follow the path. This also introduces the fifth (and last I can think of) problem (more of a constraint): though a certain path may be the shortest possible, given the parameters of the unit it might still be non-optimal because it's too slow (if a unit has to come to a full stop five times, it will take much longer than when the unit can move over a path that is longer, but requires no slowing down at all). Ideally, the time it takes to complete the path is taken into account in the search for one. [b]This is where YOU come in [/b]Given this problem description, I'm looking for either/both solutions to the described problems, or existing pathfinding algorithms that match the properties of this game as well as possible. Also, I'm interested to see if you can come up with problems that I've overlooked and need to take into account. Thanks in advance for any help!
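The "scan along the straight line for collisions" step described above reduces to a segment-versus-circle test: the path from a to b is blocked by an obstacle if the obstacle's centre comes within the combined radii of the closest point on the segment. A minimal sketch, with all names mine:

```cpp
#include <algorithm>

struct P2 { float x, y; };

// True if the straight path from a to b (swept by a ship of shipRadius)
// intersects a spherical obstacle. Projects the obstacle centre onto the
// segment, clamps to the segment's ends, and compares squared distances
// to avoid a square root.
bool pathHitsObstacle(P2 a, P2 b, P2 obstacle, float shipRadius, float obstacleRadius)
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float lenSq = dx * dx + dy * dy;
    float t = 0.0f;
    if (lenSq > 0.0f)
        t = std::clamp(((obstacle.x - a.x) * dx + (obstacle.y - a.y) * dy) / lenSq,
                       0.0f, 1.0f);
    float cx = a.x + t * dx - obstacle.x;            // closest point minus centre
    float cy = a.y + t * dy - obstacle.y;
    float r = shipRadius + obstacleRadius;
    return cx * cx + cy * cy < r * r;
}
```

The clamped parameter t is also exactly the projection point the post proposes placing the detour waypoint at, so the same computation serves both detection and waypoint creation.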
  5. Vision Game Engine pricing

    Another comment, a bit off-topic, but it might save you from a huge pitfall/money drainer. I quote: Quote:When we finish the game (it will probably take 3 - 4 years, because we are learning the technology while making the game) we will register immediately a company and will start searching for publishers who will definitely pay attention to us, because they will see a working team, they will see a good title. Maybe I'm misinterpreting your post, but it seems to me a lot of work in your 'company' (let's just call it that) that can be done without licensing an engine is not done yet. From my experience, the Trinigy Vision engine is quite easy to work with. I personally loved it at least, but granted, it was my first (and last) professional/commercial engine, so I have little to compare it to. What I intend to say is that it will not take you long to become productive with this engine, given that you have enough skilled programmers in your team. The point I'm getting at is this: make sure that once you start licensing this engine, or another that needs to be paid for, you already have as much work done as you can in other areas. That means you have the design of your game finished to the smallest detail possible, you have loads of artwork sitting there waiting to be used, you have a team of people ready to test/work with what the programmers create, etc. If you have prepared well, you should be able to license this engine or another for a reasonable price (I don't know if I'm allowed to say what we paid for it, so I won't), because you will not need to license it for 5+ years. In my opinion, as I kind of said in the last post already, after you've prepared, try to get a license for developing a prototype that will not be released; in other words, get the engine for a limited amount of time just for try-out purposes. That will keep your costs as low as possible. Set milestones for the time you have the license (say 12 months).
Set goals in advance that need to be reached in order to conclude the project is successful enough to extend your license. Also, use your prototype to attract investors/publishers and such. Waiting till your game is finished is not gonna cut it, believe me. If you can show that your company was able to produce a decent prototype (given the time and resources available), that should make them confident you can in fact finish the whole project, as long as your 'plan' is good enough (like I said: detail, detail, detail. Every aspect of your game needs to be on paper, preferably up to the level that programmers can just 'translate' it directly to code and it's done :)). Disclaimer: all this advice is given by a huge amateur, with very limited experience in this sort of thing. I just want to prevent you from making some of the mistakes we made, and that killed our project :)
  6. Vision Game Engine pricing

    We used to use this engine in an indie project. I was not the one setting it up, but we were all amateurs, so a lot of the comments above that they are only interested in well-established companies surely wasn't true at that time. In fact, they were very patient with us, in the sense that they gave us a lot of support on issues we really should've been able to work out ourselves. Of course I don't know how much they've changed since then, but I can't imagine it being THAT much. I don't know what kind of an e-mail you sent them (that they did not respond to), but I guess it helps to be specific about the kind of deal you want to make. So for example, state that you want to make a title for platforms x and y; at first you want a license for 6 months to develop a prototype, with no public releases, and from there on you may or may not license the engine for at least 2 more years, with the option of prolonging, and such and such plans on releasing, etc. Worst case, the e-mail you sent them was as general as 'what does it cost to use your engine?', and in that case I'm not surprised you don't get a response. If you do some research into how licensing works, and can ask specific questions, I'm quite sure they will be able (and willing) to give you an offer. After all, they need to make money too :)
  7. Writing an OpenGL/DirectX render...

    Just an addition: you might wonder what you should do with things like textures (where to store them, etc.). What I do is keep a list somewhere that contains pairs of 1. a string (the key) and 2. the object itself (a texture in this case). I store all loaded textures in this list (which in C++ would logically be implemented with a std::map iirc), like this: (pseudo code) list.Add( new Entry( "textureIdentifier", LoadTexture( "filename" ) ) ); The renderer should have access to this list, and each renderable object just stores a string that contains the texture identifier, instead of a local copy of the texture itself. This makes resource management a lot easier: cleaning up, using the same resource multiple times (possibly with reference counting), and similar tasks become simpler. You could use a similar approach for shaders, and possibly other data.
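In the std::map form mentioned, the registry above might look like the following sketch. Texture here is a stand-in type (a real engine would store the API handle, and likely a reference count); the class and method names are illustrative only.

```cpp
#include <map>
#include <string>

// Placeholder for whatever the graphics API hands back from LoadTexture().
struct Texture { std::string filename; };

// String-keyed resource registry: each texture is stored once; renderable
// objects keep only the key and the renderer resolves it at draw time.
class TextureCache
{
    std::map<std::string, Texture> textures;
public:
    void add(const std::string& key, const Texture& tex) { textures[key] = tex; }

    // Returns nullptr when the key is unknown, so callers can fall back
    // to a default texture instead of crashing.
    const Texture* find(const std::string& key) const
    {
        auto it = textures.find(key);
        return it == textures.end() ? nullptr : &it->second;
    }
};
```

Returning a pointer (rather than a copy) is what makes the "one shared resource, many users" property of the scheme actually hold.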
  8. Writing an OpenGL/DirectX render...

    The best advice I can probably give is to think about this stuff on a conceptual level, not too much on a code level (yet). What I always start out with is writing down for myself how I ultimately want to USE the system (what calls I want to make and such). Conceptually, keeping the winapi code separate from the renderer seems a good idea. Personally, I usually create a WindowManager class that does all winapi-related stuff for me. As you probably already know, winapi can result in long code, which is something that you generally write once and don't touch often afterwards. This to me seems like a good reason to keep it away in a separate class; besides, your renderer has nothing to do with your window management. You mentioned you want to implement both OpenGL and DirectX, so I assume you would also keep open the option of cross-platform. This means you could do the same for the WindowManager as for the renderer: create an abstract version of the class that other, implementation-specific classes can derive from (for example one winapi implementation and one for Linux). Generally I start by letting the window manager create the winapi forms, and pass one control (window or other control) to the renderer to use as a rendering context. As for the renderer itself, this can become a bit complicated, but for now you mentioned you want to keep things simple, and you seem to be on the right track. This is what I do: My whole scene (so all objects, not just renderable ones but also things like lights) is stored somewhere in a Scene class. This class gets the info from a loaded map, for example. It contains a list of all renderable objects, which are kept separately from non-renderable objects (once again, such as lights, although arguably you could render those as well. The camera is another important example). I use a scene class to have everything in one place, and it also holds my algorithm for culling non-visible objects.
Personally I use stuff like octrees and quadtrees, but for simplicity's sake I'm just going to say that for each object, we check if it can be seen by the (active) camera. If it is visible, the scene class passes the object to the renderer class. Since the object is a renderable object, it derives from a RenderableObject class. I do this to ensure that the renderer has all the information it needs to draw the object. So this could be vertices (indices), textures, etc. So this renderable object arrives at the renderer, which simply adds the object to a queue it keeps. Later that same frame, a call is made to renderer.Draw(). The renderer now takes the queue, which is filled with ALL visible, renderable objects. Each object has the same data that is required to draw it on screen. The renderer may even sort the queue, for example to draw all objects that use the same texture first before drawing any objects that require a texture switch, and when the sorting is done, for each item in the queue the renderer just calls a render function that takes the info from the object and draws it on screen. This way, the only thing you need to worry about with your objects is that they contain the necessary information. This way, you can just always do the same thing:
    1: Load an object, e.g. a model from a file
    2: Insert the object into the scene
    3: Update the scene (this determines visible objects and passes them to the renderer)
    4: Renderer puts the item in its queue
    5: Renderer renders all objects in the queue (possibly after sorting)
    6: Renderer clears the queue (don't forget this step :))
I hope this answered your question a bit :) Note that the described design is a bit more than you asked for, but I just wanted to draw you the bigger picture I had in mind when designing this, and show you that such a system can start out really simple, but can easily be extended to include culling, shaders, optimization (by sorting the renderer's queue), etc.
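The queue-and-sort step above can be sketched as follows. Sorting by texture key groups objects that share a texture so state switches are minimised, and the queue is cleared after drawing (step 6). Types and names here are illustrative, not the poster's actual classes; a real draw() would issue API calls where this sketch merely records the order.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Minimal renderable: the texture key used for sorting plus an id
// standing in for "everything needed to draw it".
struct Renderable { std::string textureKey; int id; };

class Renderer
{
    std::vector<Renderable> queue;
public:
    void submit(const Renderable& r) { queue.push_back(r); }

    // Sorts the queue by texture key (stable, so same-texture objects keep
    // their submission order), "draws" by recording ids, then clears.
    std::vector<int> drawOrder()
    {
        std::stable_sort(queue.begin(), queue.end(),
            [](const Renderable& a, const Renderable& b)
            { return a.textureKey < b.textureKey; });
        std::vector<int> order;
        for (const auto& r : queue) order.push_back(r.id);
        queue.clear();                                   // step 6: don't forget this
        return order;
    }
};
```

Real engines usually extend the sort key beyond texture (shader, depth, blend state), but the structure stays the same.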
  9. I don't really understand your problem, and we're probably going to need to see your code to fix this. From what I understand, you want the object to only be drawn once to the screen, but when you use both shadows and bump mapping it gets drawn twice, but you only see this when you do certain camera movement. First of all, the problem is not your camera movement in this case (although it is indeed very weird that one of the 2 objects gets translated like in the screenshot, but this is a different bug). The whole bug seems to be that you actually draw your scene to the backbuffer twice! This will not only be really slow, but it's totally wrong. From what I see in the screenshots, bump mapping works, but there are no shadows. So what I expect that you do: You render the shadow pass to the backbuffer, where you probably want to render it to some other buffer like the depth buffer or a texture (assuming you use shadow mapping and not stencil shadows). Either way, you don't want to render this pass to the screen. The second pass (bump mapping) should take into account the shadow map generated in the first pass, and apply that (next to the bump mapping itself) to produce the shadows. That way, each object will be drawn to the screen only once (in the 2nd pass), and you should never see that double object anymore. About the modelViewProjection matrix, why do you set it to identity? That seems weird to me. If this is not your problem, please elaborate a bit more. Your post isn't exactly clear and you provide very little information (no offense, just trying to be helpful).
  10. As far as I understand what you are trying to do, you indeed don't have to duplicate each vertex 4 times. Smarter solutions exist. To start off, you probably want to render your terrain as one big list of vertices. Just rendering vertices is not what you want though; you want indices as well. Basically you want to create every vertex once, so something like this:

    vertices = new Vertex[vertexDimensions.X * vertexDimensions.Y];
    for (int z = 0; z < vertexDimensions.Y; z++)
    {
        for (int x = 0; x < vertexDimensions.X; x++)
        {
            vertices[z * vertexDimensions.X + x] = new Vertex(
                new Vector3((float)x * tileSize, 0.0f, (float)z * tileSize),
                new Vector2((float)x * textureScale, (float)z * textureScale));
        }
    }

This assumes vertices is a one-dimensional array of Vertex's (lol), and each Vertex is constructed with a Vector3 position and Vector2 texture coordinates. Note that each vertex is created only once, but we are going to use the same vertex for multiple polygons! By the way, tileSize is the size of one terrain tile, so how far vertices are away from each other, and vertexDimensions is a Vector2 containing the number of vertices in the x and z directions. (I'm assuming Y is your up vector.) textureScale is just a float value that says how often a texture should be repeated for one tile. For me, this value is always 1.0f or smaller. The trick comes from the indices: (note that tileDimensions is also a Vector2 like vertexDimensions, but both the X and Y are one lower, since you need x+1 vertices to create x tiles in one direction).
    indices = new int[6 * tileDimensions.X * tileDimensions.Y];
    int currentIndex = 0;
    for (int z = 0; z < tileDimensions.Y; z++)
    {
        for (int x = 0; x < tileDimensions.X; x++)
        {
            indices[currentIndex++] = z * vertexDimensions.X + (x + 1);
            indices[currentIndex++] = z * vertexDimensions.X + x;
            indices[currentIndex++] = (z + 1) * vertexDimensions.X + (x + 1);
            indices[currentIndex++] = (z + 1) * vertexDimensions.X + x;
            indices[currentIndex++] = (z + 1) * vertexDimensions.X + (x + 1);
            indices[currentIndex++] = z * vertexDimensions.X + x;
        }
    }

Basically this loops through all the tiles in the terrain and creates 6 indices for each tile (each tile is a quad, so 2 triangles, so you need 6 indices of existing vertices). This gives you what you need (at least I think). Now just draw the vertices using the indices (look it up if you don't know how), and your terrain will be there. Make sure you use the right ordering for indices (clockwise or counter-clockwise, depending on your culling). If you want to use frustum culling for terrain, it gets a little more complicated. You probably want to divide your terrain into smaller parts. The easiest way to do this is to create index buffers for each part, instead of one big index buffer.
  11. Hi, I'm working on a simple real time strategy game. I've come to the point where it's not viable to draw the whole scene to the buffer anymore; I really need to start not-rendering the parts that cannot be seen. I've implemented octrees and quadtrees before in simple situations, but I'm not sure this is the best way to go for a strategy game. Since in a typical RTS the view direction is pretty limited (isometric perspective, though not at a fixed rotation in this case), I think I can settle for something a little less overkill. As far as I can see, quadtrees at least will be more suitable than octrees. Since height differences in RTS games are usually relatively small, I think I can ignore this component and do 2.5D partitioning, meaning I just give each 'quad' a constant height (max height - min height of the world). But even quadtrees don't seem to be 'simple' enough. Basically the map consists of a heightmap, and a number of objects on it (foliage, rocks, etc.). All the buildings (except for maybe a few that are already added to the scenario) are placed by the players, and their location can therefore not be known in advance. Units will be moving around and will probably need a separate mechanism for culling anyway. So the property of quadtrees that the leaf nodes are (partially) determined by the amount of polygons they hold does not seem to help much here. So the best I can think of is to just divide the terrain into a number of (2.5D) regions with fixed size, and only draw those regions that are (partially) in view. Each region should probably have references to the regions around it (max. 8). These references can be used to easily look up the neighbouring regions, and also to place dynamic objects like units in the right region when they move over the borders of the one they are currently in.
This then only leaves me with the cases where a unit is actually on the border of a region, where the old region isn't currently in view but the new one is (and the border is as well), in which case the unit might not be drawn although it is partially in view. These cases will not happen often, though, since they require the camera to be positioned so that a region border coincides with an edge of the screen, exactly where a unit happens to be crossing over. It might be more of a problem when a building is placed on an edge, though, since it will remain there. I guess one thing that can be done is just put the object in both regions, and give it an 'already rendered this frame' flag. Is there a better way to do this for a strategy game? Am I missing something? And what is the logical place for the structure that divides up the game world? For now I've only been talking about terrain, since it logically determines the bounds of the game world in this type of game, but the space partitioning structure should probably live outside the terrain class. The way I see it now, the best thing to do is something like this. Each object in my game that can be drawn is derived from a DrawableObject class, to ensure it has the needed properties when it's passed to the renderer. I think the best thing to do is set a terrain size, so I know the bounds, pass these bounds to the space partitioning structure and let it create a number of regions, then pass these regions back somehow to the terrain and use that info to create one DrawableObject (containing a vertex/index buffer of terrain) for each region. I have one question regarding vertex/index buffers at this point: Can I just make one big vertex buffer containing all the vertices of the whole terrain, and create separate index buffers for each region, or do I have to create separate vertex buffers for each region as well? Because the latter introduces some big problems for me...
Now that I have a set of regions and landscape in each region, I can just place buildings and units in the correct region when they spawn. Units could do a check after movement to see if they crossed region borders, and be reinserted in a neighbouring region. When drawing the scene, I just give the renderer the regions, the camera frustum, and calculate what can be seen. All DrawableObjects from the regions that are visible can then be passed to the renderer to be drawn. Please let me know if there's a better way to do this, or if you foresee any problems with the described approach. Thanks a lot for any feedback.
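The "place objects in the correct region" step described above is what makes the fixed-size grid attractive: mapping a world position to a region is a constant-time division, with no tree traversal. A minimal sketch, with parameter names assumed:

```cpp
#include <algorithm>

// Maps a world-space position to the index of the fixed-size region that
// contains it. Positions outside the terrain bounds are clamped to the
// nearest border region, so units crossing the edge stay trackable.
int regionIndex(float x, float y, float originX, float originY,
                float regionSize, int regionsX, int regionsY)
{
    int rx = std::clamp(int((x - originX) / regionSize), 0, regionsX - 1);
    int ry = std::clamp(int((y - originY) / regionSize), 0, regionsY - 1);
    return ry * regionsX + rx;
}
```

A unit that moved this frame can recompute its index and, if it changed, remove itself from the old region's list and insert into the new one, which is exactly the border-crossing bookkeeping the post describes.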
  12. Thank you for your reply. I should've looked a bit harder, because quite soon after I posted this topic I saw the example on XNA Creators you linked here. Thanks anyway for linking it. It does seem to do exactly what I need, but I'm currently having issues importing this Visual Studio 2005 project, since I use 2008. The conversion seems to go wrong at some important parts of the project, so I started a thread about this on the XNA CC forums. On the other hand, 'replacing' the functionality of the classes shouldn't be too much of a problem indeed, but from what I've seen so far I kind of like the content system XNA uses, and afaik I will have to ditch that too if I don't use their WinForms example, and I'm not sure I'm ready to do that. So for now I'll try to get the XNA WinForms example working.
  13. Hi, I've just made the switch to using C#/XNA for my projects. So far everything is going great, but there's one thing I can't really find any documentation on, even though it probably should be quite simple to achieve. By default XNA creates a form for me which is used as a render context, but since I'm working on a map editor in which I want to use the C# form tools to add controls and such, I want to be able to create my own forms, display them, and let XNA render to a control on one of these forms (could be a form, could be a picture box, or whatever). I'm used to having to initialize the render context myself, so I could always pass in a control, but with XNA it all happens behind the scenes. So I have to achieve 2 things: 1. Pick my own render context for XNA rendering. 2. Find a way to display my own form(s) instead of the default one that XNA creates (and update them, respond to user actions, just like you would expect). The first point is where the problem currently lies. I haven't really thought about the second problem yet, but it's probably not going to be very straightforward now that I think of it, so I just included it in this post in case you have any thoughts/advice/pointers about it. Google and XNA Creators Club didn't seem to offer the solution to me. Thanks in advance for any help.
  14. I know there have been quite a few topics about this subject, and I've read a lot of them, but I still have a few issues I can't resolve with my 'effect system' design. I've pretty much automated all drawing in my game framework: each frame I pass all meshes that should be drawn to the renderer, and the renderer does all the drawing automatically. This was very easy until effects (.fx files) kicked in. I am writing a number of shader effects like per-pixel lighting, shadow mapping, texture splatting, bump/parallax mapping, etc. Now I'm having issues automating this in code. These are the problems I foresee:

In a map editor, the end user will be able to select geometry and choose an effect from a list (e.g. diffuse lighting, bump & diffuse, etc.). They will probably also be able to toggle options like "Casts shadows" on renderable objects. This means I do not know in advance which effect is desired, and a combination of effects may be necessary (e.g. a bump & parallax mapping effect with "Casts shadows" enabled is basically two or three effects). In addition, for flexibility I might need to allow users to define their own effects (even if it is as simple as letting them write and load their own .fx files). Also, when the application starts, the user's computer will be checked for shader support; on older hardware certain effects may not be available and will need to be replaced by supported effects that resemble the desired effect as closely as possible.

All in all, there are a lot of factors that make it pretty much impossible for the renderer to know in advance which shader parameters need to be set via SetValue. I might need light information, textures, matrices, etc., depending on the effect(s) chosen to render the object. The way I see it, retrieval should be fairly simple: when rendering a renderable object, the renderer should have access to something like an ObjectState (with object-specific settings like textures, world transformation, etc.) and a WorldState (light information, view/projection matrices, etc.). These states should hold the information required for the SetValue calls. However, since it is not known in advance what information is actually required, a 'frame problem' (for the AI people here) occurs: I basically need to store all information available so that the required info is always present. Ouch, memory wastage? But even if that is sorted out, how does the renderer know which values to set, and which info from the object/world state they should be set from? It depends heavily on the effect(s) used, and those effects might not even be known when the framework is compiled, since users may add new ones.

Another problem I have is: how do you combine effects? E.g. in the mentioned example where you can toggle shadow casting, the object would require a two-pass effect just for shadows, plus the effect it is actually rendered with. Now suppose we are drawing terrain; the 'effect it is actually rendered with' could itself be a combination of effects, e.g. some lighting effect plus texture splatting. How would I combine all this?

Last question, which is more general: is there a downside to putting all effects in one .fx file (i.e., is it slower)? It seems logical to me to put everything in one big file because of the way techniques are defined. There is the disadvantage of the file being a bit messy, but that shouldn't have to be a problem imo. Who can help me with the design of this system?
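One common way around the 'frame problem' described above is to bind parameters by semantic name rather than hard-coding SetValue calls: the renderer keeps a registry mapping semantics (e.g. WORLD, VIEWPROJ, DIFFUSETEXTURE) to getter functions that pull the value from the current ObjectState/WorldState on demand. Since an effect's declared semantics can be queried after loading (in D3D9 via ID3DXBaseEffect::GetParameterDesc), user-authored .fx files work as long as they stick to the agreed semantic names, and nothing is stored "just in case". A minimal C++ sketch; ObjectState and WorldState come from the post, while ParameterBinder, BoundValue, and the field names are hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical per-object and per-frame state, as described in the post.
struct ObjectState { float world[16]; std::string diffuseTexture; };
struct WorldState  { float viewProj[16]; float lightDir[3]; };

// Stand-in for a resolved shader parameter; a real D3D9 renderer would
// pass data/size to ID3DXEffect::SetValue instead of collecting it.
struct BoundValue { std::string semantic; const void* data; std::size_t size; };

class ParameterBinder {
public:
    using Getter = std::function<BoundValue(const ObjectState&, const WorldState&)>;

    void registerSemantic(const std::string& semantic, Getter g) {
        getters_[semantic] = std::move(g);
    }

    // Called per draw call: only the semantics this effect actually declares
    // are resolved, so no state has to be copied ahead of time.
    std::vector<BoundValue> bind(const std::vector<std::string>& effectSemantics,
                                 const ObjectState& os, const WorldState& ws) const {
        std::vector<BoundValue> out;
        for (const auto& s : effectSemantics) {
            auto it = getters_.find(s);
            if (it != getters_.end())
                out.push_back(it->second(os, ws));
        }
        return out;
    }

private:
    std::map<std::string, Getter> getters_;
};
```

Combining effects then falls out naturally: a "shadow pass + splatting pass" object just runs the same binder once per pass, each pass declaring its own semantics. This is a sketch of one possible design, not the only way to do it.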
  15. IDEs? C++ vs C#??

    I would actually like to suggest going for C#. There are a few downsides, but I recently switched from C++ to C# and I am getting so much more done now, plus my code looks a lot better and is easier to maintain. The biggest downside for me is the lack of an obvious API choice. In C++ I'd always use DirectX 9 or OpenGL; both are really good and there is great support. In C#, the main choices (afaik) are XNA, MDX and SlimDX. MDX (Managed DirectX) is no longer maintained, so it's probably not the best idea in the long run; XNA is sort of its replacement but is tied to the Xbox, which imposes some limitations; SlimDX is a third-party sort-of-replacement framework for MDX. I use the latter at the moment and it's great, but its documentation/support is in its infancy. Luckily, the creators are GDNet members, so you are close to the source.