Community Reputation

559 Good

About rioki

  • Rank
  1. Nuclear Launch Console

    You both raise good points. Getting the timing right is sort of hard. On the one hand, the experience of a launch officer is basically just waiting around. On the other hand, when you distill that into a game, I understand why people would go "why the hell would I do this?". I am thinking of putting more events into the game and reducing the interval between the drills. I was also thinking about messages sent by the enemy that would not authenticate. Basically, put more meat into the "story".
  2. Nuclear Launch Console

    Technically speaking, once the missiles are unlocked you can hit the launch button at any time. There is no "software limitation", and that is sort of the catch.   The real question is: did you play the game until you got the real launch order? Did you launch the nuke?   This is what bothers me most; I fear that many will give up way before that. There are a total of 5 drills spaced 2 min apart, with the final order 2 min after the last drill. After drill 3 you get the alert message (sort of DEFCON 3) and after the 4th drill you get a target update message. But I think that most people will be put off by the waiting.   BTW, if you want to read the epilogues without waiting 15 min, you can read them at and   Thank you, I will think about your input. I was thinking of implementing a "manual" that slides in from the side with more detailed instructions.
  3. Nuclear Launch Console

    If you want to spare 15 min of your time and find out if you have what it takes to launch a nuke, find out with the Nuclear Launch Console.   [attachment=23120:2014-08-11 21_13_01-Nuclear Launch Console.png]   Seriously, if you can find 15 min of your time, I would like some constructive feedback. It is a small experience I hacked together over the course of two weeks (~20h), but still interesting, I think.
  4.   "technically, OOP is not required for implementation of a C-E system."   That is true, but it helps.
  5. The entity component model has been all the rage for the last decade. But if you look at the design, it opens more issues than it solves. Broadly speaking, the entity component model can be summarized as follows: an entity is composed of components that each serve a unique purpose. The entity does not contain any logic of its own but is empowered by said components.

    But what does the entity component model try to solve? Each and every illustration of the entity component model starts with a classic object-oriented conundrum: a deep inheritance hierarchy with orthogonal purposes. For example, you have a Goblin class. You form two specializations of the Goblin class, the FlyingGoblin who can fly and the MagicGoblin who uses magic spells. The problem is that it is hard to create the FlyingMagicGoblin. Even with C++ and its multiple inheritance, you are not in the clear, as you still have the dreaded diamond and virtual inheritance to solve. But most languages simply do not support a concise way to implement it.

    When solving the issue with components, the components GroundMovement, FlyingMovement, MeleeAttack and MagicAttack are created, and the different types of goblins are composed of these. Good job, now we went from one anti-pattern (deep inheritance hierarchy) to a different anti-pattern (wide inheritance hierarchy). The central issue is that the inheritance hierarchy tries to incorporate orthogonal concepts, and that is never a good idea. Why not have two object hierarchies, one for attack modes and one for movement modes? As you can see, from an object-oriented standpoint the entity component model fares quite poorly.

    But that is not the only problem the entity component model tries to solve. In many cases you see the concept of a data-driven engine. The idea is that you can cut down on development time by moving the object composition out of the code and into some form of data.
    This allows game designers to "build" new objects by writing some XML or using a fancy editor. Although the underlying motivation is valid, it does not need an entity component model, as a few counterexamples show quite well.

    Putting the structural criticism aside, a naive implementation of the entity component model can in no way be efficient. In most cases the components are not such high-level concepts as moving or attacking; they are more along the lines of rendering and collision detection. But unless you have additional management structures, you need to look at each and every entity and check if it has components that are relevant for the rendering process. The simplest way to resolve the issue, without altering the design too radically, is the introduction of systems. In this case the actual implementation is within the systems, and the components just indicate the desired behaviour. The result is that a system has all the relevant data in a very concise and compact format, and as a result it can operate quite efficiently.

    But if you look closely, you can see that we now have all these useless components. What if you removed the components, just used properties on the entities, and had the systems look for appropriately named properties? Now you have duck typing. Duck typing is a concept that is used a lot in weakly typed languages, like JavaScript or Python. The main idea is that the actual type is irrelevant, but specific properties and functions are expected on a given object within a specific context. For example, it is irrelevant whether it is a file stream, a memory stream or a network socket: if it has the write function, it can be used to serialize objects to. The problem with duck typing is that it does not lend itself easily to native languages. You can cook up some solution using some variant type, but in no way is this an elegant solution.
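    The closest a natively compiled language like C++ gets to the stream example is compile-time duck typing via templates. Below is a minimal sketch (all type names are hypothetical, chosen just for illustration): the serializer accepts anything that has a write() member, without any shared base class.

```cpp
#include <string>
#include <vector>

// Hypothetical sink types: neither inherits from anything, they just
// happen to both provide write().
struct MemoryStream {
    std::string buffer;
    void write(const std::string& data) { buffer += data; }
};

struct CountingStream {
    std::size_t bytes = 0;
    void write(const std::string& data) { bytes += data.size(); }
};

// The template never names a concrete type; it only requires that
// write() exists -- duck typing resolved at compile time.
template <typename Stream>
void serialize(Stream& out, const std::vector<std::string>& values)
{
    for (const auto& v : values) {
        out.write(v);
    }
}
```

    This works, but note the catch the text alludes to: the flexibility is gone at run time, since the set of usable types is fixed when the program is compiled.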
    Chances are you already have a scripting system; in that case the solution is quite straightforward: you implement the core game logic in scripts, and the underlying systems look at this definition and implement any heavy lifting in native code. The idea of alternating hard and soft layers is nothing new and should be considered wherever both flexibility and performance are needed.

    You may think that implementing the game logic in scripts is an inefficient way to do it. In cases where you are building a simulation-oriented game this may be quite true. In these cases it makes sense to extract your logic and reduce it to its core concepts, a simulation if you will. Then implement the presentation and control layers externally, directly against the simulation layer. The nice thing about this design is that you can split the presentation layer and the simulation so far apart that you can put one of them on one computer and the other on a different one. Wait, did you just describe MVC? Um... No... Stop changing the subject.

    When looking into scalability you get interesting requirements. One very clever implementation of the entity component system was made to scale in an MMO setting. The idea here is that entities and components do not exist in code but are entries in a database. The systems are distributed over multiple computers, each working at its own pace, reading and writing to the database as required. This design addresses the needs of a distributed system and reminds me of the High Level Architecture used by NASA and NATO to hook up multiple real-time simulations together. Yes, this design approach even has its own standard, IEEE 1516.

    Ok, oh wise one, what should we do? If you are reading about these issues, you are either building a game engine or a game. Each and every game has different requirements, and game engines try to satisfy a subset of these different requirements. Remember what you are developing and the scope of your requirements.
    If your current design sucks, you do not need to go overboard with your redesign, because chances are you aren't gonna need it. Try to make the smallest step that will solve the problem you are facing. It pays to be creative and look at what others have done.
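    To make the "components as data, logic in systems" idea from above concrete, here is a minimal sketch (all names hypothetical, not from any real engine): entities are just ids, components are plain structs, and one system owns the logic and iterates only over the data it actually needs.

```cpp
#include <unordered_map>

// An entity is nothing but an id.
using Entity = int;

// Components are pure data and indicate desired behaviour.
struct Position { float x = 0, y = 0; };
struct Velocity { float dx = 0, dy = 0; };

// The system holds the component data in compact maps and contains
// all the logic; entities without both components are never touched.
struct MovementSystem {
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Velocity> velocities;

    void update(float dt)
    {
        for (auto& [entity, vel] : velocities) {
            auto it = positions.find(entity);
            if (it != positions.end()) {
                it->second.x += vel.dx * dt;
                it->second.y += vel.dy * dt;
            }
        }
    }
};
```

    A real engine would use cache-friendly arrays instead of hash maps, but the structural point stands: the system sees only the data relevant to it, instead of scanning every entity for matching components.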
  6.   Yes! No! Maybe! Having implemented many a system, inside and outside of game development, I can say that an ECS neither favors nor detracts from a solid concurrent execution model. The strongest case for the ECS is the Intel paper Designing the Framework of a Parallel Game Engine. Even though they never use the words entity or component, if you look at their universal object and associated specific system objects, it quickly looks like an ECS. In this paper they basically use functional decomposition to get as much concurrency as possible.

    On the other hand, in their excellent video series Don't Dread Threads, they make the very valid point that functional decomposition has its limits. You may actually get more out of your system by doing data decomposition. There are two key benefits: first, you can probably achieve a higher level of concurrency, and second, in most cases you can get away with almost no synchronization primitives. Using OpenMP to parallelize a few core loops may actually give you the biggest bang for your buck. Granted, the real art is to do both, functional decomposition and data decomposition.

    What an ECS provides is an extreme level of flexibility at the cost of added complexity. In regard to concurrency it fares similarly to other solutions. The key point is the added complexity; if a small simple system on one core outperforms a highly concurrent ECS because of memory locality, you have gained nothing. Only use an ECS if you really need these extreme levels of flexibility.

    Concurrency is like optimization: don't do it until you really need it.
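    As a minimal sketch of the data-decomposition point: a core loop where every iteration is independent can be parallelized with a single OpenMP pragma and no locks. (This is an illustrative example, not code from any of the projects discussed; compiled without OpenMP support, the pragma is simply ignored and the loop runs serially with the same result.)

```cpp
#include <vector>

// Advance every particle position by its velocity. Each iteration only
// touches its own index, so the loop can be split across cores with no
// synchronization primitives at all.
std::vector<float> integrate(std::vector<float> positions,
                             const std::vector<float>& velocities,
                             float dt)
{
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(positions.size()); ++i) {
        positions[i] += velocities[i] * dt;
    }
    return positions;
}
```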
  7. I am honestly humbled by your constructive criticism. I must admit that writing this article was sort of a knee-jerk reaction to yet another starry-eyed ECS article. After some introspection, I think my biggest qualm is not with the ECS itself (it is just a pattern after all), but with the naive and oversimplified rhetoric surrounding the issue. It is a bit like the scene graph gospel two decades ago. (Two decades?! I feel old.)

    This article went through half a dozen revisions and rewrites, ranging in tone between a CS paper and the ramblings of a crazy old man. I settled for my normal and rather informal writing style, which I picked up from Jeff Atwood. The tone shifts at the end are sort of intentional; they interrupt the lull of the rhetoric and as a result make you question your line of thought and my reasoning. I must agree that this stylistic element is not the most popular.

    Typos... I am dyslexic, and even though I put a lot of effort into checking my writing for spelling mistakes, some always end up in the text. I am grateful for each typo found; I would rather have somebody point it out than have the mistake linger.

    Although I could have spelled it out more explicitly, the "OOP vs ECS" discussion is a non-starter and misses the point. Each and every article I have ever read about the ECS had two problems it tried to address: a deep inheritance hierarchy and a god class. The problem is that the ECS is not the solution to these problems; the solution is rigorously refactoring the biggest offenders against well-established OOP design principles. The result will probably not even closely resemble an ECS.

    But if the ECS does not actually address deep hierarchies or a god class, what does it solve? It provides flexibility where none would be. I fully agree with the approach the Unity engine took. Their use of the ECS is the solution to the problem "How do we get game designers to build objects without code?".
    It is no coincidence that the other place I found an ECS outside of game development was in the implementation of a scripting engine. The ECS is great at providing a high level of flexibility, especially for run-time composition, and for this it does the job quite well.

    The flip side of the argument is: if you need this high level of flexibility, why are you using a strongly typed language in the first place? Why not use a weakly typed system? It is shocking how few developers even know about the concepts of duck typing or soft/hard layer mixing.

    The reason why I did not delve into specifics, such as debugging, is that the ECS is already mildly flawed on a design level. What is bad about a wide inheritance hierarchy? The term you are looking for is "ravioli code".

    One of my biggest issues with articles about the ECS is that few talk about how an ECS could fit into the entire engine. There are sane ways to integrate an ECS into your game, but none of the code samples I have ever seen get close to sanity. (If you have actual rendering code in your components, you are doing it wrong.)

    There are valid use cases for an ECS, where the benefits outweigh the flaws. The important part is to be aware of these flaws and keep an open mind. Maybe the problem you are trying to solve can be solved differently than you initially thought.

    Thank you for the feedback; my writing and reasoning are far from perfect.
  8. Da Boom! was developed over the course of the last half year in my spare time. It is a classical bomb-laying game with a retro art style, but with the ability to play over the network. Not only can you play the game over the internet, but you can mix local multiplayer and network play; you can, for example, have two players on one end and two on the other. The focus of this project was mostly centered around developing technologies and development strategies. This is the first hobby project that I ever completed. I went out and said: either I complete this project or I give up on game development.

    What Went Right

    Limited Scope

    From the start I knew that I had to pick a small project and severely limit the scope. I can only invest around 20 hours in a good week, and this means that I had to remove as much gold plating as possible. The actual choice of the game type was triggered by a good friend complaining about the lack of bomb-laying games that worked properly over the internet. This game type, with its limited scope, fit the bill quite nicely. But even here the scope was reduced to only 3 power-ups and a restricted art style.

    Art Direction

    Although the art is technically speaking "programmer" art, it is not at the level I usually produce. I specifically aimed at a severely reduced and retro-looking art style. This art style meant that I could quickly get something on screen and play around with the game mechanics.

    Pivotal Tracker

    I almost by chance started to try out Pivotal Tracker. Originally I wanted to use no planning software at all. I have come to the conclusion, at least for my hobby projects, that the more you plan the less you actually do. The problem comes from the fact that when I see the mountain of work to do and miss a deadline, this can deal a final blow to my motivation. But I found two things that were awesome about Pivotal Tracker. It allows you to easily reorder work.
    This is important since I tend to plan things that are not needed now or even ever. This gold plating can then simply be pulled out of the plan when you notice that everything will take forever. The second thing is that deadlines, or rather milestones, are estimated from the work already done and how much there is left to do. Although you can assign a date to a milestone, there is no kidding yourself about the chance of making the milestone on time when that is not the case.

    Technology

    I have a bone to pick with most graphics libraries that are available to me. They either make really simple things hard to achieve or lack focus and maturity. Over the years I have amassed a large body of code that does most of what I need. The only problem was that it was not packaged in an easy-to-use fashion. I invested some time into building and packaging pkzo, spdr and glm. Not only do I now have usable libraries for rendering, sound and networking, I gained a significant productivity boost. I am no longer trying to integrate unwieldy libraries that waste my time because they do not build from the get-go, have weird intrusive interfaces, or come with huge setup overhead. On the other hand, I did not rewrite everything from scratch. Being able to build upon awesome libraries like SDL and the companion libraries SDL_image, SDL_ttf and SDL_mixer really cut the development time in half.

    Excessive Refactoring

    One thing that I simply cannot stand is ugly and bad code. This might be a character flaw, but I have given up working on projects because the code felt wrong. This time around I was determined to keep the code in the best shape possible. It sounds easy at first, but some things just sneak up on you. For example, the class handling the menu logic went from being small and well defined to a huge jumble of special cases. But even here it was possible to break up the code and remove most duplication. It takes severe discipline to refactor code.
    The problem is that it feels like you are making no progress at all while doing it. But it was worth the effort.

    What Went Wrong

    Lack of Discipline

    We are all human, and it is often hard to muster the strength to do all the little tedious things. The project was off to a great start, but what do you expect, this is my passion. But after the first two months went by, I started to not put in the desired time. On top of that, I also started to pick the interesting things to do instead of the really necessary ones. It is interesting how much time you can spend choosing music when the game does not even have the means to play it.

    Rewrites and Gold Plating

    I am proud to say that I only rewrote the entire rendering and networking systems about one and a half times each. Although this is a record low for me, it remains one of my biggest stumbling blocks. The first and most obvious piece of gold plating was that I migrated my rendering code from immediate-mode OpenGL 2.0 to OpenGL 3.2. Although the old code was inefficient, it totally did not matter. The scene is so simple that any decent PC is able to render it without breaking a sweat. The second piece of gold plating and unnecessary effort was making the network system asynchronous and multi-threaded. Although the networking code worked fine, the game logic broke in very subtle ways, and I ended up falling back to synchronous single-threaded code.

    Technological Blunders

    The biggest lack of foresight was the game logic and especially its interaction with the presentation layer. Although the two were weakly bound, with the help of an event-based system, it turned out to be fatal when integrating the networking layer. The multi-threaded nature of the networking system indirectly forced the presentation layer to be multi-threaded. But as it goes with OpenGL, manipulating resources from multiple threads is not a good idea.
    The real problem was that implementing mitigation strategies, such as call marshalling or locking strategies, was a significant unplanned effort. In the end I ended up calling the networking system in the main event loop. In future designs I must think about multi-threading from the start, especially if I want to get the most out of multicore systems. Then again, on such a small game, this is wasted effort.

    Missing Features

    Unfortunately I was not able to implement all the features I wanted, notably character animation, scoring and AI. My focus was on the core experience, and unfortunately I did not find the time and energy to implement them. Maybe I will add them in another go-around.

    External Conditions

    One of the biggest dampers was my professional situation. Normally I work 4-day weeks. This gives me at least one entire day to work on such hobby projects. But as the project at my day job approached its release date and things got a bit hectic, I worked 5-day weeks for 3 months. This reduction in time meant that I simply could not get as much done.
  9. Post Mortem: Da Boom!

    glm: pkzo: spdr:   I forgot to link to the libraries in the article, will amend it.
  10. First off I want to say: [b]Do not cut up user stories![/b] I have seen this advice all over the place, but it is ill advised and comes from the Waterfall model. The fallacy is that if you can't estimate one big task properly, then maybe you can estimate five small tasks properly. The underlying thought is that the errors will even out (Law of Large Numbers), but that is not true: small tasks have a bias towards underestimation, and this results in an overall underestimation. And finally, the entire idea of a user story is that it describes one chunk of useful functionality for the "user". If you chop them up, you don't deliver that functionality until all bits are done. But now that they are chopped up, you stop seeing which user stories go together, and you end up making the user wait longer and longer for the needed features.

    Back to the topic at hand. If you are accumulating this much delay this early, then probably your estimates suck. You have two options. Either you start to use an estimated-to-real-time factor to correct the error. This is what they do with XP: you estimate in ideal time and then convert that into real time for planning. Take your current factor and apply it to all additional estimates. A different approach is project velocity: you take the number of estimated man-hours you were able to complete this sprint, and that is the baseline for the next sprint. From this you can interpolate when to expect which feature to be done. These two metrics will change over time and often improve, since you often overlook mundane tasks that need to be done.

    Also, if your estimates suck, you might want to look into planning poker. I think programming is the hardest task to estimate, since each problem is a new one. Planning poker is not very accurate, but it takes into account that estimating is basically guessing, and on average the result is quite solid. Programmers are an odd bunch; I know, I am one and strongly suffer from the student problem.
    The student problem is: no matter how much time you give a student for an essay, he will use up all the time. Many programmers want to do "the right thing", which unfortunately often leads to overengineering a problem. Ask them how they would solve the problem in half the time. I have had good experience pushing developers to do things in shorter times and then letting them refactor the result into a properly formed application. Just time-box each step and make the programmers commit to the time box. Honestly, I found it gives better results if you give them a fixed time and a variable feature set rather than a fixed feature set and variable time. The odd thing is that they are better at maximising (more features) than minimising (less time per feature).
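    The estimated-to-real-time correction described above is simple enough to sketch in a few lines (names and numbers are hypothetical, just to show the arithmetic): the ratio of completed to estimated work in the last sprint scales the raw estimates for the next one.

```cpp
// Correct a raw estimate using last sprint's velocity.
// velocity = completed / estimated; a velocity below 1 means the team
// did less than estimated, so future estimates are inflated accordingly.
float corrected_estimate(float raw_estimate,
                         float last_sprint_estimated,
                         float last_sprint_completed)
{
    float velocity = last_sprint_completed / last_sprint_estimated;
    return raw_estimate / velocity;
}
```

    For example, a team that estimated 40 man-hours but completed 20 has a velocity of 0.5, so a new 10-hour estimate should be planned as 20 real hours.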
  11. Ocean Rendering

    Really, really, really nice work. But... I happen to be an occasional surfer. The water looks quite realistic for a calm day on a large lake. How well do the waves scale up? I am thinking of a swell of around 1-2 m... True, I must admit that rendering shore waves is a real challenge. One thing I want to note: waves normally don't align themselves along the shore. Waves keep their basic direction and get slightly deflected by the shallows. This is quite important for surfers, because it means that waves start breaking at a specific point and keep breaking for quite a long time. If they did align along the shore, and the shore happened to be quite straight, they would just go flop. Occasionally you see this in nature... Those are the days when you tend to hang out in a bar instead...
  12. Ok, this might be the wrong answer and completely over the top, but here is my implementation from my now defunct project: It is obviously over the top because it uses the parser construction tools bison and flex... Even if you think that is overkill, I would suggest that you write a lexer / parser with a more standard approach: char stream -> lexer -> token stream -> parser -> objects. You have a lexer (aka tokenizer) that determines the type of each token and its associated value. Then you write something like a recursive descent parser that triggers on the "keywords" (o, g, n, f...) and reads a batch of values. You detect errors when you get a token that is not of the proper type. I did look over your code; it is not awful for C++ code, but I can't see any obvious flaw.
  13. How road creation works?

    It is true what pto and Hodgman said; this technique is commonly used in game development. I just wanted to point out that there are other techniques that are more procedural. You can model a terrain and add roads by placing waypoints. Using some heuristic, the terrain is altered to match the road and make it look natural. For example, on a slope the terrain is caved to match the road. Then buildings and other objects are placed on the terrain. This works for any game, including racing games. Examples of this are CityScape, as seen on GameDev, or the CryEngine2 Sandbox.
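    One possible shape for such a heuristic (purely illustrative, not taken from CityScape or CryEngine2): near the road, the terrain height is blended toward the road height, so a slope is caved to give the road a flat bed that fades back into the natural terrain.

```cpp
// Blend terrain height toward road height within blend_radius of the
// road. At the road centre the terrain matches the road exactly; at
// the radius and beyond, the terrain is untouched.
float adjusted_height(float terrain_height, float road_height,
                      float distance_to_road, float blend_radius)
{
    if (distance_to_road >= blend_radius) {
        return terrain_height;
    }
    float t = distance_to_road / blend_radius;   // 0 at road, 1 at radius
    return road_height * (1.0f - t) + terrain_height * t;
}
```

    A production heuristic would also smooth along the road's direction and handle embankments and cuttings separately, but the linear blend captures the basic idea.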
  14. Skydome and clouds

    It really depends... If you are making a 1st or 3rd person game where you stay on the ground, a skybox with photo textures may be a better bet. Does it have to be dynamic? You can buy a set of (6) high-res textures for 10 bucks or google some free ones... Of course, you can implement the cloud rendering algorithm detailed on this page: The question is: is it worth the effort...