Popular Content

Showing content with the highest reputation since 08/23/17 in all areas

  1. 9 points
    Most people's coding styles swing on a pendulum over their careers. You'll go through phases of liking different things. Some people might like the symmetry, simplicity, and terseness of the second option. Other people might like the verbosity and explicitness of the first option. Arguments about readability are very subjective... but personally I'd choose the first option right now, as I feel it improves readability of the algorithm at the call site. Also, I would expect the performance characteristics of those two functions to be completely different from each other (find by index should be almost free, but find by name is hopefully a binary search of string comparisons or a string hash, etc...). Personally I believe you should not write code where using the slow path and using the fast path look the same. It should be obvious when reading the calling code whether they're doing something dumb (repeatedly doing unnecessary string comparisons) or not.
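To make the distinction concrete, here's a minimal sketch (a hypothetical API in Python, not from the original discussion) where the slow path and the fast path are impossible to confuse at the call site:

```python
class MaterialTable:
    """Hypothetical lookup table with an obviously-slow and an obviously-fast path."""

    def __init__(self, names):
        self._names = list(names)

    def find_index_by_name(self, name):
        # Slow path: linear scan with string comparisons; call once, outside hot loops.
        return self._names.index(name)

    def name_at(self, index):
        # Fast path: plain array indexing, almost free.
        return self._names[index]

table = MaterialTable(["stone", "water", "lava"])
water = table.find_index_by_name("water")  # the expensive call reads as expensive
name = table.name_at(water)                # the cheap call reads as cheap
```

Reading the calling code, a repeated `find_index_by_name` inside a per-frame loop would stand out immediately, which is exactly the property argued for above.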
  2. 8 points
    Both or neither, as needed. Cache coherency is great, but it's trivial to design cache-coherent algorithms in a vacuum ("oh, just put all the components in a big array, update them in a loop, DONE"). It's much harder to design cache-coherent algorithms that adapt to actual practical use ("oh, it turns out that this component needs to read the position, which means we might be blowing cache coherency to read that from a different component, or blowing it to copy it into this component earlier," et cetera). What you want to do is design how you store your data in memory in a fashion that is efficient for the way you will access and transform that data. "Access," importantly, includes more than just how the memory will actually be fetched by the CPU, but how you will actually get at, use, connect, et cetera, that data in your APIs. If you bend over backwards to make some components stored in a big cache-coherent array, but you never actually need to update all those components at once, have you really gained anything? Worse, if by doing so you've made it vastly more complex to use those components at the API level, have you actually improved anything? The reason there are no generalized, broad, great answers to this problem is that the devil is in the details. So consider the purpose of each component or other piece of data or functionality you're adding to your system, how you want to interact with it at the API level, and how you need to interact with it at the implementation level, and weigh the pros and cons of every available approach with that in mind. And make a decision for that problem that might be different than the decision you'll make for the next. In general, I'd aim for cache coherency when I can, as long as it doesn't sacrifice usability in any fundamental way.
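As a toy illustration of the layout trade-off discussed above (Python lists standing in for contiguous arrays; this is a sketch of the idea, not a claim about any particular engine):

```python
# Array-of-structs: each entity's fields live together, so a positions-only
# update also drags every entity's unrelated fields through the cache.
aos = [{"x": float(i), "y": 0.0, "hp": 100} for i in range(4)]

# Struct-of-arrays: one array per field, so a positions-only pass touches
# only position data -- the "big cache-coherent array" case.
soa = {"x": [float(i) for i in range(4)], "y": [0.0] * 4, "hp": [100] * 4}

def integrate_soa(soa, dx):
    # Updating all x positions in one tight pass over one contiguous array.
    soa["x"] = [x + dx for x in soa["x"]]

integrate_soa(soa, 1.0)
```

Whether the SoA form is worth it depends, as the post says, on whether you actually have a pass that updates all those components at once, and on what it costs you at the API level.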
  3. 8 points
    This is a technical article about how I implemented the fluid in my game “Invasion of the Liquid Snatchers!”, which was my entry for the fifth annual "Week of Awesome" game development competition here at GameDev.net. One of the biggest compliments I’ve received about the game is when people think the fluid simulation is some kind of soft-body physics or a true fluid simulation. But it isn’t! The simulation is achieved using Box2D doing regular hard-body collisions using lots of little (non-rotating) circle-shaped bodies. The illusion of soft-body particles is achieved in the rendering.

    The Rendering Process

    Each particle is drawn using a texture of a white circle that is opaque in the center but fades to fully transparent at the circumference. These are drawn to an RGBA8888 off-screen texture (using a ‘framebuffer’ in OpenGL parlance) and I ‘tint’ to the intended color of the particle (tinting is something that LibGDX can do out-of-the-box with its default shader). It is crucial to draw each ball larger than it is represented in Box2D. Physically speaking, these balls will not overlap (because it’s a hard-body simulation, after all!), yet in the rendering we do need these balls to overlap and blend together. The blending is non-trivial as there are a few requirements we have to take into account:

    - The RGB color channels should blend together when particles of different colors overlap.
      - … but we don’t want colors to saturate towards white.
      - … and we don’t want them to darken when we blend with the initially black background color.
    - The alpha channel should accumulate additively to indicate the ‘strength’ of the liquid at each pixel.
    All of that can be achieved in GLES2.0 using this blending setup:

    glClearColor(0, 0, 0, 0);
    glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE);

    Putting all that together gets us a texture of lots of blurry colored balls. Next up is to contribute this to the main backbuffer as a full-screen quad using a custom shader. The shader treats the alpha channel of the texture as a ‘potential field’; the higher the value, the stronger the field is at that fragment. The shader compares the strength of the field to a threshold: where the field strength is strong enough, we snap the alpha to 1.0 to manifest some liquid; where the field strength is too weak, we snap the alpha to 0.0 (or we could just discard the fragment) to avoid drawing anything. For the final game I went a little further and also included a small window around that threshold to smoothly blend between 0 and 1 in the alpha channel; this softens and effectively anti-aliases the fluid boundary. Here’s the shader:

    varying vec2 v_texCoords;
    uniform sampler2D u_texture;

    // field values above this are 'inside' the fluid, otherwise we are 'outside'.
    const float threshold = 0.6;

    // +/- this window around the threshold for a smooth transition around the boundary.
    const float window = 0.1;

    void main() {
        vec4 col = texture2D(u_texture, v_texCoords);
        float fieldStrength = col.a;
        col.a = smoothstep(threshold - window, threshold + window, fieldStrength);
        gl_FragColor = col;
    }

    This gives us a solid edge boundary where pixels are either lit or not lit by the fluid. Things are looking a lot more liquid-like now! The way this works is that when particles come within close proximity of each other, their potential fields start to add up; once the field strength is high enough, the shader will start lighting up pixels between the two particles. This gives us the ‘globbing together’ effect which really makes it look like a fluid.
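For the curious, here is what that blend state works out to per pixel, sketched in Python (my arithmetic applied to the glBlendFuncSeparate factors above; not code from the game):

```python
def blend(src, dst):
    # glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE):
    # RGB_out = src.rgb * (1 - dst.a) + dst.rgb * dst.a
    # A_out   = src.a + dst.a (clamped by the render target)
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    rgb = [s * (1.0 - da) + d * da for s, d in zip((sr, sg, sb), (dr, dg, db))]
    a = min(sa + da, 1.0)
    return (*rgb, a)

# A red particle over the cleared (0, 0, 0, 0) background keeps its full color:
first = blend((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 0.0, 0.0))
# A green particle over that mixes rather than saturating toward white:
second = blend((0.0, 1.0, 0.0, 0.5), first)
```

Note how the red is not darkened by the black clear color, the overlap mixes the two colors instead of washing out to white, and the alpha accumulates additively: all three of the requirements listed above.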
    Since the fluid is comprised of thousands of rounded shapes, it tends to leave gaps against the straight-edged tilemap. So the full-screen quad is, in fact, scaled up to be just a little bit larger than the screen and is drawn behind the main scene elements. This helps to ensure that the liquid really fills up any corners and crevices. Here is the final result. And that’s all there is to the basic technique behind it!

    Extra Niceties

    I do a few other subtle tricks which help to make the fluids feel more believable:

    - Each particle has an age and a current speed. I weight these together into a ‘froth-factor’ value between 0 and 1 that is used to lighten the color of a particle. This means that younger or faster-moving particles are whiter than older or stationary parts of the fluid. The idea is to allow us to see particles mixing into a larger body of fluid.
    - The stationary ‘wells’ where fluid collects are always a slightly darker shade compared to the fluid particles. This guarantees that we can see the particles ‘mixing’ when they drop into the wells.
    - Magma particles are all different shades of dark red, selected randomly at spawn time. This started out as a bug where magma and oil particles were being accidentally mixed together, but it looked so cool that I decided to make it happen deliberately!
    - When I remove a particle from the simulation it doesn’t just pop out of existence; instead, I fade it away. This gets further disguised by the ‘potential field’ shader, which makes it look like the fluid drains or shrinks away more naturally. So, on the whole, the fading is not directly observable.

    Performance Optimisations

    As mentioned in my post-mortem of the game, I had to dedicate some time to making the simulation CPU- and memory-performant. The ‘wells’ that receive the fluid are really just colored rectangles that “fill up”. They are not simulated.
    It means I can remove particles from the simulation once they are captured by the wells and just increment the fill-level of the well. If particles slow down below a threshold then they are turned into non-moving static bodies. Statics are not exactly very fluid-like, but they perform much better in Box2D than thousands of dynamic bodies because they don’t respond to forces. I also trigger their decay at that point, so they don’t hang around in this state long enough for the player to notice. All particles will eventually decay; I set a max lifetime of 20 seconds. This is also to prevent the player from just flooding the level and cheating their way through the game. To keep Java’s Garbage Collector from stalling the gameplay, I had to avoid doing per-particle memory allocations where possible. Mainly this is for things like allocating temporary Vector2 objects or Color objects. So I factored these out into singular long-lived instances and just (re)set their state per particle.

    Note: This article was originally published on the author's blog, and is reproduced here with the author's kind permission.
  4. 7 points
    It's been a while since I posted. I've been kinda reluctant to just post more project updates on GC, since that kind of thing gets a little tedious and pretentious sometimes. "Hey, guys, take five minutes out of your precious time to look at this screenshot of an incremental improvement to creature AI, that doesn't really show anything that I've actually been working on because that kind of thing doesn't show up well in static screenshots! But ain't it SHINY?!" GC is still in progress; at least, as much as it can be given all the other stuff going on. Kid starting kindergarten, camping for several days in Yellowstone, other kid starting preschool, work (as always), yardwork, working some more on the basement, etc, etc, etc... While thinking about some further UI upgrades, however, I was doing some reading through old links. I was thinking about healthbars, and re-visited a link from my bookmarks that dealt with the resource bubble meters in Diablo 3. Diablo 3 was an awful game, still one of my most hated, but damned if Blizz can't make things shiny and pretty. https://simonschreibt.de/gat/diablo-3-resource-bubbles/ speculates a little bit about how the resource bubbles are implemented graphics-wise, and I thought it might be interesting to take a stab at making one, even if ultimately it isn't useful for Goblinson Crusoe. The article above talks about diving through the D3 assets and finding a circular mesh with distorted UV coordinates that provides the basis for the spherical distortion of the moving texture map. However, I elected to go with the idea mentioned in https://simonschreibt.de/gat/007-legends-the-world/ of using a texture map that encodes the UV coordinates for the spherical distortion. To generate the map, I fired up Blender and created a sphere and a custom Cycles material for it. After some thrashing around, the node setup I ended up with was this: The material takes the normal of the sphere and splits it into X, Y and Z components.
    It multiplies X and Y by the cosine of Z, then scales and translates the results into the range 0,1 by multiplying by 0.5 and adding 0.5. Then it sets the red channel from the X result, the green channel from Y, and applies the resulting color as an emission material with a power of 1 (which outputs the color without any lighting or shading). The result looks like this: The way this texture is used is that the resource bubble is drawn as a rectangular plane, texture mapped across its face from (0,0) in the upper left to (1,1) in the lower right. This texture coordinate is used to sample the bubble UV texture, and the result is used to sample the diffuse color texture. This causes the diffuse color to be distorted as if it were being drawn across the surface of a sphere: For the diffuse color, I grabbed a random photo of stars, snipped a piece, and made it seamless. To achieve the complex-seeming swirling motion effect, the star map is duplicated 3 times, with each layer being translated by some vector multiplied by elapsed time, and the three layers multiplied together and scaled up. You can't see the motion in a still photo, of course, but in real-time it has 3 layers being moved in different directions at different speeds, and the effect is actually quite mesmerizing. To implement the resource bubble representing different levels, I added a uniform to the shader, Level, that can be specified as any value in the range 0,1. To set the specific level, you set the value of the uniform based on the ratio of the resource's current value to its maximum value; i.e., for mana, you would set the uniform to (CurrentMana / MaximumMana). This level is used with a smoothstep function to cut off the diffuse texture at a given height. Of course, using the smoothstep as-is would create a straight cut-off, so to make it more interesting I provide a noise texture and use the noise texture to distort the cut-off function.
This noise texture is animated, and the effect is to make the top of the resource fluid look 'frothy'. Also, a clip texture (in a real application, I would probably make this the alpha channel of the normal texture instead of a separate texture) is used to clip the outsides of the bubble to black. Now, I felt that the surface of the fluid, even with animated froth, looked a little plain. And if you look at the D3 resource bubble, there is a 'line' of glowing material on the surface of the fluid that provides emphasis to the level. So to implement that, I used another pair of smoothstep functions, based on the fluid level, to isolate a 'band' centered on the top of the fluid. This band is used to scale the brightness of the fluid, and is distorted by the same noise texture as the fluid surface. This gives the effect that light is shining on the surface of the liquid, making it stand out. Finally, I overlaid a texture that contains the reflections/streaks for the glass. To implement this texture, I used the sphere in Blender and applied a glossy shader with some lights. This one was done in haste, but it looks okay regardless. In a real application, I would spend a little more time on that one to make it look better. This glass texture is applied additively to the final fragment color. 
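Two of the pieces above reduce to a few lines of math. Here's a Python sketch of my reading of them (the cosine-of-Z encoding from the Blender node setup, and the two-opposed-smoothsteps band that mirrors the glowline term in the final shader; the band width is an assumption):

```python
import math

def encode_sphere_uv(nx, ny, nz):
    # The node setup as described: scale the normal's X and Y by cos(Z),
    # then remap from [-1, 1] into [0, 1] for storage in the R and G channels.
    u = nx * math.cos(nz) * 0.5 + 0.5
    v = ny * math.cos(nz) * 0.5 + 0.5
    return (u, v)

def smoothstep(e0, e1, x):
    # Standard GLSL smoothstep; the formula also handles reversed edges.
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def glow_band(level, y, width=0.05):
    # Two opposed smoothsteps isolate a band centered on the fluid surface:
    # an ascending ramp up to the level, and a descending ramp past it.
    return min(smoothstep(level - width, level, y),
               smoothstep(level + width, level, y))
```

The band is brightest exactly at the fluid level and falls to zero within `width` on either side, which is what gives the "light shining on the surface" look once it scales the fluid's brightness.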
    In motion, it looks pretty cool. The final GLSL shader code for Urho3D looks like this:

    #include "Uniforms.glsl"
    #include "Samplers.glsl"
    #include "Transform.glsl"
    #include "ScreenPos.glsl"

    varying vec2 vTexCoord;
    varying vec4 vWorldPos;

    uniform sampler2D sGlass0;
    uniform sampler2D sClip1;
    uniform sampler2D sNoise2;
    uniform sampler2D sNormalTex3;
    uniform sampler2D sStars4;
    uniform float cLevel;

    void VS()
    {
        mat4 modelMatrix = iModelMatrix;
        vec3 worldPos = GetWorldPos(modelMatrix);
        gl_Position = GetClipPos(worldPos);
        vTexCoord = GetTexCoord(iTexCoord);
        vWorldPos = vec4(worldPos, GetDepth(gl_Position));
    }

    void PS()
    {
        float level = 1.0 - cLevel;
        vec2 newuv = texture(sNormalTex3, vTexCoord).xy;
        float clip = texture(sClip1, vTexCoord).x;
        float maskval = vTexCoord.y + texture(sNoise2, vTexCoord + vec2(0.1, 0.3) * cElapsedTimePS).x * 0.05;
        float mask = smoothstep(level - 0.01, level + 0.01, maskval);
        float glowline = min(smoothstep(level - 0.05, level, maskval),
                             smoothstep(level + 0.05, level, maskval)) * clip * 5.0 + 1.0;
        gl_FragColor = clip * mask
            * texture(sStars4, newuv + vec2(0.1, 0.0) * cElapsedTimePS)
            * texture(sStars4, newuv + vec2(0.01, 0.03) * cElapsedTimePS)
            * texture(sStars4, newuv + vec2(-0.01, -0.02) * cElapsedTimePS)
            * 4.0 * glowline
            + texture(sGlass0, vTexCoord);
    }

    If you want to see it in action, here is a downloadable Urho3D sample (I can't promise it'll stay active for long; I tried to upload it as an attachment, but it kept failing mysteriously): https://drive.google.com/file/d/0B_HwlEqgWzFbR0R6UzJGUTJTSHM/view?usp=sharing

    Github repo: https://github.com/JTippetts/D3ResourceBubbles

    On Windows, extract the archive, navigate to the root, and execute the run.bat batch file. It should open up a 256x256 window with the animated resource bubble filling the frame. This little project was a fun little diversion while waiting to take my daughter to preschool. I don't tinker with shaders very often anymore, so it was nice to do so. It shows how even cool effects like this can be relatively simple underneath.
  5. 7 points
    Because, as mentioned, scenes can vary widely, the common way to decide how many shadows you will have is derived from a performance goal on a specific target spec. In other words, determine how many shadows you can have while maintaining X FPS on Y hardware. The reason you should be using an algorithm like this to determine your own metrics is because not only do different scenes in different games come with different performance compromises, your own implementation of shadows may perform very differently from others'. Your question is useful for allowing you to consider how optimized shadow maps must be in other games and for you to consider how much you have to do to get there, but if you were asked right now by a boss to estimate how many shadows you can use you would use the above-mentioned process. To give you actual stats and an idea of the optimizations used, here is what I did on Final Fantasy XV. We had a basic implementation likely matching what you have, with cube textures for point lights and different textures for the rest (4 textures for a cascaded directional light and X spot lights). The first thing I did was improve the culling on the cascaded directional light so that the same objects from the nearest cascade were not being needlessly drawn into the farther cascades. If you aren't doing this, it can lead to huge savings as you can avoid having your main detailed characters being redrawn, complete with re-skinning etc. Next I moved the 6 faces of a cube texture to a single 1X-by-6X texture. So a 512-by-512 cube texture became a single 512-by-3,072 texture. Although you must write your own look-up function that takes 3D coordinates and translates them to a 2D coordinate on this texture, it comes with a few advantages in caching, filtering, clearing, filling, and most importantly it prepares for the next big optimization: a shadow atlas. 
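A look-up function of the kind described might be sketched like this (Python for illustration; the face order and per-face orientations follow the common cubemap convention and are my assumption, not the shipped FFXV code):

```python
def cube_to_strip_uv(x, y, z):
    """Map a 3D direction to a 2D coordinate in a 1X-by-6X vertical strip:
    pick the dominant axis to choose one of six faces, project onto that
    face, then offset V by the face index."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # +X / -X faces
        face, u, v, ma = (0, -z, -y, ax) if x > 0 else (1, z, -y, ax)
    elif ay >= az:                       # +Y / -Y faces
        face, u, v, ma = (2, x, z, ay) if y > 0 else (3, x, -z, ay)
    else:                                # +Z / -Z faces
        face, u, v, ma = (4, x, -y, az) if z > 0 else (5, -x, -y, az)
    u = (u / ma) * 0.5 + 0.5             # remap [-1, 1] -> [0, 1] within the face
    v = (v / ma) * 0.5 + 0.5
    return (u, (face + v) / 6.0)         # each face owns 1/6 of the strip's V range
```

Once every shadow lives in a plain 2D texture like this, packing them into a larger 2D atlas (the next step described below) becomes straightforward.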
    Now that all shadows were being drawn to 2D textures, I created a texture atlas for all the shadows except the cascaded ones: a single large texture for all the point and spot lights. It was 2,048-by-2,048 at first but could grow to 4,096-by-2,048 if necessary. Putting all the point and spot shadows into a single texture was a huge gain for many reasons, but one of the main gains was that we had access to all the shadows during a single lighting pass, which meant we could draw all the shadows in a single pass instead of many. At this point our limit was simply how many shadows could be drawn until the texture atlas got filled, sorted by priority largely based on distance. As mentioned by MJP, an important aspect of this is to cull all six faces of a point-light shadow. Any shadow frustums not in view meant less time creating shadow maps and more room for other shadows in the atlas. Next, I wanted the shadow maps to have LOD, as the smaller shadow sizes would allow faster creation, and smaller shadow maps meant more shadows could fit into the atlas. Each shadow frustum (up to 6 for point lights and 1 for each spot light, where each shadow frustum at least partially intersects the camera frustum—any shadow frustums fully outside the view frustum would be discarded prior to this step) was projected onto a small in-memory representation of a screen and clipped by the virtual screen edges. This sounds complicated but it is really simple. The camera's view-projection matrix translates points into a [-1,-1]...[1,1] space on your screen, so we simply used that same matrix to transform the shadow frustum points, then clipped anything beyond -1 and 1 in both directions. Now with the outline of the clipped shadow frustum in -1...1 space, taking the area of the created shape gives you double the percentage of the screen it covers (represented as 0=0% to 2=100%). In short, we measured how much each shadow frustum is in view of the camera.
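That measurement can be sketched in a few lines (Python, my arithmetic; the post expresses coverage on a 0..2 scale, while this sketch normalizes by the full NDC area of 4 to give a 0..1 fraction of the screen instead):

```python
def screen_coverage(ndc_points):
    """Fraction of the screen covered by a convex outline already clipped to
    NDC ([-1,1] x [-1,1]): shoelace area divided by the full-screen area of 4."""
    area = 0.0
    n = len(ndc_points)
    for i in range(n):
        x0, y0 = ndc_points[i]
        x1, y1 = ndc_points[(i + 1) % n]
        area += x0 * y1 - x1 * y0        # shoelace formula
    return abs(area) * 0.5 / 4.0

# The full screen itself covers 100%; a unit square in one quadrant covers 25%.
full = screen_coverage([(-1, -1), (1, -1), (1, 1), (-1, 1)])
```

Feeding this fraction into the resolution-halving scheme described next is then just a matter of choosing thresholds.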
Based on this percentage, I would drop the shadow resolution by half, or half again if even less was in view, etc. I believe I put a limit at 64-by-64. If you play Final Fantasy XV, you can see this in action if you know where to look. If you slowly move so that a shadow from a point light takes less and less screen space you might be able to see the resolution drop. Now with the shadow-map LOD system, most shadows are drawn at a lower resolution, only going full-size when you get near and are looking directly at the shadowed area. Because this actually affects so many shadows, the savings are significant. If you decide to keep the same limit on shadows as you had before you will find a huge gain in performance. In our case, we continued allowing the shadow atlas to be filled, so we were able to support double or more shadows with the same performance. Another important optimization is to render static objects to offline shadow maps. A tool generates the shadow maps offline, rendering only static objects (buildings, lamp posts, etc.) into them. At run-time, you create the final shadow map by copying the static shadow map over it and then rendering your dynamic objects (characters, foliage, etc.) into it. This is a major performance improvement again. We already had this for Final Fantasy XV, but since I added the shadow LOD system I had to make the offline static shadows carry mipmaps. It is important to note that the shadow mipmaps are not a downsampling of mip level 0—you have to re-render the scene into each mipmap, again with some lower limit such as 64-by-64. All of this together allowed us probably around 30 shadow maps with the ability to dynamically scale with the scene and without too many restrictions on the artists. Shadow maps were sorted by a priority system so that by the time the shadow atlas was filled, the shadows that had to be culled were distant, off-to-the-side, or otherwise unimportant. L. Spiro
  6. 7 points
    Don't worry, I'm already making eighth engine, it's twice as good as fourth engine was!
  7. 7 points
    Unless and until you have a proven difficulty with your actual game objects getting work done in 1 CPU tick, multithreading is not the right tool. My experience has been that writing a nontrivial simulation in a thread-safe way is extremely hard, and if you use the approach you describe, you spend more time reconciling all the "messages" than you would have spent just single-threading the simulation. Rendering, physics, and a few other things are amenable to multithreading. Game simulation rarely needs enough CPU cycles to make it even attractive to consider, and the dependency flow is so tangled that it's virtually impossible to guarantee that you'll actually get a net win.
  8. 6 points
    The position you took in the chat was that ALL companies do this, a position that I asserted was "bullshit"(*) based on the simple fact that I have worked at companies that did not, and worked with recruiting firms that did not (specifically, use software to filter resumes). At no point in this conversation did you clarify that your position was that MOST companies do, which is a statement I would have significantly less issue with. But the reason you were kicked was not simply making, and then insisting on doubling down on, incorrect information (to a new member of the chat who was looking for help). It was because you were rude about it, and refused to table the issue when I asked. Twice. As a result, you were given a slap on the wrist - a kick. Subsequent to that kick, you rejoined and, having apparently failed to learn your lesson about tabling an issue when asked, tried to re-engage with the debate. You were then banned. Your ban will last for at least the next hour, at which point I will unban you provided you continue to engage in good behavior (edit: done). If you think this was an abuse of power, or if you think I or any of my fellow moderators have EVER committed such an act (as you heavily insinuate above), you are welcome to report them through the proper channels: a private conversation with the moderator in question, or with any of the senior moderators (Washu, Promit, et cetera), or the staff (jbadams, khawk). (*) Since it's been suggested to me that this is unclear: I literally said this during the conversation, which is why it is quoted here. I fully admit this was not an appropriate response, but I'm not making that assertion now; I made it then, and I quoted it here to attempt to provide as much transparency into my actions as I can. The swearing was absolutely not justified, but I stand by the rest of my actions.
  9. 6 points
    Wobbly Duck Studios founder Eric Nevala (@slayemin is his profile here on GameDev.net) has been documenting the development and Steam release of Spellbound through his devblog here on GameDev.net at slayemin's Journal. We caught up with Eric to discuss his background, challenges as an indie developer in the emerging Virtual Reality market, and thoughts on the future of VR. Read on for the interview, loaded with interesting design thoughts, tips, and more.

    Who are you?

    My name is Eric Nevala. My account on GameDev.net is @slayemin. I've been programming for 17 years now, which is hard for me to believe. In the fall of 2000, I went to community college to take more programming courses and eventually get a CS degree. I was also running a small side business building webpages for companies, so I taught myself HTML, PHP, MySQL, etc. About six months into college, I joined the US Marines as a reservist and entered bootcamp in June 2001. I graduated bootcamp on Sept 21st, 2001, exactly ten days after 9/11/2001. Our country declared war on Afghanistan right as I was getting out. In 2003, America declared war on Iraq as well. The question of whether I'd go to war or not became a question of "when?". I decided that if I was going to go to war, I'd do it under my own conditions, so I volunteered to join the 3rd Civil Affairs Group and be their webmaster in Fallujah, Iraq. I used my programming and website background to build a web-based application which managed a little over a billion dollars in reconstruction projects in Al Anbar province. I felt pressured to work hard and fast because I sincerely believed that every wasted day was another day without peace, and more people would die the longer I took. It took me 3 months of working 7 days a week, roughly 16 hours a day. I got it done. It was 20,000 lines of code. At the time, that was a lifetime accomplishment.
    It was my first time working as a "professional" developer, and the software development experience put into perspective the lessons I had learned in classes. For the next few years, I bounced between going to war and returning to the classroom. I kept forgetting my pre-calculus and calculus maths, so I had to keep retaking those classes. It took me eight years to get a four-year degree. After I got my degree in computer science, I became a contractor working for the US Military. I went to Afghanistan as a Sr. Software Developer working out of the Army headquarters in Bagram. I spent 18 months there and saved all of the money I had made.

    What is your background in game development?

    After my tour ended, I decided it was finally time to change focus and make games. Finally! I had spent years going to school, getting experience, and trying and failing to make games on my own. I really loved XNA at the time, so I decided I would make a game which combined Magic: the Gathering with Total War. I already knew the scope was ambitious, so I had to take it in small chunks. XNA is just a thin abstraction layer on top of DirectX9, so there wasn't much in terms of a useful library. I got carried away and ended up creating my own game engine from scratch. This was a mistake and I knew it, but I was having fun. I spent the next year working on my game engine, until I realized that my engine was too shoddy to be used to ship a commercial product. It was severely lacking in capabilities, so if I wanted to add a new capability such as rendering text in 3D space, I'd have to write my own text rendering engine. That would take weeks, compared to spending two minutes just reading the documentation of an existing engine. Thus, it was decided: I would use a third-party game engine. I decided to port my magic game from XNA to Unreal Engine 4, and I made some pretty rapid progress.
    I really loved Unreal Engine 4 because the source code to the engine was available for me to read and modify as I saw fit, so if there were any questions about what was happening under the hood, I could just step through the source code line by line and see for myself. Writing my own engine turned out to be enormously helpful in being comfortable with and understanding the low-level code within Unreal Engine.

    What project(s) have you been working on?

    So, the magic game project was going well for a year, and then I started hearing about Virtual Reality. There was this little company called Oculus which had been working on a really rough prototype of a VR head-mounted display. I initially disregarded it because it seemed like a tinkerer's project rather than anything commercially viable. And then, Facebook bought out Oculus for $2 billion. That got my attention. Nobody spends $2 billion on a tinkerer's project and lets it die; they are going to make sure that their investment succeeds. I immediately bought the Oculus DK2 (Developer Kit 2) and a Leap Motion device. This eventually turned into my first commercially released game, Spellbound.

    How did you become interested in game development?

    I became interested in game development about 20 years ago. I was playing Commander Keen 1, and one summer afternoon it dawned on me that someone had actually made this game and designed this cute little green alien. Someone had taken the time to design this game and all of the fun things I loved about it. I wondered if I could eventually create a game as well. What would it take? Whatever it took, I would try. If I have to take the same math class over and over again until I get it, I will. If I have to go to war and get shot at, I will. If I have to work long, hard hours and sacrifice everything, I will. I gradually discovered that I would have to become a programmer to be able to create games, so that became my life mission.
    I spent my high school years taking as many programming classes as I could so that I could learn how to make games, and between classes and over summer vacation, I had my own side projects to make games. Honestly, nothing makes me happier than drinking coffee, turning up the music, and working hard on a good coding problem which comes attached to a higher purpose.

    What resources have you used to learn, connect, and become a better developer?

    I started learning game development in the mid 1990's and started visiting GameDev.net when I was in high school. Being in the presence of professionals helped mature my thinking. Most of my learning has come from working on my own programming projects and trying to figure out how to solve my problems. In terms of connecting with other developers, I usually go to a few meetups here in Seattle and have become friends with other indie developers, and we have each given lectures and workshops to each other on gamedev best practices. I think one of the best ways I have gradually gotten better as a developer is to periodically take a half hour and critically think about what I did and how I could have done things better. Constructive critical self-reflection is necessary for personal growth.

    What tools do you use in development?

    Software:
    - Visual Studio 2015 Community edition
    - Notepad++
    - Unreal Engine 4
    - Adobe Photoshop
    - Wings3D
    - Maya 2015

    Hardware:
    - Oculus Rift + Oculus Touch
    - HTC Vive
    - Leap Motion

    Other:
    - A small dry-erase whiteboard
    - A notebook and pen for scribbling

    What was your inspiration for Spellbound?

    I had originally been working on a "Total War + Magic the Gathering" game for traditional PC gaming. It was going to be a bunch of epic magic battles between mages, competing for control over the world. I had the game designed, the prototype worked great, and the game was interesting. Then, Oculus got bought out by Facebook.
I went to a meetup and tried out VR for the first time. It was a really short demo of Technolust, where you basically stand in a spaceship room and a future-punk character sits in a chair, following you with her head. I thought to myself, "This demo is terrible, but this VR stuff is amazing -- I bet I could do way better." I took that bet. A week later, I ordered an Oculus Rift DK2 and a Leap Motion for $500. I had read about someone who spent a weekend creating a project where he used the Leap Motion and VR to throw fireballs at crates and barrels. I couldn't find it online, so I said, "Hey, if this guy can make this in a weekend, then so can I!" So, I did.

And then what happened?

One of the problems with this game design is that throwing fireballs at crates and barrels is inherently boring. I needed something more exciting, something that progressively punished you for missing. What is scary, moves slowly, and becomes an increasing threat as it gets closer to you? The only answer I could come up with was "zombies". So, within a weekend, I had taken some of the assets from my other magic game and made a game where you used your hands to throw fireballs at zombies. It was kind of cool -- a really solid first prototype for a VR game. The following Monday, I showed my artist the game, and he said that he wished I had included him on the project because it was fun and cool. I said that it wasn't too late. We could spend the next two weeks polishing it up and then release it online for free in order to collect feedback and learn from customers. I reasoned that if it took me a weekend to build, then two weeks of hard work should be more than sufficient, right? Wrong. Polish is hard, especially if you want to do it right. Polishing a prototype means you really have to redo everything. Two weeks turned into two months. Two months turned into six months. Six months turned into two years. But it always felt like the end was just around the corner.
I had some tough narrative design problems to resolve:
- Who is the red wizard?
- Why are there zombies trying to eat the red wizard? What happened? Where did the zombies come from?
- What happens to the red wizard after he kills all of the zombies? What happens if the zombies get the wizard?
- Are there other types of wizards? What about other types of monsters?
- Are there other types of spells? What are they, and how do they work?

Were your problems limited to design? Were there technical challenges?

On the technical side, we had some problems with Leap Motion hand tracking and artificial locomotion as well. When people throw stuff, they bring their hands back behind their head, and that causes a loss of hand tracking. A little while later, Valve announced the HTC Vive. Only the exclusive club of VR developers could work with one, and sadly, I wasn't in that club. So, I had to sit and wait for 9 months. Finally, in December 2015, Valve was kind enough to send me a Vive Pre, so I switched my game from a seated VR experience to a room-scale experience. That compounded all of my VR design problems. Suddenly, the player could walk around in the room and the wizard character would match their movements. What happens if the player walks through a solid object in game? What happens if the player walks through the wall at the top of the wizard's tower? The wizard clips through the geometry and falls, and the falling causes motion sickness. Obviously, this is very bad and had to be solved. The other major design problem was artificial locomotion. A player may have a 2.5m x 2.5m play area, but the game world may be kilometers in size. How does the player move from one end of the level to the other without leaving their physical play area? I also had a game where being surrounded by zombies needed to be a very real threat, so the common industry standard of teleporting out of danger was a lazy, terrible, immersion-breaking solution.
I just couldn't do it and needed something better. But what would work? I was walking to work, thinking hard about the locomotion problem, and then I realized that my arms move side to side when I walk. What if I could use something like this as the means for locomotion within VR? It would feel natural and wouldn't break immersion. A month later, I had invented my own unique locomotion system and then went public with it (to avoid letting patent trolls take it). I called it "walk-a-motion", but later, people called it "arm swing" locomotion.

What can you say about Spellbound's story?

Spellbound borrows a bit from the previous magic game I had been working on. I was a huge fan of M:tG, so I decided to roughly theme my magic systems on it. I was also a fan of Magicka, Advanced Dungeons and Dragons, Skyrim, and a few other games with magic, so I borrowed some ideas and themes from them. I was also a fan of the Harry Potter universe, Star Wars, Lord of the Rings, and many Disney stories. What common thread makes those stories work? Why are they so compelling? My hunch is that the stories in each universe hint at greater, amazing secrets, just begging to be discovered by the protagonist, and if only the audience waits just a little longer, the protagonist might find this amazing thing, whatever it is. The essential ingredient shared among all of these great stories is narrative curiosity. So, the goal with creating the narrative for Spellbound is to evoke those same feelings of awe and mystery and create that sense of narrative curiosity, sprinkled with danger and reward. At the same time, I don't want to just entertain players; I want to educate them and make them into better human beings, so the stories will all contain moral components which let people explore moral choices safely and learn about consequences and how they affect other people. Lastly, I aim for heartwarming tales of passion.
If at the end of Spellbound you didn't feel a single emotion, either I failed to do my job or you're dead. In terms of narrative structure and storytelling, I felt that the only way I could get away with an immersive VR story is to have a meta layer. I was heavily inspired by "The Princess Bride" and the narrative mechanic it used for introducing the story. I decided I would go roughly the same route. So, the audience starts off in a dusty library in some castle with a bunch of books on a shelf. They pick a book, the book magically floats down and opens, and the narrator starts reading the story. This is sort of like an establishing shot in film, where we introduce the narrator, the character, the setting, and most importantly, the context for the experience. You enter into the story universe and find yourself controlling the protagonist. Now, you are directing the story. Since the narrator was introduced earlier, it doesn't seem weird to hear his voice telling the story as you go along. The player can still get eaten by zombies or die in some gruesome way, so when that happens, we have to restore from a checkpoint. But for that to make narrative sense, the narrator says, "And the wizard was devoured by zombies! ... But that's not what really happened ... let's turn back a few pages!", so we maintain narrative continuity by briefly returning to the meta layer without interrupting the immersion of the VR experience. This is the format I'm going to use for all of the VR stories I tell, and later on, when I introduce multiplayer elements to the game, the multiplayer matchmaking will happen within the castle auditorium outside of the library.

As an indie in VR, what business challenges have you faced with Spellbound?

Spellbound has had TONS of challenges, both on the business side and on the technical side. The biggest business challenge is the lack of funding.
I spent all of my personal savings to develop the game, ran out of money, had to lay off my artist, and somehow had to continue production and find a way to pay bills without any income. I tried pitching to investors and failed miserably. When you stop and think about it, it makes no sense to an investor: I'm an independent developer. I have no team. I have no experience shipping games. It's a game, which generally has a high rate of failure. Not only that, but it's also a virtual reality game. Most investors don't even know what virtual reality is. I barely know how to pitch or how to play the investment game. So, there is no investor money. Kickstarter is also a waste of time, especially in 2017. You have to spend 3 months preparing your copy, creating media and art, coming up with backer rewards, etc. Then during the campaign, you spend a month of full-time work just running the campaign, trying to get press and media attention, spamming social media, answering backer questions, etc. Kickstarter is also an all-or-nothing campaign, so if you're a dollar short, you get nothing and you've wasted four months. If you're cursed with success, then you are now committed to weekly or monthly updates to your backers, and you have to fulfill backer rewards, which eat into the raised funds. On top of that, the gaming community's expectations of what it costs to actually produce a game are wildly off from reality. If you think it costs $60,000 to produce a video game, you should consider multiplying that number by ten or a hundred. I released Spellbound on Steam in Early Access exactly a year ago. I didn't do any marketing (couldn't afford it), so the only customers I have happened to just stumble on it. It's an early access title, so it's certainly incomplete, but I tried polishing the existing gameplay as much as possible so that people would get a clear idea of what to expect from the rest of the game.
As a result of these conditions, sales were terrible, but reviews and feedback were highly favorable. Just to set the expectation here: in 2017, almost nobody in the VR industry can sustainably live off of sales revenue alone. I expect sales to be weak for the next three years, so in order for me (and anyone else) to continue working and throwing money at the VR industry, we need to have faith and believe in a brighter future. I couldn't do what I'm doing right now without the financial and moral support of my girlfriend and our side businesses. We were both renting out our apartments on AirBnB, often having to sleep at a friend's house, and also watching dogs from Rover. We once had something like 25 dogs in our apartment during a major holiday. We gradually increased the number of accommodations we rented out on AirBnB, upgraded to a 60-acre farm next to a river that floods, and then upgraded to a 240-acre ranch near a national park. That means we're also in the tourism and hospitality business, so our revenue from that business is very feast-and-famine, depending on the time of year. I have only had to worry about paying for office rent and buying lunch, and the revenue from game sales has barely been enough to pay for that. In recent months, I've been doing contract and freelance work, creating VR products for various small and large companies. This means that game development mostly gets put on hold until I wrap up those projects. In the long term, my goal is for Spellbound to become a financially self-sustaining project. I want to bring more content to the game, bring my vision to life, distribute it on as many distribution platforms as possible, and support lots of hardware platforms to create the next level of immersive VR gaming. Eventually, the hope is that the game content is compelling enough that it sells itself and is widely discoverable, so the amount of money spent on marketing can be lowered.
Ultimately, the current lack of funding and sales severely slows down the pace of game development, since I can't hire helpers or commit full time to the development of the game. I think the funding problem is a temporary one, however, and given enough time and effort on my part, it will take care of itself. I hope that my sales will at least grow proportionately with the growth of the VR industry and that I'll still be around five years from now, still creating VR content. I am convinced that virtual reality will become the predominant media format for entertainment in 2020-2030, and I want to be a part of making it happen.

Any technical challenges with Spellbound?

On the technical and design side, Spellbound has also been really hard. I'm an overly principled designer with a vision for how things should be, and I have a strong vision of what VR is meant to look and feel like. I am relatively unimpressed by most of the VR content out there because I feel most VR developers don't accurately understand or capture the essence of virtual reality and how it's different from traditional media. Broadly speaking, VR enables you to be someone else, somewhere else, doing completely different things. It is meant to capture every way we sense reality and override it with new sensory inputs in order to create a new reality. VR is not the hardware, and it is not the content; it is the experience of being someone else (think of the movie Being John Malkovich). When I initially started working on Spellbound, I believed that we absolutely needed to bring our hands with us into VR. Hands are our most familiar way of interacting with our surrounding environments, and if we fail to bring that into VR, it greatly weakens the overall VR experience. So, I bought the Leap Motion to bring in hand interactions. That presented its own technical challenges. Then, I added room-scale support for the HTC Vive. This created a lot of new design problems.
- How do you prevent players from walking through solid objects?
- How do you let players walk around their room and let them keep going once they get to the edge of their room?
- How do you account for differences in player height?
- How do you prevent players from getting motion sick?
- How do you figure out what direction the player's body is facing, based off of three data points?
- How do you figure out where the elbows should be positioned?
- How do you train players on how to play your game without directly giving instructions?
- How do you design a user interface without a single graphical user interface widget? How do you make that UI intuitive and non-immersion-breaking?
- How do you design your game and levels so that you constantly hit 90 frames per second throughout your entire gameplay experience?
- How do you tell a story in VR? How does VR change the nature of storytelling? How do you involve the player as the center of the story being told?

Keep in mind, VR is an entirely new industry and there are no answers, so you can't just Google your questions to get answers -- you have to invent the answers yourself, and then Google will eventually tell other people whatever you came up with. To do this well, you need both the mind of an engineer and a creative mindset... and a lot of perseverance and patience for mistakes. Overall, I think the whole process of creating a VR game in a tough business climate is like trying to navigate through a maze blindfolded. I'm still in that maze, discovering dead ends and new paths.

What marketing strategies have you tried for Spellbound?

I have had booths at various local VR conventions and given people demos at local meetups. In terms of social media, I created a few Facebook posts, created a reddit post, and did a reddit AMA. Overall, all of my marketing efforts have been utterly ineffective. I have Google Analytics tied to my Steam store page, and I can track the number of daily viewers.
I have established a baseline of approximately 100 visitors a day, and after any promotional activity, I look at whether my visitor count moved the needle from the baseline. If the needle doesn't move, the promotional effort was ineffective. I have believed that creating great content would help, and I hoped it would be good enough that people would talk about the game without my prodding, but I suspect that my early access game just isn't there yet. It's good, but not great. I also assumed that if I pushed out content updates, sales would increase. That turned out to be a wrong assumption as well. The reality is that people only discover Spellbound in the list of available VR games on Steam. The last I checked, it was on page 6, so a customer would have to wade through quite a few higher-ranked titles before they found Spellbound. The only customers who would be interested in the game already have an existing VR headset, so my total addressable market is quite small. As time goes on, sales dwindle, and newer releases hit Steam, Spellbound gets pushed further and further down the list, making it increasingly difficult for people to discover and purchase the game. This turns into a vicious cycle. My only chance is to make Spellbound more discoverable across multiple store channels. If I'm on page six across three major channels, that's three times better. The other strategy is to create the best content I can, tell a great story, inspire the imagination of the player, and make the content something which hardware vendors want to support. There are lots of cool VR hardware companies out there who have neat hardware, but there is very little content for that hardware. If people purchase the hardware, the next thing they'll want to do is purchase content which is compatible with that hardware. If my game is one of the few games available and it's awesome content, then it will do well in a very niche market.
Gradually, as the markets grow and mature, I'll hopefully be able to survive. That's the long-term plan.

What does the future hold for Spellbound?

Currently, you can only play the "prelude" version of the game. I have designed the content to be broken up into a series of books. Each book will be about an hour's worth of content, and each series of books will be centered around a particular wizard and theme. The themes are roughly inspired by Magic: The Gathering, though I'm modifying them significantly. The red wizard is an elemental wizard who can control fire, earth, wind, water, and arcane elements. He accidentally triggers a demonic invasion, so he has to save the world from what he unleashed. The white wizard series (Sorceress of Light) will be about a pacifist who seeks to create peace through dialogue and uses magic to heal animals and bring happiness, possibly with romance. The theme for this series will be a radical departure from blowing up zombies and generally aggressive gameplay, making the game more appealing to a wider audience and also exploring other areas for storytelling in VR. The black wizard will be a story about a male character who faces the tragic loss of his wife and spends years looking for a way to bring her back, and he discovers necromancy. The intent here is to show that "evil" is a gradual descent rather than a sudden switch of nature, and the path towards evil can always be rationalized away. The story gets dark, but the goal is to show that human nature itself is dark and that necromancy is a correlation rather than a cause of evil. The green wizard will be a story about elves who practice nature magic and have lots of hedonistic parties in the forests because they're somewhat immortal, but I haven't spent a lot of time thinking about their gameplay or narrative yet, other than that it is focused on ecological preservation.
The last story of wizardry will center on the blue wizards, who are masters of deception, illusion magic, and mentalism. The blue wizards, being masters of illusion, have formed a sort of secret religious cult and recognize that their world is a virtual reality, so the cultists are trying their best to escape from the virtual reality and into our reality. They have discovered that on occasion, some of them become "possessed" by another being, marked by a period of blacking out and appearing in a totally different part of the land, hours or days later. This story is sort of a Matrix mindf#%k story, where the player is left wondering about the nature of our reality and how they can trust their senses. I borrow a lot of inspiration from René Descartes and metaphysical philosophy and hope to introduce some critical thinking strategies to players.

Any plans for multiplayer wizard battles?

I'd eventually like to support multiplayer as well. I'd like to let players join each other in a lobby, go on a cooperative quest together, and get rewards, and I'd like to have a wizard training battleground, where players can choose between various wizards and have combat against each other. I have no idea how that will look yet, but it's so far down the pipeline that it's not worth expending effort on it until I have more content and a bigger player base.

The VR market is tough, with recent announcements that some entrants are backing out or at least slowing down their investment. As an early VR developer, what do you see happening in the VR market in the next year?

Yeah, I knew the VR market would be tough. I believe the indie gaming market is even tougher. Have you seen how many games were released on Steam last year? How can an indie stand out amongst that deluge of releases and be financially self-sustaining and make a reasonable living? I don't think it's possible.
I think VR is the only chance for a new indie in 2017 -- it's easier to be a big fish in a little pond than it is to be a little fish in an ocean filled with big fish. I chose VR as a necessity for survival in the current environment. Where is the VR market now, and where is it going to go? My sense is that we're very early into this new form of media. VR is currently owned by enthusiastic early adopters, and the general mass market is slowly gaining consumer awareness of VR and what it is. As a historic comparison: it's 1950, and the neighbor just down the street got a small black-and-white television set, and if you jiggle the antenna a bit, you can get a picture. Everyone in the neighborhood is stopping by to take a look at it and seeing the earliest forms of television programming. Nobody has any idea what television will become in the next few decades, or how the programming will change, or how the media format will change consumerism. This is roughly the stage where VR is at right now. We content creators are barely starting to understand how to take advantage of this new medium and the unique storytelling opportunities it creates -- we're taking what we know about traditional gaming and trying to apply it to VR -- similar to taking a popular radio program and running it on television. Being a radio heavyweight doesn't automatically mean you're an expert at television production, and the same applies to traditional gaming and VR development. A lot of the AAA game studios are watching the VR market carefully and slowly starting to dip their toes into the water with smaller projects. We have to realize that the AAA companies are going to be very focused on the financial viability of whatever IP they create, so if they are going to sink $75 million into producing an original IP for VR, they are going to want to be sure that they'll see a return on their investment.
With the current size of the VR market right now, it's just not financially feasible for AAA companies to create a major IP for VR. That's not going to be forever, though: it's a tricky matter of timing. If you consider that the average title for traditional gaming takes between 3-5 years to produce, then if a AAA company starts production on a VR title today, the business landscape could be very different 3-5 years from now. So, for the AAA gaming companies, it's going to be a matter of timing the market such that the release of their AAA VR content coincides with a market which has grown to the point where the company could see a reasonable return on investment. I think the smart move for AAA companies right now is to let the indie VR developers spend their money and sweat to innovate the tech and create the best practices, acquire the successful VR companies so that they can have the engineering talent in-house, and use that as a launching point for small and mid-sized IP brands. For us indies, this leaves a narrow window of opportunity to produce our games, get proficient with the technology, and try to define the industry before we are overwhelmed by an overabundance of AAA VR content, leaving us on page 60 instead of page 6.

What do you think will happen with VR in the next 5 years?

There will be a lot of news in the next 2-5 years about VR companies going out of business. It's already happening. The reason these companies will go out of business is that they took on early venture capital funding, scaled up their headcount, and increased their overhead costs, and couldn't get enough revenue to sustain their operating expenses. It doesn't matter what kind of product they create or how popular it is, because these companies are outpacing the growth of the VR market and it's financially unsustainable; eventually, their bottom line will catch up to them, and the venture capitalists will grow impatient and stop funding the company.
Again, it has nothing to do with product viability, but everything to do with market timing, growth, and room-temperature business plans. It's already happening, and every time it does, the press gobbles it up and every naysayer gets to shout "I told you so!", but slowly, the VR industry marches forward and the survivors plod onwards, one hard-fought sale at a time. Five years from now, people will look back in hindsight and say, "Well, of course VR was always going to work. Startups historically have a high rate of failure, that's all!"

Your predictions for the next 10 years?

In the next 5 to 10 years, we're going to see the releases of second-generation VR hardware. Video card manufacturers will have adjusted their hardware rendering pipelines to support stereoscopic displays, and the performance limitations we're seeing in content production today will gradually vanish. Hardware specs will rise across the market, and the average gamer with a VR-capable PC / console will become the norm. Gradually, the current financial barrier to entry into the VR entertainment market will lower, making it increasingly accessible to more and more gamers. We'll see a lot of new names being made famous. E-sports will integrate VR, and e-sports gamers will become much more athletic in order to remain competitive with the physical demands of VR gaming. We'll also be seeing a gradual adoption of VR outside of gaming, and this will really mainstream it when it happens. Gaming has always been the tip of the spear in terms of technological innovation, but corporations all benefit from the windfalls of that technological advancement. I would expect to see VR having significant influence in training, medical imaging, travel, sales and advertising, simulations, pornography, and cinema. I also think there's going to be a huge disruption in education and online learning with VR.
However, all of these other technological applications historically tend to be about 3-5 years behind the curve, so don't expect to see them immediately or being heavily developed in parallel with the gaming industry.

Any interest in Spellbound AR?

I'm extremely skeptical of the viability of AR. I have only seen the Hololens and spent a weekend working with it, but it was enough to lose interest in AR completely. I imagine that if I were invited to see whatever tech Magic Leap is working on, I would leave quite unimpressed. If AR is going to be viable in the future, I wouldn't expect to see anything practically useful for another 10-15 years. There are a lot of major problems with AR:
- It requires a lot of computational power.
- It's an overlay on the real world, and the real world is messy, boring, and very dynamic.
- The range of colors you can display in AR is limited. Black is transparent.
- The glasses look stupid, and the wearer looks creepy when walking around with a camera on their head.
- Every promotional bit of marketing material you see for AR is fake smoke and mirrors. NDAs prevent people from calling AR companies out on it.

If AR were around the corner, then AR hardware companies would be releasing their dev kits to third-party developers to create a content ecosystem. That's not happening. The best we have is a $3,000 dev kit from Microsoft, with a price tag which pretty much makes the hardware unavailable to indies, which pretty much makes the platform dead on arrival. I could adapt Spellbound for AR, but that would be a radical departure from the strengths of VR. Within VR, you get to be someone else entirely and experience a new world through their eyes. Within AR, you are still yourself, experiencing your own world, with a few augmentations to it and the remaining physical limitations. Within VR, I could let the player ride a horse into battle or ride high above a town on a magic carpet, but within AR?
You're just a weird guy running around a park flailing your arms wildly and shouting at invisible friends. If I rub my crystal ball extra hard and try to look 20 years into the future, I think there won't be a hardware distinction between VR and AR. You'll wear the same HMD, but the difference between AR and VR will just be the amount of sensory information from the physical world being overridden by the hardware. You'd be able to blend reality by integrating it into your VR environment, so if you are in the park and walking by a tree, you'd see a virtual representation of that same tree in VR, and the tactile sensation of touching or bumping into the tree would match expectations. Today, though, I think VR development is much easier than AR development, and the market is established and a lot healthier, which makes it financially viable.

You've gone through some tough times with funding. What advice would you give to someone thinking about trying to make a career as an independent game developer?

- Don't hire employees until your revenues can support it. If you have custom assets which need to be created, contract it out.
- Use online marketplaces as a way to jumpstart your asset production work, but never consider online assets to be everything you need.
- Avoid spending your own money if you can. It's always better to spend someone else's money (investors, publishers, etc.).
- Don't bet with your production budget. Investing in stocks is betting. Don't invest what you can't afford to lose.
- Don't expect to make lots of money as an indie developer. You'll probably be poor and barely break even.
- Earmark at least 30% of your budget for marketing.
- Pay very close attention to the scope of your project, your timeline, and your budget. Scope creep doesn't just eat time, it also eats money.
- Have enough money to be able to afford to live without income for a few years. You may have to.
If you aren't testing your game with potential customers at every step in the development cycle, you risk creating a product nobody wants to buy.

This was great, Eric. Thank you for your time and sharing your thoughts with others on the challenge of being an indie and developing for the VR market.

You're welcome. The last bit of parting advice I can give to anyone thinking about starting this journey is to buy a bunch of books on entrepreneurship and project management. Read them from cover to cover and absorb the knowledge. This will cost you 2-4 weeks of time and maybe $200, but just think about how much time you'll save by avoiding rookie mistakes which cost months of time and tens of thousands of dollars' worth of work! The following are books I recommend:

- Code Complete, 2nd Edition (Microsoft Press)
- The Lean Startup, by Eric Ries
- Project Management for Dummies
- Any book on marketing and sales

Don't just read the books; try to find ways to apply their teachings to your project (otherwise you're wasting your time). Even if you aren't an indie developer, the lessons are broadly applicable and will jump-start your project and career. Also, don't be afraid to publicly show your game. Forget about NDAs and people stealing your idea; that is the least of your worries. Focus on production, creating great content, and marketing. At the end of the day, making and shipping games is the easy part -- marketing and sales is the hard part, so always think about how you're going to market and sell your game and get product-market fit, right from the beginning of the project. At every step of the way, always be thinking about marketing and creating the best value for your customer. Good luck, work hard, and be persistent and consistent!

Interview conducted by Kevin Hawkins (@khawk) of GameDev.net. You can learn more about Spellbound on Steam at http://store.steampowered.com/app/463400/Spellbound/.
  10. 6 points
    Say you've got a purple triangle and a brown one: Red X's are pixel centers, light-blue X's are 2xMSAA sample points. After running non-MSAA deferred shading, we end up with this lighting buffer: When you render your geometry a 2nd time, this time into the 2xMSAA buffer, the purple triangle will cover these sample locations (left) and the PS will run at these locations (right): ...but if it's fetching colours from that lighting buffer, the bottom right edge of the purple triangle will receive brown lighting. And to solve it, you need to search the lighting buffer in the local neighbourhood for a texel that best matches the geometry (either best-fit or a bilateral filter), in which case you'll end up with those edge pixels realizing that the lighting buffer contains invalid data for them, and instead blending valid data that they find in their neighbourhood: And this isn't fool-proof. If you've got sub-pixel geometry, then it's possible for this search to occasionally fail and find no valid data in the lighting buffer! So you should avoid sub-pixel geometry when using this technique.
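The best-fit variant of that neighbourhood search can be sketched outside of shader code. This is an illustration only: the names and the depth-based match criterion are my assumptions, and a real implementation would typically also compare normals or use bilateral weights rather than picking a single winner.

```cpp
#include <cmath>

// Hypothetical texel record: the lighting colour plus the depth it was shaded at.
struct LightingTexel { float r, g, b; float depth; };

// Best-fit search: among a pixel's 3x3 neighbourhood in the lighting buffer,
// pick the texel whose stored depth is closest to the depth of the geometry
// sample currently being shaded, so edge samples borrow valid lighting from
// a neighbour instead of using the wrong triangle's result.
LightingTexel FindBestLighting(const LightingTexel neighbourhood[9], float sampleDepth)
{
    int best = 0;
    float bestDiff = std::fabs(neighbourhood[0].depth - sampleDepth);
    for (int i = 1; i < 9; ++i)
    {
        float diff = std::fabs(neighbourhood[i].depth - sampleDepth);
        if (diff < bestDiff) { bestDiff = diff; best = i; }
    }
    return neighbourhood[best];
}
```

As the post notes, with sub-pixel geometry every candidate in the neighbourhood can be a bad match, which is exactly the failure case this search cannot fix.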
  11. 6 points
    Why did you choose a 7 year old version of Visual Studio?
  12. 6 points
    Thank you everyone, this has been another great year for the WoA. I'm quite happy to see how well it's turned out for the past several years, and I think we've made some great strides in improving it over the years. The amount of work you all do is tremendous, and seeing your ideas and games come together throughout the week is very cool. This year, however, is going to be the last year that I take up the mantle for organizing and running it. I'm hopeful someone will be up to taking up the mantle for next year. @khawk has plans to bring the contest and gamejam parts of gd.net online later this year, so ideally it should make future contests a bit easier to run. I thank you all for helping make the week into what it is, and I wish good luck to whoever wishes to continue the Week of Awesome.
  13. 6 points
    The authors may have some more or less rational arguments against overloading. In this particular case, the call site may be a bit unexpected if you use NULL or 0 as a null pointer constant, as the integer overload will be chosen. There's nothing intrinsically wrong with overloading, but with distinct function names it can be a bit clearer at the call site which function is intended and what it does. Use common sense and adhere to local style.
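To illustrate the NULL pitfall being described, here is a minimal sketch with hypothetical overloads (the names are mine, not from the post):

```cpp
// Illustrative only: NULL is commonly defined as the integer literal 0, so
// overload resolution can silently pick the int overload at a call site that
// "looks like" it passes a pointer.
const char* Describe(int)         { return "int overload"; }
const char* Describe(const char*) { return "pointer overload"; }

// Describe(0)       -> "int overload"    (a literal 0 is an int)
// Describe(NULL)    -> usually the int overload too, or an ambiguity error,
//                      depending on how the implementation defines NULL
// Describe(nullptr) -> "pointer overload" (C++11 nullptr is unambiguous)
// Describe("hi")    -> "pointer overload"
```

Preferring `nullptr` at call sites sidesteps the surprise entirely, whichever side of the overloading debate you land on.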
  14. 5 points
    Tons of commercial games are already being made in C#. If you're more focused on lower-level languages that can be direct replacements at the system programming level, then I'd say that since the demand for that is dropping, C and C++ are adequate. Maybe Rust/Go/Swift will be a worthy competitor in future, but I don't think there is enough critical mass.
  15. 5 points
    In my experience, story comes after the game mechanics. Usually a story could be attached to a broad variety of games. Consider how a thematic story can be equally applied to first person shooters, a real time strategy, a platformer, even a board game. I can easily recall many hallmark games of each genre that have an object or person taken, with the story itself applying equally well to Mario, to the Halo franchise, to Angry Birds, to many other games. Build your game's core mechanic, the thing you want to build upon as the first thing. When you've got an amazing core mechanic, build your story around it as a framework. If your game's mechanic is not fun, story cannot save you.
  16. 5 points
    Enemies sell the levels

Level design is more than the geometry of the levels. Blocks and power ups and traps all test the player's mettle, but it's the enemies that bring a level to life. Their personality and how well they fit the theme of the level is important for lore, but it's their behavior that makes the game play. Enemies that are varied and challenge the player in unique ways make for memorable experiences, especially when used in harmony with the level's geometry. Brilliant enemy design, brilliant level placement! (source: A Critical Look at Mega Man 3 Stages) This article won't go into how and where to place enemies. Instead, it will focus on designing the enemy's behaviors. Just keep in mind that their placement and how they complement the player's move and attack sets matter. Check out the super fun Super Mario Bros Crossover to see just how different enemy behavior can be when the player's character moves and attacks differently. Also, note that this article is mainly about designing enemies in a 2D space. Some of this will apply to 3D enemies, but the differences in dimensions and movement directions between the two play spaces are a lot larger than they might seem.

Defining Enemy Behavior - States and Triggers

After playing through virtually the entire NES and SNES library, I took notes on enemy behavior and ended up categorizing it all. Enemy behavior can be broken down into three main categories:

Movement - How they move
Qualities - Their innate nature
Abilities - The things they can do

By mixing and matching attributes of these three categories, you can create an Enemy State. Often, however, enemies have multiple states and switch through those states with "Triggers". I created lists of attributes for each of these qualities. Most interesting enemy design can be made by mixing and matching these attributes into unique states and then further deepening the experience with a series of triggers. 
Most common enemies have at most two states with a single trigger, but Boss fights are special in that they often have four or more states with a variety of triggers. Bosses that don't usually aren't very fun (see Axiom Verge for examples of this). This list is very comprehensive and can describe the behavior of every enemy I could find in a retro game. I'm sure there are attributes and triggers I haven't listed, so if you know of any (especially Trigger ideas), leave a comment! Otherwise, this method can create every enemy from any top-down or side view action game you can think of. Also, consider projectiles (bullets, arrows, fireballs, etc) as enemies too. Some very cool projectiles can be created for both enemies and player character using the same rules! Without any further introduction, let's get to the attributes! Movement Stationary - The enemy does not move at all. Walker - The enemy walks or runs along the ground. Example: Super Mario's Goombas Riser - The enemy can increase its height (often, can rise from nothing). Examples: Super Mario's Piranha Plants and Castlevania's Mud Man Ducker - The enemy can reduce its height (including, melting into the floor). Example: Super Mario's Piranha Plants Faller - The enemy falls from the ceiling onto the ground. Usually, these enemies are drops of something, like acid. Some games have slimes that do this. Jumper - The enemy can bounce or jump. (some jump forward, some jump straight up and down). Examples: Donkey Kong's Springs, Super Mario 2's Tweeter, Super Mario 2's Ninji Floater - The enemy can float, fly, or levitate. Example: Castlevania's Bats Sticky - The enemy sticks to walls and ceilings. Example: Super Mario 2's Spark Waver - The enemy floats in a sine wave pattern. Example: Castlevania's Medusa Head Rotator - The enemy rotates around a fixed point. Sometimes, the fixed point moves, and can move according to any movement attribute in this list. Also, the rotation direction may change. 
Example: Super Mario 3's Rotodisc, these jetpack enemies from Sunsoft's Batman (notice that the point which they rotate around is the player) Swinger - The enemy swings from a fixed point. Example: Castlevania's swinging blades Pacer - The enemy changes direction in response to a trigger (like reaching the edge of a platform). Example: Super Mario's Red Koopas Follower - The enemy follows the player (often used in top-down games). Example: Zelda 3's Hard Hat Beetles Roamer - The enemy changes direction completely randomly. Example: Legend of Zelda's Octoroks Liner - The enemy moves in a straight line directly to a spot on the screen. Forgot to record the enemies I saw doing this, but usually they move from one spot to another in straight lines, sometimes randomly, other times trying to 'slice' through the player. Teleporter - The enemy can teleport from one location to another. Example: Zelda's Wizrobes Dasher - The enemy dashes in a direction, faster than its normal movement speed. Example: Zelda's Rope Snakes Ponger - The enemy ignores gravity and physics, and bounces off walls in straight lines. Example: Zelda 2's "Bubbles" Geobound - The enemy is physically stuck to the geometry of the level, sometimes appears as level geometry. Examples: Megaman's Spikes, Super Mario's Piranha Plants, Castlevania's White Dragon Tethered - The enemy is tethered to the level's geometry by a chain or a rope. Example: Super Mario's Chain Chomps Swooper - A floating enemy that swoops down, often returning to its original position, but not always. Example: Castlevania's Bats, Super Mario's Swoopers, Super Mario 3's Angry Sun Mirror - The enemy mirrors the movement patterns of the player. Sometimes, the enemy moves in the opposite direction as the player. Example: Zelda 3's Goriya Qualities GeoIgnore - The enemy ignores the properties of the level's geometry and can pass through solid objects. Examples: Super Mario 3's Rotodisc, many flying enemies will ignore geometry as well. 
GeoDestroyer - The enemy can destroy blocks of the level's geometry. Examples: Super Mario 3's Bowser, Bob-ombs. Shielder - Player attacks are nullified when hit from a specific direction or angle, or hit a specific spot on the enemy. Example: Zelda 2's Dark Nuts Reflector - Player's ranged attacks are reflected when the enemy is hit by them. Damager - The player takes damage when colliding with the enemy. Most enemies have this. Redamager - The player takes damage when striking the enemy. Example: Link to the Past's Buzz Blob Secret spot - The enemy has a specific spot where damage can only be taken when that spot is hit. The opposite of a Shielder. Many bosses have this attribute. Invulnerable - The enemy cannot be harmed or killed. Examples: Most are geometry based hazards (spikes, mashing blocks, fire spouts), but also includes enemies like Super Mario 3's Boo Brothers. Reanimator - The enemy is revived after dying. Examples: Castlevania's Skeletons, Super Mario's Dry Bones. Regenerator - The enemy's health is regenerated over time. Secret Weakness - The enemy is vulnerable to specific types of attacks (fire, magic, arrows, etc). Example: Zelda's Dodongo (weakness to bombs). Hard to Hit - These enemies are specifically hard to hit with the player's normal attack ranges. They are often very fast, or very tiny. Example: Zelda 2's Myu Segmented - These enemies are made up of smaller individual objects or enemies. Each individual segment can often be destroyed independently, and are often in snake-form - but not always. Example: Zelda's Manhandla, Super Mario's Pokeys Bumper - These enemies act like pinball bumpers, pushing the player (or other objects) away quickly. Example: Zelda 3's Hard Hat Beetles GeoMimic - These enemies have properties of level geometry. Most commonly, this means the player can stand on them. 
Examples: Most enemies in Mario 2, Megaman X's Sigma, Castlevania's Catoblepas Alarm - An enemy or object that causes another enemy or object to react. I didn't record any specific examples, unfortunately. Meta-vulnerable - The enemy can be defeated by meta-actions the player takes such as resetting the game, purchasing an in-app item, sharing the game on social networks, playing the game upside down with an accelerometer/compass, playing the game in a well-lit room, making noise in a microphone, etc. Example: Japanese Zelda's Pol's Voice Abilities Melee Attack - The enemy has a close-range attack. Emitter - The enemy 'emits' another enemy or projectile. This type is very similar to the Thrower, and the differences are mostly semantic. The main difference being that throwers throw an object they've picked up or spawn with, while Emitters have an infinite supply of the things they can throw out. Examples: Super Mario 3's Bullet Bill Cannons, Super Mario's Lakitus, anything that has a gun, Gauntlet's enemy spawners. Thrower - The enemy can throw whatever it is holding (an object, the player, or another enemy). This is often combined with Carriers, but not always. Examples: Castlevania's Barrel throwing Skeletons, Super Mario 3's Buster Beetle Grower - The enemy can grow in size. Shrinker - The enemy can shrink in size. Forcer - The enemy can apply direct movement force to the player. I don't have any specific examples, but treadmills found in most games apply, as well as enemies who blow air at the player, causing them to move backward. Carrier - The enemy can carry another enemy, an object, or the player. This one is a bit broad but includes enemies that pick up the player, that 'eat' the player and spit it out again, or otherwise grab and restrict the movement of the player. Example: Zelda's WallMasters Splitter - The enemy can split into multiple other enemies while destroying itself. Examples: Rogue Legacy's Slimes, Legend of Zelda's Vire and Biri. 
Cloner - The enemy can duplicate itself. Often, the duplicate is an exact clone, but sometimes, the duplicate is very slightly modified (such as having reduced health or being faster). The enemy that does the cloning is not destroyed. Example: Zelda's Ghini Morpher - The enemy morphs into another enemy type. Sometimes, this is temporary, and sometimes, the enemy will morph into multiple other types of enemies. Sapper - The enemy can cause the player's stats to be modified (slower speed, reduced or eliminated jump, inability to attack, etc). These effects can be either permanent or temporary. Latcher - Like a Sapper, the enemy can drain the player's stats and abilities but does so by latching on to the player. Example: Mario 3's Mini-Goombas, Zelda's Like-Likes. Hider - The enemy can hide from the player. Typically, the enemy is hidden until the player comes within a set distance, or the inverse, where the enemy hides once the player comes too close. Example: Mario's Piranha Plants. Exploder - The enemy can destroy itself and cause splash damage. Example: Proximity Mines found in many games, Mario's Bob-ombs Interactor - The enemy can interact with scripted objects, such as levers or buttons. Charger - The enemy will pause its behavior before switching to another behavior. Example: Castlevania's Wolves (who pause a moment and lunge towards the player) Meta-Manipulator - The enemy can manipulate meta objects, such as the time of the day, the weather, the player's gender, save files, or unlock game modes. Triggers Random - The enemy randomly switches to a new state. Timed - The enemy will switch to a new state after a certain period of time has passed. Time Cycle - The enemy will switch states depending on the time of day. For example, sleeping during the night, active during the day. 
Distance Traveled - The enemy has traveled a certain distance Proximity - The enemy is within a certain distance of another object, usually the player, but can also be another enemy, an object in the level, or a projectile. Post-Ability - The enemy just finished using an ability. Example: The enemy shoots a bazooka, then hides behind the level geometry Occupies Volume - The enemy has entered a volume of space. This can be a volume like a water zone, or an invisible volume defining the boundaries of a town. Stat Value - The enemy changes its behavior depending on its own stats or the stats of the player. Examples: The enemy's health drops below 10% and it explodes, or the player has over 500% magic points and the enemy tries to siphon it away. Line of Sight - The enemy can personally 'see' the player, another enemy, or an object. Scripted Event - A scripted event causes the enemy to change its behavior. Someone Else - A trigger is invoked on another enemy. Example: One enemy falls below 20% health and all the other enemies rush at the player. Death - The enemy runs out of health or otherwise dies. Variables Many of these behaviors and attributes can be modified to create even more variation by tweaking just a few variables. In fact, variables could be all that change when triggers are invoked. Here are some examples: Movement Speed Jump Height Sine Wave Length and Amplitude Attack Range Splash Damage Radius Enemy Size Dash Distance .. and so on Think about what values you plug into your behaviors and if it would be better to increase or decrease those values to make even more unique enemies. A fast moving Goomba is more of a threat than a slow one! Have Fun! This is a pretty fun list to mix and match. It's sort of like a Lego system for behavior! In fact, I built a neat little interactive app a while ago on my Portfolio that assists in this, even letting you make randomized states that can be used as a base. Feel free to play around with it, and above all, have fun! 
That's what game design is all about <3
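The states-and-triggers model above maps naturally onto code. Here is a minimal sketch in C++ (all names are illustrative; the article itself prescribes no implementation): a state bundles one attribute from each category, and triggers swap between states.

```cpp
#include <functional>
#include <string>
#include <vector>

// Illustrative only: an enemy "state" bundles movement/quality/ability
// attributes from the lists above, and "triggers" switch between states.
struct EnemyState
{
    std::string movement; // e.g. "Pacer"
    std::string quality;  // e.g. "Shielder"
    std::string ability;  // e.g. "Emitter"
};

struct Trigger
{
    std::function<bool()> condition; // e.g. proximity, timer, health threshold
    int nextState;                   // index into the state list
};

struct Enemy
{
    std::vector<EnemyState> states;
    std::vector<Trigger> triggers;
    int current = 0;

    void Update()
    {
        // Fire the first trigger whose condition holds, then act on the state.
        for (const Trigger& t : triggers)
            if (t.condition()) { current = t.nextState; break; }
        // ...movement/ability code would then read states[current]...
    }
};
```

A Piranha Plant, say, could be state 0 = Riser + Geobound, state 1 = Ducker + Hider, with a single proximity trigger flipping between them; a boss would simply carry more states and more triggers.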
  17. 5 points
    If there's one thing I've learned from programming for 25 years, it's that project methodologies are not absolute. What works for one programmer, one team, one company... may not work for others. This is the single most important fact you can know before studying project methodology. All other aspects of managing software projects are subject to this truth. Anyone who claims to have the "best" approach to managing a project is deluded, trying to sell you something, or both. That said, there are a few small things that can get a lot of mileage in software production: Moderately sized work items are the most likely to be accurately estimated. Too short, and you'll blow past your deadlines; too long, and you'll churn. Aim for 1-2 days for a "normal" work item. Large items can be a few weeks, small items can be a few hours, but your average should be on the order of a couple days. Flexibility is paramount. Because of the golden rule I posted above, you will almost certainly encounter places where your approach is not optimal. Be willing to constantly refine your approaches and constantly adjust your goals and expectations. Communication is extremely hard and extremely important. You can over-communicate, just as you can communicate too little. Who you communicate with will change by project, by team, by company, etc. Your methods of communication will probably also change, as will the tools you use to communicate. Flexibility is again the most important thing you can work on. Be disciplined. This does not mean be inflexible - you can and should change your estimates, your deadlines, your milestones, etc. as work progresses and you learn more about the problem space. But don't give in to the urge to just throw your schedule away. Set deadlines even if you know you may need to move them. Establish goals, objectives, and strategies, even if you will end up refining or even discarding them. Do the hard thing when it is also the right thing.
  18. 5 points
    Programming Game AI by Example - Mat Buckland Incredible book, provides elegant solutions to this problem and way more.
  19. 5 points
    void Application::removeEnemy(EnemyIDOrWhatever e)
    {
        enemies.erase(e);
        if (enemies.empty())
            callback();
    }

Just like that. You'd call application.removeEnemy(e) instead of application.enemies.erase(e) everywhere you want to remove an enemy. You can make the Application::enemies vector private to make sure you don't accidentally call .erase directly on the vector.
  20. 4 points
    That's because "ECS" is an overengineering trap. These issues are only as complicated as you choose to make them, and you choose to make them more complex by ascribing more magic and dogma to what an "ECS" is than you should. Even your described "bus data" paradigm strikes me as overengineered by virtue of focusing on a generic, one-size-fits-all solution to everything and then trying to cast specific problem scenarios into that mold. But remove "component" from the names of all these interfaces. You are left with the same "convoluted dependency" problem. The issue here isn't the component approach, per se. It's just the same old API interface design problems re-made with different names. The ECS magic bullet has been billed so hard (incorrectly) as a solution to these problems, but it isn't at all. It's a solution to an entirely different problem that has gone through some kind of collective-consciousness mutation. Just like scene graphs did, back in the day. If your agent movement is currently depending on pulling data from either the input system or the AI system, depending on the state of the agent (player controlled versus AI controlled), perhaps the solution is to have something push the movement commands to the agent movement object through the unified single interface of that object. Thus reducing the double dependency by inverting the relationship.
  21. 4 points
    No, that is not necessarily the spiral of death. Most games require far less time doing the update than it takes for the time to pass. Your numbers show this quite well, if you think about it. In your example the fixed update is 0.5ms, where it runs as many fixed updates as needed to catch up. Also the rendering takes 4ms. Because rendering takes at least 4ms you will always need at least 8 simulation steps (4ms at 0.5ms per step) for every graphical frame. But in practice you've probably got much longer than that, especially if you're using vsync as a frame rate limiter, which most games do. On a 120Hz screen you've got about 8.3 milliseconds per frame, so you'll probably need to run 16 or 17 fixed updates, and they must run within 4 milliseconds. On a 75Hz screen you've got about 13.3 milliseconds per frame, so you'll probably need to run 26 or 27 fixed updates, and they must run within 9 milliseconds. On a 60Hz screen you've got about 16.6 milliseconds per frame, so you'll probably need to run 32 or 33 fixed updates, and they must run within 12 milliseconds. In these scenarios, it is only a problem if the updates take longer than the allotted time. The worst case above is the 120Hz screen, where an update processing step needs to run faster than 0.23 milliseconds; if it takes longer then you'll drop a frame and be running at 60Hz. At 75Hz the update processing step must finish in 0.33 milliseconds before you drop a frame. At 60Hz the update processing step must finish within 0.36 milliseconds. Your frame rate will slow down, but as long as your simulation can run updates fast enough it should be fine. If it drops to 30 frames per second then a 0.5ms processing step has more time, up to 0.44 milliseconds. If it drops to 15 frames per second then the 0.5ms processing step has up to 0.49 milliseconds per pass to run. As long as your simulator can run a fixed update in less than that time the simulation is fine. 
You ONLY enter the "spiral of death" if a fixed update takes longer to compute than the simulated time it represents -- that is, longer than the values above. Since typically the simulation time is relatively fast it usually isn't a problem. If the time it takes to compute a fixed time step is longer than the times above, and you can't make it faster, then it may be necessary to change the simulation time step. Usually the only issue with that is the game feels less responsive, slower. Many of the older RTS games had a simulation rate of 4 updates per second, even though their graphics and animations were running at a much higher rate. Even that may not be much of a problem. It all depends on the game.
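The catch-up loop being discussed is the standard fixed-timestep accumulator. A minimal sketch, with names of my own choosing:

```cpp
// Sketch of a fixed-timestep loop: simulate in fixed 0.5ms steps, rendering
// once per frame; the accumulator carries over any real time not yet simulated.
struct Simulation
{
    static constexpr double kStep = 0.0005; // 0.5ms fixed update, as in the post
    double accumulator = 0.0;               // unsimulated real time, in seconds

    // Feed in the real time the last frame took; returns how many fixed
    // updates ran. The spiral of death happens only if each step costs more
    // wall-clock time than the kStep of simulated time it advances.
    int Advance(double frameTime)
    {
        accumulator += frameTime;
        int steps = 0;
        while (accumulator >= kStep)
        {
            // FixedUpdate(kStep) would go here.
            accumulator -= kStep;
            ++steps;
        }
        return steps;
    }
};
```

On a 60Hz display (about 16.6ms per frame) this runs 33 fixed updates per rendered frame, matching the numbers above; a common extra guard is capping the number of steps per frame and letting the simulation slow down instead of spiralling.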
  22. 4 points
    Unfortunately it's lacking a lot of stuff which most programmers want. It's fascinating that one of the statements in the Jai Primer is "Automatic memory management is a non-starter for game programmers who need direct control over their memory layouts" when it's increasingly clear that game programmers are demanding that level of control less and less often. Unreal and Unity are garbage collected! I'm sure Jonathan will be very productive with it, and it has many good aspects, but I don't see it ever seeing serious use by more than a double-digit number of developers.
  23. 4 points
    In the past, I never bothered with marching different meshes for different terrain materials. I just marched the terrain as a single mesh, then used vertex colors (generated after marching the surface, using various techniques) to blend between terrain textures in the shader. Something like this (very quick example): With a tri-planar shader that displays different textures for the top surface than what it displays for the side surfaces, then you can just paint the v-colors (either procedurally, or by hand if that is your wish, in a post-process step) for different materials, and the shader will handle blending between the types and applying the tri-planar projection. A single color layer provides for 5 base terrain materials, if you count black(0,0,0,0) as one material, red(1,0,0,0), green(0,1,0,0), blue(0,0,1,0) and alpha(0,0,0,1) as the others. Provide another RGBA v-color layer and you can bump that to 9. Doing it this way, you don't have to be content with sharp edges between terrain types, since the shader is content to smoothly blend between materials as needed, and you don't deal with the hassle of marching multiple terrain meshes.
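The weighting scheme the shader performs can be sketched on the CPU for clarity. This is an illustration only: the names are mine, and a real shader would sample textures through tri-planar UVs rather than blend flat colours.

```cpp
#include <algorithm>

// Illustrative sketch of the 5-material blend described above: one RGBA
// vertex-colour layer weights four materials, and "black" (0,0,0,0) falls
// through to a base material with whatever weight the channels leave over.
struct Rgb { float r, g, b; };

Rgb BlendMaterials(float wr, float wg, float wb, float wa,
                   Rgb base, Rgb mr, Rgb mg, Rgb mb, Rgb ma)
{
    // Base material weight is the remainder, clamped so oversaturated
    // vertex colours don't produce a negative contribution.
    float wBase = std::max(0.0f, 1.0f - (wr + wg + wb + wa));
    auto mix = [&](float Rgb::*ch) {
        return base.*ch * wBase + mr.*ch * wr + mg.*ch * wg
             + mb.*ch * wb + ma.*ch * wa;
    };
    return { mix(&Rgb::r), mix(&Rgb::g), mix(&Rgb::b) };
}
```

Because the weights are interpolated across each triangle, neighbouring vertices painted with different channels blend smoothly, which is exactly why the sharp material seams of multi-mesh marching disappear.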
  24. 4 points
    Hello there! I'm still alive and working on the game so I jump right into what I worked on in the last month or so. Even though I was pretty silent a lot has "changed". The topic will be polishing, because it never stops, some input handling tricks and another pretty complex one: game balance.

Polishing

During a series of play-test sessions with friends, family and old colleagues I gathered some really valuable feedback on how to enhance the user experience. Thankfully the game itself was well received, but the mentioned "issues" really bugged me, so I sat down for a week or two to further enhance the presentation.

Cost indicators

This was a tiny addition but helped a lot. Now the color of the chest and shop item cost texts reflects whether you can open/buy them.

Animated texts

I went into an in-game UI tuning frenzy, so I added a "pop" animation on value change, besides the existing yellow highlights, to gold and attribute texts.

Health bar

The health bar got some love too! I implemented a fade-in/out effect for the heart sprite, slowly turning it into a "black" one when you are low on health. I also added a maximum health indicator and the same value change "pop" animation I used for the gold and attribute texts.

Battle events

Battle events and various skills (hit miss, dodge, fear or cripple events etc...) got many complaints due to their visibility being insufficient, leaving the player puzzled sometimes why a battle didn't play out as expected. Besides using the existing sprite effects I added text notifications, similar to the ones used with pickups. No complaints ever since.

Critical strike

This one was an "extra". I wanted to beef up the effects of the critical strikes to make them look more ferocious and more noticeable.

Level transition

Play testers shared my enthusiasm towards having a better level transition effect, so I slapped on a black screen fade-in/out during dungeon generation and it worked wonders. 
Input handling

I knew for a long time now that the simple input handling logic the game had would not be good enough for the shipped version. I already worked a lot on and wrote my findings about better input handling for grid based games, so I'm not going to reiterate. I mostly reused the special high-level input handling parts from my previous game Operation KREEP. It was a real-time action game, so some parts were obviously less relevant, but I also added tiny new extras. I observed players hitting the walls a lot. Since the player character moves relatively fast from one cell to another this happened frequently when trying to change directions, so I added a timer which blocks the "HitWall" movement state for a few milliseconds towards each walled direction for the first time when a new grid cell is reached. Again, results were really positive.

Balancing

My great "wisdom" about this topic: balancing a game, especially an RPG, is hard. Not simply hard, it is ULTRA hard. Since I never worked on an RPG before, in the preparation phase I guesstimated that it would take around 2 to 3 days of full-time work, because after all it is a simple game. Oh maaaaaaan, how naive I was. It took close to two weeks. Having more experience on how to approach it and how to do it effectively, I probably could do it in less than a week now with a similar project, but that is still far off from 2/3 days. Before anyone plays the judge saying I'm a lunatic and spending this much probably wasn't worth it, I have to say that during the last 6 months nothing influenced the fairness and "feeling" of the game as much as these last 2 weeks, so do not neglect the importance of it! Now onto how I tamed this beast!

Tools and approach

Mainly excel/open-office/google-sheets, so good old-fashioned charting baby. And how? I implemented almost all the formulas (damage model, pickup probabilities, loot system etc...) 
in isolated sheets, filled it with the game data and tweaked it (or the formulas sometimes) to reach a desirable outcome. This may sound boring or cumbersome, but in reality charts are really useful and these tools help tremendously. Working with a lot of data is made easy and you get results immediately when you change something. Also they have a massive library of functions built-in, so mimicking something like the damage reduction logic of a game is actually not that hard. That is the main chart of the game, controlling the probabilities of specific pickups, chests and monsters occurring on levels. It plays a key role in determining the difficulty and the feel of the game, so nailing it was pretty important (no pressure). If balancing this way is so efficient, why did it take so much time? Well, even a simple game like I Am Overburdened is built from an absurd number of components, so modeling it took at least a dozen gigantic charts. Another difficult aspect is validating your changes. The most reliable way is play-testing, so I completed the game during the last two weeks around 30 to 40 times, and that takes a long while. There are faster but less accurate ways of course. I will talk about that topic in another post...

Tricks and tips

#1.: Focus on balancing ~isolated parts/chunks of your game first. This wide "chest chart" works out how the chests "behave" (opening costs, probabilities, possible items). Balancing sections of your game is easier than trying to figure out and make the whole thing work altogether in one pass. Parts with close to final values can even help solidifying other aspects! E.g.: knowing the frequency and overall cost of chests helped in figuring out how much gold the player should find in I Am Overburdened.

#2.: Visualization and approaching problems from different perspectives are key! The battle model (attack/defense/damage/health formulas) wasn't working perfectly up until last week. 
I decided to chart the relation of the attack, defense and health values and how their change affects the number of hits required to kill an enemy. These fancy "damage model" graphs show this relation. Seeing the number of hits required in various situations immediately sparked some ideas on how to fix what was bugging me.

#3.: ~Fixing many formulas/numbers upfront can make your life easier. Lots of charts, I know, but the highlighted blue parts are the "interesting" ones. I settled on using them as semi-final values and formulas long before starting to balance the game. If you have some fixed counts, costs, bonuses or probabilities you can work out the numbers for your other systems more easily. In I Am Overburdened I decided on the pickup powers, like the + health given by potions or the + attribute bonuses, before the balancing "phase". Working out their frequencies on levels was pretty easy due to having this data. It also helps when starting out, since it gives you a lot of basis to work with.

Now onto the unmissable personal grounds. Spidi, you've been v/b-logging about this game for a loooooong while now, will this game ever be finished?! Yes, yes and yes. I know it has fallen into stretched and winding development, but it is really close to the finish line now and it is going to be AWESOME! I'm more proud of it than anything I've ever created in my life prior. Soon, really soon... Thanks for reading! Stay tuned.
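To illustrate the kind of relationship those hits-to-kill graphs chart, here is a generic stand-in function. To be clear, this formula is my own simplification for illustration, not I Am Overburdened's actual damage model:

```cpp
#include <algorithm>

// Generic stand-in damage model (NOT the game's real formula) showing the
// attack/defense/health relationship such charts explore: how many hits
// does it take to kill an enemy?
int HitsToKill(int attack, int defense, int health)
{
    // Subtractive mitigation, floored at 1 so no enemy is fully immune.
    int damagePerHit = std::max(1, attack - defense);
    // Round up: a partial hit's worth of remaining health still needs a hit.
    return (health + damagePerHit - 1) / damagePerHit;
}
```

Graphing a function like this over ranges of attack and defense makes balance cliffs obvious, e.g. with subtractive mitigation, one extra point of defense near the attacker's attack value can double the hits required.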
  25. 4 points
    The letter/digit keys have no constants because their values are the ASCII values, so you can just use a character literal 'A' etc.
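For example (an illustrative key-handling snippet; the incoming key code is assumed to already be a letter/digit virtual-key value):

```cpp
// Letter/digit key codes equal their ASCII values, so a character
// literal can stand in for a named constant.
bool isKeyA(int keyCode)
{
    return keyCode == 'A'; // identical to comparing against 0x41
}
```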
  26. 4 points
    I would probably use the entity ID as a foreign key inside the component structure. That's also what most of the dogmatic "ECS" blogs like T-machine seem to talk about doing. e.g.

```cpp
#include <vector>
#include <utility>

struct DamageOverTime
{
    int entity; // foreign key
    float damagePerSecond;
    float timeLeft;
};

struct DamageMessage
{
    int entity;
    float damage;
};

struct HealthSystem
{
    void Process( const std::vector<DamageMessage>& );
};

struct DamageOverTimeSystem
{
    std::vector<DamageOverTime> components;

    void Process( float deltaTime, HealthSystem& health )
    {
        std::vector<DamageMessage> results;
        results.reserve(components.size());
        for( auto c = components.begin(); c != components.end(); )
        {
            results.push_back({c->entity, c->damagePerSecond * deltaTime});
            c->timeLeft -= deltaTime;
            if( c->timeLeft > 0 )
                ++c;
            else // fast erase
            {
                std::swap(*c, *(components.end() - 1));
                components.pop_back();
            }
        }
        health.Process(results);
    }
};
```

You can put as many components in there as you like for the same entity, and they will stack additively (two 3-damage-per-second components will do 6 damage per second).
If you wanted to change the game rules so that they stack multiplicatively (two 3dps components results in 9dps), that's a simple tweak:

```cpp
void DamageOverTimeSystem::Process( float deltaTime, HealthSystem& health )
{
    // group components by entity
    std::sort(components.begin(), components.end(),
        [](const DamageOverTime& a, const DamageOverTime& b) { return a.entity < b.entity; });

    std::vector<DamageMessage> results;
    results.reserve(components.size());

    // multiply together all the damage values for each entity
    float groupedDamage = 1;
    for( auto c = components.begin(); c != components.end(); )
    {
        int entity = c->entity;
        c->timeLeft -= deltaTime;
        groupedDamage *= c->damagePerSecond;
        if( c->timeLeft > 0 )
            ++c;
        else
            c = components.erase(c);
        // end of a group of entities
        if( c == components.end() || entity != c->entity )
        {
            results.push_back({entity, groupedDamage * deltaTime});
            groupedDamage = 1;
        }
    }
    health.Process(results);
}
```

In this example I've not given the DamageOverTime components their own component ID. If you can somehow cancel these effects (e.g. killing a vampire removes every life-drain spell that they've cast) then you'd probably need component IDs too so that you can keep track of them. Also note that you don't need a big framework to do ECS. I just did it in plain C++ in a few minutes and IMHO, writing ECS-style code manually (without a framework) results in cleaner, more maintainable code and a better understanding of the data flow in your program.
  27. 4 points
    16-bit depth maps are a common optimization for directional lights because they use an orthographic projection, which means that the resulting depth range is linear (as opposed to the exponential distribution that you get from a perspective projection). When you're rendering to 4 2k depth maps in a frame, cutting the bandwidth in half can make a pretty serious difference in your performance. The worst case is actually more like 20 depth passes, because of the 4 cascades from the directional light. The upside of lots of depth passes is that it gives you some time to do some things with async compute.
  28. 4 points
    My first suggestion is that you don't need to have a list of classes up-front. If we tried to do that for AAA games we'd never start coding. My other comments: I like to have a separate App class which contains a Game class. Game is for, well, gamey things. I'd create the window and other input/output in the App, as well as an instance of the Game. Actors shouldn't need TextureManagers. There's no part of 'acting' that needs to look up textures. If you want Actors to be responsible for drawing themselves, that's fine (at least at this level), but you should pass in the texture to use in each case. Input handling for a game like this can stay simple. I'd start with a function that is called from the main loop, with the paddle passed in as an argument. Call methods on the paddle to implement movement, based on input state. If you have the separate Game/App system like I do, then the App's input handling can call through to the Game's input handling.
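A minimal sketch of that App/Game separation (all class and method names here are illustrative, not from any particular framework):

```cpp
// Game holds gamey things only -- no windowing, textures or input devices.
class Game
{
public:
    void update(float dt) { elapsed += dt; }
    float elapsedTime() const { return elapsed; }

private:
    float elapsed = 0.0f;
};

// App owns the window and other input/output, plus an instance of the Game.
class App
{
public:
    void tick(float dt)
    {
        // ...poll window/input events here, then advance the game...
        game.update(dt);
        // ...render here...
    }

    const Game& currentGame() const { return game; }

private:
    Game game;
};
```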
  29. 4 points
    Just a handful of random suggestions: Get your algorithm correct first. Micro-optimization is a waste of time if your code is solving the problem wrong. In case you aren't already aware, benchmark in Release builds, not Debug builds. This can make a tremendous difference. Your heuristic is not great. See this page for details. Start with a tiny test case, like 10x10, and walk through the algorithm start to finish in a debugger. Pay attention to the behavior of the code. You will almost certainly find inefficiencies this way.
  30. 4 points
    I don't think whitespace is your problem. I think your problem is that \d ("digit") doesn't match '.' characters (I use C#, not Java, but regular expressions should be basically the same, shouldn't they?), so when you see x.y your regular expression finds x, skips the . and then returns a second match for y. Try using "[0-9\\.]+" instead, or if you want to be more exact: "[\\-\\+]?[0-9]*(\\.[0-9]+)?" (but this does not include the 10E+5 syntax - left as an exercise for the reader).
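The same fix expressed with C++'s std::regex for comparison (std::regex defaults to ECMAScript syntax, which behaves like Java/C# for these simple patterns; the pattern below requires at least one digit before the fraction so that an empty string doesn't match):

```cpp
#include <regex>
#include <string>

// "\\d+" would stop at '.', splitting "3.14" into "3" and "14".
// Matching an optional sign and an optional fraction part keeps
// the whole number together.
bool isDecimalNumber(const std::string& text)
{
    static const std::regex number("[\\-\\+]?[0-9]+(\\.[0-9]+)?");
    return std::regex_match(text, number);
}
```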
  31. 4 points
    I'd rather you called this an "unexpected feature" than a bug. Homing missiles are cool. Maybe after you fix it, you can still keep that behavior so that it activates with a powerup or some special code.
  32. 4 points
    Do you think a car with an automatic transmission and a full tank of gas would permanently damage your ability to drive, or should everyone start by learning how to change the oil and filters, rotate the tires, service the fuel injectors and rebuild the clutch on a manual transmission?
  33. 4 points
    I suspect you're hitting Analysis Paralysis by getting too hung up on doing things the "OO way". Pong is really no different to any other kind of application:

- It gathers input
- It processes that input into some state
- It presents that state as output

The only notable variation is that for a real-time game those 3 steps need to occur every single frame. So all we do is wrap them up in a game loop like this:

```cpp
World world;
while (true)
{
    Keys keysPressed = checkInputs(); // input
    update(keysPressed, world);       // process
    render(world);                    // output
}
```

World maintains the state of the game (paddles, players, scores, ball, etc.) and its public methods model the interactions that you have available to you when playing. In the case of Pong it might have a method like movePaddle(amount). The update() function knows how to convert inputs (keypresses) into actions that modify the world. For example it knows that if you push the right-arrow key then it will invoke world.movePaddle(1). The job of the render() function is to translate the state held by World into something visual. If the process of rendering requires state all of its own (textures, shaders, models, etc.) then a simple render function isn't enough. That's fine. Just make it a method in a class: renderer.render(world). This class can now hold all that rendering gumpf, available only to the render() method. Of course as you scale this up to larger and larger applications/games you'll find that these pieces become bloated. At which point you'll want to start sub-dividing ("refactoring") them. For example you can move code out of your World class into Paddle and Ball classes and World just glues them together. Or you might move code out of render() (or Renderer) into renderPaddle, renderBall, etc. This "refactoring" process helps keep your code organised and maintainable. But it's not where you start. You start simple and you subdivide structure into it only when needed. This is how we all do it!
With lots of experience we developers can sometimes foresee what sort of factoring is appropriate and then shoot straight for that. But in reality it's an unreliable process (because we're only human, we stumble across pitfalls or we learn something new or requirements change). Ultimately it's a skill that you will never master and you will never stop improving!
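A sketch of how update() might translate keypresses into World method calls, following the names used in the post (Key, movePaddle and the rest are illustrative):

```cpp
enum class Key { None, LeftArrow, RightArrow };

// World holds the game state; its public methods model the
// interactions available to the player.
class World
{
public:
    void movePaddle(int amount) { paddleX += amount; }
    int paddlePosition() const { return paddleX; }

private:
    int paddleX = 0;
};

// update() converts inputs into actions that modify the world.
void update(Key pressed, World& world)
{
    if (pressed == Key::RightArrow) world.movePaddle(1);
    if (pressed == Key::LeftArrow)  world.movePaddle(-1);
}
```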
  34. 4 points
    Yeah, I've worked on games where three specific objects were included in a depth-pre-pass (prior to the G-buffer pass), and that's it! They happened to always be very close to the camera and occluded a lot of pixels, and also had high depth complexity / overlapping parts. It turned out that a generic ZPP was a loss, but doing ZPP for just these three specific objects was a win There's no generic general purpose realtime renderer that's a good fit for every game. Every single console game that I've worked on has used a different rendering pipeline. Do what works in your situation.
  35. 4 points
    Sounds like lack of practice to me. You say you have solved many programming challenges, so you obviously know how to write certain types of programs. I wouldn't give up, and instead I would try to get specific help with specific problems. I have helped people learn how to write programs before. The method I recommend is starting with a fairly trivial program that you can compile and test. You then make some changes to get the program a bit closer to doing what you really want it to do. Compile. Test. Repeat. It might take some practice before you even know how to make progress in this manner, so a bit of personal guidance could be useful. I am much too busy these days to promise you any of my time, but perhaps these forums can play the role of the mentor.
  36. 4 points
    You will make bad design decisions early on. So do the professionals. The difference between them and the rest is that the professionals just get on with things, and fix things up as they go. Regarding your server 'procrastination', I'd suggest just getting started on receiving information from a client and broadcasting it out to other clients. Don't do too much client work until you have this working because the server controls the game, not the client.
  37. 4 points
    Adding to what has already been said... You have 90% of what you need to know. Given that you asked about this 3 years ago as well, I think your real problem is that you're procrastinating and waiting for the One True Book that will give you everything you need, instead of having a go right now and learning as you go along. But major software projects like this are not things you can just copy out of a book. You have a 'shopping list' of all the things you want to know about - but almost all of them are covered to some degree in the books you have. Why do you think that the next book will give you a better answer than they already did? Regarding those specific things, here's my very short summary:

- Principles, ideas with example code or pseudocode - the Massively Multiplayer Game Development book has plenty of those. If they don't fit your project, start a topic here about your specifics, and how they differ from what you've read.
- Project structure - start simple, and refactor as you go. You might want to consider keeping your server code and client code in separate libraries, maybe with shared code in a 3rd library. The rest is entirely up to you.
- Code structure - this is the same as project structure.
- How to properly structure the game loop - there are many articles on the game loop, all of which can work for you.
- How to handle zoning - depends on how you want zones to work. Pick something, then you can ask a specific question.
- How/where to handle AI - the server runs AI for the non-player characters. Clients will handle NPCs in a similar way to how they handle other players, i.e. the server tells the clients what they're doing.
- Threading (what should be in separate threads) - whatever you like. Probably nothing, to begin with, until you get a decent understanding of what can be multithreaded. This is not MMO specific.
- Multiple servers (what should be on a separate server/machine) - again, whatever you like, and probably nothing to begin with.
- Putting it all together - just do all the above things.
  38. 4 points
    I can see what you asked, but let's approach this from a different direction. The direct question of "are there any good books" will just create a library that you'll never read. One approach is for us to play 20 questions with you. Can you already make a "Guess the Number" game? Can you already make a Tic Tac Toe game? Can you make a Connect Four clone? Can you make a Pong clone? A Tetris clone? A Breakout clone? A Space Invaders clone? An Asteroids clone? Can you make a Galaga clone? Can you make a Super Mario Bros clone? Can you make them with multiple players on a single machine? Can you make them with persistent high score systems? Can you make network chat clients? Can you embed a chat client into your game? If you answered "yes" to all of them, congratulations - you don't need those books. Otherwise, stop at the first "no" and we can help you grow from there. Or we can approach this assuming you know a bit more, with broader questions. On what technology scale are you looking to build? What tools are you planning on using? You could start with RPG Maker and put together a fairly good RPG with very little work, and with minimal software development skill. Even game development beginners can turn out a decent (yet small) game in a few days with that. You might try building something with GameMaker:Studio or GameSalad or Construct, they're rather comprehensive and easy to use even for people not familiar with programming. You might try something even bigger requiring more skills and effort, such as using Unreal or Unity as your engine, developing on from there. You might try simpler tools and libraries, maybe leverage SDL or Marmalade or LibGDX or Cocos2D-x, or any other libraries. Or you might be considering building everything from scratch yourself, which is about the most ambitious plan. Is anything blocking you from using those tools effectively? Do you have what it takes to build the game you want with those tools?
If you're using tools for C++, or C#, or Java, or some other language, do you know them well enough to build what you want to build? If not, there is your thing to learn. Do you need more knowledge or more experience on specific topics within those tools? Only you know what you already know on the topics. Maybe you have minimal skill with the language. Maybe you're somewhat experienced with simple programming but need to learn more about specific topics, perhaps algorithms and data structures regarding containers, or searching, or graph manipulation, or state machines, or IPC, or networking, or whatever else. You mention learning about server side, so you may need to get comfortable working with databases, get comfortable with writing communications libraries and protocols, get comfortable working with connectivity meshes and data persistence requirements, or keeping data consistent between machines, or working with locked resources, or understanding data isolation levels. If you already know how to use those tools effectively, then try again with the first list. Can you make "guess the number" using the tools? Connect four? Pong? Tetris-style and Breakout-style games? Networked chat? Networked Tetris-style and Breakout-style games? Galaga-style shooters? Networked Galaga-style shooters? At some point there is a boundary; one thing you are able to make today, the other thing you currently lack the skills or experience to make. Nobody knows what you should be learning (not even you) until you understand where that boundary is. We need to understand what it is that you want to learn. With that we can direct you to resources to help you learn it.
  39. 4 points
    Apologies to all the participants, I had a family medical emergency and was unable to submit my results before the deadline. Given the results have already been released I won't add them to the official site, but you can view my scores and notes here. Well done to everyone, and a fantastic effort from slicer and the other judges.
  40. 4 points
    The DXGI names are little endian. B8 is listed first, meaning it's byte #0 - the byte at the little end. D3D9 used a big endian naming convention, where that format would be written as "ARGB".
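A quick byte-order check makes this concrete (a sketch; the helper names are made up, and the assertions only hold on a little-endian host):

```cpp
#include <cstdint>
#include <cstring>

// Pack a D3D9-style "ARGB" value: A in the high byte, B in the low byte.
uint32_t packARGB(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | b;
}

// Read the byte stored at a given memory offset of the 32-bit value.
// On a little-endian machine offset 0 holds B -- hence DXGI's "B8G8R8A8".
uint8_t byteAt(uint32_t pixel, int offset)
{
    uint8_t bytes[4];
    std::memcpy(bytes, &pixel, sizeof(bytes));
    return bytes[offset];
}
```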
  41. 4 points
    I prefer whichever allows me to type in the hex code. Edit: But given the four choices above, I tend to prefer the bottom left (circular) one, because that's how I see the primary colors in my mind's eye.
  42. 4 points
    The Open/Closed Principle recommends that code be extensible without needing modification. So if you're working with a half-decent underlying framework or library, you should be able to extend freely without needing to modify internals. Ironically, this lends itself more to composition than it does to Java-style "extends" inheritance. I don't have a link handy, but there's actually a good argument that composition and inheritance are equally powerful, because they can be expressed in terms of each other. It goes something like this:

- Consider a class Derived which inherits from Base.
- The surface area of Derived (in an abstract sense) is a superset of Base's surface area, because Derived can do anything Base can do.
- If you follow good design principles, including Open/Closed, Base can do nothing that Derived cannot also do.
- Now consider a class Container which has a public member variable of type Base.
- The surface area of Container is a superset of Base's.
- Base can do nothing that Container cannot also do, because Container has access to Base's functionality through that public member.

So far, this reads like an argument that composition and inheritance are equivalent. Here's the magic touch that makes composition ridiculously superior to inheritance: a class composed of A, B, C can expose a smaller surface area than the union of (A, B, C). This is key. A class built up using composition can always simplify what it exposes to clients. In other words, just because I have-a component doesn't mean I have to expose it to other code. Furthermore, I can expose parts of my components to external callers. This helps satisfy the Law of Demeter and other complexity-reduction principles. I cannot inherit an arbitrary number of base classes without generating a ton of complexity. I can compose until eternity and still keep complexity tightly controlled. Implementation inheritance was a bad idea and deserves to fade into obscurity.
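That last point can be sketched in a few lines (Engine and Car are illustrative names, not from the original post):

```cpp
struct Engine
{
    void start() { running = true; }
    void dumpDiagnostics() { /* noisy internal API */ }
    bool running = false;
};

// Car *has* an Engine but exposes a smaller surface area than it:
// start() is forwarded, dumpDiagnostics() stays hidden from callers.
class Car
{
public:
    void start() { engine.start(); }
    bool isRunning() const { return engine.running; }

private:
    Engine engine; // composition: not visible to client code
};
```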
  43. 3 points
    C++ is also a replacement for C and it already exists. I don't think it's worthwhile looking at any language in a vacuum where other alternatives are ignored.
  44. 3 points
    If you ask people whether they want anything to be better, they will always say yes. If you know how to make a game with better writing and linear elements then I really recommend making it. Just don't be surprised if what you thought was a better story is disliked by players; sometimes people have a different idea of what is better. I for one would like to see more story-based games, so you already have my interest. Whatever you can safely afford, and a bit more. Different experiments have different costs; emotional stories can often lead to the player feeling depressed when playing a game, up to the point where they just stop. So if you plan on doing these I recommend testing the game often on test players to see how they are impacted. This War of Mine is a good example of a game where a heavy emotional story limits gameplay; most people I know didn't play the game more than a few tries. It's a good game, but they pushed the emotions of the player past the breaking point.
  45. 3 points
    We don't do this - we might close a topic if it isn't productive (usually only in more extreme cases or after allowing a chance for reasonable conversation), but we don't censor or hide complaints about moderation. If you're unsatisfied with Josh's response we're happy to discuss the matter further, either here or privately. Lastly, if you're going to make an allegation of systemic abuse of power, please provide examples, or at least a timeframe so we can examine logs.
  46. 3 points
    For what it's worth, both MotorStorm: Pacific Rift & Apocalypse (PS3) and Driveclub (PS4) used varying degrees of depth-only passes. Rift had lots of foliage and ground-rush (grass and the like) so we did a depth-only pass of just the world geometry after rendering up to 64 'occluders'. The occluders were just large polys, up to 8 verts I think, that were inside canyon walls, etc. These occluders were also used further up the pipe on the CPU side to do simple coarse-grained PVS. On Apocalypse, our VS were very heavy from lots of skinning and the like, so we ended up using conditional rendering. We did a full depth-only pass, and used the RSX feature that wrote out a pixel count for each draw call. On the full G-Buffer pass, the RSX used the value to decide whether to skip the draw call. We did a large number of automated fly-throughs of our levels looking Fwd/Rev with this stuff on/off and it was a win, something like 1ms - 2ms if I recall. Everyone told us that conditional rendering was way slower, but not for us. On Driveclub, we again used occluders but also a full depth-only pass. We fired off a compute shader right after to build tile info for lighting, etc., which ran in parallel with our shadow map passes. Overall, this was a nice win despite some very heavy vertex shaders.
  47. 3 points
    This brings up the question: do you want to make an engine, or make a game? If you just want to make games and that's all you care about right now, then go right ahead with a game engine. The above suggestions still apply. If you want to make an engine (not my recommendation unless you want to for learning), then you could still start by using an engine just to get a feel for the kinds of things you don't have to do when using an engine (which are things you would have to do if you made the engine yourself). I don't see anything wrong either with making something simple with vanilla OpenGL/D3D in C++ (or whatever). Knowing a bit about the undersides of game engines can make you better at using said engines... but I don't think it is necessary, rather something that can possibly help.
  48. 3 points
    There's a reason that the newer VS compiler is behaving differently than the one shipped with Dev-C++: the one with Dev-C++ is over a decade old, and very buggy. The Dev-C++ software contains over 250 known bugs. Code that "worked" with that compiler is, in fact, incorrect and may exhibit problems in your released product. Using a modern compiler should also result in better performance. As others have suggested, you should probably bite the bullet and fix all of the errors in your code. You'll end up with a better product in the long run - there are reasons that those things are considered errors. Given you're updating to a newer IDE/compiler, why is your newer choice 7 years old? There have been several newer versions of Visual Studio since 2010, and each version has a freely available option. If you for some reason insist on using Dev-C++ for any future projects, at least use the more recently updated "Orwell Dev-C++" -- I still don't really recommend it, but it has at least had more recent bug fixes and has a significantly newer compiler/debugger/etc. This is a fast moving field where new software and technologies are released all the time. You certainly don't need to jump on to every new thing as it comes out -- and I would actually recommend waiting for a while and updating between projects! -- but if you want to remain relevant and produce good products you do need to make some effort to stay at least reasonably current. Definitely, don't start any future projects using such an outdated choice of software without some very good reason!
  49. 3 points
    Right now AMD suggests putting your frequently changing stuff first, while nVidia says last. I would say neither matters much, because unless you have a very edgy case for the bound resources and access pattern, the difference in performance on the CPU and the GPU is at the level of white noise. I would even suggest caring less, because the trend in rendering large amounts of data is to go bindless, which adds an explicit indirection in the shader from material/object properties to the actual descriptors. The cost again is negligible in practice and is compensated by the shrinking amount of work the command processor has to do; plus you have the possibility to prepare many of your bindings in advance in immutable buffers and reuse them from frame to frame instead of spending time flushing everything every frame.
  50. 3 points
    Seconded. Most games I've worked on typically update game logic systems between 10-20Hz on a single thread without any impact to the gameplay experience. There simply isn't enough going on every frame for gameplay to consume that much CPU, especially considering that humans themselves are limited in how fast they can respond anyway. As long as the key player "interactive" systems are real-time responsive (rendering, sound, input, etc.), the game as a whole will feel responsive. Also consider that in any multiplayer game, network latency is going to mask a lot of it. Certain gameplay-relevant systems that are more computationally expensive (AI, pathfinding, physics) can be multithreaded independently as previously mentioned, but again those are very specific, well-defined problem domains.
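One common way to run gameplay at 10-20Hz inside a faster render loop is a fixed-timestep accumulator (a generic sketch, not any particular engine's loop):

```cpp
// Accumulates frame time and reports how many fixed gameplay ticks
// (at 10 Hz here) should be simulated this frame.
struct LogicClock
{
    double accumulator = 0.0;
    double step = 0.1; // 100 ms -> 10 Hz gameplay update

    int advance(double frameSeconds)
    {
        accumulator += frameSeconds;
        int ticks = 0;
        while (accumulator >= step)
        {
            accumulator -= step;
            ++ticks;
        }
        return ticks;
    }
};
```

The render loop runs every frame; gameplay systems only run when advance() returns a nonzero tick count, so they stay decoupled from the frame rate.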