Popular Content

Showing content with the highest reputation since 02/18/18 in all areas

  1. 8 points
The more you know about a given topic, the more you realize that no one knows anything. For some reason (why God, why?) my topic of choice is game development.

Everyone in that field agrees: don't add networked multiplayer to an existing game, you drunken clown. Well, I did it anyway because I hate myself. Somehow it turned out great. None of us know anything.

Problem #1: assets

My first question was: how do I tell a client to use such-and-such mesh to render an object? Serialize the whole mesh? Nah, they already have it on disk. Send its filename? Nah, that's inefficient and insecure. Okay, just a string identifier then?

Fortunately, before I had time to implement any of my own terrible ideas, I watched a talk from Mike Acton where he mentions the danger of "lazy decision-making". One of his points was: strings let you lazily ignore decisions until runtime, when it's too late to fix. If I rename a texture, I don't want to get a bug report from a player with a screenshot like this:

I had never thought about how powerful and complex strings are. Half the field of computer science deals with strings and what they can do. They usually require a heap allocation, or something even more complex like ropes and interning. I usually don't bother to limit their length, so a single string expands the possibility space to infinity, destroying whatever limited ability I had to predict runtime behavior.

And here I am using these complex beasts to identify objects. Heck, I've even used strings to access object properties. What madness! Long story short, I cultivated a firm conviction to avoid strings where possible.
I wrote a pre-processor that outputs header files like this at build time:

```cpp
namespace Asset
{
    namespace Mesh
    {
        const int count = 3;
        const AssetID player = 0;
        const AssetID enemy = 1;
        const AssetID projectile = 2;
    }
}
```

So I can reference meshes like this:

```cpp
renderer->mesh = Asset::Mesh::player;
```

If I rename a mesh, the compiler makes it my problem instead of some poor player's problem. That's good!

The bad news is, I still have to interact with the file system, which requires the use of strings. The good news is the pre-processor can save the day.

```cpp
const char* Asset::Mesh::filenames[] =
{
    "assets/player.msh",
    "assets/enemy.msh",
    "assets/projectile.msh",
    0,
};
```

With all this in place, I can easily send assets across the network. They're just numbers! I can even verify them.

```cpp
if (mesh < 0 || mesh >= Asset::Mesh::count)
    net_error(); // just what are you trying to pull, buddy?
```

Problem #2: object references

My next question was: how do I tell a client to please move/delete/frobnicate "that one object from before, you know the one". Once again, I was lucky enough to hear from smart people before I could shoot myself in the foot.

From the start, I knew I needed a bunch of lists of different kinds of objects, like this:

```cpp
Array<Turret> Turret::list;
Array<Projectile> Projectile::list;
Array<Avatar> Avatar::list;
```

Let's say I want to reference the first object in the Avatar list, even without networking, just on our local machine. My first idea is to just use a pointer:

```cpp
Avatar* avatar;
avatar = &Avatar::list[0];
```

This introduces a ton of non-obvious problems. First, I'm compiling for a 64-bit architecture, which means that pointer takes up 8 whole bytes of memory, even though most of it is probably zeroes. And memory is the number one performance bottleneck in games.

Second, if I add enough objects to the array, it will get reallocated to a different place in memory, and the pointer will point to garbage.

Okay, fine. I'll use an ID instead.
```cpp
template<typename Type>
struct Ref
{
    short id;
    inline Type* ref()
    {
        return &Type::list[id];
    }
    // overloaded "=" operator omitted
};

Ref<Avatar> avatar = &Avatar::list[0];
avatar.ref()->frobnicate();
```

Second problem: if I remove that Avatar from the list, some other Avatar will get moved into its place without me knowing. The program will continue, blissfully and silently screwing things up, until some player sends a bug report that the game is "acting weird". I much prefer the program to explode instantly so I at least get a crash dump with a line number.

Okay, fine. Instead of actually removing the avatar, I'll put a revision number on it:

```cpp
struct Avatar
{
    short revision;
};

template<typename Type>
struct Ref
{
    short id;
    short revision;
    inline Type* ref()
    {
        Type* t = &Type::list[id];
        return t->revision == revision ? t : nullptr;
    }
};
```

Instead of actually deleting the avatar, I'll mark it dead and increment the revision number. Now anything trying to access it will get a null pointer. And serializing a reference across the network is just a matter of sending two easily verifiable numbers.

Problem #3: delta compression

If I had to cut this article down to one line, it would just be a link to Glenn Fiedler's blog. Which by the way is here: gafferongames.com

As I set out to implement my own version of Glenn's netcode, I read this article, which details one of the biggest challenges of multiplayer games. Namely, if you just blast the entire world state across the network 60 times a second, you could gobble up 17 Mbps of bandwidth. Per client.

Delta compression is one of the best ways to cut down bandwidth usage. If a client already knows where an object is, and it hasn't moved, then I don't need to send its position again.

This can be tricky to get right. The first part is the trickiest: does the client really know where the object is? Just because I sent the position doesn't mean the client actually received it.
The client might send an acknowledgement back that says "hey, I received packet #218, but that was 0.5 seconds ago and I haven't gotten anything since." So to send a new packet to that client, I have to remember what the world looked like when I sent out packet #218, and delta compress the new packet against that.

Another client might have received everything up to packet #224, so I can delta compress the new packet differently for them. Point is, we need to store a whole bunch of separate copies of the entire world.

Someone on Reddit asked, "isn't that a huge memory hog?" No, it is not. Actually, I store 255 world copies in memory. All in a single giant array. Not only that, but each copy has enough room for the maximum number of objects (2048), even if only 2 objects are active.

If you store an object's state as a position and orientation, that's 7 floats: 3 for XYZ coordinates and 4 for a quaternion. Each float takes 4 bytes. My game supports up to 2048 objects.

7 floats * 4 bytes * 2048 objects * 255 copies = ... 14 MB.

That's like, half of one texture these days.

I can see myself writing this system five years ago in C#. I would start off immediately worried about memory usage, just like that Redditor, without stopping to think about the actual data involved. I would write some unnecessary, crazy fancy, bug-ridden compression system.

Taking a second to stop and think about actual data like this is called Data-Oriented Design. When I talk to people about DOD, many immediately say, "Woah, that's really low-level. I guess you want to wring out every last bit of performance. I don't have time for that. Anyway, my code runs fine." Let's break down the assumptions in this statement.

Assumption 1: "That's really low-level." Look, I multiplied four numbers together. It's not rocket science.

Assumption 2: "You sacrifice readability and simplicity for performance." Let's picture two different solutions to this netcode problem.
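As a sanity check on that arithmetic, here is what the flat world-history buffer might look like. The sizes (2048 objects, 255 copies, 7 floats per object) come straight from the article, but the type and variable names are my own invention, not the game's actual code:

```cpp
#include <cassert>

// Hypothetical sketch of the statically allocated world-history buffer.
// Sizes are the article's numbers; all names are invented.
const int MAX_OBJECTS  = 2048;
const int WORLD_COPIES = 255;

struct ObjectState
{
    float position[3];    // XYZ
    float orientation[4]; // quaternion
};

struct WorldState
{
    ObjectState objects[MAX_OBJECTS];
};

// One giant array in the .bss segment. It never moves, everything is
// the same size, and its total footprint is knowable at a glance.
WorldState history[WORLD_COPIES];
```

`sizeof(history)` works out to 7 floats * 4 bytes * 2048 * 255 = 14,622,720 bytes, matching the 14 MB figure above.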
For clarity, let's pretend we only need 3 world copies, each containing up to 2 objects.

Here's the solution I just described. Everything is statically allocated in the .bss segment. It never moves around. Everything is the same size. No pointers at all.

Here's the idiomatic C# solution. Everything is scattered randomly throughout the heap. Things can get reallocated or moved right in the middle of a frame. The array is jagged. 64-bit pointers all over the place.

Which is simpler?

The second diagram is actually far from exhaustive. C#-land is a lot more complex in reality. Check the comments and you'll probably find someone correcting me about how C# actually works. But that's my point. With my solution, I can easily construct a "good enough" mental model to understand what's actually happening on the machine. I've barely scratched the surface with the C# solution. I have no idea how it will behave at runtime.

Assumption 3: "Performance is the only reason you would code like this." To me, performance is a nice side benefit of data-oriented design. The main benefit is clarity of thought. Five years ago, when I sat down to solve a problem, my first thought was not about the problem itself, but how to shoehorn it into classes and interfaces.

I witnessed this analysis paralysis first-hand at a game jam recently. My friend got stuck designing a grid for a 2048-like game. He couldn't figure out if each number was an object, or if each grid cell was an object, or both. I said, "the grid is an array of numbers. Each operation is a function that mutates the grid." Suddenly everything became crystal clear to him.

Assumption 4: "My code runs fine." Again, performance is not the main concern, but it's important. The whole world switched from Firefox to Chrome because of it. Try this experiment: open up calc.exe. Now copy a 100 MB file from one folder to another.
I don't know what calc.exe is doing during that 300ms eternity, but you can draw your own conclusions from my two minutes of research: calc.exe actually launches a process called Calculator.exe, and one of the command line arguments is called "-ServerName".

Does calc.exe "run fine"? Did throwing a server in simplify things at all, or is it just slower and more complex?

I don't want to get side-tracked. The point is, I want to think about the actual problem and the data involved, not about classes and interfaces. Most of the arguments against this mindset amount to "it's different than what I know".

Problem #4: lag

I now hand-wave us through to the part of the story where the netcode is somewhat operational. Right off the bat I ran into problems dealing with network lag. Games need to respond to players immediately, even if it takes 150ms to get a packet from the server. Projectiles were particularly useless under laggy network conditions. They were impossible to aim.

I decided to re-use those 14 MB of world copies. When the server receives a command to fire a projectile, it steps the world back 150ms to the way the world appeared to the player when they hit the fire button. Then it simulates the projectile and steps the world forward until it's up to date with the present. That's where it creates the projectile.

I ended up having the client create a fake projectile immediately, then as soon as it hears back from the server that the projectile was created, it deletes the fake and replaces it with the real thing. If all goes well, they should be in the same place due to the server's timey-wimey magic.

Here it is in action. The fake projectile appears immediately but goes right through the wall. The server receives the message and fast-forwards the projectile straight to the part where it hits the wall. 150ms later the client gets the packet and sees the impact particle effect.
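The rewind trick can be sketched in miniature. This toy version is entirely mine, not the game's code: a 1-D world, a list of tick-stamped snapshots, and a server that rewinds into its history to find where the shooter saw the world, then fast-forwards the projectile to the present:

```cpp
#include <cassert>
#include <deque>

// Toy sketch of server-side rewind, assuming a 1-D world and a fixed
// tick rate. WorldSnapshot and all names here are hypothetical.
struct WorldSnapshot
{
    int tick;
    float player_x; // position of the target player at that tick
};

struct Server
{
    std::deque<WorldSnapshot> history; // most recent snapshot at the back

    // Spawn a projectile where the world was `lag_ticks` ago, then
    // fast-forward it so it's up to date with the present tick.
    float fire_projectile(int current_tick, int lag_ticks, float speed)
    {
        int target = current_tick - lag_ticks;
        float origin = history.back().player_x; // fallback: present state
        for (const WorldSnapshot& s : history)
            if (s.tick == target) { origin = s.player_x; break; }
        // Fast-forward: simulate lag_ticks worth of projectile travel.
        return origin + speed * lag_ticks;
    }
};
```

The real system rewinds the entire 14 MB world-copy array rather than one float, but the shape of the computation is the same: look up the past, simulate forward to now.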
The problem with netcode is, each mechanic requires a different approach to lag compensation. For example, my game has an "active armor" ability. If players react quickly enough, they can reflect damage back at enemies.

This breaks down in high-lag scenarios. By the time the player sees the projectile hitting their character, the server registered the hit 100ms ago. The packet just hasn't made it to the client yet. This means you have to anticipate incoming damage and react long before it hits. Notice in the gif above how early I had to hit the button.

To correct this, the server implements something I call "damage buffering". Instead of applying damage instantly, the server puts the damage into a buffer for 100ms, or whatever the round-trip time is to the client. At the end of that time, it either applies the damage, or, if the player reacted, reflects it back.

Here it is in action. You can see the 200ms delay between the projectile hitting me and the damage actually being applied.

Here's another example. In my game, players can launch themselves at enemies. Enemies die instantly to perfect shots, but they deflect glancing blows and send you flying like this:

Which direction should the player bounce? The client has to simulate the bounce before the server knows about it. The server and client need to agree which direction to bounce or they'll get out of sync, and they have no time to communicate beforehand.

At first I tried quantizing the collision vector so that there were only six possible directions. This made it more likely that the client and server would choose the same direction, but it didn't guarantee anything.

Finally I implemented another buffer system. Both client and server, when they detect a hit, enter a "buffer" state where the player sits and waits for the remote host to confirm the hit. To minimize jankiness, the server always defers to the client as to which direction to bounce.
If the client never acknowledges the hit, the server acts like nothing happened and continues the player on their original course, fast-forwarding them to make up for the time they sat still waiting for confirmation.

Problem #5: jitter

My server sends out packets 60 times per second. What about players whose computers run faster than that? They'll see jittery animation.

Interpolation is the industry-standard solution. Instead of immediately applying position data received from the server, you buffer it a little bit, then you blend smoothly between whatever data you have.

In my previous attempt at networked multiplayer, I tried to have each object keep track of its own position data and smooth itself out. I ended up getting confused, and it never worked well. This time, since I could already easily store the entire world state in a struct, I was able to write just two functions to make it work. One function takes two world states and blends them together. Another function takes a world state and applies it to the game.

How big should the buffer delay be? I originally used a constant, until I watched a video from the Overwatch devs where they mention adaptive interpolation delay. The buffer delay should smooth out not only the framerate from the server, but also any variance in packet delivery time.

This was an easy win. Clients start out with a short interpolation delay, and any time they're missing a packet to interpolate toward, they increase their "lag score". Once it crosses a certain threshold, they tell the server to switch them to a higher interpolation delay. Of course, automated systems like this often act against the user's wishes, so it's important to add switches and knobs to the algorithm!

Problem #6: joining servers mid-match

Wait, I already have a way to serialize the entire game state. What's the hold-up? Turns out, it takes more than one packet to serialize a fresh game state from scratch.
And each packet may take multiple attempts to make it to the client. It may take a few hundred milliseconds to get the full state, and as we've seen already, that's an eternity. If the game is already in progress, that's enough time to send 20 packets' worth of new messages, which the client is not ready to process because it hasn't loaded yet.

The solution is—you guessed it—another buffer. I changed the messaging system to support two separate streams of messages in the same packet. The first stream contains the map data, which is processed as soon as it comes in. The second stream is just the usual fire-hose of game messages that come in while the client is loading. The client buffers these messages until it's done loading, then processes them all until it's caught up.

Problem #7: cross-cutting concerns

This next part may be the most controversial. Remember that bit of gamedev wisdom from the beginning? "Don't add networked multiplayer to an existing game"? Well, most of the netcode in this game is literally tacked on. It lives in its own 5000-line source file. It reaches into the game, pokes stuff into memory, and the game renders it.

Just listen a second before stoning me. Is it better to group all network code in one place, or spread it out inside each game object? I think both approaches have advantages and disadvantages. In fact, I use both approaches in different parts of the game, for various reasons both human and technical.

But some design paradigms (*cough* OOP) leave no room for you to make this decision. Of course you put the netcode inside the object! Its data is private, so you'll have to write an interface to access it anyway. Might as well put all the smarts in there too.

Conclusion

I'm not saying you should write netcode like I do; only that this approach has worked for me so far. Read the code and judge for yourself. There is an objectively optimal approach for each use case, although people may disagree on which one it is. You should be free to choose based on actual constraints rather than arbitrary ones set forth by some paradigm.

Thanks for reading. DECEIVER is launching on Kickstarter soon. Sign up to play the demo here!
  2. 5 points
If you've got a large mob of actors all trying to navigate to a single location, and your navmesh/grid/whatever has uniform movement costs, you can ditch A* and instead do a single breadth-first flood-fill outwards from the goal location, writing an increasing value each time the fill expands to an unvisited location. Then all actors read the resulting gradient map, which lets them move "downhill" towards the goal location.
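As a minimal sketch of that technique (the function name and grid representation are mine, not the poster's), here is a breadth-first flood-fill over a 2-D uniform-cost grid that produces the distance gradient each actor can follow downhill:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Breadth-first flood-fill from the goal outward over a uniform-cost
// grid. Writes each cell's step-distance to the goal; unreachable
// cells and walls stay at -1. Hypothetical illustration of the idea.
std::vector<int> flood_fill(const std::vector<int>& walls, int w, int h,
                            int goal_x, int goal_y)
{
    std::vector<int> dist(w * h, -1);
    std::queue<int> frontier;
    dist[goal_y * w + goal_x] = 0;
    frontier.push(goal_y * w + goal_x);
    const int dx[] = { 1, -1, 0, 0 };
    const int dy[] = { 0, 0, 1, -1 };
    while (!frontier.empty())
    {
        int cell = frontier.front();
        frontier.pop();
        int x = cell % w, y = cell / w;
        for (int i = 0; i < 4; i++)
        {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h)
                continue;
            int n = ny * w + nx;
            // Skip walls and cells the fill has already reached.
            if (walls[n] || dist[n] != -1)
                continue;
            dist[n] = dist[cell] + 1;
            frontier.push(n);
        }
    }
    return dist;
}
```

Each actor then just steps to whichever neighbor has the lowest non-negative value; one fill serves the entire mob, regardless of how many actors there are.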
  3. 5 points
    Coming up with extra features is easy. The hard part is throwing out the ones that distract from the good parts of the game or only serve to complicate things without making it more fun.
  4. 5 points
They're not distinct or mutually exclusive. It's not just one or the other in different parts of the code; you can use both at the same time!

Also, a lot of the anti-OOP rants that you see on the internet are from people who have unfortunately had to work on badly written OOP projects and think that OOP = inheritance (when inheritance isn't even present in OOP's formal description, and any decent OOP practitioner will tell you that a core tenet of OOP is the preference for composition over inheritance).

The purpose of OOP is to develop useful abstractions that reduce the surface area of your components to the minimum necessary, allowing million-line-of-code projects to remain maintainable as a collection of decoupled components. OOP is a 'paradigm' in that it provides a bunch of tools and then defines a structure using those tools.

The purpose of DOD is to analyse the flow of data that's actually required by your results, and then structure your intermediate and source data structures in a way that results in the simplest data transformations. The goal is to produce simpler software that performs well on real hardware. If using OOP, this analysis tells you how you should structure your classes. I wouldn't call DOD a 'paradigm' like OOP; it's more of a methodology, which can be applied to almost any programming paradigm.

You should apply both if you're making a large game that you want to be able to maintain, but also want good performance and simple algorithms. Because OOP is about components and their interfaces (while hiding their implementations), you're free to do whatever you want within the implementation of a component too. Also, once an OOP component is created, it can be used by other OOP components, or by procedural-programming algorithms, or functional-programming algorithms, etc...

A common trend in my recent experience is actually to merge functional-programming techniques with OOP too. Components in OOP are typically mutable (e.g. some core OOP rules are actually defined in terms of how the component's state changes/mutates...), but functional code typically works with immutable components and pure functions, which IMHO leads to better maintainability, as the code is easier to reason about. It's also common to use procedural programming to define the 'script' of high-level loops, like the rendering flow or the game update loop. One reason for C++'s popularity in games is that it is happy to let you write in this mixture of procedural, pure functional, object-oriented and data-oriented paradigms.

[edit] I'd highly recommend reading Albrecht's Pitfalls of Object Oriented Programming ( https://drive.google.com/open?id=1SbmKU0Ev9UjyIpNMOQu7aEMhZBifZkw6 ), which could be interpreted as being anti-OOP, but I personally interpret as anti-naive-OOP and a practical guide to applying DOD to existing code that already works but has been badly designed with regard to performance. The design he ends up with could still have an OOP interface on top of it, but strays from the common OOP idea of associating algorithms with just a single object.
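As a toy illustration of that functional-plus-OOP blend (the types and names here are invented, not from any real engine): the component's state is a plain value, and the update is a pure function that returns a new state rather than mutating in place.

```cpp
#include <cassert>

// Hypothetical immutable component state: a plain value type.
struct HealthState
{
    int current;
    int maximum;
};

// Pure function: no side effects, no hidden dependencies. Given the
// same inputs it always returns the same output, so it's trivial to
// reason about and to test.
HealthState apply_damage(HealthState s, int amount)
{
    int next = s.current - amount;
    if (next < 0)
        next = 0;
    return HealthState{ next, s.maximum };
}
```

The calling code decides when to commit the new state, which keeps the mutation in one obvious place instead of scattered through methods.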
  5. 4 points
If you ever have something like this, it's a sure sign that you're violating the Law of Demeter everywhere. An algorithm that binds shader resources only needs access to an interface for binding resources. In D3D that's the device context, or in abstract terms you'd generally call it a "command list writer" or some such. In older engines you typically have one command list, but in modern engines you can have lots, so you definitely don't want them to be global. The simplest version looks something like:

```cpp
void Widget::Draw( CommandList& ctx )
{
    ctx.BindPsSampler(slot, m_pointSampler);
    ctx.DrawIndexed(...);
}
```

This code doesn't need to know about the engine, or two layers of "managers", just the one interface that it actually cares about. It also doesn't need to know anything about how this object was "located". It's free of unnecessary dependencies.

If you keep each individual bit of code minimalistic (as in the example above) and free of dependencies, then, no, it's really not. Look how clean and simple that example code is above, and how obvious its dependencies / side-effects are.

I would not think of the device and device-context as being paired. The device is basically a thread-safe memory manager for GPU resources. The context is a command-list writer for use by a single thread, and you can create multiple of them if you wish to have multi-threaded command-list creation. In my engine, most drawing functions are passed a device-context as an argument, and most graphics-initialization functions are passed a device as their argument. Sometimes a drawing function will need to do some resource management, so I allow you to fetch a device pointer from a device-context.

What if you want to create a second window?
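To show the snippet above in compilable form, here is one hypothetical way to flesh it out; the interface, the integer handle types, and the string-recording scheme are invented stand-ins for a real command-list API, not anyone's actual engine code:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical minimal command-list interface in the spirit of the
// example above. Widget::Draw depends only on this one interface,
// not on the engine, the device, or any managers.
struct CommandList
{
    std::vector<std::string> recorded; // stand-in for real GPU commands

    void BindPsSampler(int slot, int sampler)
    {
        recorded.push_back("bind_sampler:" + std::to_string(slot)
                           + ":" + std::to_string(sampler));
    }
    void DrawIndexed(int index_count)
    {
        recorded.push_back("draw:" + std::to_string(index_count));
    }
};

struct Widget
{
    int m_pointSampler = 7; // hypothetical sampler handle

    void Draw(CommandList& ctx)
    {
        ctx.BindPsSampler(0, m_pointSampler);
        ctx.DrawIndexed(6);
    }
};
```

Because Draw sees nothing but a CommandList, it can be exercised without a device, and several threads can each record into their own list for multi-threaded command-list creation.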
  6. 4 points
    The more you know about a given topic, the more you realize that no one knows anything. For some reason (why God, why?) my topic of choice is game development. Everyone in that field agrees: don't add networked multiplayer to an existing game, you drunken clown. Well, I did it anyway because I hate myself. Somehow it turned out great. None of us know anything. Problem #1: assets My first question was: how do I tell a client to use such-and-such mesh to render an object? Serialize the whole mesh? Nah, they already have it on disk. Send its filename? Nah, that's inefficient and insecure. Okay, just a string identifier then? Fortunately, before I had time to implement any of my own terrible ideas, I watched a talk from Mike Acton where he mentions the danger of "lazy decision-making". One of his points was: strings let you lazily ignore decisions until runtime, when it's too late to fix. If I rename a texture, I don't want to get a bug report from a player with a screenshot like this: I had never thought about how powerful and complex strings are. Half the field of computer science deals with strings and what they can do. They usually require a heap allocation, or something even more complex like ropes and interning. I usually don't bother to limit their length, so a single string expands the possibility space to infinity, destroying whatever limited ability I had to predict runtime behavior. And here I am using these complex beasts to identify objects. Heck, I've even used strings to access object properties. What madness! Long story short, I cultivated a firm conviction to avoid strings where possible. 
I wrote a pre-processor that outputs header files like this at build time: namespace Asset { namespace Mesh { const int count = 3; const AssetID player = 0; const AssetID enemy = 1; const AssetID projectile = 2; } } So I can reference meshes like this: renderer->mesh = Asset::Mesh::player; If I rename a mesh, the compiler makes it my problem instead of some poor player's problem. That's good! The bad news is, I still have to interact with the file system, which requires the use of strings. The good news is the pre-processor can save the day. const char* Asset::Mesh::filenames[] = { "assets/player.msh", "assets/enemy.msh", "assets/projectile.msh", 0, }; With all this in place, I can easily send assets across the network. They're just numbers! I can even verify them. if (mesh < 0 || mesh >= Asset::Mesh::count) net_error(); // just what are you trying to pull, buddy? Problem #2: object references My next question was: how do I tell a client to please move/delete/frobnicate "that one object from before, you know the one". Once again, I was lucky enough to hear from smart people before I could shoot myself in the foot. From the start, I knew I needed a bunch of lists of different kinds of objects, like this: Array<Turret> Turret::list; Array<Projectile> Projectile::list; Array<Avatar> Avatar::list; Let's say I want to reference the first object in the Avatar list, even without networking, just on our local machine. My first idea is to just use a pointer: Avatar* avatar; avatar = &Avatar::list[0]; This introduces a ton of non-obvious problems. First, I'm compiling for a 64 bit architecture, which means that pointer takes up 8 whole bytes of memory, even though most of it is probably zeroes. And memory is the number one performance bottleneck in games. Second, if I add enough objects to the array, it will get reallocated to a different place in memory, and the pointer will point to garbage. Okay, fine. I'll use an ID instead. 
template<typename Type> struct Ref { short id; inline Type* ref() { return &Type::list[id]; } // overloaded "=" operator omitted }; Ref<Avatar> avatar = &Avatar::list[0]; avatar.ref()->frobnicate(); Second problem: if I remove that Avatar from the list, some other Avatar will get moved into its place without me knowing. The program will continue, blissfully and silently screwing things up, until some player sends a bug report that the game is "acting weird". I much prefer the program to explode instantly so I at least get a crash dump with a line number. Okay, fine. Instead of actually removing the avatar, I'll put a revision number on it: struct Avatar { short revision; }; template<typename Type> struct Ref { short id; short revision; inline Type* ref() { Type* t = &Type::list[id]; return t->revision == revision ? t : nullptr; } }; Instead of actually deleting the avatar, I'll mark it dead and increment the revision number. Now anything trying to access it will give a null pointer exception. And serializing a reference across the network is just a matter of sending two easily verifiable numbers. Problem #3: delta compression If I had to cut this article down to one line, it would just be a link to Glenn Fiedler's blog. Which by the way is here: gafferongames.com As I set out to implement my own version of Glenn's netcode, I read this article, which details one of the biggest challenges of multiplayer games. Namely, if you just blast the entire world state across the network 60 times a second, you could gobble up 17 mbps of bandwidth. Per client. Delta compression is one of the best ways to cut down bandwidth usage. If a client already knows where an object is, and it hasn't moved, then I don't need to send its position again. This can be tricky to get right. The first part is the trickiest: does the client really know where the object is? Just because I sent the position doesn't mean the client actually received it. 
The client might send an acknowledgement back that says "hey I received packet #218, but that was 0.5 seconds ago and I haven't gotten anything since." So to send a new packet to that client, I have to remember what the world looked like when I sent out packet #218, and delta compress the new packet against that. Another client might have received everything up to packet #224, so I can delta compress the new packet differently for them. Point is, we need to store a whole bunch of separate copies of the entire world. Someone on Reddit asked "isn't that a huge memory hog"? No, it is not. Actually I store 255 world copies in memory. All in a single giant array. Not only that, but each copy has enough room for the maximum number of objects (2048) even if only 2 objects are active. If you store an object's state as a position and orientation, that's 7 floats. 3 for XYZ coordinates and 4 for a quaternion. Each float takes 4 bytes. My game supports up to 2048 objects. 7 floats * 4 bytes * 2048 objects * 255 copies = ... 14 MB. That's like, half of one texture these days. I can see myself writing this system five years ago in C#. I would start off immediately worried about memory usage, just like that Redditor, without stopping to think about the actual data involved. I would write some unnecessary, crazy fancy, bug-ridden compression system. Taking a second to stop and think about actual data like this is called Data-Oriented Design. When I talk to people about DOD, many immediately say, "Woah, that's really low-level. I guess you want to wring out every last bit of performance. I don't have time for that. Anyway, my code runs fine." Let's break down the assumptions in this statement. Assumption 1: "That's really low-level". Look, I multiplied four numbers together. It's not rocket science. Assumption 2: "You sacrifice readability and simplicity for performance." Let's picture two different solutions to this netcode problem. 
For clarity, let's pretend we only need 3 world copies, each containing up to 2 objects. Here's the solution I just described. Everything is statically allocated in the .bss segment. It never moves around. Everything is the same size. No pointers at all. Here's the idiomatic C# solution. Everything is scattered randomly throughout the heap. Things can get reallocated or moved right in the middle of a frame. The array is jagged. 64-bit pointers all over the place. Which is simpler? The second diagram is actually far from exhaustive. C#-land is a lot more complex in reality. Check the comments and you'll probably find someone correcting me about how C# actually works. But that's my point. With my solution, I can easily construct a "good enough" mental model to understand what's actually happening on the machine. I've barely scratched the surface with the C# solution. I have no idea how it will behave at runtime. Assumption 3: "Performance is the only reason you would code like this." To me, performance is a nice side benefit of data-oriented design. The main benefit is clarity of thought. Five years ago, when I sat down to solve a problem, my first thought was not about the problem itself, but how to shoehorn it into classes and interfaces. I witnessed this analysis paralysis first-hand at a game jam recently. My friend got stuck designing a grid for a 2048-like game. He couldn't figure out if each number was an object, or if each grid cell was an object, or both. I said, "the grid is an array of numbers. Each operation is a function that mutates the grid." Suddenly everything became crystal clear to him. Assumption 4: "My code runs fine". Again, performance is not the main concern, but it's important. The whole world switched from Firefox to Chrome because of it. Try this experiment: open up calc.exe. Now copy a 100 MB file from one folder to another. 
I don't know what calc.exe is doing during that 300ms eternity, but you can draw your own conclusions from my two minutes of research: calc.exe actually launches a process called Calculator.exe, and one of the command line arguments is called "-ServerName". Does calc.exe "run fine"? Did throwing a server in simplify things at all, or is it just slower and more complex?

I don't want to get side-tracked. The point is, I want to think about the actual problem and the data involved, not about classes and interfaces. Most of the arguments against this mindset amount to "it's different than what I know".

Problem #4: lag

I now hand-wave us through to the part of the story where the netcode is somewhat operational. Right off the bat I ran into problems dealing with network lag. Games need to respond to players immediately, even if it takes 150ms to get a packet from the server. Projectiles were particularly useless under laggy network conditions. They were impossible to aim.

I decided to re-use those 14 MB of world copies. When the server receives a command to fire a projectile, it steps the world back 150ms to the way the world appeared to the player when they hit the fire button. Then it simulates the projectile and steps the world forward until it's up to date with the present. That's where it creates the projectile.

I ended up having the client create a fake projectile immediately, then as soon as it hears back from the server that the projectile was created, it deletes the fake and replaces it with the real thing. If all goes well, they should be in the same place due to the server's timey-wimey magic.

Here it is in action. The fake projectile appears immediately but goes right through the wall. The server receives the message and fast-forwards the projectile straight to the part where it hits the wall. 150ms later the client gets the packet and sees the impact particle effect. 
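A sketch of how that rewind might fall out of the stored world copies. This is my own illustration, not the game's actual code; `HISTORY`, `TICK_RATE`, and `rewind_index` are invented names:

```cpp
// Hypothetical sketch: the 255 stored world copies double as a rewind
// buffer. Given the client's lag, find the snapshot closest to what the
// player was actually looking at when they pressed the fire button.
const int HISTORY = 255;    // number of stored world copies
const int TICK_RATE = 60;   // assumed server simulation rate (Hz)

int rewind_index(int current_tick, int lag_ms)
{
    int ticks_back = (lag_ms * TICK_RATE) / 1000;  // 150ms -> 9 ticks at 60Hz
    if (ticks_back >= HISTORY)
        ticks_back = HISTORY - 1;                  // clamp to the buffer size
    // wrap around the ring buffer, staying non-negative
    return ((current_tick - ticks_back) % HISTORY + HISTORY) % HISTORY;
}
```

The server would then simulate the projectile forward from that snapshot, tick by tick, until it catches up with the present.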
The problem with netcode is, each mechanic requires a different approach to lag compensation. For example, my game has an "active armor" ability. If players react quickly enough, they can reflect damage back at enemies. This breaks down in high lag scenarios. By the time the player sees the projectile hitting their character, the server has already registered the hit 100ms ago. The packet just hasn't made it to the client yet. This means you have to anticipate incoming damage and react long before it hits. Notice in the gif above how early I had to hit the button.

To correct this, the server implements something I call "damage buffering". Instead of applying damage instantly, the server puts the damage into a buffer for 100ms, or whatever the round-trip time is to the client. At the end of that time, it either applies the damage, or if the player reacted, reflects it back.

Here it is in action. You can see the 200ms delay between the projectile hitting me and the damage actually being applied.

Here's another example. In my game, players can launch themselves at enemies. Enemies die instantly to perfect shots, but they deflect glancing blows and send you flying like this:

Which direction should the player bounce? The client has to simulate the bounce before the server knows about it. The server and client need to agree which direction to bounce or they'll get out of sync, and they have no time to communicate beforehand.

At first I tried quantizing the collision vector so that there were only six possible directions. This made it more likely that the client and server would choose the same direction, but it didn't guarantee anything. Finally I implemented another buffer system. Both client and server, when they detect a hit, enter a "buffer" state where the player sits and waits for the remote host to confirm the hit. To minimize jankiness, the server always defers to the client as to which direction to bounce. 
If the client never acknowledges the hit, the server acts like nothing happened and continues the player on their original course, fast-forwarding them to make up for the time they sat still waiting for confirmation.

Problem #5: jitter

My server sends out packets 60 times per second. What about players whose computers run faster than that? They'll see jittery animation.

Interpolation is the industry-standard solution. Instead of immediately applying position data received from the server, you buffer it a little bit, then you blend smoothly between whatever data you have. In my previous attempt at networked multiplayer, I tried to have each object keep track of its position data and smooth itself out. I ended up getting confused and it never worked well.

This time, since I could already easily store the entire world state in a struct, I was able to write just two functions to make it work. One function takes two world states and blends them together. Another function takes a world state and applies it to the game.

How big should the buffer delay be? I originally used a constant until I watched a video from the Overwatch devs where they mention adaptive interpolation delay. The buffer delay should smooth out not only the framerate from the server, but also any variance in packet delivery time.

This was an easy win. Clients start out with a short interpolation delay, and any time they're missing a packet to interpolate toward, they increase their "lag score". Once it crosses a certain threshold, they tell the server to switch them to a higher interpolation delay. Of course, automated systems like this often act against the user's wishes, so it's important to add switches and knobs to the algorithm!

Problem #6: joining servers mid-match

Wait, I already have a way to serialize the entire game state. What's the hold up? Turns out, it takes more than one packet to serialize a fresh game state from scratch. 
And each packet may take multiple attempts to make it to the client. It may take a few hundred milliseconds to get the full state, and as we've seen already, that's an eternity. If the game is already in progress, that's enough time to send 20 packets' worth of new messages, which the client is not ready to process because it hasn't loaded yet.

The solution is—you guessed it—another buffer. I changed the messaging system to support two separate streams of messages in the same packet. The first stream contains the map data, which is processed as soon as it comes in. The second stream is just the usual fire-hose of game messages that come in while the client is loading. The client buffers these messages until it's done loading, then processes them all until it's caught up.

Problem #7: cross-cutting concerns

This next part may be the most controversial. Remember that bit of gamedev wisdom from the beginning? "don't add networked multiplayer to an existing game"? Well, most of the netcode in this game is literally tacked on. It lives in its own 5000-line source file. It reaches into the game, pokes stuff into memory, and the game renders it.

Just listen a second before stoning me. Is it better to group all network code in one place, or spread it out inside each game object? I think both approaches have advantages and disadvantages. In fact, I use both approaches in different parts of the game, for various reasons human and technical. But some design paradigms (*cough* OOP) leave no room for you to make this decision. Of course you put the netcode inside the object! Its data is private, so you'll have to write an interface to access it anyway. Might as well put all the smarts in there too.

Conclusion

I'm not saying you should write netcode like I do; only that this approach has worked for me so far. Read the code and judge for yourself. There is an objectively optimal approach for each use case, although people may disagree on which one it is. 
You should be free to choose based on actual constraints rather than arbitrary ones set forth by some paradigm. Thanks for reading. DECEIVER is launching on Kickstarter soon. Sign up to play the demo here!
  7. 4 points
Even though that video is not 'real' -- a CPU from Doom's era could do maybe 11 million general instructions per second. A CPU from Doom (2016)'s era could do maybe 400,000 million floating point instructions per second. Given that speed-up, if the original game could have 30 enemies on screen, a modern CPU should cope with a million.

However, CPU speed is not the only factor. Memory bandwidth stats have gone from maybe 50MB/s to 50GB/s. If we assume the original game ran at 60Hz, maxed out memory bandwidth, and performance only relies on the number of entities, which is 30, we get 50MB / 60Hz / 30 = about 28KB of memory transfers per entity per frame. Scale that up to a million entities and suddenly you need 1.6TB/s of memory bandwidth! Seeing as we've only got 50GB/s on our modern PC, that means we can only cope with around 30k entities due to the RAM bottleneck, even though our CPU is capable of processing a million!

So, as for programming techniques to cope with a million entities: focus on memory bandwidth. Keep the amount of storage required per entity as low as possible. Keep the amount of memory touched (read + written) per entity as low as possible. Leverage the CPU cache as much as possible. Optimise for locality of reference. If two entities are both going to perform similar spatial queries on the world geometry, batch them up and run both queries back to back so that the world geometry is more likely to be present in cache for the second query.
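To make "keep the memory touched per entity low" concrete, here is an illustrative hot/cold split -- my own sketch, with invented field names, not code from any particular engine:

```cpp
// Hot data only: what the per-frame movement loop reads and writes.
// 24 bytes per entity, so a million entities stream ~24 MB per pass
// instead of dragging names, AI state, etc. through the cache with them.
struct EntityHot {
    float x, y, z;      // position
    float vx, vy, vz;   // velocity
};

// Cold data lives in a separate array and stays out of the hot loop.
struct EntityCold {
    char name[32];
    int model;
    int ai_state;
};

const int MAX_ENTITIES = 1000;  // small here; the same layout scales to a million

static EntityHot hot[MAX_ENTITIES];

void integrate(float dt)
{
    // tight, sequential, cache-friendly: touches only the hot array
    for (int i = 0; i < MAX_ENTITIES; ++i) {
        hot[i].x += hot[i].vx * dt;
        hot[i].y += hot[i].vy * dt;
        hot[i].z += hot[i].vz * dt;
    }
}
```

The split is the point: the bandwidth bill for the movement pass is sizeof(EntityHot) per entity, not the full entity.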
  8. 4 points
    Improved lens flares, Glow, and tweaked High Dynamic Range lighting procedure.
  9. 4 points
    I have been a moderator here for about a decade. GDNet is a great community, and being part of the moderation team has been a wonderful experience. But today is, more or less, my last day as a part of that team. As it does when one gets older, life has increasingly intruded on the time I'd normally spend here. Fortunately all those intrusions have been (and continue to be) good things, but just the same I don't feel like I have the time to really do the job justice, and so I am stepping down. One of the remaining moderators will take over the forum in my place, although I don't know who that will be yet. Although it's very likely I'll be much less active for the next few months, I am probably not going away forever, and can be reached via private message on the site if needed. Thanks for everything, it's been great!
  10. 3 points
This week we analyse our level design practices in a very practical way. We also make sure the flow is consistent and intuitive. The challenge here is to create organic levels (non-grid-based levels) and make sure that the distances are correct, etc. On the picture below, you can see how we balance the safe zone and the threat zone, how we place the rewards, how we force you to learn the basics before progressing further, and so on. This gif gives you an idea of the flow and how we make it as smooth as possible for the player. Grayboxing the levels requires removing all deco, foliage, etc. Using a simple color code is very useful: the red is the threat; the blue and yellow are used for navigation. In this step, we are blocking out all the collisions. That way, when we assemble the art and details all together, the collisions will remain unchanged.
  11. 3 points
In a (theoretically) ideal game, you want both comeback mechanics in the early/mid game, and an unstable equilibrium in the end game. Why? Because without the comeback mechanics you'll have frustrated players who made one or two mistakes in the early game and have no chance of catching back up. And without the unstable equilibrium, a player who is clearly losing by a small margin may prolong the game almost indefinitely. Sometimes you apply these via direct rubber-banding (i.e. a MOBA that gives bonus XP when the losing team manages to achieve a kill). Or by encouraging high risk/reward tactics for players who have an advantage. Or you build it into the tech tree, by making early units have distinct tactical tradeoffs, and late units somewhat overpowered...
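As a hedged illustration of the direct rubber-banding idea -- the function name and the 10% tuning constant are invented, not taken from any real MOBA -- the losing team's kills could grant bonus XP proportional to the XP deficit:

```cpp
// Illustrative comeback formula: base XP, plus 10% of the XP gap
// when the killing team is behind. Purely a sketch of the mechanic,
// not a real game's tuning.
int kill_xp(int base_xp, int team_xp, int enemy_xp)
{
    int deficit = enemy_xp - team_xp;
    if (deficit <= 0)
        return base_xp;           // the winning team gets no catch-up bonus
    return base_xp + deficit / 10;
}
```

The asymmetry is what matters: the bonus shrinks as the game evens out, so it helps comebacks without flattening the end game.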
  12. 3 points
I wrote my first program (not including the false start last time I attempted programming) in Python... and it works! After a few bug fixes it's actually quite small, and some would think insignificant. But I'm getting used to the syntax and form of the language.

print("hello")
print("\nwhat are your 2 favourite foods of all time?")
food1 = input("\n\t1.")
food2 = input("\n\t2.")
print("I have made ", food2 + food1, " for you!")
input("press a key to exit and enjoy your meal :)")
  13. 3 points
    There's your problem. Don't drink coffee in the late afternoons (say, 3pm onward). Drink it judiciously after about 1pm. Decaf is your friend if you need to wean off the afternoon coffee.
  14. 3 points
    A degree on its own is rarely going to get you into the games industry as a programmer. It's widely expected that you will have some sort of portfolio or sample work to be able to show as well. You do not need to assemble a team, and you certainly do not have to spend a decade on it. Just make some simple games to prove that you (a) understand the basics of game development, and (b) that you can actually program. Nobody cares about your ideas, just that you know enough about coding games to be an asset on the team. So, pick some simple things, like 80s arcade games or the kind of thing people make for modern game-jams, and get coding. PS. Don't say "the programming skills for game design" because game design is not programming. Programming is part of game development but design is a separate discipline, like art, audio, etc.
  15. 3 points
One of the scariest things I can think of from a video game wasn't a horror game at all, but it is something you should be able to apply to a new design. The game System Shock 2 takes place on a space ship. The crew are either dead or have been changed into monsters, and the ship is filled with rogue robots, turrets, and other creatures. Weapons, ammunition, medical supplies, and... well, everything is in limited supply and can be hard to find. There's these monkeys... they're small and fast, so they can be hard to hit. They jump and run, and they're small, so they sometimes come from unexpected directions. And they have a psychic attack where they throw a glowing ball of energy at you, which can be quite damaging. The horrifying thing isn't encountering them though... it's hearing the sounds of a monkey, somewhere nearby through the game's fantastic surround sound. You don't always know if there is one, or if there are several. You don't know if they can actually get to you, or if they're somewhere you can't immediately access. You're low on health, and low on supplies, and there's this dangerous unpredictable thing somewhere nearby. Definitely the most nerve wracking thing I remember from a game, if you can replicate that feeling and build on it you can create a great horror experience.
  16. 3 points
    Since we're not in For Beginners, that's actually a tricky one. If you need any type of dynamic dispatch, virtual is the right approach. Compiler writers have done a great job of optimizing, so any type of rolling your own dynamic dispatch is likely to be slower, or at best on par with what the compiler provides. It is best to avoid dynamic dispatch if you can (the cost is currently around 5-10 nanoseconds per call on PC) but if you need it then you need it, and virtual functions are the best way to do it. In general you'll get far better performance gains by working with collective objects rather than individual items, which is something DOD preaches. That is, an object that contains a pool of items that is operated collectively, in a cache friendly manner. An individual function call that isn't inlined and has a few parameters (with lots of caveats and fine print about stack preservation and register assignment and parameter passing) can have an overhead of several nanoseconds. So instead of perhaps running 500 individual function calls to process 500 blobs of data each with an overhead of 2ns (meaning 500*2ns = 1 microsecond in overhead), you have an object-oriented thing with a single function that processes all 500 items sequentially with a single 2ns overhead. Also, be sure you're focusing on performance where it really matters. Some programmers spend a lot of time on this type of optimization in code where performance doesn't matter at all, there might be no difference if the processing took 10 nanoseconds or 100 microseconds. Knowing when it is important and when it isn't tends to be an advanced skill.
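A minimal sketch of that "collective object" shape -- the types here are invented for illustration, not from any particular engine -- where one virtual call is amortized over a whole pool instead of being paid per item:

```cpp
#include <vector>

struct Blob { float value; };

// One virtual call (a few ns of dispatch overhead) covers the whole
// pool; inside, the loop is direct, inlinable, and walks memory
// sequentially, which is the cache-friendly part DOD cares about.
struct BlobPool {
    std::vector<Blob> items;

    virtual void update_all(float dt)
    {
        for (Blob& b : items)
            b.value += dt;
    }
    virtual ~BlobPool() {}
};
```

Compare with calling a virtual `update(dt)` on 500 separate blobs: same work, but 500 dispatches instead of one.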
  17. 3 points
A separable blur isn't a "typical" multi-pass technique where you just draw multiple times with a particular blend state. Instead, the second pass needs to read the results of the first pass as a texture. This requires changing render targets and shader resource view bindings between your draw calls. The basic flow goes something like this (in pseudo-code):

// Draw the scene to the "main pass" render target
SetRenderTarget(mainPassTarget.RTV);
DrawScene();

// Draw the vertical pass using the main pass target as its input
SetRenderTarget(blurTargetV.RTV);
SetShaderResourceView(mainPassTarget.SRV);
SetPixelShader(blurVerticalPS);
DrawQuad();

// Draw the horizontal pass using the vertical blur target as its input
SetRenderTarget(finalBlurTarget.RTV);
SetShaderResourceView(blurTargetV.SRV);
SetPixelShader(blurHorizontalPS);
DrawQuad();
  18. 2 points
So I've had the chance to try out Unity Hub for a while, and had ups where I completely switched to it, and then downs where I finally got rid of it. When you start Unity, you get the project selection screen (unless you chose to have it auto start on your previous project). Hub is a separate application to replace that screen. It adds new features, like:

- Multiple versions of Unity
- Assigning projects to start with specific install versions
- Right-clicking on an install version and having it add a new build target (without all the UIs to monitor)

It worked great initially. So much so that I uninstalled Unity from the machine (because of bug #1: it doesn't recognize existing Unity installations. It seems like they *have* to be under a specific path/format. It says you can select existing ones, but it never recognized the version I have) and let it install Unity for me. Next I also installed the Beta, because it was easy. It seems far faster to install through Hub than via download/direct install. I loved this feature. But then some problems started in...

Some of the features, like the Android install, I could not get to work afterwards. It took a lot of extra steps to resolve, steps I don't recall having to go through on my normal version. For instance, it could not pick up the JDK install path on its own.

It can't install older versions, like 5.6, which seems like a huge reason for supporting this. There were huge transformational changes between 5.6 and 2017. Some projects just needed 5.6. It seemed like this was the key reason to have Hub, so I could go back easily.

I already had Visual Studio Enterprise installed on my system, and it couldn't register it. Well, not during the install anyway. I had to go in and find it by path, but then it didn't have any of the Unity properties installed for Visual Studio.

Ultimately, I ended up uninstalling it, and going back to the base installation of 2017.Latest and VS Enterprise Latest to get everything happy again. 
I intend to try Unity Hub again, but will wait until it's out of beta. Check it out / download it here: https://blogs.unity3d.com/2018/01/24/streamline-your-workflow-introducing-unity-hub-beta/ If your needs don't step on those issues, like Android or needing to jump back to 5.6, I think it's a great product.
  19. 2 points
You can do some basic things with Custom Build Steps, but they're very limited. You'll probably end up cramming a lot of stuff into the command line through that awkward VS properties dialog, and its dependencies really only work for a "one input file, one output file" scenario. So as soon as you want to have multiple entry points in a file, or compile multiple permutations of a shader, things break down. You can do more complex things by working directly with MSBuild, but I'm not really sure that's worth it either. So I'm with galop1n: to do even moderately advanced stuff with shaders you're going to need your own pipeline for compiling them. For my home stuff I have a runtime compilation system where I just call a function to compile a shader, and the results are cached based on a hash of the file contents. This makes it easy to do hot-swapping, since I can just watch the files and re-initiate the compilation when a file is changed. But you could also make your own little shader compilation tool that runs outside of your app if you want. It shouldn't be too hard to whip something up in python/C#/C++/whatever language of your choosing. You could even invoke your compilation script from a VS build, and that way you could still make sure that your shaders are compiled when you run from Visual Studio.
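Here is a tiny sketch of that cache-by-content-hash idea. All names are my own, and the "compile" is a stand-in string; a real version would call an actual compiler API (e.g. D3DCompile) and hash the file bytes:

```cpp
#include <string>
#include <unordered_map>
#include <functional>

struct ShaderBlob { std::string bytecode; };

static std::unordered_map<size_t, ShaderBlob> g_cache;
static int g_compile_count = 0;   // exposed here only to show the caching

// Returns a cached blob if this exact source text was seen before;
// otherwise "compiles" it and caches the result.
ShaderBlob& get_shader(const std::string& source)
{
    size_t key = std::hash<std::string>{}(source);
    auto it = g_cache.find(key);
    if (it != g_cache.end())
        return it->second;               // unchanged source: reuse

    ++g_compile_count;                   // changed source: recompile
    ShaderBlob blob{ "compiled:" + source };
    return g_cache.emplace(key, blob).first->second;
}
```

Hot-swapping then falls out for free: a file watcher just calls get_shader again, and only genuinely changed files pay for a compile.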
  20. 2 points
The refund thing aside, the developers are well within their rights to request code that demonstrates the problem. Look at it from their perspective. You're reporting a bug, possibly one they haven't seen. The developers want some way to reproduce it and verify:

- that there is a problem at all, and you aren't just misusing their software
- what the problem is, if it exists
- that you aren't trying to scam them for a free copy of their middleware (unlikely, but I could see someone trying it)

If you didn't want to give them your source code, you could have offered to put together a minimal project, that is not your actual game, that demonstrates the bug. They just want to see a project that causes the bug to happen so they can fix the bug and get you working again. From actually reading the forum thread, that has nothing to do with refusing a refund. It's baffling to me that you would construe any of that as "give me your source code or no refund." I get the impression that English isn't your first language, so maybe this is down to a language difficulty, but making multiple posts to rage at them based on a misunderstanding is not helping you, them, or anyone else for that matter. It doesn't in the least bit surprise me that they would lock your threads and ignore you.
  21. 2 points
  22. 2 points
As Kylotan suggests, this indicates that maybe a resource manager that tracks the device will be more useful than one that doesn't. You don't need to have a CreateWhiteTexture on the resource manager itself - you can write a CreateWhiteTexture function that just takes a resource manager and creation data, and then have the resource manager use its internal D3D device to create the texture. Or, you could put the stuff needed for texture creation into a texture factory (which would track the device), put the texture creation method on that, then pass the created texture to the resource manager once it has been created. For example, something like:

resourceManager.RegisterTexture(CreateWhiteTexture(textureFactory), guid, name);

I can think of one reason, which OP doesn't cite - the service locator pattern is just a "better" way to have singletons. Being able to switch out objects from the service locator at runtime is nifty, but it still has the problem that globals and singletons have, where it hides dependencies within the methods that use it rather than making them explicit. The service locator still looks like global state to client code, and most implementations of it still only allow one of each type of object to be located from it - both of which are problems. I see it as a crutch for dealing with globals-heavy legacy code; in new code, I'd call it an anti-pattern.
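To make the factory variant concrete, here's a minimal sketch -- every type and name below is hypothetical, standing in for the real D3D objects -- where the device dependency is explicit in the factory's constructor instead of hidden behind a locator:

```cpp
// Hypothetical types: the point is the shape of the dependencies,
// not a real graphics API.
struct Device  { int id; };
struct Texture { int handle; };

// The factory owns the device dependency explicitly...
struct TextureFactory {
    Device* device;
    explicit TextureFactory(Device* d) : device(d) {}

    // ...so texture creation can use it without any global lookup.
    Texture create_white() { return Texture{ device->id * 1000 + 1 }; }
};

// ...and the resource manager just stores what it is handed.
struct ResourceManager {
    Texture white{};
    void register_texture(Texture t) { white = t; }
};
```

Client code then reads like the line in the post: the manager receives an already-created texture, and everything it depends on is visible at the call site.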
  23. 2 points
The easiest way to get started is with a single "main thread" and an unstructured job system. The API for that might look as simple as:

void PushJob( std::function<void()> );

From your main thread, whenever you have a data-parallel workload, you can then farm it off to some worker threads:

//original
vector<Object*> visible;
for( int i=0; i!=objects.size(); ++i )
    if( IsVisible(camera, objects[i]) )
        visible.push_back(objects[i]);

//jobified for a split into 4 jobs:
#define NUM_THREADS 4
vector<Object*> visiblePerThread[NUM_THREADS];
Atomic32 jobsComplete[NUM_THREADS] = {0};
int numObjects = (int)objects.size();
for( int t=0; t!=NUM_THREADS; ++t )
{
    vector<Object*>& visible = visiblePerThread[t];
    Atomic32& jobComplete = jobsComplete[t];
    int minimumWorkPerJob = 64; //don't bother splitting up workloads smaller than some amount
    int workPerThread = max( minimumWorkPerJob, numObjects/NUM_THREADS );
    //calculate a range of the data set for each thread to consume
    int start = workPerThread * t;
    int end = workPerThread * (t+1);
    start = min(start,numObjects);
    end = min(end,numObjects);
    if( start == end ) //if nothing for this thread to do, just mark as complete instead of launching
        jobComplete = 1;
    else //push this functor into the job queue
        PushJob([&objects, &camera, &visible, &jobComplete, start, end]()
        {
            for( int i=start; i!=end; ++i )
                if( IsVisible(camera, objects[i]) )
                    visible.push_back(objects[i]);
            jobComplete = 1;
        });
}

//at some point before "visible" is to be used:
//use one thread to join all the results into a single list
//(make sure job 0 has finished too, since its result set is the destination)
BusyWaitUntil( [&](){ return jobsComplete[0] == 1; } );
for( int t=1; t!=NUM_THREADS; ++t )
{
    //block until the job is complete
    BusyWaitUntil( [&](){ return jobsComplete[t] == 1; } );
    //append result set [t] onto result set [0]
    visiblePerThread[0].insert(visiblePerThread[0].end(), visiblePerThread[t].begin(), visiblePerThread[t].end());
}
vector<Object*>& visible = visiblePerThread[0];

This simple job API is easy to use from anywhere, but adds some extra strain to its users. e.g. 
above, the user of the Job API needs to (re)invent their own way of figuring out that a job has finished each time. In this example, the user makes an atomic integer for each job, which gets set to 1 when the job is finished. The main thread can then busy wait (very bad!) until these integers change from 0 to 1. In a fancier job system, PushJob would return some kind of handle, which the main thread could pass into a "WaitUntilJobIsComplete" type function.

This is the basics of how game engines spread their workloads across any number of threads these days. Once you're comfortable with these basic job systems, the very fancy ones use pre-declared job structures and pre-scheduled graphs, rather than the on-demand, ad-hoc "push anything, anytime" structure above. The other paradigm is having multiple "main" threads -- e.g. a game update thread and a rendering thread. This is basically just message passing with one very big message -- the game state required by the renderer.

Going off this bare-bones/basic job system, to answer your questions:

1. Probably on the main thread, maybe on a job if it's parallelisable work.
2. Above I used simple atomic flags. If you're using a nice threading API, then semaphores might be a safer choice.
3. Yes.
4. Objects are not persistent in the job queue -- the main thread pushes transient algorithms into the job queue, so there's nothing to remove. The logic is the same as a single-threaded game.

With a fancier job system, the answers could/would be different.

GL is the worst API for this. In D3D11/D3D12/Vulkan, resource creation is free-threaded (creation of textures/buffers/shaders/states can be done from any thread), and in D3D12/Vulkan you can also use many threads to record actual state-setting/drawing commands (D3D11 can too, but you get no performance boost from it, generally). 
It's probably worthwhile to do all your GL calls on a single "main rendering thread", rather than trying to make your own multi-threaded command-queue wrapper over GL. Nonetheless, the entire non-GL / non-D3D part of your renderer can still be multi-threaded. That involves preparing drawable objects, traversing the scene, culling/visibility, sorting objects, etc.... In a D3D12/Vulkan version, you can also multi-thread the actual submission of drawable objects to the API.
  24. 2 points
Also forgot about these videos: And: In case you had any free time left ;-)
  25. 2 points
    You are still going to have to explain to us why you appear to be rewriting your vertex buffers every frame. That's going to be using a significant chunk of bandwidth, and causing some fun pipeline stalls. Can you show us a screenshot of a rendered frame? It's hard to talk about this without seeing how much overdraw you have, how much depth testing is going on, etc.
  26. 2 points
First off, welcome to GameDev. A lot of your questions have super open-ended answers and they all depend on how your engine works and how you designed it to work. A good read about the flow of how a modern 3D engine works is found in these blog posts about the Autodesk Stingray Engine (now discontinued, I believe): http://bitsquid.blogspot.com/2017/02/stingray-renderer-walkthrough.html It's a great blog in general; reading over older posts wouldn't hurt either. Another good read, though it's mostly for Direct3D 11/12 and Vulkan (but the concepts are sound and would work for OpenGL) if I recall correctly is: As this sounds like it is your first go at a multithreaded engine, you're quite likely going to make design mistakes and likely refactor or restart several times, which is fine. It's part of learning, and the best way to learn is from one's own mistakes.
  27. 2 points
On a more serious note, here are my 2 cents to the whole "subjective experience" discussion: I think it's easy to look at a system that behaves slightly differently than ourselves and go "pff, this thing is clearly not conscious". We've seen this being said about animals and now we're seeing it being said about computers and I don't think it's as clear-cut as you might initially think. Self-consciousness is the ability to observe and react to one's own actions. Who is to say that a PID control loop is not "self conscious"? You might laugh at this idea, but if you think about it, what arguments can you really make against it? Are living systems not just a highly complex composition of millions of self-regulatory systems, where the thing that identifies as "me" is at the very top of the hundreds of layers? Who is to say that each of those systems is not self conscious in its own way and has its own subjective experience? When a thought enters your mind, you proceed to think about that thought, and think about thinking about that thought, and so forth. This process of self-thinking is some kind of feedback loop, which relies on the results of many "lower level" feedback loops, right down to the molecular level (or perhaps even further, who knows). This is also the reason why you see fractals when you do psychedelics, because systems with feedback loops are recursive, but that's beside the point. And for that matter, who is to say we are at the "top" of this chain? Humans form structures such as societies or companies, which also have an ability to self observe and react accordingly. Who is to say companies aren't conscious? Or the environment isn't conscious? Or the billions of devices connected to the internet haven't formed a self-conscious "brain" of some kind? Or the galaxy isn't one gigantic conscious super-organism? It might be very different from our own consciousness, but again, that doesn't necessarily make it unconscious. 
Randomness is another point of discussion. Must a self-conscious system necessarily have an element of randomness? There are numerous psychological experiments that predict how a human will respond to specific situations with an astoundingly high degree of accuracy (see: Monkey ladder experiment, see: Stanford Prison experiment, see: Brain scans that predict your behaviour better than you can) It almost seems like we are under an illusion of being in control and perhaps the actions we take are for the most part predetermined. Whether this is true or not is of course unknown, but the real question is: Does it matter? If so, why? Just because it appears that human consciousness is not computable doesn't mean it's random. It is very obviously highly organized, otherwise you'd be unable to respond to this thread, or even have an experience for that matter. So: If I were to add an RNG to my Turing machine to make it less predictable and thus "more conscious", isn't that taking a step back from the actual problem?
  28. 2 points
    Okay, the answer for this case is index = (int)(fabsf(angle / (PI / 4)) + .5f);
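In code form, that formula rounds an angle to the nearest 45-degree step and returns the step's index. Here is a minimal sketch of that idea (my own illustration; `angle_to_index` is a hypothetical name, and it assumes the angle is in radians):

```cpp
#include <cmath>

const float PI = 3.14159265f;

// Round an angle (radians) to the nearest multiple of 45 degrees and return
// that multiple's index: 0 for 0, 1 for PI/4, 2 for PI/2, and so on.
// fabsf folds negative angles onto the positive side, matching the formula
// above; wrapping the result into a fixed range is left to the caller.
int angle_to_index(float angle)
{
    return (int)(fabsf(angle / (PI / 4.0f)) + 0.5f);
}
```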
  29. 2 points
    Lately I've been working on updating the HUD. The game is quite colorful (once you press play), so I thought a cardboard-only HUD would make the distinction. It used to look like this: Now it looks like this: Top left is supposed to be the briefing for the current level. Working on localizing those texts. More about Posable Heroes: Steam Store Page
  30. 2 points
    The New Frontiers Open Test has just started! And we invite everyone to join this open test - hurry up to be the first to check out all the great things the Snowforged Team has prepared for you: a new NPC faction, Rings system, PvP mode, ships, weapons, mining system, tasks, pirate fleets, detachments, recall system, ship model progression and more - read the full list of changes here. Open up the door to endless adventures and explore all seven Rings of the Starfall Tactics galaxy! No keys required - the test is open to everyone!

To start playing Starfall Tactics, follow these instructions:
1. Register on starfalltactics.com
2. Download the Starfall Tactics launcher here.
3. Install the launcher - it will download the game client. Remember that the game is not available for 32-bit systems during tests.
4. Push the "Play" button and start playing Starfall Tactics!

Join the discussion with other Commanders at our Forum and on the Discord Channel. New to Starfall Tactics? Check out this short guide! Got troubles or want to report something? Use the #help & #bugreport Discord channels, create a topic on the forum or send a letter to support@snowforged.com!

So, what's new? Well, there are a lot of things to check:
- New PvP mode - Domination
- Custom games
- New recipes
- New tasks
- New ships - battlecruisers
- Weapons
- Party system for the MMO mode
- Recall system which allows you to warp in additional ships on the Galaxy map
- The first neutral faction - Mineworkers
- Ship model progression
- Rebalance for special modules, shields, armor, weapons, layouts and hull characteristics
- Reworked pirate fleets
- Rings system
- Equipment quality system
- Reworked detachments
- Cool quest rewards
- Special fleet abilities for the global map (to be announced in the next WIP!)
- New visual ship customization abilities
- And various improvements, designed to make your playtime even more enjoyable!

And that's not the full list of all the great things we've prepared for you - full patch notes will be available later. 
See you in Starfall Tactics!
  31. 2 points
    We have the reputation system (up-/down-votes) for a couple of reasons.

1) It helps to recognise and encourage positive and helpful interactions with the community. Helpful and friendly posts are often up-voted, and over time the reputation score of more helpful community members tends to rise. Some people are motivated by numbers like that and will strive to increase their score through positive interactions or simply being more active, which is a win for the community. Because there is no mechanical impact (that is, we don't hide posts, ban or suspend members, etc. based on score), those who aren't interested are free to ignore it.

2) It provides a form of self-moderation for the community by allowing people to express their disagreement. Basically the opposite of my first point: having a post or two voted down or a decreased reputation score can discourage anti-social behaviour, unfriendly or unhelpful responses, etc., again helping to improve the community. After first implementing a reputation system, and after some of the major overhauls, we saw a notable reduction in trolling and unhelpful posts.

3) It can help to recognise better responses. Whilst a higher-voted post or higher-reputation member isn't necessarily more correct or more experienced than another, in general you can infer that at least a lot of people agree with highly rated things, and that there may be something questionable about lower rated things.

The system isn't perfect, and I don't think any similar system can be, but in general I think it does a pretty good job of its stated goals, and is implemented in such a way that people who aren't interested can easily ignore it. Abuse does happen, but is pretty rare, and is addressed when we see or are made aware of it. If you think there has been a case of abuse, please report it for investigation. 
We do also take feedback into account for future tweaks or improvements, so if anyone has specific complaints or suggestions please feel free to speak up; we may not action your feedback immediately, but we definitely do read it and take everything into account when making changes. Hope that helps!
  32. 2 points
    The purpose is to acknowledge helpful and useful posts. OPs can acknowledge helpfulness by posting (and voting), but other experts who don't post can also validate posts (or the opposite) with upvotes (or downvotes). I think forums would be filled with blank posts without votes. Votes don't harm; they are just a rough indication of the value of a post, as the implicit assumption is that statistically the majority of votes are sincere and reflect the intellectual and logical correctness (or incorrectness) of a post. As for abuse (which in a strict sense is unavoidable), it is statistically very low. As a side note though, despite the weaknesses of the old voting system, it's far better IMO than this new voting system. To me the new voting system is extremely uninspiring. Not that I'm expecting anything to be done about that at this stage.
  33. 2 points
    You need to have one UV coordinate per vertex, not per index.
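To illustrate the point with a hypothetical minimal layout (my own sketch, not from the thread): each vertex carries its own UV next to its position, and the index buffer simply references whole vertices.

```cpp
#include <vector>

// Hypothetical vertex layout: the UV lives alongside the position, one pair
// per vertex. Indices then reference whole vertices; there is no separate
// per-index UV array.
struct Vertex
{
    float px, py, pz; // position
    float u, v;       // texture coordinate, one pair per vertex
};

// A quad as two triangles sharing vertices 0 and 2: 4 vertices, 6 indices.
std::vector<Vertex> vertices = {
    { 0, 0, 0,  0, 0 },
    { 1, 0, 0,  1, 0 },
    { 1, 1, 0,  1, 1 },
    { 0, 1, 0,  0, 1 },
};
std::vector<int> indices = { 0, 1, 2,  0, 2, 3 };
```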
  34. 2 points
    What Kylotan means is that when the flood fill is performing its outward traversal, if it's at node A and that node does not know about "from" vertices at all, it cannot traverse "uphill" to those nodes. You would have to search all edges for ones where "to" is node A, which would defeat any of the performance gains you might get otherwise. This applies for digraphs where the only info you can access at each node are the "to" vertices. I don't particularly like representing graphs like that, but they can save space since you don't have to store edges in two places. The graph representations I know of are:

- Adjacency matrix: Edge traversal can be forward or backwards. Incredibly wasteful for sparse graphs. Uses an absurd amount of memory unless you compact it somehow (which usually ends up more complicated and less performant than one of the other options below).
- Edge list only: Usually uses the least amount of RAM, but most time is spent searching the list for edges where "from" or "to" are a specific vertex. Not really acceptable in terms of performance unless the entire edge list is so small the search loop is insignificant.
- Vertices-which-store-edges: Good for most uses, but if you need to store both "to" and "from" edge lists you double your memory use. Can be acceptable since it still scales better than a sparse adjacency matrix.
- Implicit data structure (i.e. the typical 2D grid where neighboring cells are connected): Great for traversal but only supports graphs that can be mapped onto that data structure.
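A minimal sketch of the vertices-which-store-edges option with both directions stored (my own illustration; names are made up): keeping an incoming-edge list per vertex doubles the edge memory, but makes the "uphill" traversal described above trivial.

```cpp
#include <vector>

// Digraph where each vertex stores both outgoing ("to") and incoming
// ("from") edge lists. Storing both directions doubles the edge memory,
// but lets a flood fill traverse "uphill" without scanning every edge.
struct Graph
{
    std::vector<std::vector<int>> out_edges; // out_edges[a] = vertices a points to
    std::vector<std::vector<int>> in_edges;  // in_edges[b] = vertices pointing at b

    explicit Graph(int vertex_count)
        : out_edges(vertex_count), in_edges(vertex_count) {}

    void add_edge(int from, int to)
    {
        out_edges[from].push_back(to); // forward traversal
        in_edges[to].push_back(from);  // backward ("uphill") traversal
    }
};
```

With this layout, asking "who points at node A" is a direct lookup in `in_edges[A]` instead of a scan over all edges.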
  35. 2 points
    I took a close look at the webpage. They compare their realtime tracer with an offline renderer and claim a huge speedup. (The offline renderer is made to handle extremely complex scenes AND global illumination; they show neither of those.) They claim a new solution to the problem of building acceleration structures. But I don't need to rebuild the acceleration structure for that dino - I only need to refit it, which is a fast operation. (It's a common misbelief that building acceleration structures is the main problem of raytracing. The real problem is random memory access and caching.)
  36. 2 points
    Note that the dinosaur shows just simple lighting, no global illumination. Thus 60 fps is not impressive but expected. AFAIK, Brigade is planned to be released for Unity this year (don't know if for free, but I guess so). OTOY works on multiple path tracing renderers, for both offline and realtime; Brigade is targeting realtime. With recent advances in denoising, path tracing is very close to realtime on high-end GPUs, see e.g. this work with GI: Many people work on this actually; many will just implement it on their own when the time is ready. Hard to tell if AAA studios will use middleware or do it themselves (I guess the latter).
  37. 2 points
    This is pretty awesome! Love the memory breakdown for delta compression. Looks like you've found a good real-world example of the perils of premature optimisation
  38. 2 points
    If I had a 7:1 advantage, my thought would be that I did not attack early enough (forgot, or intentionally toyed around). Prolonging the already-derailed game even further in that case IMHO won't be fun, same as depriving the player of being able to overrun the opponent.
  39. 2 points
    "during studies finished some simple games." That's quite an important selling point; you've got a dev background and can implement games. "During my computer science studies I learned basics needed to work as software developer. After my graduation" And some education, but not too much. "My C++ is average I think." And a sense of proportion[1]. If I were in the business of hiring game devs, you just made the "phone interview" stage on that basis.[2] Go meet indies like pcmaster says -- if you can find some ex-studio people, see if you can talk them into some mock interviews to practice. [1] Too many people are already code gods who don't need to learn anything. [2] Actually you'd get the same decision here, because I am bored of seeing CVs from people with two doctorates and a fifteen-year career in academia faffing about with exotic type theories but no actual industry experience of shovelling code into a project...
  40. 2 points
    This looks conceptually correct. Did you add gravity into the linear velocities before the solve? E.g. v += g * dt. Another thing I'm not sure about is your effective mass (k). I use: me = invM1 + invM2 + dot( r1 x n, invI1 * (r1 x n) ) + dot( r2 x n, invI2 * (r2 x n) ). It is totally possible (and also likely) that our formulas are equivalent! I just cannot tell from looking and would need to write it down.
  41. 2 points
    Thinking about algorithms or implementation details is the programmer version of a common cause of insomnia in which your mind is simply too active to get to sleep or to maintain sleep. I have horrible insomnia (at one point last week it was approaching 100 hours with only 10-16 hours of sleep), and so have strategies for fighting some of these problems. I tend not to program just before bed and instead engage in a long multi-hour wind-down involving video games (Super Smash for Wii U), chess, chat, and piano. It's good to have a variety of things you can do to pass a few hours before bed after programming. I am about to see a sleep specialist and will have better advice to share afterwards, but as grumpyOldDude points out, it does tend to worsen with age. In my case sound sleep is almost impossible to obtain, regardless of what I do in the day, and it is impossible to say whether that was always going to be the case, as I always naturally tended towards the nocturnal life, or if it got worse because I was unable to fix some tendency such as an over-active mind earlier. L. Spiro
  42. 2 points
    There's a famous French quote which goes something like this in translation: It's fairly applicable to game design. If a game isn't fun in its most basic form, adding more elements probably won't make it fun. And too many elements can overwhelm an otherwise fun game. That's not to say you shouldn't add unique elements - just do it with care and attention to detail. A lot of games will choose to introduce them very slowly (say, one new game mechanic per chapter) to avoid overwhelming a new player.
  43. 2 points
    Hi there! The best thing you can do is to look for an open position near your area (given that there are some game companies around), get yourself invited for an interview, ask them all kinds of questions (about working hours, crunches, etc.) and decide if you really want to take it. Also, how's your C++? Of course don't go in blindly; prepare seriously, as for any job interview. I just wanted to say that the final decision is up to both you and the employer, and you'll learn how things go at the interview. Also, go meet indie developers in your area, if possible; some of them will know (or even be) full-time employees at studios, and they'll tell you details of the daily developer's life. I work for a bigger studio in Central Europe, my salary is competitive (but you bet it's lower than classmates doing software for telecoms or banks), I work 8 hours a day, colleagues are great, working premises are great, some perks are great, there are tons of ice cream and I do (kinda) what I like. So far I haven't decided to leave GameDev (5 years). Also, do expect crunches around important milestones and releases; however, our company doesn't force anyone too much - when crunch happens, it's more a synergic effect that everyone wants to pull the same rope for the same goal and finally finish it. Crunch = usually free pizza and paid overtime. If they push too much, people might run away. I can't talk about other studios (not even in my city); I'm not exactly sure how they manage crunching. Before releasing our game in the autumn of 2016, I spent several Saturdays and very few Sundays at work during the summer when we were finishing it, and I stayed later (~10 hours instead of 8) during working days when totally needed. It was exhausting but survivable, given it happens only once in a few years. Plus, when anyone simply couldn't work extra (family issues, health issues), they simply didn't. Our company seems to obey all regulations. 
Other studios' experience may vary. Does that help?
  44. 2 points
    The building-a-community part is all about interaction. A good example is here on GameDev.net: when one of the community members posts something, people are more willing to take a look if it's someone who gave advice or just commented on one of their topics. This is a well-known human tendency. Think of a friend dying vs a stranger dying; we value things relative to ourselves.
  45. 2 points
    I think it was Braid's creator who once said 'there is an extent to which it would be true to say that the game designed itself'. This is a common trend among masterpiece indie games from what I can tell, and it makes sense to me. You have an original vision, and if your 'game pillars' are strong, then the rest of the game follows suit (decisions get made based on the pillars, and were not necessarily pre-designed). The end product differs greatly from the original vision, but the pillars remain largely untouched.
  46. 2 points
    The truth is, with marketing, you could do everything right and yet fall short of making waves. On the one hand, there are a lot of best practices (thanks jbadams), but there is also such a thing as creative grassroots marketing and making sure everything you do underlines a specific strategy. While making a game is about selling gameplay, marketing is about selling an idea about the game, and not the game itself. For a while, a lot of developers had success selling the story of the 'indie startup', but that's been overdone, so you need to find something unique about your situation that is noteworthy, and somehow catch the attention of the press. Once it does, it will funnel traffic to your other marketing tools (website, etc.) and that's when it will start paying off. As for finding that actual unique angle, I'd say a lot of people do this for a day job, and few actually succeed, so it is a right-place-right-time kind of thing, but not just gambling. Good luck!
  47. 2 points
    Hello. This is a great challenge! This time I didn't want to wait for the last days as last time and rush something out quickly, so I worked really hard for a few weeks, and this is what I came up with. Totally original implementation. Credits for the sprites and sounds go to the old Pacman; I only modified them slightly, as the entry was required to resemble the original Pacman, so I didn't want to get too creative. Pacmanjs is written in JavaScript since I know it well. I probably could have used python+pygame for a change. I used my old game FlyBirdFly as a layout and started smashing out code. AI was a bit problematic since I didn't want to slow down Pacmanjs, so I tried to find a fine line between performance and AI smartness. This is my entry: https://mystikkogames.itch.io/pacmanjs Project page: https://www.gamedev.net/projects/204-pacmanjs/ The game name is Pacmanjs 1.0rc. It has a single player mode, player vs AI mode, player vs player mode and team mode. You can also play on mobile - just press around Pacman to move it around. Pacmanjs should scale well on different resolutions. The esc button returns to the home screen as usual. It only has 2 levels, which keep rotating. After passing a level the number of ghosts increases by 1. It has highscore lists for each category (only #1 spots are noted though). I hope Pacmanjs works well! The last Firefox update broke my last entry (MissileComm4nd) - there was a little bug - so I updated that one too and it works again!
  48. 2 points
    I had to implement this a year ago. If you want to do it yourself, the basic idea of how I did it is pretty simple: go through the pixels of your image, and for each pixel that has zero alpha, set the colour to that of the nearest non-zero-alpha pixel. You could use euclidean distance, manhattan distance, etc. Note that this is pretty slow; you will probably want to optimize it if you are using bigger textures. Here is a thread on optimization: https://www.gamedev.net/forums/topic/685357-uv-padding/
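As a rough illustration of the described approach (my own brute-force sketch over an RGBA8 buffer, not the poster's code; names and layout are assumptions):

```cpp
#include <vector>
#include <climits>
#include <cstdint>

// Brute-force edge padding as described above: every fully transparent pixel
// copies the RGB of the nearest opaque pixel (squared euclidean distance).
// O(width * height) work per transparent pixel, so slow for big textures,
// exactly as the post warns. Pixels are RGBA, 4 bytes each, row-major.
void pad_edges(std::vector<uint8_t>& rgba, int width, int height)
{
    std::vector<uint8_t> src = rgba; // read from a copy, write into rgba
    for (int y = 0; y < height; ++y)
    for (int x = 0; x < width;  ++x)
    {
        if (src[(y * width + x) * 4 + 3] != 0)
            continue; // pixel has alpha, leave it alone

        // Scan for the nearest pixel with non-zero alpha.
        int best = INT_MAX, bx = -1, by = -1;
        for (int sy = 0; sy < height; ++sy)
        for (int sx = 0; sx < width;  ++sx)
        {
            if (src[(sy * width + sx) * 4 + 3] == 0)
                continue;
            int d = (sx - x) * (sx - x) + (sy - y) * (sy - y);
            if (d < best) { best = d; bx = sx; by = sy; }
        }

        if (bx >= 0) // copy colour only; the alpha stays zero
            for (int c = 0; c < 3; ++c)
                rgba[(y * width + x) * 4 + c] = src[(by * width + bx) * 4 + c];
    }
}
```

Swapping the inner scan for a breadth-first flood from the opaque pixels is the usual way to make this fast, per the linked optimization thread.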
  49. 2 points
    Some engines like Unreal will do this for you; importing a texture with an alpha does it. Unity has a manual way: https://docs.unity3d.com/455/Documentation/Manual/HOWTO-alphamaps.html You can do it on your own using Gimp or any 2D tool that supports alpha; Gimp even has tools like displacement that make this easy, but blur also works. Any texture baking software has this option: XNormal (free), Substance, Quixel, Blender (free), 3ds Max, ZBrush, Maya, etc... A quick search for Substance alternatives gives me this: https://www.slant.co/options/12608/alternatives/~substance-painter-alternatives It's one of those things that has many names, but "alpha texture edge padding" or "texture edge padding" should work for search results. The problem is that each software likes to call this something else.
  50. 2 points
    On the 2nd of November 2017 we launched a Kickstarter campaign for our game Nimbatus - The Space Drone Constructor, which aimed to raise $20,000. By the campaign's end, 3000 backers had supported us with a total of $74,478. All the PR and marketing was handled by our indie developer team of four people with a very low marketing budget. Our team decided to go for a funding goal we were sure we could reach and to extend the game's content through stretch goals. The main goal of the campaign was to raise awareness for the game and raise funds for the alpha version.

Part 1 - Before Launch

Is what we believed when we launched our first Kickstarter campaign in 2016. For that first campaign, we had built up a very dedicated group of people before the Kickstarter's launch. Nimbatus also had a bit of a following before the campaign launched: ~300 likes on Facebook, ~1300 followers on Twitter, ~1000 newsletter subs and ~3500 followers on Steam. However, there had been little interaction between players and us prior to the campaign's launch. This made us unsure whether or not the Nimbatus Kickstarter would reach its funding goal. A few weeks prior to launch, we started to look for potential ways to promote Nimbatus during the Kickstarter. We found our answer in social news sites. Reddit, Imgur and 9gag all proved to be great places to talk about Nimbatus. More about this in Part 3 - During the campaign. As with our previous campaign, the reward structure and trailer were the most time-consuming aspects of the page setup. We realised early that Nimbatus looks A LOT better in motion and therefore decided that we should show all features in action with animated GIFs. Two examples: In order to support the campaign's storytelling, "we built a ship, now we need a crew!", we named all reward tiers after open positions on the ship. We were especially interested in how the "Navigator" tier would do. 
This $95 tier would give backers free digital copies of ALL games our company EVER creates. We decided against Early Bird and Kickstarter-exclusive rewards in order to avoid splitting backers into "winners and losers", based on the great advice from Stonemaier Games' book A Crowdfunder's Strategy Guide (EDS Publications Ltd., 2015). Their insights also convinced us to add a $1 reward tier, because it lets people join the update loop and build up trust in our efforts. Many of our $1 backers later increased their pledge to a higher tier. Two of our reward tiers featured games that are similar to Nimbatus. The keys for these games were provided by fellow developers. We think that this is really awesome and it helped the campaign a lot! A huge thanks to Avorion, Reassembly, Airships and Scrap Galaxy <3 Youtubers and streamers are important allies for game developers. They are in direct contact with potential buyers/backers and can significantly increase a campaign's reach. We made a list of content creators who'd potentially be interested in our game. They were selected mostly by browsing Youtube for "let's play" videos of games similar to Nimbatus. We sent out a total of 100 emails, each with a personalized intro sentence, no money involved. Additionally, we used Keymailer, a tool to contact Youtubers and streamers. At a cost of $150/month you can filter all available contacts by games they played and genres they enjoy. We personalized the message for each group, and messages automatically include an individual Steam key. With this tool, we contacted over 2000 Youtubers/streamers who are interested in similar games. How it turned out: about 10 of the 100 Youtubers we contacted manually ended up creating a video/stream during the Kickstarter, including some big ones with 1 million+ subscribers, and over 150 videos resulted from the Keymailer outreach. Absolutely worth the investment! Another very helpful tool for finding Youtubers/streamers is Twitter. 
Before, but also during the campaign, we sent out tweets stating that we were looking for Youtubers/streamers who wanted to feature Nimbatus. We also encouraged people to tag potentially interested content creators in the comments. This brought in a lot of interested people and resulted in a couple dozen videos. We also used Twitter to follow up when people were not responding via email, which proved to be very effective. In terms of campaign length we decided to go with a 34-day Kickstarter, the main reason being that we thought it would take quite a while until word of the campaign spread enough. In retrospect this was ok, but we think 30 days would have been enough too. We were very unsure whether or not to release a demo of Nimbatus, mainly because we were unsure if the game offered enough to convince players in this early state, and we feared that our alpha access tier would potentially lose value because everyone could play already. Thankfully we decided to offer a demo in the end. More on this topic in Part 3 - During the campaign. Since we are based in Switzerland, we were forced to use CHF as our campaign's currency. And while the currency is automatically re-calculated into $ for American backers, it was displayed in CHF for all other international backers. Even though CHF and $ are almost 1:1 in value, we believed this to be a hurdle. There is no way for us to tell how many backers were scared away because of this in the end.

Part 2: Kickstarter Launch

We launched our Kickstarter campaign on a Thursday evening (UTC+1), which is midday in the US. In order to celebrate the launch, we did a short livestream on Facebook. We had previously opened an event page and invited all our Facebook friends to it. Only a few people were watching and we were a bit stressed out. In order to help us spread the word we challenged our supporters with community goals. 
We promised that if all these goals were reached, each backer above $14 would receive an extra copy of Nimbatus. With most of the goals reached after the first week, we realized that we should have made the challenge a bit harder. The first few days went better than expected. We announced the Kickstarter on Imgur, Reddit, 9gag, Instagram, Facebook, Twitter, in some forums, via our newsletter and on our Steam page. If you plan to release your game on Steam later on, we'd highly recommend that you set up your Steam page before the Kickstarter launches. Some people might not be interested in backing the game but will go ahead and wishlist it instead.

Part 3: During The Campaign

We tried to keep the campaign's momentum going. This worked out mostly thanks to the demo we had released. In order to download the Nimbatus demo, people needed to head over to our website and enter their email address. Within a few minutes, they received an automated email including a download link for the demo. We used Mailchimp for this process. We also added a big pop-up in the demo to inform players about the Kickstarter. At first we were a bit reluctant to use this approach; it felt a bit sneaky. But after adding a line informing players they would be added to the newsletter, and adding a huge unsubscribe button in the demo download mail, we felt that we could still sleep at night. For our previous campaign we had also released a demo, but the approach was significantly different. For the Nimbatus Kickstarter, we used the demo as a marketing tool to inform people about the campaign. Our previous Kickstarter's demo was mainly an asset you could download if you were already checking out the campaign's page and wanted to try the game before backing. We continued to frequently post on Imgur, Twitter, 9gag and Facebook. Simultaneously, people streamed Nimbatus on Twitch and released videos on Youtube. This led to a lot of demo downloads and therefore growth of our newsletter. 
A few hundred subs came in every day, and only about 10% of the people unsubscribed from the newsletter after downloading the demo. Whenever we updated the demo or reached significant milestones in the campaign, such as being halfway to our goal, we sent out a newsletter. We also opened a Discord channel, which turned out to be a great way to stay in touch with our players. We were quite surprised to see a decent open and link click rate, especially compared to our "normal" newsletter, which includes mostly people we personally met at events. Our normal newsletter took over two years to build up and includes about 4000 subs. With the Nimbatus demo, we gathered 50'000 subs within just 4 weeks and without travelling to any conferences. (Please note that around 2500 people subscribed to the normal newsletter during the Kickstarter.) On the 7th day of the campaign we asked a friend if she would give us a shoutout on Reddit. She agreed and posted it in r/gaming. We will never forget what happened next. The post absolutely took off! In less than an hour, the post had reached the frontpage and continued to climb fast. It soon reached the top spot of all things on Reddit. Our team danced around in the office. Lots of people backed; a total of over $5000 came in from this post, and we reached our funding goal 30 minutes after hitting the front page. We couldn't believe our luck. Then, people started to accuse us of using bots to upvote the post. Our post was reported multiple times until the moderators took it down. We were shocked and contacted them. They explained that they would need to investigate the post for bot abuse. A few hours later, they put the post back up, stated that they had found nothing wrong with it, and apologized for the inconvenience. Since the post had not received any upvotes in the hours while it was taken down, it very quickly dropped off the front page and the money flow stopped. 
While this is a misunderstanding we can understand and accept, people's reactions hit us pretty hard. After the post was back up, many people on Reddit continued to accuse us and our friend. In the following days, our friend was constantly harassed when she posted on Reddit. Some people jumped over to our company's Twitter and Imgur accounts and kept on blaming us, asking if we were buying upvotes there too. It's really not cool to falsely accuse people. Almost two weeks later we decided to start posting in smaller subreddits again, which proved to be no problem. But when we dared to do another post in r/gaming later, people immediately reacted very aggressively. We took the new post down and decided to stop posting in r/gaming (at least during the Kickstarter). After upgrading the demo with a new feature to easily export GIFs, we started to run competitions on Twitter. The coolest drones shared with #NimbatusGame would receive a free alpha key for the game. Lots of players participated and helped to increase Nimbatus' reach by doing so. We also gave keys to our most dedicated Youtubers/streamers, who then came up with all kinds of interesting challenges for their viewers. All these activities came together in a nice loop: people downloaded the Nimbatus demo they had heard about on social media, social news sites or from Youtubers/streamers; by receiving newsletters and playing the demo, they learned about the Kickstarter; many of them backed and participated in community goals/competitions, which brought in more new people. Not much happened in terms of press. RockPaperShotgun and PCGamer wrote articles, both resulting in about $500, which was nice. A handful of small sites picked up the news too. We sent out a press release when Nimbatus reached its funding goal, both to manually picked editors of bigger sites and via gamespress.com. 
Part 4: Last Days

Every person who hits the "Remind me" button on a Kickstarter page receives an email 48 hours before the campaign ends. This helpful reminder caused a flood of new pledges. We reached our last stretch goal a few hours before our campaign ended. Since we had already communicated this goal as the final one, we withheld announcing any further stretch goals. We decided to do a Thunderclap 24 hours before the campaign ended; even after having done quite a few Thunderclaps, we are still unsure how big an impact they have. A few minutes before the Kickstarter campaign was over, we cleaned up our campaign page and added links to our Steam page and website. Note that Kickstarter pages cannot be edited after the campaign ends! The campaign ended on a Tuesday evening (UTC+1) and raised a total of $75'000, which is 369% of the original funding goal. After finishing up our "Thank you" image and sending it to our backers, it was time to rest.

Part 5: Conclusion

We are very happy with the campaign's results. It was unexpected to surpass our funding goal by this much, even though we didn't have an engaged community when the campaign started. Thanks to the demo, we were able to develop a community for Nimbatus on the go. The demo also allowed us to be less "promoty" when posting on social news sites. This way, interested people could get the demo and discover the Kickstarter from there, instead of us having to ask for support directly when posting. This, combined with the ever-growing newsletter, turned into a great campaign dynamic. We plan to use this approach again for future campaigns. 
Growth
300 ------------------> 430 Facebook likes
1300 -----------------> 2120 Twitter followers
1000 -----------------> 50'000 Newsletter signups
3500 -----------------> 10'000 Followers on Steam
0 ---------------------> 320 Readers of subreddit
0 ---------------------> 468 People on Discord
0 ---------------------> 300 Members in our forum

More data
23% of our backers came directly from Kickstarter. 76% of our backers came from external sites. For our previous campaign it was 36/64. The average pledge amount of our backers was $26. 94 backers decided to choose the Navigator reward, which gives them access to all games our studio will create in the future. It makes us very happy to see that this kind of reward, which is basically an investment in us as a game company, was popular among backers.

Main sources of backers
Link inside demo / Newsletter: 22'000
Kickstarter: 17'000
Youtube: 15'000
Google: 3000
Reddit: 2500
Twitter: 2000
Facebook: 2000

TLDR:
- Keymailer is awesome, but also contact big Youtubers/streamers via email.
- Most money for the Kickstarter came in through the demo.
- Social news sites (Imgur, 9gag, Reddit, ...) can generate a lot of attention for a game.
- It's much easier to offer a demo on social news sites than to ask for Kickstarter support.
- Collecting newsletter subs from demo downloads is very effective.
- It's possible to run a successful Kickstarter without having a big community beforehand.

We hope this insight helps you plan your future Kickstarter campaign. We believe you can do it and we wish you all the best.

About the author: Philomena Schwab is a game designer from Zurich, Switzerland. She co-founded Stray Fawn Studio together with Micha Stettler. The indie game studio recently released its first game, Niche - a genetics survival game, and is now developing its second game, Nimbatus - The Space Drone Constructor. Philomena wrote her master's thesis about community building for indie game developers and founded the nature gamedev collective Playful Oasis. 
As a chair member of the Swiss Game Developers association she helps her local game industry grow. https://www.nimbatus.ch/ https://strayfawnstudio.com/ https://www.kickstarter.com/projects/strayfawnstudio/nimbatus-the-space-drone-constructor Related Reading: Algo-Bot: Lessons Learned from our Kickstarter failure.