
Procedural Content Generation (Devs, please share your thoughts)

Recommended Posts

This is for a dissertation I'm working on regarding procedural generation, directed towards indie developers, so if you're an indie dev please feel free to share your thoughts :)

  1. Does run-time procedural generation limit the designer's freedom and flexibility?
  2. if (you have implemented procedural generation == true) { talk about some of the useful algorithms you used } else { explain why you haven't }
  3. Do you think indie devs are taking advantage of the benefits provided by procedural generation?
  4. What are some of the games that inspired you to take up procedural content generation?
  5. If there is any way I can see your work regarding proc gen, please mention the link (because I need actual indie developers to make a valid point in my dissertation).

Thank You So Much

Edited by @Teejay_Cherian

22 hours ago, @Teejay_Cherian said:

What are your thoughts on the contribution of procedural content generation in the field of game design? Does Proc Generation limit the design aspect of the game?

It is an essential tool.

It does in fact limit design and content; a better-designed system can get around these limits, but it ends up consuming more time to build than doing things by hand.

Art is the one place where procedural content is most limiting.

22 hours ago, @Teejay_Cherian said:
  • if (you have implemented procedural generation == true) { talk about some of the useful algorithms you used } else { explain why you haven't }
  • Do you think indie devs are taking advantage of the benefits provided by procedural generation?

This can be answered at the same time. All developers everywhere use procedural content in some way, unless they only hard-code.

All games have some part that was made procedurally, even if it was just the names of loot items or a function that scatters grass objects to populate a map.

Procedural games are just the ones that use procedural content as a focus of game play.

 

I use procedural generators all the time to create swarm enemies. These select a material instance, weapons, armor and an AI, stitching all the parts together to spawn enemy variants.
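A variant generator along these lines can be sketched in a few lines. The part pools and field names below are hypothetical stand-ins, not the poster's actual code:

```python
import random

# Hypothetical part pools; a real project would load these from asset tables.
MATERIALS = ["rust", "chrome", "bone"]
WEAPONS = ["claw", "spitter", "blade"]
ARMOR = ["none", "light", "heavy"]
AI_TYPES = ["rusher", "flanker", "lurker"]

def spawn_variant(rng):
    """Stitch randomly chosen parts together into one enemy description."""
    return {
        "material": rng.choice(MATERIALS),
        "weapon": rng.choice(WEAPONS),
        "armor": rng.choice(ARMOR),
        "ai": rng.choice(AI_TYPES),
    }

def spawn_swarm(seed, count):
    """Same seed -> same swarm, which keeps spawns reproducible."""
    rng = random.Random(seed)
    return [spawn_variant(rng) for _ in range(count)]
```

Seeding the generator per swarm makes spawns reproducible, which helps a lot when debugging a specific encounter.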

22 hours ago, @Teejay_Cherian said:

What are some of the games that inspired you to take up procedural content generation?

None. It was a part of learning how to make games, creating large worlds by hand takes too much time.

22 hours ago, @Teejay_Cherian said:

If there is any way I can see your work regarding proc gen, please mention the link

If you want, I can attach an image showing how procedural content is used in modern engines. It will just take some time, as I would have to make it all and am a bit busy at the moment.

Engines like Unreal are full of procedural tools.

1 hour ago, JTippetts said:

Are you asking only about procedural content used IN the game, or do you include procedural generation as a part of the game creation process?

I was referring to procedural generation as part of the game creation process, something like the game Spelunky.

I've seen your work and I'm in awe. The dirt and rock textures, that awesome terrain editor and the "voxel" environment are insane. Could you name or point me to some of the study material that helped you understand and implement procedural work?

Edited by @Teejay_Cherian


It's easy to do a lunar landscape: Julia, Perlin/simplex, diamond-square, whatever. The hard part is dealing with water. If the terrain is not self-aware (i.e. a super-map that spans entire continents), how do you know the river is going downhill all the way to the sea? The pic below, from my current work, is a compromise with a swizzled Julia template and simplex-style hills (I need the skate-park look because I use a VR hoverboard). I deal with rivers by having everything at ground zero: flat earth. The UAV-style height map below is generated by compute as the camera moves around. Splat maps and content are also generated. In reality it's not a linear scale like that; the further out, the further apart the points are (torus grid).

(attached image: the generated height map described above)

I've done height maps with overhangs, by the way; there's no rule that says you can't displace non-vertically, you just need to deal with collisions.
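For readers unfamiliar with the diamond-square algorithm mentioned above, a minimal, unoptimized sketch looks roughly like this (parameter names are mine, not from any engine):

```python
import random

def diamond_square(n, roughness, seed=0):
    """Generate a (2^n + 1) x (2^n + 1) height map via diamond-square."""
    size = (1 << n) + 1
    rng = random.Random(seed)
    h = [[0.0] * size for _ in range(size)]
    # Seed the four corners with random heights.
    for y in (0, size - 1):
        for x in (0, size - 1):
            h[y][x] = rng.uniform(-1.0, 1.0)
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: centre of each square = mean of its corners + jitter.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + rng.uniform(-scale, scale)
        # Square step: edge midpoints = mean of their diamond neighbours + jitter.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return h
```

Halving the jitter range each iteration (the `scale * roughness` line) is what gives the terrain large features plus progressively finer detail.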

Beyond pure height-map generation, it's really more about content generation. Who cares about miles and miles of the same stuff? We want roads and towns, etc. How do you path roads? What type of content can you randomly generate?

I generate roads after the fact, doing the roadworks with a pathing algorithm that tries not to go up or down. Notice the contradiction here: I just said I don't do that for water, but rivers can run for unknown distances and don't make sense if they die out or hit a dead end. It doesn't matter with roads; just put something there.

All I have at the moment are points of interest. I don't actually create villages yet, just the roadworks to them, so that's as much as I can say.
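A pathing algorithm that "tries not to go up or down" can be approximated with Dijkstra over the height map, weighting elevation change heavily. This is a sketch of that idea under my own assumptions, not the poster's implementation:

```python
import heapq

def route_road(heights, start, goal, slope_penalty=10.0):
    """Dijkstra over a height-map grid; steps that change elevation cost
    more, so the road prefers to stay level."""
    rows, cols = len(heights), len(heights[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist[(y, x)]:
            continue  # stale queue entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                step = 1.0 + slope_penalty * abs(heights[ny][nx] - heights[y][x])
                nd = d + step
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    # Walk back from goal to start to recover the road tiles.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Unlike a river, the road is guaranteed to reach its destination; the slope penalty just makes it detour around hills when that's cheaper than climbing them.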


18 hours ago, @Teejay_Cherian said:

I was referring to procedural generation as part of the game creation process, something like the game Spelunky.

I've seen your work and I'm in awe. The dirt and rock textures, that awesome terrain editor and the "voxel" environment are insane. Could you name or point me to some of the study material that helped you understand and implement procedural work?

Mostly just a lot of googling. I got started by playing various roguelikes back in the day; then at some point I purchased the book Texturing and Modeling: A Procedural Approach, and that got me going with noise functions. After that, it was all just experimentation and more googling.


I think it is a great tool for laying down a starting point that you can then hand-design from. Attempting to make a game where everything is procedurally generated is a mistake. The technology is not there yet, and you wind up with something the human brain perceives as "wrong, empty, and repetitive". Someday a game that procedurally generates its world, even on the fly as new areas are encountered, will work and work well. But that day is still a long way away.

For now it remains a useful tool to "throw some paint on the wall" and save the development team a lot of time. But without that output then being re-arranged, re-organized, and added to by a human mind... you wind up with No Man's Sky.

 

Edited by Kavik Kang

9 hours ago, Kavik Kang said:

For now it remains a useful tool to "throw some paint on the wall" and save the development team a lot of time. But without that output then being re-arranged, re-organized, and added to by a human mind... you wind up with No Man's Sky.

No Man's Sky's technical foundation is nothing short of amazing, but the game literally has nothing in it: 18 quintillion barren planets. At the same time, games like The Binding of Isaac and Spelunky seem to have hit the spot.

On 30.11.2017 at 5:31 AM, @Teejay_Cherian said:

What are your thoughts on the contribution of procedural content generation in the field of game design? Does Proc Generation limit the design aspect of the game?

Maybe you should not ask that question in such a broad context; it sounds a bit unprofessional to me.

Using procedural content or not is a game-design decision, so any limitations and possibilities will be considered along with it. It may be a fundamental decision (No Man's Sky would be impossible without it), or an unimportant decision about details (do we use SpeedTree, or do we model plants by hand?).

Your question only makes sense in the gray area in between, e.g. Assassin's Creed: do we use procedural cities, or do we model each building like GTA? But in that case the answer is more related to available resources than to game design.

On 30.11.2017 at 5:31 AM, @Teejay_Cherian said:

if (you have implemented procedural generation == true) { talk about some of the useful algorithms you used } else { explain why you haven't }

I used procedural levels for a mobile action-puzzle game; think of Zuma / Breakout / Match 3. I used Perlin noise, Worley noise, etc. to control which types of bricks appear, how often, and in what patterns. The goal was to have levels that are unique each time you play but overall still feel the same in difficulty and appearance. This was more work than hand-made levels. In the end I did not feel limited in level design, which was still a creative process, and everything worked well for that type of game. (I think Spelunky is very similar in how it uses proc gen, but I never played it.)
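One way to picture this approach: sample a smooth noise function per column, so brick types cluster into patterns rather than pure static. The brick names below and the simple 1-D value noise (standing in for the Perlin/Worley noise mentioned) are illustrative, not the actual game's code:

```python
import math, random

def value_noise_1d(x, seed=0):
    """Cheap 1-D value noise: random lattice values, smoothly interpolated."""
    x0 = math.floor(x)
    def lattice(i):
        # Deterministic per-lattice-point random value in [0, 1).
        return random.Random(hash((seed, i))).uniform(0.0, 1.0)
    t = x - x0
    t = t * t * (3 - 2 * t)  # smoothstep fade for smooth transitions
    return lattice(x0) * (1 - t) + lattice(x0 + 1) * t

# Hypothetical brick types, ordered from easy to hard.
BRICKS = ["soft", "normal", "hard", "bonus"]

def brick_row(width, row, seed=0):
    """Pick a brick type per column; the noise correlates neighbouring
    columns, so bricks appear in runs and patterns, not random static."""
    out = []
    for col in range(width):
        n = value_noise_1d(col * 0.35 + row * 7.13, seed)
        out.append(BRICKS[min(int(n * len(BRICKS)), len(BRICKS) - 1)])
    return out
```

Tuning the `0.35` frequency changes how long the runs of identical bricks are, which is one knob for controlling perceived difficulty.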

On 30.11.2017 at 5:31 AM, @Teejay_Cherian said:

Do you think indie Devs are taking advantage of the benefits provided by procedural generation?

Yes. I also think most devs constantly have that idea of procedural worlds in their heads, and lots of them do some research when possible. But they are also aware that huge boring worlds are not interesting, so we do not (or at least should not) notice where procedural generation has been used when playing the game.

There are, however, endless applications, from procedural textures, geometry, levels and behaviour up to whole worlds. It's an interesting and fun area, and the only alternative to real-world data for keeping up with the increasing level of detail in games.


1. While procedural generation is powerful, and I am personally a huge fan, it often leads to feature creep in a bad way. The less that is understood about the implementation in detail, the more likely you are to encounter unforeseen problems, which increases development time. The problem with implementing effective procedural generation at run-time is that, for a realistic result, you have to approximate the infinite possibilities of reality. In other words, repetition and predictability are the usual pitfalls of procedural generation (with some mitigations, e.g. random seeds), and counteracting them sometimes requires more processing power than is available. Basically, procedural generation is a double-edged sword for indie devs. By leveraging a small amount of static assets to create a lot of content, it can help developers build things not otherwise possible as a small or solo team. The added complexity of the implementation, however, presents risks all by itself and tends to lead to undefined project scopes.

2. In my experience the most useful procedural generation concept for anything remotely complex is Perlin Noise or its more modern variations like Simplex Noise and Wavelet Noise. Essentially Perlin Noise works on the principle of creating random gradients instead of purely random values. So, in the case of 2D noise, when creating a random sample you get the smooth transitions between extremes instead of a result that looks like static. This is commonly used in terrain generation for heightmaps that produce rolling hills. Since it's so smooth though, layers of Perlin Noise are used on top of one another to create small bumps and ridges in the overall landscape to reduce repetition (a process sometimes called fractal noise). In the case of terrain this often still looks unnatural and is best supplemented with something like the Diamond-Square algorithm, a variation on midpoint displacement where points are randomly raised and lowered by a reduced range while increasing resolution. Terrain isn't the only thing that can be made with noise of course, pretty much anything that has an unnatural-looking gradient can be supplemented with random noise to break up the change. Although procedural generation with a semi-random number generator of some kind is common, it doesn't necessarily have to use one. Procedural mesh generation can be used to create 3d geometry on the fly according to a particular algorithm. For example, creating a firing arc to display the path a projectile will take or a simple ring used for a selection circle.
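The octave-layering ("fractal noise") idea described above reads in code roughly like this. A simple value noise stands in for true Perlin gradient noise here, since the layering logic is the same either way:

```python
import math, random

def value_noise(x, seed=0):
    """Smoothly interpolated random lattice values; a gradient-free
    stand-in for Perlin noise, used only to demonstrate octave layering."""
    x0 = math.floor(x)
    def lat(i):
        return random.Random(hash((seed, i))).uniform(-1.0, 1.0)
    t = x - x0
    t = t * t * (3 - 2 * t)  # smoothstep fade
    return lat(x0) * (1 - t) + lat(x0 + 1) * t

def fractal_noise(x, octaves=4, lacunarity=2.0, gain=0.5, seed=0):
    """Sum octaves: each layer doubles the frequency and halves the
    amplitude, adding the small bumps and ridges on top of rolling hills."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for o in range(octaves):
        total += amplitude * value_noise(x * frequency, seed + o)
        norm += amplitude
        amplitude *= gain
        frequency *= lacunarity
    return total / norm  # normalize back into roughly [-1, 1]
```

Sampling `fractal_noise` across a grid and using the result as elevation gives the layered-heightmap terrain the post describes; more octaves means finer surface detail at higher cost.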

3. Depends on what kind of procedural generation you mean. Like JTippetts pointed out so well, content creation uses quite a bit of procedural generation. Whether indie devs realize they're using it or not, almost all of them are probably using it at some point in the creation process. Photoshop now has an amazing number of algorithms for filters and transformations, Substance Designer and Filter Forge are pretty much designed around procedural generation for textures, and 3D modeling apps use complex algorithms to do a variety of things like boolean operations, decimation, and beveling, just to name a few. If you mean run-time procedural generation, I'd refer you to my assertion that it is a double-edged sword: while devs may be missing out on some of the pros, they're also avoiding the cons. Also, not every project is ideal for procedural generation; sometimes it hurts more than it helps.

4. This is a tough one since there are a lot of games that use procedural generation but I think the biggest impact on me personally has been the Civilization games. I have lost track of the amount of time I've spent playing the beginning of a game and exploring the new terrain. Even after I got bored with actually making a civilization, the fact that the terrain was new and interesting every time made me keep coming back. That idea, the ability to have an infinite amount of content to explore, is both the greatest promise and biggest pitfall of procedural generation. While I think that it is unquestionably the way of the future, I don't think it's there yet, not by a long shot. While a lot of the dreams people have for random generation are unreachable now, that isn't to say they won't be as hardware and software improvements continue to stream in.

5. Most of the work and research I've done on procedural generation has been for small prototypes that I never published. I did make a procedural island generator for this game but I ended up just using static geometry for a set amount of islands instead. Although in that game I did use procedural generation for the firing arc and the docking ring.
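A procedural mesh like the selection/docking ring mentioned above is mostly just trigonometry. This sketch (my own minimal version, not the game's code) builds vertex positions and triangle indices for a flat ring:

```python
import math

def ring_mesh(inner_r, outer_r, segments):
    """Build vertices and triangle indices for a flat ring on the XZ plane:
    two vertices per segment (inner and outer edge), two triangles per quad."""
    verts, tris = [], []
    for i in range(segments):
        a = 2.0 * math.pi * i / segments
        c, s = math.cos(a), math.sin(a)
        verts.append((inner_r * c, 0.0, inner_r * s))  # inner edge vertex
        verts.append((outer_r * c, 0.0, outer_r * s))  # outer edge vertex
    for i in range(segments):
        i0 = 2 * i                          # this segment's inner vertex
        i1 = (2 * i + 2) % (2 * segments)   # next segment's inner vertex (wraps)
        # Split the quad (i0, i0+1, i1, i1+1) into two triangles.
        tris.append((i0, i0 + 1, i1))
        tris.append((i1, i0 + 1, i1 + 1))
    return verts, tris
```

The same pattern (parametric curve, sampled and stitched into triangles) extends to firing arcs: sample points along the projectile's trajectory instead of a circle.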


I haven't used procedural generation for anything game-related since probably my 8086 PC, which was a very long time ago.

But I procedurally generate a great many things in my operating system, such as the default generic wallpaper, window elements, and window and background decoration elements, to save a few megabytes in the binary.

By the way, I don't like procedural generation and I'm not good at it; it can, and will, bug out differently on every system, and it's hard to figure out what's going on after you've written it and not touched it for a while.

It could make sense if somebody can do it properly, but it's somehow not my style.




  • Advertisement
  • Advertisement
  • Popular Tags

  • Advertisement
  • Popular Now

  • Similar Content

    • By getoutofmycar
      I'm having some difficulty understanding how data would flow or get inserted into a multi-threaded opengl renderer where there is a thread pool and a render thread and an update thread (possibly main). My understanding is that the threadpool will continually execute jobs, assemble these and when done send them off to be rendered where I can further sort these and achieve some cheap form of statelessness. I don't want anything overly complicated or too fine grained,  fibers,  job stealing etc. My end goal is to simply have my renderer isolated in its own thread and only concerned with drawing and swapping buffers. 
      My questions are:
      1. At what point in this pipeline are resources created?
      Say I have a
      class CCommandList { void SetVertexBuffer(...); void SetIndexBuffer(...); void SetVertexShader(...); void SetPixelShader(...); } borrowed from an existing post here. I would need to generate a VAO at some point and call glGenBuffers etc especially if I start with an empty scene. If my context lives on another thread, how do I call these commands if the command list is only supposed to be a collection of state and what command to use. I don't think that the render thread should do this and somehow add a task to the queue or am I wrong?
      Or could I do some variation where I do the loading in a thread with shared context and from there generate a command that has the handle to the resources needed.
       
      2. How do I know all my jobs are done.
      I'm working with C++, is this as simple as knowing how many objects there are in the scene, for every task that gets added increment a counter and when it matches aforementioned count I signal the renderer that the command list is ready? I was thinking a condition_variable or something would suffice to alert the renderthread that work is ready.
       
      3. Does all work come from a singular queue that the thread pool constantly cycles over?
      With the notion of jobs, we are basically sending the same work repeatedly right? Do all jobs need to be added to a single persistent queue to be submitted over and over again?
       
      4. Are resources destroyed with commands?
      Likewise with initializing and assuming #3 is correct, removing an item from the scene would mean removing it from the job queue, no? Would I need to send a onetime command to the renderer to cleanup?
    • By RJSkywalker
      Hello, I'm trying to design a maze using a mix of procedural and manual generation. I have the maze already generated and would like to place other objects in the maze. The issue is the maze object is created on BeginPlay and so I'm unable to view it in the Editor itself while dragging the object to the Outliner. Any suggestions?
      I'm thinking of doing something in the Construction Script or the object Constructor but not not sure if that would be the way to go.
      I'm still getting familiar with the Engine code base and only have a little experience in Maya or Blender since I'm a programmer.
    • By devbyskc
      Hi Everyone,
      Like most here, I'm a newbie but have been dabbling with game development for a few years. I am currently working full-time overseas and learning the craft in my spare time. It's been a long but highly rewarding adventure. Much of my time has been spent working through tutorials. In all of them, as well as my own attempts at development, I used the audio files supplied by the tutorial author, or obtained from one of the numerous sites online. I am working solo, and will be for a while, so I don't want to get too wrapped up with any one skill set. Regarding audio, the files I've found and used are good for what I was doing at the time. However I would now like to try my hand at customizing the audio more. My game engine of choice is Unity and it has an audio mixer built in that I have experimented with following their tutorials. I have obtained a great book called Game Audio Development with Unity 5.x that I am working through. Half way through the book it introduces using FMOD to supplement the Unity Audio Mixer. Later in the book, the author introduces Reaper (a very popular DAW) as an external program to compose and mix music to be integrated with Unity. I did some research on DAWs and quickly became overwhelmed. Much of what I found was geared toward professional sound engineers and sound designers. I am in no way trying or even thinking about getting to that level. All I want to be able to do is take a music file, and tweak it some to get the sound I want for my game. I've played with Audacity as well, but it didn't seem to fit the bill. So that is why I am looking at a better quality DAW. Since being solo, I am also under a budget contraint. So of all the DAW software out there, I am considering Reaper or Presonus Studio One due to their pricing. My question is, is investing the time to learn about using a DAW to tweak a sound file worth it? Are there any solo developers currently using a DAW as part of their overall workflow? If so, which one? 
I've also come across Fabric which is a Unity plug-in that enhances the built-in audio mixer. Would that be a better alternative?
      I know this is long, and maybe I haven't communicated well in trying to be brief. But any advice from the gurus/vets would be greatly appreciated. I've leaned so much and had a lot of fun in the process. BTW, I am also a senior citizen (I cut my programming teeth back using punch cards and Structured Basic when it first came out). If anyone needs more clarification of what I am trying to accomplish please let me know.  Thanks in advance for any assistance/advice.
    • By evanofsky
      The more you know about a given topic, the more you realize that no one knows anything.
      For some reason (why God, why?) my topic of choice is game development. Everyone in that field agrees: don't add networked multiplayer to an existing game, you drunken clown.
      Well, I did it anyway because I hate myself. Somehow it turned out great. None of us know anything.
      Problem #1: assets
      My first question was: how do I tell a client to use such-and-such mesh to render an object? Serialize the whole mesh? Nah, they already have it on disk. Send its filename? Nah, that's inefficient and insecure. Okay, just a string identifier then?
      Fortunately, before I had time to implement any of my own terrible ideas, I watched a talk from Mike Acton where he mentions the danger of "lazy decision-making". One of his points was: strings let you lazily ignore decisions until runtime, when it's too late to fix.
      If I rename a texture, I don't want to get a bug report from a player with a screenshot like this:

      I had never thought about how powerful and complex strings are. Half the field of computer science deals with strings and what they can do. They usually require a heap allocation, or something even more complex like ropes and interning. I usually don't bother to limit their length, so a single string expands the possibility space to infinity, destroying whatever limited ability I had to predict runtime behavior.
      And here I am using these complex beasts to identify objects. Heck, I've even used strings to access object properties. What madness!
      Long story short, I cultivated a firm conviction to avoid strings where possible. I wrote a pre-processor that outputs header files like this at build time:
      namespace Asset { namespace Mesh { const int count = 3; const AssetID player = 0; const AssetID enemy = 1; const AssetID projectile = 2; } } So I can reference meshes like this:
      renderer->mesh = Asset::Mesh::player; If I rename a mesh, the compiler makes it my problem instead of some poor player's problem. That's good!
      The bad news is, I still have to interact with the file system, which requires the use of strings. The good news is the pre-processor can save the day.
      const char* Asset::Mesh::filenames[] = { "assets/player.msh", "assets/enemy.msh", "assets/projectile.msh", 0, }; With all this in place, I can easily send assets across the network. They're just numbers! I can even verify them.
      if (mesh < 0 || mesh >= Asset::Mesh::count) net_error(); // just what are you trying to pull, buddy? Problem #2: object references
      My next question was: how do I tell a client to please move/delete/frobnicate "that one object from before, you know the one". Once again, I was lucky enough to hear from smart people before I could shoot myself in the foot.
      From the start, I knew I needed a bunch of lists of different kinds of objects, like this:
      Array<Turret> Turret::list; Array<Projectile> Projectile::list; Array<Avatar> Avatar::list; Let's say I want to reference the first object in the Avatar list, even without networking, just on our local machine. My first idea is to just use a pointer:
       
      Avatar* avatar; avatar = &Avatar::list[0]; This introduces a ton of non-obvious problems. First, I'm compiling for a 64 bit architecture, which means that pointer takes up 8 whole bytes of memory, even though most of it is probably zeroes. And memory is the number one performance bottleneck in games.
      Second, if I add enough objects to the array, it will get reallocated to a different place in memory, and the pointer will point to garbage.
      Okay, fine. I'll use an ID instead.
      template<typename Type> struct Ref { short id; inline Type* ref() { return &Type::list[id]; } // overloaded "=" operator omitted }; Ref<Avatar> avatar = &Avatar::list[0]; avatar.ref()->frobnicate(); Second problem: if I remove that Avatar from the list, some other Avatar will get moved into its place without me knowing. The program will continue, blissfully and silently screwing things up, until some player sends a bug report that the game is "acting weird". I much prefer the program to explode instantly so I at least get a crash dump with a line number.
      Okay, fine. Instead of actually removing the avatar, I'll put a revision number on it:
      struct Avatar { short revision; }; template<typename Type> struct Ref { short id; short revision; inline Type* ref() { Type* t = &Type::list[id]; return t->revision == revision ? t : nullptr; } }; Instead of actually deleting the avatar, I'll mark it dead and increment the revision number. Now anything trying to access it will give a null pointer exception. And serializing a reference across the network is just a matter of sending two easily verifiable numbers.
      Problem #3: delta compression
      If I had to cut this article down to one line, it would just be a link to Glenn Fiedler's blog.
      Which by the way is here: gafferongames.com
      As I set out to implement my own version of Glenn's netcode, I read this article, which details one of the biggest challenges of multiplayer games. Namely, if you just blast the entire world state across the network 60 times a second, you could gobble up 17 mbps of bandwidth. Per client.
      Delta compression is one of the best ways to cut down bandwidth usage. If a client already knows where an object is, and it hasn't moved, then I don't need to send its position again.
      This can be tricky to get right.

      The first part is the trickiest: does the client really know where the object is? Just because I sent the position doesn't mean the client actually received it. The client might send an acknowledgement back that says "hey I received packet #218, but that was 0.5 seconds ago and I haven't gotten anything since."
      So to send a new packet to that client, I have to remember what the world looked like when I sent out packet #218, and delta compress the new packet against that. Another client might have received everything up to packet #224, so I can delta compress the new packet differently for them. Point is, we need to store a whole bunch of separate copies of the entire world.
      Someone on Reddit asked "isn't that a huge memory hog"?
      No, it is not.
      Actually I store 255 world copies in memory. All in a single giant array. Not only that, but each copy has enough room for the maximum number of objects (2048) even if only 2 objects are active.
      If you store an object's state as a position and orientation, that's 7 floats. 3 for XYZ coordinates and 4 for a quaternion. Each float takes 4 bytes. My game supports up to 2048 objects. 7 floats * 4 bytes * 2048 objects * 255 copies = ...
      14 MB. That's like, half of one texture these days.
      I can see myself writing this system five years ago in C#. I would start off immediately worried about memory usage, just like that Redditor, without stopping to think about the actual data involved. I would write some unnecessary, crazy fancy, bug-ridden compression system.
      Taking a second to stop and think about actual data like this is called Data-Oriented Design. When I talk to people about DOD, many immediately say, "Woah, that's really low-level. I guess you want to wring out every last bit of performance. I don't have time for that. Anyway, my code runs fine." Let's break down the assumptions in this statement.
      Assumption 1: "That's really low-level".
      Look, I multiplied four numbers together. It's not rocket science.
      Assumption 2: "You sacrifice readability and simplicity for performance."
      Let's picture two different solutions to this netcode problem. For clarity, let's pretend we only need 3 world copies, each containing up to 2 objects.
      Here's the solution I just described. Everything is statically allocated in the .bss segment. It never moves around. Everything is the same size. No pointers at all.

      Here's the idiomatic C# solution. Everything is scattered randomly throughout the heap. Things can get reallocated or moved right in the middle of a frame. The array is jagged. 64-bit pointers all over the place.

      Which is simpler?
      The second diagram is actually far from exhaustive. C#-land is a lot more complex in reality. Check the comments and you'll probably find someone correcting me about how C# actually works.
      But that's my point. With my solution, I can easily construct a "good enough" mental model to understand what's actually happening on the machine. I've barely scratched the surface with the C# solution. I have no idea how it will behave at runtime.
      Assumption 3: "Performance is the only reason you would code like this."
      To me, performance is a nice side benefit of data-oriented design. The main benefit is clarity of thought. Five years ago, when I sat down to solve a problem, my first thought was not about the problem itself, but how to shoehorn it into classes and interfaces.
      I witnessed this analysis paralysis first-hand at a game jam recently. My friend got stuck designing a grid for a 2048-like game. He couldn't figure out if each number was an object, or if each grid cell was an object, or both. I said, "the grid is an array of numbers. Each operation is a function that mutates the grid." Suddenly everything became crystal clear to him.
      Assumption 4: "My code runs fine".
      Again, performance is not the main concern, but it's important. The whole world switched from Firefox to Chrome because of it.
      Try this experiment: open up calc.exe. Now copy a 100 MB file from one folder to another.

      I don't know what calc.exe is doing during that 300ms eternity, but you can draw your own conclusions from my two minutes of research: calc.exe actually launches a process called Calculator.exe, and one of the command line arguments is called "-ServerName".
      Does calc.exe "run fine"? Did throwing a server in simplify things at all, or is it just slower and more complex?
      I don't want to get side-tracked. The point is, I want to think about the actual problem and the data involved, not about classes and interfaces. Most of the arguments against this mindset amount to "it's different than what I know".
      Problem #4: lag
      I now hand-wave us through to the part of the story where the netcode is somewhat operational.
      Right off the bat I ran into problems dealing with network lag. Games need to respond to players immediately, even if it takes 150ms to get a packet from the server. Projectiles were particularly useless under laggy network conditions. They were impossible to aim.
      I decided to re-use those 14 MB of world copies. When the server receives a command to fire a projectile, it steps the world back 150ms to the way the world appeared to the player when they hit the fire button. Then it simulates the projectile and steps the world forward until it's up to date with the present. That's where it creates the projectile.
      I ended up having the client create a fake projectile immediately, then as soon as it hears back from the server that the projectile was created, it deletes the fake and replaces it with the real thing. If all goes well, they should be in the same place due to the server's timey-wimey magic.
      Here it is in action. The fake projectile appears immediately but goes right through the wall. The server receives the message and fast-forwards the projectile straight to the part where it hits the wall. 150ms later the client gets the packet and sees the impact particle effect.
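      In sketch form, the server-side rewind looks something like this. All the names and constants here are hypothetical; the real game stores full world snapshots, but the ring-buffer-and-fast-forward shape is the idea:

```cpp
#include <array>

// Hypothetical sketch: the server keeps a ring buffer of recent world
// snapshots, one per tick, and rewinds by the client's latency before
// simulating a newly fired projectile.
struct World { /* positions, walls, projectiles... */ int tick = 0; };

constexpr int kTickMs = 16;
constexpr int kHistory = 64; // about one second of snapshots

struct Server {
    std::array<World, kHistory> history{};
    World present{};

    void step(World& w) { w.tick++; /* run one frame of simulation */ }

    void record() { history[present.tick % kHistory] = present; }

    // Fire a projectile as the world appeared `latency_ms` ago.
    // (Assumes the rewind never reaches past the start of the buffer.)
    World fire_at(int latency_ms) {
        int rewind_ticks = latency_ms / kTickMs;
        World w = history[(present.tick - rewind_ticks) % kHistory];
        // Simulate forward until the rewound copy catches up to now;
        // the projectile is created in the caught-up world.
        while (w.tick < present.tick) step(w);
        return w;
    }
};
```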

      The problem with netcode is that each mechanic requires a different approach to lag compensation. For example, my game has an "active armor" ability. If players react quickly enough, they can reflect damage back at enemies.

      This breaks down in high lag scenarios. By the time the player sees the projectile hitting their character, the server has already registered the hit 100ms ago. The packet just hasn't made it to the client yet. This means you have to anticipate incoming damage and react long before it hits. Notice in the gif above how early I had to hit the button.
      To correct this, the server implements something I call "damage buffering". Instead of applying damage instantly, the server puts the damage into a buffer for 100ms, or whatever the round-trip time is to the client. At the end of that time, it either applies the damage, or if the player reacted, reflects it back.
      Here it is in action. You can see the 200ms delay between the projectile hitting me and the damage actually being applied.

      Here's another example. In my game, players can launch themselves at enemies. Enemies die instantly to perfect shots, but they deflect glancing blows and send you flying like this:

      Which direction should the player bounce? The client has to simulate the bounce before the server knows about it. The server and client need to agree which direction to bounce or they'll get out of sync, and they have no time to communicate beforehand.
      At first I tried quantizing the collision vector so that there were only six possible directions. This made it more likely that the client and server would choose the same direction, but it didn't guarantee anything.
      Finally I implemented another buffer system. Both client and server, when they detect a hit, enter a "buffer" state where the player sits and waits for the remote host to confirm the hit. To minimize jankiness, the server always defers to the client as to which direction to bounce. If the client never acknowledges the hit, the server acts like nothing happened and continues the player on their original course, fast-forwarding them to make up for the time they sat still waiting for confirmation.
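      The state machine for that buffer is simple enough to sketch. Again, names are hypothetical and the velocities are stand-ins for the real physics:

```cpp
// Hypothetical sketch of the hit-confirmation buffer: on detecting a
// collision, the server freezes the player and waits for the client to
// say which way it bounced. If no confirmation arrives, the server
// fast-forwards the player along their original course.
enum class State { Flying, BufferedHit, Bouncing };

struct LaunchedPlayer {
    State state = State::Flying;
    float wait_time = 0.0f;     // how long we've sat frozen
    float vx = 1.0f, vy = 0.0f; // current velocity
    float x = 0.0f, y = 0.0f;

    void on_server_detects_hit() {
        state = State::BufferedHit; // freeze and wait for the client
    }

    // Client's packet arrived: defer to its bounce direction.
    void on_client_confirms(float bounce_x, float bounce_y) {
        vx = bounce_x; vy = bounce_y;
        state = State::Bouncing;
    }

    // No confirmation: pretend nothing happened and catch back up.
    void on_confirmation_timeout() {
        x += vx * wait_time; y += vy * wait_time; // fast-forward
        wait_time = 0.0f;
        state = State::Flying;
    }

    void update(float dt) {
        if (state == State::BufferedHit) { wait_time += dt; return; }
        x += vx * dt; y += vy * dt;
    }
};
```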
      Problem #5: jitter
      My server sends out packets 60 times per second. What about players whose computers run faster than that? They'll see jittery animation.
      Interpolation is the industry-standard solution. Instead of immediately applying position data received from the server, you buffer it a little bit, then you blend smoothly between whatever data you have.
      In my previous attempt at networked multiplayer, I tried to have each object keep track of its position data and smooth itself out. I ended up getting confused and it never worked well.
      This time, since I could already easily store the entire world state in a struct, I was able to write just two functions to make it work. One function takes two world states and blends them together. Another function takes a world state and applies it to the game.
      How big should the buffer delay be? I originally used a constant until I watched a video from the Overwatch devs where they mention adaptive interpolation delay. The buffer delay should smooth out not only the framerate from the server, but also any variance in packet delivery time.
      This was an easy win. Clients start out with a short interpolation delay, and any time they're missing a packet to interpolate toward, they increase their "lag score". Once it crosses a certain threshold, they tell the server to switch them to a higher interpolation delay.
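      Roughly, the lag-score loop looks like this (the constants and the doubling policy here are made up for illustration; tune to taste):

```cpp
#include <algorithm>

// Hypothetical sketch: a missed snapshot raises a "lag score"; crossing a
// threshold doubles the interpolation delay. The real client would then
// tell the server to switch it to the longer delay.
struct InterpolationTuner {
    float delay = 0.05f; // current buffer delay, in seconds
    float lag_score = 0.0f;

    // Call once per rendered frame.
    void frame(bool snapshot_available) {
        const float threshold = 3.0f;
        const float max_delay = 0.25f;
        if (!snapshot_available)
            lag_score += 1.0f; // nothing to interpolate toward
        else
            lag_score = std::max(0.0f, lag_score - 0.1f); // slowly forgive
        if (lag_score >= threshold) {
            delay = std::min(delay * 2.0f, max_delay);
            lag_score = 0.0f;
        }
    }
};
```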
      Of course, automated systems like this often act against the user's wishes, so it's important to add switches and knobs to the algorithm!

      Problem #6: joining servers mid-match
      Wait, I already have a way to serialize the entire game state. What's the hold up?
      Turns out, it takes more than one packet to serialize a fresh game state from scratch. And each packet may take multiple attempts to make it to the client. It may take a few hundred milliseconds to get the full state, and as we've seen already, that's an eternity. If the game is already in progress, that's enough time to send 20 packets' worth of new messages, which the client is not ready to process because it hasn't loaded yet.
      The solution is—you guessed it—another buffer.
      I changed the messaging system to support two separate streams of messages in the same packet. The first stream contains the map data, which is processed as soon as it comes in.
      The second stream is just the usual fire-hose of game messages that come in while the client is loading. The client buffers these messages until it's done loading, then processes them all until it's caught up.
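      In sketch form, with hypothetical names and strings standing in for the real binary messages:

```cpp
#include <queue>
#include <string>

// Hypothetical sketch of the two-stream join: map data (stream 1) is
// processed immediately, while regular game messages (stream 2) queue up
// until loading finishes, then get drained in order.
struct JoiningClient {
    bool loading = true;
    std::string map;                 // assembled from stream 1
    std::queue<std::string> pending; // stream 2, buffered while loading
    int processed = 0;

    void on_map_chunk(const std::string& chunk, bool last) {
        map += chunk; // stream 1: process as soon as it comes in
        if (last) {
            loading = false;
            while (!pending.empty()) { // catch up on the backlog
                process(pending.front());
                pending.pop();
            }
        }
    }

    void on_game_message(const std::string& msg) {
        if (loading) pending.push(msg); // not ready to apply this yet
        else process(msg);
    }

    void process(const std::string&) { processed++; }
};
```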
      Problem #7: cross-cutting concerns
      This next part may be the most controversial.
      Remember that bit of gamedev wisdom from the beginning? "don't add networked multiplayer to an existing game"?
      Well, most of the netcode in this game is literally tacked on. It lives in its own 5000-line source file. It reaches into the game, pokes stuff into memory, and the game renders it.
      Just listen a second before stoning me. Is it better to group all network code in one place, or spread it out inside each game object?
      I think both approaches have advantages and disadvantages. In fact, I use both approaches in different parts of the game, for various reasons human and technical.
      But some design paradigms (*cough* OOP) leave no room for you to make this decision. Of course you put the netcode inside the object! Its data is private, so you'll have to write an interface to access it anyway. Might as well put all the smarts in there too.
      Conclusion
      I'm not saying you should write netcode like I do; only that this approach has worked for me so far. Read the code and judge for yourself.
      There is an objectively optimal approach for each use case, although people may disagree on which one it is. You should be free to choose based on actual constraints rather than arbitrary ones set forth by some paradigm.
      Thanks for reading. DECEIVER is launching on Kickstarter soon. Sign up to play the demo here!