


Popular Content

Showing content with the highest reputation since 01/24/18 in all areas

  1. 8 points
    The more you know about a given topic, the more you realize that no one knows anything. For some reason (why God, why?) my topic of choice is game development. Everyone in that field agrees: don't add networked multiplayer to an existing game, you drunken clown. Well, I did it anyway because I hate myself. Somehow it turned out great. None of us know anything.

Problem #1: assets

My first question was: how do I tell a client to use such-and-such mesh to render an object? Serialize the whole mesh? Nah, they already have it on disk. Send its filename? Nah, that's inefficient and insecure. Okay, just a string identifier then?

Fortunately, before I had time to implement any of my own terrible ideas, I watched a talk from Mike Acton where he mentions the danger of "lazy decision-making". One of his points was: strings let you lazily ignore decisions until runtime, when it's too late to fix. If I rename a texture, I don't want to get a bug report from a player with a screenshot like this:

I had never thought about how powerful and complex strings are. Half the field of computer science deals with strings and what they can do. They usually require a heap allocation, or something even more complex like ropes and interning. I usually don't bother to limit their length, so a single string expands the possibility space to infinity, destroying whatever limited ability I had to predict runtime behavior. And here I am using these complex beasts to identify objects. Heck, I've even used strings to access object properties. What madness!

Long story short, I cultivated a firm conviction to avoid strings where possible. I wrote a pre-processor that outputs header files like this at build time:

    namespace Asset
    {
        namespace Mesh
        {
            const int count = 3;
            const AssetID player = 0;
            const AssetID enemy = 1;
            const AssetID projectile = 2;
        }
    }

So I can reference meshes like this:

    renderer->mesh = Asset::Mesh::player;

If I rename a mesh, the compiler makes it my problem instead of some poor player's problem. That's good! The bad news is, I still have to interact with the file system, which requires the use of strings. The good news is the pre-processor can save the day.

    const char* Asset::Mesh::filenames[] =
    {
        "assets/player.msh",
        "assets/enemy.msh",
        "assets/projectile.msh",
        0,
    };

With all this in place, I can easily send assets across the network. They're just numbers! I can even verify them.

    if (mesh < 0 || mesh >= Asset::Mesh::count)
        net_error(); // just what are you trying to pull, buddy?

Problem #2: object references

My next question was: how do I tell a client to please move/delete/frobnicate "that one object from before, you know the one". Once again, I was lucky enough to hear from smart people before I could shoot myself in the foot.

From the start, I knew I needed a bunch of lists of different kinds of objects, like this:

    Array<Turret> Turret::list;
    Array<Projectile> Projectile::list;
    Array<Avatar> Avatar::list;

Let's say I want to reference the first object in the Avatar list, even without networking, just on our local machine. My first idea is to just use a pointer:

    Avatar* avatar;
    avatar = &Avatar::list[0];

This introduces a ton of non-obvious problems. First, I'm compiling for a 64 bit architecture, which means that pointer takes up 8 whole bytes of memory, even though most of it is probably zeroes. And memory is the number one performance bottleneck in games. Second, if I add enough objects to the array, it will get reallocated to a different place in memory, and the pointer will point to garbage.
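For illustration, here is a minimal standalone sketch of that reallocation problem, with std::vector standing in for the article's Array type; the names are made up for the example:

```cpp
// Minimal sketch (not the article's engine code): std::vector stands in for
// Array<T> to show how a raw pointer into a growable list goes stale.
#include <cstdio>
#include <vector>

struct Avatar { int health = 100; };

int main()
{
    std::vector<Avatar> list;
    list.push_back(Avatar{});

    Avatar* avatar = &list[0]; // points at the first element's current address

    // Adding enough objects forces a reallocation; the old storage is freed
    // and 'avatar' silently becomes a dangling pointer.
    for (int i = 0; i < 1000; i++)
        list.push_back(Avatar{});

    std::printf("dangling? %s\n", avatar == &list[0] ? "no" : "yes");
    // Dereferencing 'avatar' here would be undefined behavior.
}
```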
Okay, fine. I'll use an ID instead.

    template<typename Type> struct Ref
    {
        short id;
        inline Type* ref() { return &Type::list[id]; }
        // overloaded "=" operator omitted
    };

    Ref<Avatar> avatar = &Avatar::list[0];
    avatar.ref()->frobnicate();

Second problem: if I remove that Avatar from the list, some other Avatar will get moved into its place without me knowing. The program will continue, blissfully and silently screwing things up, until some player sends a bug report that the game is "acting weird". I much prefer the program to explode instantly so I at least get a crash dump with a line number.

Okay, fine. Instead of actually removing the avatar, I'll put a revision number on it:

    struct Avatar
    {
        short revision;
    };

    template<typename Type> struct Ref
    {
        short id;
        short revision;
        inline Type* ref()
        {
            Type* t = &Type::list[id];
            return t->revision == revision ? t : nullptr;
        }
    };

Instead of actually deleting the avatar, I'll mark it dead and increment the revision number. Now anything trying to access it will give a null pointer exception. And serializing a reference across the network is just a matter of sending two easily verifiable numbers.

Problem #3: delta compression

If I had to cut this article down to one line, it would just be a link to Glenn Fiedler's blog. Which by the way is here: gafferongames.com

As I set out to implement my own version of Glenn's netcode, I read this article, which details one of the biggest challenges of multiplayer games. Namely, if you just blast the entire world state across the network 60 times a second, you could gobble up 17 mbps of bandwidth. Per client.

Delta compression is one of the best ways to cut down bandwidth usage. If a client already knows where an object is, and it hasn't moved, then I don't need to send its position again. This can be tricky to get right. The first part is the trickiest: does the client really know where the object is? Just because I sent the position doesn't mean the client actually received it. The client might send an acknowledgement back that says "hey I received packet #218, but that was 0.5 seconds ago and I haven't gotten anything since." So to send a new packet to that client, I have to remember what the world looked like when I sent out packet #218, and delta compress the new packet against that. Another client might have received everything up to packet #224, so I can delta compress the new packet differently for them. Point is, we need to store a whole bunch of separate copies of the entire world.

Someone on Reddit asked "isn't that a huge memory hog"? No, it is not. Actually I store 255 world copies in memory. All in a single giant array. Not only that, but each copy has enough room for the maximum number of objects (2048) even if only 2 objects are active.

If you store an object's state as a position and orientation, that's 7 floats. 3 for XYZ coordinates and 4 for a quaternion. Each float takes 4 bytes. My game supports up to 2048 objects.

    7 floats * 4 bytes * 2048 objects * 255 copies = ... 14 MB

That's like, half of one texture these days.

I can see myself writing this system five years ago in C#. I would start off immediately worried about memory usage, just like that Redditor, without stopping to think about the actual data involved. I would write some unnecessary, crazy fancy, bug-ridden compression system. Taking a second to stop and think about actual data like this is called Data-Oriented Design. When I talk to people about DOD, many immediately say, "Woah, that's really low-level.
I guess you want to wring out every last bit of performance. I don't have time for that. Anyway, my code runs fine." Let's break down the assumptions in this statement. Assumption 1: "That's really low-level". Look, I multiplied four numbers together. It's not rocket science. Assumption 2: "You sacrifice readability and simplicity for performance." Let's picture two different solutions to this netcode problem. For clarity, let's pretend we only need 3 world copies, each containing up to 2 objects. Here's the solution I just described. Everything is statically allocated in the .bss segment. It never moves around. Everything is the same size. No pointers at all. Here's the idiomatic C# solution. Everything is scattered randomly throughout the heap. Things can get reallocated or moved right in the middle of a frame. The array is jagged. 64-bit pointers all over the place. Which is simpler? The second diagram is actually far from exhaustive. C#-land is a lot more complex in reality. Check the comments and you'll probably find someone correcting me about how C# actually works. But that's my point. With my solution, I can easily construct a "good enough" mental model to understand what's actually happening on the machine. I've barely scratched the surface with the C# solution. I have no idea how it will behave at runtime. Assumption 3: "Performance is the only reason you would code like this." To me, performance is a nice side benefit of data-oriented design. The main benefit is clarity of thought. Five years ago, when I sat down to solve a problem, my first thought was not about the problem itself, but how to shoehorn it into classes and interfaces. I witnessed this analysis paralysis first-hand at a game jam recently. My friend got stuck designing a grid for a 2048-like game. He couldn't figure out if each number was an object, or if each grid cell was an object, or both. I said, "the grid is an array of numbers. Each operation is a function that mutates the grid." Suddenly everything became crystal clear to him. Assumption 4: "My code runs fine". Again, performance is not the main concern, but it's important. The whole world switched from Firefox to Chrome because of it. Try this experiment: open up calc.exe. Now copy a 100 MB file from one folder to another. I don't know what calc.exe is doing during that 300ms eternity, but you can draw your own conclusions from my two minutes of research: calc.exe actually launches a process called Calculator.exe, and one of the command line arguments is called "-ServerName". Does calc.exe "run fine"? Did throwing a server in simplify things at all, or is it just slower and more complex? I don't want to get side-tracked. The point is, I want to think about the actual problem and the data involved, not about classes and interfaces. Most of the arguments against this mindset amount to "it's different than what I know". Problem #4: lag I now hand-wave us through to the part of the story where the netcode is somewhat operational. Right off the bat I ran into problems dealing with network lag. Games need to respond to players immediately, even if it takes 150ms to get a packet from the server. Projectiles were particularly useless under laggy network conditions. They were impossible to aim. I decided to re-use those 14 MB of world copies. When the server receives a command to fire a projectile, it steps the world back 150ms to the way the world appeared to the player when they hit the fire button. 
Then it simulates the projectile and steps the world forward until it's up to date with the present. That's where it creates the projectile. I ended up having the client create a fake projectile immediately, then as soon as it hears back from the server that the projectile was created, it deletes the fake and replaces it with the real thing. If all goes well, they should be in the same place due to the server's timey-wimey magic. Here it is in action. The fake projectile appears immediately but goes right through the wall. The server receives the message and fast-forwards the projectile straight to the part where it hits the wall. 150ms later the client gets the packet and sees the impact particle effect. The problem with netcode is, each mechanic requires a different approach to lag compensation. For example, my game has an "active armor" ability. If players react quick enough, they can reflect damage back at enemies. This breaks down in high lag scenarios. By the time the player sees the projectile hitting their character, the server has already registered the hit 100ms ago. The packet just hasn't made it to the client yet. This means you have to anticipate incoming damage and react long before it hits. Notice in the gif above how early I had to hit the button. To correct this, the server implements something I call "damage buffering". Instead of applying damage instantly, the server puts the damage into a buffer for 100ms, or whatever the round-trip time is to the client. At the end of that time, it either applies the damage, or if the player reacted, reflects it back. Here it is in action. You can see the 200ms delay between the projectile hitting me and the damage actually being applied. Here's another example. In my game, players can launch themselves at enemies. Enemies die instantly to perfect shots, but they deflect glancing blows and send you flying like this: Which direction should the player bounce? The client has to simulate the bounce before the server knows about it. The server and client need to agree which direction to bounce or they'll get out of sync, and they have no time to communicate beforehand. At first I tried quantizing the collision vector so that there were only six possible directions. This made it more likely that the client and server would choose the same direction, but it didn't guarantee anything. Finally I implemented another buffer system. Both client and server, when they detect a hit, enter a "buffer" state where the player sits and waits for the remote host to confirm the hit. To minimize jankiness, the server always defers to the client as to which direction to bounce. If the client never acknowledges the hit, the server acts like nothing happened and continues the player on their original course, fast-forwarding them to make up for the time they sat still waiting for confirmation. Problem #5: jitter My server sends out packets 60 times per second. What about players whose computers run faster than that? They'll see jittery animation. Interpolation is the industry-standard solution. Instead of immediately applying position data received from the server, you buffer it a little bit, then you blend smoothly between whatever data that you have. In my previous attempt at networked multiplayer, I tried to have each object keep track of its position data and smooth itself out. I ended up getting confused and it never worked well. 
This time, since I could already easily store the entire world state in a struct, I was able to write just two functions to make it work. One function takes two world states and blends them together. Another function takes a world state and applies it to the game. How big should the buffer delay be? I originally used a constant until I watched a video from the Overwatch devs where they mention adaptive interpolation delay. The buffer delay should smooth out not only the framerate from the server, but also any variance in packet delivery time. This was an easy win. Clients start out with a short interpolation delay, and any time they're missing a packet to interpolate toward, they increase their "lag score". Once it crosses a certain threshold, they tell the server to switch them to a higher interpolation delay. Of course, automated systems like this often act against the user's wishes, so it's important to add switches and knobs to the algorithm! Problem #6: joining servers mid-match Wait, I already have a way to serialize the entire game state. What's the hold up? Turns out, it takes more than one packet to serialize a fresh game state from scratch. And each packet may take multiple attempts to make it to the client. It may take a few hundred milliseconds to get the full state, and as we've seen already, that's an eternity. If the game is already in progress, that's enough time to send 20 packets' worth of new messages, which the client is not ready to process because it hasn't loaded yet. The solution is—you guessed it—another buffer. I changed the messaging system to support two separate streams of messages in the same packet. The first stream contains the map data, which is processed as soon as it comes in. The second stream is just the usual fire-hose of game messages that come in while the client is loading. The client buffers these messages until it's done loading, then processes them all until it's caught up. Problem #7: cross-cutting concerns This next part may be the most controversial. Remember that bit of gamedev wisdom from the beginning? "don't add networked multiplayer to an existing game"? Well, most of the netcode in this game is literally tacked on. It lives in its own 5000-line source file. It reaches into the game, pokes stuff into memory, and the game renders it. Just listen a second before stoning me. Is it better to group all network code in one place, or spread it out inside each game object? I think both approaches have advantages and disadvantages. In fact, I use both approaches in different parts of the game, for various reasons human and technical. But some design paradigms (*cough* OOP) leave no room for you to make this decision. Of course you put the netcode inside the object! Its data is private, so you'll have to write an interface to access it anyway. Might as well put all the smarts in there too. Conclusion I'm not saying you should write netcode like I do; only that this approach has worked for me so far. Read the code and judge for yourself. There is an objectively optimal approach for each use case, although people may disagree on which one it is. You should be free to choose based on actual constraints rather than arbitrary ones set forth by some paradigm. Thanks for reading. DECEIVER is launching on Kickstarter soon. Sign up to play the demo here!
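As a rough illustration of two ideas from the article above, here is a sketch of a statically sized array of world snapshots (Problem #3) plus a blend function of the kind described under Problem #5. The struct layout, names, and sizes are assumptions for the example, not DECEIVER's actual code:

```cpp
// Hypothetical sketch of the snapshot + interpolation idea described above.
// The state layout and names are illustrative assumptions only.
const int MAX_OBJECTS = 2048;
const int MAX_SNAPSHOTS = 255;

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

struct ObjectState
{
    bool active;
    Vec3 position;
    Quat rotation;
};

struct WorldState
{
    ObjectState objects[MAX_OBJECTS];
};

// One flat, statically allocated array:
// roughly 7 floats * 4 bytes * 2048 objects * 255 copies, about 14 MB.
WorldState snapshots[MAX_SNAPSHOTS];

Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// "One function takes two world states and blends them together."
// Rotation blending would normally be an nlerp/slerp; kept as a placeholder here.
void blend(const WorldState& a, const WorldState& b, float t, WorldState* out)
{
    for (int i = 0; i < MAX_OBJECTS; i++)
    {
        out->objects[i].active   = b.objects[i].active;
        out->objects[i].position = lerp(a.objects[i].position, b.objects[i].position, t);
        out->objects[i].rotation = b.objects[i].rotation; // placeholder for a proper slerp
    }
}
```

A second function, as the article says, would then apply the blended WorldState to the live game objects.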
  2. 8 points
    On the 2nd of November 2017 we launched a Kickstarter campaign for our game Nimbatus - The Space Drone Constructor, which aimed to raise $20,000. By the campaign's end, 3000 backers had supported us with a total of $74,478. All the PR and marketing was handled by our indie developer team of four people with a very low marketing budget. Our team decided to go for a funding goal we were sure we could reach and extend the game's content through stretch goals. The main goal of the campaign was to raise awareness for the game and raise funds for the alpha version.

Part 1 - Before Launch

"You need an engaged community before you launch" - that is what we believed when we launched our first Kickstarter campaign in 2016. For this first campaign, we had built up a very dedicated group of people before the Kickstarter's launch. Nimbatus also had a bit of a following before the campaign launched:

~ 300 likes on Facebook
~ 1300 followers on Twitter
~ 1000 newsletter subs
~ 3500 followers on Steam

However, there had been little interaction between players and us prior to the campaign's launch. This made us unsure whether or not the Nimbatus Kickstarter would reach its funding goal. A few weeks prior to launch, we started to look for potential ways to promote Nimbatus during the Kickstarter. We found our answer in social news sites. Reddit, Imgur and 9gag all proved to be great places to talk about Nimbatus. More about this in Part 3 - During the campaign.

As with our previous campaign, the reward structure and trailer were the most time-consuming aspects of the page setup. We realised early that Nimbatus looks A LOT better in motion and therefore decided that we should show all features in action with animated GIFs. Two examples:

In order to support the campaign's storytelling, "we built a ship, now we need a crew!", we named all reward tiers after open positions on the ship. We were especially interested in how the "Navigator" tier would do. This $95 tier would give backers free digital copies of ALL games our company EVER creates. We decided against Early Bird and Kickstarter exclusive rewards in order to avoid splitting backers into "winners and losers", based on the great advice from Stonemaier Games' book A Crowdfunder's Strategy Guide (EDS Publications Ltd., 2015). Their insights also convinced us to add a $1 reward tier because it lets people join the update loop to build up trust in our efforts. Many of our $1 backers later increased their pledge to a higher tier.

Two of our reward tiers featured games that are similar to Nimbatus. The keys for these games were provided by fellow developers. We think that this is really awesome and it helped the campaign a lot! A huge thanks to Avorion, Reassembly, Airships and Scrap Galaxy <3

Youtubers and streamers are important allies for game developers. They are in direct contact with potential buyers/backers and can significantly increase a campaign's reach. We made a list of content creators who'd potentially be interested in our game. They were selected mostly by browsing Youtube for "let's play" videos of games similar to Nimbatus. We sent out a total of 100 emails, each with a personalized intro sentence, no money involved. Additionally, we used Keymailer. Keymailer is a tool to contact Youtubers and streamers. At a cost of $150/month you can filter all available contacts by games they played and genres they enjoy. We personalized the message for each group. Messages automatically include an individual Steam key.
With this tool, we contacted over 2000 Youtubers/Streamers who are interested in similar games. How it turned out:

- About 10 of the 100 Youtubers we contacted manually ended up creating a video/stream during the Kickstarter, including some big ones with 1 million+ subscribers.
- Over 150 videos resulted from the Keymailer outreach. Absolutely worth the investment!

Another very helpful tool to find Youtubers/Streamers is Twitter. Before, but also during the campaign, we sent out tweets stating that we are looking for Youtubers/Streamers who want to feature Nimbatus. We also encouraged people to tag potentially interested content creators in the comments. This brought in a lot of interested people and resulted in a couple dozen videos. We also used Twitter to follow up when people were not responding via email, which proved to be very effective.

In terms of campaign length we decided to go with a 34-day Kickstarter. The main reason was that we thought it would take quite a while until word of the campaign spread enough. In retrospect this was OK, but we think 30 days would have been enough too.

We were very unsure whether or not to release a demo of Nimbatus, mainly because we were unsure if the game offered enough to convince players in this early state, and we feared that our alpha access tier would potentially lose value because everyone could play already. Thankfully we decided to offer a demo in the end. More on this topic in Part 3 - During the campaign.

Since we are based in Switzerland, we were forced to use CHF as our campaign's currency. And while the currency is automatically re-calculated into $ for American backers, it was displayed in CHF for all other international backers. Even though CHF and $ are almost 1:1 in value, we believed this to be a hurdle. There is no way for us to tell how many backers were scared away because of this in the end.

Part 2: Kickstarter Launch

We launched our Kickstarter campaign on a Thursday evening (UTC + 1), which is midday in the US. In order to celebrate the launch, we did a short livestream on Facebook. We had previously opened an event page and invited all our Facebook friends to it. Only a few people were watching and we were a bit stressed out. In order to help us spread the word we challenged our supporters with community goals. We promised that if all these goals were reached, each backer above $14 would receive an extra copy of Nimbatus. With most of the goals reached after the first week, we realized that we should have made the challenge a bit harder.

The first few days went better than expected. We announced the Kickstarter on Imgur, Reddit, 9gag, Instagram, Facebook, Twitter, in some forums, via our Newsletter and on our Steam page. If you plan to release your game on Steam later on, we'd highly recommend that you set up your Steam page before the Kickstarter launches. Some people might not be interested in backing the game but will go ahead and wishlist it instead.

Part 3: During The Campaign

We tried to keep the campaign's momentum going. This worked out mostly thanks to the demo we had released. In order to download the Nimbatus demo, people needed to head over to our website and enter their email address. Within a few minutes, they received an automated email including a download link for the demo. We used Mailchimp for this process. We also added a big pop-up in the demo to inform players about the Kickstarter. At first we were a bit reluctant to use this approach; it felt a bit sneaky.
But after adding a line informing players they would be added to the newsletter and adding a huge unsubscribe button in the demo download mail, we felt that we could still sleep at night.

For our previous campaign we had also released a demo, but the approach was significantly different. For the Nimbatus Kickstarter, we used the demo as a marketing tool to inform people about the campaign. Our previous Kickstarter's demo was mainly an asset you could download if you were already checking out the campaign's page and wanted to try the game before backing.

We continued to frequently post on Imgur, Twitter, 9Gag and Facebook. Simultaneously, people streamed Nimbatus on Twitch and released videos on Youtube. This led to a lot of demo downloads and therefore growth of our newsletter. A few hundred subs came in every day. Only about 10% of the people unsubscribed from the newsletter after downloading the demo. Whenever we updated the demo or reached significant milestones in the campaign, such as being halfway to our goal, we sent out a newsletter. We also opened a Discord channel, which turned out to be a great way to stay in touch with our players.

We were quite surprised to see a decent opening and link click rate, especially if you compare this to our "normal" newsletter, which includes mostly people we personally met at events. Our normal newsletter took over two years to build up and includes about 4000 subs. With the Nimbatus demo, we gathered 50'000 subs within just 4 weeks and without travelling to any conferences. (Please note that around 2500 people subscribed to the normal newsletter during the Kickstarter.)

On the 7th day of the campaign we asked a friend if she would give us a shoutout on Reddit. She agreed and posted it in r/gaming. We will never forget what happened next. The post absolutely took off! In less than an hour, the post had reached the frontpage and continued to climb fast. It soon reached the top spot of all things on Reddit. Our team danced around in the office. Lots of people backed; a total of over $5000 came in from this post, and we reached our funding goal 30 minutes after hitting the front page. We couldn't believe our luck.

Then, people started to accuse us of using bots to upvote the post. Our post was reported multiple times until the moderators took the post down. We were shocked and contacted them. They explained that they would need to investigate the post for bot abuse. A few hours later, they put the post back up, said they had found nothing wrong with it, and apologized for the inconvenience. Since the post had not received any upvotes during the hours it was taken down, it very quickly dropped off the front page and the money flow stopped. While this is a misunderstanding we can understand and accept, people's reactions hit us pretty hard. After the post was back up, many people on Reddit continued to accuse us and our friend. In the following days, our friend was constantly harassed when she posted on Reddit. Some people jumped over to our company's Twitter and Imgur accounts and kept on blaming us, asking if we were buying upvotes there too. It's really not cool to falsely accuse people.

Almost two weeks later we decided to start posting in smaller subreddits again. This proved to be no problem. But when we dared to do another post in r/gaming later, people immediately reacted very aggressively. We took the new post down and decided to stop posting in r/gaming (at least during the Kickstarter).
After upgrading the demo with a new feature to easily export GIFs, we started to run competitions on Twitter. The coolest drones that were shared with #NimbatusGame would receive a free Alpha key for the game. Lots of players participated and helped to increase Nimbatus' reach by doing so. We also gave keys to our most dedicated Youtubers/streamers who then came up with all kinds of interesting challenges for their viewers.

All these activities came together in a nice loop: People downloaded the Nimbatus demo they heard about on social media/social news sites or from Youtubers/Streamers. By receiving newsletters and playing the demo they learned about the Kickstarter. Many of them backed and participated in community goals/competitions, which brought in more new people.

Not much happened in terms of press. RockPaperShotgun and PCGamer wrote articles, both resulting in about $500, which was nice. A handful of small sites picked up the news too. We sent out a press release when Nimbatus reached its funding goal, both to manually picked editors of bigger sites and via gamespress.com.

Part 4: Last Days

Every person that hit the "Remind me" button on a Kickstarter page receives an email 48 hours before a campaign ends. This helpful reminder caused a flood of new pledges. We reached our last stretch goal a few hours before our campaign ended. Since we had already communicated this goal as the final one, we withheld announcing any further stretch goals. We decided to do a Thunderclap 24 hours before the campaign ended. Even after having done quite a few Thunderclaps, we are still unsure how big of an impact they have. A few minutes before the Kickstarter campaign was over we cleaned up our campaign page and added links to our Steam page and website. Note that Kickstarter pages cannot be edited after the campaign ends! The campaign ended on a Tuesday evening (UTC + 1) and raised a total of $75'000, which is 369% of the original funding goal. After finishing up our "Thank you" image and sending it to our backers it was time to rest.

Part 5: Conclusion

We are very happy with the campaign's results. Surpassing our funding goal by such a margin was unexpected, given that we didn't have an engaged community when the campaign started. Thanks to the demo we were able to develop a community for Nimbatus on the go. The demo also allowed us to be less "promoty" when posting on social news sites. This way, interested people could get the demo and discover the Kickstarter from there instead of us having to ask for support directly when posting. This, combined with the ever-growing newsletter, turned into a great campaign dynamic. We plan to use this approach again for future campaigns.

Growth
300 ------------------> 430 Facebook likes
1300 -----------------> 2120 Twitter followers
1000 -----------------> 50'000 Newsletter signups
3500 -----------------> 10'000 Followers on Steam
0 ---------------------> 320 Readers of subreddit
0 ---------------------> 468 People on Discord
0 ---------------------> 300 Members in our forum

More data
23% of our backers came directly from Kickstarter. 76% of our backers came from external sites. For our previous campaign it was 36/64. The average pledge amount of our backers was $26. 94 backers decided to choose the Navigator reward, which gives them access to all games our studio will create in the future. It makes us very happy to see that this kind of reward, which is basically an investment in us as a game company, was popular among backers.
Main sources of backers
Link inside demo / Newsletter 22'000
Kickstarter 17'000
Youtube 15'000
Google 3000
Reddit 2500
Twitter 2000
Facebook 2000

TLDR:
- Keymailer is awesome, but also contact big Youtubers/streamers via email.
- Most money for the Kickstarter came in through the demo.
- Social news sites (Imgur, 9Gag, Reddit, …) can generate a lot of attention for a game.
- It's much easier to offer a demo on social news sites than to ask for Kickstarter support.
- Collecting newsletter subs from demo downloads is very effective.
- It's possible to run a successful Kickstarter without having a big community beforehand.

We hope this insight helps you plan your future Kickstarter campaign. We believe you can do it and we wish you all the best.

About the author: Philomena Schwab is a game designer from Zurich, Switzerland. She co-founded Stray Fawn Studio together with Micha Stettler. The indie game studio recently released its first game, Niche - a genetics survival game, and is now developing its second game, Nimbatus - The Space Drone Constructor. Philomena wrote her master thesis about community building for indie game developers and founded the nature gamedev collective Playful Oasis. As a chair member of the Swiss Game Developers association she helps her local game industry grow.

https://www.nimbatus.ch/
https://strayfawnstudio.com/
https://www.kickstarter.com/projects/strayfawnstudio/nimbatus-the-space-drone-constructor

Related Reading: Algo-Bot: Lessons Learned from our Kickstarter failure.
  3. 7 points
    I've got to say, I have rarely seen such a defeatist attitude from a person. It actually makes me angry. Nowadays the tools that developers have at their disposal are numerous and quite efficient, especially for making quick prototypes. If I had a design idea that I was sure was revolutionary, and assuming I didn't have enough funds to hire a programmer, I would invest 6 months to a year to learn some programming myself, even if I absolutely hated it, whip out a working prototype in Unity that showcased the amazing abilities of my design, blow everyone away, and watch the $$$ come from Kickstarter in order to make the real thing. Then come back here and rub it in our ignorant, amateur faces. Yet here you are, for years on end, shouting at nobody in particular, harping on and on about how you don't get the proper respect, as if that matters in any way. Go. Build. Your. Thing.
  4. 7 points
    We get it. You're an auteur. Ok. However, great designers produce results.
  5. 6 points
    They're not distinct or mutually exclusive. It's not just one or the other in different parts of the code; you can use both at the same time! Also, a lot of the Anti-OOP rants that you see on the internet are from people who have unfortunately had to work on badly written OOP projects and think that OOP=inheritance (when inheritance isn't even present in OOP's formal description, and any decent OOP practitioner will tell you that a core tenet of OOP is the preference for composition over inheritance).

The purpose of OOP is to develop useful abstractions that reduce the surface area of your components to the minimum necessary, allowing million-line-of-code projects to remain maintainable as a collection of decoupled components. OOP is a 'paradigm' in that it provides a bunch of tools and then defines a structure using those tools. The purpose of DOD is to analyse the flow of data that's actually required by your results, and then structure your intermediate and source data structures in a way that results in the simplest data transformations. The goal is to produce simpler software that performs well on real hardware. If using OOP, this analysis tells you how you should structure your classes. I wouldn't call DOD a 'paradigm' like OOP; it's more of a methodology, which can be applied to almost any programming paradigm. You should apply both if you're making a large game that you want to be able to maintain, but also want good performance and simple algorithms.

Because OOP is about components and their interfaces (while hiding their implementations), you're free to do whatever you want within the implementation of a component too. Also, once an OOP component is created, it can be used by other OOP components, or can be used by procedural-programming algorithms, or functional-programming algorithms, etc... A common trend in my recent experience is actually to merge functional-programming techniques with OOP too. Components in OOP are typically mutable (e.g. some core OOP rules are actually defined in terms of how the component's state changes/mutates...) but functional code typically works with immutable components and pure functions, which IMHO leads to better maintainability as the code is easier to reason about. It's also common to use procedural programming to define the 'script' of high-level loops, like the rendering flow or the game update loop. One reason for C++'s popularity in games is that it is happy to let you write in this mixture of procedural, pure functional, object oriented and data oriented paradigms.

[edit] I'd highly recommend reading Albrecht's Pitfalls of Object Oriented Programming ( https://drive.google.com/open?id=1SbmKU0Ev9UjyIpNMOQu7aEMhZBifZkw6 ), which could be interpreted as being Anti-OOP, but I personally interpret it as anti-naive-OOP and a practical guide on applying DOD to existing code that already works but has been badly designed regarding performance. The design he ends up with could still have an OOP interface on top of it, but strays from the common OOP idea of associating algorithms with just a single object.
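As a rough illustration of that last point, here is a small sketch (purely hypothetical, not from any cited codebase) of a data-oriented structure-of-arrays layout sitting behind an ordinary class interface:

```cpp
// Hypothetical sketch: a struct-of-arrays (data-oriented) layout hidden behind
// a small OOP-style interface. Names and behavior are invented for the example.
#include <vector>

class ParticleSystem
{
public:
    int spawn(float x, float y)
    {
        pos_x.push_back(x); pos_y.push_back(y);
        vel_x.push_back(0.0f); vel_y.push_back(0.0f);
        return (int)pos_x.size() - 1;
    }

    // The hot loop walks each tightly packed array in order: the data
    // transformation stays simple and cache-friendly behind the interface.
    void update(float dt)
    {
        for (size_t i = 0; i < pos_x.size(); i++) pos_x[i] += vel_x[i] * dt;
        for (size_t i = 0; i < pos_y.size(); i++) pos_y[i] += vel_y[i] * dt;
    }

private:
    // Structure-of-arrays instead of an array of Particle objects.
    std::vector<float> pos_x, pos_y, vel_x, vel_y;
};
```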
  6. 6 points
    Correct about std::atomic. Volatile doesn't do what most people think it does. The volatile keyword does drastically different things in different languages. Java and C# use "volatile" to indicate objects that follow certain locking patterns, very similar to std::atomic<> in modern C++. C and C++ use "volatile" to indicate that the optimizer should not make assumptions about the object's side effects; the object should be treated exactly as the language abstract machine specifies.

In C and C++ volatile specifically means "do not optimize this value". On certain hardware and certain compilers and certain data types this can mean that memory barriers and cache coherency and other rules are followed, but they are not guaranteed and are not universal. Note in particular that using volatile like this eliminates many optimizations. The compiler is required to load from and store to main memory instead of keeping the value in a register. This can mean that instead of an access requiring a fraction of a nanosecond, or even being removed completely, the compiler is required to work with the memory directly, which can take tens or hundreds of nanoseconds. Because so many programs on Windows incorrectly rely on the behavior, most compilers on the platform will perform the extra work if you're using an atomic hardware type like an int or char or long, but they won't automatically do more. That doesn't make the behavior right; it means that the bug of incorrectly using "volatile" when you mean "atomic" is so pervasive they added additional performance-harming behavior around the buggy usage.

The C and C++ version of volatile is sometimes erroneously mentioned regarding multithreading because of history. Without the keyword the compiler would see that no code actually writes to the memory, so it will assume the value never changes; the code can reuse the value as a compile-time constant without ever reading or writing to memory. With the keyword the compiler is required to read or write the value directly to memory every time it is encountered. This meant volatile was perfect for specialized hardware where memory was shared between hardware or software systems. Since early multiprocessing systems would use this for sharing memory, they used volatile as the quick-and-dirty way to manipulate the memory when it was shared, and would cast between volatile and non-volatile versions in ways that worked correctly on that specific system. On the PC that environment hasn't existed for over two decades. Use the proper atomic values because the compiler can optimize them in amazing ways. Don't use volatile for multithreading, since it is almost certainly not doing what you expect.
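A minimal sketch of the distinction described above, using a cross-thread stop flag and counter; the names are invented for the example:

```cpp
// Sketch: std::atomic gives well-defined cross-thread behavior; a plain or
// volatile bool/int here would be a data race, since volatile adds no
// atomicity or ordering guarantees in C++.
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> stop_requested{false};
std::atomic<int>  work_done{0};

void worker()
{
    while (!stop_requested.load(std::memory_order_relaxed))
        work_done.fetch_add(1, std::memory_order_relaxed); // atomic read-modify-write
}

int main()
{
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    stop_requested.store(true); // sequentially consistent by default
    t.join();
}
```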
  7. 6 points
    Ya know what? I've been on a bit of a roll debunking ridiculous claims here, so we'll do one more. I've seen you claim repeatedly that the video game industry does not like "rockstar" developers, and actively tries to keep them out. Ridiculous. Miyamoto Shigeru. Hideo Kojima. John Romero. Sid Meier. Roberta Williams. Chris Taylor. Markus 'Notch' Persson. Edmund McMillen. Jonathan Blow. Will Wright. These are household names in the industry, and I just rattled off ten without even trying. More immediately leap to mind. We write books about our designers. We make movies about them. We listen to them talk at conferences, read their blogs, follow their social accounts. People buy games just because they like the designer's body of work. The industry has no problem whatsoever with so-called 'rockstar designers', and loves to put them on a pedestal, put their name on things, and follow their every word and action. You have also claimed that you are probably the origin of the term. Again, ridiculous. I can assure you that no one thinks a random guy they've never heard of with three design credits for not particularly popular games is a 'rockstar'. The people I listed above are not 'rockstars' because someone recognised their untapped genius, or because of their methodology, but rather because they have an established body of released products that's successful and widely loved.
  8. 6 points
    Tabletop designers with a proven track record get hired all the time. Let's be honest about your verifiable experience. You were one of 40 people to receive design credit for one tabletop product. You claim to have made a video game mod, but the only references to it online are from you, so it obviously wasn't all that popular. You have design credit for one videogame, which was generally regarded as being pretty but poorly designed. That's three credits in total, one of which is very questionable. If reference checks were favourable and you interviewed well, this could probably get you an entry-level position, but it sounds very much like you have no interest in entry-level positions, and I certainly can't see how you could possibly interview well based on your communications online. No one in any industry hires a random guy just because he says he's top of the field - generally if someone is genuinely the top of the field you will have heard of them and they'll have an impressive list of credits and released products.
  9. 6 points
    I'd definitely recommend starting with D3D11. IMHO it really is the best all-around graphics API. All the concepts that you learn in pretty much any GPU API will translate to every other API, so learning the "wrong one" is not a waste of time. GL would be my second choice, and then Vulkan/D3D12 in tied third place. My main points would be something like:

                     |D3D9 |D3D11|D3D12 |Vulkan| GL  |
Easily draw a cube   | Yes | No  | No   | No   | Yes |
Validation Layer     | No  | Yes | Yes  | Yes  | No* |
Validated Drivers    | MS  | MS  | MS   | Open | No  |
Legacy APIs mixed in | Yes | No  | No   | No   | Yes |
Vendor extensions    | No^ | No^ | No^  | Yes  | Yes |
CPU/GPU concurrency  |Auto |Auto |Manual|Manual|Auto |
Can crash the GPU    | No  | No  | YES  | YES  | No* |
HLSL                 | Yes | Yes | Yes  | Yes# | No$ |
GLSL                 | No$ | No$ | No$  | Yes  | Yes |
SPIR-V               | No  | No$ | No$  | Yes  | No* |
Windows              | Yes | Yes | Yes  | Yes  | Yes |
Linux                | No$ | No  | No   | Yes  | Yes |
MacOS                | No$ | No  | No   | No$  | Yes@|

* = available with vendor extensions
^ = not officially, but vendors hacked them in anyway
# = work in progress support
$ = DIY/Open Source/Middleware can get you there...
@ = always a version of the spec that's 5 years old...

D3D10 is useless now -- D3D11 lets you support D3D10-era hardware and do all the same things -- so we'll ignore it.

The one good thing with ancient APIs (e.g. GL v1.1, D3D9) is that very simple apps are very simple. In comparison, modern APIs make you do a lot of legwork to even get started. When I was starting out, writing simple GL apps with glBegin, glVertex, etc, was great fun. If you came across any readable tutorials or books for these old API versions, they could still be a fun learning exercise.

Having a validation layer built into the API is really useful for catching your incorrect code. Of course you want to check all of your function calls for errors, but having the debugger halt execution and a several-sentence-long error message appear describing your coding mistake is invaluable. D3D does a great job here. D3D9 used to have a validation layer but MS has broken it on modern Windows (got a WinXP machine handy?)

GL 2/3/4 tries to clean up its API every version and officially throws out all the old ways of doing things... but unofficially, all the old ways of doing things still hang around (except on Mac!), making it possible to end up with a horrible mixture of three different APIs. It can also make tutorials a bit suspect when you're not quite sure if you're learning core features from the version you want or not. D3D9 also suffers from this, with it supporting both an ancient-style fixed-function drawing API and a modern shader-based drawing API...

Vendor extensions are great -- they allow you to access the latest features of every GPU before those features become standard, but for a beginner they just add confusion. D3D made the choice of banning them. They're actually still there, but you have to download some extra vendor-specific SDKs to hack around the official D3D restrictions.

D3D12 and Vulkan code has to be perfect. If you've got any mistakes in it, you could straight up crash your GPU. This isn't too bad, as Windows will just turn it off and on again... but it can be a nightmare to debug these things. That doesn't make for a good learning environment. This would make them unusable, except that they've got the great validation layers to help guide you!

D3D9/D3D11/GL present an abstraction where it looks like your code is running in serial with the GPU -- i.e. you say to draw something, then the GPU draws it immediately. In reality, the GPU is often buffering up several frames of commands and acting on them asynchronously (in order to achieve better throughput); however, D3D11/GL do a great job of hiding all the awful details that make this possible. This makes them much easier to use. In D3D12/Vulkan, it's your job to implement this yourself. To do that, you need to be competent at multi-threaded programming, because you're trying to schedule two independent processors and keep them both busy without either ever stalling/locking the other one. If you mess this up, you can either instantly halve your performance, or worse, introduce memory corruption bugs that only occur sporadically and seem impossible to fix.

D3D is a middle layer built by Microsoft -- there's your app, then the D3D runtime, then your D3D driver (Intel/NVidia/AMD's code). Microsoft validates that the runtime is correct and that the drivers are interacting with it properly. Finding out that your code runs differently on different GPUs is exceedingly rare. GL is the wild west -- your app talks directly to the GL driver (Intel/NVidia/AMD's code), and there's no authority to make sure that they're implementing GL correctly. Finding out that your code runs differently on different GPUs is common. Vulkan is much better -- your app still talks directly to the Vulkan driver (Intel/NVidia/AMD's code), but there's an open source suite of tests that make sure that they're implementing Vulkan correctly, and the common validation layer written by Khronos.

For shading languages, GLSL and HLSL are both valid, but I just have a personal preference for HLSL. There's also a lot of open source projects aimed at converting from HLSL->GLSL, but not as many for GLSL->HLSL.

Also note, the above choices are valid for desktop PCs. For browsers you have to use WebGL. On Android you have to use GL|ES, and on iOS you can use GL|ES or Metal. On Mac you can use Metal too. On game consoles, there's almost always a custom API for each console. If you end up doing graphics programming as a job, you will learn a lot of different APIs!
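As a small, hedged sketch of the validation layer mentioned above: creating a D3D11 device with the debug flag turns on that layer, so incorrect API usage gets reported to the debugger output. Error handling and actual rendering are omitted; this assumes the Windows SDK and linking against d3d11.lib.

```cpp
// Minimal sketch of enabling the D3D11 debug/validation layer.
// Requires the Windows SDK; link with d3d11.lib.
#include <d3d11.h>

int main()
{
    UINT flags = 0;
#if defined(_DEBUG)
    flags |= D3D11_CREATE_DEVICE_DEBUG; // turn on the validation layer in debug builds
#endif

    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL obtained = {};

    HRESULT hr = D3D11CreateDevice(
        nullptr,                  // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr,                  // no software rasterizer module
        flags,
        nullptr, 0,               // default feature levels
        D3D11_SDK_VERSION,
        &device, &obtained, &context);

    if (SUCCEEDED(hr))
    {
        // With the debug layer enabled, misuse of the API is described in the debug output.
        context->Release();
        device->Release();
    }
    return 0;
}
```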
  10. 5 points
    If you've got a large mob of actors all trying to navigate to a single location and your navmesh/grid/whatever has uniform movement costs, you can ditch A* and instead do a single breadth-first flood-fill from the goal location outwards, writing an increasing value each time the fill expands to an unvisited location, then have all actors access the resulting gradient map, which lets them move "downhill" towards the goal location.
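A short sketch of that flood-fill, with the grid layout and "blocked" convention assumed for the example:

```cpp
// Breadth-first flood fill outward from the goal over a uniform-cost grid,
// writing increasing distances as the fill expands. Each actor then just
// steps to the neighboring cell with the lowest value ("downhill").
#include <climits>
#include <queue>
#include <vector>

std::vector<int> build_gradient(const std::vector<bool>& blocked,
                                int width, int height,
                                int goal_x, int goal_y)
{
    std::vector<int> dist(width * height, INT_MAX); // INT_MAX = unvisited/unreachable
    std::queue<int> frontier;

    int goal = goal_y * width + goal_x;
    dist[goal] = 0;
    frontier.push(goal);

    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };

    while (!frontier.empty())
    {
        int cell = frontier.front();
        frontier.pop();
        int cx = cell % width, cy = cell / width;

        for (int d = 0; d < 4; d++)
        {
            int nx = cx + dx[d], ny = cy + dy[d];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            int n = ny * width + nx;
            if (blocked[n] || dist[n] != INT_MAX) continue;
            dist[n] = dist[cell] + 1; // one step further from the goal
            frontier.push(n);
        }
    }
    return dist;
}
```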
  11. 5 points
    Coming up with extra features is easy. The hard part is throwing out the ones that distract from the good parts of the game or only serve to complicate things without making it more fun.
  12. 5 points
    In every commercial project I've been on, there have been multiple chefs in the kitchen. Sometimes there is a singular "lead designer" who technically is responsible for holding onto the singular vision; however, on every project the person who actually keeps the game in check from behind the curtain is not the "lead designer", nor the "creative director" or even the CEO -- it's the producer. Not only do they have the task of keeping all the leads in line with regards to the designer's vision, but they also have to make sure that (A) the project is actually completed, and (B) it's completed on time and within budget. This job can involve telling the designer/director/executive that their vision needs to go back to the chopping block, which technically puts them in charge of it. Some of the most well-received games that I've been involved in have actually been quite free from an authoritarian-designer type, and have had different game mechanics developed by individual staff members who were in charge of their implementation (and a decent producer who could keep such chaos under control). These front-line staff are well suited to perform design tasks because the idea->experiment->refinement iteration loop is extremely short for them (compared to the lead-designer situation, where iteration on ideas based on feedback from implementations can take days). Some of the most famous game designers (e.g. Sid Meier was mentioned in the OP of the other thread as the canonical example) are actually capable of writing their own code and therefore testing out their own design ideas themselves, which gives them a huge leg up over the idea-guys of the world. That's a valuable skill for a designer to have. On larger projects you want to have a hierarchical team of designers, with some on the front lines who can design and iterate on systems themselves within the guidelines passed down from their lead. To introduce a counter-example... One of the seminal first-person-shooter games, Goldeneye 007, is most important due to its multiplayer mode (before Halo had made split-screen-console-FPS a mainstream genre), which was actually snuck in by a few rogue programmers who took it upon themselves to defy management and implement features that they knew would be fun...
  13. 5 points
    Your vertex array contains 4 vertices which have an associated index, indicating the position of the vertex in the vertex array. Your index array contains the indices that make up your geometric primitives which consist of a number of vertices. The topology describes how the index array should be interpreted. For a triangle list, this means that three consecutive indices make up one triangle, the next three consecutive indices, another triangle and so on. In your case, you describe 2 triangles: a triangle constituted by the vertices at index 0, 1 and 2, and a second triangle constituted by the vertices at index 0, 2 and 3. The idea behind the index array is to reuse the same vertices between multiple geometric primitives to reduce memory usage (the index is much smaller than a vertex) and reduce vertex shader executions ("shared" vertices are only processed once).
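For illustration, here is the quad described above written out as data; the vertex format and winding order are arbitrary for the example:

```cpp
// Four unique vertices shared by two triangles via a six-entry index buffer
// (triangle-list topology): indices are consumed three at a time.
struct Vertex { float x, y, z; };

const Vertex vertices[4] =
{
    { -1.0f, -1.0f, 0.0f },  // index 0
    { -1.0f,  1.0f, 0.0f },  // index 1
    {  1.0f,  1.0f, 0.0f },  // index 2
    {  1.0f, -1.0f, 0.0f },  // index 3
};

const unsigned short indices[6] =
{
    0, 1, 2,   // first triangle
    0, 2, 3,   // second triangle (reuses vertices 0 and 2)
};
```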
  14. 5 points
    Hello everyone, This is going to be my first blog here and I would like to start by introducing you to my new project, which I have already been working on for over two weeks. In these two weeks I implemented a joystick system, created a custom map creation tool, added collision detection and pathfinding, programmed a simple quest system, and much more. The game is written in Java with OpenGL. Here are some screenshots and a video: https://www.youtube.com/watch?time_continue=1&v=LqYiXptAIRI
  15. 4 points
    The more you know about a given topic, the more you realize that no one knows anything. For some reason (why God, why?) my topic of choice is game development. Everyone in that field agrees: don't add networked multiplayer to an existing game, you drunken clown. Well, I did it anyway because I hate myself. Somehow it turned out great. None of us know anything. Problem #1: assets My first question was: how do I tell a client to use such-and-such mesh to render an object? Serialize the whole mesh? Nah, they already have it on disk. Send its filename? Nah, that's inefficient and insecure. Okay, just a string identifier then? Fortunately, before I had time to implement any of my own terrible ideas, I watched a talk from Mike Acton where he mentions the danger of "lazy decision-making". One of his points was: strings let you lazily ignore decisions until runtime, when it's too late to fix. If I rename a texture, I don't want to get a bug report from a player with a screenshot like this: I had never thought about how powerful and complex strings are. Half the field of computer science deals with strings and what they can do. They usually require a heap allocation, or something even more complex like ropes and interning. I usually don't bother to limit their length, so a single string expands the possibility space to infinity, destroying whatever limited ability I had to predict runtime behavior. And here I am using these complex beasts to identify objects. Heck, I've even used strings to access object properties. What madness! Long story short, I cultivated a firm conviction to avoid strings where possible. I wrote a pre-processor that outputs header files like this at build time: namespace Asset { namespace Mesh { const int count = 3; const AssetID player = 0; const AssetID enemy = 1; const AssetID projectile = 2; } } So I can reference meshes like this: renderer->mesh = Asset::Mesh::player; If I rename a mesh, the compiler makes it my problem instead of some poor player's problem. That's good! The bad news is, I still have to interact with the file system, which requires the use of strings. The good news is the pre-processor can save the day. const char* Asset::Mesh::filenames[] = { "assets/player.msh", "assets/enemy.msh", "assets/projectile.msh", 0, }; With all this in place, I can easily send assets across the network. They're just numbers! I can even verify them. if (mesh < 0 || mesh >= Asset::Mesh::count) net_error(); // just what are you trying to pull, buddy? Problem #2: object references My next question was: how do I tell a client to please move/delete/frobnicate "that one object from before, you know the one". Once again, I was lucky enough to hear from smart people before I could shoot myself in the foot. From the start, I knew I needed a bunch of lists of different kinds of objects, like this: Array<Turret> Turret::list; Array<Projectile> Projectile::list; Array<Avatar> Avatar::list; Let's say I want to reference the first object in the Avatar list, even without networking, just on our local machine. My first idea is to just use a pointer: Avatar* avatar; avatar = &Avatar::list[0]; This introduces a ton of non-obvious problems. First, I'm compiling for a 64 bit architecture, which means that pointer takes up 8 whole bytes of memory, even though most of it is probably zeroes. And memory is the number one performance bottleneck in games. Second, if I add enough objects to the array, it will get reallocated to a different place in memory, and the pointer will point to garbage. 
Okay, fine. I'll use an ID instead. template<typename Type> struct Ref { short id; inline Type* ref() { return &Type::list[id]; } // overloaded "=" operator omitted }; Ref<Avatar> avatar = &Avatar::list[0]; avatar.ref()->frobnicate(); Second problem: if I remove that Avatar from the list, some other Avatar will get moved into its place without me knowing. The program will continue, blissfully and silently screwing things up, until some player sends a bug report that the game is "acting weird". I much prefer the program to explode instantly so I at least get a crash dump with a line number. Okay, fine. Instead of actually removing the avatar, I'll put a revision number on it: struct Avatar { short revision; }; template<typename Type> struct Ref { short id; short revision; inline Type* ref() { Type* t = &Type::list[id]; return t->revision == revision ? t : nullptr; } }; Instead of actually deleting the avatar, I'll mark it dead and increment the revision number. Now anything trying to access it will give a null pointer exception. And serializing a reference across the network is just a matter of sending two easily verifiable numbers. Problem #3: delta compression If I had to cut this article down to one line, it would just be a link to Glenn Fiedler's blog. Which by the way is here: gafferongames.com As I set out to implement my own version of Glenn's netcode, I read this article, which details one of the biggest challenges of multiplayer games. Namely, if you just blast the entire world state across the network 60 times a second, you could gobble up 17 mbps of bandwidth. Per client. Delta compression is one of the best ways to cut down bandwidth usage. If a client already knows where an object is, and it hasn't moved, then I don't need to send its position again. This can be tricky to get right. The first part is the trickiest: does the client really know where the object is? Just because I sent the position doesn't mean the client actually received it. The client might send an acknowledgement back that says "hey I received packet #218, but that was 0.5 seconds ago and I haven't gotten anything since." So to send a new packet to that client, I have to remember what the world looked like when I sent out packet #218, and delta compress the new packet against that. Another client might have received everything up to packet #224, so I can delta compress the new packet differently for them. Point is, we need to store a whole bunch of separate copies of the entire world. Someone on Reddit asked "isn't that a huge memory hog"? No, it is not. Actually I store 255 world copies in memory. All in a single giant array. Not only that, but each copy has enough room for the maximum number of objects (2048) even if only 2 objects are active. If you store an object's state as a position and orientation, that's 7 floats. 3 for XYZ coordinates and 4 for a quaternion. Each float takes 4 bytes. My game supports up to 2048 objects. 7 floats * 4 bytes * 2048 objects * 255 copies = ... 14 MB. That's like, half of one texture these days. I can see myself writing this system five years ago in C#. I would start off immediately worried about memory usage, just like that Redditor, without stopping to think about the actual data involved. I would write some unnecessary, crazy fancy, bug-ridden compression system. Taking a second to stop and think about actual data like this is called Data-Oriented Design. When I talk to people about DOD, many immediately say, "Woah, that's really low-level. 
When I talk to people about DOD, many immediately say, "Woah, that's really low-level. I guess you want to wring out every last bit of performance. I don't have time for that. Anyway, my code runs fine." Let's break down the assumptions in this statement.

Assumption 1: "That's really low-level". Look, I multiplied four numbers together. It's not rocket science.

Assumption 2: "You sacrifice readability and simplicity for performance." Let's picture two different solutions to this netcode problem. For clarity, let's pretend we only need 3 world copies, each containing up to 2 objects. Here's the solution I just described. Everything is statically allocated in the .bss segment. It never moves around. Everything is the same size. No pointers at all. Here's the idiomatic C# solution. Everything is scattered randomly throughout the heap. Things can get reallocated or moved right in the middle of a frame. The array is jagged. 64-bit pointers all over the place. Which is simpler? The second diagram is actually far from exhaustive. C#-land is a lot more complex in reality. Check the comments and you'll probably find someone correcting me about how C# actually works. But that's my point. With my solution, I can easily construct a "good enough" mental model to understand what's actually happening on the machine. I've barely scratched the surface with the C# solution. I have no idea how it will behave at runtime.

Assumption 3: "Performance is the only reason you would code like this." To me, performance is a nice side benefit of data-oriented design. The main benefit is clarity of thought. Five years ago, when I sat down to solve a problem, my first thought was not about the problem itself, but about how to shoehorn it into classes and interfaces. I witnessed this analysis paralysis first-hand at a game jam recently. My friend got stuck designing a grid for a 2048-like game. He couldn't figure out if each number was an object, or if each grid cell was an object, or both. I said, "the grid is an array of numbers. Each operation is a function that mutates the grid." Suddenly everything became crystal clear to him.

Assumption 4: "My code runs fine". Again, performance is not the main concern, but it's important. The whole world switched from Firefox to Chrome because of it. Try this experiment: open up calc.exe. Now copy a 100 MB file from one folder to another. I don't know what calc.exe is doing during that 300ms eternity, but you can draw your own conclusions from my two minutes of research: calc.exe actually launches a process called Calculator.exe, and one of the command line arguments is called "-ServerName". Does calc.exe "run fine"? Did throwing a server in simplify things at all, or is it just slower and more complex? I don't want to get side-tracked. The point is, I want to think about the actual problem and the data involved, not about classes and interfaces. Most of the arguments against this mindset amount to "it's different than what I know".

Problem #4: lag

I now hand-wave us through to the part of the story where the netcode is somewhat operational. Right off the bat I ran into problems dealing with network lag. Games need to respond to players immediately, even if it takes 150ms to get a packet from the server. Projectiles were particularly useless under laggy network conditions. They were impossible to aim. I decided to re-use those 14 MB of world copies. When the server receives a command to fire a projectile, it steps the world back 150ms to the way the world appeared to the player when they hit the fire button.
Then it simulates the projectile and steps the world forward until it's up to date with the present. That's where it creates the projectile. I ended up having the client create a fake projectile immediately, then as soon as it hears back from the server that the projectile was created, it deletes the fake and replaces it with the real thing. If all goes well, they should be in the same place due to the server's timey-wimey magic. Here it is in action. The fake projectile appears immediately but goes right through the wall. The server receives the message and fast-forwards the projectile straight to the part where it hits the wall. 150ms later the client gets the packet and sees the impact particle effect.

The problem with netcode is that each mechanic requires a different approach to lag compensation. For example, my game has an "active armor" ability. If players react quickly enough, they can reflect damage back at enemies. This breaks down in high lag scenarios. By the time the player sees the projectile hitting their character, the server has already registered the hit 100ms ago. The packet just hasn't made it to the client yet. This means you have to anticipate incoming damage and react long before it hits. Notice in the gif above how early I had to hit the button.

To correct this, the server implements something I call "damage buffering". Instead of applying damage instantly, the server puts the damage into a buffer for 100ms, or whatever the round-trip time is to the client. At the end of that time, it either applies the damage, or, if the player reacted, reflects it back. Here it is in action. You can see the 200ms delay between the projectile hitting me and the damage actually being applied.

Here's another example. In my game, players can launch themselves at enemies. Enemies die instantly to perfect shots, but they deflect glancing blows and send you flying like this: Which direction should the player bounce? The client has to simulate the bounce before the server knows about it. The server and client need to agree which direction to bounce or they'll get out of sync, and they have no time to communicate beforehand. At first I tried quantizing the collision vector so that there were only six possible directions. This made it more likely that the client and server would choose the same direction, but it didn't guarantee anything. Finally I implemented another buffer system. Both client and server, when they detect a hit, enter a "buffer" state where the player sits and waits for the remote host to confirm the hit. To minimize jankiness, the server always defers to the client as to which direction to bounce. If the client never acknowledges the hit, the server acts like nothing happened and continues the player on their original course, fast-forwarding them to make up for the time they sat still waiting for confirmation.

Problem #5: jitter

My server sends out packets 60 times per second. What about players whose computers run faster than that? They'll see jittery animation. Interpolation is the industry-standard solution. Instead of immediately applying position data received from the server, you buffer it a little bit, then you blend smoothly between whatever data you have. In my previous attempt at networked multiplayer, I tried to have each object keep track of its position data and smooth itself out. I ended up getting confused and it never worked well. This time, since I could already easily store the entire world state in a struct, I was able to write just two functions to make it work. One function takes two world states and blends them together. Another function takes a world state and applies it to the game.
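For illustration, a blend function over a plain-old-data world state might look something like this. The layout mirrors the snapshot sketch from earlier and the names are my own, not the shipping code:

#include <cmath>

struct TransformState { float position[3]; float rotation[4]; };
struct WorldState { TransformState objects[2048]; };

// Blend two snapshots: t = 0 returns 'a', t = 1 returns 'b'.
void blend(const WorldState& a, const WorldState& b, float t, WorldState* out)
{
    for (int i = 0; i < 2048; i++)
    {
        // Linear interpolation for positions
        for (int j = 0; j < 3; j++)
            out->objects[i].position[j] = a.objects[i].position[j]
                + (b.objects[i].position[j] - a.objects[i].position[j]) * t;

        // Normalized lerp for rotations; flip 'b' onto the shorter arc first
        float dot = 0.0f;
        for (int j = 0; j < 4; j++)
            dot += a.objects[i].rotation[j] * b.objects[i].rotation[j];
        const float sign = dot < 0.0f ? -1.0f : 1.0f;

        float lengthSq = 0.0f;
        for (int j = 0; j < 4; j++)
        {
            float r = a.objects[i].rotation[j]
                + (sign * b.objects[i].rotation[j] - a.objects[i].rotation[j]) * t;
            out->objects[i].rotation[j] = r;
            lengthSq += r * r;
        }
        const float invLength = 1.0f / std::sqrt(lengthSq);
        for (int j = 0; j < 4; j++)
            out->objects[i].rotation[j] *= invLength;
    }
}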
How big should the buffer delay be? I originally used a constant until I watched a video from the Overwatch devs where they mention adaptive interpolation delay. The buffer delay should smooth out not only the framerate from the server, but also any variance in packet delivery time. This was an easy win. Clients start out with a short interpolation delay, and any time they're missing a packet to interpolate toward, they increase their "lag score". Once it crosses a certain threshold, they tell the server to switch them to a higher interpolation delay. Of course, automated systems like this often act against the user's wishes, so it's important to add switches and knobs to the algorithm!

Problem #6: joining servers mid-match

Wait, I already have a way to serialize the entire game state. What's the hold-up? Turns out, it takes more than one packet to serialize a fresh game state from scratch. And each packet may take multiple attempts to make it to the client. It may take a few hundred milliseconds to get the full state, and as we've seen already, that's an eternity. If the game is already in progress, that's enough time to send 20 packets' worth of new messages, which the client is not ready to process because it hasn't loaded yet. The solution is—you guessed it—another buffer. I changed the messaging system to support two separate streams of messages in the same packet. The first stream contains the map data, which is processed as soon as it comes in. The second stream is just the usual fire-hose of game messages that come in while the client is loading. The client buffers these messages until it's done loading, then processes them all until it's caught up.

Problem #7: cross-cutting concerns

This next part may be the most controversial. Remember that bit of gamedev wisdom from the beginning? "Don't add networked multiplayer to an existing game"? Well, most of the netcode in this game is literally tacked on. It lives in its own 5000-line source file. It reaches into the game, pokes stuff into memory, and the game renders it. Just listen for a second before stoning me. Is it better to group all network code in one place, or spread it out inside each game object? I think both approaches have advantages and disadvantages. In fact, I use both approaches in different parts of the game, for various reasons human and technical. But some design paradigms (*cough* OOP) leave no room for you to make this decision. Of course you put the netcode inside the object! Its data is private, so you'll have to write an interface to access it anyway. Might as well put all the smarts in there too.

Conclusion

I'm not saying you should write netcode like I do; only that this approach has worked for me so far. Read the code and judge for yourself. There is an objectively optimal approach for each use case, although people may disagree on which one it is. You should be free to choose based on actual constraints rather than arbitrary ones set forth by some paradigm. Thanks for reading. DECEIVER is launching on Kickstarter soon. Sign up to play the demo here!
  16. 4 points
Even though that video is not 'real' -- a CPU from Doom's era could do maybe 11 million general instructions per second, while a CPU from Doom (2016)'s era could do maybe 400,000 million floating point instructions per second. Given that speed-up, if the original game could have 30 enemies on screen, a modern CPU should cope with a million. However, CPU speed is not the only factor. The memory bandwidth stats have gone from maybe 50MB/s to 50GB/s. If we assume the original game ran at 60Hz, maxed out memory bandwidth, and performance only relies on the number of entities, which is 30, we get 50MB/60Hz/30 = about 28KB of memory transfers per entity per frame. Scale that up to a million entities and suddenly you need 1.6TB/s of memory bandwidth! Seeing as we've only got 50GB/s on our modern PC, that means we can only cope with around 31k entities due to the RAM bottleneck, even though our CPU is capable of processing a million!

So, as for programming techniques to cope with a million entities: focus on memory bandwidth. Keep the amount of storage required per entity as low as possible. Keep the amount of memory touched (read + written) per entity as low as possible. Leverage the CPU cache as much as possible. Optimise for locality of reference. If two entities are both going to perform similar spatial queries on the world geometry, batch them up and run both queries back to back so that the world geometry is more likely to be present in cache for the second query.
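For anyone who wants to replay that back-of-the-envelope maths, here's a small self-contained check using the same round figures from the post (the constants are assumptions, not measurements):

#include <cstdio>

int main()
{
    const double oldBandwidth = 50.0e6;  // ~50 MB/s in Doom's era
    const double newBandwidth = 50.0e9;  // ~50 GB/s today
    const double frameRate    = 60.0;    // Hz
    const double oldEntities  = 30.0;

    // Memory traffic per entity per frame if the old game maxed out its bus
    const double bytesPerEntity = oldBandwidth / frameRate / oldEntities;     // ~28 KB

    // Bandwidth needed to do the same for a million entities
    const double neededBandwidth = bytesPerEntity * frameRate * 1.0e6;        // ~1.7 TB/s

    // Entity count the modern bus can actually feed at 60 Hz
    const double maxEntities = newBandwidth / (bytesPerEntity * frameRate);   // ~30k

    printf("%.0f bytes per entity per frame\n", bytesPerEntity);
    printf("%.2f TB/s needed for a million entities\n", neededBandwidth / 1.0e12);
    printf("%.0f entities within a 50 GB/s budget\n", maxEntities);
    return 0;
}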
  17. 4 points
Improved lens flares and glow, and tweaked the High Dynamic Range lighting procedure.
  18. 4 points
    I have been a moderator here for about a decade. GDNet is a great community, and being part of the moderation team has been a wonderful experience. But today is, more or less, my last day as a part of that team. As it does when one gets older, life has increasingly intruded on the time I'd normally spend here. Fortunately all those intrusions have been (and continue to be) good things, but just the same I don't feel like I have the time to really do the job justice, and so I am stepping down. One of the remaining moderators will take over the forum in my place, although I don't know who that will be yet. Although it's very likely I'll be much less active for the next few months, I am probably not going away forever, and can be reached via private message on the site if needed. Thanks for everything, it's been great!
  19. 4 points
The last big developer that I worked for did the engine in C++, tools / data compilers in C#, and gameplay in Lua and C++. Lua and C# were used because they're more productive languages. Code is less complex than equivalent C++ translations, which makes maintenance easier and development/debugging quicker. Moreover, different paradigms/features are available, such as duck typing, garbage collection, mixins or the prototype pattern. Things that require silly/complex "component frameworks" in C++ can just be done out of the box in Lua, and things like reference counting and weak pointers don't require a library. Most gameplay code isn't performance sensitive, but the bits that show up on the profiler would be ported to C++ and given some love. All the programmers are C++ coders who are taught Lua - you wouldn't bother hiring a Lua-only coder for a programmer's role.

In my current game, we do most of the gameplay in C++, but pull all tweakables/configuration data from Lua. Lua can be used to configure the game at a high level (plugging different components/systems together) and then the real frame-to-frame logic happens in C++. This gives us a lot of the flexibility that you could get from something like ECS, without having to actually make a C++ component framework.

C++ is an extremely dangerous language though, and has to be written with care. The more projects I ship, the more I come to fear C++ for its ability to create extremely subtle yet extremely dangerous bugs, mostly revolving around memory management. So, we use a lot of discipline around pointers to help mitigate these risks, such as templates that: tag raw pointers with ownership semantics (and do leak checks in dev builds), tag pointers with array semantics (and do range checks in dev builds), zero initialise pointers (and do uninitialised read checks in dev builds), etc, etc... We also use scope-stack allocations for most things rather than heap allocations to make leaks impossible at a semantic level.
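As a rough illustration of the kind of ownership-tagging template described above (my own sketch, not the studio's actual code), an owning-pointer wrapper with dev-build checks might look something like this:

#include <cassert>

// Tags a raw pointer with "owning" semantics: zero-initialised, deleted on
// destruction, and (in dev builds) asserts against null dereferences.
template<typename T>
class OwnedPtr
{
public:
    OwnedPtr() : ptr(nullptr) {}
    explicit OwnedPtr(T* p) : ptr(p) {}
    ~OwnedPtr() { delete ptr; }

    OwnedPtr(const OwnedPtr&) = delete;            // single owner: no copies
    OwnedPtr& operator=(const OwnedPtr&) = delete;

    void reset(T* p = nullptr) { delete ptr; ptr = p; }

    T* operator->() const
    {
#ifndef NDEBUG
        assert(ptr != nullptr && "null/uninitialised owning pointer dereferenced");
#endif
        return ptr;
    }

    T& operator*() const { return *operator->(); }
    T* get() const { return ptr; }

private:
    T* ptr;
};

Array-semantics tags and leak counters would follow the same pattern, with the extra bookkeeping compiled in only for dev builds.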
  20. 4 points
    "Responsive" sites are often terrible for mobile devices. While wealthy segments of society replace their phones every year or two, many people have phones that are many years old. Most responsive sites use JavaScript to run their resizing and rescaling and reflowing, which in turn means more processing and memory use on devices that are already low on processing power and memory. Other times they rely on complex CSS rules which require more processing power and memory on the browser. Consequently, visually "responsive" sites tend to have bad performance on phones making them less responsive in terms of performance. Either way you go, responsive designs may look nicer and are formatted in response to screen, but tend to make site load times and page interaction that much slower and painful and less responsive. You asked about media queries, those are also hit-and-miss that are sometimes ignored on older devices and browsers. Many mobile browsers have options that allow users to modify them or for the device to behave as a desktop computer to avoid problems with garbage code. HTML was meant to be display agnostic, displaying just fine on text system, on graphical systems, on any resolution from tiny screens to enormous. Just make proper HTML, the stuff designed to work in a display agnostic format, and the site should work fine. News sites have been particularly awful at this, with heavily enforced fixed-width columns and ads that take enormous screen space that only look reasonable on a large display. The more web site designers pull out the artistic crayon box and demand a picture-perfect Photoshop layout for the web site --- which is what many marketing groups and visual artists push for --- the farther they leave behind the core principles of the Web. One of the driving purposes for the creation of HTML was to get away from display-specific formatting, choosing instead to specify general structure and allowing the individual devices to decide how that looks. Responsive design takes exactly the opposite approach, denying the individual viewer from deciding how something looks, and enforcing the page creator's layout rather than the device's own layout based on content structure.
  21. 4 points
Cryptocurrencies, like most currencies, have no intrinsic value. They only have value because people are willing to exchange them at a given rate. Much of the volatility in cryptocurrency is not because of issues with the math or the verification or the costs of mining; the volatility comes from the lack of established value and the lack of a link to physical goods, except for the cost of mining the coins. Apart from that, currencies are nothing more than a token. The token has value because people are willing to trade goods and services at a rate they deem fair. Tokens like dollars and euros have a constant dynamic to their value, and because so many people use them for a wide range of items, the value is easily maintained. Digital currencies like bitcoin have had far fewer users, and even though they are used for many things online, their adoption rate is minuscule relative to the transaction rates of other currencies. No matter the currency, their value will always remain whatever value people are willing to trade for them. As for making one unified currency, that sort of thing never works out the way people plan. As long as people desire to trade goods and services using a token as value, the token will have value.
  22. 4 points
    There are many paths to your desired end goal. I'm a fan of the traditional CS degree, and it's probably the safest route. You'll end up with a strong general foundation, which in the worst case you can put to work earning a living outside the games industry. Game Development degrees are still a bit like the Wild West - many of the programs haven't been around long enough to establish much of a reputation. But if you find a good one, it may well be more relevant on a day-to-day basis than much of a traditional CS program.
  23. 4 points
Hi, my two cents:
- all you basically can do right now is run around and shoot, and shooting doesn't look fun. Instead of just a ball(?) show a gun on the player, show projectiles and let shooting have more impact. It needs to go !!!BOOOM!!!, things should splash around and be destroyed, a visible and audible reaction to the bullets and explosions. And show the aftermath, let bodies and gore lie around. Check this:
- it doesn't look as if it's hard to evade the enemies. Perhaps add more types of enemies, a few faster ones, a few that explode close to the player, a few that can shoot, the usual stuff. Also let the player move into confined spaces, so that it's actually harder to avoid enemies.
Good luck!
  24. 4 points
Hello, it's time for statistics and earnings from my 4th Android/WebGL game. "Mirkowanie" is an idle clicker, and it's a bit of a special game because it was made for a Polish website community (but not only players from Poland play it). It's niche, so I wasn't expecting a lot of players or earnings (Poland gets very low earnings from ads). I'm just a 19-year-old newbie; I couldn't sleep, I woke up at 3-4AM, and I started to develop "Mirkowanie". It was fun to develop it ;p Just a lot of work in analyzing data (imagine checking website data from 2005 to 2018).

Here are the statistics: the game got downloaded 3451 times (19 on the Amazon Store and 3432 on Google Play). The Amazon Store number is that low because downloads were mostly from Poland. The game got 26k game plays (online). The top download count in one day was 1900, and I got 500 testers in one day. Kongregate (new players), Unity Analytics (new players), MAU (Unity Analytics).

Earnings? $36.82 (Unity Ads, video ads, can't withdraw it). $29.62 (Kongregate, we get paid based on game plays, can withdraw it). $17.56 Chartboost (ads that are shown in-game, sometimes video ads (if Unity Ads fail), can't withdraw it). So my pure "earnings" are $29.62 (which I can withdraw). So it was not worth it, but that's not a problem; my games also don't have IAPs, and most of the players were from Poland (low cash from ads), but it was still fun to develop. I mean, it was a crazy idea ;p On one hand I got 1900 downloads in one day, which is a big number for me, but to be honest, for the gamedev business it's nothing. Then again, for me it's still 1900 players (my first game got 600 downloads in ~1 year). Also, fun fact: I was the #1 new free casual game in Poland. Also, ~25% of downloads came from Xiaomi devices.

In the last week I got 14.3k sessions, and the maximum total time spent in-game in one day was 13.85M seconds (~160 days). That's 80 minutes for each user, with an average of 2.5 sessions per user. Some other stats: most used phones and most used Android versions (see the statistics images). The game name is Mirkowanie - Ilde Money Clicker.

Also, if anyone wishes to check out my earnings/statistics from other games, they are all on Reddit, and I always post them on my Twitter/Facebook (same as my name on Reddit). Also, right now I'm working on a space shooter with its own story: you get new ships, buy upgrades, and there are side and main quests and a lot more. It has a story made by me plus original sounds; here's some nice music (not from the game) made by the same person: Check Music. Images from the game are also in the images with statistics. Check out the game + follow me :-} Android, Online (Firefox play only, use text save; Chrome got a bug, I reported it to the browser and am still waiting for a fix, same for other browsers on the Blink engine (not all)), Amazon Store, Facebook, Twitter. And my new game (W.I.P):
  25. 4 points
There is actually a rule of thumb here which is fairly easy. If you intend to supply fallback functionality for when you cannot get, say, IDXGIAdapter4, then following point three is the best way to go. If, on the other hand, you absolutely require a certain level of the interface to exist, say you absolutely must have IDXGIAdapter3 or higher, then keep only IDXGIAdapter3 pointers. Basically, there is no reason to have multiple pointers or continually query things unless you can actually handle failures and have supplied alternative paths through the code for those cases. A case in point: I'm doing work with DX12, and as such there are guaranteed minimum versions of DXGI which must exist for DX12 devices to even be created. I have no fallbacks since I can't get DX12 functional without the minimums, so any failure is a terminal error. On the other hand, I do query for 12.1 features in a couple of places just because they make things a bit easier, but I have fallbacks to 12.0 implementations; in those specific cases where it matters, yes, I query as needed. Hope this makes sense and is helpful.
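For the fallback case, the pattern is just a QueryInterface call with both code paths supplied. A minimal sketch (illustrative only, error handling trimmed):

#include <dxgi1_4.h>

// Try to upgrade to the newer adapter interface; fall back if the runtime
// doesn't provide it.
void UseAdapter(IDXGIAdapter1* adapter)
{
    IDXGIAdapter3* adapter3 = nullptr;
    if (SUCCEEDED(adapter->QueryInterface(__uuidof(IDXGIAdapter3),
                                          reinterpret_cast<void**>(&adapter3))))
    {
        // Newer path: IDXGIAdapter3 extras (e.g. QueryVideoMemoryInfo) are available.
        adapter3->Release();
    }
    else
    {
        // Fallback path: stick to what IDXGIAdapter1 offers.
    }
}

If you require IDXGIAdapter3 unconditionally, you would instead do this query once at startup, treat failure as a terminal error, and keep only the IDXGIAdapter3 pointer from then on.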
  26. 4 points
It isn't that the moderator is unfriendly; the exact opposite, in fact. The moderators give that message to show that the post has moved; it isn't part of the topic. It is only so you know why and that it did move. There is no penalty for it. It's also not uncommon for moderators to not answer these questions that get asked hundreds of times. However, from personal experience I know they answer topics that have been left unanswered for a few hours. They also answer the difficult topics, leaving these smaller open discussion topics for the community. The rule of thumb for the game design forum is that if you use the word "design" it probably belongs there. For example: "I was working on this level design..." Some of the topics are difficult to place, so they get moved around a bit. The reason it works like this is that people browse topics they like and can help with. For example, I am an artist, so I check the art forum each day. I don't check the Business forum because I have little advice to give there. Your topic gets moved to the people who can best answer it. The people who post are in fact trying to help you. The beginners forum also has an unspoken rule to be extra friendly to the poster, so if you have a difficult topic that you fear you know nothing about, that is a safe place to post it. The Critique and Feedback forum is about honesty above all else; or at least it should be. It's good that you're working to finish a project; it's something all developers should aim to do first. Once you make that first game from start to end, everything starts to get easier. Making a small puzzle game, or a remake of a retro game, is often the easiest. There are lots of developers who post postmortems on their games in the blogs; they make for great reads and they can be goldmines of information. Welcome to the community. This is a place for sharing knowledge, so don't be afraid to respond to posts.
  27. 4 points
    Thinking about doing the latest gamedev challenge, assuming I can find the time. PacTank will rise. Maybe.
  28. 4 points
Hi again! This week, I was working on the character selection. But, first of all, I want to show you the very first gameplay video, which covers a 1vs1 duel. Check it out: Gameplay

Character selection

Basically, you can start the game with at least two players. Adding players can be done by clicking on the "+"-box. You can of course remove a player by clicking on the remove icon under a character box. Currently, there are two characters available - the fox and the mouse. You can select up to four players, who will be placed differently on the map. You can take a look at the screenshot. The starting points of the players are the "P" blocks: The screenshot comes from the room editor in Game Maker. (The red blocks are the solid ones.)

Discord server

I set up a basic Discord server in order for you to join and talk to me. Check it out here: Discord.gg I want to be more active on Discord, because it is much easier to provide information and status updates to you. Additionally, I want you to actively talk to me in order to make a really great game, so I would really appreciate any kind of feedback.

Pre alpha

Next week there's going to be the release of my pre alpha version for you. The goal is to get as much feedback as possible. That's it for this update. Thank you for reading! As always, if you have questions or any kind of feedback, feel free to post a comment or contact me directly. Cheers.
  29. 4 points
If all you want is to pack together 1024 textures to tile, just use a texture array…
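In case it helps, here's a rough D3D11 sketch of creating a 1024-slice texture array (the details here are mine and depend on your API and formats; GL_TEXTURE_2D_ARRAY or a Vulkan 2D array image are the equivalents elsewhere):

#include <d3d11.h>

// Creates a shader-readable texture array with one slice per tile.
ID3D11Texture2D* CreateTileArray(ID3D11Device* device, UINT tileSize)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = tileSize;
    desc.Height = tileSize;
    desc.MipLevels = 1;                        // single mip for brevity
    desc.ArraySize = 1024;                     // one slice per tile
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* texture = nullptr;
    device->CreateTexture2D(&desc, nullptr, &texture);
    return texture; // sample in the shader with a Texture2DArray and a slice index
}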
  30. 4 points
Yes, that is the problem, and why so many of us keep checking his posts. When it comes to game design Kavik Kang shows true knowledge, but when it comes to Rube he acts like an amateur. He worked his way up from a low score (it's low again) by giving good advice to other developers. When he talks of design, it's often great advice. Personally that is why every time he makes a topic I take the time to read it. It is also what annoys me the most. Since the first post I read of Rube, I've gathered a team to build an indie game AND released two mobile games, for over 800 000 players. I had to start from scratch each time, with no publisher funding me or crowdfunding, and was able to do this. By my logic a developer with years more experience than me should be able to do more. @Kavik Kang, build a team, make a game. I would really like to see what you can achieve with your skill. Considering how many people keep viewing your posts, I don't believe I am the only one. But no one is going to do it for you.
  31. 4 points
The thing is, it's not like there's not a grain of truth in what Kavik Kang says - as part of the designing team of SFB it's possible he has significant experience in designing games and is actually competent at it. From the wiki page:

---- Star Fleet Battles was inducted into the Academy of Adventure Gaming, Arts, & Design Hall of Fame in 2005 where they stated that "Star Fleet Battles literally defined the genre of spaceship combat games in the early 1980s, and was the first game that combined a major license with 'high re-playability'."[15] In his 2007 essay, Bruce Nesmith stated "No other game in hobby game history so completely captures the feel of ship-to-ship combat in space than Star Fleet Battles. The fact that it does so in the Star Fleet Universe is icing on the cake."[1] ----

Bruce Nesmith, btw, is one of those table-top designers that the industry "never hires" apparently, except Bethesda did and he made, oh, you know, those small obscure games, Daggerfall, Oblivion and Skyrim https://en.wikipedia.org/wiki/Bruce_Nesmith And what do you know, the main designer of Morrowind and Oblivion (and I should also mention Amalur, which was pretty excellent despite its unfortunate commercial failure) came from tabletop-game fame too, https://en.wikipedia.org/wiki/Ken_Rolston

The whole thing about the game industry not "paying its dues" to the tabletop industry is nonsense anyway - perhaps the most famous book about the game industry is "Masters of Doom" and a good portion of it is dedicated to describing how the id guys spent all their free time playing D&D games with Carmack as DM - "Quake" was one of the characters in one of those games. And it's not like nobody ever hired Kavik either - he's the designer of Sinistar Unleashed. So all in all, there is a grain of truth in what he says - having worked on SFB should be considered some serious credentials when it comes to being hired for a space game.

My guess is he's actually probably knowledgeable and competent enough when it comes to designing games, but terrible at presenting himself and his ideas, with an incredibly off-putting attitude, and probably impossible to work or even converse with at this point. You wouldn't be able to have him sit down and explain his ideas to the rest of the team without making speeches that make Fidel Castro's look like 30-second commercials. (As to the purpose of this thread - my impression is that Kavik really actually enjoys all this process - this is a venue for him to vent. And...we are bored. Better than nothing...)
  32. 4 points
This is another thing you like to claim regularly. I've seen you say that the video game industry ignores everything learned from table top design. I've seen you say that there's no respect for table top designers, and that experience in table top design is not considered relevant experience by the video game industry. These things are false. All of the video game designers I have ever heard talk about it have a huge respect for table top design, and many of them are also avid table top gamers. "Pen and paper" design (that is, making table top or card games) is a very common prototyping method in our industry, even when designing real time action games. When people ask people from the video game industry how to learn design, it's incredibly common to suggest table top design to get started. Many designers from the table top industry have been very successful in the video game industry: Steve Jackson (the one from the UK, not the US) was a co-founder of the very successful Games Workshop, but also co-founded the very successful Lionhead Studios in the videogame industry. Dave Arneson, co-creator of D&D, has contributed to video game design and taught at Full Sail. Warren Spector, who worked at Steve Jackson Games, TSR, and others, also worked for Origin, Ion Storm, and Looking Glass. Colin McComb worked at TSR, and amongst other video game credits has worked at Black Isle Studios. Richard Garfield, the creator of Magic: The Gathering, has a number of video game design credits. These are just a few prominent examples found with less than five minutes on Google; I'm sure you could easily find loads more. When you say the video game industry has no respect for table top designers, you are wrong. When you say table top designers aren't considered relevant to video game design, you are wrong. It's not table top designers in general that have trouble transitioning, it's specifically you, and it's not because you only have table top experience.
  33. 4 points
Kavik, who exactly is this "You" you are referring to that is insisting that table top games aren't relevant to what "We" do? I've been professionally involved with hundreds of software/app/web titles, and board games have played a pretty large part in developing many of them. Of the titles I've been involved with from the ground up, probably half started out with hand-drawn pieces on card stock. They are especially important in the early planning stages for anything top-down and strategy related, as a group of designers can sit around a table together and talk through ideas in an afternoon rather than spending days/weeks/months prototyping the mechanics in code. But I've seen them used for pretty near any style of game I can think of, even FPS titles. It's just that the more abstract the final game is from a board game point of view, the more imagination is needed by the test players to judge how the end product is going to feel. If the planning and thought process to play a game works well with pen and paper, then odds are good it will hold up when the computer crunches all the numbers for you, but as a designer you need to watch out for issues like time-decision overloads if 'turns' are running forward in real time for the end user rather than running in 'bullet time' because you're slowly juggling all the numbers and tracking by yourself.
  34. 4 points
We really need to be careful when comparing "One main designer" and "Design by committee" results, and compare apples to apples. If you compare the work of a skilled, experienced, and highly successful designer to what a group of kindergartners on excessive sugar and coffee come up with, then you're not going to have a very fair time and will get a rather biased outcome, much like you would if you sat a group of seasoned designers down and compared their work with that of a single sugar- and caffeine-pumped kid. Another important factor when gauging the skill of a designer is whether or not they've worked within the project budget - and that is not just a cash value thing, but also a skill issue. What are the overall skills of the team and what are they actually able to achieve? You can have the most graceful and flawless mechanic and visual designs ever planned, but they're not worth much if the team you're working with doesn't have the skills and resources to do everything to spec by launch day.

And of course there is the wonderful issue of design complexity, and confusing it for quality: a more detailed and more complex design is not a sign of it being superior, it is a sign of it being more detailed and complex. A mechanical power transfer mechanism with a million intermeshed pieces all working together to transfer rotational energy from a motor to some manner of tool is "Extremely detailed and complex", but odds are it is vastly inferior to a handful of suitably designed gears/cogs. You can run the maths and calculate out all the gravitational forces in the entire solar system to get a "Perfectly accurate simulation" of where every grain of dust will be over the next 10 years, but if all you really wanted was "Whereabouts would Mars be..." then you've wasted a lot of time and effort on details that simply didn't matter. A huge part of a game designer's job, whether we're talking about computer or board games, is distilling out the essence of the design and using the parts that are actually needed while discarding those that are merely distractions, and doing so in a way that is readily understood and enjoyed by the player.

And when it comes to board games this can become an exercise in information presentation. The last version of the RISK rules I read was a great example of "Close, but no cigar" design presentation. I'm specifically thinking of the reinforcement mechanic rules, which frankly were a mess. (Trying to relearn a game's rules when you have a 10 year old hopped up on sugar interrupting and insisting they 'know the rules' is a 'fun' holiday experience.) The information could have been far clearer if done in a more point-form format with a small table, but instead they give some long and slightly awkward paragraphs describing the mechanic in a general sense that also end up burying the minimum reinforcement rate somewhere. At the very least you should start off a rules section with the minimum standard of what a player should expect every turn, and then expand on it from there.
  35. 4 points
    In my solution global env maps are separated from the local ones. Global ones should capture mostly sky and generic features, and be used very sparsely (few per entire level). This way it's enough to blend just two of those to have perfect transitions and far away reflections will look fine. I actually used this system for Shadow Warrior 2, just with a small twist - probes were generated and cached in real-time. If you are interested you can check out some slides with notes: “Rendering of Shadow Warrior 2”.
  36. 4 points
Half-Life
Doom
Tie Fighter
Street Fighter 2
Starcraft
Arkham Asylum
Portal
All classics. None of them under the guidance of a single overriding vision or even any involvement from Queen. Can a single vision drive a game? Sure, look at Shigeru Miyamoto or David Braben or Hidetaka Miyazaki. Is it essential? Clearly not. Also... slightly ironic that you chose a song from a band where every member contributed songs, and their biggest hit features massive vocal harmonies.
  37. 4 points
    Kavik Kang: American capitalism is the greatest thing ever because everyone is free to choose the trade they want and nothing stops them from thriving in it by being the best in providing goods and services that people want. Also Kavik Kang: My failures is everyone else's fault, the system doesn't recognize my genius, I've done everything I can to break into the industry but everyone in it is an idiot and doesn't know what they're doing and out to get me so it's impossible so I give up, read my 430293402937 page blog, also my grandpa founded CIA or something. Wah wah wah fucking wah.
  38. 4 points
There is a reason that suspend and resume do not exist in POSIX and many other APIs: they are inherently unsafe. Because you don't know exactly where the thread will be during a suspend, there are a lot of different things which could go wrong. For instance, if you suspend a thread while a lock is under its control, this can cause a very difficult to find/understand deadlock. For this reason it was deemed best to avoid this and make programmers use explicit synchronization. In general this is the best solution even if the APIs are available, because you know the exact state at the point of suspension; i.e., I suggest not supporting this even on Windows. See the caution here: https://msdn.microsoft.com/en-us/library/system.threading.thread.suspend(v=vs.110).aspx, and the fact that it is deprecated in .NET moving forward for these and other reasons.
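If you need pause/resume behaviour, a cooperative version built on explicit synchronization is the usual replacement. A small sketch (not tied to any particular engine or codebase):

#include <condition_variable>
#include <mutex>

// The worker checks in at known-safe points instead of being suspended at an
// arbitrary instruction, so it can never be frozen while holding another lock.
class PauseToken
{
public:
    void pause()
    {
        std::lock_guard<std::mutex> lock(m);
        paused = true;
    }

    void resume()
    {
        {
            std::lock_guard<std::mutex> lock(m);
            paused = false;
        }
        cv.notify_all();
    }

    // Called by the worker thread at points where it holds no other locks.
    void wait_if_paused()
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !paused; });
    }

private:
    std::mutex m;
    std::condition_variable cv;
    bool paused = false;
};

The worker simply calls wait_if_paused() once per iteration of its loop; any other thread calls pause() or resume() as needed.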
  39. 4 points
Looks like you're accidentally using the MapOnDefaultBuffers feature: https://msdn.microsoft.com/en-us/library/windows/desktop/dn280377(v=vs.85).aspx

MapOnDefaultBuffers
Type: BOOL
Specifies support for creating ID3D11Buffer resources that can be passed to the ID3D11DeviceContext::Map and ID3D11DeviceContext::Unmap methods. This means that the CPUAccessFlags member of the D3D11_BUFFER_DESC structure may be set with the desired D3D11_CPU_ACCESS_FLAG elements when the Usage member of D3D11_BUFFER_DESC is set to D3D11_USAGE_DEFAULT. The runtime sets this member to TRUE if the hardware is capable of at least D3D_FEATURE_LEVEL_11_0 and the graphics device driver supports mappable default buffers.
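If you want to detect this explicitly rather than depend on it by accident, something like the following check should work (a sketch, assuming the D3D11.2 headers; verify against your SDK):

#include <d3d11_2.h>

// Returns true if default-usage buffers can be mapped on this device/driver.
bool SupportsMapOnDefaultBuffers(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS1 options = {};
    if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                           &options, sizeof(options))))
        return false;
    return options.MapOnDefaultBuffers != FALSE;
}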
  40. 4 points
    You might consider shipping your assets using a virtual file system. There are readily available ones out there. There is nothing unprofessional about shipping your assets with your game. I've played a number of AAA games that do that.
  41. 4 points
    GitHub's Gists, specifically, tend to be the most convenient way to do this. Saves needing to actually have git on the computers you are working on (very handy if some of them are shared/lab computers).
  42. 4 points
If you're going to force me to your arbitrarily picked resolution, then at least let me run it windowed. 1024x768 looks horrible full screen on my monitors, and my eyes tend to get sore after 20 minutes of playing or so, at which point I quit and go leave a horrible review / look for the refund button / hit uninstall. You can also scale your 2D art assets non-uniformly to make them look better at whatever resolution too, as 1080p is far more common these days than 4:3 for monitors and even mobile devices.
  43. 4 points
Now, say you have a wife and a child. You must spend at least 1 hour with your wife, and 1 hour with your son. So 2 hours per work day remain. During the weekend, your wife and your kid expect more from you, so 8 hours remain for each weekend day. That leaves 26 hours in total instead of 47. But from those 26 hours, you'll have to buy food, take your kid to school and to sport, deal with paperwork, and you lose a bit of time in transport. And unless you have a robotic life, you'll spend the remaining time on work overtime, talking with colleagues, talking with friends, on the phone, roaming the internet, resting a bit, reading a book, trying to touch your guitar again, going to the doctor, having a walk in the park nearby, or simply sitting on the sofa, watching something on TV, listening to music....
  44. 4 points
    And I for one do not buy the Luddite "Smash the Machines!" approach to preserving employment all for the sake of maintaining a barely functional economic model. - Why should it be the expectation that so many in the world should work 40+ hours a week for someone else's profit while they themselves struggle to keep lights on, feed themselves, and maintain a comfortable space to live? In all honesty, why should I care if my neighbour stays home watching TV all day while I go out and do a job I'm passionate about and care for, assuming that neighbour isn't making a mess or having a negative impact on my own life? We are approaching a point in history where humans can universally devote their lives to arts and sciences, and where labour is done out of choice rather than economic necessity. One where people choose to build wooden boats by hand, not because it is economically superior to something built by a robot in a factory, but because building a wooden boat with hand tools is fun.
  45. 3 points
    There's your problem. Don't drink coffee in the late afternoons (say, 3pm onward). Drink it judiciously after about 1pm. Decaf is your friend if you need to wean off the afternoon coffee.
  46. 3 points
A separable blur isn't a "typical" multi-pass technique where you just draw multiple times with a particular blend state. Instead, the second pass needs to read the results of the first pass as a texture. This requires changing render targets and shader resource view bindings between your draw calls. The basic flow goes something like this (in pseudo-code):

// Draw the scene to the "main pass" render target
SetRenderTarget(mainPassTarget.RTV);
DrawScene();

// Draw the vertical pass using the main pass target as its input
SetRenderTarget(blurTargetV.RTV);
SetShaderResourceView(mainPassTarget.SRV);
SetPixelShader(blurVerticalPS);
DrawQuad();

// Draw the horizontal pass using the vertical blur target as its input
SetRenderTarget(finalBlurTarget.RTV);
SetShaderResourceView(blurTargetV.SRV);
SetPixelShader(blurHorizontalPS);
DrawQuad();
  47. 3 points
I'm always interested in the hidden impacts social media has on our lives, particularly the negative impacts which we don't realize are affecting us. A month ago, I decided that I was spending an unhealthy amount of time on social media and I needed to put it to a stop. My daily routine was to wake up, check twitter, check facebook, and maybe check the front page of reddit. By "checking" them, what it really means is I consume all of my news feeds. What's going on? Who's doing what? What's the latest that's happening? etc etc. It's fine to want to know these things, but the problem comes when I realized that it took me an hour in the morning to do this, and then later on in the day, my news feeds would change, and then I'd spend another hour or two trying to keep up. In all, I could accidentally spend three hours of my day, spread across 15-30 minute intervals, trying to stay updated on social media news feeds. When you add up all that time, and then look at how much time that adds up to over a month, it's a LOT of time! 3 hours a day * 30 days a month = 90 hours! That's like two working weeks with some overtime added in. I was starting to feel mentally unhealthy, like "this isn't the kind of life I want to live or how I want to spend my precious life's time." The follow-up questions I had to ask myself: "Okay, does this make my life better or worse?", "Is this a habit or an addiction?", "What would I spend my time on if I got those hours back?" I didn't know the answers, so I decided to conduct an experiment and find out. I would not log in to facebook, twitter, or reddit for a month straight. Completely cold turkey. Now, the results are in.

1) I read the entire Stormlight Archives book series by Brandon Sanderson. Each book was about 1000 pages, so I read 3000 pages in a month. I really enjoyed reading these books and found that it filled a need for me to find wisdom and enlightenment. If you look for it, you can find it sprinkled throughout the series. Here's a rough idea that resonated with me: "It's easy to not fail. If you never try to do anything, you can never fail. But people who never try anything aren't worth a damn, so to fail in life is to live a good life." There's a lot of very carefully thought out dilemmas in there as well, which I really enjoyed. When I was in my teens, I used to be quite a prolific reader and went through a book a week. I think I'm going to return to my love for reading good literature. My intuition says that fundamentally, writing improves with reading a lot.

2) I felt really off balance. Whether social media was a habit or an addiction, I don't know, but if you do something drastic to change your lifestyle, there's going to be some readjustment to the new life.

3) I felt a bit more isolated. Anytime something remarkable happened to me, or I had a profound thought, I couldn't write a social media post about it. In a way, I suppose that's better. It showed me that I need to find other outlets for self expression, like telling people in person. Or, keeping it to myself and just enjoying the moment before it fleets away.

4) I played a bit more video games.

5) My sleep schedule was relatively unaffected. I still hate waking up early and still love staying up until 2am.

6) I feel an order of magnitude less grumpy and irritated. I don't know if it's caused by the lack of social media, but it does make me feel happier and my relationships feel a little less strained.

7) I got more work done while I was at work.
Instead of habitually opening up social media while I was waiting for something to compile, I just waited for it to compile. I felt a lot more focused.

8) This month feels like it has been very, very long. It feels like 3 months, whereas a normal month feels like a week or two. It's interesting how the perception of life's pacing changed.

9) I don't miss reddit at all.

Anyways, I was reflecting on the state of my own mind without social media or news media. We are very careful about what sorts of food we consume because it directly affects our body. If you eat garbage, you feel like garbage. Likewise with the mind and what it consumes. Social media feeds are McDonalds for the mind. I'm really curious about what would happen if I cut off all technology from my personal life. No cell phones, no video games, no television, no netflix, no laptop, etc. Cut out anything with a screen. I don't know if I'm brave enough for that right now; I would need other hobbies to fall back on. What is your relationship with technology, and do you feel it's a healthy one?
  48. 3 points
    While Unreal does put their graphical Blueprint scripting front-and-center, the engine fully supports native development in C++ (and I'd hazard a guess that most projects of significant complexity go that route). There are many, many games developed in either engine, both indie and commercial. Both are good for game developers.
  49. 3 points
    After two and a half years, it's finally, officially announced. And it got some attention! Here's PC Gamer: http://www.pcgamer.com/deceiver-is-a-philosophical-shooter-that-lets-you-shoot-drones-through-enemies/ And Rock Paper Shotgun: https://www.rockpapershotgun.com/2018/01/29/deceiver-is-a-neat-looking-parkour-game-with-added-spiderbots/ And two articles on Kotaku! https://kotaku.com/1822533385 https://www.kotaku.com.au/2018/01/854413/ The bad news is, I'm completely broke. Will I pull off this crazy thing? Stay tuned to find out.
  50. 3 points