Popular Content

Showing content with the highest reputation since 11/14/18 in all areas

  1. 30 points
    Today, GameDev.net celebrates its 20th birthday. To the GameDev.net Community and the games industry, I wanted to say Thank You for the last 20 years of inspiration, education, and friendship.

My name is Kevin Hawkins (khawk). I was 1 of 8 co-founders who launched GameDev.net on June 15, 1999. We started this platform as a passion project, and while I’m the only founder still involved, twenty years later it’s still a passion project for me.

But GameDev.net is not about me. It’s about you, the community, and this common ground we have in game development. At times, thinking about the scope, scale, and consistency of GameDev.net’s impact on people around the world over the last 20 years leaves me with a curious dose of humble pie that can be difficult to explain - few sites from 1999 continue to exist in essentially the same form as when they were launched. As a community, we all come from varying backgrounds, interests, cultures, and generations, but game development is our common thread through the GameDev.net platform. It’s this common thread found in diversity that I appreciate most.

If we go back in time to June 1999, a small group of professional, hobbyist, and student developers decided to create a game development community where all types could participate - students, hobbyists, indies, and AAAs alike (though these labels did not exist at the time). A place that provided an independent voice in the games industry, where students could learn from professionals, hobbyists could share their projects, and everyone could come together around the science, technology, and art of game development in a friendly, helpful, safe environment. Twenty years later that original vision remains true - and perhaps it’s been a better journey than we could have imagined.
On GameDev.net, beginners learn from games industry professionals, indie gamedevs share their journeys, and artists, musicians, and sound designers showcase their skills and support each other through critique and encouragement. And since all of these skills come together on this platform, ad-hoc teams form in the community as developers collaborate on projects and encourage each other through challenges. We’ve witnessed students turn into AAA professionals, friendships form, marriages, deaths, and the birth of future generations.

Many names and faces have participated in the forums, shared their knowledge with articles, offered their gamedev experiences in the blogs, and showcased their projects. Members have even become game development book authors - including through the GameDev.net series of books. Some active members have been around since the early months of GameDev.net. Others just joined. There are names you know from industry headlines. Other names quietly making a bigger impact on the world than anyone realizes. Other names you will learn in the coming years.

Whatever the pedigree, GameDev.net’s success as a community has been wholly dependent on the people who decide to engage, share, and connect with others. When people contribute to the community, a camaraderie and passion develop between GameDev.net members unlike any I’ve witnessed in any other community. I know of lifelong friendships that started through GameDev.net - some of my own included. I also know of several business partnerships that have formed through group collaborations on the platform and continue to be successful today. This community has helped people turn their dreams into reality, learn new skills, and find new passions. It has even moved beyond game development and helped people cope with grief, depression, and relationships, and find humor in everyday life.
However GameDev.net has impacted your life - whether it was one small solution you found in search or an experience that helped you live your dreams - I hope it was positive. I appreciate the opportunity I have had to be a part of this community and the games industry through GameDev.net. I know my fellow co-founders, staff, and moderators also share in that appreciation, and most importantly, I hope the feeling is mutual for you.

20 years. Wow. Thank you.

Kevin “khawk” Hawkins
GameDev.net

Check out member thoughts on the pre-20th forum thread here.

Founders, Staff, and Moderators in GameDev.net’s 20 Year History

Founders: Kevin Hawkins (Khawk), Dave Astle (Myopic Rhino), Michael Tanczos, Don Thorp, John Munsch, Geoff Howland (ghowland), Nick Murphy, Ernest Pazera (TANSTAAFL)

Staff and Moderators: Jason Astle-Adams (jbadams), Sean Kent (Washu), Drew Sikora (Gaiiden), John Hattan (jhattan), Richard Fine (superpig), Dave Mark (IADaveMark), Ben Sunshine-Hill (Sneftel), Oli Wilkinson (evolutional), Promit Roy (promit), Tom Sloper, Mona Ibrahim (monalaw), Mike Lewis (ApochPiQ), Ben Sizer (Kylotan), Emmanuel Deloget, Josh Petrie (jpetrie), Chris Bennett (dwarfsoft), Nathan Madsen (nsmadsen), Andreas Jonsson (WitchLord), Oluseyi Sonaiya (Oluseyi), Josh Tippetts (JTippetts), Graham Rhodes (grhodes_at_work), Jon Watte (hplus0603), Jeromy Walsh (JWalsh), Carsten Haubold (Caste), Luke Benstead (Kazade), Mikael Swartling (Brother Bob), Yann Lombard (Yann L), Dan Marchant (Obscure), Joris Timmermans (MadKeithV), Paul Varga (Null and Void), Timothy Wright (Glass_Knife), Rob Jones (phantom), Sander Marechal, Dave Baumgart (dbaumgart), Alex Walker (Sandman), Andrew Russell, Yannick Loitière (fruny), Mare Kuntz (sunandshadow), Kelly Murdock, Tiffany Smith, Trent Polack (mittens), Graham Wihlidal (gwihlidal), Mike Stedman (Ravuya), Jack Hoxley (jollyjeffers), Shannon Barber, Bryan Wagstaff (frob), Steve Macpherson (Evil Steve), Mike Caetano (LessBread), Mason McCuskey (mason), Tom Roe (fastcall22), Melissa Astle (frizzlefry), Jim Perry (Machaira), Karl Knechtel (Zahlman), David Michael (DavidRM), Sande Chen (sk8gundy), Amy Young, Muhammad Haggag, Heather Holland (felisandria), Brooke Hodgman (hodgman), David Michalson, Matthew Anderson, Bryan Ayeater (bishop_pass), Sean Forbes (riuthamus), Howard Jeng (SiCrane), Matt Pettineo (MJP), Tristam MacDonald (swiftcoder), Leigh Stringer (pan narrans), All8Up, Wavinator
  2. 20 points
    This is a blog about our development of Unexplored 2: The Wayfarer's Legacy. This game features state-of-the-art content generation, generative storytelling, emergent gameplay, adaptive music, and a vibrant art style.

The content generator of Unexplored 2 generates tile maps. Typical output looks like this (you can read more about our level generator here):

These tile maps are stacked, using different tiles to indicate ground types (grass or dirt in this example) and various decorations. In this case there are some bushes (large green circles), rocks (black circles), plants (small green circles), flowers (white circles), and decorative textures (grey squares). There are also special tiles that indicate gameplay data, such as the spawning point marked with an 's'. In addition, tiles can be tagged with additional information such as elevation or special subtypes.

Tile maps are a convenient data structure for generators to work with. But they are also quite rigid, and often the grid has a tendency to be visible in game. Yet in our case, when the data is loaded into the game and assets are placed, the result looks like this:

I like to think we did a pretty good job at hiding the tiles, and here's how we did it.

Voronoi Magic

The trick is that individual tiles are matched to the cells of a Voronoi diagram, which can be used to generate shapes that are much more natural looking. A Voronoi diagram is created by seeding a plane with random points and partitioning off cells so that each point in the plane belongs to the cell that corresponds to the closest seed point. There are quite a few interesting applications of Voronoi diagrams for procedural content generation (PCG). A typical Voronoi diagram created from a random but fairly even distribution of seed points looks something like this:

For Unexplored 2 we use a different type of distribution of seed points. To start with, we seed one point for each tile.
That way we are certain every tile in the tilemap can be mapped to one cell in the Voronoi diagram. Now, if you place the seed points in the middle of each tile, you end up with a straight grid that looks exactly like a tile map (for this image and the others below I also made a checkered version where half of the tiles are rendered yellow so you can see the patterns a little bit better):

A simple way of making this look better is to randomize the position of each seed point. When shifting the points, it helps to make sure each point does not move outside its original tile. The result looks something like this:

Better, but very noisy, and you don't get nice flowing lines this way. It can be improved by 'relaxing' the Voronoi diagram (a standard technique associated with Voronoi diagrams that I won't go into here). But it will always stay a little noisy, and it is difficult to effectively suggest shapes on a scale that surpasses the scale of the individual tiles.

To get around this, you need to move the points around in a smarter way than simple random displacement. Different types of movement have very different effects. For example, using Perlin noise can create interesting curved tilemaps. Or you can turn the whole thing into hexagonal shaped tiles simply by moving every other row of seed points to the left:

The real breakthrough comes when we start moving the seed points around in particular patterns to create rounded corners. The first step of this process is already taken inside the level generator. Corners are detected between ground types, and the corner tiles are marked with different shapes indicating how they should be deformed to generate a better-looking environment:

In this case, elevation differences also cause corners to appear in the tilemap. That's the reason you see the extra rounded corners in the grass in the top right and bottom left, where slopes were generated.
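The jittered-grid seeding described above can be sketched in a few lines. This is a minimal illustration in Python, not the actual Unexplored 2 code; the function names and the exact jitter bound are assumptions. The key invariants are that every tile contributes exactly one seed, and a seed never leaves its own tile:

```python
import random

def seed_points(width, height, jitter=0.45, seed=42):
    """One Voronoi seed per tile; each seed starts at its tile's center
    and is displaced randomly, but never outside its own tile
    (jitter < 0.5 tile widths guarantees that)."""
    rng = random.Random(seed)
    return {(tx, ty): (tx + 0.5 + rng.uniform(-jitter, jitter),
                       ty + 0.5 + rng.uniform(-jitter, jitter))
            for ty in range(height) for tx in range(width)}

def cell_of(point, seeds):
    """Voronoi membership: a point belongs to the cell of the nearest seed."""
    px, py = point
    return min(seeds, key=lambda t: (seeds[t][0] - px) ** 2 + (seeds[t][1] - py) ** 2)
```

With `jitter=0.0` this reproduces the straight grid; raising it gives the noisy organic cells from the screenshots, while keeping the one-tile-to-one-cell mapping intact.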
The game uses this information to displace the seed points of the Voronoi graph. Each rounded corner shifts the location of the seed point (see image below). In addition, it also shifts the seed points of its four orthogonal neighbors. This process is cumulative; seed points can be shifted multiple times if they are near several corners. After all displacements are processed, the seed points are randomized a little (about 10% of the width of a tile in either direction), and the final displacement is restricted to a maximum of 40% of the width of a tile. The result is already pretty astonishing:

But we're not there yet...

Smart Decoration

The overall shape is better, but the edges are still very straight and somewhat ragged in appearance. The way we cover that up is by placing curved assets along the edges where the colors are different. The real trick, however, is that one curve is often placed over two edges, using their relative angles to determine the direction of the curve. The result looks like this:

Next, we use 3D assets to give extra texture to the cliffs:

And finally, we add the other assets to fill out the level. The placement of these assets is dictated by the level data generated earlier, and in general follows a simple philosophy: we use smaller assets to surround larger ones, creating nice, natural transitions. Of particular note are the rocks added to the bottom of cliffs to create more variety and to visually dampen the vertical slopes dictated by the gameplay:

Local Variation

The corners are not the only type of displacement we use. For example, near artificial structures (such as the ruined walls below) you want the edges to be straighter:

In our system, this effect is easy to achieve. We simply introduce a different displacement rule that makes sure that tiles featuring artificial structures are not displaced.
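The accumulate-then-clamp step described above (cumulative corner shifts, ~10% random jitter, 40%-of-a-tile cap) can be sketched as follows. This is an illustrative Python sketch under assumed names, not the game's code:

```python
import random

TILE = 1.0  # width of one tile in world units

def displace_seed(base, corner_shifts, jitter=0.1, seed=0):
    """Sum the cumulative shifts contributed by nearby rounded corners,
    add a small random jitter (about 10% of a tile by default), then
    clamp the total displacement to at most 40% of a tile per axis."""
    rng = random.Random(seed)
    dx = sum(s[0] for s in corner_shifts) + rng.uniform(-jitter, jitter) * TILE
    dy = sum(s[1] for s in corner_shifts) + rng.uniform(-jitter, jitter) * TILE
    dx = max(-0.4 * TILE, min(0.4 * TILE, dx))  # final 40% cap
    dy = max(-0.4 * TILE, min(0.4 * TILE, dy))
    return (base[0] + dx, base[1] + dy)
```

Clamping after accumulation is what keeps a seed near several corners from drifting into a neighboring tile, however many shifts it received.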
The generator uses smaller squares to mark these tiles, and the game simply makes sure that all displacements on them are ignored:

If you look at the ground you can clearly see how specific areas can be made straight while others curve more naturally:

Isn't that neat?

There are a few other rules you can easily mix in with this technique. For example, we occasionally force the tiles into a hexagonal pattern to make sure narrow paths are wide enough to be traversed. And I am sure we will find uses for other patterns as well.

This is one of the many reasons I really love Voronoi diagrams. Another time I will write about how we use them to generate and decorate Unexplored 2's world maps. If you are interested in learning more about the game, please check us out on Fig.co or on Steam.

Note: This article was originally published on the Ludomotion website, and is reproduced here with the kind permission of the author.
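The local-variation rules from this article (artificial tiles keep the rigid grid, narrow paths snap to a hexagonal pattern) amount to a per-tile dispatch over tags. A minimal Python sketch, with tag names that are my own assumptions rather than Unexplored 2's actual data format:

```python
def final_offset(tile, offset, tags):
    """Pick the displacement rule for one tile. 'artificial' tiles ignore
    displacement so walls stay straight; 'hex' tiles shift odd rows half
    a tile sideways to form a hexagonal packing; everything else keeps
    the corner/random displacement computed earlier."""
    tile_tags = tags.get(tile, ())
    if 'artificial' in tile_tags:
        return (0.0, 0.0)                          # keep the rigid grid
    if 'hex' in tile_tags:
        return (0.5 if tile[1] % 2 else 0.0, 0.0)  # hexagonal pattern
    return offset                                  # default displacement
```

Because the rules are resolved per tile, straight, hexagonal, and organic regions can sit side by side in the same Voronoi diagram, which is exactly the effect shown in the screenshots.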
  3. 12 points
    Contributed by Steven Large from Indie Ranger. Check out their coverage of up-and-coming indie games and developers!

Developing a video game solo offers amazing creative freedom and incredibly difficult challenges. Anyone who's worked solo on any kind of project — by choice or otherwise — knows how difficult it is to do all the work by themselves. Managing all the planning and making sure every little piece fits together to make the vision from back in the “Ideas” stage a reality is a stressful endeavor — and when there’s a problem, there’s no one to blame but yourself.

Eric Nevala (@slayemin), the founder of Wobbly Duck Studios in Seattle, Washington, knows the struggle of solo developing a game from scratch. He’s been working on his game Spellbound for more than five years now, and it has not always been easy, even before he started on the game. His experience as a solo developer allows him to offer a lot of tips on what it will be like for anyone thinking of creating their own video game, or any other kind of product.

Tip #1: Get a foundation: you’ll need some knowledge and experience

You can’t paint a good portrait without understanding how colors work, nor can you make a good video game without understanding how coding works. Nevala first wanted to become a software developer back when he was 14 years old. He was playing Commander Keen, an old-school platformer where you collect all sorts of different candy and jump on monsters with a pogo stick, when he had an epiphany.

“One day over the summer, it dawned on me,” he said. “I was like ‘holy crap! Somebody made my favorite game!’ I want to make a game too. I discovered you have to be a programmer if you want to make games. Whatever it takes to make games, I’m going to [do] that.”

This was back in the days before the internet had really become a thing, so there was no YouTube and there were no tech forums to use as learning tools.
Nevala said he tried to teach himself QBasic, a beginner's programming language, using an unhelpful help file. “It was really challenging and I almost gave up,” he said.

His high school offered introductory classes in visual programming, so he took them and did well because he was extremely motivated. However, as the classes became more advanced, he started to do less well. Still, his dream of making a video game was in his grasp, so he kept pushing forward.

After finishing high school and some college, Nevala started working in software development as an independent contractor, mostly building HTML websites. He then served in the Marine Corps as a reservist in the civil affairs unit. As a webmaster for the unit, it was his job to build a reconstruction management tool that would be used to help rebuild Iraq’s infrastructure after the initial invasion in 2003.

“We had all these different projects being done by all these different units, like the Army Corps of Engineers and the Navy Construction Battalion, and no one was really tracking anything,” Nevala said. “It was all pretty much just Excel spreadsheets. So I spent about three months building a tool from the ground up.”

Three months' worth of consecutive 18-hour days and 20,000 lines of code later, Nevala had given the reconstruction effort a better way to manage its projects. He said the impetus for working so hard was to help the people of Iraq. “The idea was the faster I get this project done, the faster we can bring peace to the country,” Nevala said. “That was my motivation behind it.”

After serving, Nevala worked as a contractor in Afghanistan for 18 months, saving everything he could to get his company started. He founded Wobbly Duck Studios in 2013.

Tip #2: Have a goal in mind

Having something you want to accomplish with the product you’re making will go a long way in keeping you motivated to work on it. When Nevala built the management tool in Iraq, he wanted to help bring peace to the region.
With Spellbound, he wants to make people better through storytelling. “I think people fight each other because of misunderstandings and misalignments in values,” Nevala said. “My idea is the way people get their values is partially informed by the media they consume — and that includes video games — so maybe I can use games to help people become better by informing some ideas in them.”

Nevala believes storytelling in any medium is a way of instilling values in people. With Spellbound, he wants to tell engaging stories, make people take a step back to think about other worldviews, and give people a better understanding of one another. “My goal with Spellbound is to tell stories which give people a better idea on how they act and what the nature of evil is,” he said. “The hope is through all these different stories and morals, people become better.”

Tip #3: Your plans are going to change

Spellbound didn’t start out as an immersive VR roleplaying game. Like many projects before it, it began its life as something completely different. In fact, Spellbound started out as a turn-based strategy game. The only similarity between the first version and what it is now is that both focused on wizards.

“It was supposed to be a game that I thought would be fun to play,” Nevala said. “I felt like no one up to that point had done battle with wizards correctly, so I wanted to change that. I wanted to bring a new scale of combat to the battlefield.”

Then Nevala got hold of some VR equipment after checking out the Oculus Rift in a bar, and development took a different course. With a brand new market to tap into, he had some brand new ideas. “The market on Steam is super competitive, especially for indie developers,” Nevala said. “I was looking at that and thinking it was terrifying.
If I released something that got lost in the ocean of games out there before anyone got to play it, that would be a massive disappointment.”

VR had just come out, and Nevala thought it was a way he could differentiate himself and thrive in the market. “It’s a lot easier to be a big fish in a small pond than a small fish in a big ocean,” he said.

Tip #4: You’re going to need help

It would be something else if you developed a game completely on your own, but even if you are a jack-of-all-trades, you are probably only a master of some. You will need outside help for all the things you can’t do. When Nevala opened Wobbly Duck Studios he worked alone for about a year, and Spellbound is a game initiated and built by him. Still, a game isn’t a game without visuals, and he needed an artist. So he hired one named Dan Lane.

“Dan is one of the only people who also contributed significantly to this project,” Nevala said. “I say it’s my game, but there are others who worked on it with me, and for the most part the bulk of the effort has been Dan and [me].”

Sometimes you have to look for help; sometimes it reaches out to you. In 2015 Nevala was beginning to show off his game. It had the visuals, it had the gameplay, but it was lacking in music. Frantz Widmaier, an independent composer, saw the game on Reddit and offered to make the music for it.

“The cool thing about writing music for VR is you’re looking at a scene when you’re looking at a level,” Widmaier said. “You’re immersed in the environment, so I could create a more engaged soundtrack.”

Sometimes when independent artists work together things click just right and it’s a good experience for everyone. “It was really great working for [Nevala],” Widmaier said. “He always seems like he’s working on something cutting-edge. I’m usually the one looking to try this or try that, but this time it was him.”

Sidebar: The art of the trade

Getting outside help doesn’t have to cost a lot.
Sometimes, who you know can be a source of new talent. Nevala works in a shared office space, and one of the other tenants, Cody Cannell, who was working there in 2015, happened to be a musician who wanted to learn about game development. So Nevala reached out and offered him a trade: if Cannell made some sound effects for Spellbound, Nevala would teach him how to program a video game. The project was simple, just a small Pong-like game. But that was enough to spur Cannell on to seek a more formal education in software development. Nevala got something he needed as well. Cannell used his own voice to make the sound effects for the zombie and wraith enemies in Spellbound.

Tip #5: Recognize when something just isn’t worth it

Even though Nevala had some seed money saved up and some consulting jobs going on the side, he quickly learned game development is expensive. He went to an event where he would pitch his product to a group of investors hoping they would invest in him and his game, but the whole thing turned out to be a bust.

“It was really just a bunch of old people with a lot of money sitting around and drinking wine,” Nevala said. “We took turns pitching our ideas and were ranked based on our performance. Theoretically, the winner would have been given some money, but the reality is no one got any money and the investors just got drunk.”

In terms of performance, Nevala was dead last. Trying to pitch a VR game to a bunch of old people who don’t really know what a video game is, let alone VR, is a tough sell. “It’s like speaking Greek to them,” Nevala said. “If I could go back and do it again, I wouldn’t do it in the first place. I had to pay $500 for the ‘privilege’ of speaking to them. It was a waste of time and money.”

The biggest lessons Nevala learned from dealing with investors were, one: developing software and selling it are two entirely different things, and two: he was going to hear the word ‘no’ a lot.
“One of my biggest takeaways is that you need to do 50 pitches before you get good at it, and you’re going to hear no 50 times,” he said. “But you have to get through those 50 to get the yes.”

However, losing money is the way of business. At a different event, Nevala spoke to a CEO named Archie Gupta, who told him he needed to be prepared to lose 50% of his money. “When he told me that I looked at where I was with my company and I realized he was telling the truth,” Nevala said. “When you’re a startup, you’re like a blind person walking from one end of a maze to the other. To get to the end, you’re going to walk down a bunch of paths that are dead ends, but you won’t know it’s a dead end until you reach the end and you have to turn around and try a different path.”

Tip #6: Learn to make do with what you have

In 2015, Nevala had a working demo for Spellbound; he just needed a way to show it off, and PAX, a massive gaming convention held in several cities in the US — including Seattle — seemed like a great way to get people playing his game.

“If you wanted a booth at PAX you needed to pay a ridiculous amount of money,” Nevala said. “The other option was to be part of the Indie MEGABOOTH. I had put myself in, but they didn’t accept me because they thought my game wasn’t up to par or I wasn’t active enough in the community.”

So he wasn’t actually in PAX, but he wanted the public to play his game, and his office is across the street from Benaroya Hall, right in the convention's gravity well. He was able to use the PAX pull to grab a few of the attendees for himself using some homemade sandwich boards, laminated posters, and his girlfriend asking people to come up and play his game. “We had about 100 people come in that day,” Nevala said. “The feedback that we got was pretty good.” He said the validation he got from people made him feel like he was on the right track, and all he needed to do was keep polishing the game and making it better.
Tip #7: Inspiration is Everywhere

During the PAX-adjacent showcase in his office, a little girl, about 8 years old, wanted to play the game.

“Does it have fairies?” she asked.

“No, no fairies,” Nevala told her.

“What about unicorns?”

“No, no unicorns either. It’s kind of a dark and scary game,” he said.

Still, the girl wanted to try it. She put on the VR headset and loaded in. Like everyone else that day, she was awestruck by the virtual world that surrounded her. She gleefully looked around the star-filled sky and atmospheric forest. Until she saw the first zombie come shambling toward her. That’s when she started crying, ripped the (expensive) headset off, and ran to her mom.

An embarrassing moment for some, but Nevala thought it was a moment to rethink how he was going to make his game. “The original game was supposed to be about this red wizard using elemental magic, but for this little girl, I want to make a game that she would be happy playing,” he said. “Something totally not violent. So the next series, if I get that far, is going to be about a sorceress of light who uses diplomacy and peaceful magic to bring peace to the world.”

Even gameplay mechanics can be inspired by everyday events. One of Spellbound’s more innovative features is its locomotion. Instead of using teleportation or a trackpad, the wizard is controlled by the player swinging their arms back and forth. “I can’t have people teleporting away from the zombies,” Nevala said. “That would completely negate the fear factor. I needed something that felt natural and didn’t break the game mechanics.”

After about a month of brainstorming a locomotion system, Nevala was walking to work and noticed he was swinging his arms as he walked. “That became the stroke of genius moment for me,” he said. “I spent about a month trying to make that work. It was challenging because people walk forward based on where their hips are facing, not their heads.
It was difficult making the system work in VR - about three weeks of design challenges to discover and overcome.”

Tip #8: You Might Fail, and Fail Hard

With a lack of investors and no motivation to find any, Nevala’s money started to run out. In 2016 Dan Lane, the artist Nevala had worked with on so much of the game, had to quit. Mounting financial pressure had stymied Spellbound’s development, and Nevala was running side businesses renting out rooms and his car so he could keep his office. Debt and a starving-artist diet continued on and off for a few years, with an occasional contracting job to help keep things going, but in November of 2018, Nevala hit what he called a “rock bottom” moment.

“I had zero money, nothing,” he said. “I hadn’t eaten in a few days, or if I had it was just oatmeal. Just enough to get me through to the next day. I needed to make money as fast as I could. It was a really scary time.”

He was mad at himself for letting it get to this point. Not because his earnings were poor, but because he had paid off a debt in full instead of keeping cash on hand. “I shouldn’t have done that,” Nevala said. “I needed to make sure I had enough money to fill my belly.”

Tip #9: Don’t Give Up! Remember What You’re Trying to Make

In order to make some grocery money, Nevala tried selling some wine-openers at a farmers market using what he calls the ugliest tent. It was grueling to watch people go by and do demo after demo without making a sale. But at the end of the day, he managed to make a little cash. Enough to buy some food. “It was a good experience,” Nevala said. “It showed me when my back is against the wall I can still sell. I can make things happen.”

Even with the money troubles and the sheer amount of work and uncertainty involved with solo developing a game, Nevala keeps working on it, little by little. “I’m putting so much effort into it because I think it’s my magnum opus,” he said. “It’s the masterpiece of art that’s an expression of my identity.
If it takes me 10 years to produce this game, then so be it. It’s going to be what speaks to who I am.”

For Nevala, Spellbound is the work of art that will live on after him. “If you look at the painters from the Renaissance, they are outlived by the art they created,” he said. “They are remembered for what they made. Hopefully, this will be not only a part of who I am but something that will change the way people look at the world.”

Nevala said his challenges are in no way unique. The struggle he faces to make his game is something almost all indie developers deal with. "[These challenges] are something every indie dev, especially a solo dev, will face," he said. "The ones that survive are very few and far between. It's like winning the lottery, but the ones that survive get all the visibility, and that turns into inspiration for the rest of us."

"Indie Ranger is a niche website dedicated to covering independently made video games. Our coverage spans reviews, previews, interviews, profiles, showcases and more. At Indie Ranger, we want to show you the greatest games that you've never heard of!"

Check out more indie game reviews and interviews at Indie Ranger. Twitter: @indieranger Instagram: @indieranger Facebook: theindieranger
  4. 11 points
    Inspiration

The game Objects in Space has been demoing at shows such as PAX with a giant space-ship cockpit controller, complete with light-up buttons, analog gauges, LED status indicators, switches, etc... It adds a lot to the immersion when playing the game. They have an Arduino tutorial over on their site, and a description of a communication protocol for these kinds of controllers. I want to do the same thing for my game 😁

For this example, I'm going to spend ~$40 to add some nice, big, heavy switches to my sim-racing cockpit. The main cost involved is actually the switches themselves -- using simple switches/buttons could halve the price! They're real hardware designed to withstand 240W of power, and I'll only be putting 0.03W or so through them. Also, as a word of warning, I am a cheapskate, so I'm linking to the cheap Chinese website where I buy lots of components / tools. One of the downsides of buying components on the cheap is that they often don't come with any documentation, so I'll deal with that issue below, too.

Main components

Arduino Mega2560 - $9
Racing ignition switch panel - $26
A pile of jumper wire (male to male, male to female, etc...) - $2

Recommended Tools

A soldering iron (a good one is worth it, but a cheap one from your hardware store will do)
Solder (60/40 lead rosin core is easy to work with, though bad for the environment...)
Heat shrink (and a hot air gun, hair dryer, or cigarette lighter) or electrical tape
A hot glue gun (and glue sticks), or some epoxy resin
A multimeter
Wire cutters / stripper, or a pair of scissors if you're cheap.
Software

Arduino IDE for programming the main Arduino CPU
For making a controller that appears as a real hardware USB gamepad/joystick: FLIP for flashing new firmware onto the Arduino's USB controller, and the arduino-usb library on github
For making a controller that your game talks to directly (or appears as a virtual USB gamepad/joystick): my ois_protocol library on github, and the vJoy driver, if you want to use it as a virtual USB gamepad/joystick

Disclaimer

I took electronics in high school and learned how to use a soldering iron, and that you should connect red wires to red wires, and black wires to black wires... Volts, amps and resistance have an equation that relates them? That's about the extent of my formal electronics education. This has been a learning project for me, so there may be bad advice or mistakes in here! Please post in the comments.

Part 1, making the thing!

Dealing with undocumented switches...

As mentioned above, I'm buying cheap parts from a low-margin retailer, so my first job is figuring out how these switches/buttons work.

A simple two-connector button/switch

The button is easy, as it doesn't have any LEDs in it and just has two connectors. Switch the multimeter to continuity mode and touch one probe to each of the connectors -- the screen will display OL (open loop), meaning there is no connection between the two probes. Then push the button down while the probes are still touching the connectors -- the screen will display something like 0.1Ω and the multimeter will start beeping (indicating there is now a very low resistance connection between the probes -- a closed circuit). Now we know that when the button is pressed, it's a closed circuit, and otherwise it's an open circuit. Or diagrammatically, just a simple switch:

Connecting a switch to the Arduino

Find two pins on the Arduino board, one labelled GND, and one labelled "2" (or any other arbitrary number -- these are general purpose IO pins that we can control from software).
If we connect our switch like this and then tell the Arduino to configure pin "2" to be an INPUT pin, we get the circuit on the left (below). When the button is pressed, pin 2 will be directly connected to ground / 0V, and when not pressed, pin 2 will not be connected to anything. This state (not connected to anything) is called "floating", and unfortunately it's not a very good state for our purposes. When we read from the pin in our software (with digitalRead(2)) we will get LOW if the pin is grounded, and an unpredictable result of either LOW or HIGH if the pin is in the floating state! To fix this, we can configure the pin to be in the INPUT_PULLUP mode, which connects a resistor inside the processor and produces the circuit on the right. In this circuit, when the switch is open, our pin 2 has a high-resistance path to +5V, so it will reliably read HIGH when tested. When the switch is closed, the pin still has that high-resistance path to +5V, but also has a no-resistance path to ground / 0V, which "wins", causing the pin to read LOW. This might feel a little backwards to software developers -- pushing a button causes it to read false / LOW, and when not pressed it reads true / HIGH. We could do the opposite, but the processor only has in-built pull-up resistors and no in-built pull-down resistors, so we'll stick with this model.

The simplest Arduino program that reads this switch and tells your PC what state it's in looks something like below. You can click the upload button in the Arduino IDE and then open the Serial Monitor (in the Tools menu) to see the results.

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT_PULLUP);
}

void loop() {
  int state = digitalRead(2);
  Serial.println( state == HIGH ? "Released" : "Pressed" );
  delay(100); //artificially reduce the loop rate so the output is at a human readable rate...
}

More semi-documented switches... 
An LED switch with three connectors

The main switches on my panel thankfully have some markings on the side labelling the three connectors: I'm still not 100% certain how it works though, so again, we get out the multimeter in continuity mode and touch all the pairs of connectors with the switch both on and off... however, this time, the multimeter doesn't beep at all when we put the probes on [GND] and [+] with the switch "on"! The only configuration where the multimeter beeps (detects continuity) is when the switch is "on" and the probes are on [+] and [lamp]. The LED within the switch will block the continuity measurement, so from the above test, we can guess that the LED is connected directly to the [GND] connector, and not the [+] or [lamp] connectors.

Next, we can put the multimeter onto diode testing mode ( symbol) and test the pairs of connectors again - but this time polarity matters (red vs black probe). Now, when we put the red probe on [lamp] and the black probe on [GND], the LED lights up and the multimeter reads 2.25V. This is the forward voltage of the diode, or the minimum voltage required to make it light up. Regardless of switch position, 2.25V from [lamp] to [GND] causes the LED to come on. When we put the red probe on [+] and the black probe on [GND], the LED only comes on if the switch is also on.

From these readings, we can guess that the internals of this switch look something like below, reproducing our observations:
  [+] and [lamp] short circuit when the switch is on / closed.
  A positive voltage from [lamp] to [GND] always illuminates the LED.
  A positive voltage from [+] to [GND] illuminates the LED only when the switch is on / closed.

Honestly, the presence of the resistor in there is a bit of a guess. LEDs need to be paired with an appropriate resistor to limit the current fed into them, or they'll burn out. 
Mine haven't burned out, seem to be working correctly, and I found a forum post on the seller's website saying there was an appropriate resistor installed for 12V usage, so this saves me the hassle of trying to guess/calculate the appropriate resistor to use here.

Connecting this switch to the Arduino

The simplest way to use this on the Arduino is to ignore the [lamp] connector, connect [GND] to GND on the Arduino, and connect [+] to one of the Arduino's numbered pins, e.g. pin 3. If we set pin 3 up as INPUT_PULLUP (as with the previous button) then we'll end up with the result below. The top left shows the value that we will receive from "digitalRead(3)" in our Arduino code. When the switch is on/closed, we read LOW and the LED will illuminate! To use this kind of switch in this configuration, you can use the same Arduino code as in the button example.

Problems with this solution

Connected to the Arduino, the full circuit looks like: Here, we can see that when the switch is closed, instead of there being just a small current-limiting resistor in front of the LED (I'm guessing 100Ω here), there is also the 20kΩ pull-up resistor, which will further reduce the amount of current that can flow through the LED. This means that while this circuit works, the LED will not be very bright.

Another downside with this circuit is that we don't have programmatic control over the LED - it's on when the switch is on, and off when the switch is off. We can see what happens if we connect the [lamp] connector to either 0V or +5V below. When [lamp] is connected to 0V, the LED is permanently off (regardless of switch position), and the Arduino position sensing still behaves correctly. This gives us a nice way of programmatically disabling the LED if we want to! 
When [lamp] is connected to +5V, the LED is permanently on (regardless of switch position); however, the Arduino position sensing is broken - our pin will always read HIGH.

Connecting this switch to the Arduino properly

We can overcome the limitations described above (low current / LED intensity, and no programmatic LED control) by writing a bit more software! To resolve the conflict between controlling the LED and LED control breaking our position sensing, we can time-slice the two -- i.e. temporarily turn off the LED when reading the sensor pin (#3). First, connect the [lamp] pin to another general purpose Arduino pin, e.g. 4, so we can control the lamp. To make a program that reads the switch position reliably and also controls the LED (we'll make it blink here), we just have to be sure to turn the LED off before reading the switch state. Hopefully the LED will only be disabled for a fraction of a millisecond, so it shouldn't be a noticeable flicker:

int pinSwitch = 3;
int pinLed = 4;

void setup() {
  //connect to the PC
  Serial.begin(9600);
  //connect our switch's [+] connector to a digital sensor, and to +5V through a large resistor
  pinMode(pinSwitch, INPUT_PULLUP);
  //connect our switch's [lamp] connector to 0V or +5V directly
  pinMode(pinLed, OUTPUT);
}

void loop() {
  int lampOn = (millis()>>8)&1;   //make a variable that alternates between 0 and 1 over time
  digitalWrite(pinLed, LOW);      //connect our [lamp] to 0V so the read is clean
  int state = digitalRead(pinSwitch);
  if( lampOn )
    digitalWrite(pinLed, HIGH);   //connect our [lamp] to +5V
  Serial.println(state);          //report the switch state to the PC
}

On the Arduino Mega, pins 2-13 and 44-46 can use the analogWrite function, which doesn't actually produce voltages between 0V and +5V, but approximates them with a square wave. You can use this to control the brightness of your LED if you like! 
This code will make the light pulse from off to on instead of just blinking:

void loop() {
  int lampState = (millis()>>1)&0xFF; //make a variable that alternates between 0 and 255 over time
  digitalWrite(pinLed, LOW);          //connect our [lamp] to 0V so the read is clean
  int state = digitalRead(pinSwitch);
  if( lampState > 0 )
    analogWrite(pinLed, lampState);
}

Assembly tips

This post is already big enough so I won't add a soldering tutorial on top; I'll leave that to google! However, some basic tips:

When joining wires to large metal connectors, do make sure that your iron is hot first, and take a moment to also heat up the metal connector too. The point of soldering is to form a permanent bond by creating an alloy, but if only one part of the connection is hot, you can easily end up with a "cold joint", which superficially looks like a connection, but hasn't actually bonded.

When joining two wires together, make sure to slip a bit of heat-shrink onto one of them first - you can't slip the heat-shrink on after the joint is made. This sounds obvious, but I'm always forgetting to do so and having to fall back to using electrical tape instead of heat-shrink... Also, slide the heat-shrink far away from the joint so it doesn't heat up prematurely. After testing your soldered connection, slide the heat-shrink over the joint and heat it up.

The thin little dupont / jump wires that I listed at the start are great for solderless connections (e.g. plugging into the Arduino!) but are quite fragile. After soldering these, use hot glue to hold them in place and move any strain away from the joint itself. e.g. the red wires below might get pulled on while I'm working on this, so after soldering them to the switches, I've fixed them in place with a dab of hot glue:

Part 2, make it act as a game controller! 
To make it appear as a USB game controller in your OS, the code is quite simple, but unfortunately we also have to replace the firmware on the Arduino's USB chip with a custom one that you can grab from here: https://github.com/harlequin-tech/arduino-usb

Unfortunately, once you put this custom firmware on your Arduino, it is a USB joystick, not an Arduino any more, so in order to reprogram it you have to re-flash it with the original Arduino firmware. Iterating is a bit of a pain -- upload Arduino code, flash joystick firmware, test, flash Arduino firmware, repeat...

An example of an Arduino program for use with this firmware is below -- it sets up three buttons as inputs, reads their values, copies them into the data structure expected by this custom firmware, and sends the data. Rinse and repeat.

// define DEBUG if you want to inspect the output in the Serial Monitor
// don't define DEBUG if you're ready to use the custom firmware
#define DEBUG

//Say we've got three buttons, connected to GND and pins 2/3/4
int pinButton1 = 2;
int pinButton2 = 3;
int pinButton3 = 4;

void setup() {
  //configure our buttons' pins properly
  pinMode(pinButton1, INPUT_PULLUP);
  pinMode(pinButton2, INPUT_PULLUP);
  pinMode(pinButton3, INPUT_PULLUP);
#if defined DEBUG
  Serial.begin(9600);
#else
  Serial.begin(115200); //The data rate expected by the custom USB firmware
  delay(200);
#endif
}

//The structure expected by the custom USB firmware
#define NUM_BUTTONS 40
#define NUM_AXES 8 // 8 axes, X, Y, Z, etc
typedef struct joyReport_t {
  int16_t axis[NUM_AXES];
  uint8_t button[(NUM_BUTTONS+7)/8]; // 8 buttons per byte
} joyReport_t;

void sendJoyReport(struct joyReport_t *report) {
#ifndef DEBUG
  Serial.write((uint8_t *)report, sizeof(joyReport_t)); //send our data to the custom USB firmware
#else
  // dump human readable output for debugging
  for (uint8_t ind=0; ind<NUM_AXES; ind++) {
    Serial.print("axis[");
    Serial.print(ind);
    Serial.print("]= ");
    Serial.print(report->axis[ind]);
    Serial.print(" ");
  } 
  Serial.println();
  for (uint8_t ind=0; ind<NUM_BUTTONS/8; ind++) {
    Serial.print("button[");
    Serial.print(ind);
    Serial.print("]= ");
    Serial.print(report->button[ind], HEX);
    Serial.print(" ");
  }
  Serial.println();
#endif
}

joyReport_t joyReport = {};

void loop() {
  //check if our buttons are pressed:
  bool button1 = LOW == digitalRead( pinButton1 );
  bool button2 = LOW == digitalRead( pinButton2 );
  bool button3 = LOW == digitalRead( pinButton3 );
  //write the data into the structure (one bit per button)
  joyReport.button[0] = (button1?0x01:0) | (button2?0x02:0) | (button3?0x04:0);
  //send it to the firmware
  sendJoyReport(&joyReport);
}

Part 3, integrate it with YOUR game!

As an alternative to the above firmware hacking, if you're in control of the game that you want your device to communicate with, then you can just talk to your controller directly -- no need to make it appear as a joystick to the OS! At the start of this post I mentioned Objects In Space; this is exactly the approach that they used. They developed a simple ASCII communication protocol that can be used to allow your controller and your game to talk to each other. All you have to do is enumerate the serial ports on your system (A.K.A. COM ports on Windows, and btw look how awful this is in C), find the one with a device named "Arduino" connected to it, open that port, and start reading/writing ASCII over that handle. On the Arduino side, you just keep using the Serial.print functions that I've used in the examples so far.

At the start of this post, I also mentioned my library for solving this problem: https://github.com/hodgman/ois_protocol

This contains C++ code that you can integrate into your game to act as the "server", and Arduino code that you can run on your controller to act as the "client".

Setting up your Arduino

In example_hardware.h I've made classes to abstract the individual buttons / switches that I'm using. e.g. 
`Switch` is a simple button as in the first example, and `LedSwitch2Pin` is the controllable LED switch from my second example. The actual example code for my button panel is in example.ino. As a smaller example, let's say we have a single button to be sent to the game, and a single LED controlled by the game. The required Arduino code looks like:

#include "ois_protocol.h"

//instantiate the library
OisState ois;

//inputs are values that the game will send to the controller
struct {
  OisNumericInput myLedInput{"Lamp", Number};
} inputs;

//outputs are values the controller will send to the game
struct {
  OisNumericOutput myButtonOutput{"Button", Boolean};
} outputs;

//commands are named events that the controller will send to the game
struct {
  OisCommand quitCommand{"Quit"};
} commands;

int pinButton = 2;
int pinLed = 3;

void setup() {
  ois_setup_structs(ois, "My Controller", 1337, 42, commands, inputs, outputs);
  pinMode(pinButton, INPUT_PULLUP);
  pinMode(pinLed, OUTPUT);
}

void loop() {
  //read our button, send it to the game:
  bool buttonPressed = LOW == digitalRead(pinButton);
  ois_set(ois, outputs.myButtonOutput, buttonPressed);
  //read the LED value from the game, write it to the LED pin:
  analogWrite(pinLed, inputs.myLedInput.value);
  //example command / event:
  if( millis() > 60 * 1000 ) //if 60 seconds have passed, tell the game to quit
    ois_execute(ois, commands.quitCommand);
  //run the library code (communicates with the game)
  ois_loop(ois);
}

Setting up your Game

The game code is written in the "single header" style. Include oisdevice.h into your game to import the library. In a single CPP file, before #including the header, #define OIS_DEVICE_IMPL and #define OIS_SERIALPORT_IMPL -- this will add the source code for the classes into your CPP file. If you have your own assertions, logging, strings or vectors, there are several other OIS_* macros that can be defined before importing the header, in order to get it to use your engine's facilities. 
To enumerate the COM ports and create a connection for a particular device, you can use some code such as this:

OIS_PORT_LIST portList;
OIS_STRING_BUILDER sb;
SerialPort::EnumerateSerialPorts(portList, sb, -1);
for( auto it = portList.begin(); it != portList.end(); ++it )
{
  std::string label = it->name + '(' + it->path + ')';
  if( /*device selection choice*/ )
  {
    int gameVersion = 1;
    OisDevice* device = new OisDevice(it->id, it->path, it->name, gameVersion, "Game Title");
    ...
  }
}

Once you have an OisDevice instance, you should call its Poll member function regularly (e.g. every frame); you can retrieve the current state of the controller's outputs with DeviceOutputs(), consume events from the device with PopEvents(), and send values to the device with SetInput(). An example application that does all of this is available at example_ois2vjoy/main.cpp.

Part 4, what if you wanted part 2 and 3 at the same time!?

To make your controller work in other games (part 2), we had to install custom firmware and one Arduino program, but to make the controller fully programmable by your game (part 3), we used the standard Arduino firmware and a different Arduino program. What if we want both at once? Well, the example application linked above, ois2vjoy, solves this problem. This application talks to your OIS device (the program from Part 3) and then, on your PC, converts that data into regular gamepad/joystick data, which is sent to a virtual gamepad/joystick device. This means you can leave your custom controller using the OIS library all the time (no custom firmware required), and when you want to use it as a regular gamepad/joystick, you just run the ois2vjoy application on your PC, which does the translation for you.

Part 5, wrap up

I hope this was useful or interesting to some of you. Thanks for making it to the end! If this does tickle your fancy, please consider collaborating on / contributing to the ois_protocol library! 
I think it would be great to make a single protocol to support all kinds of custom controllers in games, and encourage more games to directly support custom controllers!
  5. 10 points
edit: Seeing this has been linked outside of game-development circles: "ECS" (this wikipedia page is garbage, btw -- it conflates EC-frameworks and ECS-frameworks, which aren't the same...) is a faux-pattern circulated within game-dev communities, which is basically a version of the relational model, where "entities" are just IDs that represent a formless object, "components" are rows in specific tables that reference an ID, and "systems" are procedural code that can modify the components. This "pattern" is always posed as a solution to an over-use of inheritance, without mentioning that over-use of inheritance is actually considered bad under OOP guidelines. Hence the rant. The point isn't to present the "one true way" to write software; it's to get people to actually look at existing design guidelines.

Inspiration

This blog post is inspired by Aras Pranckevičius' recent publication of a talk aimed at junior programmers, designed to get them to come to terms with new "ECS" architectures. Aras follows the typical pattern (explained below): he shows some terrible OOP code and then shows that the relational model is a great alternative solution (but calls it "ECS" instead of relational). This is not a swipe at Aras at all - I'm a fan of his work and commend him on the great presentation! The reason I'm picking on his presentation in particular, instead of the hundred other ECS posts that have been made on the interwebs, is that he's gone through the effort of actually publishing a git repository to go along with his presentation, which contains a simple little "game" as a playground for demonstrating different architecture choices. This tiny project makes it easy for me to concretely demonstrate my points, so, thanks Aras! You can find Aras' slides at http://aras-p.info/texts/files/2018Academy - ECS-DoD.pdf and the code at https://github.com/aras-p/dod-playground. 
I'm not going to analyse the final ECS architecture from that talk (yet?), but I'm going to focus on the straw-man "bad OOP" code from the start. I'll show what it would look like if we actually fixed all of the OOD rule violations. Spoiler: fixing the OOD violations actually results in a similar performance improvement to Aras' ECS conversion, plus it actually uses less RAM and requires fewer lines of code than the ECS version!

TL;DR: Before you decide that OOP is shit and ECS is great, stop and learn OOD (to know how to use OOP properly) and learn relational (to know how to use ECS properly too).

I've been a long-time ranter in many "ECS" threads on the forum, partly because I don't think it deserves to exist as a term (spoiler: it's just an ad-hoc version of the relational model), but also because almost every single blog, presentation, or article that promotes the "ECS" pattern follows the same structure:
  Show some terrible OOP code, which has a terribly flawed design based on an over-use of inheritance (and incidentally, a design that breaks many OOD rules).
  Show that composition is a better solution than inheritance (and don't mention that OOD actually teaches this same lesson).
  Show that the relational model is a great fit for games (but call it "ECS").

This structure grinds my gears because: (A) it's a straw-man argument... it's apples to oranges (bad code vs good code)... which just feels dishonest, even if it's unintentional and not actually required to show that your new architecture is good; but more importantly: (B) it has the side effect of suppressing knowledge and unintentionally discouraging readers from interacting with half a century of existing research. The relational model was first written about in the 1960's. Through the 70's and 80's this model was refined extensively. 
There are common beginner questions like "which class should I put this data in?", which are often answered in vague terms like "you just need to gain experience and you'll know by feel"... but in the 70's this question was extensively pondered and solved in the general case in formal terms; it's called database normalization. By ignoring existing research and presenting ECS as a completely new and novel solution, you're hiding this knowledge from new programmers.

Object oriented programming dates back just as far, if not further (work in the 1950's began to explore the style)! However, it was in the 1990's that OO became a fad - hyped, viral and, very quickly, the dominant programming paradigm. A slew of new OO languages exploded in popularity, including Java and (the standardized version of) C++. However, because it was a hype-train, everyone needed to know this new buzzword to put on their resume, yet no one really grokked it. These new languages had added a lot of OO features as keywords -- class, virtual, extends, implements -- and I would argue that it's at this point that OO split into two distinct entities with a life of their own.

I will refer to the use of these OO-inspired language features as "OOP", and the use of OO-inspired design/architecture techniques as "OOD". Everyone picked up OOP very quickly. Schools taught OO classes that were efficient at churning out new OOP programmers... yet knowledge of OOD lagged behind. I argue that code that uses OOP language features, but does not follow OOD design rules, is not OO code. Most anti-OOP rants are eviscerating code that is not actually OO code. OOP code has a very bad reputation, and I assert that this is in part because most OOP code does not follow OOD rules, and thus isn't actually "true" OO code.

Background

As mentioned above, the 1990's was the peak of the "OO fad", and it's during this time that "bad OOP" was probably at its worst. 
If you studied OOP during this time, you probably learned "the 4 pillars of OOP":
  Abstraction
  Encapsulation
  Polymorphism
  Inheritance

I'd prefer to call these "4 tools of OOP" rather than 4 pillars. These are tools that you can use to solve problems. Simply learning how a tool works is not enough though; you need to know when you should be using them... It's irresponsible for educators to teach people a new tool without also teaching them when it's appropriate to use each of them. In the early 2000's, there was a push-back against the rampant misuse of these tools, a kind of second wave of OOD thought. Out of this came the SOLID mnemonic to use as a quick way to evaluate a design's strength. Note that most of these bits of advice were actually widely circulated in the 90's, but didn't yet have the cool acronym to cement them as the five core rules...

Single responsibility principle. Every class should have one reason to change. If class "A" has two responsibilities, create new classes "B" and "C" to handle each of them in isolation, and then compose "A" out of "B" and "C".

Open/closed principle. Software changes over time (i.e. maintenance is important). Try to put the parts that are likely to change into implementations (i.e. concrete classes) and build interfaces around the parts that are unlikely to change (e.g. abstract base classes).

Liskov substitution principle. Every implementation of an interface needs to 100% comply with the requirements of that interface. i.e. any algorithm that works on the interface should continue to work for every implementation.

Interface segregation principle. Keep interfaces as small as possible, in order to ensure that each part of the code "knows about" as little of the code-base as possible. i.e. avoid unnecessary dependencies. This is also just good advice in C++, where compile times suck if you don't follow it.

Dependency inversion principle. 
Instead of having two concrete implementations communicate directly (and depend on each other), they can usually be decoupled by formalizing their communication interface as a third class that acts as an interface between them. This could be an abstract base class that defines the method calls used between them, or even just a POD struct that defines the data passed between them.

Not included in the SOLID acronym, but I would argue just as important, is the:

Composite reuse principle. Composition is the right default™. Inheritance should be reserved for when it's absolutely required. This gives us SOLID-C(++).

From now on, I'll refer to these by their three letter acronyms -- SRP, OCP, LSP, ISP, DIP, CRP...

A few other notes: In OOD, interfaces and implementations are ideas that don't map to any specific OOP keywords. In C++, we often create interfaces with abstract base classes and virtual functions, and then implementations inherit from those base classes... but that is just one specific way to achieve the idea of an interface. In C++, we can also use PIMPL, opaque pointers, duck typing, typedefs, etc... You can create an OOD design and then implement it in C, where there aren't any OOP language keywords! So when I'm talking about interfaces here, I'm not necessarily talking about virtual functions -- I'm talking about the idea of implementation hiding. Interfaces can be polymorphic, but most often they are not! A good use for polymorphism is rare, but interfaces are fundamental to all software. As hinted above, if you create a POD structure that simply stores some data to be passed from one class to another, then that struct is acting as an interface - it is a formal data definition. Even if you just make a single class in isolation with a public and a private section, everything in the public section is the interface and everything in the private section is the implementation. 
Inheritance actually has (at least) two types -- interface inheritance and implementation inheritance. In C++, interface inheritance includes abstract base classes with pure-virtual functions, PIMPL, and conditional typedefs. In Java, interface inheritance is expressed with the implements keyword. In C++, implementation inheritance occurs any time a base class contains anything besides pure-virtual functions. In Java, implementation inheritance is expressed with the extends keyword. OOD has a lot to say about interface inheritance, but implementation inheritance should usually be treated as a bit of a code smell!

And lastly, I should probably give a few examples of terrible OOP education and how it results in bad code in the wild (and in OOP's bad reputation). When you were learning about hierarchies / inheritance, you probably had a task something like: "Let's say you have a university app that contains a directory of Students and Staff. We can make a Person base class, and then a Student class and a Staff class that inherit from Person!"

Nope, nope, nope. Let me stop you there. The unspoken sub-text beneath the LSP is that class hierarchies and the algorithms that operate on them are symbiotic. They're two halves of a whole program. OOP is an extension of procedural programming, and it's still mainly about those procedures. If we don't know what kinds of algorithms are going to be operating on Students and Staff (and which algorithms would be simplified by polymorphism), then it's downright irresponsible to dive in and start designing class hierarchies. You have to know the algorithms and the data first.

When you were learning about hierarchies / inheritance, you probably also had a task something like: "Let's say you have a shape class. We could also have squares and rectangles as sub-classes. Should we have square is-a rectangle, or rectangle is-a square?"

This is actually a good one to demonstrate the difference between implementation inheritance and interface inheritance. 
If you're using the implementation-inheritance mindset, then the LSP isn't on your mind at all and you're only thinking practically about trying to reuse code using inheritance as a tool. From this perspective, the following makes perfect sense:

struct Square {
  int width;
};
struct Rectangle : Square {
  int height;
};

A square just has a width, while a rectangle has a width + height, so extending the square with a height member gives us a rectangle!

As you might have guessed, OOD says that doing this is (probably) wrong. I say probably because you can argue over the implied specifications of the interface here... but whatever. A square always has the same height as its width, so from the square's interface, it's completely valid to assume that its area is "width * width". By inheriting from square, the rectangle class (according to the LSP) must obey the rules of square's interface. Any algorithm that works correctly with a square must also work correctly with a rectangle. Take the following algorithm:

std::vector<Square*> shapes;
int area = 0;
for(auto s : shapes)
  area += s->width * s->width;

This will work correctly for squares (producing the sum of their areas), but will not work for rectangles. Therefore, Rectangle violates the LSP rule.

If you're using the interface-inheritance mindset, then neither Square nor Rectangle will inherit from the other. The interfaces for a square and a rectangle are actually different, and one is not a super-set of the other.

So OOD actually discourages the use of implementation inheritance. As mentioned before, if you want to re-use code, OOD says that composition is the right way to go! 
For what it's worth though, the correct version of the above (bad) implementation-inheritance hierarchy in C++ is:

struct Shape {
  virtual int area() const = 0;
};
struct Square : public virtual Shape {
  virtual int area() const { return width * width; }
  int width;
};
struct Rectangle : private Square, public virtual Shape {
  virtual int area() const { return width * height; }
  int height;
};

"public virtual" means "implements" in Java -- for use when implementing an interface. "private" allows you to extend a base class without also inheriting its interface -- in this case, Rectangle is-not-a Square, even though it's inherited from it. I don't recommend writing this kind of code, but if you do like to use implementation inheritance, this is the way that you're supposed to be doing it!

TL;DR - your OOP class told you what inheritance was. Your missing OOD class should have told you not to use it 99% of the time!

Entity / Component frameworks

With all that background out of the way, let's jump into Aras' starting point -- the so-called "typical OOP" starting point. Actually, one last gripe -- Aras calls this code "traditional OOP", which I object to. This code may be typical of OOP in the wild, but, as above, it breaks all sorts of core OO rules, so it should not at all be considered traditional. I'm going to start from the earliest commit before he starts fixing the design towards "ECS": "Make it work on Windows again" 3529f232510c95f53112bbfff87df6bbc6aa1fae

// -------------------------------------------------------------------------------------------------
// super simple "component system"

class GameObject;
class Component;

typedef std::vector<Component*> ComponentVector;
typedef std::vector<GameObject*> GameObjectVector;

// Component base class. Knows about the parent game object, and has some virtual methods. 
class Component { public: Component() : m_GameObject(nullptr) {} virtual ~Component() {} virtual void Start() {} virtual void Update(double time, float deltaTime) {} const GameObject& GetGameObject() const { return *m_GameObject; } GameObject& GetGameObject() { return *m_GameObject; } void SetGameObject(GameObject& go) { m_GameObject = &go; } bool HasGameObject() const { return m_GameObject != nullptr; } private: GameObject* m_GameObject; }; // Game object class. Has an array of components. class GameObject { public: GameObject(const std::string&& name) : m_Name(name) { } ~GameObject() { // game object owns the components; destroy them when deleting the game object for (auto c : m_Components) delete c; } // get a component of type T, or null if it does not exist on this game object template<typename T> T* GetComponent() { for (auto i : m_Components) { T* c = dynamic_cast<T*>(i); if (c != nullptr) return c; } return nullptr; } // add a new component to this game object void AddComponent(Component* c) { assert(!c->HasGameObject()); c->SetGameObject(*this); m_Components.emplace_back(c); } void Start() { for (auto c : m_Components) c->Start(); } void Update(double time, float deltaTime) { for (auto c : m_Components) c->Update(time, deltaTime); } private: std::string m_Name; ComponentVector m_Components; }; // The "scene": array of game objects. static GameObjectVector s_Objects; // Finds all components of given type in the whole scene template<typename T> static ComponentVector FindAllComponentsOfType() { ComponentVector res; for (auto go : s_Objects) { T* c = go->GetComponent<T>(); if (c != nullptr) res.emplace_back(c); } return res; } // Find one component of given type in the scene (returns first found one) template<typename T> static T* FindOfType() { for (auto go : s_Objects) { T* c = go->GetComponent<T>(); if (c != nullptr) return c; } return nullptr; } Ok, 100 lines of code is a lot to dump at once, so let's work through what this is... 
Another bit of background is required -- it was popular for games in the 90's to use inheritance to solve all their code re-use problems. You'd have an Entity, extended by Character, extended by Player and Monster, etc... This is implementation-inheritance, as described earlier (a code smell), and it seems like a good idea to begin with, but eventually results in a very inflexible code-base. Hence, OOD has the "composition over inheritance" rule mentioned above. So, in the 2000's the "composition over inheritance" rule became popular, and gamedevs started writing this kind of code instead. What does this code do? Well, nothing good. To put it in simple terms, this code re-implements the existing language feature of composition as a runtime library instead of a language feature. You can think of it as if this code is actually constructing a new meta-language on top of C++, and a VM to run that meta-language on. In Aras' demo game, this code is not required (we'll soon delete all of it!) and only serves to reduce the game's performance by about 10x. What does it actually do, though? This is an "Entity/Component" framework (sometimes confusingly called an "Entity/Component system") -- but completely different to an "Entity Component System" framework (which are never called "Entity Component System systems" for obvious reasons). It formalizes several "EC" rules:

- The game will be built out of featureless "Entities" (called GameObjects in this example), which themselves are composed out of "Components".
- GameObjects fulfill the service locator pattern - they can be queried for a child component by type.
- Components know which GameObject they belong to - they can locate sibling components by querying their parent GameObject.
- Composition may only be one level deep (Components may not own child components, and GameObjects may not own child GameObjects).
- A GameObject may only have one component of each type (some frameworks enforced this, others did not).
- Every component (probably) changes over time in some unspecified way - so the interface includes "virtual void Update".
- GameObjects belong to a scene, which can perform queries over all GameObjects (and thus also over all Components).

This kind of framework was very popular in the 2000's, and though restrictive, proved flexible enough to power countless games from that time and still today. However, it's not required. Your programming language already contains support for composition as a language feature - you don't need a bloated framework to access it... Why do these frameworks exist, then? Well, to be fair, they enable dynamic, runtime composition. Instead of GameObject types being hard-coded, they can be loaded from data files. This is great for allowing game/level designers to create their own kinds of objects... However, on most game projects you have a very small number of designers and a literal army of programmers, so I would argue it's not a key feature. Worse than that though, it's not even the only way that you could implement runtime composition! For example, Unity is based on C# as a "scripting language", and many other games use alternatives such as Lua -- your designer-friendly tool can generate C#/Lua code to define new game-objects, without the need for this kind of bloated framework! We'll re-add this "feature" in a later follow-up post, in a way that doesn't cost us a 10x performance overhead...

Let's evaluate this code according to OOD:

- GameObject::GetComponent uses dynamic_cast. Most people will tell you that dynamic_cast is a code smell - a strong hint that something is wrong. I would say that it indicates that you have an LSP violation on your hands -- you have some algorithm that's operating on the base interface, but it demands to know about different implementation details. That's the specific reason that it smells.
- GameObject is kind of ok if you imagine that it's fulfilling the service locator pattern...
but going beyond the OOD critique for a moment, this pattern creates implicit links between parts of the project, and I feel (without a wikipedia link to back me up with comp-sci knowledge) that implicit communication channels are an anti-pattern, and explicit communication channels should be preferred. This same argument applies to the bloated "event frameworks" that sometimes appear in games...
- I would argue that Component is an SRP violation because its interface (virtual void Update(time)) is too broad. The use of "virtual void Update" is pervasive within game development, but I'd also say that it is an anti-pattern. Good software should allow you to easily reason about the flow of control and the flow of data. Putting every single bit of gameplay code behind a "virtual void Update" call completely and utterly obfuscates both the flow of control and the flow of data. IMHO, invisible side effects, a.k.a. action at a distance, are the most common source of bugs, and "virtual void Update" ensures that almost everything is an invisible side-effect.
- Even though the goal of the Component class is to enable composition, it's doing so via inheritance, which is a CRP violation.
- The one good part is that the example game code bends over backwards to fulfill the SRP and ISP rules -- it's split into a large number of simple components with very small responsibilities, which is great for code re-use. However, it's not great at DIP -- many of the components do have direct knowledge of each other.

So, all of the code that I've posted above can actually just be deleted. That whole framework. Delete GameObject (aka Entity in other frameworks), delete Component, delete FindOfType. It's all part of a useless VM that's breaking OOD rules and making our game terribly slow.
Frameworkless composition (AKA using the features of the #*@!ing programming language)

If we delete our composition framework and don't have a Component base class, how will our GameObjects manage to use composition and be built out of Components? As hinted in the heading, instead of writing that bloated VM and then writing our GameObjects on top of it in our weird meta-language, let's just write them in C++, because we're #*@!ing game programmers and that's literally our job.

Here's the commit where the Entity/Component framework is deleted: https://github.com/hodgman/dod-playground/commit/f42290d0217d700dea2ed002f2f3b1dc45e8c27c
Here's the original version of the source code: https://github.com/hodgman/dod-playground/blob/3529f232510c95f53112bbfff87df6bbc6aa1fae/source/game.cpp
Here's the modified version of the source code: https://github.com/hodgman/dod-playground/blob/f42290d0217d700dea2ed002f2f3b1dc45e8c27c/source/game.cpp

The gist of the changes is:
- Removing ": public Component" from each component type.
- Adding a constructor to each component type. OOD is about encapsulating the state of a class, but since these classes are so small/simple, there's not much to hide -- the interface is a data description. However, one of the main reasons that encapsulation is a core pillar is that it allows us to ensure that class invariants are always true... or in the event that an invariant is violated, you hopefully only need to inspect the encapsulated implementation code in order to find your bug. In this example code, it's worth adding the constructors to enforce a simple invariant -- all values must be initialized.
- Renaming the overly generic "Update" methods to reflect what they actually do -- UpdatePosition for MoveComponent and ResolveCollisions for AvoidComponent.
- Removing the three hard-coded blocks of code that resemble a template/prefab -- code that creates a GameObject containing specific Component types -- and replacing them with three C++ classes.
Fix the "virtual void Update" anti-pattern. Instead of components finding each other via the service locator pattern, the game objects explicitly link them together during construction. The objects So, instead of this "VM" code: // create regular objects that move for (auto i = 0; i < kObjectCount; ++i) { GameObject* go = new GameObject("object"); // position it within world bounds PositionComponent* pos = new PositionComponent(); pos->x = RandomFloat(bounds->xMin, bounds->xMax); pos->y = RandomFloat(bounds->yMin, bounds->yMax); go->AddComponent(pos); // setup a sprite for it (random sprite index from first 5), and initial white color SpriteComponent* sprite = new SpriteComponent(); sprite->colorR = 1.0f; sprite->colorG = 1.0f; sprite->colorB = 1.0f; sprite->spriteIndex = rand() % 5; sprite->scale = 1.0f; go->AddComponent(sprite); // make it move MoveComponent* move = new MoveComponent(0.5f, 0.7f); go->AddComponent(move); // make it avoid the bubble things AvoidComponent* avoid = new AvoidComponent(); go->AddComponent(avoid); s_Objects.emplace_back(go); } We now have this normal C++ code: struct RegularObject { PositionComponent pos; SpriteComponent sprite; MoveComponent move; AvoidComponent avoid; RegularObject(const WorldBoundsComponent& bounds) : move(0.5f, 0.7f) // position it within world bounds , pos(RandomFloat(bounds.xMin, bounds.xMax), RandomFloat(bounds.yMin, bounds.yMax)) // setup a sprite for it (random sprite index from first 5), and initial white color , sprite(1.0f, 1.0f, 1.0f, rand() % 5, 1.0f) { } }; ... // create regular objects that move regularObject.reserve(kObjectCount); for (auto i = 0; i < kObjectCount; ++i) regularObject.emplace_back(bounds); The algorithms Now the other big change is in the algorithms. Remember at the start when I said that interfaces and algorithms were symbiotic, and both should impact the design of the other? Well, the "virtual void Update" anti-pattern is also an enemy here. 
The original code has a main loop algorithm that consists of just:

```cpp
// go through all objects
for (auto go : s_Objects) {
    // Update all their components
    go->Update(time, deltaTime);
}
```

You might argue that this is nice and simple, but IMHO it's so, so bad. It completely obfuscates both the flow of control and the flow of data within the game. If we want to be able to understand our software, if we want to be able to maintain it, if we want to be able to bring on new staff, if we want to be able to optimise it, or if we want to be able to make it run efficiently on multiple CPU cores, we need to be able to understand both the flow of control and the flow of data. So "virtual void Update" can die in a fire. Instead, we end up with a more explicit main loop that makes the flow of control much easier to reason about (the flow of data is still obfuscated here; we'll get around to fixing that in later commits):

```cpp
// Update all positions
for (auto& go : s_game->regularObject) {
    UpdatePosition(deltaTime, go, s_game->bounds.wb);
}
for (auto& go : s_game->avoidThis) {
    UpdatePosition(deltaTime, go, s_game->bounds.wb);
}

// Resolve all collisions
for (auto& go : s_game->regularObject) {
    ResolveCollisions(deltaTime, go, s_game->avoidThis);
}
```

The downside of this style is that for every single new object type that we add to the game, we have to add a few lines to our main loop. I'll address/solve this in a future blog in this series.

Performance

There's still a lot of outstanding OOD violations, some bad design choices, and lots of optimization opportunities remaining, but I'll get to them in the next blog in this series. As it stands at this point though, the "fixed OOD" version either almost matches or beats the final "ECS" code from the end of the presentation... And all we did was take the bad faux-OOP code and make it actually obey the rules of OOP (and delete 100 lines of code)!
Next steps There's much more ground that I'd like to cover here, including solving the remaining OOD issues, immutable objects (functional style programming) and the benefits it can bring to reasoning about data flows, message passing, applying some DOD reasoning to our OOD code, applying some relational wisdom to our OOD code, deleting those "entity" classes that we ended up with and having purely components-only, different styles of linking components together (pointers vs handles), real world component containers, catching up to the ECS version with more optimization, and then further optimization that wasn't also present in Aras' talk (such as threading / SIMD). No promises on the order that I'll get to these, or if, or when...
  6. 10 points
Hey! My name’s Tim Trankle and I’m a 3D Artist for Pixelmatic. My most recent task was to create the XM-03 Hornet for Infinite Fleet. I overhauled our 3D asset production workflow to take advantage of new tools and techniques to keep our visuals on the cutting edge. The new workflow has really taken our ships to another level and I’d like to show you just how I make them.

The Model

Every ship model has to start somewhere, and I usually start a ship by making its most recognizable features. In the case of the Hornet, that would be the energy-sapping arms and the wings. They contribute heavily to the silhouette and help guide the overall shape. While I’ve worked with several modelling packages, my go-to for hard surface work is Blender. This is because its modifier toolset is great for creating detail in a non-destructive way, and the modelling tools themselves are built for making meshes quickly. It also has great addon support for extending its functionality, and there’s one addon that was crucial to this ship’s production that I’ll talk about in a little bit.

For hard surface modelling, I usually like to start with a single polygon plane and build my way up from there. I find that it allows for the most flexibility when creating more unusual shapes. For the Hornet, nearly every large component that wasn’t based on a cylinder started its life as a single plane. Here, all of the components are finished, but the polygons need to be smoothed. Right now, you can see that they are all hard shaded and identifiable. Typically, the solution to this is to create Smoothing Groups, where some edges are soft shaded and others are hard shaded. This gets you most of the way there, but the edge between the two surfaces is perfectly sharp. On any real surface, no matter how sharp the edge is, you’ll still see a highlight due to a bevel between the sides. Simply changing the smoothing groups doesn’t give you this.
In the past, I’ve used an addon for Blender called TexTools to bake out a specially made normal map which would give me the bevel. As you can see, the edge around the top has a highlight on it conveying a bevel even though the model itself hasn’t changed. This works very well and requires very little setup but it is dependent on the texture resolution. If the texture is too low-res, then the bevel will appear pixelated and show some artifacts. For a ship as complex as the Hornet, this would’ve required an absurdly high-resolution texture to get a crisp bevel on the entire ship. Since most of the surface is flat, that means the normal map is mostly wasted space. I needed a new solution. Bevelled Edges and Custom Normals The solution I used was to create real bevels in the model. Normally, doing this by itself would result in shading artifacts as large faces are used to blend between vertex normals that are facing different directions. However, by adjusting the vertex normals manually, I can change the shading to be much cleaner. In Blender, I used an addon called Blend4Web which makes the adjustment of these normals very straightforward. I went in and added these bevels to all the hard edges around the ship. Now, I don’t need smoothing groups as the entire model would be smooth shaded. I just needed to adjust the vertex normals. The results are edges that give crisp highlights and look much more solid and realistic. Adding the Detail The Hornet is covered in panel seams, extrusions, markers, and lights. Before, I’d do all of this with the texture. I’d paint custom normal map details in Quixel nDo and then do the rest of the texturing in 3D-Coat but that strategy wouldn’t work here. Like with the normal map bevels mentioned earlier, the texture for the ship would need to be exceedingly high-res to capture all the detail I wanted to put on and have it be crisp and legible. The answer to this is to only create textures for the area where the detail needs to be. 
This is accomplished with decals. The decals, in this case, are actually polygons that rest just above the surface of the model. They can affect the color, normal, and roughness maps of the surface underneath them. This means that they can add detail that blends seamlessly with the underlying textures and still adheres to all of the environmental lighting conditions. To create these, I use an addon for Blender called DecalMachine. It provides a wide assortment of tools for creating decal textures and applying them to your model. You can create panel lines based on mesh intersections, place decals directly on the surface, or project planes that will conform to the shape of your model if it’s curved. All of this makes the decal creation process fast, flexible, and fun.

I can place the decals anywhere on the model and use Blender’s modifiers, like mirroring and arrays, to easily populate the surface of the ship with detail. Not only do the decals affect the normal map to create things like rivets or vents, but I can also create text and icons like the USF logo and place them wherever I want without having to erase texture layers or rebake any maps.

Once I place all the decals that the ship needs, I’ll create a texture atlas. This is a single texture that contains all of the decals that I used. This is the real power of using decals, as this texture is independent of the rest of the ship. I can make the decals as small as I want and they won’t lose resolution. On top of that, I can use this atlas for all of the ships in the game, which drastically reduces the number of textures that need to be stored in memory. For the XM-03 Hornet, every detail you see is done with decals. The panel seams, the vents, the warning labels, and even the lights are all separate polygons resting on the surface. This results in quite a few decals.
Texturing Since all of the unique detail on the ship is accomplished with the decals, this means that I can cover the rest of the surface with a set of tiling materials. These textures can tile independently of the decals which means I can scale them down and maintain a high level of detail even when the ship is viewed from very close up. Even though these textures are only 512x512, there is no pixelation. One of the downsides of this approach though is that the surface of the ship becomes very homogeneous since everything is the same color. To break up the color and the roughness, I made a special shader that added a tiling dirt texture to the more occluded areas of the ship based on an ambient occlusion map. Just like the rest of the textures for the paint, this dirt texture can be tiled independently from the decals or other textures. With that, the ship is finished! All that’s left is to set up some lights and the right post-processing effects to really show it off. We hope you enjoy gorgeous graphics as much as we do. While we still need to improve and optimize this new workflow, this is a huge step forward for us to be able to provide you with stunning models in Infinite Fleet. Stay tuned for more to come and feel free to reach us on our Discord if you have any questions. Note: This article was originally published on the Infinite Fleet blog, and is reproduced here with the kind permission of the author. You can chat with the creators on their Discord or Twitter, or check out the trailer for the game on YouTube.
  7. 9 points
    Hey guys, We have new screenshots of our game in development showing you the art style. And even if there is still a lot of work to do, things missing here and there, and unfinished textures, we can say that we are quite happy with it. I hope you will enjoy them. More screenshots on our Steam page Thank you for reading!
  8. 8 points
    This tutorial is published with the permission of Erik Roystan Ross of https://roystan.net. Roystan writes articles about game development using Unity. GameDev.net strongly encourages you to support authors of high quality content, so if you enjoy Toon Shader Using Unity, then please consider becoming a patron. You can view the original article at https://roystan.net/articles/toon-shader.html. Source code at https://github.com/IronWarrior/UnityToonShader. Become a Patron! Interested in posting your articles on GameDev.net? Contact us or submit directly.
  9. 8 points
Introduction

Explicit resource state management and synchronization is one of the main advantages and main challenges that modern graphics APIs such as Direct3D12 and Vulkan offer application developers. It makes rendering command recording very efficient, but getting state management right is a challenging problem. This article explains why explicit state management is important and introduces a solution implemented in Diligent Engine, a modern cross-platform low-level graphics library. Diligent Engine has Direct3D11, Direct3D12, OpenGL/GLES and Vulkan backends and supports Windows Desktop, Universal Windows, Linux, Android, Mac and iOS platforms. Its full source code is available on GitHub and is free to use. This article gives an introduction to Diligent Engine.

Synchronization in Next-Gen APIs

Modern graphics applications can best be described as client-server systems, where the CPU is a client that records rendering commands and puts them into queue(s), and the GPU is a server that asynchronously pulls commands from the queue(s) and processes them. As a result, commands are not executed immediately when the CPU issues them, but rather some time later (typically one to two frames), when the GPU gets to the corresponding point in the queue. Besides that, GPU architecture is very different from CPU architecture because of the kinds of problems that GPUs are designed to handle. While CPUs are great at running algorithms with lots of flow control constructs (branches, loops, etc.), such as handling events in an application input loop, GPUs are more efficient at crunching numbers by executing the same computation thousands and even millions of times. Of course, there is a little bit of oversimplification in that statement, as modern CPUs also have wide SIMD (single instruction, multiple data) units that allow them to perform computations efficiently as well. Still, GPUs are at least an order of magnitude faster at these kinds of problems.
The main challenge that both CPUs and GPUs need to solve is memory latency. CPUs are out-of-order machines with beefy cores and large caches that use fancy prefetching and branch-prediction circuitry to make sure that data is available when a core actually needs it. GPUs, in contrast, are in-order beasts with small caches, thousands of tiny cores and very deep pipelines. They don't use any branch prediction or prefetching, but instead maintain tens of thousands of threads in flight and are capable of switching between threads instantaneously. When one group of threads waits for a memory request, the GPU can simply switch to another group, provided it has enough work. When programming a CPU (when talking about CPUs I will mean x86 CPUs; things may be a little bit more involved for ARM ones), the hardware does a lot of things that we usually take for granted. For instance, after one core has written something to a memory address, we know that another core can immediately read the same memory. The cache line containing the data will need to do a little bit of travelling through the CPU, but eventually another core will get the correct piece of information with no extra effort from the application. GPUs, in contrast, give very few explicit guarantees. In many cases, you cannot expect that a write is visible to subsequent reads unless special care is taken by the application. Besides that, the data may need to be converted from one form to another before it can be consumed by the next step. A few examples where explicit synchronization may be required:

- After data has been written to a texture or a buffer through an unordered access view (UAV in Direct3D) or an image (in Vulkan/OpenGL terminology), the GPU may need to wait until all writes are complete and flush the caches to memory before the same texture or buffer can be read by another shader.
- After a shadow map rendering command is executed, the GPU may need to wait until rasterization and all writes are complete, flush the caches, and change the texture layout to a format optimized for sampling before that shadow map can be used in a lighting shader.
- If the CPU needs to read data previously written by the GPU, it may need to invalidate that memory region to make sure that its caches get the updated bytes.

These are just a few examples of the synchronization dependencies that a GPU needs to resolve. Traditionally, all of these problems were handled by the API/driver and hidden from the developer. Old-school implicit APIs such as Direct3D11 and OpenGL/GLES work that way. This approach, while convenient from a developer's point of view, has major limitations that result in suboptimal performance. First, the driver or API does not know what the developer's intent is and has to always assume the worst-case scenario to guarantee correctness. For instance, if one shader writes to one region of a UAV, but the next shader reads from another region, the driver must always insert a barrier to guarantee that all writes are complete and visible, because it just can't know that the regions do not overlap and the barrier is not really necessary. The biggest problem, though, is that this approach makes parallel command recording almost useless. Consider a scenario where one thread records commands to render a shadow map, while a second thread simultaneously records commands to use this shadow map in a forward rendering pass. The first thread needs the shadow map to be in a depth-stencil writable state, while the second thread needs it in a shader-readable state. The problem is that the second thread does not know what the original state of the shadow map is.
So what happens is that when an application submits the second command buffer for execution, the API needs to find out what the actual state of the shadow map texture is and patch the command buffer with the right state transition. It needs to do this not only for our shadow map texture, but for any other resource that the command list may use. This is a significant serialization bottleneck, and there was no way in the old APIs to solve it. The solution to the aforementioned problems is given by the next-generation APIs (Direct3D12 and Vulkan), which make all resource transitions explicit. It is up to the application now to track the states of all resources and ensure that all required barriers/transitions are executed. In the example above, the application will know that when the shadow map is used in the forward pass, it will be in the depth-stencil writable state, so the barrier can be inserted right away without the need to wait for the first command buffer to be recorded or submitted. The downside here is that the application is now responsible for tracking all resource states, which can be a significant burden. Let's now take a closer look at how synchronization is implemented in Vulkan and Direct3D12.

Synchronization in Vulkan

Vulkan enables very fine-grain control over synchronization operations and provides tools to individually tweak the following aspects:
- Execution dependencies, i.e. which set of operations must be completed before another set of operations can begin.
- Memory dependencies, i.e. which memory writes must be made available to subsequent reads.
- Layout transitions, i.e. what texture memory layout transformations must be performed, if any.

Execution dependencies are expressed as dependencies between pipeline stages that naturally map to the traditional GPU pipeline. The type of memory access is defined by the VkAccessFlagBits enum. Certain access types are only valid for specific pipeline stages.
All valid combinations are listed in Section 6.1.3 of the Vulkan spec and are also given in the following table:

| Access flag (VK_ACCESS_)           | Pipeline stage (VK_PIPELINE_STAGE_)                 | Access type description |
|------------------------------------|-----------------------------------------------------|-------------------------|
| INDIRECT_COMMAND_READ_BIT          | DRAW_INDIRECT_BIT                                   | Read access to indirect draw/dispatch command data stored in a buffer |
| INDEX_READ_BIT                     | VERTEX_INPUT_BIT                                    | Read access to an index buffer |
| VERTEX_ATTRIBUTE_READ_BIT          | VERTEX_INPUT_BIT                                    | Read access to a vertex buffer |
| UNIFORM_READ_BIT                   | ANY_SHADER_BIT                                      | Read access to a uniform (constant) buffer |
| SHADER_READ_BIT                    | ANY_SHADER_BIT                                      | Read access to a storage buffer (buffer UAV), uniform texel buffer (buffer SRV), sampled image (texture SRV), or storage image (texture UAV) |
| SHADER_WRITE_BIT                   | ANY_SHADER_BIT                                      | Write access to a storage buffer (buffer UAV) or storage image (texture UAV) |
| INPUT_ATTACHMENT_READ_BIT          | FRAGMENT_SHADER_BIT                                 | Read access to an input attachment (render target) during fragment shading |
| COLOR_ATTACHMENT_READ_BIT          | COLOR_ATTACHMENT_OUTPUT_BIT                         | Read access to a color attachment (render target), such as via blending or logic operations |
| COLOR_ATTACHMENT_WRITE_BIT         | COLOR_ATTACHMENT_OUTPUT_BIT                         | Write access to a color attachment (render target) during a render pass or via certain operations such as blending |
| DEPTH_STENCIL_ATTACHMENT_READ_BIT  | EARLY_FRAGMENT_TESTS_BIT or LATE_FRAGMENT_TESTS_BIT | Read access to the depth/stencil buffer via depth/stencil operations |
| DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | EARLY_FRAGMENT_TESTS_BIT or LATE_FRAGMENT_TESTS_BIT | Write access to the depth/stencil buffer via depth/stencil operations |
| TRANSFER_READ_BIT                  | TRANSFER_BIT                                        | Read access to an image (texture) or buffer in a copy operation |
| TRANSFER_WRITE_BIT                 | TRANSFER_BIT                                        | Write access to an image (texture) or buffer in a clear or copy operation |
| HOST_READ_BIT                      | HOST_BIT                                            | Read access by the host |
| HOST_WRITE_BIT                     | HOST_BIT                                            | Write access by the host |

Table 1. Valid combinations of access flags and pipeline stages. ANY_SHADER_BIT means VERTEX_SHADER_BIT, TESSELLATION_CONTROL_SHADER_BIT, TESSELLATION_EVALUATION_SHADER_BIT, GEOMETRY_SHADER_BIT, FRAGMENT_SHADER_BIT, or COMPUTE_SHADER_BIT.

As you can see, most access flags correspond 1:1 to a pipeline stage. For example, quite naturally, vertex indices can only be read at the vertex input stage, while the final color can only be written at the color attachment (render target in Direct3D12 terminology) output stage. For certain access types, you can precisely specify which stage will use that access type. Most importantly, for shader reads (such as texture sampling), writes (UAV/image stores), and uniform buffer accesses, it is possible to tell the system precisely which shader stages will use that access type. For depth-stencil read/write access, it is possible to distinguish whether the access happens at the early or late fragment test stage. Quite honestly, I can't come up with an example where this flexibility would be useful and result in a measurable performance improvement. Note that it is against the spec to specify an access flag for a stage that does not support that type of access (such as depth-stencil write access for the vertex shader stage).

An application may use these tools to specify dependencies between stages very precisely. For example, it may request that writes to a storage buffer from the vertex shader stage be made available to reads from the fragment shader in a subsequent draw call. An advantage here is that since the dependency starts at the fragment shader stage, the driver will not need to synchronize the execution of the vertex shader stage, potentially saving some GPU cycles. For image (texture) resources, a synchronization barrier also defines layout transitions, i.e. potential data reorganization that the GPU may need to perform to support the requested access type.
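The rule that each access flag is only valid at certain pipeline stages can be codified as a small lookup. Below is a minimal, self-contained C++ sketch of this idea; the enums are simplified stand-ins for a few of the Vulkan flags, not the real vulkan.h definitions:

```cpp
#include <cstdint>

// Simplified stand-ins for a few VK_ACCESS_* flags.
enum AccessFlag : uint32_t {
    ACCESS_INDEX_READ          = 1u << 0,
    ACCESS_UNIFORM_READ        = 1u << 1,
    ACCESS_DEPTH_STENCIL_WRITE = 1u << 2,
};

// Simplified stand-ins for a few VK_PIPELINE_STAGE_* flags.
enum StageFlag : uint32_t {
    STAGE_VERTEX_INPUT         = 1u << 0,
    STAGE_VERTEX_SHADER        = 1u << 1,
    STAGE_FRAGMENT_SHADER      = 1u << 2,
    STAGE_EARLY_FRAGMENT_TESTS = 1u << 3,
    STAGE_LATE_FRAGMENT_TESTS  = 1u << 4,
};

// Returns the bitmask of stages at which the given access type is valid, per Table 1.
uint32_t AllowedStages(AccessFlag Access)
{
    switch (Access)
    {
        case ACCESS_INDEX_READ:          return STAGE_VERTEX_INPUT;
        case ACCESS_UNIFORM_READ:        return STAGE_VERTEX_SHADER | STAGE_FRAGMENT_SHADER; // any shader stage
        case ACCESS_DEPTH_STENCIL_WRITE: return STAGE_EARLY_FRAGMENT_TESTS | STAGE_LATE_FRAGMENT_TESTS;
    }
    return 0;
}

// A barrier that pairs an access flag with a stage outside this mask is against the spec.
bool IsValidCombination(AccessFlag Access, StageFlag Stage)
{
    return (AllowedStages(Access) & Stage) != 0;
}
```

For instance, IsValidCombination(ACCESS_DEPTH_STENCIL_WRITE, STAGE_VERTEX_SHADER) returns false, mirroring the "depth-stencil write access for the vertex shader stage" example above.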
Section 11.4 of the Vulkan spec describes the available layouts and how they must be used. Since every layout can only be used at certain pipeline stages (for example, the color-attachment-optimal layout can only be used at the color attachment output stage), and every pipeline stage allows only a few access types, we can list all allowed access flags for every layout, as presented in the table below:

| Image layout (VK_IMAGE_LAYOUT_)  | Access (VK_ACCESS_)                                                   | Description |
|----------------------------------|-----------------------------------------------------------------------|-------------|
| UNDEFINED                        | n/a                                                                   | Can only be used as the initial layout when creating an image, or as the old layout in an image transition. When transitioning out of this layout, the contents of the image are not preserved. |
| GENERAL                          | Any                                                                   | All types of device access. |
| COLOR_ATTACHMENT_OPTIMAL         | COLOR_ATTACHMENT_READ_BIT, COLOR_ATTACHMENT_WRITE_BIT                 | Must only be used as a color attachment. |
| DEPTH_STENCIL_ATTACHMENT_OPTIMAL | DEPTH_STENCIL_ATTACHMENT_READ_BIT, DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | Must only be used as a depth-stencil attachment. |
| DEPTH_STENCIL_READ_ONLY_OPTIMAL  | DEPTH_STENCIL_ATTACHMENT_READ_BIT, SHADER_READ_BIT                    | Must only be used as a read-only depth-stencil attachment or as a read-only image in a shader. |
| SHADER_READ_ONLY_OPTIMAL         | SHADER_READ_BIT                                                       | Must only be used as a read-only image in a shader (sampled image or input attachment). |
| TRANSFER_SRC_OPTIMAL             | TRANSFER_READ_BIT                                                     | Must only be used as the source of transfer (copy) commands. |
| TRANSFER_DST_OPTIMAL             | TRANSFER_WRITE_BIT                                                    | Must only be used as the destination of transfer (copy and clear) commands. |
| PREINITIALIZED                   | n/a                                                                   | Can only be used as the initial layout when creating an image, or as the old layout in an image transition. When transitioning out of this layout, the contents of the image are preserved, as opposed to the UNDEFINED layout. |

Table 2. Image layouts and allowed access flags.
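Because each layout admits only a few access flags (Table 2), and each access flag only a few stages (Table 1), common usages collapse to fixed (layout, access, stage) combinations. The self-contained C++ sketch below illustrates this with simplified stand-in enums (not the real Vulkan definitions), picking one representative triplet per layout under the assumption that shader reads happen in the fragment shader:

```cpp
// Simplified stand-ins for a few Vulkan layout, access, and stage values.
enum Layout { LAYOUT_COLOR_ATTACHMENT_OPTIMAL, LAYOUT_SHADER_READ_ONLY_OPTIMAL, LAYOUT_TRANSFER_DST_OPTIMAL };
enum Access { ACCESS_COLOR_ATTACHMENT_WRITE, ACCESS_SHADER_READ, ACCESS_TRANSFER_WRITE };
enum Stage  { STAGE_COLOR_ATTACHMENT_OUTPUT, STAGE_FRAGMENT_SHADER, STAGE_TRANSFER };

// A barrier in Vulkan effectively names one of these triplets for its source
// and one for its destination.
struct BarrierTriplet
{
    Layout layout;
    Access access;
    Stage  stage;
};

// For these common usages, the access and stage follow directly from the layout
// (per Tables 1 and 2). Shader reads are assumed to occur at the fragment shader stage.
BarrierTriplet TripletForLayout(Layout layout)
{
    switch (layout)
    {
        case LAYOUT_COLOR_ATTACHMENT_OPTIMAL: return {layout, ACCESS_COLOR_ATTACHMENT_WRITE, STAGE_COLOR_ATTACHMENT_OUTPUT};
        case LAYOUT_SHADER_READ_ONLY_OPTIMAL: return {layout, ACCESS_SHADER_READ,            STAGE_FRAGMENT_SHADER};
        case LAYOUT_TRANSFER_DST_OPTIMAL:     return {layout, ACCESS_TRANSFER_WRITE,         STAGE_TRANSFER};
    }
    return {layout, ACCESS_SHADER_READ, STAGE_FRAGMENT_SHADER}; // unreachable for the cases above
}
```

A render-target-to-texture transition, for example, is fully described by the pair of triplets TripletForLayout(LAYOUT_COLOR_ATTACHMENT_OPTIMAL) and TripletForLayout(LAYOUT_SHADER_READ_ONLY_OPTIMAL).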
As with access flags and pipeline stages, there is very little freedom in combining image layouts and access flags. As a result, image layouts, access flags, and pipeline stages in many cases form uniquely defined triplets.

Note that Vulkan also exposes another form of synchronization: render passes and subpasses. The main purpose of render passes is to provide implicit synchronization guarantees so that an application does not need to insert a barrier after every single rendering command (such as a draw or clear). Render passes also allow expressing the same dependencies in a form that the driver may leverage (especially on GPUs that use tiled deferred rendering architectures) for more efficient rendering. A full discussion of render passes is out of the scope of this post.

Synchronization in Direct3D12

Synchronization tools in Direct3D12 are not as expressive as in Vulkan, but they are also not as intricate. With the exception of the UAV barriers described below, Direct3D12 does not distinguish between execution barriers and memory barriers, and instead operates with resource states (see Table 3).

| Resource state (D3D12_RESOURCE_STATE_) | Description |
|----------------------------------------|-------------|
| VERTEX_AND_CONSTANT_BUFFER             | The resource is used as a vertex or constant buffer. |
| INDEX_BUFFER                           | The resource is used as an index buffer. |
| RENDER_TARGET                          | The resource is used as a render target. |
| UNORDERED_ACCESS                       | The resource is used for unordered access via an unordered access view (UAV). |
| DEPTH_WRITE                            | The resource is used in a writable depth-stencil view or in a clear command. |
| DEPTH_READ                             | The resource is used in a read-only depth-stencil view. |
| NON_PIXEL_SHADER_RESOURCE              | The resource is accessed via a shader resource view in any shader stage other than the pixel shader. |
| PIXEL_SHADER_RESOURCE                  | The resource is accessed via a shader resource view in the pixel shader. |
| INDIRECT_ARGUMENT                      | The resource is used as the source of indirect arguments for an indirect draw or dispatch command. |
| COPY_DEST                              | The resource is used as the copy destination in a copy command. |
| COPY_SOURCE                            | The resource is used as the copy source in a copy command. |

Table 3. Most commonly used resource states in Direct3D12.

Direct3D12 defines three resource barrier types:

- State transition barrier defines a transition from one resource state listed in Table 3 to another. This type of barrier maps to a Vulkan barrier when the old and new access flags and/or image layouts are not the same.
- UAV barrier is an execution plus memory barrier, in Vulkan terminology. It does not change the state (layout), but instead indicates that all UAV accesses (reads or writes) to a particular resource must complete before any future UAV accesses (reads or writes) can begin.
- Aliasing barrier indicates a usage transition between two resources that are backed by the same memory; it is out of the scope of this article.

Resource state management in Diligent Engine

The purpose of Diligent Engine is to provide an efficient, cross-platform, low-level graphics API that is convenient to use, but at the same time flexible enough not to limit applications in expressing their intent. Before version 2.4, the application's ability to control resource state transitions was very limited. Version 2.4 made resource state transitions explicit and introduced two ways to manage states. The first is fully automatic: the engine internally keeps track of the state and performs the necessary transitions. The second is manual and completely driven by the application.

Automatic State Management

Every command that may potentially perform state transitions uses one of the following state transition modes:

RESOURCE_STATE_TRANSITION_MODE_NONE - Perform no state transitions and no state validation.
RESOURCE_STATE_TRANSITION_MODE_TRANSITION - Transition resources to the states required by the command.
RESOURCE_STATE_TRANSITION_MODE_VERIFY - Do not transition the states, but verify that they are correct.

The code snippet below gives an example of a sequence of typical rendering commands in Diligent Engine 2.4:

```cpp
// Clear the back buffer
const float ClearColor[] = {0.350f, 0.350f, 0.350f, 1.0f};
m_pImmediateContext->ClearRenderTarget(nullptr, ClearColor, RESOURCE_STATE_TRANSITION_MODE_TRANSITION);
m_pImmediateContext->ClearDepthStencil(nullptr, CLEAR_DEPTH_FLAG, 1.f, 0, RESOURCE_STATE_TRANSITION_MODE_TRANSITION);

// Bind vertex buffer
Uint32   offset   = 0;
IBuffer* pBuffs[] = {m_CubeVertexBuffer};
m_pImmediateContext->SetVertexBuffers(0, 1, pBuffs, &offset, RESOURCE_STATE_TRANSITION_MODE_TRANSITION, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pImmediateContext->SetIndexBuffer(m_CubeIndexBuffer, 0, RESOURCE_STATE_TRANSITION_MODE_TRANSITION);

// Set pipeline state
m_pImmediateContext->SetPipelineState(m_pPSO);

// Commit shader resources
m_pImmediateContext->CommitShaderResources(m_pSRB, RESOURCE_STATE_TRANSITION_MODE_TRANSITION);

DrawAttribs DrawAttrs;
DrawAttrs.IsIndexed  = true;
DrawAttrs.IndexType  = VT_UINT32; // Index type
DrawAttrs.NumIndices = 36;
// Verify the state of vertex and index buffers
DrawAttrs.Flags = DRAW_FLAG_VERIFY_STATES;
m_pImmediateContext->Draw(DrawAttrs);
```

Automatic state management is useful in many scenarios, especially when porting older applications to the Diligent API. It has the following limitations, though:

- The state is tracked for the whole resource only. Individual mip levels and/or texture array slices cannot be transitioned.
- The state is a global resource property. Every device context that uses a resource sees the same state.
- Automatic state transitions are not thread safe. Any operation that uses RESOURCE_STATE_TRANSITION_MODE_TRANSITION requires that no other thread accesses the states of the same resources simultaneously.
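The whole-resource, globally shared state tracking described above can be sketched as a tiny C++ helper. This is a hypothetical illustration, not the actual Diligent internals: each resource carries exactly one state, a TRANSITION-mode command emits a barrier only when the tracked state differs from the required one, and the single shared map is exactly why concurrent transitions are not thread safe:

```cpp
#include <unordered_map>
#include <vector>

// Simplified stand-in for Diligent's RESOURCE_STATE enum.
enum ResourceState { STATE_UNKNOWN, STATE_RENDER_TARGET, STATE_SHADER_RESOURCE, STATE_COPY_DEST };

struct Barrier
{
    int           ResourceId;
    ResourceState OldState;
    ResourceState NewState;
};

// Hypothetical tracker: one state per resource, shared by all contexts.
class StateTracker
{
public:
    // Mimics RESOURCE_STATE_TRANSITION_MODE_TRANSITION: emit a barrier only if needed.
    void Transition(int ResourceId, ResourceState RequiredState)
    {
        ResourceState& Current = m_States[ResourceId]; // default-inserts STATE_UNKNOWN
        if (Current != RequiredState)
        {
            m_PendingBarriers.push_back({ResourceId, Current, RequiredState});
            Current = RequiredState;
        }
    }

    // Mimics RESOURCE_STATE_TRANSITION_MODE_VERIFY: check, but do not transition.
    bool Verify(int ResourceId, ResourceState RequiredState) const
    {
        auto it = m_States.find(ResourceId);
        return it != m_States.end() && it->second == RequiredState;
    }

    const std::vector<Barrier>& PendingBarriers() const { return m_PendingBarriers; }

private:
    std::unordered_map<int, ResourceState> m_States;          // one global state per resource
    std::vector<Barrier>                   m_PendingBarriers; // barriers to record into the command list
};
```

Transitioning a resource to render target and then to shader resource produces two barriers; repeating the last transition produces none, since the tracked state already matches.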
Explicit State Management

As we discussed above, there is no way to efficiently solve the resource state management problem in a fully automated manner, so Diligent Engine does not try to outsmart the industry, and instead makes state transitions part of the API. It introduces a set of states that mostly map to Direct3D12 resource states, as we believe this approach is expressive enough and much clearer than Vulkan's. If an application needs very fine-grained control, it can use native API interoperability to insert Vulkan barriers directly into a command buffer. The list of states defined by Diligent Engine, as well as their mapping to Direct3D12 and Vulkan, is given in Table 4 below.

| Diligent state (RESOURCE_STATE_) | Direct3D12 state (D3D12_RESOURCE_STATE_)          | Vulkan image layout (VK_IMAGE_LAYOUT_) | Vulkan access type (VK_ACCESS_) |
|----------------------------------|---------------------------------------------------|----------------------------------------|---------------------------------|
| UNKNOWN                          | n/a                                               | n/a                                    | n/a |
| UNDEFINED                        | COMMON                                            | UNDEFINED                              | 0 |
| VERTEX_BUFFER                    | VERTEX_AND_CONSTANT_BUFFER                        | n/a                                    | VERTEX_ATTRIBUTE_READ_BIT |
| CONSTANT_BUFFER                  | VERTEX_AND_CONSTANT_BUFFER                        | n/a                                    | UNIFORM_READ_BIT |
| INDEX_BUFFER                     | INDEX_BUFFER                                      | n/a                                    | INDEX_READ_BIT |
| RENDER_TARGET                    | RENDER_TARGET                                     | COLOR_ATTACHMENT_OPTIMAL               | COLOR_ATTACHMENT_READ_BIT, COLOR_ATTACHMENT_WRITE_BIT |
| UNORDERED_ACCESS                 | UNORDERED_ACCESS                                  | GENERAL                                | SHADER_WRITE_BIT, SHADER_READ_BIT |
| DEPTH_READ                       | DEPTH_READ                                        | DEPTH_STENCIL_READ_ONLY_OPTIMAL        | DEPTH_STENCIL_ATTACHMENT_READ_BIT |
| DEPTH_WRITE                      | DEPTH_WRITE                                       | DEPTH_STENCIL_ATTACHMENT_OPTIMAL       | DEPTH_STENCIL_ATTACHMENT_READ_BIT, DEPTH_STENCIL_ATTACHMENT_WRITE_BIT |
| SHADER_RESOURCE                  | NON_PIXEL_SHADER_RESOURCE, PIXEL_SHADER_RESOURCE  | SHADER_READ_ONLY_OPTIMAL               | SHADER_READ_BIT |
| INDIRECT_ARGUMENT                | INDIRECT_ARGUMENT                                 | n/a                                    | INDIRECT_COMMAND_READ_BIT |
| COPY_DEST                        | COPY_DEST                                         | TRANSFER_DST_OPTIMAL                   | TRANSFER_WRITE_BIT |
| COPY_SOURCE                      | COPY_SOURCE                                       | TRANSFER_SRC_OPTIMAL                   | TRANSFER_READ_BIT |
| PRESENT                          | PRESENT                                           | PRESENT_SRC_KHR                        | MEMORY_READ_BIT |

Table 4. Mapping between Diligent resource states, Direct3D12 states, and Vulkan image layouts and access flags.

Diligent resource states map almost exactly 1:1 to Direct3D12 resource states. The only real difference is that in Diligent, the SHADER_RESOURCE state maps to the union of the NON_PIXEL_SHADER_RESOURCE and PIXEL_SHADER_RESOURCE states, which does not seem to be a real issue. Compared to Vulkan, resource states in Diligent are a little more general, specifically:

- The RENDER_TARGET state always defines a writable render target (sets both the COLOR_ATTACHMENT_READ_BIT and COLOR_ATTACHMENT_WRITE_BIT access flags).
- The UNORDERED_ACCESS state always defines a writable storage image/storage buffer (sets both the SHADER_WRITE_BIT and SHADER_READ_BIT access flags).
- Transitions to and out of the CONSTANT_BUFFER, UNORDERED_ACCESS, and SHADER_RESOURCE states always set all applicable pipeline stage flags, as given by Table 1.

None of the limitations above seems to cause any measurable performance degradation. Again, if an application really needs to specify a more precise barrier, it can rely on native API interoperability.

Note that Diligent defines both UNKNOWN and UNDEFINED states, which have very different meanings. UNKNOWN means that the state is not known to the engine and that the application manages the state of this resource manually. UNDEFINED means that the state is known to the engine but is undefined from the point of view of the underlying API. This state has well-defined counterparts in Direct3D12 and Vulkan.
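The Vulkan column of Table 4 amounts to a fixed expansion of each Diligent state into an access mask. The self-contained C++ sketch below illustrates this for a few states; the enums are simplified stand-ins, not the actual Diligent or Vulkan definitions:

```cpp
#include <cstdint>

// Simplified stand-ins for a few Diligent RESOURCE_STATE_* values.
enum DiligentState { DILIGENT_RENDER_TARGET, DILIGENT_SHADER_RESOURCE, DILIGENT_COPY_DEST, DILIGENT_UNORDERED_ACCESS };

// Simplified stand-ins for the corresponding VK_ACCESS_* bits.
enum VkAccessBits : uint32_t {
    ACCESS_COLOR_ATTACHMENT_READ  = 1u << 0,
    ACCESS_COLOR_ATTACHMENT_WRITE = 1u << 1,
    ACCESS_SHADER_READ            = 1u << 2,
    ACCESS_SHADER_WRITE           = 1u << 3,
    ACCESS_TRANSFER_WRITE         = 1u << 4,
};

// Per Table 4: each Diligent state expands to a fixed Vulkan access mask.
uint32_t VkAccessMask(DiligentState State)
{
    switch (State)
    {
        case DILIGENT_RENDER_TARGET:    return ACCESS_COLOR_ATTACHMENT_READ | ACCESS_COLOR_ATTACHMENT_WRITE;
        case DILIGENT_SHADER_RESOURCE:  return ACCESS_SHADER_READ;
        case DILIGENT_COPY_DEST:        return ACCESS_TRANSFER_WRITE;
        case DILIGENT_UNORDERED_ACCESS: return ACCESS_SHADER_READ | ACCESS_SHADER_WRITE;
    }
    return 0;
}
```

Note how RENDER_TARGET and UNORDERED_ACCESS always expand to both the read and write bits, which is exactly the "a little more general than Vulkan" point above.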
Explicit resource state transitions in Diligent Engine are performed with the IDeviceContext::TransitionResourceStates() method, which takes an array of StateTransitionDesc structures:

```cpp
void IDeviceContext::TransitionResourceStates(Uint32 BarrierCount, StateTransitionDesc* pResourceBarriers)
```

Every element in the array defines the resource to transition (a texture or a buffer), the old state, the new state, and, for a texture resource, the range of mip levels and array slices:

```cpp
struct StateTransitionDesc
{
    ITexture*      pTexture            = nullptr;
    IBuffer*       pBuffer             = nullptr;
    Uint32         FirstMipLevel       = 0;
    Uint32         MipLevelsCount      = 0;
    Uint32         FirstArraySlice     = 0;
    Uint32         ArraySliceCount     = 0;
    RESOURCE_STATE OldState            = RESOURCE_STATE_UNKNOWN;
    RESOURCE_STATE NewState            = RESOURCE_STATE_UNKNOWN;
    bool           UpdateResourceState = false;
};
```

If the state of the resource is known to the engine, the OldState member can be set to UNKNOWN, in which case the engine will use the state recorded in the resource. If the state is not known to the engine, OldState must not be UNKNOWN. NewState can never be UNKNOWN.

An important member is the UpdateResourceState flag. If it is set to true, the engine will set the state of the resource to the value given by NewState; otherwise, the state will remain unchanged.

Switching between explicit and automatic state management

Diligent Engine provides tools that allow switching between, and mixing, automatic and manual state management. Both the ITexture and IBuffer interfaces expose SetState() and GetState() methods that allow an application to get and set the resource state. When the state of a resource is set to UNKNOWN, the resource will be ignored by all methods that use the RESOURCE_STATE_TRANSITION_MODE_TRANSITION mode; state transitions will still be performed for all resources whose state is known. An application can thus mix automatic and manual state management by setting the state of manually managed resources to UNKNOWN.
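As an illustration of filling out StateTransitionDesc, the self-contained sketch below builds a barrier that moves a texture from the copy-destination state to the shader-resource state. To keep it compilable without the Diligent headers, it re-declares a minimal stand-in of the structure from the text (ITexture is only forward-declared, and the RESOURCE_STATE enum is reduced to the values used here), so treat it as an illustration rather than real Diligent code:

```cpp
// Minimal stand-ins so the sketch is self-contained; the real types live in the Diligent headers.
struct ITexture;
struct IBuffer;
typedef unsigned int Uint32;
enum RESOURCE_STATE { RESOURCE_STATE_UNKNOWN, RESOURCE_STATE_COPY_DEST, RESOURCE_STATE_SHADER_RESOURCE };

// Mirrors the StateTransitionDesc structure shown in the text.
struct StateTransitionDesc
{
    ITexture*      pTexture            = nullptr;
    IBuffer*       pBuffer             = nullptr;
    Uint32         FirstMipLevel       = 0;
    Uint32         MipLevelsCount      = 0;
    Uint32         FirstArraySlice     = 0;
    Uint32         ArraySliceCount     = 0;
    RESOURCE_STATE OldState            = RESOURCE_STATE_UNKNOWN;
    RESOURCE_STATE NewState            = RESOURCE_STATE_UNKNOWN;
    bool           UpdateResourceState = false;
};

// Builds a barrier that moves a texture from COPY_DEST to SHADER_RESOURCE and
// hands state tracking back to the engine (UpdateResourceState = true).
StateTransitionDesc MakeCopyToShaderResourceBarrier(ITexture* pTex)
{
    StateTransitionDesc Barrier;
    Barrier.pTexture            = pTex;
    Barrier.OldState            = RESOURCE_STATE_COPY_DEST;
    Barrier.NewState            = RESOURCE_STATE_SHADER_RESOURCE;
    Barrier.UpdateResourceState = true; // the engine records the new state
    return Barrier;
}
```

In a real application, an array of such structures would then be passed to IDeviceContext::TransitionResourceStates(1, &Barrier).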
If an application wants to hand state management back to the engine, it can use the SetState() method to set the resource state. Alternatively, it can set the UpdateResourceState flag to true, which will have the same effect.

Multithreaded Safety

As we discussed above, the main advantage of manual resource state management is the ability to record rendering commands in parallel. Since resource states are tracked globally in Diligent Engine, the following precautions must be taken:

- Recording state transitions of the same resource in multiple threads simultaneously with IDeviceContext::TransitionResourceStates() is safe as long as the UpdateResourceState flag is set to false.
- Any thread that uses the RESOURCE_STATE_TRANSITION_MODE_TRANSITION mode with any method must be the only thread accessing the resources that may be transitioned. This also applies to the IDeviceContext::TransitionShaderResources() method.
- If a thread uses the RESOURCE_STATE_TRANSITION_MODE_VERIFY mode with any method (which is recommended whenever possible), no other thread should alter the states of the same resources.

Discussion

Diligent Engine adopts a D3D11-style API with immediate and deferred contexts for recording rendering commands. Since it is well known that deferred contexts did not work well in Direct3D11, a natural question is why they work in Diligent. The answer is the explicit control over state transitions: while in Direct3D11 resource state management was always automatic, Diligent gives the application direct control over how resource states are handled by every operation. At the same time, device contexts encapsulate dynamic memory, descriptor management, and other tasks that need to be handled by the thread that records rendering commands.

Conclusion

The explicit resource state management system introduced in Diligent Engine v2.4 combines flexibility, efficiency, and convenience.
An application may rely on automatic resource state management in typical rendering scenarios and switch to manual mode when the engine does not have enough knowledge to manage the states optimally, or when automatic management is not possible, as in the case of multithreaded rendering command recording. At the moment, Diligent Engine supports only one command queue, exposed as a single immediate context. One of the next steps is to expose multiple command queues through multiple immediate contexts, as well as primitives to synchronize execution between queues, enabling async compute and other advanced rendering techniques.
  10. 8 points
    Hello, it’s time for statistics from my latest game, “Fisherman” (downloads and earnings). Besides that, I will share data from my other games: total earnings, online plays, and downloads across all 8 of my games (6 for Android + WebGL, 2 WebGL only). Fisherman has been released for Android (Google Play, Amazon Store) and WebGL (Kongregate, Crazy Games). Fisherman is a clicker/idle game; your goal is to earn the highest amount of cash and buy new upgrades (more about the game here). The game has been downloaded 740x (698x Google Play, 42x Amazon Store). The highest number of downloads in a single day is 434x. The WebGL (online) version got 17,000 game plays on Kongregate and 6,000 on Crazy Games, for a total of 23,000 game plays. (Images: Kongregate data, daily new players; Android installs: Google Play downloads, Amazon Store downloads, Google Play users by country.) What about earnings? Android: $6.42 from Unity Ads (rewarded video ads, only played when users choose to). $2.55 from ChartBoost (full-screen ads, only shown when the player has high enough earnings, with a 5–8 minute cooldown; also, if Unity Ads fails to show an ad, this one shows the rewarded ad instead). $2.55 from Google Play, $0 from Amazon Store. Online (WebGL): $32.53 from Kongregate (paid based on game plays and their quality). €13.97 from Crazy Games (based on game plays and their quality). In total, that gives me ~$57.41 (minus taxes, graphics costs, and other costs; besides that, there is a fee for withdrawing cash, and PayPal has a weak exchange rate). My games (this one and the others) don’t have IAP. I expected these earnings; to be honest, I never expected anything big (you know: big cash, popularity, an MMO RPG with millions of players), and that’s why I’ve never been disappointed. It could be worse. (Image: game earnings.) Check out my game: Google Play, Amazon Store, Online. And my social media: Facebook, Twitter (so you won’t miss info about my new games and stats from future games). OK, time for the most interesting statistics: the largest number of ratings (Android) comes from the Xiaomi Redmi Note 4. 
In terms of downloads, the most popular phones are the Redmi 4X and Redmi Note 4; besides those, there are a lot of other Xiaomi devices and a lot of Samsung smartphones. The total daily play time (on the best day) was 8.68M seconds (~100 days); the total daily play time per user (on the best day) was 3.13k seconds. Besides that, on the best day I recorded 38.53 shop button clicks (just one shop) and 11k transports (started). Best day = the day with the highest rate of installs. (Image: Android rating, Google Play.) Oh, I thought I would get a lot of 1-star ratings, because I had 2 crash bugs: 1. I updated Google Play Services because the old version was working very badly. I got no errors on my own phone (Android 6) and everything worked fine, but after releasing the game to Google Play, users on Android 6+ got crashes. 2. I released a very small update, and then my game crashed for users (Unity corrupted the packaged game files, Android only). Those crashes didn’t affect the ratings, because I have very good users: they reported those bugs to me very quickly, and I fixed them in less than 24 hours (usually the day of the bug report = the day of the bug fix). Time for some images: new users (data from Unity Analytics); monthly active users (data from Unity Analytics); countries of my game’s users (Google Play data); Android versions of my users (Google Play). Here are the stats from my game before “Fisherman”, “Zarsthor”: the game was downloaded 152x (111x Google Play + 41x Amazon Store). Besides that, it got 7k game plays (online). Earnings: $0 from Chartboost, $0 from Unity Ads, $8.73 from the web version (WebGL). OK, time for the next bonus stats. After releasing Zarsthor (my 5th game), I decided to release 2 small WebGL games (games for 2 players): Throw, a simple game; to be honest, I don’t really know what it’s about (I think we are throwing something). It earned $10.40 (more than Zarsthor; let’s say Zarsthor took xxx time while Throw took x time). It also got 12,162 game plays. Fish Eat Fishes, my next simple game; this time it earned $47.77 and got 29,577 game plays. 
So here are my total earnings from all games (8 in total, 6 released for Web + Android, 2 only for Web): Unity Ads $144.24; Chartboost $114.54 (minimum withdrawal is $300); Crazy Games (a great website with online games) $238.68; Kongregate $128.32. In total = $625.78 (minus the $114 not yet withdrawable from Chartboost, minus taxes, graphics costs, and others). My Web (WebGL) game play counts (total, across all my games): Kongregate 86,421; Crazy Games 164,599; in total ~251k game plays (online versions only). The total downloads of my Android games: Google Play 11,643x; Amazon Store 1,943x; total ~13,586. Besides that, I have 2 of my 6 Android games on itch.io. Today I saw that Bomb Rain got 6 downloads and Tree Tap got 59 (on itch.io), even though I had never linked to my itch.io page, so today I uploaded all my other games there. My itch.io. Time for some social media data: Facebook, 69 likes; Twitter, 50 followers. And some odd bonus data from a Polish website (like Digg): my tag is followed by 381 people, and my list (like an e-mail newsletter, but only on that portal) is subscribed to by 123 people. Besides that, on those sites I got a total of 4,166 “upvotes” (let’s call them upvotes), 985 comments, and 164.1k views. Also, on the blog section (I mean the blog hosted on that Polish Digg-like website, where anyone can post all sorts of things), I got a total of 6,119 upvotes (the best post got 722 comments and 2,715 upvotes). What about Reddit? I got 945 total upvotes (upvotes minus downvotes) and a total of 441 comments. 
Time for download data from my other games. Fisherman: Google Play, Amazon Store, Online; 698 Google + 42 Amazon Store = 740 downloads. Zarsthor: Google Play, Amazon Store, Online; 111 Google + 41 Amazon Store = 152 downloads. Mirkowanie: Google Play, Amazon Store, Online; 4,118 Google + 49 Amazon Store = 4,167 downloads. Stickman: Google Play, Amazon Store, Online; 549 Google + 533 Amazon Store = 1,082 downloads. Money Tree: Google Play, Amazon Store, Online; 5,013 Google + 970 Amazon Store = 5,983 downloads. Bomb Rain: Google Play, Amazon Store; 1,154 Google + 308 Amazon Store = 1,462 downloads. Besides that, you can check out my games on Kongregate and itch.io (Android). (Image: new users for all games, Unity Analytics.) Most of my earnings come from “Fish Eat Fishes”, “Tree Tap”, and “Mirkowanie”. Like I said before, I never expected anything big (you know: big cash, popularity, an MMO RPG with millions of players), and that’s why I’ve never been disappointed. Future plans: as you read this, I’m answering tons of PMs (thanks for each of them). Also, I’m leaving gamedev; to be honest, it’s such hard work, and I got nothing from it. Just joking! I plan to release some updates to my last game, and I’m working on a small web game (15% done); then I’ve got to move on to my main project. It will be a tycoon game, a special one with a unique theme; no one has ever released a tycoon with this theme. I don’t want to give details right now; I’m still planning it. Thanks for the support, thanks for every PM, thanks for helping me with the beta/alpha tests, and also thanks for playing my games :-} While making those games I learned a lot, and this is still just the start for me. I mean, my first game was some random thing; I didn’t even know what I was making, I was just making something, with no plan or idea. No marketing (the only marketing happened on release day). Releasing a game is just the start of the work; updating the game, responding to players, and marketing the game is another ton of work. Got any questions? 
Feel free to ask them :-} I finished gathering this data at 5 AM; the total earnings and per-game earnings may differ slightly (not all data had been fetched yet, and some old data is no longer visible). I’m not an expert, I’m still a newbie in gamedev (just a 20-year-old guy who one day started making games). Treat this post as a shared curiosity, not a professional data collection. Check out my game: Google Play, Amazon Store, Online. And my social media: Facebook, Twitter. Also, here are more detailed statistics from some of my other games: Mirkowanie, Stickman RPG, Tree Tap, Bomb Rain Data
  11. 8 points
    With all due respect... the individuals spending their time to help you solve your problem are not going to "steal" your code... After all, I'm sure everyone who has responded to your problem and understands what is going on could program exactly what you're attempting to do themselves... When you're learning and unsure of things, there is no reason not to show your code: it lets those trying to assist review it and understand what you're doing, and in turn you'll hopefully come out ahead, knowing more than you did before. Best of luck either way... It's a great opportunity when people are willing to take their time to help others in the community. Pushing back on requests that are made in order to help you further isn't a good way to show appreciation for that help.
  12. 8 points
    This may be one of the harder, more difficult entries to write. I am almost tempted to not even write it, but I've convinced myself that every step of the journey is important.

Almost exactly a month ago, I hit rock bottom. I was completely broke. I had nineteen cents in my bank account. My credit cards were maxed. I had next to no food in the cupboards. No gas in the car. My bus pass for community transit was empty. I was so poor that I couldn't afford the $2.75 to take a one-way bus trip to my office. And if I did, I also couldn't afford to spend $8.65 for a lunch burrito. If I wanted to take the bus to the office, I would have to sneak onto the bus and keep a wary eye out for fare enforcement officers and hop off the moment they got on (happened a couple times). The only food I had for days was dried Quaker oatmeal. Put a bit of it into a bowl with water, microwave it for 2.5 minutes, then take it out, sprinkle some dried oats on it, and try to mix in a pinch of brown sugar. I ate just that for days. Have you ever felt like your stomach is full but you're still starving? That's what eating the same food every day feels like. Trust me when I say this: there is no #*@!ing glory in being a starving artist.

I was down hard. I was literally starving, eating oatmeal twice a day to conserve food. I needed to hatch a plan to make money. Whatever I'd been doing, it wasn't working out. I needed a new plan. Normally, I'd lean on my girlfriend for help, but she was down hard too. The best she could do was loan me $50. I went straight to Safeway and very carefully wandered the aisles, looking at how much food cost and what I got for my money. What gives me the most nutritious energy for the least amount of money? Canned soups cost $2.30 each, there was a 2-for-1 deal on loaves of bread, a dozen eggs are ridiculously cheap, a quart of milk is a little over $3, etc. If I cook my own food, it's a lot cheaper than anything else. Cheap eggs, milk, and bread? 
It's time to feast on french toast! I made that $50 stretch really far and bought over a week's worth of food supplies. But what happens when the food is all gone? Then I'm back to square one, back to starving. So, the $50 of food was a loan, not a grant or gift. I needed a better plan.

I decided I would go to the Sunday farmers market, set up a table, and sell my girlfriend's wine openers to people. I got some stock. I got a folding table and an old tent, a metal folding chair, and a demo stand. All of the previous booth props had been stolen by my girlfriend's former business partner. No signs, no props, no table cloths -- nothing! I had to do everything from scratch and start over. The booth fee cost $60, and by the grace of god, one of my credit cards was just barely not maxed that morning. I could pay the booth fee. Then, I didn't have a bottle of wine to demonstrate with either. And the table I had was covered in various paint splatters and looked like it had obviously been pulled out of a storage closet (it had). I almost couldn't even run the booth! I bought the cheapest bottle of wine I could find.

10am rolls around and the street fair begins. Throngs of pedestrians show up. I don't have a table cloth to cover the hideous table I brought. I spend 45 minutes walking all over town looking for a place that could sell me *anything* to cover my table with. Meanwhile, I'm beating myself up for being so stupid and short-sighted as to not bring one from home. It was costing me 45 minutes plus whatever the cost of a table cloth would be! I finally found a lady at the street fair who could sell me some sort of cloth for $25. One credit card swipe later, I'm in business. Without a doubt, I had the shittiest booth in the whole event. It was so miserable, people would want to look away at something else more interesting. I'm just one random scruffy-looking dude standing behind a forgettable table with a forgettable product. 
I borrowed a total of $100 from my credit card for the privilege to stand there. I had not eaten that morning because I had no food. My credit card was surely maxed, I couldn't even buy a black coffee if I wanted one. I had $0.19 in my bank account. I was starving. The reality was, if I wanted to eat, I would have to #*@!ing *sell* product. There is no standing around waiting for people to maybe stop by my shitty booth. It's shitty, nobody is going to be curious to stop by and window shop when there's an ugly window and nothing appealing. If I wanted to eat today, I had to actively pull people in and sell. Nothing else but me was going to bring in sales. I took that on as a challenge. I told myself, "Eric, it's time to see what you're made of. Can you really sell, or do you rely on crutches like a pretty booth?" Fortunately, I've had a smidgen of direct sales experience. I've given the same demo thousands of times. I knew the pitch by heart. I knew all the jokes. I could put on a performance. I knew how to work a crowd and draw people in... sort of. People are not going to buy from me because they like my product or like my booth, they're going to buy from me because they like my demo and I entertained and wowed them. Or... so I believed. An hour went by. Dozens of demos, but not a single sale. My stomach is grumbling. Another hour went by. Still, more demos but no sales. Two hours, and not a single dollar?! Did I just waste $100 of food money to make nothing?! I was starting to wonder if there was something wrong. Were people truly not buying from me because my booth presentation sucked? No way, I can't believe that. I was giving dozens of demos and people were amazed by the product and laughing at my jokes. It's just a matter of time and patience, and someone will open up their wallet. Finally, my first sale happened. It was a credit card purchase for $30. Damn, no cash -- that means I can't eat. But hey, I got a sale! It was validating! 
People would #*@!ing buy because of me! My shitty booth didn't matter! My waning confidence was restored! I could do this! And gradually, the sales started coming in, one by one. Finally, someone paid cash. I had no change, so they had to pay the exact price. The moment they gave me money, I let them leave and then made a beeline to the nearest food truck and bought some sort of Hawaiian food. It was greasy and disgusting, but hey, it was food. I ended the day with a total of $260 in sales, $60 of which was cash and enough to buy another week's worth of groceries at Safeway. I could survive for another week at least. And if nothing happened, I could do the fair event again the next weekend. And maybe upgrade my booth with a nicer table cloth? It was going to be desperate times and pure survival mode. My office rent got processed a few days later. My bank account was now $400 in the hole plus a $25 NSF fee. Rent was late. Things were starting to look grim again. Then, something amazing happened. A friend had met someone who was looking for someone that knew how to work with Leap Motion for their project, so he referred them to me. He told them that I was one of the best people in the country (I sort of am). I told my friend that if this goes through, I'll buy him dinner. So, I talk to the client and figure out what's going on. They're an established VR / film company based out of LA and they're having trouble with twisting at the wrist with Leap Motion and their character model (the "candy wrapper" problem). I told them I'm a freelancer and could help them with their project. So, contracts are quickly signed and I take an initial down payment of $500 (yay, food and bus fare!). The client asks me how my VR work is going, and I reply, "Well, it's a bit of a feast and famine cycle..." and he said he knew exactly what I meant. He had no idea how hungry I was. But, what an opportunity! 
The key thing to realize about freelancing is that it is ALL about building a solid reputation for making happy customers. Be an excellent professional. Work hard, work fast, work smart, get along with everyone, and bring value to your client. If you can do that, you will get a good reputation and have an established, healthy working relationship. That means repeat customers, more business, and good referrals -- which mean even more customers. The same principle of actively selling product behind a shitty booth applies to selling yourself and your services -- delight your customers and present them with something of value greater than their money. This is so key and fundamental to business, you must learn it. If an MBA degree doesn't teach you this, you need to ask for your money back. It turns out that this arm twisting problem was much more difficult than anyone had expected. You have to have two twist bones parented to the elbow, the mesh needs to be weighted correctly, and then you need to read the leap motion bone transform values and apply them to your skeletal rig in real time. The problem is, the only constraints people have on their arms are their physical limitations. An arm and wrist can be oriented in all sorts of funky directions, and if your approach can't handle them all, you get deformed arms, it looks bad, and it breaks the immersion of virtual reality. It's much harder to get perfect than I anticipated. The project itself was relatively simple. This guy had hired a Ukrainian development team to build his app. I looked at it and it looked like it was something barely held together with duct tape and glue. I rebuilt the whole thing in two days using the old work as a prototype, but this time I "did it right". I told my client that if he can't see a difference, I did my job right. He couldn't. Now, some dumber business people might say, "It looks exactly the same! What am I paying you for?!". 
This guy was smart and technical enough to see how much everything changed under the hood and how much more elegantly simple it was. It's maintainable! And it can change to meet new requirements without falling apart and costing lots of extra time and money to fix! Anyways, sometimes to take one step forward, you have to take two steps back. I ended up taking over development for the project. Sorry, Ukrainian devs! I ended up finishing the project much faster than they had scheduled. As of today, it's done! The client is ecstatic. Everything works perfectly and it looks amazing! The project is a pilot project for a VR film series, so if my client's client gets funded to produce the series, I think I'll have a lot more work in my future. The total pay for this project was $4,500 and took me about 10-14 days. That will be enough to pay my late rent and buy enough groceries. Whew! No more eating oatmeal for the near future. The extra good news is that this was a "small" project and there are much bigger ones on the horizon. There's a sliver of a chance that I might actually be able to build a financially sustainable business out of this VR stuff. I must be extremely careful though: if you rely on only one client for your bread and butter (literally), then if they go away, you starve. So, I must be sure to diversify and broaden my client list so that losing one doesn't cause me to starve. And, if I'm going to be thinking ten steps ahead here, I should eventually take on an apprentice and train them to become a highly proficient VR developer. This would allow me to take on bigger future projects and offload some work to my team. Also, I learned a very, very important lesson about money management: never, ever repay a debt to anyone if it means you're going to go hungry. They can wait; hungry bellies can't. This is the crucial mistake I made which made me go hungry for a week. I thought it was important to not owe anyone anything. 
As an ideal, it feels great to be debt free, but a hungry stomach doesn't give a flying #*@! about lofty ideals (a hungry belly also doesn't care about any pathetic excuses for why you won't sell). The other super important, crucial thing to remember is that a business is ultimately about making money. Whether you're an indie game developer, working for a AAA game company, or working in any other industry, your job/company must make more money than it spends or else it will starve and go out of business (and you'll be jobless and starve too). Think very carefully about whether you're adding value to your company, whether other people are adding value or not, and what needs to be changed in order to stay profitable. The size of your company doesn't really matter. In fact, bigger companies can be dangerous too because people get comfortable, and comfort breeds complacency, and complacency kills, especially when it becomes a cultural norm at the company. Sometimes, having a lot of money is a curse too because it shields you from facing harsh realities and changing course when things aren't working. If you look back to my very first blog post, you'll remember I started my adventure as an indie game developer with $500,000 in my personal bank account. Today, although I'm still very poor, I feel I am better armed, wiser, and better poised to become successful and profitable than the day I started. The adventurous journey towards financial sustainability (and eventually profits) in a tough industry continues onwards! The future looks brighter now than it did a month ago, though I can't rest and get complacent. (Warning: don't read further if you want to have a pleasant day.) Also, on a side note, it's been a bit of a tough last week. I parked my car in an alley behind my office (in a homeless mecca) and worked until 4am, and came back and discovered someone had smashed my window and grabbed $400 in VR equipment. My girlfriend's ancient cat also fell into really bad health. 
He wouldn't eat, could barely drink, was almost blind, couldn't even stand, and was constantly meowing in pain. He was dying. I took him to the "value" vet for confirmation. I had to pay $22 to confirm the poor thing was dying and no amount of money could save him. They wanted to charge me $178 to put the cat to sleep, tried to upsell me on cremation services, a few inked paw prints, etc. I declined. The vet then went down on price and offered to inject the cat with a narcotic for $48 to help him die faster and ease his pain. I declined that too. I took the dying cat home, brought him to the backyard, and let him see his last morning sunshine and hear the birds and squirrels one last time. I grabbed a shovel and dug a small little grave for the little guy. He was meowing weakly in pain, lying helplessly next to his final resting place. He was suffering. It was time to go. I put him into his little grave and killed him quickly with the shovel blade pressed to his neck followed by a hard stomp. For a brief moment, he had an unforgettable look of shock and betrayal on his face and his legs splayed out in reflex, and then he died and never moved again... I thought I would be a bit more emotionally hardened about it (being a war veteran and having seen lots of dead animals at our ranch), but killing a dying cat was not easy. I got choked up afterwards. It was an act of mercy and the ethical thing to do, but still... not easy. I realized that sometimes, to maintain the divine sanctity of life, you must provide death, because to continue living in hopeless suffering that inevitably ends in death anyway only serves to prolong the suffering and corrupts the essence of living.
  13. 7 points
Hi, this is my first blog entry here and also the first time that I actively participate in GameDev.net. I am a software developer who develops application software at work. At home, I develop games in my spare time. I used to read GameDev.net articles and blog posts for inspiration and motivation. Since I have the RSS feed subscribed on my phone, I read about the GameDev.net Frogger challenge; I wasn't interested in Frogger, but I liked the idea of the community challenges. When the Frogger challenge was over, I read about the dungeon crawler challenge and I thought: "Okay guys, now you've got me!". I always wanted to create an old-school RPG in the style of Daggerfall or Ultima Underworld with modern controls (although I never really played them). More than 10 years ago, I started to write a small raycasting renderer in C++. Raycasting is the rendering technique that was used by the first-person shooters of the early '90s (see also [1]). I never really finished that renderer and shelved the project until, some years later, I stumbled across the old code and started porting it to Cython[2] in order to be able to use it with the Python programming language[3]. After some time, I was able to render level geometry (walls, floors, ceilings and sky), as well as sprites. After solving the biggest problems, I lost interest in the project again -- until now. I thought the dungeon crawler challenge was a great opportunity to motivate myself to work on the engine. At the same time, I would create a game to actually prove that the engine was ready to be used. I started to draw some textures in Gimp[4] in order to get myself in the right mood. Then I started to work on unfinished engine features. 
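As an aside for readers unfamiliar with the technique: raycasting in the Wolfenstein 3D style casts one ray per screen column through a 2D grid and draws a wall slice whose height shrinks with the hit distance. A minimal DDA grid march looks roughly like this (a generic illustration in plain Python, not code from my engine; the grid layout is made up):

```python
import math

def cast_ray(grid, px, py, angle, max_dist=64.0):
    """March a ray through a 2D grid (1 = wall) with DDA and
    return the distance to the first wall hit, or None."""
    dx, dy = math.cos(angle), math.sin(angle)
    map_x, map_y = int(px), int(py)
    # Distance travelled along the ray between successive x/y grid lines.
    delta_x = abs(1.0 / dx) if dx else float("inf")
    delta_y = abs(1.0 / dy) if dy else float("inf")
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    side_x = ((map_x + 1 - px) if dx > 0 else (px - map_x)) * delta_x
    side_y = ((map_y + 1 - py) if dy > 0 else (py - map_y)) * delta_y
    while min(side_x, side_y) <= max_dist:
        # Step into whichever neighbouring cell the ray reaches first.
        if side_x < side_y:
            dist, side_x, map_x = side_x, side_x + delta_x, map_x + step_x
        else:
            dist, side_y, map_y = side_y, side_y + delta_y, map_y + step_y
        if grid[map_y][map_x]:
            return dist
    return None

grid = [[1, 1, 1, 1],
        [1, 0, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1]]
# Ray from the cell centre straight along +x hits the wall at x=3.
print(cast_ray(grid, 1.5, 1.5, 0.0))  # 1.5
```

In a renderer, this runs once per screen column with a slightly different angle, and the returned distance is corrected for the fisheye effect before computing the wall slice height.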
The following list shows features that were still missing at the end of December 2018:

- Level editor
- Hit tests for mouse clicks
- Processing mouse scroll events
- Writing modified level files to disk (for the editor)
- UI
  - Text
  - Buttons
  - Images
  - Containers (laying out child widgets in particular)
- Scheduling tasks (async jobs that are run at every frame)
- Collision detection for sprite entities
- Collision detection for level geometry (walls)
- Music playback
- Fullscreen toggle
- Animated sprites
- Directional sprites (sprites look different depending on the view angle)
- Scaling sprites
- Refactorings / cleaning up code
- Documentation & tutorials
- Fixing tons of bugs

Luckily, many of the above features are implemented right now (middle of January 2019) and I can start focusing on the game itself (see screenshots below; all sprites, textures and UI are hand-drawn using Gimp[4] and Inkscape[5]). The game takes place in a world which is infested by the curse of the daemon lord Ardor. Burning like a fire and spreading like a plague, the curse causes people to become greedy and grudging; some of them even turn into bloodthirsty monsters. By reaching out to reign supreme, the fire of Ardor burns its way into our world. The player is a nameless warrior who crested the silver mountain in order to enter Ardor's world and defeat him. To open the dimension gate, the player has to conquer three dungeons and obtain three soul stones from the daemon's guardians. The following videos show some progress:

- First attempt: Rendering level geometry and sprites; testing textures; navigating through doors (fade-in and out effects)
- First update: Adding a lantern to light up the dungeon
- Second update: Rendering directional sprites and animations

1. Raycasting (Wikipedia): https://en.wikipedia.org/wiki/Ray_casting
2. Cython: https://cython.org/
3. Python: https://www.python.org/
4. Gimp: https://www.gimp.org/
5. Inkscape: https://www.inkscape.org/
  14. 7 points
This is a blog about our development of Unexplored 2: The Wayfarer's Legacy. This game features state-of-the-art content generation, generative storytelling, emergent gameplay, adaptive music, and a vibrant art style. Part 1 Unexplored 2 is a roguelite action adventure game where the hero is tasked to travel the world in order to destroy a magic staff. It features perma-death, but when your hero dies you get a chance to keep the world, so you can uncover its many secrets over the course of several runs. In a way, the world is one of the most important and persistent characters in the game. In this article, I'd like to share how we generate it. There are several ways in which you can approach world generation for fantasy games. For example, you can use simulation techniques to generate a realistic topography and populate the world from there. Instead, we chose a different approach for Unexplored 2: we use a process where we sketch a rough outline first and try to fill in the map in a way that optimizes the affordances and gameplay opportunities the map has to offer. Rough Outline It all starts with a random Voronoi graph with 80 cells placed on a map with a 3:2 aspect ratio: Figure 1 - The initial Voronoi We use a Voronoi because it has a fairly natural distribution of cells and because the structure can be treated as a graph, with each cell being an individual node and each edge a connection between nodes. This is useful as we use graph grammar rules to generate the map. In the first step, the cells on the western edge are set to ocean. Then we grow the ocean a little, creating a more interesting coastline, and set the remaining cells to be land mass. A starting location is picked along the coast and a goal location is picked somewhere on the eastern side. Each cell in the graph is marked with its distance to the start and its distance to the goal. Distance in this case is measured in the number of cells between two locations. 
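Marking every cell with its distance to the start and the goal is essentially a breadth-first search over the cell adjacency graph. A minimal sketch (the dict-of-lists graph layout is an illustrative assumption, not our actual data structure):

```python
from collections import deque

def cell_distances(adjacency, source):
    """Breadth-first search: distance in cells from `source` to every
    other cell, over a {cell: [neighbour, ...]} adjacency graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        cell = queue.popleft()
        for neighbour in adjacency[cell]:
            if neighbour not in dist:
                dist[neighbour] = dist[cell] + 1
                queue.append(neighbour)
    return dist

# Toy strip of five cells: A-B-C-D-E, start at A, goal at E.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
       "D": ["C", "E"], "E": ["D"]}
to_start = cell_distances(adj, "A")
to_goal = cell_distances(adj, "E")
print(to_start["D"], to_goal["B"])  # 3 3
```

Running it once from the start cell and once from the goal cell gives both labels for every cell in linear time.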
Figure 2 - Land and Sea Obviously, placing the ocean always on the west is just a choice (Tolkien made us do it). It is easy to make the whole map an island or have the ocean cover other edges of the map. What matters for us is that this creates a consistently large playing area. But we don't rule out adding other templates and variations in the future. The next step is to make sure that the journey will not be too easy. After all, 'one does not simply walk into Mordor'. The way we achieve this is also lifted directly from The Lord of the Rings: we simply make sure there is a mountain range between the start and the goal: Figure 3 - A Tolkienesque mountain range The mountains are started somewhere close to the goal and then allowed to grow using the following graph grammar rule, which basically changes one open cell into a mountain for an open cell that is next to one (but no more) mountain cell, relatively close to the goal, and relatively far from the starting location: Figure 4 - Graph grammar rule to grow the initial mountain range Unexplored 2 has a journey from the start location to the goal. In order to tempt the player to divert from the most direct route and explore the rest of the map, a number of 'adventure sites' are placed at some distance from the start and goal locations, creating a nice spread of potentially interesting travel destinations. Each site is placed inside a region of a different type. In this case, the goal is placed in a swamp (s), a haven (green h) is placed in a hill area close to the start, and other sites are placed in a desert (d), forest (f), and a barren area (b). Orange edges indicate the borders between the regions. Figure 5 - Adventure sites Adding Topography The remaining cells are randomly grouped into additional regions until every cell is assigned a region on the map. The terrain types for these regions are left undetermined for now. Figure 6 - Regions and rivers Next, rivers are added to the map. 
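In code, the mountain-growing rule of Figure 4 amounts to something like the following sketch (the data layout and the distance thresholds of 3 cells are illustrative assumptions, not our actual implementation):

```python
def grow_mountains(cells, adjacency, to_start, to_goal, rounds=10):
    """Greedily apply the rule: an open cell becomes a mountain when it
    touches exactly one mountain cell, is relatively close to the goal,
    and is relatively far from the start."""
    for _ in range(rounds):
        grown = False
        for cell in list(cells):
            if cells[cell] != "open":
                continue
            mountain_neighbours = sum(
                1 for n in adjacency[cell] if cells[n] == "mountain")
            if (mountain_neighbours == 1
                    and to_goal[cell] <= 3
                    and to_start[cell] >= 3):
                cells[cell] = "mountain"
                grown = True
        if not grown:
            break  # the rule no longer matches anywhere
    return cells

# Strip of cells A-B-C-D-E, start at A, goal at E, mountain seeded at D.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
       "D": ["C", "E"], "E": ["D"]}
cells = {"A": "open", "B": "open", "C": "open", "D": "mountain", "E": "open"}
to_start = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}
to_goal = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}
print(grow_mountains(cells, adj, to_start, to_goal))
```

The "exactly one mountain neighbour" condition is what keeps the range growing as a ridge line instead of a blob.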
Initially, rivers are allowed to grow along the borders of regions. Unlike a realistic world generation process, we choose to start growing rivers at the ocean, selecting a new edge to grow into at random, favoring growth alongside mountains as they go. Figure 7 - Graph grammar rule that changes a region border next to an ocean into a river After rivers have been added, the region types are filled in and reevaluated. In this case, more forests are added and the desert area in the south is changed into a plain because it was next to the ocean and far to the south (our map is located in the southern hemisphere, hence the south is cold). At a later stage, we might take the opportunity to change the desert into something more interesting, such as a frozen waste. Figure 8 - Complete topography Once the regions are set, rivers are allowed to grow a little more, especially through terrains like hills and swamps. Depending on their length, rivers can be narrow, wide, or very wide. Only narrow rivers are easy to cross; for the wider rivers, certain edges are marked to indicate points where the river can be crossed. Figure 9 - Graph grammar rule to grow a river through a swamp Adding Opportunities The topography is fairly basic and we still need to fill in a lot of details. From a design perspective, regions (not cells) are the best unit to work with in this respect, as we want regions to form coherent units in the experience of the game. To make working with regions a little bit easier, the Voronoi graph is reduced to a graph representation where all cells of each region are combined into one single node. Based on their relative distance to the start and the goal, regions are assigned a difficulty, and a number of opportunities and dangers are generated accordingly. Figure 10 - Region graph At this stage, the generator starts to look for interesting gameplay opportunities. 
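The reduction from the cell graph to the region graph described above can be sketched as follows (the concrete data layout is again an illustrative assumption): merge every cell of a region into one node and keep an edge wherever two regions share a border.

```python
def build_region_graph(cell_region, adjacency):
    """Collapse a {cell: region} labelling of the cell graph into a
    {region: set(neighbouring regions)} region graph."""
    region_adj = {r: set() for r in set(cell_region.values())}
    for cell, neighbours in adjacency.items():
        for n in neighbours:
            a, b = cell_region[cell], cell_region[n]
            if a != b:  # cells in different regions => regions border
                region_adj[a].add(b)
                region_adj[b].add(a)
    return region_adj

# Four cells in a row, split into a forest region and a swamp region.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
regions = {"A": "forest", "B": "forest", "C": "swamp", "D": "swamp"}
print(build_region_graph(regions, adj))
```

Rules that assign difficulty, opportunities, and dangers can then operate on this much smaller graph instead of on individual cells.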
Using several graph grammar rules, a large forest with high difficulty will be assigned different attributes than a small, low-difficulty swamp harboring an adventure site. At this stage, special terrains, such as purple 'obscuri' forests or red sand deserts, are also added to the mix. When generating a world, we have the option to request certain special features, such as special rare terrain or special quest content. These are processed first, to the best of the generator's ability: it might be that no good fit is found, at which point we either need to generate a new world or continue without the requested feature. One interesting effect is that if certain special terrains require slightly rare conditions to emerge, then the terrain type automatically becomes rare content. For example, a special quest might require a large swamp area with a river, which will not be present in every world. The downside is that sometimes rarity becomes hard to control or design, as there literally is no simple slider to push up if we want to make such a terrain type or quest more frequent. Creating Visuals Figure 11 - The map as it appears in the game Up until this point, the map is all data. The next step is to create the visual representation based on the map. To this end, we generate a new Voronoi diagram with a higher resolution (about 1200 cells) and map each smaller cell to the cells of the original map data. This creates a better resolution of details. Figure 12 shows how the original cells map onto the visual map: Figure 12 - Original cells projected onto the map Individual cells can be raised and lowered to create elevation, and colored and decorated to suggest different terrains. Some of these decorations are assets such as trees, which can vary in size and density based on the relative temperature and humidity of each cell. 
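Driving tree size and density by temperature and humidity presupposes some per-cell climate estimate. A toy heuristic of the kind described below might look like this (all weights are invented for illustration and are not our actual values):

```python
def approximate_climate(longitude, latitude, elevation, water_dist):
    """Toy per-cell climate estimate. All inputs are normalised to
    0..1 (longitude grows eastward, latitude grows southward on a
    southern-hemisphere map). Returns (temperature, humidity) in 0..1.
    The weights below are invented for illustration."""
    # Colder towards the south and colder at altitude.
    temperature = max(0.0, 1.0 - 0.6 * latitude - 0.5 * elevation)
    # Drier towards the east and far from water.
    humidity = max(0.0, 1.0 - 0.5 * longitude - 0.4 * water_dist)
    return temperature, humidity

# A coastal north-western lowland cell comes out warm and wet.
print(approximate_climate(0.1, 0.1, 0.0, 0.05))
```

Because the estimate is a pure function of position and elevation, it can be evaluated lazily per cell of the high-resolution Voronoi without storing a climate grid.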
For now, we're using a very simple algorithm to approximate individual climate using longitude (it gets drier towards the east), latitude, elevation, and closeness to water. Other decorations are built from simple geometry based on the high-resolution Voronoi graph. This can be easily seen in Figure 13 below. This geometry includes slightly sloped mountain peaks, elevated patchwork to create the impression of a broken, barren landscape, and sunken centers to create pools. Figure 13 - Map detail showing how decorations use geometry based on the Voronoi graph Regions and their associated terrain types play an important role in the generation of these details. As can be observed in the figure above, forests rise towards their centers, as do hills and mountains. Rivers are never elevated (to save us the trouble of trying to do so consistently). Terrain is blended a little so that height differences are not too pronounced where not intended, and interesting borders are created. In many cases, these blended terrains offer ample opportunities to liven up the map with rare features. Setting Up Nodes The world of Unexplored 2 is not a continuous world. Players travel from node to node and can choose (or are forced) to explore the gameplay areas each node represents. Connections between nodes determine where the player can travel. To place the nodes on the map we use the original low-resolution Voronoi diagram. A node is placed on each cell and on each border between cells, as can be witnessed in the image below: Figure 14 - Network of nodes placed on the Voronoi graph Certain connections are special. As mentioned above, wide rivers can only be crossed at certain points, and mountains also create barriers. For uncrossable rivers, the node that would have been placed on the river is split in two, and each node is moved away from the river a little. 
Where a crossing is needed, the node is actually split in three, so that a bridge node is created that conveniently has only two connections (and exits), one on each side of the river. For mountains and passes across mountains something similar is done. Figure 15 - Detail of the node network showing rivers and mountains Some of the nodes are already marked out as special sites in the generated data. The area templates associated with these sites often indicate special features to appear on the map (for example a volcano, a village, a mud pool, or a group of trees), although in some cases these features are only added after the player has visited the area and found its secrets. All other nodes are assigned templates based on the region they belong to and their relative position within that region. Each region has a number of types of locations. Typically a region has one 'heart' location assigned to a node quite central in the region, or a 'smallHeart' location if the region is relatively small. A number of 'rare' locations are scattered across the locations not on the region's edge, and finally, all other locations are drawn from a random destination table associated with the region's terrain. Figure 16 shows sample entries from the table we use to populate forest and plain regions (the 'locations' in this table are the random encounter locations the game uses when travelling between nodes). Figure 16 - Random destination table Wrapping Up At the moment of writing, the map generation is still a work in progress. We are constantly adding details as the game's content keeps growing. But I don't expect the general approach to change much. We are quite happy with the results, as in our humble opinion the maps are quite beautiful. But what's more important, they are readable: players can predict where barriers and dangerous terrains are to be found. A lot of information is there and we don't hide it behind a fog of war. 
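Returning to the bridge nodes mentioned above: splitting a river node into two bank nodes plus a bridge node can be sketched as a graph rewrite (the even split of neighbours between banks is a simplification for illustration; a real implementation would reattach each neighbour to its nearest bank):

```python
def split_crossing(graph, node, bank_a, bank_b, bridge):
    """Replace a node sitting on a wide river with two bank nodes and
    a bridge node between them, so the bridge has exactly one exit on
    each side. `graph` is a {node: set(neighbours)} dict."""
    neighbours = graph.pop(node)
    for n in neighbours:
        graph[n].discard(node)
    # Reattach former neighbours to the two banks (evenly, as a toy).
    ordered = sorted(neighbours)
    half = len(ordered) // 2
    graph[bank_a] = set(ordered[:half]) | {bridge}
    graph[bank_b] = set(ordered[half:]) | {bridge}
    graph[bridge] = {bank_a, bank_b}
    for n in ordered[:half]:
        graph[n].add(bank_a)
    for n in ordered[half:]:
        graph[n].add(bank_b)
    return graph

# A river node "x" with one neighbour on each side becomes
# n1 - bank_a - bridge - bank_b - n2.
graph = {"x": {"n1", "n2"}, "n1": {"x"}, "n2": {"x"}}
split_crossing(graph, "x", "bank_a", "bank_b", "bridge")
```

The same rewrite, with a pass node instead of a bridge node, would cover mountain passes.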
The map facilitates anticipation and foreshadowing, which are two key gameplay design features. We hope that when the game is released in full, players will enjoy simply poring over the map and planning their journey. If you are interested in learning more about the game, please check us out on Fig.co or on Steam. Note: This article was originally published on the Ludomotion website, and is reproduced here with the kind permission of the author.
  15. 7 points
    Congratulations to you guys! Hope you last 20 more and beyond.
  16. 7 points
Probably one of my more controversial and patronizing posts, I'm sure some will disagree and I did think twice about posting, but hey ho, I've written it: It seems that almost every day a naive post comes up asking how best for a budding 'game developer' to get into 'the industry'. Somewhere along the line, somehow, many people seem to have bought into the myth that there is some generalist role in game development that is a viable career and will make them huge $$$. It irks me that this is the case, because in my opinion it is a distortion of the truth, and can lead to mistaken career decisions - particularly wasted education where there may be another far more suitable alternative career they are overlooking as a result. Note that in this article I am addressing the big budget console etc. games business, rather than independents and smaller / mobile games, which may use different business models. Follow the Money One of the basic facts in the games business is that profit charts tend to follow a classic L-shaped curve (long tail). By far the largest proportion of the headline profits made by the industry are made by very few big players (on the left of the dotted line), names you will all have heard of, and the other 99% of the industry makes very little, if any, profit. This is similar to the situation in many entertainment industries - movies and music being similar. Newsflash - those companies that aren't making profit won't be able to support you in a viable long-term career. They might by random chance enable you to strike it lucky and make some money from time to time, but they are unlikely to be reliable in a putting-bread-on-your-table / roof-over-your-head / supporting-a-family kind of way. Note how many game companies regularly close down and developers have to move area / country to have a hope of keeping afloat. And that is among medium / bigger companies. This leads to two very important points: There are not as many career opportunities as you think. 
There are far fewer stable jobs than can support the number of graduates of game development courses. Hence there is high competition for roles (and some employers will take advantage of this). There is high competition between game products. It is often a 'winner takes all' type of market. If a kid has 50 dollars to spend, he either buys your game or someone else's. To win, your game is likely to need better content, better gameplay, better advertising, better hype. Competition In order for a game product to compete on the world market, it is not usually sufficient to buy some assets from the Unity asset store, put them together in an innovative way, and expect the dollars to roll in. This is to a certain extent a lie perpetuated by engine sellers, either because it makes them money, or to encourage you onto their ecosystem. Producing AAA games is still a labour-intensive business. There are expectations from customers in terms of content etc. Division of Labour To be competitive and profitable, the modern big budget games industry, like most industries, uses division of labour. That is, instead of having generalists who can do every job a little bit well, jobs tend to be done by specialists who are expected to be experts in their field. That means, aside from indie companies and those that make small games, careers in the games industry tend to be for specialists. A comparison with house construction I find the misplaced expectation that game development is filled with generalist careers to be analogous to the situation in the house building industry. Most of us use and enjoy houses on a day to day basis, in much the same way as many people use and enjoy video games. And yet if you ask most youngsters, they are not so naive about the jobs involved in the construction industry. It is easy for them to understand the difference between using and enjoying a house, and building a house. 
The type of people involved in building a house are architects, surveyors, planners, bricklayers, roofers, plumbers, electricians, plasterers, etc. As a young person, you would not likely aspire to be a 'house builder', so much as aspire and train to be one of the roles within the industry. And yet when it comes to the games industry, newcomers seem in denial that it might be the same. But big budget games are not built by generalist game developers. They are built by environment artists, character artists, character animators, tools programmers, graphics programmers, AI programmers, sound programmers, animation programmers, scripters, designers, musicians, audio technicians, etc. In practice these roles are often split and specialized even further; look at some credit lists. Final Ramblings And so when I hear yet another hopeful announce that they want to follow their dream of making their living in the big budget game development world, I have to admit it makes me groan a little inside, and wonder whether they have any potential career in a team environment, or are just a dreamer barking up the wrong tree. What is more encouraging is when people express an interest in their area: 'how do I educate myself to become an amazing 3D artist', 'how do I become a better graphics programmer', 'how can I improve my AI programming', etc. It is actually surprisingly easy to predict who has potential based on their forum posts and their ability to seek out information, learn, and progress. Paradoxically, a lot of people who *are* successful in games aren't specialized only in games. They are often quite adaptable and can work in e.g. movie / TV in the case of artists / sound / music, or other branches of programming in the case of programmers, and there is a lot of cross movement in these careers. So many times I have heard people profess that they love games, as if it is the only qualification needed. 
I love chocolates; that doesn't mean I want to spend my life in a chocolate factory. The industry does not need more people who love games, it needs people who love doing the jobs that are involved in making games. If you identify more with the first category than the second, perhaps a career rethink is in order.
  17. 7 points
I've decided that I simply do not have enough time to meet the requirements for this challenge. I've created all the assets, demo level, lighting and particle effects, movement, camera, etc., but I still don't have combat put in, let alone an inventory, item, and stats system. With the deadline being February 16, 2019 12AM UTC, that leaves me until Friday, February 15, 2019 5PM my time to finish, which isn't going to happen. Most of today is booked up being Valentine's Day, and Friday I have other commitments. Either way, I didn't devote enough time to this challenge, so better luck next time I guess. I'm looking forward to playing the entries that get submitted.
  18. 7 points
Took me a lot longer than I wanted, but I converted the stats system from my JS prototype into UE4 blueprints. Why BP and not C++? I'm not really sure, but that's how I did it. Looks very noisy. I'm not that great at making BPs easily readable and good-looking yet, but I think I'm getting there. I think. I also added in the injuries; right now they just apply on some hotkeys without any modifications. It's better to see it in action though, so I recorded a video. The music is just something I had lying around from another game I worked on. I'll need to make a track or two that matches this game's feel some time. My current plan is to finish the system by adding in treatments. Once that is up and going, I think I'll work on getting the rat mob in there. Then I'll start getting the system to affect the character. Not sure what I'll tackle after that. The major elements I have left to do are all the mobs, the items/inventory/equipment, sound effects, music, and the dungeon generation. I suppose all the UI elements too, but I don't think I'll have time to do anything fancy, so I'm just going to put in some placeholders. I suppose it's all really placeholders right now, but maybe you know what I mean. I made a few item graphics too. The chest is actually two frames, one closed and the other open. If you can't tell what they are, I'll have to work on them, but they are a club, sword, dagger, cloth armor, leather armor, chain mail, plate mail, a potion, a scroll, a bandage (not toilet paper), thread, and treasure. So I'm a little behind where I thought I would be, but I still think I can get it done in time.
  19. 7 points
Thank you and Happy New Year! I'd like to toast the GameDev community as we celebrate the end of 2018! As we enter 2019 and begin to celebrate GameDev.net's 20th anniversary, I wanted to take the opportunity to say thank you. Thank you for being part of this community, for contributing to its growth and success, and for supporting GameDev.net as your platform of choice to learn, share, and connect with game developers and technology enthusiasts. Your participation has been invaluable in making this community what it is today. As part of our 20th year celebration, and to share the stories of those who have shaped our community (the members), we're asking everyone to submit their GameDev Stories, big and small, by answering some questions about themselves, their participation, and the impact and value GameDev.net has had on their lives. You can click here to submit your story. I think I can speak on behalf of the entire GameDev.net team when I say that we are humbled to be part of the GameDev community and have enjoyed all of the great discussions, articles, blogs, and projects that have come out of it. In 20 years I have watched the games industry evolve from a world where a "game engine" was a new concept into the powerhouse of creativity and technological change that it is today, and with that I am excited about the future growth and evolution of GameDev.net as the best game development community and platform. Again, thank you. And on behalf of all of us at GameDev.net, have a safe and Happy New Year! Kevin @khawk Hawkins, GameDev.net Co-founder
  20. 7 points
There is no requirement for anyone to replicate the worst behavior on the internet. Anonymous communication tends to get more heated because of its nature. I think most people (including myself) have been guilty of bad behavior at times, but it's not something to strive for. I'm an American, but I actually live in Russia, and the vast majority of people here treat me well. There are lowlifes in every country. If you want to associate with them and act like them, that's your choice. However, I know that most people in Russia interact on a daily basis with a reasonably high level of civility. In general, you can choose to ignore the few who don't. And BTW, I hope this burning hamsters stuff was simply a bad joke. Because to me that's way cruel.
  21. 7 points
My entry submission pages: PROJECT PAGE: BLOG POST PAGE:
  22. 7 points
Thankfully I was able to complete the challenge, and as rough as the game might be and look, I'm glad I finished it. Foremost, I want to thank GameDev.net for even having these challenges! Even though I'm a bit of a last-minute contender, I really do enjoy pushing myself to finish these projects. I want to give a special thanks to @lawnjelly for being the very reason I even bothered to try out Unity for the first time, and most of all for providing free assets to use, which saved me! I wouldn't have been able to complete the entry on time otherwise. Thank you to all those that followed my progress too! I also enjoyed following all your entries as well.

Post-Mortem

What went right:

1. I would have to say using Unity turned out to be a great choice for this project. I normally use my own engine, or just code from scratch using a library for challenges. I used this challenge as an opportunity to learn the Unity engine, and it was very successful. I found it extremely easy to jump in and code my own scripts for the various parts of the game. I had to get used to the editor and how things are handled, but it all worked out in the end.

2. My game plan was very successful. With the limited time I had, planning everything out ahead of time allowed me to push through this project at a fast pace.

3. Templating everything worked amazingly well. I was able to streamline every aspect of this game, and because of how I set up the rows I could do min-max for distances, speed, variations in spawns, etc., though I didn't implement these features in the final release even though the capability is already there.

4. @lawnjelly's asset pack helped me complete this project. I normally set out to do everything myself graphics-wise, but I ran into pitfalls both in terms of motivation and work. I was only able to create three assets, but by using his asset pack I was able to finish up the rest of the areas and ultimately completed the challenge.

5. Building the full project took less than 15 seconds. I was surprised how fast my builds were.

What went wrong:

1. Motivation was the biggest issue I had. No idea why; maybe I just wasn't feeling the project, but anytime I had free time the last thing I wanted to do was work on it. Thankfully, since I committed to finishing this project, I forced myself to pull through in the end anyhow.

2. Work. October and November turned out to be the most profitable months my company had in 2018, and I was busy working on several cases. This didn't leave me with as much free time, and combined with my motivation issue, that wasn't a good combo!

3. The Visual Studio debugger kept crashing my Unity. Not sure why, but it was getting extremely annoying when attaching.

4. Unity wouldn't sync properly with Visual Studio on several occasions, causing me to reload everything when trying to work on my scripts in code.

5. I wasn't able to create all the graphics, or at the quality I wanted.

6. The environment textures turned out very mediocre due to me rushing and using a sloppy method to generate them as PBR materials from pictures. The water could've been done a lot better and with motion, but again, no time. I would've made way better textures from scratch.

Project:
  23. 7 points
In the middle of last week I found out about Cinemachine. Something like yesterday, I also discovered Timeline. All in all, I'm pretty happy with what I've been able to achieve (even if the project is a criss-crossing maze of conflicting scripts and animations). Well, not so much "conflicting scripts", but I had to call it a day because both Unity and Visual Studio keep breaking. I can't help suspecting the newfangled Unity addons. Otherwise, even though I feel like I'm not going to finish in time, it suddenly seems that I at least have all the gameplay in place, as far as coding goes... Oh, except for the most basic part, of course: the movement. I've looked at it in the orthographic view, and there's really no rhyme or reason to how far the next jump is going to be. It seems I have to completely redo it, except I have no idea how. 😞
  24. 7 points
Finally made the first release of my frogger game for the gamedev challenge yesterday:

What went right

Using Godot Engine. Godot and GDScript were very quick to learn and get started with, and are very good for these types of small-scale games. Overall I preferred it to Unity, which I used last challenge.

Using the skin modifier for modelling in Blender. This enabled me to make game creature models fast, around a day for modelling, rigging, animating, and texturing a model.

Using 3d paint for texturing creatures. Having spent many months developing 3d paint, it is really starting to pay off in quick texturing of assets. Blender can do this texturing too, but the workflow is much faster in 3d paint.

What went wrong

3D sound broken in Godot. I had to do some bodging to get any kind of positional sound working, and it is flaky at best. I hope they will fix this soon.

Android support not yet working. My Android hardware / emulator only seems to support OpenGL ES 2, and Godot only supports ES 3 until the 3.1 release. I tried the 3.1 alpha but no joy as yet.

Creating art assets took most of my time, approximately 2/3 of the development time (I am not an artist!).

Moving house. I only realistically had the first three weeks to work a lot on the game, so I tried to finish as much as possible early on. I do not even have access to a computer / internet at the new house yet.

Dealing with different aspect ratios. I don't really deal with this as yet; I may have to address it.

Normal mapping the assets. I tried this on the cars, but it is very finicky to get working right and I don't have much experience. It took a lot of time and the difference was negligible, so I dropped it.

Procedural terrain texturing. Implemented, but it was too slow in GDScript, so I precreated 5 terrain textures and just used them in the levels. The same algorithm was fast enough in Unity in C#, so I think GDScript is currently several times slower. (However, I do prefer the GDScript language to C#.)

No wheels on cars. This is just funny; I always intended to put them in but never got round to it!

Dropping lots of features due to lack of time. This is typical of gamedev in general, but luckily I had enough features to make it playable. There is already support for other pickups like score and poison etc.; I just didn't have time to make the models.
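The precompute-instead-of-generate workaround mentioned above (baking a small pool of terrain textures ahead of time rather than running a slow per-pixel loop in a scripting language at level load) can be sketched roughly as follows. This is a minimal illustration in Python, not the author's actual GDScript or C# code; the noise hash, the 64-pixel texture size, and the pool of 5 variants are all assumptions for the sketch.

```python
import random

def value_noise(x, y, seed):
    # Cheap integer hash giving a pseudo-random value in [0, 1].
    n = (x * 374761393 + y * 668265263 + seed * 987643211) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n & 0xFF) / 255.0

def generate_terrain_texture(size, seed):
    # The per-pixel loop: cheap in compiled C#, but several times slower
    # in an interpreted scripting language - hence baking it ahead of time.
    return [[value_noise(x, y, seed) for x in range(size)]
            for y in range(size)]

# Bake a small pool of texture variants once, up front...
PRECOMPUTED = [generate_terrain_texture(64, seed) for seed in range(5)]

# ...so each level just picks from the pool instead of re-running the loop.
def terrain_for_level(rng):
    return rng.choice(PRECOMPUTED)

tex = terrain_for_level(random.Random(42))
```

The trade-off is the one the post describes: less variety (a fixed handful of textures instead of fresh per-level generation) in exchange for removing the expensive loop from the runtime path entirely.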
  25. 7 points
This forum is for discussion about game design. A lot of people post questions here that do not belong here, probably because they think "game design" is a catch-all term for anything related to game development. IT IS NOT. The catch-all term for anything related to developing games is "game development." The catch-all term for someone involved in developing games is "game developer," not "game designer." Game design is a subset of game development (NOT vice versa). Game design is "the act of defining a game in detail." Game design is not "programming," and it isn't "graphic design." It's mostly ideation and communication. Please do not post technology questions, career questions, or business questions on this board.