Popular Content

Showing content with the highest reputation since 12/17/17 in all areas

  1. 11 points
    I'm gonna call [citation needed] on that one. We don't really know what consciousness is yet. Not all of us believe in souls or the supernatural, incidentally. From my point of view, dismissing AI on the grounds that it can't possibly have something that we haven't demonstrated to even exist, never mind form a fundamental aspect of consciousness, seems... premature. This looks like an attempt to have a religion thread...
  2. 10 points
    Beginners don't understand how game development works. They think game X hasn't been made just because no one thought to do it; they never consider that it wasn't made because it's either unfun or too difficult to program, or both. There are people who do the same with science, assuming that the only reason science doesn't accept something is that no one came up with the idea. It's a form of the Dunning-Kruger effect, really. Beginners also tend to underestimate costs and assume that because their idea is obviously so perfect, people are going to swarm in and volunteer to do all the work for free just for a cut of "the profits", which they imagine must be millions and millions of dollars that just keep on coming. So they don't understand how markets work, either. More Dunning-Kruger here. I'm not innocent; I was one of those idiot beginners.
  3. 9 points
    I've always loved video games. As a child, I spent hours playing on my Atari 2600, on our home PC, or at a friend's house on his Nintendo and Super Nintendo. As time went on, I discovered QBasic and started to learn to make simple programs. It clicked - this was how the creators of the games were doing it! From there, I became very interested in learning about the creation process. I experimented, read articles in magazines, and as the World Wide Web took off I ventured online to learn more.

Games got bigger and fancier, and some of them shipped in special editions with documentaries about their creation, and I loved all of it. Well-funded teams of hundreds of people were now working to create fantastic games with hours of gameplay and breathtaking visuals. I read articles and watched "making of" documentaries which described how developers would work long hours, forgoing days off, working late nights, and sometimes even sleeping in the office to meet deadlines. I was so impressed with the effort and dedication they put in. How much must you love a product to give up your nights and weekends, and spend time away from your family to finish it off? This was how great games were made, and I was in awe of the industry and the process. This was what I idolized. This was what I aspired to.

I was wrong. The process I have described above is not necessary, and it is not cool. Developers do not need to sacrifice their free time, their sleep, and their health to create great games. The science is in: numerous studies have shown that well-rested people are more productive and less prone to mistakes. The stress of this schedule and the lack of sleep are profoundly damaging to people's health, cause burnout, and drive talented developers away from our industry. Just think about a cool feature that you loved in a AAA game. If the team went through a period of crunch, the developer who created that feature may have burned out and left the industry - they might not be creating awesome things for gamers anymore.

We should not idolize a process that is harmful to developers. When we hear about crunch, the overwhelming reaction should be that this is not OK. Crunch is not cool, and developers still using this process should work towards a better way. Happier, healthier developers will produce better work, and in the long run they will produce it more efficiently. Happier, healthier developers are more likely to stay in our industry rather than seeking greener pastures, resulting in more people with extensive experience who can then work on more advanced tasks and ideas, and push the industry forward.

Thankfully, a growing number of developers have moved on or are moving on from this toxic culture of overtime and crunch, and a growing number of people are willing to speak up. Unfortunately, it's far from everyone, and there are many developers still exploiting workers. I'm putting my foot forward to say that I do not support a culture of overtime and crunch. We can do better, and we need to do better. I'm not the first person to share these sentiments, and I'm not an influential person, but I hope I won't be the last, and if I can convince even one more developer to stand up for better treatment of workers, then hopefully I've made some small difference. If you agree with me, next time you hear someone discussing crunch as if it's just a normal part of the process, let them know that there's a better way and that you don't support crunch and overtime. Let them know that crunch isn't cool.
  4. 9 points
    I think a big factor is that video games aren't physical things, but just data. Game development does not, at least in theory(!!), require any money, supporters or physical resources beyond what most people (at least in rich countries) already own anyway. With infinite time and infinite knowledge, you could really create any sort of game completely on your own. You don't need to buy anything (all required software is available for free), you usually don't need to care about laws, and there is nobody you depend on who could just say "no" for whatever reason. Compare this to constructing a building: even if you had all the knowledge required for every aspect of the construction, you would still need to buy land to build on, you would need to buy building materials and machines, many tasks would probably be physically impossible to do alone, and you would need to follow a ton of regulations to avoid being shut down. With none of these obstacles, it is not so surprising that the effort of game development is drastically underestimated by many people. Two more thoughts:
- Since there is no initial risk involved at all, fantasizing about crazily huge, unrealistic software development projects is just much less frightening. All you need to invest is some time, and if you fail, you haven't lost anything other than that time.
- The less you understand about software development, the less you can imagine how much work it is. Everybody has somewhat of an idea of what it takes to build a house, because you can see people doing it frequently somewhere in your town. But in a non-developer's everyday life, you usually don't see people working on video games, so how could you get a realistic impression of it?
  5. 8 points
    You're doing this:
http://highexistence.com/spiritual-bypassing-how-spirituality-sabotaged-my-growth/
http://highexistence.com/10-spiritual-bypassing-things-people-total-bullshit/
Even if our brains are some kind of magical antenna that channels in a magical spirit consciousness from another plane of existence... what's stopping us from building our own mechanical antennae that channel magical spirit consciousness into our AIs?
  6. 6 points
    Yep, here is my contribution to GameDev 2018: Missile Command. The source code is inside the zip: https://github.com/wybifu/missile_command/archive/master.zip
github: https://github.com/wybifu/missile_command
Windows users: Windows_version.exe
Linux users: Linux_version, or just compile it for any other OS.
language: C
library: SDL2
I feel that I have to explain myself, as the code is awful: I was just writing as I was thinking. When a new idea popped up, I just hardcoded it and didn't care whether it fit the rest of the code, which can produce strange lines like:
if (((Agent *)(((Agent *)(a->ptr0))->ptr1)) != NULL) ((Agent *)(((Agent *)(a->ptr0))->ptr1))->visible = BKP_F
Yeah, absolutely horrible. A small video of gameplay: another screenshot for pleasure:
  7. 6 points
    This is in no way limited to game development - every teenager with an electric guitar thinks they will be the next Jimi Hendrix too. It's easy to underestimate the difficulty of anything before you have tried and failed a few times.
  8. 6 points
    If that's a problem, give up game development. It's largely thankless, so if you're not doing it for yourself, you shouldn't do it at all.
  9. 6 points
    In some cases it's because they have what I call a "glorious vision" of what their game will be. They imagine that they will re-create reality, and that in their game things are going to work just like reality. It's not until they actually try to make the game that they realize that, just like everyone else's games, theirs will necessarily function in the same simple ways. You can imagine the combat in your game being just like real life right up to the point that you attempt to actually implement it. That is when reality sets in. This is one of the better reasons for creating a detailed design document, one that works out exactly and specifically how key aspects of the game will actually work, because this is the process that shatters the "glorious vision" and brings you back to the reality of how simple it is actually going to be in the end, compared to the wishful thinking of your "glorious vision".
  10. 6 points
    When have web developers *ever* made good decisions about technology choices? Someone decides they want something easy and minimal now, they make it available for the unwashed masses, and it spirals out of control for years. Wash, rinse, repeat.
  11. 5 points
    In my own opinion, this is NOT a bad thing. Being competitive and striving to be the best is a very good thing. Such competitive, over-ambitious, overzealous and talented youngsters only need to be mentored and well directed by an experienced guru in the field. Imagine a very talented sportsperson (a footballer, say) who is playing in a team without a manager. The consequence is that they will be all over the place, without a proper overarching strategy, without good structure, and they will be learning from their mistakes the hard way. I was like that... without a mentor, so I followed only my instincts and I paid dearly for that. A lot of wasted years. Probably not fully recovered yet. But it wasn't because of my over-ambition and competitiveness; rather, it was because I wasn't mentored or guided.
  12. 5 points
    Originally posted on Medium

I released my first game approximately a month and a half ago and tried almost all of the promotion methods I could find on various websites out there; all of them are listed here. With this article I want to share the story of my "promotion campaign". The very first thing I did was create a Medium account. I decided to promote the game using thematic articles about game development and related stuff. Actually, I still do this, even with this article here :) In addition to Medium, the same articles were posted to my LinkedIn profile, but mostly to strengthen it. Moreover, you may find a separate topic on the Libgdx website (the framework the game is written on). Then the press release was published. Actually, you should do a press release the same day as the game launch, but I didn't know about that back then. And to be honest, all of the methods above were not very successful in terms of game promotion. So I decided to increase the game's presence around the web and started to post articles on various indie game dev related websites and forums (that's how this blog started). Finally, here comes the list of everything created over the past month (some in Russian, be aware):

https://www.igdb.com/games/totem-spirits
http://www.slidedb.com/games/totem-spirits
https://forums.tigsource.com/index.php?topic=63066.0
https://www.gamedev.net/forums/topic/693334-logical-puzzle-totem-spirits/
http://www.gamedev.ru/projects/forum/?id=231428
https://gamejolt.com/games/totem_spirits/298139
https://vk.com/gameru.indie?w=wall-9222155_202256
https://test4test.io/gameDetails/24

Not so many, one could say, but I could not find any more good services! If you know one, please share it in the comments. What are the results, you may ask? Well, I have to admit that they are terrible. I got a little less than a hundred downloads, and I'm pretty sure that most of them came from relatives and friends. And you can't really count those as genuine downloads, since I literally just asked them to get my game on their smartphones. But the good thing is that many of those who played Totem Spirits shared their impressions of the game. They truly liked the product! It was so pleasant to hear their thoughts. I personally know several people who finished the game with all diamonds (a.k.a. stars) collected. Still, I don't regret the time spent on the game, because I've learnt a great lesson: two years of development is way too much for such a simple and narrow-profile game. It seems that now is not a good time for such complicated puzzlers, or I just failed badly with the promotion. Now the plan is to develop and launch a game in a maximum of 160 hours (two working months). The coding process has already begun, so hopefully in January you will see the next product of the Pudding Entertainment company!
  13. 4 points
    Missile Command Challenge

To celebrate the end of 2017 and the beginning of 2018 we are making this a 2-month challenge. We also hope this gives late entrants enough time to submit entries. Your challenge is to create a single-player Missile Command clone. Create an exact replica or add your own modifications and improvements! Play the original Missile Command on IGN at http://my.ign.com/atari/missile-command.

Game Requirements
The game must have:
- Start screen
- Key to return to the start screen
- Score system
- Graphics representative of and capturing the spirit of a Missile Command clone
- Sound effects
- Gameplay mechanics in the spirit of Missile Command - the game does not need to be an exact clone, but it does need to have Missile Command gameplay
While single player is the goal, a multiplayer mode is also acceptable.

Art Requirements
The game may be in 2D or 3D.

Duration
December 1, 2017 to January 31, 2018

Submission
Post your entries on this thread:
- Link to the executable (specify the platform)
- Screenshots: create a GameDev.net Gallery Album for your project and upload your screenshots there! Post the link in the thread and your screenshots will be automatically linked and embedded into the post. Same with YouTube or Vimeo trailers.
- A small post-mortem in a GameDev.net Blog, with a link posted in this thread, is encouraged, where you can share what went right, what went wrong, or just share a nifty trick.
- A source-code link is encouraged for educational purposes.

Award
Developers who complete the challenge in the specified time frame will receive the 2018 New Year Challenge: Missile Command medal, which will be visible in their profiles. We are working on a new system on GameDev.net where you can submit your entries. It will be made public before this Challenge is complete. Details on how to submit will be posted here when it is available.
  14. 4 points
    It sounds like you have several good reasons to accept the new job. It's up to you to consider what "committed to a 3-month long project" means, as in what type of commitment you made. You don't tell us. So it's your call as to whether you're breaking some moral code of yours. But generally I'll say that if the company put all their eggs in one basket, so to speak, by having a new project rely so heavily on just one person, then they are the ones who put the "company in that position", not you. But again, I don't know what commitment you made to them. Maybe you just accepted the project, or maybe you gave your word that you would be there until completion. Those are different levels of commitment. Assuming your "commitment" was just to accept the project, then I'd say you should take the new job for all the reasons you gave. You need to do what's best for you and your family, and let the company do what's best for them. If the tables were turned and the company thought it would be best to let you go, they might feel very bad about it, but at the end of the day I'd expect that they'd do what's best for the company. You can also offer to help them with the transition to a new engineer on the project. I'm not sure what the specific details of that would be, but maybe it's something to consider. Maybe you could even work for them on a freelance basis for a while to help them transition, and ask the new company if they'd allow you to work part-time while you do that. I don't know; you have to think about it. But there might be some way to ease the transition for your old company if you really want to.
  15. 4 points
    Once a project has been selected (depending on who holds the purse strings), management will usually select some sort of broad resource plan for the whole project, deciding how many programmers/designers/artists/etc will work on the project, and at which stages in the project lifetime. The project is often divided up into milestones, which act as checkpoints to verify that the project is being delivered as expected and on time. Less formal projects might have a handful of milestones, whereas big publisher-funded projects often have a formal milestone delivery process where the build is explicitly handed over every so often (monthly, or six-weekly, or quarterly, etc) to be assessed. Each milestone typically has a bunch of intended 'deliverables' - they are ideally working features, but can also be assets that may or may not yet be integrated into a feature. And there can be different expectations for the state of a deliverable (e.g. "prototype", "working", "finished", etc). The actual state of those deliverables, relative to what was promised, dictates whether the milestone is 'passed' or 'failed'. In the latter case the team is usually expected to fix up those failed deliverables before the next milestone is handed over. If you keep failing milestones, the publisher may terminate the project, as they lose confidence in your ability to complete it. Scheduling within a milestone is usually done via some sort of agile planning process. With a given set of deliverables in mind, the heads of each department and the project management team ("producers") come up with a list of prioritised tasks and decide who to allocate them to. Those people receive the task descriptions and implement them. The day-to-day work of each person depends entirely on their job and their task. There may be a 'daily standup' meeting or Scrum where people briefly discuss what they're working on and ask for help if they're stuck. 
Beyond that, they communicate via conversation/meetings/email/Slack to resolve ambiguities and discuss details. The rest of the time, they're probably at their computer, executing their current task by creating code, art, whatever.
  16. 4 points
    Hi everyone! It has been more than two months since I released I Am Overburdened and since I last wrote a devlog entry. Please accept my apology for this; I was super busy with the release and support of the game. But now I'm back with an in-depth analysis of how the overall production and final numbers turned out.

Summary
I want to do a fully detailed breakdown of the development and business results, but I don't want to break it up into a typical postmortem format (good, bad, ugly). I've drawn my conclusions, and I know what I have to improve for my upcoming projects, but I don't want to dissect the I Am Overburdened story this way. I want to emphasize how much work goes into a game project and focus on what a journey like this actually looks and feels like. If you really want to know my takeaways, here they are in a super short format: I consider the game a success from a development perspective (good), but I failed from a marketing and sales standpoint (bad and ugly). Now I'll go into the details, but I will focus more on the objective "what happened, how it went, what it took" parts.

Development
The game started out as a simple idea with a simple goal in mind. I partially abandoned my previous project because it ballooned into a huge ball of feature creep, so I wanted to finish a more humble concept in a much shorter time period. The original plan was to create a fun game in 4 months. I really liked the more casual and puzzle-y takes on the roguelike genre, like the classic Tower of the Sorcerer, or the more recent Desktop Dungeons and Enchanted Cave games, so I set out to create my own. I designed the whole game around one core idea: strip out every "unnecessary" RPG element/trope and keep only the items/loot, but try to make it just as deep as many other roguelikes regardless of its simplicity. From this approach the "differentiating factor" was born, a foolishly big inventory, which helped me to define and present what I Am Overburdened really is.
A silly roguelike full of crazy artifacts and a "hero" who has 20 inventory slots. Most of the prototyping and alpha phases of the development (the first two months) went smoothly; then I had to shift gears heavily…

Reality check
After 3 months of development, when all of the core systems were in place and I deemed big parts of the content non-placeholder, the time came to show the game to others. I realized something at that point that forced me to make a huge decision about the project. The game was not fun. The idea was solid, the presentation was kind of OK, but overall it was simply mediocre, and a month of polishing and extra content could in no way change that! Back then I was super stressed out over this, and I thought of it as my hardest decision as a game maker, but looking back I think I made the right choice (now I feel like I actually only had this one). I decided to postpone release and explore the idea further even if it doubled the originally planned development time (and it did), and most importantly I decided not to make or release shovelware, because the world really isn't interested in another one and I'm not interested in making/publishing one…

Final scope
So after 4 months of development, feeling a bit glum but also reinvigorated to really make the most out of I Am Overburdened, I extended the scope of the design & content and planned to polish the hell out of the game. This took another 4 months and almost a dozen private beta showings, but it resulted in a game I'm so proud of that I always speak of it as a worthy addition to the roguelike genre and as a game that proudly stands on its own! Some numbers about the end result: it takes "only" around 30 to 40 minutes to complete the game on normal mode in one sitting, but due to its nature (somewhat puzzle-y, randomized dungeons & monster/loot placements + lots of items, unlocks and multiple game modes), the full content cannot be experienced in one play-through.
I suspect it takes around 6 to 12 full runs (depending on skill and luck) to see most of what the game has to offer, so it provides quite a few hours of fun. There are 10 different dungeon sets, built from multiple dozens of hand-authored templates, so that no level looks even similar to another within one session. They are populated by 18 different monsters, each having its own skill and archetype (not just the same enemy re-skinned multiple times). And the pinnacle: the artifacts. The game has more than 120 unique items, all of them having a unique sprite and almost all of them having unique bonuses, skills (not just +attributes, but reactive and passive spells) and sound effects. This makes each try feel really different and makes item pickup/buy choices feel important and determinative. The game was also localized to Hungarian before release, because that is my native language, so I could do a good job with the translation relatively fast; this also made sure that the game is prepared to be easily localized to multiple languages if demand turns out to be high.

Production numbers
How much code did I have to write, and how much content did I have to produce, all in all, to make this game? It is hard to describe the volume/magnitude with exact numbers, because the following charts may mean a totally different thing for a different game or in the case of different underlying technologies, but a summary of all the asset files and the code lines can still give a vague idea of the work involved. Writing and localization may not sound like a big deal, but the game had close to 5000 words to translate!
I know it may be less than a tenth of the dialogue of a big adventure or RPG game, but it is still way larger than the text in any of my projects before… I'll go into the detailed time requirements of the full project too after I've painted the whole picture, because no game is complete without appropriate marketing work, a super stressful release period, and post-release support with updates and community management work.

Marketing
If you try to do game development (or anything, for that matter) as a business, you try to be smart about it: look up what needs to be done, how it has to be approached, etc. I did my homework too, and having published a game on Steam before, I knew I had to invest a lot into marketing to succeed, otherwise simply no one would know about my game. As I said, this is the "bad" part, and I'll be honest: I think I could have done a much better job, not just based on the results, but based on the hours and effort I put in. Let's take it apart just like the development phase.

Development blog/vlog
I started writing entries about the progress of the game really early on, hoping to gather a small following interested in the game. I had read that the effectiveness of these blogs is minimal, so I tried to maximize the results by syndicating the posts to at least a dozen online communities. I also decided to produce a video version, because video is preferred over text these days, plus I could show gameplay footage every now and then. I really enjoyed writing my thoughts down and liked making the videos, so I will continue to do so for future projects, but they never really reached many people despite my efforts to share them here and there…

Social media
I tried to be active on Twitter during development, posting GIFs, screenshots and progress reports multiple times a week. Later on I joined other big sites like Facebook and Reddit too to promote the game. In hindsight I should have been more active and should have joined Reddit way earlier.
Reddit has a lot of rules and takes a lot more effort than Twitter or Facebook, but even with my small post count it drove 10 times more traffic to my store page than any other social media site. Since the game features some comedy/satire and I produced a hell of a lot of GIFs, I tried less conventional routes too, like 9gag, imgur, GIPHY and tumblr, but nothing really caught on.

Wishlist campaign
I prepared a bunch of pictures up front featuring some items and their humorous texts from the game. I posted one of these images every day, starting from when the game could be wishlisted on Steam. I got a lot of love and a lot of hate too, but overall the effectiveness was questionable. It only achieved a few hundred wishlists up until the release day.

Youtube & Twitch
For my previous Steam game I sent out keys on release day to 100 or so YouTubers who had played any kind of co-op game before, resulting in nearly zero coverage. This time I gathered the contact info of a lot of YouTubers and Twitch streamers up front. Many were hand-collected, plus I got help from scripts, developer friends and big marketing lists! I categorized them based on the games they play and tried talking to a few of those who played roguelikes well before release to pique their interest. Finally, I tried to write a funny press release mail, hoping that they would continue reading after the first glance. I sent out 300 keys the day before release and continued in the following weeks, sending out 900 keys total. And the results? Mixed; could be worse, but it could be much better too. 130 keys were activated and around 40 channels covered the game, many already on release day, and I'm really thankful for these people, as their work helped me reach more players. Why is it mixed then? First, the videos did generate external traffic, but not a huge amount. Second, I failed to capture the interest of big names. I also feel like I could have reached marginally better results by communicating a lot more and a lot earlier.
Keymailer
I paid for some extra features and for a small promotion on this service for the release month. It did result in a tiny bit of extra Youtube coverage, but based on both the results and the service itself, all in all it wasn't money well spent for me (even if it wasn't a big cost).

Press
This was a really successful marketing endeavor considering the effort and the resulting coverage. I sent out 121 Steam keys with press release mails starting from the day before release. Both Rock Paper Shotgun and PC Gamer wrote a short review in their weekly unknown Steam gems series, and the game got a lovely review from Indiegames.com. Also, a lot of smaller sites covered it, many praising it for being a well-executed "chill" tongue-in-cheek roguelike. The traffic generated by these sites was moderate but visible, plus I could read some comforting write-ups about the quality of the game.

Ads
I tried Facebook ads during and a bit after the release week, plus in the middle of the winter sale. Since their efficiency cannot be tracked too well, I can only give a big guesstimate based on the analytics, sales reports and the comparison of the ad performances. I think they paid back their price in additional sales, but did not have much extra effect beyond that. I believe they could work at a bigger scale too, with more preparation and with testing out various formats, but I only paid a few bucks and tried two variants, so I wouldn't say I have a good understanding of the topic yet. Some lifetime traffic results: so much effort and so many people reached! Why is it "bad", then; why were the results such a mixed bag? Well, when it comes to development and design I'm really organized, but when it comes to marketing and PR I'm not at all. As I stated, I was never really "active" on social media, and I have a lot to learn about communication. Also, the whole thing was not well prepared, and the execution, especially right at the release, was a mess. The release itself was a mess.
I think this greatly affected the efficiency! To be more specific, I neglected and did not respond in time to a lot of mails and inquiries, and the marketing tasks planned for the launch and for the week after took more than twice as much time to complete as they should have. I think the things I did do were well thought out and creative, but my next releases and accompanying campaigns should be much more organized and better executed.

Time & effort
I don't think of myself as a super-fast, super-productive human being. I know I'm a pretty confident and reliable programmer, and somewhat so as a designer, but I'm a slowpoke when it comes to art, audio and marketing/PR. To follow my progress and to aid estimations, I always track my time down to the hour. This also gives me confidence in my ability to deliver and allows me to post charts about the time it took to finish my projects. An important thing to note before looking at the numbers: they are not 100% accurate and are missing a portion of the work which was hard to track. To clarify, I collected the hours when I used my primary tools on my main PC (e.g. Visual Studio, GIMP), but it was close to impossible to track all the tasks, like talking about the game on forums & social media, writing and replying to my emails, browsing for solutions to specific problems and collecting press contact information; you get the idea… All in all, these charts still show a close enough summary. 288 days passed between writing down the first line in the design doc and releasing the game on Steam. I "logged" 190 full-time days. Of course more days were spent working on the game, but these were the ones when I spent a whole day working and could track a significant portion of it. Note that in the first 4 months of the project I spent only 4 days each week working on I Am Overburdened (one day a week went to other projects).

Release
So how did the release go? It was bad; not just bad, "ugly".
After I started my wishlist campaign, close to the originally planned date (Oct. 23, 2017), I had to postpone the release by a week due to still having bugs in the build and not having time to fix them (I went on a long-planned and already paid-for vacation). I know this is amateurish, but the build was simply not "gold" two weeks prior to release. Even with the extra week I had to rush some fixes, and of course there were technical issues on launch day. Fortunately I could fix every major problem within the first day after going live, and there were no angry letters from the initial buyers, but having to fight fires (even though it's a common thing in the software/game industry) was super tiring while I had to complete my marketing campaign and interact with the community at the same time. The game finally went live on Steam and itch.io on Nov. 2, 2017! I did not crunch at all during development, but I don't remember sleeping much during the week before and after launching the game. Big lesson for sure. I saw some pictures of the game making it onto the new and trending list on Steam, but it most probably spent only a few hours there; I never saw it even though I checked Steam almost every hour. I did see it on the front page though, next to the new and trending section, in the under $5 list. It spent a day or two there if I remember correctly. On the other hand, itch.io featured it on their front page, and it's been there for around a whole week! With all the coverage and good reviews, did it at least sell well? Did it make back its development costs, if not in the first weeks, then at least in the last two months? Nope, and it is not close yet... Sales In the last two months a bit more than 650 copies of I Am Overburdened were sold: 200 copies in the first week, reaching 400 by the end of November, with the remainder sold during the winter sale.
This is not a devastating result; it is actually way better than my first Steam game, but I would be happier and more optimistic about my future as a game developer had I reached around 3 to 4 times as many copies by now. To continue as a business for another year in a stable manner, around 7 to 8 times as many copies in total (with price discounts in mind) would have to be reached during 2018. I'm not sure if the game will ever reach those numbers though. If you do the math, that is still not "big money", but it could still work for me because I live in Eastern Europe (low living costs) and I'm not a big spender. Of course, this is an outcome to be prepared for and expected when someone starts a high-risk business, so I'm not at all "shocked" by the results. I knew this (or an even worse one) had a high chance. No matter how much effort one puts into avoiding failure, most game projects don't reach monetary success. I'm just feeling a bit down, because I enjoyed every minute of making this game, a.k.a. my "dream job" (maybe except for the release), but most probably I won't be able to continue my journey and make another "bigger" commercial game. I may try to build tiny ones, but I certainly will not jump into a 6+ month project again. Closing words It is a bit early to fully dismiss I Am Overburdened and my results. It turned out to be an awesome game. I love it and I'm super proud of it. I'm still looking for possibilities to make money with it (e.g., ports), and over a longer time period, by taking part in several discount events, the income generated by it may cover at least a bigger portion of my investment. No one buys games at full price on PC these days; even AAA games are discounted by 50% a few months after release, so who knows... If you have taken a liking to the game based on the pictures/story, you can buy it (or wishlist it) on Steam or at itch.io for $4.99 (may vary based on region).
As an extra for making it all the way to the end of the post, I recorded a "Gource" video of the I Am Overburdened repository right before Christmas. I usually check all files into version control, even marketing materials, so you can watch the output of almost a year of work condensed into 3 minutes. Enjoy! Thank you very much for following my journey and thanks for reading. Take care!
  17. 4 points
Hello there. I'm not really the blogging type; this is my first ever blog, so I'll do my best. I've been trying to showcase my video game engine, written from scratch, in different professional forums, with mixed results. I'm currently a happily employed 3D Graphics Programmer in the medical device field who also loves creating graphics programs as a side hobby. It's been my experience that most people who aren't graphics programmers simply don't appreciate how much learning goes into being able to code even a small fraction of this from scratch. Most viewers will simply compare this to the most amazing video game they've ever seen (developed by teams of engineers) and dismiss it without considering that I'm a one-man show. What I'm hoping to accomplish with this: I'm not totally sure. I spent a lot of my own personal time creating this from the ground up using only my own code (without downloading any existing make-my-game-for-me SDKs), so I figured it's worth showing off. My design: 1. Octree for scene/game management (optimized collision detection and scene rendering path), from scratch in C++. 2. All math (linear algebra, trig, quaternions, vectors), containers, sorting, and searching, from scratch in C++. 3. Sound system (DirectSound, DirectMusic, OpenAL), from scratch in C++. 4. Latest OpenGL 4.0-and-above mechanisms (via GLEW on win32/win64) (GLSL 4.4), with very heavy usage of GLSL. Unusual/skilled special effects/features captured in the video worth mentioning: 1. Volumetric explosions via a vertex-shader-deformed sphere with a shock-wave animation, further enhanced with bloom post-processing (via compute shader). 2. Lens flare generator, which projects variable-edge polygon shapes along a screen-space vector from the center of the screen to the light position (again, in screen space: size and number of flares based on the intensity and size of the light source). 3. Real-time animated procedural light ray texture (via fragment shader), additively blended with the volumetric explosions.
4. Active camouflage (aka Predator camouflage). 5. Vibrating shield bubble (using the same sphere data as the volumetric explosion), accomplished with a technique very similar to the active camouflage. 6. Exploding mesh: When I first started creating this (years ago), I was using the fixed-function pipeline. I used one vertex buffer, one optimized index buffer, then another special unoptimized index buffer that traces through all geometry one volume box at a time, and each spaceship "piece" was represented with a starting and ending index offset into this unoptimized index buffer. Unfortunately, the lower the poly resolution, the more obvious it is what I was doing: when the ship explodes you see the triangle jaggies on the mesh edges. My engine is currently somewhat married to this design, which is why I haven't redesigned that part yet (it's on the list, but priorities). If I were to design this over again, I'd simply represent each piece with a different transform, and decide for each interpolated object-space vertex (input to the pixel shader) whether it was in front of or behind an arbitrary "breaking plane". If the position was beyond the piece's breaking plane boundaries, I would discard the fragment. This way, I can use one vertex buffer plus one optimized index buffer and achieve better-looking results with faster code.
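The "breaking plane" redesign described at the end can be sketched on the CPU; this is a hedged illustration in Python of the per-fragment test (signed distance to a plane, discard on the wrong side), not the author's actual shader code, and all names here are made up for illustration.

```python
# Hypothetical CPU-side sketch of the "breaking plane" idea: each mesh piece
# keeps the full geometry but discards any fragment whose object-space
# position falls on the wrong side of its breaking plane.

def signed_distance(point, plane_normal, plane_offset):
    """Signed distance from a point to the plane dot(n, p) = offset."""
    return sum(p * n for p, n in zip(point, plane_normal)) - plane_offset

def keep_fragment(position, plane_normal, plane_offset, front_side):
    """Emulates the fragment-shader test: discard (return False) if the
    interpolated object-space position is beyond the piece's breaking plane."""
    d = signed_distance(position, plane_normal, plane_offset)
    return d >= 0.0 if front_side else d < 0.0

# A piece in front of the plane z = 0 keeps fragments with z >= 0, while the
# matching back piece keeps the rest, so no triangles are duplicated.
plane_n, plane_d = (0.0, 0.0, 1.0), 0.0
fragments = [(0.0, 0.0, 0.5), (0.0, 0.0, -0.5)]
front = [f for f in fragments if keep_fragment(f, plane_n, plane_d, True)]
back = [f for f in fragments if keep_fragment(f, plane_n, plane_d, False)]
```

In a real shader the two "pieces" would be the same draw data with different transforms and plane constants, and the `discard` keyword would replace the boolean return.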
  18. 4 points
Isn't that field of science called psychology? Or maybe science itself is just applied philosophy at the end of the day... BTW, they already have metrics for measuring whether an object is conscious or not, which have been demonstrated to be able to tell the difference between normal awake brains, sleeping brains, dreaming brains, anesthetized brains, vegetative comatose brains, minimally conscious brains, and "locked-in syndrome" brains (which would otherwise appear similar to other comatose brains, but on this metric show high levels of consciousness). Science does peer into those mechanisms. People's free will is surprisingly easy to influence... Again, that's psychology (or hypnotism too, if you like). The exact mechanisms of how this process works, though -- or any specific human action, when trying to explain the entire chain of consequence from genesis of thought to action -- are too complex for any human to ever understand (a thousand trillion synaptic connections, multiplied by all the other variables, is an inconceivable amount of data, even when just considering a single moment in time...). There's also the camp that believes the actual physical mechanisms behind thought are rooted in quantum behavior, which is probabilistic, which makes the whole thing "just physics" without having to say that it's deterministic (keeping the "free" part of "free will" free, and leaving the door open for a God who rolls dice).
  19. 4 points
I don't believe @Alberth could have explained that better; the problem is that it isn't a very simple thing to do, so I added images to show some details. Sorry, I am at work and don't have the time to make them great quality. I will show you what Alberth said: First you need what are known as tiling textures; these are the rectangles. The important part is that you can make a larger image from them because they tile. Next you need a mask; this is the shape of your country. It's common for masks to be grayscale or black and white, which saves a lot of data. The way a mask works is that you tell your fragment shader to render the grass texture where the white is, because white = (1,1,1) and any number times 1 is itself (for example, 0.5 * 1 = 0.5), so you can just multiply the color of the texture with the mask. Black = (0,0,0), and any number times black = 0. Your end result should look like this: Next you tell your fragment shader to skip all pixels that are black (0,0,0), and it should render only the grass part. After rendering the grass part you do the same using your rock texture: first you render the rock texture a bit lower and to the left of the image, then you render the grass where you want it. Click the image for full size. For the edge you can just use an edge detect, or you can also store it in the mask. Using tiling textures and a mask you can create all the content you need.
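The multiply-by-mask idea described above can be sketched on the CPU with tiny pixel arrays; this is just an illustration in Python of the math the fragment shader would do (multiply by the mask, skip black), with made-up 2x2 "textures", not real shader code.

```python
# Tiny CPU-side sketch of masking: pixels are (r, g, b) tuples in 0..1,
# and the mask is grayscale (1.0 = white, 0.0 = black).

def apply_mask(texture, mask):
    """Multiply each texture pixel by its mask value: white keeps the
    texture, black zeroes it out."""
    return [[tuple(c * m for c in px) for px, m in zip(t_row, m_row)]
            for t_row, m_row in zip(texture, mask)]

def composite(base, layer, mask):
    """Render the masked layer over the base, skipping black mask pixels
    (a shader would 'discard' those fragments instead)."""
    out = [row[:] for row in base]
    for y, (l_row, m_row) in enumerate(zip(layer, mask)):
        for x, (px, m) in enumerate(zip(l_row, m_row)):
            if m > 0.0:  # skip where the mask is black
                out[y][x] = tuple(c * m for c in px)
    return out

rock = [[(0.5, 0.5, 0.5)] * 2] * 2   # 2x2 "rock" tile
grass = [[(0.1, 0.8, 0.1)] * 2] * 2  # 2x2 "grass" tile
mask = [[1.0, 0.0], [0.0, 1.0]]      # grass only on the diagonal
result = composite(rock, grass, mask)
```

Everywhere the mask is white the grass shows through; everywhere it is black the rock layer underneath remains, which is exactly the layering order described in the post.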
  20. 4 points
As others mentioned, it isn't just games, and it isn't just beginners. It can be that you'll lose an unrealistic amount of weight as your new year's resolution, or that you'll finish an assignment by some deadline, or you're Apple releasing a watch a week after the official release date, or Duke Nukem Forever releasing about 15 years late. The problem has been around for all of humanity. A bit of web searching shows that even in the ancient world, the Roman, Greek, and Babylonian cultures made assorted promises and resolutions each new year, including documented promises to return objects, pay debts, and take better care of themselves. People have been setting (and failing to meet) overambitious, unrealistic goals for all of recorded history. It doesn't matter even if we have experience. We KNOW we aren't going to drop that much weight, but we think that THIS TIME we might be able to. We KNOW that every year we wait to file taxes, but we commit to THIS TIME doing it earlier. We KNOW that similar assignments have required more time, but we think that THIS TIME we can make it by an earlier deadline. We KNOW a game like the one we see has a 15-minute-long scrolling list of names in the credits, but we think we can do it. As is typical, Wikipedia has a writeup listing a bunch of proposed reasons, with references. It looks like something psychologists have been writing about since the 1960s. Given the history of new year's promises, I can imagine Hammurabi reading the stone tablet newspaper at the new year describing how he can keep his goals of losing weight and keeping it off.
  21. 4 points
    We are the music-makers, And we are the dreamers of dreams, Wandering by lone sea-breakers And sitting by desolate streams; World losers and world forsakers, On whom the pale moon gleams: Yet we are the movers and shakers Of the world for ever, it seems. With wonderful deathless ditties We build up the world’s great cities. And out of a fabulous story We fashion an empire’s glory: One man with a dream, at pleasure, Shall go forth and conquer a crown; And three with a new song’s measure Can trample an empire down. We, in the ages lying In the buried past of the earth, Built Nineveh with our sighing, And Babel itself with our mirth; And o’erthrew them with prophesying To the old of the new world’s worth; For each age is a dream that is dying, Or one that is coming to birth. From "Ode" by Arthur O'Shaughnessy
  22. 4 points
    It's been a few months since the last staff blog update, and I apologize for that. It was a busy end of 2017. We've recently announced several new community features here on GameDev.net that I'd like to go through - if you haven't spent time exploring all of GameDev.net then you might have missed these! GameDev Challenges Spawned from this forum request: GameDev Challenges is a new community "game jam"-like event for developers looking to test their skills or learn through short-form projects intended to take no more than a month or two to develop. Developers who complete the challenges in the allotted time earn a badge on their profile. Of course, developers can complete challenges after the allotted time, but right now the badges are not awarded. Right now we are using the GameDev Challenges forum to manage these (Forums -> Community -> GameDev Challenges), and we're currently on the 3rd challenge where developers need to create a clone of the arcade classic Missile Command. The first two challenges were for a Pong! clone and an Arcade Battle Arena along the lines of the classic Bubble Bobble. I encourage everyone to check out the threads and entries for all the challenges! In the near future we'll integrate Challenges with our latest announcement: Projects and Developer Profiles. Projects As we announced: The Project profile is pretty extensive. We recommend setting up a developer blog for each Project, which can then be used to provide updates for the Project and will show on your Project page. A Gallery is also automatically created for your Project (if you don't already have one), and from there you can upload screenshots and videos of your Project - these will also show in your Project page. The rest of your Project profile includes links to your Project homepage, store links, descriptions, platforms, and other details about your project. You can even upload and manage files for others to download. 
Here's the feature list from the announcement:
- Browse, download, and comment on projects from other developers
- Provide updates to your project by linking your GameDev.net blog
- Create your own Developer Profile, including a GameDev.net subdomain (in progress)!
- Showcase your project with screenshots from your project's linked GameDev.net album
- Manage your projects through your Developer Dashboard
- Market your project's website, Facebook, Twitter, Steam, Patreon, Kickstarter, and GameDev.Market pages
- Track project views and downloads through Google Analytics
- Upload and manage file downloads for your project, allowing others to try it out and give feedback
- Link to your project with an embeddable widget (link auto-embeds on GameDev.net)
- Showcase your project with a trailer on YouTube or Vimeo
- Import your project from IndieDB or itch.io
Developer Profiles Developer Profiles are a new feature of your GameDev.net profile for showcasing your development team(s). We have basic features for this right now, with plans to expand it. You'll find all the settings for your Developer Profile in the Project Dashboard mentioned earlier. NeHe Source Code on GitHub As also mentioned recently, we've put all of the NeHe tutorial source code on GitHub at https://github.com/gamedev-net/nehe-opengl. We will accept updates to the source code via pull request. NeHe is our long-running OpenGL tutorial site for developers wanting to learn OpenGL in a very accessible, straightforward manner, and it is available at http://nehe.gamedev.net. GameDev Loadout Podcast And finally, if you're into podcasts, then I suggest checking out our new Podcast section. We've recently partnered with Tony at GameDev Loadout to showcase his podcasts on GameDev.net. Tony interviews industry professionals and indie developers about a number of game development related topics. He's approaching his 80th episode, which is quite a feat! Next?
We have bigger plans with Projects, Developer Profiles, and GameDev Challenges, so keep an eye out for more updates. We'll also be re-evaluating the home page in the near future (yes, again) to try to make it easier to find content you're interested in seeing across the entire site. Oh, one last announcement - the Game Developers Conference is quickly approaching, and GameDev.net members can get 10% off All Access and GDC Conference passes using this code: GDC18DEV As always, please share any thoughts and feedback in the comments below!
  23. 4 points
    I caught wind of the Missile Command challenge earlier in the month and figured I'd knock it out during Christmas break. I was planning on coding more on my Car Wars prototype but the challenge took up most of my programming time. Sleeping late, hanging out with my family, and playing video games took up the rest of my free time. All in all, it was a great way to slide into the new year. I spent some effort keeping the code clean (mostly) and even did a couple of hours worth of commenting and code reorganization before zipping up the package. Many sample projects are quick and dirty affairs so aren't really a good learning resource for new programmers. Hopefully my project isn't too bad to look at. What went right? Switching to Unity a few years ago. This was definitely the right call for me. It helped me land my dream job and allows me to focus on making games rather than coding EVERYTHING from the ground up. KISS (Keep It Simple Stupid) - I started thinking about doing my own take on Missile Command but decided on doing just a simple clone. The known and limited scope let me keep the project clean and kept development time down to something reasonable. Playing the original at http://my.ign.com/atari/missile-command was a big help for identifying features I wanted. Getting the pixel perfect look of old-school graphics was a little bit tricky, but thanks to a well written article I was able to get sharp edges on my pixels. https://blogs.unity3d.com/2015/06/19/pixel-perfect-2d/ I had done some research on this earlier so I knew this would be a problem I'd have to solve. Or rather see how someone else solved it and implement that. Using free sounds from freesound.org was a good use of time. There are only 4 sound effects in the game and it only took me an hour or two to find what I felt were the right ones. What went ok? Making UI's in Unity doesn't come naturally to me yet. I just want some simple elements laid out on the screen. 
Sometimes it goes pretty quickly; other times I'm checking and unchecking checkboxes, dragging stuff around in the hierarchy, and basically banging around on it until it works. I got the minimal core features of Missile Command in, but not all of them. You don't really think about all the details until you start implementing them. I'm missing cruise missiles, bombers, and the splitting of warheads. Dragging the different sprites into position on the screen was manual and fiddly. There's probably a better way to do this, but it didn't take too long. You can't shoot through explosions, which makes the game a little more challenging. And you can blow up your own cities if you shoot defense warheads too close to them. It would have been easy enough to fix, but I left it in there. What went wrong? I spent a ton of time getting the pixel perfect stuff right. Playtesting in the editor, it was set to Maximize on Play. There wasn't quite enough room to see the full screen, so the scale was at 0.97, making the display 97% of what it should be and thus blurring all my sharp edges. I didn't see that setting though... >.< I pulled my hair out trying to see the problem in my math, and even downloaded a free pixel perfect camera which was STILL showing blurry stuff. Finally, I built the game and ran it outside the editor and saw that things were just fine. There's also an import setting on sprites for Compression that I set to None; I'm not sure if this second step was necessary. I've been bitten before by the editor not being 100% accurate to final gameplay and wish I had tried that sooner. I had trouble with the scoring screen. I wanted to put the missiles up on the score line like Missile Command does, but ran into trouble with game-sized assets and canvas-sized UI elements. After 45 minutes or so I said screw it, and just put up a count of the missiles and cities. I didn't data-drive the game as much as I wanted. The data is in the code itself, so users can't mod the game without downloading the source code.
I'm also only using one set of colors for the level. If I put any more time into this, I'll probably tackle this issue next, and then worry about the features I missed out on. Final thoughts Working on this project makes me appreciate how awesome the programmers of old really were. With the tools I have (Unity, Visual Studio, etc.) it took me a couple of weekends. And even then I didn't recreate all the features of the original. I'm including a link to the zipped up project in case anyone wants to see the source code to play around with it. Hopefully someone finds it useful. If so, let me know. EcksMissileCommand.zip - Executable if you want to play. EcksMissileCommand_Source.zip - The project if you want to mess around with it. Sound Credits freesound.org https://freesound.org/people/sharesynth/sounds/344506/ - Explosion.wav https://freesound.org/people/sharesynth/sounds/344525/ - LevelStartAlarm.wav LevelStartAlarm_Edit.wav - I used Audacity to edit the above sound. I took the first six beeps then faded them to silence. https://freesound.org/people/Robinhood76/sounds/273332/ - MissileLaunch.wav https://freesound.org/people/sharesynth/sounds/341250/ - ScoreCount.wav
  24. 4 points
You started too big with your project (I have seen your posts on the forum). You should have made a small game first; you would have gotten to the reward faster. Then you should have made a small team project, 2-3 members, and only after that should you have dived into your large project and assembling a large team. It takes around 12 years to reach the point where you can make your "The Game", and even then only the possible parts. (I still can't make that talking AI I wanted.) Making games is hard, it's expensive, the people you make them for will hate you for it, and others will tell you it isn't a real job. Find some reason to make games; without it you will be crushed. I don't believe that a hobby team of around 12 people with random skills and around a $500,000 budget could match 250-500 dedicated professionals with a $40,000,000 budget. Even big indie developers spend millions on their games. For indie developers the goal should be something like Minecraft or Five Nights at Freddy's: special games that aren't too expensive to make and break the mold.
  25. 4 points
This question is pretty vague. I'll just describe how I generally do the main game loop, but this is just one of many ways that it can work. I have an object Game... this is the object that contains the main game loop. It also contains a member called ActiveGameScene which implements the interface IGameScene. The IGameScene has three primary responsibilities: respond to a request, update the game state, or render the game state. The IGameScene contains a list of IGameSystems. Game systems are very tightly focused "functionality packets", so one system may be my input system and another may be my rendering system, and there may be many more... lighting systems, AI systems, etc. Each system has the methods ProcessRequest, Update, and Render. So, the Game contains a Scene which contains a set of Systems, and the main game loop is:

//Pseudo code
//In Game
while (running)
{
    while (requestsPending)
        Scene.ProcessRequest(request);
    Scene.Update();
    Scene.Render();
}

//In GameScene
ProcessRequest(request) { foreach (ISystem sys in Systems) sys.ProcessRequest(request); }
Update() { foreach (ISystem sys in Systems) sys.Update(); }
Render() { foreach (ISystem sys in Systems) sys.Render(); }

Then, it's in the specific systems where the interesting things happen. Maybe there is an AI system...

//In AI System
ProcessRequest(request) { }
Update()
{
    foreach (AIUnit unit in AIUnits)
    {
        if (unit.CurrentAction == null)
        {
            unit.CurrentAction = unit.ChooseNextAction();
        }
        switch (unit.CurrentAction)
        {
            case Idle: break;
            case Attack: unit.Attack(); break;
            case Move: unit.Position += (unit.Dest - unit.Position).Normalize(); break;
        }
    }
}
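For what it's worth, the Game -> Scene -> Systems dispatch described above can be sketched as a small runnable program; this is an illustrative Python translation with a hypothetical logging system, not the poster's actual code.

```python
# Minimal sketch of a Game that owns a Scene, which forwards each phase
# (requests, update, render) to a list of focused Systems.

class System:
    """Base 'functionality packet': override the phases you care about."""
    def process_request(self, request): pass
    def update(self): pass
    def render(self): pass

class LogSystem(System):
    """A toy system that records which phases ran, in order."""
    def __init__(self): self.events = []
    def process_request(self, request): self.events.append(("request", request))
    def update(self): self.events.append(("update",))
    def render(self): self.events.append(("render",))

class Scene:
    def __init__(self, systems): self.systems = systems
    def process_request(self, request):
        for s in self.systems: s.process_request(request)
    def update(self):
        for s in self.systems: s.update()
    def render(self):
        for s in self.systems: s.render()

class Game:
    def __init__(self, scene):
        self.scene, self.requests = scene, []
    def run_one_frame(self):
        # Drain pending requests, then update and render, as in the loop above.
        while self.requests:
            self.scene.process_request(self.requests.pop(0))
        self.scene.update()
        self.scene.render()

log = LogSystem()
game = Game(Scene([log]))
game.requests.append("jump")
game.run_one_frame()
# log.events is now [("request", "jump"), ("update",), ("render",)]
```

A real loop would of course run `run_one_frame` repeatedly until a quit flag is set, with input and timing feeding the request queue.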
  26. 4 points
Realistically speaking, it's nearly impossible to find people who will be as motivated and driven as you are towards your own project. It does happen, but there's a reason why entry-level positions are typically filled with people who provide crappy service. Think of it like this: the person who pumps gas likely doesn't want to do that forever; they would likely rather own the gas station. Therefore you get someone who is 'just doing a job' and likely isn't providing the best service possible. People who do provide the best service possible likely get promoted, and then someone else fills their spot at the bottom, ensuring that quality service is always hard to find. Ultimately, a really driven person would likely want to rise through the ranks and then start their own business in the field they rose up through. It's the same with any field of work. Finding someone who will give you 100% as an employee/team member is truly a diamond in the rough. If you compare the above example to game development, you can't really expect everyone who might want to work with you to be as driven and dedicated as you are. When you do find someone as driven as you are, it's likely that they will be using their position to gain experience to one day lead their own team. Some people are fine staying in the middle and providing 100%, but it's tough to find those kinds of driven people. The passion for your project has to come from you. As leader of a project you then work to inspire those on your team to work with you and be as passionate about the project as you are; lead by example. Most teams led by two people are married, siblings, or long-time friends. There's a reason for this: it's because they share the same passions and work well together. Finding one person to work in perfect harmony with you is rare; having that with an entire team is nearly unheard of.
As for "wasting your life making a product that is never gonna be as good as the rest of the industry", your life is yours to do with as you please. If you enjoy the creation process, then to me, it's not a waste at all. A sense of purpose for one's life should come from many things, not just work, but social life, hobbies, recreation, etc.; it's a balancing act. That way, if any one thing doesn't work out, it's not such a far fall from being content. Comparing your work or the potential of your work to 'the rest of the industry' is, to me, counterproductive if it's uninspiring to you. Remember that there can only be one 'the best', and only a few at the 'top of the game'. There are billions of people on the planet; the odds are stacked against you. It's also important to remember that what you see as an end user is a polished product, not a prototype. So you're seeing the best that could be brought forward. Each individual on a team has their own respective portfolio of work that also represents their current best, and it's something they update as they get better. What we as end users rarely see is the crappy stuff that doesn't make release, the 'unpolished' work if you will. Think of a painter who has their work on exhibition at a gallery. Everything we see at the gallery is what they believe to be their best work. Now think of their studio: the stuff left behind wasn't good enough to make the cut at the gallery. On top of that, they will have scores of work they didn't like and never completed, not to mention the work from X years before they finished mastering their skill, which they no longer show anyone. Something I always remember is that it'll take you around 10,000 hours in each skill to get good enough to be professional. If you're trying to develop a game, then whatever asset skill you have will take that same 10k hours. If you lead the team, then team leadership is another skill requiring 10k hours. Each skill you use will take you 10k hours.
Some skills will overlap and allow you to 'level up' at the same time; others won't. This is why most people work on one skill at a time, or join teams in order to only be in charge of one skill. They can then learn from each other and master other skills together. If you've ever watched interviews or behind-the-scenes features for some of your favorite productions, be it video games, music, movies, stand-up comedy, etc., one thing you'll notice is that most artists will be unhappy with work they were once happy with. This is a representation of their skills improving as an artist. Clerks by Kevin Smith is one of my favorite movies, and he talks smack about it all the time, because he's evolved as a writer/director. Another good example of the things I've mentioned is some of the costume concepts for the superhero movies over the years. Some of them are super bad and needed to be entirely reworked, but ultimately we as end users got something that looked really good on screen. Without looking behind the scenes, the average user never knows how much work went into making something perfect. Stand-up comedy is the same: we don't often get to see comedians in their early years bombing at 50-person bars; we see their 1-hour specials a decade into their career. Many assume that what we see came out perfect on the first try, when in reality it's far from that easy. In the end, it's important that you enjoy the creation process and are truly dedicated to making your project/skill as perfect as possible within reason. If you don't enjoy it, then why bother? It doesn't matter what others think; art typically shines best when you make it for yourself and not for others.
  27. 4 points
My turn. The source file and exec are in the zip here: https://github.com/wybifu/missile_command/archive/master.zip — plus a video and an album.
  28. 4 points
Seems you need to do your debugging yourself. Did you try to output variables to the screen one after another? Something like: SkyColors[DTID.xy] = float4(X, Y, r, 1); You can make some conclusions based on the colors you see. If this does not help, you can create a debug buffer, store variables there, read it back to the CPU, and print them so you can spot NaNs or infinities. That's annoying, but once set up you can use that debugging for each shader.
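The read-back step suggested above ends with scanning the CPU-side copy of the buffer for bad values; here is a small hedged sketch in Python of that last step (the buffer contents are made up for illustration; in practice they would come from your graphics API's read-back call).

```python
# Once shader values have been read back to the CPU as floats, scan them
# for NaNs and infinities to locate the misbehaving computation.
import math

def find_bad_values(buffer):
    """Return (index, value) pairs for every NaN or infinite entry."""
    return [(i, v) for i, v in enumerate(buffer)
            if math.isnan(v) or math.isinf(v)]

# Illustrative read-back contents: two healthy values, one NaN, one infinity.
readback = [0.25, float("nan"), 1.0, float("inf"), -3.5]
bad = find_bad_values(readback)
# bad contains the entries at indices 1 and 3
```

Mapping the flagged indices back to thread/pixel coordinates (e.g., `index = y * width + x`) then tells you which invocation produced the bad value.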
  29. 4 points
The title is vague, so I'll explain. I was frustrated with UI dev (in general), and even more so when working with an application that needed OpenGL with an embedded UI. What if I wanted to make a full application with OpenGL? (Custom game engine, anyone?) Well, I did want to, and I'm working on it right now. But it started me onto making what I think is a great idea for a styling language. I present KSS (pron. Kiss), the multitudes-more-programmable version of CSS... (only for desktop dev).

/* It has all the 'normal' styling stuff, like you'd expect. */
elementName {
    /* This is a styling field. */
    font-color: rgb(0,0,0),
    font: "Calibri",
    font-size: 12px
}
.idName {
    color: rgba(255,255,255,0)
}
uiName : onMouse1Click {
    color: vec3(0,0,0)
}

BUT it also has some cool things. I've taken the liberty of adding variables, templates (style inheritance), hierarchy selection, events (as objects), native function calls, and in-file functions.

var defaultColor: rgb(0,0,0)
var number : 1.0
/* Types include: rgb, rgba, vec2, vec3, vec4, number, string, true, false, none (null), this */
fun styleSomeStuff {
    .buttons {
        color: defaultColor,
        text-color: rgb(number,255,number)
    }
}
template buttonStyle {
    color: rgb(255,255,0)
}
.buttons {
    use: buttonStyle, otherTemplateName /* copies the templates' styling fields */
    color: defaultColor;
}
.buttons : onMouse1Click {
    /* events not assigned as a value are initialized when read from file */
    styleSomeStuff(); /* call the in-file function */
    nativeFunctionCall(); /* call a native function that's bound */
    var a : 2; /* assign a variable, even if not initially defined */
}
/* Storing an event in a 'value' allows you to operate on the event itself. It is only run when 'connected', below. */
val ON_CLICK2 = .buttons : onMouse2Click {
    use: templateName
}
connect ON_CLICK2;
disconnect ON_CLICK2;
/* You can make a function to do the same. */
fun connectStuff {
    connect ON_CLICK2;
}

But wait, you ask... what if I need to select items from a hierarchy?
Surely using element names and ids couldn't be robust enough! Well: /*We use the > to indicate that the next element is a 'child' of itemA in the hierarchy*/ itemA > itemAChild : onMouse1Click{ } .itemId > .itemAChild > itemAChildsChild{ } /*want to get all children of the element at hand?*/ elementName > .any{ /*this will style all the element's children*/ } /*What if we want to use conditional styling? Like if a variable or tag inside the element is or isn't something?*/ var hello : false; var goodbye : true; itemA [var hello = false, var goodbye != false] { /*passes*/ } itemA [@tagName = something]{ /*passes if the tag is equal to whatever value you asked for.*/ } The last things to note are event pointers, how tagging works, 'this', and the general workflow. Tagging works (basically) the same as in Unity: you say @tagName : tagValue inside a styling field to assign that element a tag that's referable in the application. You have the ability to refer to any variable you assign within the styling sheet from, say, source code (the backend). As such, being able to set a variable to 'this' (the element being styled) allows you to operate on buttons that are currently in focus, or set the parent of an item to another element. All elements are available to use as variables inside and outside the styling sheet, so an event can effectively parent an element or group of elements to a UI you specify. I'll show that in a figure below. Event pointers are so that you can trigger an event with one UI component but affect another: /*We use the -> to point to a new element, or group of elements, to style.*/ .buttons : onMouse1Click -> .buttons > childName{ visible : false parent: uiName; } /* In this case, we style something with the name "childName" that's parented to anything with the id 'buttons'. Likewise, if there were an element with the name 'uiName', it would parent the childName element to it. */ Lastly: the results/workflow. 
I'm in the process of building my first (serious) game engine after learning OpenGL, and I'm building it with Kotlin and Python. I can bind both Kotlin and Python functions for use inside the styling sheet, and although I didn't show the layout language (because it's honestly not good), this is the result after a day of work so far, while using these two UI languages I've made (in the attachments). It's also important to note that while it does adopt the CSS box model, it is not a cascading layout. That's all I had to show. Essentially, right now it's a Kotlin-backed creation, but it can be made for Java specifically, C++, C#, etc. I'm planning on adding more to the language to make it even more robust. What's more, it doesn't need to be OpenGL-backed; it can use Java paint classes, SFML, etc. I just need to standardize the API for it. I guess what I'm wondering is: if I put it out there, would anyone want to use it for desktop application dev? P.S. Of course the syntax is subject to change, via suggestion. P.S.[2] The image in the middle of the editor is static, but won't be when I put 3D scenes in the scene view.
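For anyone curious how hierarchy selection like itemA > itemAChild could be matched at runtime, here is a toy C++ sketch of the idea (this is not the actual KSS implementation; all names and types are hypothetical):

```cpp
#include <string>
#include <vector>

// Hypothetical element node: a name plus a pointer to its parent in the UI tree.
struct Element
{
    std::string name;
    const Element* parent = nullptr;
};

// Match a selector chain like {"itemA", "itemAChild"} (i.e. itemA > itemAChild)
// against an element by walking up its parent chain, innermost selector first.
bool MatchesSelector(const Element& e, const std::vector<std::string>& chain)
{
    const Element* cur = &e;
    for (auto it = chain.rbegin(); it != chain.rend(); ++it)
    {
        if (cur == nullptr || cur->name != *it)
            return false;
        cur = cur->parent;
    }
    return true;
}
```

A real implementation would also handle id selectors, .any, and conditional blocks, but the parent-chain walk is the core of the > combinator.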
  30. 4 points
    This article uses material originally posted on the Diligent Graphics web site. Introduction Graphics APIs have come a long way from a small set of basic commands allowing limited control of the configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered the industry standard. The new APIs can provide substantial performance and functional improvements, but may not be supported by older hardware. An application targeting a wide range of platforms needs to support Direct3D11 and OpenGL. The new APIs will not give any advantage when used with old paradigms. It is entirely possible to add Direct3D12 support to an existing renderer by implementing the Direct3D11 interface through Direct3D12, but this will give zero benefit. Instead, new approaches and rendering architectures that leverage the flexibility provided by the next-generation APIs are expected to be developed. There are at least four APIs (Direct3D11, Direct3D12, OpenGL/GLES, Vulkan, plus Apple's Metal for iOS and macOS platforms) that a cross-platform 3D application may need to support. Writing separate code paths for all APIs is clearly not an option for any real-world application, and the need for a cross-platform graphics abstraction layer is evident. The following is the list of requirements that I believe such a layer needs to satisfy: Lightweight abstractions: the API should be as close to the underlying native APIs as possible to allow an application to leverage all available low-level functionality. In many cases this requirement is difficult to achieve because specific features exposed by different APIs may vary considerably. Low performance overhead: the abstraction layer needs to be efficient from a performance point of view. 
If it introduces a considerable amount of overhead, there is no point in using it. Convenience: the API needs to be convenient to use. It needs to assist developers in achieving their goals, not limit their control of the graphics hardware. Multithreading: the ability to efficiently parallelize work is at the core of Direct3D12 and Vulkan and one of the main selling points of the new APIs. Support for multithreading in a cross-platform layer is a must. Extensibility: no matter how well the API is designed, it still introduces some level of abstraction. In some cases the most efficient way to implement certain functionality is to use the native API directly. The abstraction layer needs to provide seamless interoperability with the underlying native APIs to give the app a way to add features that may be missing. Diligent Engine is designed to solve these problems. Its main goal is to take advantage of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. Full source code is available for download at GitHub and is free to use. Overview The Diligent Engine API takes some features from Direct3D11 and Direct3D12 as well as introducing new concepts to hide certain platform-specific details and make the system easy to use. It contains the following main components: Render device (IRenderDevice interface) is responsible for creating all other objects (textures, buffers, shaders, pipeline states, etc.). Device context (IDeviceContext interface) is the main interface for recording rendering commands. 
Similar to Direct3D11, there are an immediate context and deferred contexts (which in the Direct3D11 implementation map directly to the corresponding context types). The immediate context combines command queue and command list recording functionality. It records commands and submits the command list for execution when it contains a sufficient number of commands. Deferred contexts are designed to only record command lists that can be submitted for execution through the immediate context. An alternative way to design the API would be to expose command queues and command lists directly. This approach, however, does not map well to Direct3D11 and OpenGL. Besides, some functionality (such as dynamic descriptor allocation) can be implemented much more efficiently when it is known that a command list is recorded by a certain deferred context from a single thread. The approach taken in the engine does not limit scalability, as the application is expected to create one deferred context per thread, and internally every deferred context records a command list in a lock-free fashion. At the same time, this approach maps well to older APIs. In the current implementation, only one immediate context that uses the default graphics command queue is created. To support multiple GPUs or multiple command queue types (compute, copy, etc.), it is natural to have one immediate context per queue. Cross-context synchronization utilities will be necessary. Swap Chain (ISwapChain interface). The swap chain interface represents a chain of back buffers and is responsible for showing the final rendered image on the screen. The render device, device contexts and swap chain are created during engine initialization. Resources (ITexture and IBuffer interfaces). There are two types of resources - textures and buffers. There are many different texture types (2D textures, 3D textures, texture arrays, cube maps, etc.) that can all be represented by the ITexture interface. Resource Views (ITextureView and IBufferView interfaces). 
While textures and buffers are mere data containers, texture views and buffer views describe how the data should be interpreted. For instance, a 2D texture can be used as a render target for rendering commands or as a shader resource. Pipeline State (IPipelineState interface). The GPU pipeline contains many configurable stages (depth-stencil, rasterizer and blend states, different shader stages, etc.). Direct3D11 uses coarse-grain objects to set all stage parameters at once (for instance, a rasterizer object encompasses all rasterizer attributes), while OpenGL contains a myriad of functions to fine-grain control every individual attribute of every stage. Neither method maps very well to modern graphics hardware, which combines all states into one monolithic state under the hood. Direct3D12 directly exposes the pipeline state object in the API, and Diligent Engine uses the same approach. Shader Resource Binding (IShaderResourceBinding interface). Shaders are programs that run on the GPU. Shaders may access various resources (textures and buffers), and setting up the correspondence between shader variables and actual resources is called resource binding. The resource binding implementation varies considerably between different APIs. Diligent Engine introduces a new object called shader resource binding that encompasses all resources needed by all shaders in a certain pipeline state. API Basics Creating Resources Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. Graphics APIs usually have a native object that represents a linear buffer. Diligent Engine uses the IBuffer interface as an abstraction for a native buffer. 
To create a buffer, one needs to populate the BufferDesc structure and call the IRenderDevice::CreateBuffer() method as in the following example: BufferDesc BuffDesc; BuffDesc.Name = "Uniform buffer"; BuffDesc.BindFlags = BIND_UNIFORM_BUFFER; BuffDesc.Usage = USAGE_DYNAMIC; BuffDesc.uiSizeInBytes = sizeof(ShaderConstants); BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE; m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer ); While there is usually just one buffer object, different APIs use very different approaches to represent textures. For instance, in Direct3D11, there are ID3D11Texture1D, ID3D11Texture2D, and ID3D11Texture3D objects. In OpenGL, there is an individual object for every texture dimension (1D, 2D, 3D, Cube), which may be a texture array, and which may also be multisampled (i.e. GL_TEXTURE_2D_MULTISAMPLE_ARRAY). As a result, there are nine different GL texture types that Diligent Engine may create under the hood. In Direct3D12, there is only one resource interface. Diligent Engine hides all these details in the ITexture interface. There is only one IRenderDevice::CreateTexture() method that is capable of creating all texture types. Dimension, format, array size and all other parameters are specified by the members of the TextureDesc structure: TextureDesc TexDesc; TexDesc.Name = "My texture 2D"; TexDesc.Type = TEXTURE_TYPE_2D; TexDesc.Width = 1024; TexDesc.Height = 1024; TexDesc.Format = TEX_FORMAT_RGBA8_UNORM; TexDesc.Usage = USAGE_DEFAULT; TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS; m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex ); If the native API supports multithreaded resource creation, textures and buffers can be created by multiple threads simultaneously. Interoperability with the native API provides access to the native buffer/texture objects and also allows creating Diligent Engine objects from native handles. 
This allows applications to seamlessly integrate native API-specific code with Diligent Engine. Next-generation APIs allow fine-level control over how resources are allocated. Diligent Engine does not currently expose this functionality, but it can be added by implementing an IResourceAllocator interface that encapsulates the specifics of resource allocation and providing this interface to the CreateBuffer() or CreateTexture() methods. If null is provided, the default allocator should be used. Initializing the Pipeline State As mentioned earlier, Diligent Engine follows the next-gen APIs to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.). This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs as it eliminates pipeline misconfiguration errors. With many individual calls tweaking various GPU pipeline settings, it is very easy to forget to set one of the states or to assume that a stage is already properly configured when in fact it is not. Using a pipeline state object helps avoid these problems, as all stages are configured at once. Creating Shaders While in earlier APIs shaders were bound separately, in the next-generation APIs, as well as in Diligent Engine, shaders are part of the pipeline state object. The biggest challenge when authoring shaders is that Direct3D and OpenGL/Vulkan use different shader languages (while Apple uses yet another language in its Metal API). Maintaining two versions of every shader is not an option for real applications, and Diligent Engine implements a shader source code converter that allows shaders authored in HLSL to be translated to GLSL. To create a shader, one needs to populate the ShaderCreationAttribs structure. 
The SourceLanguage member of this structure tells the system which language the shader is authored in: SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source language matches the underlying graphics API: HLSL for Direct3D11/Direct3D12 mode, and GLSL for OpenGL and OpenGLES modes. SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter, so this value should only be used for OpenGL and OpenGLES modes. There are two ways to provide the shader source code. The first way is to use the Source member. The second way is to provide a file path in the FilePath member. Since the engine is entirely decoupled from the platform and the host file system is platform-dependent, the structure exposes the pShaderSourceStreamFactory member that is intended to give the engine access to the file system. If FilePath is provided, a shader source factory must also be provided. If the shader source contains any #include directives, the source stream factory will also be used to load these files. The engine provides a default implementation for every supported platform that should be sufficient in most cases. A custom implementation can be provided when needed. When sampling a texture in a shader, the texture sampler was traditionally specified as a separate object that was bound to the pipeline at run time or set as part of the texture object itself. However, in most cases it is known beforehand what kind of sampler will be used in the shader. Next-generation APIs expose a new type of sampler called a static sampler that can be initialized directly in the pipeline state. Diligent Engine exposes this functionality: when creating a shader, textures can be assigned static samplers. If a static sampler is assigned, it will always be used instead of the one initialized in the texture shader resource view. 
To initialize static samplers, prepare an array of StaticSamplerDesc structures and initialize the StaticSamplers and NumStaticSamplers members. Static samplers are more efficient, and it is highly recommended to use them whenever possible. On older APIs, static samplers are emulated via generic sampler objects. The following is an example of shader initialization: ShaderCreationAttribs Attrs; Attrs.Desc.Name = "MyPixelShader"; Attrs.FilePath = "MyShaderFile.fx"; Attrs.SearchDirectories = "shaders;shaders\\inc;"; Attrs.EntryPoint = "MyPixelShader"; Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL; Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL; BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories); Attrs.pShaderSourceStreamFactory = &BasicSSSFactory; ShaderVariableDesc ShaderVars[] = { {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC}, {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE}, {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC} }; Attrs.Desc.VariableDesc = ShaderVars; Attrs.Desc.NumVariables = _countof(ShaderVars); Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC; StaticSamplerDesc StaticSampler; StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR; StaticSampler.TextureName = "g_MutableTexture"; Attrs.Desc.NumStaticSamplers = 1; Attrs.Desc.StaticSamplers = &StaticSampler; ShaderMacroHelper Macros; Macros.AddShaderMacro("USE_SHADOWS", 1); Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4); Macros.Finalize(); Attrs.Macros = Macros; RefCntAutoPtr<IShader> pShader; m_pDevice->CreateShader( Attrs, &pShader ); Creating the Pipeline State Object After all required shaders are created, the rest of the fields of the PipelineStateDesc structure provide the depth-stencil, rasterizer and blend state descriptions, the number and format of render targets, the input layout format, etc. 
For instance, the rasterizer state can be described as follows: PipelineStateDesc PSODesc; RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc; RasterizerDesc.FillMode = FILL_MODE_SOLID; RasterizerDesc.CullMode = CULL_MODE_NONE; RasterizerDesc.FrontCounterClockwise = True; RasterizerDesc.ScissorEnable = True; RasterizerDesc.AntialiasedLineEnable = False; Depth-stencil and blend states are defined in a similar fashion. Another important thing that the pipeline state object encompasses is the input layout description, which defines how inputs to the vertex shader, the very first shader stage, should be read from memory. The input layout may define several vertex streams that contain values of different formats and sizes: // Define input layout InputLayoutDesc &Layout = PSODesc.GraphicsPipeline.InputLayout; LayoutElement TextLayoutElems[] = { LayoutElement( 0, 0, 3, VT_FLOAT32, False ), LayoutElement( 1, 0, 4, VT_UINT8, True ), LayoutElement( 2, 0, 2, VT_FLOAT32, False ), }; Layout.LayoutElements = TextLayoutElems; Layout.NumElements = _countof( TextLayoutElems ); Finally, the pipeline state defines the primitive topology type. When all required members are initialized, a pipeline state object can be created by the IRenderDevice::CreatePipelineState() method: // Define shader and primitive topology PSODesc.GraphicsPipeline.PrimitiveTopologyType = PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; PSODesc.GraphicsPipeline.pVS = pVertexShader; PSODesc.GraphicsPipeline.pPS = pPixelShader; PSODesc.Name = "My pipeline state"; m_pDev->CreatePipelineState(PSODesc, &m_pPSO); When a PSO object is bound to the pipeline, the engine invokes all API-specific commands to set all states specified by the object. In the case of Direct3D12, this maps directly to setting the D3D12 PSO object. In the case of Direct3D11, this involves setting individual state objects (such as rasterizer and blend states), shaders, input layout, etc. 
In the case of OpenGL, this requires a number of fine-grain state tweaking calls. Diligent Engine keeps track of the currently bound states and only calls functions to update the states that have actually changed. Binding Shader Resources Direct3D11 and OpenGL utilize fine-grain resource binding models, where an application binds individual buffers and textures to certain shader or program resource binding slots. Direct3D12 uses a very different approach, where resource descriptors are grouped into tables, and an application can bind all resources in a table at once by setting the table in the command list. The resource binding model in Diligent Engine is designed to leverage this new method. It introduces a new object called shader resource binding that encapsulates all resource bindings required for all shaders in a certain pipeline state. It also introduces a classification of shader variables based on the frequency of expected change that helps the engine group them into tables under the hood: Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attribute or global light attribute constant buffers. Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change with per-material frequency. Examples may include diffuse textures, normal maps, etc. Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly. The shader variable type must be specified during shader creation by populating an array of ShaderVariableDesc structures and initializing the ShaderCreationAttribs::Desc::VariableDesc and ShaderCreationAttribs::Desc::NumVariables members (see the example of shader creation above). Static variables cannot be changed once a resource is bound to the variable. They are bound directly to the shader object. 
For instance, a shadow map texture is not expected to change after it is created, so it can be bound directly to the shader: PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV ); Mutable and dynamic variables are bound via a new Shader Resource Binding object (SRB) that is created by the pipeline state (IPipelineState::CreateShaderResourceBinding()): m_pPSO->CreateShaderResourceBinding(&m_pSRB); Note that an SRB is only compatible with the pipeline state it was created from. The SRB object inherits all static bindings from the shaders in the pipeline, but is not allowed to change them. Mutable resources can only be set once for every instance of a shader resource binding. Such resources are intended to define specific material properties. For instance, a diffuse texture for a specific material is not expected to change once the material is defined and can be set right after the SRB object has been created: m_pSRB->GetVariable(SHADER_TYPE_PIXEL, "tex2DDiffuse")->Set(pDiffuseTexSRV); In some cases it is necessary to bind a new resource to a variable every time a draw command is invoked. Such variables should be labeled as dynamic, which will allow setting them multiple times through the same SRB object: m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB); Under the hood, the engine pre-allocates descriptor tables for static and mutable resources when an SRB object is created. Space for dynamic resources is dynamically allocated at run time. Static and mutable resources are thus more efficient and should be used whenever possible. As you can see, Diligent Engine does not expose the low-level details of how resources are bound to shader variables. One reason for this is that these details are very different for the various APIs. 
The other reason is that using low-level binding methods is extremely error-prone: it is very easy to forget to bind some resource, or to bind an incorrect resource, such as binding a buffer to a variable that is in fact a texture, especially during shader development when everything changes fast. Diligent Engine instead relies on a shader reflection system to automatically query the list of all shader variables. Grouping variables based on the three types mentioned above allows the engine to create an optimized layout and do the heavy lifting of matching resources to the API-specific resource location, register or descriptor in the table. This post gives more details about the resource binding model in Diligent Engine. Setting the Pipeline State and Committing Shader Resources Before any draw or compute command can be invoked, the pipeline state needs to be bound to the context: m_pContext->SetPipelineState(m_pPSO); Under the hood, the engine sets the internal PSO object in the command list or calls all the required native API functions to properly configure all pipeline stages. The next step is to bind all required shader resources to the GPU pipeline, which is accomplished by the IDeviceContext::CommitShaderResources() method: m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES); The method takes a pointer to the shader resource binding object and makes all resources the object holds available to the shaders. In the case of D3D12, this only requires setting the appropriate descriptor tables in the command list. For older APIs, this typically requires setting all resources individually. Next-generation APIs require the application to track the state of every resource and explicitly inform the system about all state transitions. For instance, if a texture was used as a render target before, while the next draw command is going to use it as a shader resource, a transition barrier needs to be executed. Diligent Engine does the heavy lifting of state tracking. 
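The state-tracking idea can be illustrated with a self-contained toy sketch (this is not the actual Diligent Engine implementation; the names and the set of states are simplified assumptions):

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Simplified resource usage states, loosely modeled on D3D12-style states.
enum class ResourceState { Common, RenderTarget, ShaderResource };

// Toy state tracker: remembers the last known state of each resource and
// records a transition barrier only when the requested state differs.
class StateTracker
{
public:
    // Returns true if a barrier was actually recorded.
    bool Transition(const std::string& resource, ResourceState newState)
    {
        auto it = m_States.find(resource);
        if (it != m_States.end() && it->second == newState)
            return false; // state is already correct - no barrier needed
        m_States[resource] = newState;
        m_Barriers.push_back(resource);
        return true;
    }

    std::size_t BarrierCount() const { return m_Barriers.size(); }

private:
    std::unordered_map<std::string, ResourceState> m_States;
    std::vector<std::string> m_Barriers; // recorded barriers, for inspection
};
```

Tracking the last known state is what lets the engine skip redundant barriers, at the cost of the lookup overhead mentioned below.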
When the CommitShaderResources() method is called with the COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES flag, the engine commits and transitions resources to the correct states at the same time. Note that transitioning resources does introduce some overhead. The engine tracks the state of every resource and will not issue a barrier if the state is already correct. But checking the resource state is an overhead that can sometimes be avoided. The engine provides the IDeviceContext::TransitionShaderResources() method that only transitions resources: m_pContext->TransitionShaderResources(m_pPSO, m_pSRB); In some scenarios it is more efficient to transition resources once and then only commit them. Invoking Draw Command The final step is to set the states that are not part of the PSO, such as render targets and vertex and index buffers. Diligent Engine uses a Direct3D11-style API that is translated to other native API calls under the hood: ITextureView *pRTVs[] = {m_pRTV}; m_pContext->SetRenderTargets(_countof( pRTVs ), pRTVs, m_pDSV); // Clear render target and depth buffer const float zero[4] = {0, 0, 0, 0}; m_pContext->ClearRenderTarget(nullptr, zero); m_pContext->ClearDepthStencil(nullptr, CLEAR_DEPTH_FLAG, 1.f); // Set vertex and index buffers IBuffer *buffer[] = {m_pVertexBuffer}; Uint32 offsets[] = {0}; Uint32 strides[] = {sizeof(MyVertex)}; m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET); m_pContext->SetIndexBuffer(m_pIndexBuffer, 0); Different native APIs use various sets of functions to execute draw commands depending on the command details (whether the command is indexed, instanced or both, what offsets in the source buffers are used, etc.). For instance, there are 5 draw commands in Direct3D11 and more than 9 commands in OpenGL, with something like glDrawElementsInstancedBaseVertexBaseInstance not uncommon. Diligent Engine hides all the details with a single IDeviceContext::Draw() method that takes a DrawAttribs structure as an argument. 
The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example: DrawAttribs attrs; attrs.IsIndexed = true; attrs.IndexType = VT_UINT16; attrs.NumIndices = 36; attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST; pContext->Draw(attrs); For compute commands, there is the IDeviceContext::DispatchCompute() method that takes a DispatchComputeAttribs structure defining the compute grid dimensions. Source Code Full engine source code is available on GitHub and is free to use. The repository contains sample applications, an asteroids performance benchmark and an example Unity project that uses Diligent Engine in a native plugin. The AntTweakBar sample is Diligent Engine's "Hello World" example. The atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to multiple render targets, using compute shaders and unordered access views, etc. The asteroids performance benchmark is based on this demo developed by Intel. It renders 50,000 unique textured asteroids and allows comparing the performance of the Direct3D11 and Direct3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures. Finally, there is an example project that shows how Diligent Engine can be integrated with Unity. Future Work The engine is under active development. It currently supports Windows desktop, Universal Windows and Android platforms. The Direct3D11, Direct3D12 and OpenGL/GLES backends are now feature complete. The Vulkan backend is coming next, and support for more platforms is planned.
  31. 3 points
    Yeah, it's pretty fun™. Long story short: a conservative government was elected in 2013, who immediately passed an "austerity" budget (and ate an onion), which was the style at the time. Part of that budget involved scrapping a $10M federal fund that invested in games industry projects and actually banning the federal film funding agency from ever investing in gamedev whatsoever... despite the fact that this project was actually turning a profit, making money for the taxpayer, creating jobs for the local economy, and helping the industry rebuild after the GFC had destroyed 1500 gamedev jobs a few years prior. Fun side note - it wasn't actually an austerity budget because despite cutting billions of dollars from public projects, they then spent all of their savings and more on new policy, like making sure every school has a chaplain. Predictably, the industry wrote a strongly worded letter to the government. A minority party set up a senate inquiry to formally investigate this 'WTF' moment. That investigation recommended putting the $10M back in place and lifting the ban, because money and jobs and growth... and then, predictably, the govt ignored it. But they're obligated to publish some kind of response, hence the deadline mentioned in that article, which, predictably, has again been ignored. None of us are really holding our breath. The conservatives love to repeat "jobs and growth!" and "infrastructure!" and "innovation!", but we all know it's hot air. Over the same period they decided to cancel our national gigabit fiber-optic internet infrastructure project and instead opted to spend the same amount of money just buying back the past-shelf-life, decrepit, failing national copper network that they privatized in the 90's, and then pretend that 20 Mbps DSL is all the speed anyone will ever need. Those are the forward-thinking, innovative souls that we're dealing with here. 
Thankfully, the state govts are not quite as insane, and a few of them have really stepped up to help the industry rebuild, which completely offsets the attempts of the federal govt to sabotage us.
  32. 3 points
    Hi all, I'm an indie app and game developer in my spare time, mainly focused on iOS for now. I recently published a few simple 2D iOS games on the Apple App Store, which can be found on my website http://techchee.com . I would love to learn from peers how to promote and market mobile games, and how to develop more sophisticated ones 😄
  33. 3 points
    float4x2 and float2x4 are every bit as much a 'matrix' as float4x4 for packing purposes. /Zpr (row-major packing) affects float2x4/float4x2 layout: each row (row-major) or column (column-major) occupies a full 16-byte register, so the same array can take 2048 bytes or 4096 bytes depending on whether that flag is set. This shader, when compiled with /Zpr, has a 2048-byte constant buffer and reads float4's:

```hlsl
cbuffer B
{
    float2x4 stuff[64];
}

float4 main(uint i : I) : SV_TARGET
{
    return stuff[i][0] + stuff[i][1];
}
```
  34. 3 points
    I think this is key. If you're going for telling a good story, the rule is "show don't tell" -- which means in a visual medium like a video game, the explicit bits take place offscreen and you just see the reactions and results. A little bit of titillation never hurts and can set the scene to advance the story, but a graphic depiction of adult acts is just porn and ruins the story (just as any other tell instead of show would). On the other hand, I always add unpixelate patches to The Sims because the not-telling there is overboard. People are all naked under their clothes, adding pixellation just titillates where it's unnecessary to advance the story. Remember the other golden rule: always leave them wanting more. If people achieve the (cognitive) satisfaction of a visceral reward in the middle of a game, why continue with the playthrough? Tease them. Make them want the ending, but hold off on it for as long as you can. Make them say your name.
  35. 3 points
    None of this makes any sense.
  36. 3 points
    Same problem, honestly. The issues are whether it's a) recognizable (will you pop up on the original IP owner's radar); and b) likely to cause consumer confusion (if you actually want to fight it when they send you the C&D). If it's highly recognizable, you're more likely to get the threat of legal action. The more invisible your product, generally speaking (this is by no means absolute), the less likely you are to face legal action. And the more successful your product, the higher the likelihood of legal action if any of your IP touches a deep-pocketed company's IP.
  37. 3 points
    Your game loop is most of the game. You can say that everything that happens in a game after it starts happens in the game loop. It isn't strictly true that everything is inside the loop, but most of the game is. Pseudocode: As can be seen, my code structure and @Eightvo's are different, but neither is wrong; there are many ways to write game loops. The main purpose of a game loop is to advance time, so it is mostly used for real-time content.
  38. 3 points
    In UE4 we use 2 raytraces: a forward-vector raytrace to see how far the cliff is from us, and a second to see how far we are from the top. To save on memory we only do this for certain objects. We also had to enable overlap events to handle certain edge cases. From there we verify the values and tweak until it looks right. There are tutorials online about it; however, we loosely followed those and kinda winged most of it after the basics.
  39. 3 points
    I call this "The Betty White Effect". In her later years Betty White did a few roles where she played an evil character. The fact that she seemed so over-the-top happy and nice just made her seem even more evil, and she did very well in those roles, which you wouldn't expect her to. The opposite look and feel of what you are going for can sometimes actually enhance the effect, rather than detract from it as you would expect.
  40. 3 points
    This is going to sound really snarky, but one of my biggest problems with game development is tutorials written by people who've only been doing game development for a year or two. They often tend to recommend bad solutions to problems, because although the author has shown sufficient tenacity to overcome common hurdles, they usually do not yet understand the consequences of choosing the approaches that they did, and will recommend their approaches with a degree of overconfidence. I think the best thing someone in your position can do to help people is not to write articles or tutorials, but hang out on forums and answer questions.
  41. 3 points
    There's certainly quite a bit of overlap between them.
    A graphics programmer on an engine team will work with APIs like D3D/GL to implement rendering features, like deferred shading or shadow mapping, as well as general stuff like scene management and generic shaders. They'll also work on tools, such as importers for art files, and have to work with artists as their clients. A graphics programmer on a game team will also work on game-specific special effects, post-processing, and content challenges.
    A tech artist is not as likely to use D3D/GL/etc directly, and won't likely work on engine features such as scene management. They are the glue between the artists and the programmers, though, so anything on that interface is stuff they will work on. That includes shader code, importers, exporters, plug-ins and scripts for art tools, automation of processes such as baking, helping with naming conventions, and making sure that artists actually follow the right conventions. They also should know how to use all the art tools that they're writing plugins/exporters/scripts for (but they don't have to be a good artist - just have technical knowledge of the artists' workflow).
  42. 3 points
    It looks like you're doing inheritance wrong. Done correctly, you don't need to know what the concrete type is. If you find yourself doing a dynamic cast to a derived type, you are likely doing something wrong. The general principles go by the acronym SOLID; I suggest you start reading from that article.
    You should have a well-designed abstract base interface that other code derives from. All the derived versions should operate correctly through that interface, and any object should be replaceable with a similar object and still be correct. This is the Liskov substitution principle, named after Barbara Liskov, who described it. A closely related idiom for designing such base-class interfaces is the Template Method pattern. This is also used in ECS systems and is a major reason they are popular: all components follow a well-defined interface.
    Then you should always work with the abstract type, not the derived classes. This is the dependency inversion principle, and it is used in many systems, including ECS systems, to prevent brittleness. In your example you would need to modify the code any time you added any new functionality; it would quickly grow into a long list of redirections detecting whether it is Derived2, or Derived3, or Derived4, or Derived103. But if you always work with the abstract types, the problem goes away.
    After that, I suggest you read this article. That's an example of the right way to implement virtual functions.
    And after that, recognize that most code in games is not about individual actions but about bulk actions. People often build objects and systems that work on single items, single data structures, single instances, but the code rarely operates on a single thing; it works on bulk items, bulk data structures and arrays, collections of instances. Take whatever reasonable steps you can to convert individual actions to bulk actions, which avoids considerable overhead and can be leveraged for cache benefits.
  43. 3 points
    Linear means 4-corner bilinear interpolation, period, whether magnifying or minifying. The only difference is that, relative to your screen, magnification will hit the same 4 texels for neighboring pixels and produce a smooth interpolation, while under minification each screen pixel will hit texels far away from each other; but each pixel still interpolates between its 4 neighbors. Your only way to get performance without a mess of aliased texels is to generate mipmaps offline. The mipmaps are nothing more than an offline pre-convolution to respect the rule of signal processing (the display needs at least twice the resolution of the signal to prevent aliasing). That process is done offline because, as you can imagine, the GPU does not have the power to integrate over a large area of the texture for each pixel it draws.
  44. 3 points
    Also, just because the tick time is 250-500ms, doesn't mean you should necessarily change your vector that often. There's all sorts of things you can do to make them seem... er... less than capable. Perhaps some tips on randomness in here: http://www.gdcvault.com/play/1014496/Using-Randomness-in-AI-Both
  45. 3 points
    The problem with Asimov's laws is that they set an impossibly high standard. Asimov's first law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm.". Now, the global death rate is around 0.8% of the total population of the Earth per year. Given the current global population of 7.6 billion, that's around 60 million deaths per year, or 170 thousand per day. Just about all of these deaths are, in an abstract sense, preventable, which means that our hypothetical AI operating under Asimov's laws will have to prevent these 170 thousand deaths each day before it can even look at the second law. Some of these deaths will be relatively easy to prevent (1.25 million deaths per year from traffic accidents), some are going to be much harder (old age, freak accidents, deliberate murder and suicide), and some are going to be as good as impossible (mass starvation in an ever-increasing population once all other causes of death are eliminated).
  46. 3 points
    You should be. Each person owns their own individual contribution. In a joint work each person becomes a co-owner of the work, and once merged the contributions are generally considered inseparable. Even if you revert their changes, that person may still be able to demand payments or block various uses. If you cannot get all the co-owners to agree, for example when licensing the product to another group or publishing the game, then the project can be legally tainted. All it takes is one disgruntled person, or one person who seems to have vanished from the earth, and your project enters a bad state. That is exactly why each person becomes a joint owner of the work and why it is usually considered inseparable. Get with a lawyer to help you make a collaboration agreement, contracting agreement, rights assignment, or other contract. Your lawyer can tell you the difference, and you'll need forms for each person on the project. For the person that left, tell them there are no hard feelings, tell them that since the project needs to go on you need to make sure you still have legal rights to use what they contributed, tell them they can still claim credit for whatever they want, and perhaps even give them twenty bucks (which the rights assignment form will call "valuable consideration") in exchange for their signature. Everyone else on the project should sign an agreement as well; they'll probably get collaboration agreements since they are still contributing on the team.
  47. 3 points
    Actually, I usually update my behaviors (of all types) about every 250-400ms. If you set each zombie's next check with a random interval in about that range, the randomness will spread their updates out automatically so they aren't all on the same frame. Also, the variability leads to a more organic feel on a per-zombie basis.
  48. 3 points
    Here goes my entry! Right now it's more or less a straight Missile Command clone. I couldn't help but use the trophy image as the in-game missile. By default, use the mouse and keys 1, 2, 3 to fire the cannons. Protect and survive! Download from http://www.georg-rottensteiner.de/files/GDMissileCommand.zip Edit: Updated archive with source code (only the main game code, not the overall library code, sorry!)
  49. 3 points
    I got pulled in as a contractor to work on a CryEngine game once. The engine source code was so bad, it took me a week to do a task that really should've taken me half a day. The WTF per minute scores were off the charts. I got so frustrated with it, that I quit the project and didn't charge the client for the work that I'd done for them.
  50. 3 points
    @LanceJZ Nice. Windows / Mac : MK_LightningCommand.zip Fairly complete this go around.