Leaderboard


Popular Content

Showing content with the highest reputation since 12/17/17 in all areas

  1. 11 points
    I'm gonna call [citation needed] on that one. We don't really know what consciousness is yet. Not all of us believe in souls or the supernatural, incidentally. From my point of view, dismissing AI on the grounds that it can't possibly have something that we haven't demonstrated to even exist, never mind form a fundamental aspect of consciousness, seems... premature. This looks like an attempt to have a religion thread...
  2. 10 points
    Beginners don't understand how game development works. They think game X hasn't been made just because no one thought to do it; they never think that it wasn't made because it's either unfun or too difficult to program, or both. There are people who do the same with science, assuming that the only reason science doesn't accept something is because no one came up with the idea. It's a form of the Dunning-Kruger effect, really. Beginners also tend to underestimate costs and assume that because their idea is obviously so perfect, people are going to swarm in and volunteer to do all the work for free just for a cut of "the profits", which they imagine must be millions and millions of dollars that just keep on coming. So they don't understand how markets work, either. More Dunning-Kruger here. I'm not innocent; I was one of those idiot beginners.
  3. 9 points
    I've always loved video games. As a child, I spent hours playing on my Atari 2600, on our home PC, or at a friend's house on his Nintendo and Super Nintendo. As time went on, I discovered QBasic and started to learn to make simple programs. It clicked - this is how the creators of the games were doing it! From there, I became very interested in learning about the creation process. I experimented, read articles in magazines, and as the World Wide Web took off I ventured online to learn more. Games got bigger and fancier, and some came in special editions with documentaries about their creation, and I loved all of it. Well-funded teams of hundreds of people were now working to create fantastic games with hours of gameplay and breathtaking visuals.
I read articles and watched "making of" documentaries which described how developers would work long hours, forgoing days off, working late nights, and sometimes even sleeping in the office to meet deadlines. I was so impressed with the effort and dedication they put in. How much must you love a product to give up your nights and weekends, and spend time away from your family to finish it off? This was how great games were made, and I was in awe of the industry and the process. This was what I idolized. This was what I aspired to.
I was wrong. The process I have described above is not necessary, and it is not cool. Developers do not need to sacrifice their free time, their sleep, and their health to create great games. The science is in. Numerous studies have shown that well-rested people are more productive and less prone to mistakes. The stress of this schedule and lack of sleep is profoundly damaging to people's health, causes burnout, and drives talented developers away from our industry. Just think about a cool feature that you loved in a AAA game. If the team went through a period of crunch, the developer who created that feature may have burned out and left the industry - they might not be creating awesome things for gamers anymore.
We should not idolize a process that is harmful to developers. When we hear about crunch, the overwhelming reaction should be that this is not ok. Crunch is not cool, and developers still using this process should work towards a better way. Happier, healthier developers will produce better work, and in the long run, they will produce it more efficiently. Happier, healthier developers are more likely to stay in our industry rather than seeking greener pastures, resulting in more people with extensive experience who can then work on more advanced tasks and ideas, and push the industry forward.
Thankfully, a growing number of developers have moved on or are moving on from this toxic culture of overtime and crunch, and a growing number of people are willing to speak up. Unfortunately, it's far from everyone, and there are many developers still exploiting workers. I'm putting my foot forward to say that I do not support a culture of overtime and crunch. We can do better, and we need to do better. I'm not the first person to share these sentiments, and I'm not an influential person, but I hope I won't be the last, and if I can convince even one more developer to stand up for better treatment of workers, then hopefully I've made some small difference. If you agree with me, next time you hear someone discussing crunch as if it's just a normal part of the process, let them know that there's a better way and that you don't support crunch and overtime. Let them know that crunch isn't cool.
  4. 9 points
    I think a big factor is that video games aren't physical things, but just data. Game development does not - at least in theory(!!) - require any money, supporters or physical resources beyond what most people (at least in rich countries) already own anyway. With infinite time and infinite knowledge, you could really create any sort of game completely on your own. You don't need to buy anything (all required software is available for free), you usually don't need to care about laws, and there is nobody you depend on who could just say "no" for whatever reason. Compare this to constructing a building: even if you had all the knowledge that is required for all aspects of the construction, you still need to buy land to build on, you need to buy building material and machines, many tasks are probably physically impossible to do alone, and you need to follow a ton of regulations to avoid being shut down. Without all these things, it is not so surprising that the effort of game development is drastically underestimated by many people. Two more thoughts: - Since there is no initial risk involved at all, fantasizing about crazily huge unrealistic software development projects is just much less frightening. All you need to invest is some time, and if you fail, you haven't lost anything other than that time. - The less you understand about software development, the less you can imagine how much work it is. Everybody has somewhat of an idea of what it takes to build a house, because you can see people doing it frequently somewhere in your town. But in a non-developer's everyday life, you usually don't see people working on video games, so how could you get a realistic impression of it?
  5. 8 points
    You're doing this: http://highexistence.com/spiritual-bypassing-how-spirituality-sabotaged-my-growth/ http://highexistence.com/10-spiritual-bypassing-things-people-total-bullshit/ Even if our brains are some kind of magical antenna that channels in a magical spirit consciousness from another plane of existence... what's stopping us from building our own mechanical antennae that channel magical spirit consciousness into our AI's?
  6. 6 points
    Yep, here is my contribution to GameDev 2018 Missile Command. The source code is inside the zip: https://github.com/wybifu/missile_command/archive/master.zip github: https://github.com/wybifu/missile_command Windows users: Windows_version.exe Linux users: Linux_version, or just compile it for any other OS. Language: C. Library: SDL2. I feel that I have to explain myself, as the code is awful: I was just writing as I was thinking; whenever a new idea popped up I just hardcoded it and didn't care if it fit the rest of the code, which can produce strange lines like: if (((Agent *)(((Agent *)(a->ptr0))->ptr1)) != NULL) ((Agent *)(((Agent *)(a->ptr0))->ptr1))->visible = BKP_F YEAH!! Absolutely horrible. A small video of the gameplay and another screenshot for good measure:
  7. 6 points
    This is in no way limited to game development - every teenager with an electric guitar thinks they will be the next Jimi Hendrix too. It's easy to underestimate the difficulty of anything, before you have tried and failed a few times.
  8. 6 points
    If that's a problem, give up game development. It's largely thankless, so if you're not doing it for yourself, you shouldn't do it at all.
  9. 6 points
    In some cases it's because they have what I call a "glorious vision" of what their game will be. They are imagining that they will re-create reality, and that in their game things are going to be like reality. It's not until they actually try to make the game that they will realize that, just like everyone else's games, their game will necessarily function in the same simple ways. You can imagine the combat in your game being just like in real life right up to the point that you attempt to actually implement it. That is when reality sets in. This is one of the better reasons for creating a detailed design document, one that works out exactly and specifically how key aspects of the game will actually work, because this is the process that shatters the "glorious vision" and brings you back to the reality of how simple it is actually going to be in the end compared to the wishful thinking of your "glorious vision".
  10. 6 points
    When have web developers *ever* made good decisions about technology choices? Someone decides they want something easy and minimal now, they make it available for the unwashed masses, and it spirals out of control for years. Wash, rinse, repeat.
  11. 5 points
    In my own opinion, this is NOT a bad thing. Being competitive and striving to be the best is a very good thing. Such competitive, over-ambitious, overzealous and talented youngsters only need to be mentored and well directed by an experienced guru in the field. Imagine a very talented sports person (a footballer, ...) playing in a team without a manager. The consequence is that they will be all over the place, without a proper overarching strategy, without good structure, and will be learning from their mistakes the hard way. I was like that... without a mentor, so I followed only my instincts and I paid dearly for that. A lot of wasted years. Probably not fully recovered yet. But it wasn't because of my over-ambition and competitiveness; rather, it was because I wasn't mentored or guided.
  12. 5 points
    Originally posted on Medium. I released my first game approximately a month and a half ago and actually tried almost all of the methods I could find on various websites out there - all of them will be listed here. With this article I want to share the story of my "promotion campaign". The very first thing I did was create a Medium account. I decided to promote the game using thematic articles about game development and related stuff. Well, actually, I still do this, even with this article here :) In addition to Medium, the same articles were posted to my LinkedIn profile, but mostly to strengthen it. Moreover, you may find a separate topic on the Libgdx website (the framework the game is written with). Then the press release was published. Actually, you should do a press release the same day as the game launch, but I didn't know about that back then. And to be honest, all of the methods above were not quite successful in terms of game promotion. So I decided to increase the game's presence around the web and started to post articles on various indie game dev related websites and forums (that's how this blog started). Finally, here comes the list of everything created over the past month (some in Russian, be aware):
https://www.igdb.com/games/totem-spirits
http://www.slidedb.com/games/totem-spirits
https://forums.tigsource.com/index.php?topic=63066.0
https://www.gamedev.net/forums/topic/693334-logical-puzzle-totem-spirits/
http://www.gamedev.ru/projects/forum/?id=231428
https://gamejolt.com/games/totem_spirits/298139
https://vk.com/gameru.indie?w=wall-9222155_202256
https://test4test.io/gameDetails/24
Not so many, one could say. But I could not find any more good services! If you know one, please share it in the comments. What are the results, you may ask? Well, I have to admit that they are terrible. I got a little less than a hundred downloads, and I'm pretty sure that most of them came from relatives and friends. And you can't really count those as genuine downloads, since I literally just asked them to get my game on their smartphones. But the good thing is that many of those who played Totem Spirits shared their impressions about the game. They truly liked the product! It was so pleasant to hear their thoughts. I personally know several people who finished the game with all diamonds (a.k.a. stars) collected. Still, I don't regret the time spent on the game, because I've learnt a great lesson — two years of development is way too much for such a simple and narrow-profile game. It seems that now is not a good time for such complicated puzzlers, or I just failed badly with the promotion. Now the next plan is to develop and launch a game in a maximum of 160 hours (two working months). The coding process has already begun, so hopefully in January you will see the next product of the Pudding Entertainment company!
  13. 4 points
    Missile Command Challenge
To celebrate the end of 2017 and the beginning of 2018 we are making this a 2-month challenge. We also hope this gives late entrants enough time to submit entries. Your challenge is to create a single player Missile Command clone. Create an exact replica or add your own modifications and improvements! Play the original Missile Command on IGN at http://my.ign.com/atari/missile-command.
Game Requirements - The game must have:
- Start screen
- Key to return to the start screen
- Score system
- Graphics representative of and capturing the spirit of a Missile Command clone
- Sound effects
- While single player is the goal, a multiplayer mode is also acceptable
- Gameplay mechanics in the spirit of Missile Command - the game does not need to be an exact clone, but it does need to have Missile Command gameplay
Art Requirements: The game may be in 2D or 3D.
Duration: December 1, 2017 to January 31, 2018.
Submission - Post your entries on this thread:
- Link to the executable (specify the platform)
- Screenshots: Create a GameDev.net Gallery Album for your Project and upload your screenshots there! Post the link in the thread and your screenshots will be automatically linked and embedded into the post. Same with YouTube or Vimeo trailers.
- A small post-mortem in a GameDev.net Blog, with a link posted in this thread, is encouraged, where you can share what went right, what went wrong, or just share a nifty trick
- A source-code link is encouraged for educational purposes
Award: Developers who complete the challenge in the specified time frame will receive the 2018 New Year Challenge: Missile Command medal, which will be visible in their profiles. We are working on a new system on GameDev.net where you can submit your entries. It will be made public before this Challenge is complete. Details on how to submit them will be posted here when it is available.
  14. 4 points
    Once a project has been selected (depending on who holds the purse strings), management will usually select some sort of broad resource plan for the whole project, deciding how many programmers/designers/artists/etc will work on the project, and at which stages in the project lifetime. The project is often divided up into milestones, which act as checkpoints to verify that the project is being delivered as expected and on time. Less formal projects might have a handful of milestones, whereas big publisher-funded projects often have a formal milestone delivery process where the build is explicitly handed over every so often (monthly, or six-weekly, or quarterly, etc) to be assessed. Each milestone typically has a bunch of intended 'deliverables' - they are ideally working features, but can also be assets that may or may not yet be integrated into a feature. And there can be different expectations for the state of a deliverable (e.g. "prototype", "working", "finished", etc). The actual state of those deliverables, relative to what was promised, dictates whether the milestone is 'passed' or 'failed'. In the latter case the team is usually expected to fix up those failed deliverables before the next milestone is handed over. If you keep failing milestones, the publisher may terminate the project, as they lose confidence in your ability to complete it. Scheduling within a milestone is usually done via some sort of agile planning process. With a given set of deliverables in mind, the heads of each department and the project management team ("producers") come up with a list of prioritised tasks and decide who to allocate them to. Those people receive the task descriptions and implement them. The day-to-day work of each person depends entirely on their job and their task. There may be a 'daily standup' meeting or Scrum where people briefly discuss what they're working on and ask for help if they're stuck. Beyond that, they communicate via conversation/meetings/email/Slack to resolve ambiguities and discuss details. The rest of the time, they're probably at their computer, executing their current task by creating code, art, whatever.
  15. 4 points
    Hi everyone! It has been more than two months since I released I Am Overburdened and since I wrote a devlog entry. Please accept my apology for this; I was super busy with the release and support of the game. But now I'm back with an in-depth analysis of how the overall production and final numbers turned out.
Summary
I want to do a fully detailed breakdown of the development and business results, but I don't want to break it up into a typical postmortem format (good, bad, ugly). I've drawn my conclusions, I know what I have to improve for my upcoming projects, but I don't want to dissect the I Am Overburdened story this way; I want to emphasize how much work goes into a game project and focus more on what a journey like this actually looks and feels like. If you really want to know my takeaways, here they are in a super short format: I consider the game a success from a development perspective (good), but I failed from a marketing and sales standpoint (bad and ugly). Now I'll go into the details, but will focus more on the objective "what happened, how it went, what it took" parts.
Development
The game started out as a simple idea with a simple goal in mind. I partially abandoned my previous project because it ballooned into a huge ball of feature creep, so I wanted to finish a more humble concept in a much shorter time period. The original plan was to create a fun game in 4 months. I really liked the more casual and puzzle-y takes on the roguelike genre, like the classic Tower of the Sorcerer, or the more recent Desktop Dungeons and Enchanted Cave games, so I set out to create my own take. I designed the whole game around one core idea: strip out every "unnecessary" RPG element/trope and keep only the items/loot, but try to make it just as deep as many other roguelikes regardless of its simplicity. From this approach the "differentiating factor" was born, a foolishly big inventory, which helped me to define and present what I Am Overburdened really is: a silly roguelike full of crazy artifacts and a "hero" who has 20 inventory slots. Most of the prototyping and alpha phases of the development (first two months) went smoothly, then I had to shift gears heavily…
Reality check
After 3 months of development, when all of the core systems were in place and when I deemed big parts of the content non-placeholder, the time came to show the game to others. I realized something at that point, forcing me to make a huge decision about the project. The game was not fun. The idea was solid, the presentation was kind-of ok, but overall it was simply mediocre, and a month of polishing and extra content could in no way change that! Back then I was super stressed out due to this and I thought of it as my hardest decision as a game maker, but looking back I think I made the right choice (now I feel like I actually only had this one). I decided to postpone release and explore the idea further even if it doubled(!!!) the originally planned development time (and it happened), and most importantly I decided not to make or release "shovelware", because the world really isn't interested in another one and I'm not interested in making/publishing one…
Final scope
So after 4 months of development, feeling a bit glum, but also feeling reinvigorated to really make the most out of I Am Overburdened, I extended the scope of the design & content and I also planned to polish the hell out of the game.
This took another 4 months and almost a dozen private beta showings, but it resulted in a game I'm so proud of that I always speak of it as a worthy addition to the roguelike genre and as a game that proudly stands on its own! Some numbers about the end result: It takes "only" around 30 to 40 minutes to complete the game on normal mode in one sitting, but due to its nature (somewhat puzzle-y, randomized dungeons & monster/loot placements + lots of items, unlocks and multiple game modes), the full content cannot be experienced in one play-through. I suspect it takes around 6 to 12 full runs (depending on skill and luck) to see most of what the game has to offer, so it provides quite a few hours of fun. There are 10 different dungeon sets and they are built from multiple dozens of hand-authored templates, so that no level looks even similar to the others within one session. They are populated by 18 different monsters, each having its own skill and archetype (not just the same enemy re-skinned multiple times). And the pinnacle, the artifacts: the game has more than 120 unique items, all of them having a unique sprite and almost all of them having unique bonuses, skills (not just +attributes, but reactive and passive spells) and sound effects. This makes each try feel really different and item pickup/buy choices feel important and determinative. The game was also localized to Hungarian before release, because that is my native language so I could do a good job with the translation relatively fast, and this also made sure that the game is prepared to be easily localized to multiple languages if demand turns out to be high.
Production numbers
How much code did I have to write and how much content did I have to produce, all in all, to make this game? It is hard to describe the volume/magnitude with exact numbers, because the following charts may mean a totally different thing for a different game or when using different underlying technologies, but a summary of all the asset files and the code lines can still give a vague idea of the work involved. Writing and localization may not sound like a big deal, but the game had close to 5000 words to translate! I know it may be less than a tenth of the dialogue of a big adventure or RPG game, but it is still way larger than the text in any of my projects before… I'll go into the detailed time requirements of the full project too after I've painted the whole picture, because no game is complete without appropriate marketing work, a super stressful release period and post-release support with updates and community management work.
Marketing
If you try to do game development (or anything for that matter) as a business, you try to be smart about it, look up what needs to be done, how it has to be approached etc… I did my homework too, and having published a game on Steam before, I knew I had to invest a lot into marketing to succeed; otherwise simply no one would know about my game. As I said this is the "bad" part and I'll be honest: I think I could have done a much better job, not just based on the results, but based on the hours and effort I put in. But let's take it apart just like the development phase.
Development blog/vlog
I started writing entries about the progress of the game really early on. I hoped to gather a small following who are interested in the game. I read that the effectiveness of these blogs is minimal, so I tried to maximize the results by syncing the posts to at least a dozen online communities.
I also decided to produce a video version because it is preferred over text these days + I could show game-play footage too every now and then. I really enjoyed writing my thoughts down and liked making the videos, so I will continue to do so for future projects, but they never really reached many people despite my efforts to share them here and there…
Social media
I've tried to be active on Twitter during development, posting GIFs, screen-shots and progress reports multiple times a week. Later on I joined other big sites like Facebook and Reddit too to promote the game. In hindsight I should have been more active and should have joined Reddit way earlier. Reddit has a lot of rules and takes a lot more effort than Twitter or Facebook, but even with my small post count it drove 10 times more traffic to my store page than any other social media site. Since the game features some comedy/satire and I produced a hell of a lot of GIFs, I tried less conventional routes too like 9gag, imgur, GIPHY and tumblr, but nothing really caught on.
Wishlist campaign
I prepared a bunch of pictures up-front featuring some items and their humorous texts from the game. I posted one of these images every day starting from when the game could be wishlisted on Steam. I got a lot of love and a lot of hate too, but overall the effectiveness was questionable. It only achieved a few hundred wishlists up until the release day.
Youtube & Twitch
For my previous Steam game I sent out keys on release day to 100 or so Youtubers who had played any kind of co-op game before, resulting in nearly 0 coverage. This time I gathered the contact info of a lot of Youtubers and Twitch streamers upfront. Many were hand collected + I got help from scripts, developer friends and big marketing lists! I categorized them based on the games they play and tried talking to a few of those who played roguelikes way before release to pique their interest. Finally I tried to make a funny press release mail, hoping that they would continue reading after the first glance. I sent out 300 keys the day before release and continued the following weeks, sending out 900 keys total. And the results?! Mixed; could be worse, but it could be much better too. 130 keys were activated and around 40 channels covered the game, many already on release day, and I'm really thankful for these people as their work helped me to reach more players. Why is it mixed then? First, the videos did generate external traffic, but not a huge amount. Second, I failed to capture the interest of big names. I also feel like I could have reached marginally better results by communicating a lot more and a lot earlier.
Keymailer
I paid for some extra features and for a small promotion on this service for the release month. It did result in a tiny bit of extra Youtube coverage, but based on both the results and the service itself, all in all it wasn't money well spent for me (even if it wasn't a big cost).
Press
This was a really successful marketing endeavor considering the efforts and the resulting coverage. I sent out 121 Steam keys with press release mails starting from the day before release. Both Rock Paper Shotgun and PC Gamer wrote a short review about it in their weekly unknown Steam gems series, and the game got a lovely review from Indiegames.com. Also a lot of smaller sites covered it, many praising it for being a well-executed "chill" tongue-in-cheek roguelike.
The traffic generated by these sites was moderate but visible + I could read some comforting write-ups about the quality of the game.
Ads
I tried Facebook ads during and a bit after the release week + in the middle of the winter sale. Since their efficiency cannot be tracked too well, I can only give a big guesstimate based on the analytics, sales reports and the comparison of the ad performances. I think they paid back their price in additional sales, but did not have much more extra effect. I believe they could work on a bigger scale too with more preparation and with testing out various formats, but I only paid a few bucks and tried two variants, so I wouldn't say I have a good understanding of the topic yet. Some lifetime traffic results: So much effort and so many people reached! Why is it "bad" then, and why were the results such a mixed bag? Well, when it comes to development and design I'm really organized, but when it comes to marketing and PR I'm not at all. As I stated, I was never really "active" on social media and I have a lot to learn about communication. Also the whole thing was not well prepared, and the execution, especially right at the release, was a mess. The release itself was a mess. I think this greatly affected the efficiency! Just to be more specific, I neglected and did not respond in time to a lot of mails and inquiries, and the marketing tasks planned for the launch and for the week after took more than twice as much time to complete as they should have. I think the things I did do were well thought out and creative, but my next releases and accompanying campaigns should be much more organized and better executed.
Time & effort
I don't think of myself as a super-fast, super-productive human being. I know I'm a pretty confident and reliable programmer and also somewhat of a designer, but I'm a slowpoke when it comes to art, audio and marketing/PR. For following my progress and for aiding estimations I always track my time down to the hour level. This also gives me confidence in my ability to deliver and allows me to post charts about the time it took to finish my projects. An important thing to note before looking at the numbers: they are not 100% accurate and are missing a portion of the work which was hard to track. To clarify, I collected the hours when I used my primary tools on my main PC (e.g. Visual Studio, GIMP), but it was close to impossible to track all the tasks, like talking about the game on forums & social media, writing and replying to my emails, browsing for solutions to specific problems and collecting press contact information, you get the idea… All in all these charts still show a close enough summary. 288 days passed between writing down the first line in the design doc and releasing the game on Steam. I "logged" 190 full-time days. Of course more days were spent working on the game, but these were the ones when I spent a whole day working and could track a significant portion of it + note that in the first 4 months of the project I spent only 4 days each week working on I Am Overburdened (one day each week was spent on other projects).
Release
So how did the release go? It was bad; not just bad, "ugly". After I started my wishlist campaign, close to the originally planned date (Oct. 23, 2017), I had to postpone the release by a week due to still having bugs in the build and not having time to fix them (I went on a long-ago-planned and paid-for vacation). I know this is amateurish, but the build was simply not "gold" two weeks prior to release.
Even with the extra week I had to rush some fixes, and of course there were technical issues on launch day. Fortunately I could fix every major problem on the first day after going live and there were no angry letters from the initial buyers, but having to fight fires (even though it's a common thing in the software/game industry) was super tiring while I had to complete my marketing campaign and interact with the community at the same time. The game finally went live on Steam and itch.io on Nov. 2, 2017! I did not crunch at all during development, but I don't remember sleeping too much during the week before and after launching the game. Big lesson for sure. I saw some pictures of the game making it onto the new and trending list on Steam, but it most probably spent only a few hours there. I never saw it, even though I checked Steam almost every hour. I did see it on the front page though, next to the new and trending section in the under-$5 list. It spent a day or two there if I remember correctly. On the other hand, itch.io featured it on their front page and it's been there for around a whole week! With all the coverage and good reviews, did it at least sell well, did it make back its development costs, if not in the first weeks then at least in the last two months? Nope, and it is not close yet…
Sales
In the last two months a bit more than 650 copies of I Am Overburdened were sold. Just to give an overview: 200 copies in the first week, 400 by the end of November, and the rest during the winter sale. This is not a devastating result, and it is actually way better than my first Steam game, but I would be happier and more optimistic about my future as a game developer had it reached around 3 to 4 times that many copies by now. To continue as a business for another year in a stable manner, around 7 to 8 times the total copies (with price discounts in mind) would have to be reached during 2018. I'm not sure if the game will ever reach those numbers though. If you do the math, that is still not "big money", but it could still work for me because I live in eastern Europe (low living costs) + I'm not a big spender. Of course this is an outcome to be prepared for and to be expected when someone starts a high-risk business, so I'm not at all "shocked" by the results. I knew this (or even a worse outcome) had a high chance. No matter how much effort one puts into avoiding failure, most game projects don't reach monetary success. I'm just feeling a bit down, because I enjoyed every minute of making this game, a.k.a. "dream job", maybe except for the release, but most probably I won't be able to continue my journey to make another "bigger" commercial game. I may try to build tiny ones, but certainly will not jump into a 6+ month long project again.
Closing words
It is a bit early to fully dismiss I Am Overburdened and my results. It turned out to be an awesome game. I love it and I'm super proud of it. I'm still looking for possibilities to make money with it (e.g. ports) + over a longer time period, by taking part in several discount events, the income generated by it may cover at least a bigger portion of my investment. No one buys games at full price on PC these days, even AAA games are discounted by 50% a few months after release, so who knows… If you have taken a liking to the game based on the pictures/story, you can buy it (or wishlist it) on Steam or itch.io for $4.99 (may vary based on region).
As an extra for getting all the way here in the post, I recorded a "Gource" video of the I Am Overburdened repository right before Christmas. I usually check all the files into version control, even marketing materials, so you can watch all the output of almost a year of work condensed into 3 minutes. Enjoy! Thank you very much for following my journey and thanks for reading. Take care!
  16. 4 points
    Hello there. I'm not really the blogging type. This is my first ever blog, so I'll do my best. I've been trying to showcase my video game engine from scratch in different professional forums with scant, mixed results. I'm currently a happily employed 3D Graphics Programmer in the medical device field who also loves creating graphics programs as a side hobby. It's been my experience that most people who aren't graphics programmers simply don't appreciate how much learning goes into being able to code even a small fraction of this from scratch. Most viewers will simply compare this to the most amazing video game they've ever seen (developed by teams of engineers) and dismiss this without considering that I'm a one-man show. What I'm hoping to accomplish with this: I'm not totally sure. I spent a lot of my own personal time creating this from the ground up using only my own code (without downloading any existing make-my-game-for-me SDKs), so I figured it's worth showing off. My design:
1. Octree for scene/game management (optimized collision-detection and scene rendering path) from scratch in C++.
2. All math (linear algebra, trig, quaternions, vectors), containers, sorting, searching from scratch in C++.
3. Sound system (DirectSound, DirectMusic, OpenAL) from scratch in C++.
4. Latest OpenGL 4.0 and above mechanism (via GLEW on win32/win64) (GLSL 4.4). Very heavy usage of GLSL.
Unusual/skilled special effects/features captured in the video worth mentioning:
1. Volumetric explosions via a vertex-shader-deformed sphere with a shock-wave animation, further enhanced with bloom post-processing (via compute shader).
2. Lens flare generator, which projects variable-edge polygon shapes along the screen-space vector from the center of the screen to the light position (again in screen space; size and number of flares based on intensity and size of the light source).
3. Real-time animated procedural light ray texture (via fragment shader) additively blended with the volumetric explosions.
4. Active camouflage (aka Predator camouflage).
5. Vibrating shield bubble (with the same sphere data as the volumetric explosion) accomplished using a technique very similar to active camouflage.
6. Exploding mesh: When I first started creating this, I started out using the fixed-function pipeline (years ago). I used one vertex buffer, one optimized index buffer, then another special unoptimized index buffer that traces through all geometry one volume box at a time. Each spaceship "piece" was represented with a starting and ending index offset into this unoptimized index buffer. Unfortunately, the lower the poly resolution, the more obvious it is what I was doing; as a result, when the ship explodes you see the triangle jaggies on the mesh edges. My engine is currently somewhat married to this design—which is why I haven't redesigned that part yet (it's on the list, but priorities). If I were to design this over again, I'd simply represent each piece with a different transform depending upon whether or not the interpolated object-space vertex (input to the pixel shader) was in front of or behind an arbitrary "breaking plane". If the position was beyond the breaking-plane boundary, discard the fragment. This way, I can use one vertex buffer + one optimized index buffer and achieve better looking results with faster code.
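For what it's worth, a tiny C++ sketch of the "breaking plane" test described at the end of the post above. It is only an illustration of the idea under my own assumptions (the names are made up, and in the engine the test would run per-fragment in a shader and end in a discard):

// Plane given as unit normal n and offset d, i.e. points p on the plane satisfy dot(n, p) + d = 0.
struct Vec3 { float x, y, z; };

// Returns true if the interpolated object-space position lies on the "broken off"
// side of the breaking plane, meaning this piece's fragment should be discarded.
bool BeyondBreakingPlane(const Vec3& p, const Vec3& n, float d)
{
    float signedDistance = n.x * p.x + n.y * p.y + n.z * p.z + d;
    return signedDistance > 0.0f;  // positive half-space = discard for this piece
}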
  17. 4 points
    Isn't that field of science called psychology? Or science itself is just applied philosophy at the end of the day... BTW, they already have metrics for measuring whether an object is conscious or not, which have been demonstrated to be able to tell the difference between normal awake brains, sleeping brains, dreaming brains, anesthetized brains, vegetative comatose brains, minimally conscious brains, and "locked-in syndrome" brains (which would otherwise appear similar to other comatose brains, but on this metric show high levels of consciousness). Science does peer into those mechanisms. People's free will is surprisingly easy to influence... Again, that's psychology (or hypnotism too, if you like). The exact mechanisms of how this process works, though -- or of any specific human action, when trying to explain the entire chain of consequence from genesis of thought to action -- are too complex for any human to ever understand (a thousand trillion synaptic connections, multiplied by all the other variables, is an inconceivable amount of data, even when just considering a single moment in time...). There's also the camp that believes the actual physical mechanisms behind thought are rooted in quantum behavior, which is probabilistic, which makes the whole thing "just physics" without having to say that it's deterministic (keeping the "free" part of "free will" free, and leaving the door open for a God who rolls dice).
  18. 4 points
    I don't believe @Alberth could have explained that better; the problem is that it isn't a very simple thing to do, so I added images to show some details. Sorry, I am at work and don't have the time to make them high quality. So I will show you what Alberth said: First you need what is known as tiling textures; these are the rectangles: The important part is that you can make a larger image from them because they tile. Next you need a mask; this is the shape of your country: It's common for masks to be grayscale or black and white, as this saves a lot of data. The way a mask works is that you tell your fragment shader to render the grass texture where the white is, because white = (1,1,1) and any number times 1 = itself. Example: 0.5 * 1 = 0.5. You can just multiply the color of the texture with the mask. Black = (0,0,0) and any number times 0 = 0. Your end result should look like this: Next you tell your fragment shader to skip all pixels that are black (0,0,0) and it should render only the grass part. After rendering the grass part you do the same using your rock texture: first you render the rock texture a bit lower and to the left of the image, then you render the grass where you want it. For the edge you can just use edge detection, or you can also store the edge in the mask. Using tiling textures and a mask you can create all the content you need.
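To make the multiply-by-mask idea concrete, here is a minimal CPU-side C++ sketch. In the real game this logic lives in the fragment shader; the names here (Color, ApplyMask, ShouldSkip) are made up purely for illustration:

#include <cstdio>

struct Color { float r, g, b; };

// mask is the value sampled from the grayscale mask texture:
// 1.0 where the country is (white), 0.0 outside it (black).
Color ApplyMask(const Color& texel, float mask)
{
    // White (1) keeps the texture color, black (0) zeroes it out: 0.5 * 1 = 0.5, 0.5 * 0 = 0.
    return { texel.r * mask, texel.g * mask, texel.b * mask };
}

// The "skip black pixels" step; a fragment shader would typically discard here instead.
bool ShouldSkip(float mask)
{
    return mask == 0.0f;
}

int main()
{
    Color grass   = { 0.2f, 0.6f, 0.1f };
    Color kept    = ApplyMask(grass, 1.0f);  // inside the country: unchanged
    Color dropped = ApplyMask(grass, 0.0f);  // outside: black, and skipped
    std::printf("kept: %.2f %.2f %.2f, dropped is skipped: %d\n",
                kept.r, kept.g, kept.b, ShouldSkip(dropped.r) ? 1 : 0);
    return 0;
}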
  19. 4 points
    As others mentioned, it isn't just games, and it isn't just beginners. It can be that you'll lose an unrealistic amount of weight as your New Year's resolution, or you'll finish an assignment by some deadline, or you're Apple releasing a watch a week after the official release date, or Duke Nukem Forever being released about 15 years late. The problem has been around for all of humanity. A bit of web searching shows that even in the ancient world, the ancient Roman, Greek, and Babylonian cultures made assorted promises and resolutions each new year, including documented promises to return objects, pay debts, and take better care. People have been setting (and failing to meet) overambitious, unrealistic goals for all of recorded history. It doesn't matter even if we have experience. We KNOW we aren't going to drop that much weight, but we think that THIS TIME we might be able to. We KNOW that every year we wait to file taxes, but we commit to THIS TIME doing it earlier. We KNOW that similar assignments have required more time, but we think that THIS TIME we can make it by an earlier deadline. We KNOW a game like the one we see has a 15-minute-long scrolling list of names in the credits, but we think we can do it. As is typical, Wikipedia's got a write-up listing a bunch of proposed reasons with references. It looks like something psychologists have been writing about since the 1960s. Given the history of new year's promises, I can imagine Hammurabi reading the stone tablet newspaper at the new year, describing how he can keep his goals of losing weight and keeping it off.
  20. 4 points
    We are the music-makers, And we are the dreamers of dreams, Wandering by lone sea-breakers And sitting by desolate streams; World losers and world forsakers, On whom the pale moon gleams: Yet we are the movers and shakers Of the world for ever, it seems. With wonderful deathless ditties We build up the world’s great cities. And out of a fabulous story We fashion an empire’s glory: One man with a dream, at pleasure, Shall go forth and conquer a crown; And three with a new song’s measure Can trample an empire down. We, in the ages lying In the buried past of the earth, Built Nineveh with our sighing, And Babel itself with our mirth; And o’erthrew them with prophesying To the old of the new world’s worth; For each age is a dream that is dying, Or one that is coming to birth. From "Ode" by Arthur O'Shaughnessy
  21. 4 points
    It's been a few months since the last staff blog update, and I apologize for that. It was a busy end of 2017. We've recently announced several new community features here on GameDev.net that I'd like to go through - if you haven't spent time exploring all of GameDev.net then you might have missed these! GameDev Challenges Spawned from this forum request: GameDev Challenges is a new community "game jam"-like event for developers looking to test their skills or learn through short-form projects intended to take no more than a month or two to develop. Developers who complete the challenges in the allotted time earn a badge on their profile. Of course, developers can complete challenges after the allotted time, but right now the badges are not awarded. Right now we are using the GameDev Challenges forum to manage these (Forums -> Community -> GameDev Challenges), and we're currently on the 3rd challenge where developers need to create a clone of the arcade classic Missile Command. The first two challenges were for a Pong! clone and an Arcade Battle Arena along the lines of the classic Bubble Bobble. I encourage everyone to check out the threads and entries for all the challenges! In the near future we'll integrate Challenges with our latest announcement: Projects and Developer Profiles. Projects As we announced: The Project profile is pretty extensive. We recommend setting up a developer blog for each Project, which can then be used to provide updates for the Project and will show on your Project page. A Gallery is also automatically created for your Project (if you don't already have one), and from there you can upload screenshots and videos of your Project - these will also show in your Project page. The rest of your Project profile includes links to your Project homepage, store links, descriptions, platforms, and other details about your project. You can even upload and manage files for others to download. Here's the feature list from the announcement: Browse, download, and comment on projects from other Developers Provide updates to your project by linking your GameDev.net Blog Create your own Developer profile, including with a GameDev.net subdomain (in progress)! Showcase your Project with screenshots from your project's linked GameDev.net Album Manage your projects through your Developer Dashboard Market your project's website, Facebook, Twitter, Steam, Patreon, Kickstarter, and GameDev.Market pages Track project views and downloads through Google Analytics Upload and manage file downloads for your Project, allowing others to try it out and give feedback Link to your project with an embeddable widget (link auto-embeds on GameDev.net) Showcase your project with a trailer on YouTube or Vimeo Import your project from IndieDB or itch.io Developer Profiles Developer Profiles are a new feature of your GameDev.net profile for showcasing your development team(s). We have basic features for this right now with plans to expand it. You'll find all the settings for your Developer Profile in the Project Dashboard mentioned earlier. NeHe Source Code on GitHub Also mentioned recently is that we've put all of the NeHe tutorial source code on GitHub at https://github.com/gamedev-net/nehe-opengl. We will accept updates to the source code via pull request. NeHe is our long-running OpenGL tutorial site for developers wanting to learn OpenGL in a very accessible, straightforward manner, and is available at http://nehe.gamedev.net. 
GameDev Loadout Podcast And finally, if you're into podcasts then I suggest checking out our new Podcast section. We've recently partnered with Tony at GameDev Loadout to showcase his podcasts on GameDev.net. Tony interviews industry professionals and indie developers about a number of game development related topics. He's approaching his 80th episode, which is quite a feat! Next? We have bigger plans with Projects, Developer Profiles, and GameDev Challenges, so keep an eye out for more updates. We'll also be re-evaluating the home page in the near future (yes, again) to try to make it easier to find content you're interested in seeing across the entire site. Oh, one last announcement - the Game Developers Conference is quickly approaching, and GameDev.net members can get 10% off All Access and GDC Conference passes using this code: GDC18DEV As always, please share any thoughts and feedback in the comments below!
  22. 4 points
    I caught wind of the Missile Command challenge earlier in the month and figured I'd knock it out during Christmas break. I was planning on coding more on my Car Wars prototype, but the challenge took up most of my programming time. Sleeping late, hanging out with my family, and playing video games took up the rest of my free time. All in all, it was a great way to slide into the new year. I spent some effort keeping the code clean (mostly) and even did a couple of hours' worth of commenting and code reorganization before zipping up the package. Many sample projects are quick and dirty affairs, so they aren't really a good learning resource for new programmers. Hopefully my project isn't too bad to look at.
What went right?
Switching to Unity a few years ago. This was definitely the right call for me. It helped me land my dream job and allows me to focus on making games rather than coding EVERYTHING from the ground up. KISS (Keep It Simple Stupid) - I started thinking about doing my own take on Missile Command but decided on doing just a simple clone. The known and limited scope let me keep the project clean and kept development time down to something reasonable. Playing the original at http://my.ign.com/atari/missile-command was a big help for identifying features I wanted. Getting the pixel-perfect look of old-school graphics was a little bit tricky, but thanks to a well-written article I was able to get sharp edges on my pixels. https://blogs.unity3d.com/2015/06/19/pixel-perfect-2d/ I had done some research on this earlier so I knew this would be a problem I'd have to solve. Or rather, see how someone else solved it and implement that. Using free sounds from freesound.org was a good use of time. There are only 4 sound effects in the game and it only took me an hour or two to find what I felt were the right ones.
What went ok?
Making UIs in Unity doesn't come naturally to me yet. I just want some simple elements laid out on the screen. Sometimes it goes pretty quickly, other times I'm checking and unchecking checkboxes, dragging stuff around in the hierarchy, and basically banging around on it until it works. I got the minimal core features of Missile Command, but not all of them. You don't really think about all the details until you start making them. I'm missing cruise missiles, bombers, and the splitting of warheads. Dragging the different sprites into position on the screen was manual and fiddly. There's probably a better way to do this, but it didn't take too long. You can't shoot through explosions, which makes the game a little more challenging. And you can blow up your own cities if you shoot defense warheads too close to them. It was easy enough to fix, but I left it in there.
What went wrong?
I spent a ton of time getting the pixel-perfect stuff right. Playtesting in the editor, it was set to Maximize on Play. There wasn't quite enough room to see the full screen, so the scale was at 0.97, making the display 97% of what it should be and thus blurring all my sharp edges. I didn't see that setting though... >.< I pulled my hair out trying to see the problem in my math, and even downloaded a free pixel-perfect camera which was STILL showing blurry stuff. Finally, I built the game and ran it outside the editor and saw things were just fine. There's also an import setting on sprites for Compression that I set to None. I'm not sure if this second step was necessary. I've been bitten before by the Editor not being 100% accurate to final gameplay, and I wish I had tried that sooner.
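As an aside, a hedged sketch of the pixel-perfect camera math the linked article deals with, written as a plain C++ helper for illustration (the project itself is Unity/C#; the function name is made up, and the relation assumed here is the usual one: orthographic half-height = screen height in pixels / (2 * pixels per unit)):

// With this orthographic size, one texture pixel maps to exactly one screen pixel,
// which is what keeps the sprite edges sharp.
float PixelPerfectOrthoSize(int screenHeightPixels, float pixelsPerUnit)
{
    return screenHeightPixels / (2.0f * pixelsPerUnit);
}
// Example: a 768-pixel-tall window at 32 pixels per unit gives 768 / 64 = 12.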
I had trouble with the scoring screen. I wanted to put the missiles up on the score line like Missile Command does, but ran into trouble with game-sized assets and canvas-sized UI elements. After 45 minutes or so I said screw it, and just put a count of the missiles and cities up. I didn't data-drive the game as much as I wanted. The data is in the code itself, so users can't mod the game without downloading the source code. I'm also only using one set of colors for the level. If I put any more time into this, I'll probably tackle this issue next, and then worry about the features I missed out on.
Final thoughts
Working on this project makes me appreciate how awesome the programmers of old really were. With the tools I have (Unity, Visual Studio, etc.) it took me a couple of weekends. And even then I didn't recreate all the features of the original. I'm including a link to the zipped-up project in case anyone wants to see the source code to play around with it. Hopefully someone finds it useful. If so, let me know.
EcksMissileCommand.zip - Executable if you want to play.
EcksMissileCommand_Source.zip - The project if you want to mess around with it.
Sound Credits (freesound.org):
https://freesound.org/people/sharesynth/sounds/344506/ - Explosion.wav
https://freesound.org/people/sharesynth/sounds/344525/ - LevelStartAlarm.wav
LevelStartAlarm_Edit.wav - I used Audacity to edit the above sound. I took the first six beeps then faded them to silence.
https://freesound.org/people/Robinhood76/sounds/273332/ - MissileLaunch.wav
https://freesound.org/people/sharesynth/sounds/341250/ - ScoreCount.wav
  23. 4 points
    You started too big with your project (I have seen your posts on the forum). You should have made a small game first; you would have gotten to the reward faster. Then you should have made a small team project, 2-3 members, and only after that should you have dived into your large project and assembled a large team. It takes around 12 years to reach the point where you can make your "The Game", and even then only the possible parts. (I still can't make that talking AI I wanted.) Making games is hard, it's expensive, the people you make it for will hate you for it, and others will tell you it isn't a real job. Find some reason to make games; without it you will be crushed. I don't believe that a hobby team of around 12 people with random skills and around a $500,000 budget could match 250-500 dedicated professionals with a $40,000,000 budget. Even big indie developers spend millions on their games. For indie developers the goal should be something like Minecraft or Five Nights At Freddy's; special games that aren't too expensive to make and break the mold.
  24. 4 points
    This question is pretty vague. I'll just describe how I generally do the main game loop, but this is just one of many ways that it can work. I have an object Game... this is the object that contains the main game loop. It also contains a member called ActiveGameScene which implements the interface IGameScene. The IGameScene has three primary responsibilities: respond to a request, update the game state, or render the game state. The IGameScene contains a list of IGameSystems. Game Systems are very tightly focused "functionality packets", so one system may be my input system and another may be my rendering system, and there may be many more... lighting systems, AI systems, etc. Each System has methods ProcessRequest, Update, and Render. So, the Game contains a Scene which contains a set of Systems, and the main game loop is:
//Pseudo Code
//In Game
while(running)
{
    while(requestsPending)
        Scene.ProcessRequest(request);
    Scene.Update();
    Scene.Render();
}
//In GameScene
ProcessRequest(request)
{
    foreach(ISystem sys in Systems)
        sys.ProcessRequest(request);
}
Update() { foreach(ISystem sys in Systems) sys.Update(); }
Render() { foreach(ISystem sys in Systems) sys.Render(); }
Then, it's in the specific systems where the interesting things happen. Maybe there is an AI System...
//In AI System
ProcessRequest(request) { }
Update()
{
    foreach(AIUnit unit in AIUnits)
    {
        if (unit.CurrentAction == null)
        {
            unit.CurrentAction = unit.ChooseNextAction();
        }
        switch(unit.CurrentAction)
        {
            case Idle:
                break;
            case Attack:
                unit.Attack();
                break;
            case Move:
                unit.Position += (unit.Dest - unit.Position).Normalize();
                break;
        }
    }
}
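If it helps to see it in a compilable form, here is a minimal C++ sketch of the same Game -> Scene -> Systems structure. The type names (Request, IGameSystem, GameScene, Game) mirror the pseudocode above and are only illustrative, not a prescribed API:

#include <memory>
#include <queue>
#include <vector>

struct Request { int type = 0; };

struct IGameSystem {
    virtual ~IGameSystem() = default;
    virtual void ProcessRequest(const Request&) {}
    virtual void Update() {}
    virtual void Render() {}
};

struct GameScene {
    std::vector<std::unique_ptr<IGameSystem>> systems;
    void ProcessRequest(const Request& r) { for (auto& s : systems) s->ProcessRequest(r); }
    void Update()                         { for (auto& s : systems) s->Update(); }
    void Render()                         { for (auto& s : systems) s->Render(); }
};

struct Game {
    GameScene activeScene;
    std::queue<Request> pendingRequests;  // filled elsewhere, e.g. by OS/input polling
    bool running = true;

    void Run() {
        while (running) {
            // Drain all pending requests before advancing the simulation.
            while (!pendingRequests.empty()) {
                activeScene.ProcessRequest(pendingRequests.front());
                pendingRequests.pop();
            }
            activeScene.Update();  // advance the game state
            activeScene.Render();  // draw the current state
        }
    }
};

int main() {
    Game game;
    game.running = false;  // nothing registered in this sketch, so exit immediately
    game.Run();
    return 0;
}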
  25. 4 points
    Realistically speaking it's nearly impossible to find people who will be as motivated and driven as yourself towards your own project. It does happen, but there's a reason why entry-level positions are typically filled with people who provide crappy service. Think of it like this: the person who pumps gas likely doesn't want to do that forever; they likely would rather own the gas station. Therefore you get someone who is 'just doing a job' and likely isn't providing the best service possible. People who do provide the best service possible likely get promoted, and then someone else fills their spot at the bottom, ensuring that quality service is always hard to find. Ultimately a really driven person would likely want to rise through the ranks and then start their own business in the field they rose up through. It's the same with any field of work. Finding someone who will give you 100% as an employee/team member is truly finding a diamond in the rough. If you compare the above example to game development, you can't really expect everyone who might want to work with you to be as driven and dedicated as you are. When you do find someone as driven as you are, it's likely that they will be using their position to gain experience to one day lead their own team. Some people are fine staying in the middle and providing 100%, but it's tough to find those kinds of driven people. The passion for your project has to come from you. As leader of a project you then work to inspire those on your team to work with you and be as passionate about the project as you are; lead by example. Most teams led by two people are married couples, siblings or long-time friends. There's a reason for this: it's because they share the same passions and work well together. Finding one person to work in perfect harmony with you is rare; having that with an entire team is nearly unheard of. As for "wasting your life making a product that is never gonna be as good as the rest of the industry", your life is yours to do with as you please. If you enjoy the creation process, then to me, it's not a waste at all. A sense of purpose for one's life should come from many things, not just work, but social life, hobbies, recreation, etc; it's a balancing act. That way if any one thing doesn't work out it's not such a far fall from being content. Comparing your work or the potential of your work to 'the rest of the industry' is, to me, counterproductive if it's uninspiring to you. Remember that there can only be one 'the best', and only a few at the 'top of the game'. There are billions of people on the planet; the odds are stacked against you. It's also important to remember that what you see as an end user is a polished product, not a prototype. So you're seeing the best that could be brought forward. Each individual on a team has their own respective portfolio of work that also represents their current best, and it's something they update as they get better. What we as end users rarely see is the crappy stuff that doesn't make release, the 'unpolished' work if you will. Think of a painter who has their work on exhibition at a gallery. Everything we see at the gallery is what they believe to be their best work. Now think of their studio: the stuff left behind wasn't good enough to make the cut at the gallery. On top of that, they will have scores of work they didn't like and never completed, not to mention the work from X years before they finished mastering their skill, which they no longer show anyone.
Something I always remember is that it'll take you around 10,000 hours in each skill to get good enough to be professional. If you're trying to develop a game, then whatever asset skill you have will take that same 10k hours. If you lead the team, then team leadership is another skill requiring 10k hours. Each skill you use will take you 10k hours. Some skills will overlap and allow you to 'level up' at the same time, others won't. This is why most people work on one skill at a time, or join teams in order to only be in charge of one skill. They can then learn from each other and master other skills together. If you've ever watched interviews or behind-the-scenes footage of some of your favorite productions, be it video games, music, movies, stand-up comedy, etc., one thing you'll notice is that most artists will be unhappy with work they were once happy with. This is a representation of their skills improving as an artist. Clerks by Kevin Smith is one of my favorite movies; he talks smack about it all the time because he's evolved as a writer/director. Another good example of the things I've mentioned are some of the costume concepts for the superhero movies over the years. Some of them are super bad and needed to be entirely reworked, but ultimately we as end users got something that looked really good on screen. Without looking behind the scenes, the average user never knows how much work went into making something perfect. Stand-up comedy is the same: we don't often get to see comedians in their early years bombing at 50-person bars, we see their one-hour specials a decade into their career. Many assume that what we see came out perfect on the first try, when in reality it's far from that easy. In the end it's important that you enjoy the creation process and are truly dedicated to making your project/skill as perfect as possible within reason. If you don't enjoy it, then why bother? It doesn't matter what others think; art typically shines best when you make it for yourself and not for others.
  26. 4 points
    My turn. Source file and exec are in the zip here: https://github.com/wybifu/missile_command/archive/master.zip
  27. 4 points
    Seems you need to do your debugging yourself. Did you try outputting variables to the screen one after another? Something like: SkyColors[DTID.xy] = float4(X, Y, r, 1); You can draw some conclusions based on the colors you see. If this does not help, you can create a debug buffer, store variables there, read it back on the CPU and print the values so you can spot NaNs or infinities. That's annoying, but once set up you can use that debugging approach for each shader.
  28. 4 points
    The title is vague, so I'll explain. I was frustrated with UI dev (in general), and even more so when working with an application that needed OpenGL with an embedded UI. What if I wanted to make a full application with OpenGL? (Custom game engine, anyone?) Well, I did want to. And I'm working on it right now. But it started me on making what I think is a great idea for a styling language; I present KSS (pron. "kiss"), the much more programmable version of CSS... (only for desktop dev) /* It has all the 'normal' styling stuff, like you'd expect. */ elementName { /*This is a styling field.*/ font-color: rgb(0,0,0), font: "Calibri", font-size: 12px } .idName { color: rgba(255,255,255,0) } uiName : onMouse1Click{ color: vec3(0,0,0) } BUT it also has some cool things. I've taken the liberty of adding variables, templates (style inheritance), hierarchy selection, events (as objects), native function calls, and in-file functions. var defaultColor: rgb(0,0,0) var number : 1.0 /*Types include: rgb, rgba, vec2, vec3, vec4, number, string, true, false, none (null), this*/ fun styleSomeStuff{ .buttons{ color: defaultColor, text-color: rgb(number,255,number) } } template buttonStyle{ color: rgb(255,255,0) } .buttons{ use: buttonStyle, otherTemplateName /*copies the templates' styling fields*/ color: defaultColor; } .buttons : onMouse1Click{ /* events not assigned as a value are initialized when read from file*/ styleSomeStuff(); /* call the in-file function */ nativeFunctionCall(); /* call a native function that's bound */ var a : 2; /* assign a variable, even if not initially defined */ } /*storing an event in a 'value' will allow you to operate on the event itself. It is only run when 'connected', below.*/ val ON_CLICK2 = .buttons : onMouse2Click{ use: templateName } connect ON_CLICK2; disconnect ON_CLICK2; /*you can make a function to do the same.*/ fun connectStuff{ connect ON_CLICK2; } But wait, you ask... what if I need to select items from a hierarchy? Surely using element names and ids couldn't be robust enough! Well: /*We use the > to indicate that the next element is a 'child' of itemA in the hierarchy*/ itemA > itemAChild : onMouse1Click{ } .itemId > .itemAChild > itemAChildsChild{ } /*want to get all children of the element at hand?*/ elementName > .any{ /*this will style all the element's children*/ } /*What if we want to use conditional styling? Like if a variable or tag inside the element is or isn't something?*/ var hello : false; var goodbye : true; itemA [var hello = false, var goodbye != false] { /*passes*/ } itemA [@tagName = something]{ /*passes if the tag is equal to whatever value you asked for*/ } The last things to note are event pointers, how tagging works, 'this', and the general workflow. Tagging works (basically) the same as in Unity: you say @tagName : tagValue inside a styling field to assign that element a tag that's referable in the application. You have the ability to refer to any variable you assign within the styling sheet from, say, source code - the backend. As such, being able to set a variable to 'this' (the element being styled) allows you to operate on buttons that are currently in focus, or set the parent of an item to another element. All elements are available to use as variables inside and outside the styling sheet, so an event can effectively parent an element or group of elements to a UI you specify.
I'll show that in a figure below. Event pointers let you trigger an event with one UI component but affect another: /*We use the -> to point to a new element, or group of elements, to style.*/ .buttons : onMouse1Click -> .buttons > childName{ visible : false parent: uiName; } /* In this case, we style something with the name "childName" that's parented to anything with the id 'buttons'. Likewise, if there was an element with the name 'uiName', it would parent the childName element to it. */ Lastly: the results/workflow. I'm in the process of building my first (serious) game engine after learning OpenGL, and I'm building it with Kotlin and Python. I can bind both Kotlin and Python functions for use inside the styling sheet, and although I didn't show the layout language (because it's honestly not good), this is the result after a day of work so far while using these two UI languages I've made (in the attachments). It's also important to note that while it does adopt the CSS box model, it is not a cascading layout. That's all I had to show. Essentially, right now it's a Kotlin-backed creation, but it can be made for Java specifically, C++, C#, etc. I'm planning on adding more to the language to make it even more robust. What's more, it doesn't need to be OpenGL-backed - it can use Java paint classes, SFML, etc. I just need to standardize the API for it. I guess what I'm wondering is: if I put it out there, would anyone want to use it for desktop application dev? P.S. Of course the syntax is subject to change, via suggestion. P.S.[2] The image in the middle of the editor is static, but won't be when I put 3D scenes in the scene view.
  29. 4 points
    This article uses material originally posted on the Diligent Graphics web site.

Introduction

Graphics APIs have come a long way from a small set of basic commands allowing limited control of the configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered the industry standard. New APIs can provide substantial performance and functional improvements, but may not be supported by older hardware. An application targeting a wide range of platforms still needs to support Direct3D11 and OpenGL. New APIs will not give any advantage when used with old paradigms. It is totally possible to add Direct3D12 support to an existing renderer by implementing the Direct3D11 interface through Direct3D12, but this will give zero benefits. Instead, new approaches and rendering architectures that leverage the flexibility provided by the next-generation APIs are expected to be developed. There are at least four APIs (Direct3D11, Direct3D12, OpenGL/GLES, Vulkan, plus Apple's Metal for iOS and macOS platforms) that a cross-platform 3D application may need to support. Writing separate code paths for all APIs is clearly not an option for any real-world application, and the need for a cross-platform graphics abstraction layer is evident. The following is the list of requirements that I believe such a layer needs to satisfy:

Lightweight abstractions: the API should be as close to the underlying native APIs as possible to allow an application to leverage all available low-level functionality. In many cases this requirement is difficult to achieve because specific features exposed by different APIs may vary considerably.
Low performance overhead: the abstraction layer needs to be efficient from a performance point of view. If it introduces a considerable amount of overhead, there is no point in using it.
Convenience: the API needs to be convenient to use. It needs to assist developers in achieving their goals, not limit their control of the graphics hardware.
Multithreading: the ability to efficiently parallelize work is at the core of Direct3D12 and Vulkan and one of the main selling points of the new APIs. Support for multithreading in a cross-platform layer is a must.
Extensibility: no matter how well the API is designed, it still introduces some level of abstraction. In some cases the most efficient way to implement certain functionality is to use the native API directly. The abstraction layer needs to provide seamless interoperability with the underlying native APIs to provide a way for the app to add features that may be missing.

Diligent Engine is designed to solve these problems. Its main goal is to take advantage of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as the graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. Full source code is available for download at GitHub and is free to use.
Overview

The Diligent Engine API takes some features from Direct3D11 and Direct3D12 and also introduces new concepts to hide certain platform-specific details and make the system easy to use. It contains the following main components:

Render device (IRenderDevice interface) is responsible for creating all other objects (textures, buffers, shaders, pipeline states, etc.).

Device context (IDeviceContext interface) is the main interface for recording rendering commands. Similar to Direct3D11, there are immediate and deferred contexts (which in the Direct3D11 implementation map directly to the corresponding context types). The immediate context combines command queue and command list recording functionality. It records commands and submits the command list for execution when it contains a sufficient number of commands. Deferred contexts are designed to only record command lists that can be submitted for execution through the immediate context. An alternative way to design the API would be to expose command queues and command lists directly. This approach, however, does not map well to Direct3D11 and OpenGL. Besides, some functionality (such as dynamic descriptor allocation) can be implemented much more efficiently when it is known that a command list is recorded by a certain deferred context from some thread. The approach taken in the engine does not limit scalability, as the application is expected to create one deferred context per thread, and internally every deferred context records a command list in a lock-free fashion. At the same time this approach maps well to older APIs. In the current implementation, only one immediate context that uses the default graphics command queue is created. To support multiple GPUs or multiple command queue types (compute, copy, etc.), it is natural to have one immediate context per queue. Cross-context synchronization utilities will be necessary.

Swap chain (ISwapChain interface) represents a chain of back buffers and is responsible for showing the final rendered image on the screen. The render device, device contexts and swap chain are created during engine initialization.

Resources (ITexture and IBuffer interfaces). There are two types of resources - textures and buffers. There are many different texture types (2D textures, 3D textures, texture arrays, cubemaps, etc.) that can all be represented by the ITexture interface.

Resource views (ITextureView and IBufferView interfaces). While textures and buffers are mere data containers, texture views and buffer views describe how the data should be interpreted. For instance, a 2D texture can be used as a render target for rendering commands or as a shader resource.

Pipeline state (IPipelineState interface). The GPU pipeline contains many configurable stages (depth-stencil, rasterizer and blend states, different shader stages, etc.). Direct3D11 uses coarse-grain objects to set all stage parameters at once (for instance, a rasterizer object encompasses all rasterizer attributes), while OpenGL contains a myriad of functions to fine-grain control every individual attribute of every stage. Neither method maps very well to modern graphics hardware, which combines all states into one monolithic state under the hood. Direct3D12 directly exposes the pipeline state object in the API, and Diligent Engine uses the same approach.

Shader resource binding (IShaderResourceBinding interface). Shaders are programs that run on the GPU.
Shaders may access various resources (textures and buffers), and setting the correspondence between shader variables and actual resources is called resource binding. The resource binding implementation varies considerably between different APIs. Diligent Engine introduces a new object called shader resource binding that encompasses all resources needed by all shaders in a certain pipeline state.

API Basics

Creating Resources

Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. Graphics APIs usually have a native object that represents a linear buffer. Diligent Engine uses the IBuffer interface as an abstraction for a native buffer. To create a buffer, one needs to populate a BufferDesc structure and call the IRenderDevice::CreateBuffer() method as in the following example:

BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer );

While there is usually just one buffer object, different APIs use very different approaches to represent textures. For instance, in Direct3D11, there are ID3D11Texture1D, ID3D11Texture2D, and ID3D11Texture3D objects. In OpenGL, there is an individual object for every texture dimension (1D, 2D, 3D, Cube), which may be a texture array, which may also be multisampled (i.e. GL_TEXTURE_2D_MULTISAMPLE_ARRAY). As a result there are nine different GL texture types that Diligent Engine may create under the hood. In Direct3D12, there is only one resource interface. Diligent Engine hides all these details in the ITexture interface. There is only one IRenderDevice::CreateTexture() method that is capable of creating all texture types. Dimension, format, array size and all other parameters are specified by the members of the TextureDesc structure:

TextureDesc TexDesc;
TexDesc.Name = "My texture 2D";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex );

If the native API supports multithreaded resource creation, textures and buffers can be created by multiple threads simultaneously. Interoperability with the native API provides access to the native buffer/texture objects and also allows creating Diligent Engine objects from native handles. This lets applications seamlessly integrate native API-specific code with Diligent Engine. Next-generation APIs allow fine-level control over how resources are allocated. Diligent Engine does not currently expose this functionality, but it can be added by implementing an IResourceAllocator interface that encapsulates the specifics of resource allocation and providing this interface to the CreateBuffer() or CreateTexture() methods. If null is provided, the default allocator should be used.

Initializing the Pipeline State

As mentioned earlier, Diligent Engine follows the next-gen APIs to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).
This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs as it eliminates pipeline misconfiguration errors. With many individual calls tweaking various GPU pipeline settings it is very easy to forget to set one of the states, or to assume that a stage is already properly configured when in fact it is not. Using a pipeline state object helps avoid these problems, as all stages are configured at once.

Creating Shaders

While in earlier APIs shaders were bound separately, in the next-generation APIs as well as in Diligent Engine shaders are part of the pipeline state object. The biggest challenge when authoring shaders is that Direct3D and OpenGL/Vulkan use different shader languages (while Apple uses yet another language in its Metal API). Maintaining two versions of every shader is not an option for real applications, so Diligent Engine implements a shader source code converter that allows shaders authored in HLSL to be translated to GLSL. To create a shader, one needs to populate a ShaderCreationAttribs structure. The SourceLanguage member of this structure tells the system which language the shader is authored in:

SHADER_SOURCE_LANGUAGE_DEFAULT - the shader source language matches the underlying graphics API: HLSL for Direct3D11/Direct3D12 modes, and GLSL for OpenGL and OpenGLES modes.
SHADER_SOURCE_LANGUAGE_HLSL - the shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL.
SHADER_SOURCE_LANGUAGE_GLSL - the shader source is in GLSL. There is currently no GLSL-to-HLSL converter, so this value should only be used for OpenGL and OpenGLES modes.

There are two ways to provide the shader source code. The first is to use the Source member. The second is to provide a file path in the FilePath member. Since the engine is entirely decoupled from the platform and the host file system is platform-dependent, the structure exposes a pShaderSourceStreamFactory member that is intended to give the engine access to the file system. If FilePath is provided, a shader source stream factory must also be provided. If the shader source contains any #include directives, the source stream factory will also be used to load these files. The engine provides a default implementation for every supported platform that should be sufficient in most cases. A custom implementation can be provided when needed. When sampling a texture in a shader, the texture sampler was traditionally specified as a separate object that was bound to the pipeline at run time, or set as part of the texture object itself. However, in most cases it is known beforehand what kind of sampler will be used in the shader. Next-generation APIs expose a new type of sampler called a static sampler that can be initialized directly in the pipeline state. Diligent Engine exposes this functionality: when creating a shader, textures can be assigned static samplers. If a static sampler is assigned, it will always be used instead of the one initialized in the texture shader resource view. To initialize static samplers, prepare an array of StaticSamplerDesc structures and initialize the StaticSamplers and NumStaticSamplers members. Static samplers are more efficient and it is highly recommended to use them whenever possible. On older APIs, static samplers are emulated via generic sampler objects.
The following is an example of shader initialization:

ShaderCreationAttribs Attrs;
Attrs.Desc.Name = "MyPixelShader";
Attrs.FilePath = "MyShaderFile.fx";
Attrs.SearchDirectories = "shaders;shaders\\inc;";
Attrs.EntryPoint = "MyPixelShader";
Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

ShaderVariableDesc ShaderVars[] =
{
    {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC},
    {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
    {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
};
Attrs.Desc.VariableDesc = ShaderVars;
Attrs.Desc.NumVariables = _countof(ShaderVars);
Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

StaticSamplerDesc StaticSampler;
StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
StaticSampler.TextureName = "g_MutableTexture";
Attrs.Desc.NumStaticSamplers = 1;
Attrs.Desc.StaticSamplers = &StaticSampler;

ShaderMacroHelper Macros;
Macros.AddShaderMacro("USE_SHADOWS", 1);
Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
Macros.Finalize();
Attrs.Macros = Macros;

RefCntAutoPtr<IShader> pShader;
m_pDevice->CreateShader( Attrs, &pShader );

Creating the Pipeline State Object

After all required shaders are created, the rest of the fields of the PipelineStateDesc structure provide depth-stencil, rasterizer, and blend state descriptions, the number and format of render targets, the input layout format, etc. For instance, the rasterizer state can be described as follows:

PipelineStateDesc PSODesc;
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
RasterizerDesc.AntialiasedLineEnable = False;

Depth-stencil and blend states are defined in a similar fashion. Another important thing that the pipeline state object encompasses is the input layout description, which defines how inputs to the vertex shader (the very first shader stage) should be read from memory. The input layout may define several vertex streams that contain values of different formats and sizes:

// Define input layout
InputLayoutDesc &Layout = PSODesc.GraphicsPipeline.InputLayout;
LayoutElement TextLayoutElems[] =
{
    LayoutElement( 0, 0, 3, VT_FLOAT32, False ),
    LayoutElement( 1, 0, 4, VT_UINT8, True ),
    LayoutElement( 2, 0, 2, VT_FLOAT32, False ),
};
Layout.LayoutElements = TextLayoutElems;
Layout.NumElements = _countof( TextLayoutElems );

Finally, the pipeline state defines the primitive topology type. When all required members are initialized, a pipeline state object can be created by the IRenderDevice::CreatePipelineState() method:

// Define shader and primitive topology
PSODesc.GraphicsPipeline.PrimitiveTopologyType = PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
PSODesc.GraphicsPipeline.pVS = pVertexShader;
PSODesc.GraphicsPipeline.pPS = pPixelShader;
PSODesc.Name = "My pipeline state";
m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

When a PSO object is bound to the pipeline, the engine invokes all API-specific commands to set all states specified by the object. In the case of Direct3D12 this maps directly to setting the D3D12 PSO object. In the case of Direct3D11, this involves setting individual state objects (such as rasterizer and blend states), shaders, input layout, etc.
In the case of OpenGL, this requires a number of fine-grain state tweaking calls. Diligent Engine keeps track of the currently bound states and only calls functions to update those states that have actually changed.

Binding Shader Resources

Direct3D11 and OpenGL utilize fine-grain resource binding models, where an application binds individual buffers and textures to certain shader or program resource binding slots. Direct3D12 uses a very different approach, where resource descriptors are grouped into tables, and an application can bind all resources in a table at once by setting the table in the command list. The resource binding model in Diligent Engine is designed to leverage this new method. It introduces a new object called shader resource binding that encapsulates all resource bindings required for all shaders in a certain pipeline state. It also introduces a classification of shader variables based on the frequency of expected change, which helps the engine group them into tables under the hood:

Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attribute or global light attribute constant buffers.
Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change with per-material frequency. Examples may include diffuse textures, normal maps, etc.
Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

The shader variable type must be specified during shader creation by populating an array of ShaderVariableDesc structures and initializing the ShaderCreationAttribs::Desc::VariableDesc and ShaderCreationAttribs::Desc::NumVariables members (see the example of shader creation above). Static variables cannot be changed once a resource is bound to the variable. They are bound directly to the shader object. For instance, a shadow map texture is not expected to change after it is created, so it can be bound directly to the shader:

PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV );

Mutable and dynamic variables are bound via a new Shader Resource Binding object (SRB) that is created by the pipeline state (IPipelineState::CreateShaderResourceBinding()):

m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Note that an SRB is only compatible with the pipeline state it was created from. The SRB object inherits all static bindings from the shaders in the pipeline, but is not allowed to change them. Mutable resources can only be set once for every instance of a shader resource binding. Such resources are intended to define specific material properties. For instance, a diffuse texture for a specific material is not expected to change once the material is defined and can be set right after the SRB object has been created:

m_pSRB->GetVariable(SHADER_TYPE_PIXEL, "tex2DDiffuse")->Set(pDiffuseTexSRV);

In some cases it is necessary to bind a new resource to a variable every time a draw command is invoked. Such variables should be labeled as dynamic, which will allow setting them multiple times through the same SRB object:

m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

Under the hood, the engine pre-allocates descriptor tables for static and mutable resources when an SRB object is created. Space for dynamic resources is dynamically allocated at run time.
Static and mutable resources are thus more efficient and should be used whenever possible. As you can see, Diligent Engine does not expose the low-level details of how resources are bound to shader variables. One reason for this is that these details are very different for various APIs. The other reason is that using low-level binding methods is extremely error-prone: it is very easy to forget to bind some resource, or to bind an incorrect resource such as binding a buffer to a variable that is in fact a texture, especially during shader development when everything changes fast. Diligent Engine instead relies on a shader reflection system to automatically query the list of all shader variables. Grouping variables based on the three types mentioned above allows the engine to create an optimized layout and do the heavy lifting of matching resources to the API-specific resource location, register or descriptor in the table. This post gives more details about the resource binding model in Diligent Engine.

Setting the Pipeline State and Committing Shader Resources

Before any draw or compute command can be invoked, the pipeline state needs to be bound to the context:

m_pContext->SetPipelineState(m_pPSO);

Under the hood, the engine sets the internal PSO object in the command list or calls all the required native API functions to properly configure all pipeline stages. The next step is to bind all required shader resources to the GPU pipeline, which is accomplished by the IDeviceContext::CommitShaderResources() method:

m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

The method takes a pointer to the shader resource binding object and makes all resources the object holds available to the shaders. In the case of D3D12, this only requires setting the appropriate descriptor tables in the command list. For older APIs, this typically requires setting all resources individually. Next-generation APIs require the application to track the state of every resource and explicitly inform the system about all state transitions. For instance, if a texture was used as a render target before, while the next draw command is going to use it as a shader resource, a transition barrier needs to be executed. Diligent Engine does the heavy lifting of state tracking. When the CommitShaderResources() method is called with the COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES flag, the engine commits and transitions resources to the correct states at the same time. Note that transitioning resources does introduce some overhead. The engine tracks the state of every resource and will not issue a barrier if the state is already correct, but checking the resource state is an overhead that can sometimes be avoided. The engine provides the IDeviceContext::TransitionShaderResources() method that only transitions resources:

m_pContext->TransitionShaderResources(m_pPSO, m_pSRB);

In some scenarios it is more efficient to transition resources once and then only commit them.

Invoking Draw Command

The final step is to set states that are not part of the PSO, such as render targets and vertex and index buffers.
Diligent Engine uses a Direct3D11-style API that is translated to the other native API calls under the hood:

ITextureView *pRTVs[] = {m_pRTV};
m_pContext->SetRenderTargets(_countof( pRTVs ), pRTVs, m_pDSV);

// Clear render target and depth buffer
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);
m_pContext->ClearDepthStencil(nullptr, CLEAR_DEPTH_FLAG, 1.f);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);

Different native APIs use various sets of functions to execute draw commands depending on the command details (whether the command is indexed, instanced or both, what offsets in the source buffers are used, etc.). For instance, there are 5 draw commands in Direct3D11 and more than 9 commands in OpenGL, with something like glDrawElementsInstancedBaseVertexBaseInstance not uncommon. Diligent Engine hides all these details with a single IDeviceContext::Draw() method that takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example:

DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);

For compute commands, there is the IDeviceContext::DispatchCompute() method that takes a DispatchComputeAttribs structure that defines the compute grid dimensions.

Source Code

The full engine source code is available on GitHub and is free to use. The repository contains several samples, including an asteroids performance benchmark and an example Unity project that uses Diligent Engine in a native plugin. The AntTweakBar sample is Diligent Engine’s “Hello World” example. The atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to multiple render targets, using compute shaders and unordered access views, etc. The asteroids performance benchmark is based on this demo developed by Intel. It renders 50,000 unique textured asteroids and allows comparing the performance of the Direct3D11 and Direct3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures. Finally, there is an example project that shows how Diligent Engine can be integrated with Unity.

Future Work

The engine is under active development. It currently supports the Windows desktop, Universal Windows and Android platforms. The Direct3D11, Direct3D12 and OpenGL/GLES backends are now feature-complete. The Vulkan backend is coming next, and support for more platforms is planned.
  30. 3 points
    Hi all, I'm an indie app and game developer in my spare time, mainly focused on iOS for the time being... I've recently published a few simple 2D iOS games on the Apple App Store, which can be found on my website http://techchee.com . I would love to learn from peers how to promote or market mobile games, or how to develop more sophisticated ones 😄
  31. 3 points
    How does one start any project? By having some sort of idea about what they are going to do. In the world of game development, this is often some sort of game design document. This is no different, and requires just as much planning and preparation. As mentioned in the blog description, this project is going to be using WebGL and JavaScript as its underlying systems. I'm sure this will diminish this project in some people's eyes, but to me the systems required for building a game on the web are exactly the same as those for a desktop, or any other platform. So what features is this game engine going to support?

Physics Engine
- Line -> Line collision
- Force = mass * acceleration (Newtonian physics)
- Stretch goal: Line -> Bezier curve collision

Rendering Engine
- Ability to render textured meshes
- Stretch goal: Particle system
- Stretch goal: Trail system

Input Handling
- Keyboard input

Other Systems
- Asset exporter from Blender

Misc
- Any non-game systems will be written in Python (e.g. build scripts)
- The entire project aims to be < 1 MB total size

Obviously there has to be some sort of game to test this engine. So this whole 'engine' is going to be based around a single simple game: you're going to fly a 2D spaceship around a course. You'll use the arrow keys to steer, and you'll collide with the level geometry. Yup, it's a primitive game - but I think that means it should be achievable. The graphics style will be simple, as it will not be the focus of the project. The focus of the project is on me learning the underlying technologies required for making a game. You can see some concept art below:
  32. 3 points
    None of this makes any sense.
  33. 3 points
    The core idea is to have 2 layers: one layer is a grid of small rectangles (conceptually covering the entire map, but you likely don't want to fill the entire grid with a single colour of rectangles), and one layer handles the precise shape of a province. Starting with the first layer, if you cut out a small rectangle from a solidly coloured area, you can see you would get a solidly covered area spanning the entire map if you fill the entire grid of rectangles, right? You can do the same with the countries with diagonal lines. The height has to be exactly right, so the lines properly connect if you put two rectangles under each other. Again, filling the entire grid would then give you an entire map filled with diagonal lines. This is how you make a (rectangular-ish) area of some colour of arbitrary size, using just a single small rectangle. As for the second layer, instead of the entire map, just fill enough of the grid such that the country with the colour is completely covered. For simplicity, make it a normal rectangle covering a subset of the grid, just enough to cover the country completely. Each country is then a rectangular image of N x M grid cells, starting at the bottom left at some grid coordinate. To get the real province shape, you need a mask that only shows the pixels in the above rectangle that are part of the country. It leaves the pixels outside the country unharmed, so they can be used for other countries. A mask is a transparent/opaque picture that is opaque for pixels in the country and fully transparent for pixels outside the country. (In its simplest form, an opaque/transparent pixel can be a single bit, so this picture is really small in size.) You apply the mask to the rectangle's colour, so it becomes opaque colour inside the country, and invisible/transparent outside the country. To a person, it looks like you only coloured the country, and nothing outside it (since the person cannot see the transparent pixels). The simpler but less flexible approach is to combine the rectangles and the mask into a single image. That is, for each country you have a picture of that country, which is fully transparent outside the country. This is less flexible if you need to give a country different colours, e.g. for highlighting or for hiding. In the simpler approach you must make a new image for each country for each colour; in the two-layer system, you only need to add a few more small rectangles.
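To make the mask idea concrete, here is a minimal CPU-side sketch (the names and simple array types are just illustrative; in a real renderer you would do this with an alpha-masked sprite or in a shader). It fills an N x M pixel rectangle with the province colour, but only where the 1-bit mask marks the pixel as belonging to the province:

// Fills 'target' (the map's colour layer) with 'provinceColor' inside the
// rectangle starting at (originX, originY), but only where the mask bit is set.
// mask[y * width + x] == 1 means "this pixel is inside the province".
static void PaintProvince(uint[] target, int mapWidth,
                          byte[] mask, int width, int height,
                          int originX, int originY, uint provinceColor)
{
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            if (mask[y * width + x] == 0)
                continue;  // transparent: leave this pixel for other provinces

            target[(originY + y) * mapWidth + (originX + x)] = provinceColor;
        }
    }
}

Recolouring a province for highlighting then only means calling this again with a different colour; the mask itself never changes.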
  34. 3 points
    Same problem, honestly. The issues are whether it's a) recognizable (will you pop up on the original IP owner's radar); and b) likely to cause consumer confusion (if you actually want to fight it when they send you the C&D). If it's highly recognizable you're more likely to get the threat of legal action. The more invisible your product, generally speaking (this is by no means absolute), the less likely you are to face legal action. The more successful your product, the higher the likelihood of legal action if any of your IP touches a big pocket's IP.
  35. 3 points
    In UE4 we use 2 raytraces: a forward vector raytrace to see how far the cliff is from us, and a second to see how far we are from the top. To save on memory we only do this for certain objects. We also had to enable overlap events to check for certain edge cases. From there we verify values and tweak until it looks perfect. There are tutorials online about it; however, we loosely followed those and kinda winged most of it after the basics.
  36. 3 points
    Nice work. I like all the different kinds of turrets you can use. And don't feel bad about dirty code. In a game jam/challenge situation, it's not about academically correct code. It's about getting shit done and out the door. That said, it's usually a good exercise to go back and try to refactor the code into something more "correct". Developing that skill can help you go far in development. Make it work (hacky). Make it right (clean). Make it fast (optimize if necessary).
  37. 3 points
    I call this "The Betty White Effect". In her later years Betty White did a few roles where she played an evil character. The fact that she seemed so over-the-top happy and nice just made her seem even more evil and she did very well in those roles, which you wouldn't expect her too. The opposite look and feel of what you are going for can sometimes actually enhance the effect, rather than detract from it as you would expect it too.
  38. 3 points
    I liked TLJ. Saw it twice, including premiere night. I know it's a flawed film and that many parts of the story could have been done differently. Regardless, I like it. Daisy is amazing and I like how Rey's character has grown. I can explain my thoughts if you'd like. As for Mark's recent opinions, I care less, though admittedly a little of it is understandable. His acting was top notch and I think the way Luke went out was great.
  39. 3 points
    This is going to sound really snarky, but one of my biggest problems with game development is tutorials written by people who've only been doing game development for a year or two. They often tend to recommend bad solutions to problems, because although the author has shown sufficient tenacity to overcome common hurdles, they usually do not yet understand the consequences of choosing the approaches that they did, and will recommend their approaches with a degree of overconfidence. I think the best thing someone in your position can do to help people is not to write articles or tutorials, but hang out on forums and answer questions.
  40. 3 points
    A few days have passed since I uploaded the game. I did no advertising whatsoever apart from the two blog entries I had here. So I've been watching the dashboard closely to see if somebody actually looked at and installed the game. And a handful (literally) did. Joy! I even got some feedback, which can all be conveniently inspected in the dashboard. Unfortunately I've also gotten crash reports. In one case I received a stack trace. I have a hunch what might be the cause (I've called QueryInterface without checking the HRESULT. I should've known better!) Currently I'm on vacation and only have access to a Windows 7 laptop. And unfortunately, on it Visual Studio 2015 prefers to crash when I try to create new packages. I'm pretty happy I can actually compile, though, and the manual package creation via the command line does work fine. Go figure. Downside: I obviously can't test or debug the compiled game at all. So I have to wait before uploading a fix to the store. In the meantime I got access to a Windows mobile phone (which is actually pretty good IMHO). Just for kicks I deployed the game and it worked perfectly - if you disregard the lack of keyboard entry and back button support. Over the last few days I've made a few adjustments to the code, so now the back button works, and overlay touch controls are also available (for phone only). I have also tried to add tilt support (both via Gyro and OrientationSensor - the former is not supported by that phone, and the latter seems to lag quite a bit). Once I manage to get keyboard support I also have to try to make the game GUI somewhat more phone friendly (a tree control isn't really all that fit for big fingers). Overall I'm pretty satisfied with how well the same app performs as pure Win32, as a UWP app on desktop, and on phone.
  41. 3 points
    We don't know. Is your game good? Fun to play? Does it look good? Does it look fun? How are you marketing it? Just being on Steam doesn't guarantee you any sales; it depends on your game and what you do to sell it. You will likely get at least a few sales just for being there, but there's really no guarantee.
  42. 3 points
    https://msdn.microsoft.com/en-us/library/windows/desktop/dn899209(v=vs.85).aspx edit - IIRC you can't put a texture SRV in the root.
  43. 3 points
    I noticed some other slowness and made some adjustments to the server. Managed to drop CPU utilization by about 50%, along with reductions in bandwidth and disk usage - long story short, things appear to be much more responsive across the board.
  44. 3 points
    You should be. Each person owns their own individual contribution. In a joint work each person becomes a co-owner of the work, and once merged they are generally considered inseparable. Even if you revert their changes, that person may still be able to demand payments or block various uses. If you cannot get all the co-owners to agree, for example when licensing the product to another group or publishing the game, then the project can be legally tainted. All it takes is one disgruntled person, or one person who seems to have vanished from the earth, and your project enters a bad state. That is exactly why they become a joint owner of the work and it is usually considered inseparable. Get with a lawyer to help you make a collaboration agreement, contracting agreement, rights assignment, or various other contract. Your lawyer can tell you the difference, and you'll need forms for each person on the project. For the person that left, tell them there are no hard feelings, tell them that since the project needs to go on you need to make sure you still have legal rights to use what they contributed, tell them they can still claim credit for whatever they want, and perhaps even give them twenty bucks (which the rights assignment form will call "valuable consideration") in exchange for their signature. Everyone else on the project should sign an agreement as well. They'll probably get collaboration agreements since they are still contributing on the team.
  45. 3 points
    I'm very sorry, I forgot that this was in For Beginners. I thought this was in the General/Gameplay forums, where people are generally assumed to have more knowledge and experience. When you wrote lines like "the terrain we're aiming for", "We'd have total freedom", and other "we" statements, it seemed you were working with an experienced group. As it is For Beginners, I'll try to keep it simple. Everyone starts somewhere. Going through each point - apologies if something feels too basic or too advanced, I'm not sure what skill level you are at. Starting with a mesh for the terrain, you can begin with whatever polygonal mesh you want; usually people use triangles on a regular grid. With the shape you're describing you will probably not use a regular mesh. You can find many different data structures by searching for "triangular irregular mesh"; there are many different structures with their own pros and cons. It looks like you've already got the point about T-junctions. They tend to introduce small holes and gaps. These are easy to avoid and there are many different techniques. Subdividing a triangle has a few approaches, usually either a subdivision in the middle breaking it into three pieces, or dividing along any side of the triangle. Another approach is to turn an edge into multiple fans, but the math is a bit more complex. There are different reasons to split on either side versus splitting in the center, and some algorithms do both. For search terms try: triangle surface loop subdivision algorithms, triangle mid-edge subdivision algorithms, triangular mesh butterfly algorithms, and triangular mesh sqrt(3) algorithms. There are many others developed over the decades, but those should help explain how it is done and the reasons to pick each one. Some of the algorithms are extremely simple and produce good results quite rapidly, some produce fewer polygons, some produce denser polygons useful for sub-pixel effects. I know the image I posted used a regular grid, and I'm wondering if it would have been better not to post that. Again, you can use any arbitrary slice. I've seen tools that approximate a curve by using a triangle fan with many narrow steps. They step along the curve to find points along it within the triangle, then use those points to build the dense fan, then run a check to correct for T-junctions. You ask how far to subdivide, and that one really is up to you. Except for straight parametric lines, a parametric curve can never be perfectly represented in polygons. If you split on an irregular pattern you can approximate the curve as closely as you'd like. I noticed the Ear Clipping routine you posted doesn't work with a parametric curve but with known polygon points. In theory you could continue subdividing a curve down to multiple vertices per pixel. There are algorithms that will do that, called subdivision surfaces, useful when looking at sub-pixel accurate effects. They're rarely used to the sub-pixel level in games except in rare cases with lighting shaders and such, but they are common in movies for various smooth surface effects. Consider a sphere as a good example: a sphere may be broken down into 20 points, 50 points, even 10000 points, but current rendering technology with pixels doesn't render a sphere as a smooth curve. You may use sub-pixel accuracy in order to get the pixel to have highly accurate coloration or anti-aliasing, but ultimately it is still a square pixel and not a curve.
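As a concrete illustration of the mid-edge idea, here is a minimal C# sketch (the Vec3 type and names are just for illustration) that splits one triangle into four by inserting a vertex at the midpoint of each edge:

using System.Collections.Generic;

public struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

public static class Subdivision
{
    static Vec3 Midpoint(Vec3 a, Vec3 b) =>
        new Vec3((a.X + b.X) * 0.5f, (a.Y + b.Y) * 0.5f, (a.Z + b.Z) * 0.5f);

    // Splits triangle (a, b, c) into four smaller triangles by inserting a
    // vertex at the midpoint of each edge. Neighbouring triangles must be
    // split the same way (or stitched to the new vertices), otherwise you
    // create the T-junctions discussed above.
    public static IEnumerable<(Vec3, Vec3, Vec3)> SubdivideMidEdge(Vec3 a, Vec3 b, Vec3 c)
    {
        Vec3 ab = Midpoint(a, b), bc = Midpoint(b, c), ca = Midpoint(c, a);
        yield return (a, ab, ca);
        yield return (ab, b, bc);
        yield return (ca, bc, c);
        yield return (ab, bc, ca);
    }
}

The other algorithms listed above differ mainly in where the new vertices go and how the new positions are smoothed, but they follow the same pattern of replacing one triangle with several.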
Once the polygons are simplified on the x/z or x/y plane (depending on what axes you are using), adding height is straightforward. Since you've already divided the points along the boundary, you duplicate the points along the boundary -- once for the upper level and once for the lower level -- and walk along the two edges with a triangle strip to make the walls. With that done, you've got any arbitrary curve as you described, with as much or as little detail as you would like. Regarding the navigation mesh, the question comes back to the curve and the navigation engine you are using. If you are working with a true parametric line then it might be easy or difficult to turn into a collision shape depending on the system. If your system supports parametric surfaces then use it directly. Otherwise, if the system requires line segments or polygons, you are back to the same issue of how much to subdivide. Turning a curve into line segments cannot realistically be done forever; at some point the line must be close enough. If you're working from an image then you can go up to the individual pixels of the image. Navigation meshes are typically very coarse, but you can make them as dense as you'd like. Dense grids tend to be more processing work for pathfinding, so consider another pass to remove unnecessary internal points if you're using a polygonal navigation grid. The comment about processing the terrain navigation as an image with a kernel describes the way many navigation meshes work. Many games use a simple 2D image and encode different areas with different colors or attributes. While it is easy to think of them as colors, the image could hold any information as bits. Perhaps one bit to indicate it can be walked on, another to indicate swimming, another to indicate climbing, another to indicate a non-jumping zone, another to indicate slow movement, whatever is needed. Sometimes they have multiple flags. Sometimes they're combined with other systems, like an audio system to indicate walking on wood or leaves or gravel. Navigating on a 2D image this way is easy to compute by mapping the character coordinates into image coordinates and checking the individual cell or the neighbors of the cell. I mention expanded margins around the barriers/walls because they are commonly used as a safety measure to prevent passing through a barrier. Performing continuous swept volume tests will detect if you would have passed through the wall, but those are fairly expensive mathematically. It is generally easier to make barriers thick enough that objects won't pass through due to numeric drift or big time steps. Regarding sliding along a wall, I'm not sure what the algorithm is named or if it even has a name. Basically, if you're against a wall or boundary you want to subtract the motion that would push you through the wall. Start by finding the normal of the boundary so you know which way is out. Multiply that direction by the dot product of your velocity and the direction; that will give you the amount of motion that is headed toward the wall. What remains is the part of the velocity headed in the right direction. In math terms: velocity -= wallNormal * dotProduct(velocity, wallNormal). If you want to preserve speed, which may not feel natural in your game, set the velocity magnitude equal to the original magnitude.
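In code, that projection looks roughly like this (a minimal sketch with an illustrative Vec2 type; wallNormal is assumed to be unit length and pointing away from the wall):

public struct Vec2
{
    public float X, Y;
    public Vec2(float x, float y) { X = x; Y = y; }

    public static Vec2 operator -(Vec2 a, Vec2 b) => new Vec2(a.X - b.X, a.Y - b.Y);
    public static Vec2 operator *(Vec2 v, float s) => new Vec2(v.X * s, v.Y * s);
    public static float Dot(Vec2 a, Vec2 b) => a.X * b.X + a.Y * b.Y;
}

public static class WallSlide
{
    // Removes the component of velocity that points into the wall.
    // wallNormal must be normalized and point away from the wall surface.
    public static Vec2 Slide(Vec2 velocity, Vec2 wallNormal)
    {
        float intoWall = Vec2.Dot(velocity, wallNormal);
        if (intoWall >= 0f)
            return velocity;                      // already moving away from the wall

        return velocity - wallNormal * intoWall;  // keep only the tangential part
    }
}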
You may also want to do it in two steps: one phase moves directly to the contact point and subtracts that distance from your remaining movement; the second phase uses the remaining portion to slide along the wall. Again, I'm very sorry for not paying attention to the fact that this was in For Beginners. Usually there is someone (often moderators like myself) who reminds people to stay on topic in For Beginners and keep answers directed at the question at hand. Better?
  46. 3 points
    I thought the Last Jedi was a good movie at 2h 40m but it would have been a great movie at 2h. Basically, there was a whole bunch of stuff that just didn't need to be there, and added nothing to the film. It really dragged out the middle act. But the third act was fantastic. I loved that it didn't go the way people were expecting. I've heard this criticism again and again... and it really doesn't have any merit. a) who said she didn't tell anyone? She just didn't tell the guy with a history of disobeying orders. b) "why does it have to be a secret?" Call me crazy, but if I was planning a sneaky manoeuvre that relied on the empire not finding out, I'd keep it a secret too. Not announcing your secret plan to your entire military is just common sense. That said, that whole sequence could have been much shorter and the movie would have benefitted. One thing that was really bad was
  47. 3 points
    Here goes my entry! Right now it's more or less a straight Missile Command clone. I couldn't help but use the trophy image as the in-game missile. By default, use the mouse and keys 1, 2, 3 to fire the cannons. Protect and survive! Download from http://www.georg-rottensteiner.de/files/GDMissileCommand.zip Edit: Updated the archive with source code (only the main game code, not the overall library code, sorry!)
  48. 3 points
    My experience with GUIs is that they have a whole lot of state which can be preserved between updates. If you resize your window, the layout needs to change. If you move your mouse over a button, there might be a mouse-over highlight. But if you're not constantly resizing the window, you don't need to redo the layout. If you're not constantly moving the mouse, you don't need to collision check every button or other clickable element. Keeping the UI state stored somewhere between updates so that it can be reused is called "retained mode". Usually there is no choice: immediate mode is typically built on top of retained mode, and the code to actually render the GUI is inevitably draw calls. You can implement a pure immediate mode render-only GUI by just making draw calls, but if you discard layout and input state you end up having to redo all of your layout computations every time you want to render. The issue I've experienced with immediate mode is that the additional wrapper on top of retained mode is inefficient. If the immediate mode API doesn't include control identifiers, the wrapper layer has to spend time looking these up on each call. Retained mode GUIs have SEPARATE functions which update just what is actually needed at the moment (layout, mouse rect tests, keyboard input going directly to whatever control has focus, etc). And those functions are only called when a change occurs. Immediate mode GUIs often jam literally everything into one function hierarchy representing the GUI hierarchy, and each individual control's function switches on what is happening (layout, input handling) and adjusts its internal retained mode state. The problem is that these APIs typically traverse the entire call hierarchy even if you don't need to, because they use the call order to determine which retained mode control ID is being referenced by the immediate mode function. If you skip calling a "Button" function in one case, your entire GUI stops working. Particularly egregious examples of immediate mode GUI are Unity's OnGUI method and its GUI and GUILayout classes. Unity basically calls your OnGUI method once per "event" (layout, paint, input handling, that kind of thing). Since much of the code in your OnGUI method may only be relevant to one or two of the event types, if you do anything other than GUI.* calls you're wasting time calculating things that will be discarded. Trying to do anything complicated with them can very quickly wreck performance. The hoops you can optionally jump through to increase performance (by avoiding irrelevant calculations in specific 'event' phases) are like writing a retained-mode interface wrapper on top of that: unnecessary additional layers of abstraction when ideally you could just control the retained mode GUI under the hood. I haven't personally used any of the web-specific UI frameworks since I avoid web development like the plague, but my understanding is: the HTML elements are a 'retained mode' equivalent. The browser handles their layout and rendering. Why anyone would want to build immediate mode on top of that is beyond me.
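To illustrate the "immediate mode wrapped around retained state" point, here is a minimal hypothetical C# sketch (not any real library's API): each Button call looks up a persistent per-control record by an explicit ID, so hover and layout state survive between frames, and that per-call lookup is part of the wrapper overhead described above.

using System.Collections.Generic;

// Persistent (retained) per-control state, kept between frames.
class ControlState
{
    public float X, Y, Width, Height;  // cached layout
    public bool Hovered;               // cached hit-test result
}

class ImGuiSketch
{
    readonly Dictionary<int, ControlState> controls = new Dictionary<int, ControlState>();
    float mouseX, mouseY;
    bool mouseClicked;

    public void BeginFrame(float mx, float my, bool clicked)
    {
        mouseX = mx; mouseY = my; mouseClicked = clicked;
    }

    // Immediate-mode call: declared every frame, but backed by retained state.
    public bool Button(int id, string label, float x, float y, float w, float h)
    {
        // Look up (or create) the retained record for this ID.
        // This per-call lookup is exactly the wrapper overhead described above.
        if (!controls.TryGetValue(id, out var state))
            controls[id] = state = new ControlState();

        // Cache the layout; a smarter wrapper would only recompute dependent
        // layout when these values actually change.
        state.X = x; state.Y = y; state.Width = w; state.Height = h;

        // Hit test against the cached rectangle.
        state.Hovered = mouseX >= x && mouseX <= x + w &&
                        mouseY >= y && mouseY <= y + h;

        DrawRect(state.X, state.Y, state.Width, state.Height, state.Hovered);
        DrawText(label, x, y);

        return state.Hovered && mouseClicked;  // "was it clicked this frame?"
    }

    void DrawRect(float x, float y, float w, float h, bool highlight) { /* renderer-specific draw call */ }
    void DrawText(string text, float x, float y) { /* renderer-specific draw call */ }
}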
  49. 3 points
    @LanceJZ Nice. Windows / Mac : MK_LightningCommand.zip Fairly complete this go around.
  50. 3 points
What exactly is an Octree? If you're completely unfamiliar with them, I recommend reading the Wikipedia article (read time: ~5 minutes). It's a sufficient description of what an octree is, but it's barely enough to give any idea of what it's used for or how to actually implement one. In this article, I will do my best to take you through the steps necessary to create an octree data structure through conceptual explanations, pictures, and code, and show you the considerations to be made at each step along the way. I don't expect this article to be the authoritative way to do octrees, but it should give you a really good start and act as a good reference.

Assumptions

Before we dive in, I'm going to be making a few assumptions about you as a reader:
- You are very comfortable with programming in a C-syntax-style language (I will be using C# with XNA).
- You have programmed some sort of tree-like data structure in the past, such as a binary search tree, and are familiar with recursion and its strengths and pitfalls.
- You know how to do collision detection with bounding rectangles, bounding spheres, and bounding frustums.
- You have a good grasp of common data structures (arrays, lists, etc.) and understand Big-O notation (you can also learn about Big-O in this GDnet article).
- You have a development environment and a project which contains spatial objects that need collision tests.

Setting the stage

Let's suppose that we are building a very large game world which can contain thousands of physical objects of various types, shapes and sizes, some of which must collide with each other. Each frame we need to find out which objects are intersecting with each other and have some way to handle that intersection. How do we do it without killing performance?

Brute force collision detection

The simplest method is to just compare each object against every other object in the world. Typically, you can do this with two for loops. The code would look something like this:

foreach (gameObject myObject in ObjList)
{
    foreach (gameObject otherObject in ObjList)
    {
        if (myObject == otherObject) continue; //avoid self collision check

        if (myObject.CollidesWith(otherObject))
        {
            //code to handle the collision
        }
    }
}

Conceptually, this is what we're doing in our picture: each red line is an expensive CPU test for intersection. Naturally, you should feel horrified by this code because it is going to run in O(N^2) time. If you have 10,000 objects, then you're going to be doing 100,000,000 (one hundred million) collision checks. I don't care how fast your CPU is or how well you've tuned your math code, this code would reduce your computer to a sluggish crawl. If you're running your game at 60 frames per second, you're looking at 60 * 100 million calculations per second! It's nuts. It's insane. It's crazy. Let's not do this if we can avoid it, at least not with a large set of objects. This would only be acceptable if we're only checking, say, 10 items against each other (100 checks is palatable). If you know in advance that your game is only going to have a very small number of objects (e.g., asteroids), you can probably get away with using this brute force method for collision detection and ignore octrees altogether. If/when you start noticing performance problems due to too many collision checks per frame, consider some simple targeted optimizations:

1. How much computation does your current collision routine take? Do you have a square root hidden away in there (i.e., a distance check)?
Are you doing a granular collision check (pixel vs pixel, triangle vs triangle, etc.)? One common technique is to perform a rough, coarse check for collision before performing the granular check. You can give your objects an enclosing bounding rectangle or bounding sphere and test for intersection with these before testing against a granular check which may involve a lot more math and computation time. Use a "distance squared" check for comparing distance between objects to avoid using the square root method. Square root calculation typically uses Newton's method of approximation and can be computationally expensive.

2. Can you get away with calculating fewer collision checks? If your game runs at 60 frames per second, could you skip a few frames? If you know certain objects behave deterministically, can you "solve" for when they will collide ahead of time (e.g., pool ball vs. side of pool table)? Can you reduce the number of objects which need to be checked for collisions? A technique for this would be to separate objects into several lists. One list could be your "stationary" objects; they never have to be tested for collision against each other. The other list could be your "moving" objects, which need to be tested against all other moving objects and against all stationary objects. This could reduce the number of necessary collision tests enough to reach an acceptable performance level.

3. Can you get away with removing some object collision tests when performance becomes an issue? For example, a smoke particle could interact with a surface object and follow its contours to create a nice aesthetic effect, but it wouldn't break game play if you hit a predefined limit for collision checks and decided to skip collision tests for smoke particles. Ignoring essential game object movement would certainly break game play, though (e.g., player bullets no longer intersecting with monsters). So, perhaps maintaining a priority list of collision checks to compute would help. First you handle the high priority collision tests, and if you're not at your threshold, you can handle lower priority collision tests. When the threshold is reached, you dump the rest of the items in the priority list or defer them for testing at a later time.

4. Can you use a faster but still simplistic method for collision detection to get away from an O(N^2) runtime? If you eliminate the objects you've already checked for collisions against, you can reduce the number of tests to N(N+1)/2, which is much faster and still easy to implement (technically, it's still O(N^2)).

In terms of software engineering, you may end up spending more time than it's worth fine-tuning a bad algorithm & data structure choice to squeeze out a few more ounces of performance. The cost vs. benefit ratio becomes increasingly unfavorable, and it becomes time to choose a better data structure to handle collision detection. Spatial partitioning algorithms are the proverbial nuke for solving the runtime problem of collision detection. At a small upfront cost to performance, they'll reduce your collision detection tests to logarithmic runtime. The upfront costs of development time and CPU overhead are easily outweighed by the scalability benefits and performance gains.
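To make optimizations 1 and 4 concrete, here is a minimal sketch of a brute force pass that uses both the distance-squared coarse check and a pair-only loop. The GameObject type, its fields, and HandleCollision are purely illustrative stand-ins, not code from the attached sample:

using System.Collections.Generic;

//Hypothetical object type for illustration only.
public class GameObject
{
    public float X, Y, Z;
    public float Radius;
    public void HandleCollision(GameObject other) { /* game-specific response */ }
}

public static class BruteForceCollision
{
    public static void Check(List<GameObject> objects)
    {
        //Triangular loop: j starts at i + 1, so each pair is tested exactly once,
        //roughly halving the number of tests compared to the naive double foreach.
        for (int i = 0; i < objects.Count; i++)
        {
            for (int j = i + 1; j < objects.Count; j++)
            {
                GameObject a = objects[i], b = objects[j];

                //Distance-squared coarse check: compare squared distance against the
                //squared sum of radii so no square root is ever taken.
                float dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
                float distSq = dx * dx + dy * dy + dz * dz;
                float radii = a.Radius + b.Radius;
                if (distSq <= radii * radii)
                {
                    //Only now would you pay for a granular (triangle/pixel) test.
                    a.HandleCollision(b);
                    b.HandleCollision(a);
                }
            }
        }
    }
}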
Conceptual background on spatial partitioning

Let's take a step back and look at spatial partitioning and trees in general before diving into octrees. If we don't understand the conceptual idea, we have no hope of implementing it by sweating over code. Looking at the brute force implementation above, we're essentially taking every object in the game and comparing its position against every other object in the game to see if any are touching. All of these objects are contained spatially within our game world. Well, if we create an enclosing box around our game world and figure out which objects are contained within this enclosing box, then we've got a region of space with a list of the objects contained within it. In this case, it would contain every object in the game. We can notice that if we have an object in one corner of the world and another object way over on the other side, we don't really need to, or want to, calculate a collision check between them every frame. It'd be a waste of precious CPU time.

So, let's try something interesting! If we divide our world exactly in half, we can create three separate lists of objects. The first list, List A, contains all objects on the left half of the world. The second list, List B, contains objects on the right half of the world. Some objects may touch the dividing line such that they're partly on each side of the line, so we'll create a third list, List C, for these objects. We can notice that with each subdivision, we're spatially reducing the world in half and collecting a list of objects in that resulting half (a short sketch of this classification step follows below). We can elegantly create a binary search tree to contain these lists. Conceptually, this tree should look something like so, and in terms of pseudo code, the tree data structure would look something like this:

public class BinaryTree
{
    //This is a list of all of the objects contained within this node of the tree
    private List m_objectList;

    //These are pointers to the left and right child nodes in the tree
    private BinaryTree m_left, m_right;

    //This is a pointer to the parent object (for upward tree traversal).
    private BinaryTree m_parent;
}

We know that no object in List A will ever intersect with any object in List B, so we can almost eliminate half of the number of collision checks. We've still got the objects in List C which could touch objects in either List A or B, so we'll have to check all objects in List C against all objects in Lists A, B & C. If we continue to subdivide the world into smaller and smaller parts, we can further reduce the number of necessary collision checks by half each time. This is the general idea behind spatial partitioning, and there are many ways to subdivide a world into a tree-like data structure (BSP trees, quad trees, k-d trees, octrees, etc).

Now, by default, we're just assuming that the best division is a cut in half, right down the middle, since we're assuming that all of our objects will be somewhat uniformly distributed throughout the world. It's not a bad assumption to make, but some spatial division algorithms may decide to make a cut such that each side has an equal number of objects (a weighted cut) so that the resulting tree is more balanced. However, what happens if all of these objects move around? In order to maintain a nearly even division, you'd have to either shift the splitting plane or completely rebuild the tree each frame. It'd be a bit of a mess with a lot of complexity. So, for my implementation of a spatial partitioning tree I decided to cut right down the middle every time. As a result, some trees may end up being a bit more sparse than others, but that's okay -- it doesn't cost much.
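Here is that classification step as a minimal sketch. The WorldObject type and its MinX/MaxX extents are illustrative assumptions, not part of the attached sample code:

using System.Collections.Generic;

//Hypothetical object type for illustration only: we just need its extents on the split axis.
public class WorldObject
{
    public float MinX, MaxX;
}

public static class HalfSpaceSplit
{
    public static void Split(List<WorldObject> world, float splitX,
                             List<WorldObject> listA,   //entirely on the left of the dividing plane
                             List<WorldObject> listB,   //entirely on the right of the dividing plane
                             List<WorldObject> listC)   //straddling the dividing plane
    {
        foreach (WorldObject obj in world)
        {
            if (obj.MaxX < splitX) listA.Add(obj);
            else if (obj.MinX > splitX) listB.Add(obj);
            else listC.Add(obj);
        }
        //List A never needs to be tested against List B; List C still has to be tested against A, B and C.
    }
}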
To subdivide or not to subdivide? That is the question.

Let's assume that we have a somewhat sparse region with only a few objects. We could continue subdividing our space until we've found the smallest possible enclosing area for each object. But is that really necessary? Let's remember that the whole reason we're creating a tree is to reduce the number of collision checks we need to perform each frame -- not to create a perfectly enclosing region of space for every object. Here are the rules I use for deciding whether to subdivide or not:
- If we create a subdivision which only contains one object, we can stop subdividing even though we could keep dividing further. This rule will become an important part of the criteria for what defines a "leaf node" in our octree.
- The other important criterion is a minimum size for a region. If you have an extremely small object which is nanometers in size (or, god forbid, you have a bug and forgot to initialize an object size!), you're going to keep subdividing to the point where you potentially overflow your call stack. For my own implementation, I defined the smallest containing region to be a 1x1x1 cube. Any objects in this teeny cube will just have to be run through the O(N^2) brute force collision test (I don't anticipate many objects there anyway!).
- If a containing region doesn't contain any objects, we shouldn't include it in the tree.

We can take our subdivision by half one step further and divide the 2D world space into quadrants. The logic is essentially the same, but now we're testing for collision with four squares instead of two rectangles. We can continue subdividing each square until our rules for termination are met. The representation of the world space and corresponding data structure for a quad tree would look something like this: if the quad tree subdivision and data structure make sense, then an octree should be pretty straightforward as well. We're just adding a third dimension, using bounding cubes instead of bounding squares, and having eight possible child nodes instead of four. Some of you might wonder what should happen if you have a game world with non-cubic dimensions, say 200x300x400. You can still use an octree with cubic dimensions -- some child nodes will just end up empty if the game world doesn't have anything there. Obviously, you'll want to set the dimensions of your octree to at least the largest dimension of your game world.

Octree Construction

So, as you've read, an octree is a special type of subdividing tree commonly used for objects in 3D space (or anything with 3 dimensions). Our enclosing region is going to be a three dimensional rectangle (commonly a cube). We then apply our subdivision logic above and cut the enclosing region into eight smaller rectangles. If a game object completely fits within one of these subdivided regions, we push it down the tree into that node's containing region. We then recursively continue subdividing each resulting region until one of our breaking conditions is met. At the end, we should expect to have a nice tree-like data structure. My implementation of the octree can contain objects which have either a bounding sphere and/or a bounding rectangle. You'll see a lot of code I use to determine which is being used.
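As a taste of that, here's a small sketch of the kind of check you'll see over and over below: prefer the bounding box when the object has a real one (Min != Max), otherwise fall back to its bounding sphere. The FitsInside helper itself is mine, for illustration only; it simply assumes the article's Physical type exposes BoundingBox and BoundingSphere properties, as the later code does:

using Microsoft.Xna.Framework;

public static class ContainmentHelper
{
    public static bool FitsInside(BoundingBox region, Physical obj)
    {
        if (obj.BoundingBox.Min != obj.BoundingBox.Max)       //object carries a real bounding box
            return region.Contains(obj.BoundingBox) == ContainmentType.Contains;

        if (obj.BoundingSphere.Radius != 0)                   //fall back to its bounding sphere
            return region.Contains(obj.BoundingSphere) == ContainmentType.Contains;

        return false;   //no usable bounding volume (probably an uninitialized object)
    }
}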
In terms of our Octree class data structure, I decided to do the following for each tree: Each node has a bounding region which defines the enclosing region Each node has a reference to the parent node Contains an array of eight child nodes (use arrays for code simplicity and cache performance) Contains a list of objects contained within the current enclosing region I use a byte-sized bitmask for figuring out which child nodes are actively being used (the optimization benefits at the cost of additional complexity is somewhat debatable) I use a few static variables to indicate the state of the tree Here is the code for my Octree class outline: public class OctTree { BoundingBox m_region; List m_objects; /// /// These are items which we're waiting to insert into the data structure. /// We want to accrue as many objects in here as possible before we inject them into the tree. This is slightly more cache friendly. /// static Queue m_pendingInsertion = new Queue(); /// /// These are all of the possible child octants for this node in the tree. /// OctTree[] m_childNode = new OctTree[8]; /// /// This is a bitmask indicating which child nodes are actively being used. /// It adds slightly more complexity, but is faster for performance since there is only one comparison instead of 8. /// byte m_activeNodes = 0; /// /// The minumum size for enclosing region is a 1x1x1 cube. /// const int MIN_SIZE = 1; /// /// this is how many frames we'll wait before deleting an empty tree branch. Note that this is not a constant. The maximum lifespan doubles /// every time a node is reused, until it hits a hard coded constant of 64 /// int m_maxLifespan = 8; // int m_curLife = -1; //this is a countdown time showing how much time we have left to live /// /// A reference to the parent node is nice to have when we're trying to do a tree update. /// OctTree _parent; static bool m_treeReady = false; //the tree has a few objects which need to be inserted before it is complete static bool m_treeBuilt = false; //there is no pre-existing tree yet. } Initializing the enclosing region The first step in building an octree is to define the enclosing region for the entire tree. This will be the bounding box for the root node of the tree which initially contains all objects in the game world. Before we go about initializing this bounding volume, we have a few design decisions we need to make: 1. What should happen if an object moves outside of the bounding volume of the root node? Do we want to resize the entire octree so that all objects are enclosed? If we do, we'll have to completely rebuild the octree from scratch. If we don't, we'll need to have some way to either handle out of bounds objects, or ensure that objects never go out of bounds. 2. How do we want to create the enclosing region for our octree? Do we want to use a preset dimension, such as a 200x400x200 (X,Y,Z) rectangle? Or do we want to use a cubic dimension which is a power of 2? What should be the smallest allowable enclosing region which cannot be subdivided? Personally, I decided that I would use a cubic enclosing region with dimensions which are a power of 2, and sufficiently large to completely enclose my world. The smallest allowable cube is a 1x1x1 unit region. With this, I know that I can always cleanly subdivide my world and get integer numbers (even though the Vector3 uses floats). I also decided that my enclosing region would enclose the entire game world, so if an object leaves this region, it should be quietly destroyed. 
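One note before moving on: the BuildTree() code further down calls a FindEnclosingCube() helper that isn't shown in the article text. Under the power-of-two convention just described, such a helper might look something like the rough sketch below. This is my guess for illustration, not the attached code; it just rounds a region up to the next power-of-two cube anchored at the region's minimum corner:

using System;
using Microsoft.Xna.Framework;

public static class EnclosingCube
{
    public static BoundingBox FromRegion(BoundingBox region)
    {
        Vector3 size = region.Max - region.Min;
        float largest = Math.Max(size.X, Math.Max(size.Y, size.Z));

        //Round the largest dimension up to the next power of two (minimum 1x1x1).
        float cube = 1;
        while (cube < largest) cube *= 2;

        return new BoundingBox(region.Min, region.Min + new Vector3(cube));
    }
}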
At the smallest octant, I will have to run a brute force collision check against all other objects, but I don't realistically expect more than 3 objects to occupy that small of an area at a time, so the performance costs of O(N^2) are completely acceptable. So, I normally just initialize my octree with a constructor which takes a region size and a list of items to insert into the tree. I feel it's barely worth showing this part of the code since it's so elementary, but I'll include it for completeness. Here are my constructors: /*Note: we want to avoid allocating memory for as long as possible since there can be lots of nodes.*/ /// /// Creates an oct tree which encloses the given region and contains the provided objects. /// /// The bounding region for the oct tree. /// The list of objects contained within the bounding region private OctTree(BoundingBox region, List objList) { m_region = region; m_objects = objList; m_curLife = -1; } public OctTree() { m_objects = new List(); m_region = new BoundingBox(Vector3.Zero, Vector3.Zero); m_curLife = -1; } /// /// Creates an octTree with a suggestion for the bounding region containing the items. /// /// The suggested dimensions for the bounding region. /// Note: if items are outside this region, the region will be automatically resized. public OctTree(BoundingBox region) { m_region = region; m_objects = new List(); m_curLife = -1; } Building an initial octree I'm a big fan of lazy initialization. I try to avoid allocating memory or doing work until I absolutely have to. In the case of my octree, I avoid building the data structure as long as possible. We'll accept a user's request to insert an object into the data structure, but we don't actually have to build the tree until someone runs a query against it. What does this do for us? Well, let's assume that the process of constructing and traversing our tree is somewhat computationally expensive. If a user wants to give us 1,000 objects to insert into the tree, does it make sense to recompute every subsequent enclosing area a thousand times? Or, can we save some time and do a bulk blast? I created a "pending" queue of items and a few flags to indicate the build state of the tree. All of the inserted items get put into the pending queue and when a query is made, those pending requests get flushed and injected into the tree. This is especially handy during a game loading sequence since you'll most likely be inserting thousands of objects at once. After the game world has been loaded, the number of objects injected into the tree is orders of magnitude fewer. My lazy initialization routine is contained within my UpdateTree() method. It checks to see if the tree has been built, and builds the data structure if it doesn't exist and has pending objects. /// /// Processes all pending insertions by inserting them into the tree. /// /// Consider deprecating this? private void UpdateTree() //complete & tested { if (!m_treeBuilt) { while (m_pendingInsertion.Count != 0) m_objects.Add(m_pendingInsertion.Dequeue()); BuildTree(); } else { while (m_pendingInsertion.Count != 0) Insert(m_pendingInsertion.Dequeue()); } m_treeReady = true; } As for building the tree itself, this can be done recursively. So for each recursive iteration, I start off with a list of objects contained within the bounding region. I check my termination rules, and if we pass, we create eight subdivided bounding areas which are perfectly contained within our enclosed region. 
Then, I go through every object in my given list and test to see if any of them will fit perfectly within any of my octants. If they do fit, I insert them into a corresponding list for that octant. At the very end, I check the counts on my corresponding octant lists and create new octrees and attach them to our current node, and mark my bitmask to indicate that those child octants are actively being used. All of the left over objects have been pushed down to us from our parent, but can't be pushed down to any children, so logically, this must be the smallest octant which can contain the object. /// /// Naively builds an oct tree from scratch. /// private void BuildTree() //complete & tested { //terminate the recursion if we're a leaf node if (m_objects.Count <= 1) return; Vector3 dimensions = m_region.Max - m_region.Min; if (dimensions == Vector3.Zero) { FindEnclosingCube(); dimensions = m_region.Max - m_region.Min; } //Check to see if the dimensions of the box are greater than the minimum dimensions if (dimensions.X <= MIN_SIZE && dimensions.Y <= MIN_SIZE && dimensions.Z <= MIN_SIZE) { return; } Vector3 half = dimensions / 2.0f; Vector3 center = m_region.Min + half; //Create subdivided regions for each octant BoundingBox[] octant = new BoundingBox[8]; octant[0] = new BoundingBox(m_region.Min, center); octant[1] = new BoundingBox(new Vector3(center.X, m_region.Min.Y, m_region.Min.Z), new Vector3(m_region.Max.X, center.Y, center.Z)); octant[2] = new BoundingBox(new Vector3(center.X, m_region.Min.Y, center.Z), new Vector3(m_region.Max.X, center.Y, m_region.Max.Z)); octant[3] = new BoundingBox(new Vector3(m_region.Min.X, m_region.Min.Y, center.Z), new Vector3(center.X, center.Y, m_region.Max.Z)); octant[4] = new BoundingBox(new Vector3(m_region.Min.X, center.Y, m_region.Min.Z), new Vector3(center.X, m_region.Max.Y, center.Z)); octant[5] = new BoundingBox(new Vector3(center.X, center.Y, m_region.Min.Z), new Vector3(m_region.Max.X, m_region.Max.Y, center.Z)); octant[6] = new BoundingBox(center, m_region.Max); octant[7] = new BoundingBox(new Vector3(m_region.Min.X, center.Y, center.Z), new Vector3(center.X, m_region.Max.Y, m_region.Max.Z)); //This will contain all of our objects which fit within each respective octant. List[] octList = new List[8]; for (int i = 0; i < 8; i++) octList = new List(); //this list contains all of the objects which got moved down the tree and can be delisted from this node. List delist = new List(); foreach (Physical obj in m_objects) { if (obj.BoundingBox.Min != obj.BoundingBox.Max) { for (int a = 0; a < 8; a++) { if (octant[a].Contains(obj.BoundingBox) == ContainmentType.Contains) { octList[a].Add(obj); delist.Add(obj); break; } } } else if (obj.BoundingSphere.Radius != 0) { for (int a = 0; a < 8; a++) { if (octant[a].Contains(obj.BoundingSphere) == ContainmentType.Contains) { octList[a].Add(obj); delist.Add(obj); break; } } } } //delist every moved object from this node. 
foreach (Physical obj in delist) m_objects.Remove(obj); //Create child nodes where there are items contained in the bounding region for (int a = 0; a < 8; a++) { if (octList[a].Count != 0) { m_childNode[a] = CreateNode(octant[a], octList[a]); m_activeNodes |= (byte)(1 << a); m_childNode[a].BuildTree(); } } m_treeBuilt = true; m_treeReady = true; } private OctTree CreateNode(BoundingBox region, List objList) //complete & tested { if (objList.Count == 0) return null; OctTree ret = new OctTree(region, objList); ret._parent = this; return ret; } private OctTree CreateNode(BoundingBox region, Physical Item) { List objList = new List(1); //sacrifice potential CPU time for a smaller memory footprint objList.Add(Item); OctTree ret = new OctTree(region, objList); ret._parent = this; return ret; } Updating a tree Let's imagine that our tree has a lot of moving objects in it. If any object moves, there is a good chance that the object has moved outside of its enclosing octant. How do we handle changes in object position while maintaining the integrity of our tree structure? Technique 1: Keep it super simple, trash & rebuild everything. Some implementations of an Octree will completely rebuild the entire tree every frame and discard the old one. This is super simple and it works, and if this is all you need, then prefer the simple technique. The general consensus is that the upfront CPU cost of rebuilding the tree every frame is much cheaper than running a brute force collision check, and programmer time is too valuable to be spent on an unnecessary optimization. For those of us who like challenges and to over-engineer things, the "trash & rebuild" technique comes with a few small problems: You're constantly allocating and deallocating memory each time you rebuild your tree. Allocating new memory comes with a small cost. If possible, you want to minimize the amount of memory being allocated and reallocated over time by reusing memory you've already got. Most of the tree is unchanging, so it's a waste of CPU time to rebuild the same branches over and over again. Technique 2: Keep the existing tree, update the changed branches I noticed that most branches of a tree don't need to be updated. They just contain stationary objects. Wouldn't it be nice if, instead of rebuilding the entire tree every frame, we just updated the parts of the tree which needed an update? This technique keeps the existing tree and updates only the branches which had an object which moved. It's a bit more complex to implement, but it's a lot more fun too, so let's really get into that! During my first attempt at this, I mistakenly thought that an object in a child node could only go up or down one traversal of the tree. This is wrong. If an object in a child node reaches the edge of that node, and that edge also happens to be an edge for the enclosing parent node, then that object needs to be inserted above its parent, and possibly up even further. So, the bottom line is that we don't know how far up an object needs to be pushed up the tree. Just as well, an object can move such that it can be neatly enclosed in a child node, or that child's child node. We don't know how far down the tree we can go. Fortunately, since we include a reference to each node's parent, we can easily solve this problem recursively with minimal computation! The general idea behind the update algorithm is to first let all objects in the tree update themselves. Some may move or change in size. 
We want to get a list of every object which moved, so the object update method should return a boolean value indicating whether its bounding area changed. Once we've got a list of all of our moved objects, we want to start at our current node and try to traverse up the tree until we find a node which completely encloses the moved object (most of the time, the current node still encloses the object). If the object isn't completely enclosed by the current node, we keep moving it up to its next parent node. In the worst case, our root node is guaranteed to contain the object. After we've moved our object as far up the tree as possible, we'll try to move it as far down the tree as we can. Most of the time, if we moved the object up, we won't be able to move it back down. But if the object moved such that a child node of the current node could contain it, we have the chance to push it back down the tree. It's important to be able to move objects down the tree as well, or else all moving objects would eventually migrate to the top and we'd start getting performance problems during the collision detection routines.

Branch Removal

In some cases, an object will move out of a node and that node will no longer have any objects contained within it, nor any children which contain objects. If this happens, we have an empty branch and we need to mark it as such and prune this dead branch off the tree. There is an interesting question hiding here: when do you want to prune the dead branches off a tree? Allocating new memory costs time, so if we're just going to reuse this same region in a few cycles, why not keep it around for a bit? How long can we keep it around before it becomes more expensive to maintain the dead branch? I decided to give each of my nodes a countdown timer which activates when the branch is dead. If an object moves into this node's octant while the death timer is active, I double the lifespan and reset the death timer. This ensures that octants which are frequently used stay hot and stick around, while nodes which are infrequently used are removed before they start to cost more than they're worth. A practical example of this usefulness is a machine gun shooting a stream of bullets. Those bullets follow each other in close succession, so it'd be a shame to immediately delete a node as soon as the first bullet leaves it, only to recreate it a fraction of a second later as the second bullet re-enters it. And if there are a lot of bullets, we can probably keep these octants around for a little while. If a child branch is empty and hasn't been used in a while, it's safe to prune it out of our tree.

Anyways, let's look at the code which does all of this magic. First up, we have the Update() method. This is a method which is recursively called on all child trees. It moves all objects around, does some housekeeping work for the data structure, and then moves each moved object into its correct node (parent or child). public void Update(GameTime gameTime) { if (m_treeBuilt == true) { //Start a count down death timer for any leaf nodes which don't have objects or children. //when the timer reaches zero, we delete the leaf. If the node is reused before death, we double its lifespan.
//this gives us a "frequency" usage score and lets us avoid allocating and deallocating memory unnecessarily if (m_objects.Count == 0) { if (HasChildren == false) { if (m_curLife == -1) m_curLife = m_maxLifespan; else if (m_curLife > 0) { m_curLife--; } } } else { if (m_curLife != -1) { if(m_maxLifespan <= 64) m_maxLifespan *= 2; m_curLife = -1; } } List movedObjects = new List(m_objects.Count); //go through and update every object in the current tree node foreach (Physical gameObj in m_objects) { //we should figure out if an object actually moved so that we know whether we need to update this node in the tree. if (gameObj.Update(gameTime)) { movedObjects.Add(gameObj); } } //prune any dead objects from the tree. int listSize = m_objects.Count; for (int a = 0; a < listSize; a++) { if (!m_objects[a].Alive) { if (movedObjects.Contains(m_objects[a])) movedObjects.Remove(m_objects[a]); m_objects.RemoveAt(a--); listSize--; } } //recursively update any child nodes. for( int flags = m_activeNodes, index = 0; flags > 0; flags >>=1, index++) if ((flags & 1) == 1) m_childNode[index].Update(gameTime); //If an object moved, we can insert it into the parent and that will insert it into the correct tree node. //note that we have to do this last so that we don't accidentally update the same object more than once per frame. foreach (Physical movedObj in movedObjects) { OctTree current = this; //figure out how far up the tree we need to go to reinsert our moved object //we are either using a bounding rect or a bounding sphere //try to move the object into an enclosing parent node until we've got full containment if (movedObj.BoundingBox.Max != movedObj.BoundingBox.Min) { while (current.m_region.Contains(movedObj.BoundingBox) != ContainmentType.Contains) if (current._parent != null) current = current._parent; else break; //prevent infinite loops when we go out of bounds of the root node region } else { while (current.m_region.Contains(movedObj.BoundingSphere) != ContainmentType.Contains)//we must be using a bounding sphere, so check for its containment. if (current._parent != null) current = current._parent; else break; } //now, remove the object from the current node and insert it into the current containing node. m_objects.Remove(movedObj); current.Insert(movedObj); //this will try to insert the object as deep into the tree as we can go. } //prune out any dead branches in the tree for (int flags = m_activeNodes, index = 0; flags > 0; flags >>= 1, index++) if ((flags & 1) == 1 && m_childNode[index].m_curLife == 0) { m_childNode[index] = null; m_activeNodes ^= (byte)(1 << index); //remove the node from the active nodes flag list } //now that all objects have moved and they've been placed into their correct nodes in the octree, we can look for collisions. if (IsRoot == true) { //This will recursively gather up all collisions and create a list of them. //this is simply a matter of comparing all objects in the current root node with all objects in all child nodes. //note: we can assume that every collision will only be between objects which have moved. //note 2: An explosion can be centered on a point but grow in size over time. In this case, you'll have to override the update method for the explosion. 
List irList = GetIntersection(new List()); foreach (IntersectionRecord ir in irList) { if (ir.PhysicalObject != null) ir.PhysicalObject.HandleIntersection(ir); if (ir.OtherPhysicalObject != null) ir.OtherPhysicalObject.HandleIntersection(ir); } } } else { } } Note that we call an Insert() method for moved objects. The insertion of objects into the tree is very similar to the method used to build the initial tree. Insert() will try to push objects as far down the tree as possible. Notice that I also try to avoid creating new bounding areas if I can use an existing one from a child node. /// /// A tree has already been created, so we're going to try to insert an item into the tree without rebuilding the whole thing /// /// A physical object /// The physical object to insert into the tree private void Insert(T Item) where T : Physical { /*make sure we're not inserting an object any deeper into the tree than we have to. -if the current node is an empty leaf node, just insert and leave it.*/ if (m_objects.Count <= 1 && m_activeNodes == 0) { m_objects.Add(Item); return; } Vector3 dimensions = m_region.Max - m_region.Min; //Check to see if the dimensions of the box are greater than the minimum dimensions if (dimensions.X <= MIN_SIZE && dimensions.Y <= MIN_SIZE && dimensions.Z <= MIN_SIZE) { m_objects.Add(Item); return; } Vector3 half = dimensions / 2.0f; Vector3 center = m_region.Min + half; //Find or create subdivided regions for each octant in the current region BoundingBox[] childOctant = new BoundingBox[8]; childOctant[0] = (m_childNode[0] != null) ? m_childNode[0].m_region : new BoundingBox(m_region.Min, center); childOctant[1] = (m_childNode[1] != null) ? m_childNode[1].m_region : new BoundingBox(new Vector3(center.X, m_region.Min.Y, m_region.Min.Z), new Vector3(m_region.Max.X, center.Y, center.Z)); childOctant[2] = (m_childNode[2] != null) ? m_childNode[2].m_region : new BoundingBox(new Vector3(center.X, m_region.Min.Y, center.Z), new Vector3(m_region.Max.X, center.Y, m_region.Max.Z)); childOctant[3] = (m_childNode[3] != null) ? m_childNode[3].m_region : new BoundingBox(new Vector3(m_region.Min.X, m_region.Min.Y, center.Z), new Vector3(center.X, center.Y, m_region.Max.Z)); childOctant[4] = (m_childNode[4] != null) ? m_childNode[4].m_region : new BoundingBox(new Vector3(m_region.Min.X, center.Y, m_region.Min.Z), new Vector3(center.X, m_region.Max.Y, center.Z)); childOctant[5] = (m_childNode[5] != null) ? m_childNode[5].m_region : new BoundingBox(new Vector3(center.X, center.Y, m_region.Min.Z), new Vector3(m_region.Max.X, m_region.Max.Y, center.Z)); childOctant[6] = (m_childNode[6] != null) ? m_childNode[6].m_region : new BoundingBox(center, m_region.Max); childOctant[7] = (m_childNode[7] != null) ? m_childNode[7].m_region : new BoundingBox(new Vector3(m_region.Min.X, center.Y, center.Z), new Vector3(center.X, m_region.Max.Y, m_region.Max.Z)); //First, is the item completely contained within the root bounding box? //note2: I shouldn't actually have to compensate for this. If an object is out of our predefined bounds, then we have a problem/error. // Wrong. Our initial bounding box for the terrain is constricting its height to the highest peak. Flying units will be above that. // Fix: I resized the enclosing box to 256x256x256. This should be sufficient. if (Item.BoundingBox.Max != Item.BoundingBox.Min && m_region.Contains(Item.BoundingBox) == ContainmentType.Contains) { bool found = false; //we will try to place the object into a child node. 
If we can't fit it in a child node, then we insert it into the current node object list. for(int a=0;a<8;a++) { //is the object fully contained within a quadrant? if (childOctant[a].Contains(Item.BoundingBox) == ContainmentType.Contains) { if (m_childNode[a] != null) m_childNode[a].Insert(Item); //Add the item into that tree and let the child tree figure out what to do with it else { m_childNode[a] = CreateNode(childOctant[a], Item); //create a new tree node with the item m_activeNodes |= (byte)(1 << a); } found = true; } } if(!found) m_objects.Add(Item); } else if (Item.BoundingSphere.Radius != 0 && m_region.Contains(Item.BoundingSphere) == ContainmentType.Contains) { bool found = false; //we will try to place the object into a child node. If we can't fit it in a child node, then we insert it into the current node object list. for (int a = 0; a < 8; a++) { //is the object contained within a child quadrant? if (childOctant[a].Contains(Item.BoundingSphere) == ContainmentType.Contains) { if (m_childNode[a] != null) m_childNode[a].Insert(Item); //Add the item into that tree and let the child tree figure out what to do with it else { m_childNode[a] = CreateNode(childOctant[a], Item); //create a new tree node with the item m_activeNodes |= (byte)(1 << a); } found = true; } } if (!found) m_objects.Add(Item); } else { //either the item lies outside of the enclosed bounding box or it is intersecting it. Either way, we need to rebuild //the entire tree by enlarging the containing bounding box //BoundingBox enclosingArea = FindBox(); BuildTree(); } } Collision Detection Finally, our octree has been built and everything is as it should be. How do we perform collision detection against it? First, let's list out the different ways we want to look for collisions: Frustum intersections. We may have a frustum which intersects with a region of the world. We only want the objects which intersect with the given frustum. This is particularly useful for culling regions outside of the camera view space, and for figuring out what objects are within a mouse selection area. Ray intersections. We may want to shoot a directional ray from any given point and want to know either the nearest intersecting object, or get a list of all objects which intersect that ray (like a rail gun). This is very useful for mouse picking. If the user clicks on the screen, we want to draw a ray into the world and figure out what they clicked on. Bounding Box intersections. We want to know which objects in the world are intersecting a given bounding box. This is most useful for "box" shaped game objects (houses, cars, etc). Bounding Sphere Intersections. We want to know which objects are intersecting with a given bounding sphere. Most objects will probably be using a bounding sphere for coarse collision detection since the mathematics is computationally the least expensive and somewhat easy. The main idea behind recursive collision detection processing for an octree is that you start at the root/current node and test for intersection with all objects in that node against the intersector. Then, you do a bounding box intersection test against all active child nodes with the intersector. If a child node fails this intersection test, you can completely ignore the rest of that child's tree. If a child node passes the intersection test, you recursively traverse down the tree and repeat. Each node should pass a list of intersection records up to its caller, which appends those intersections to its own list of intersections. 
When the recursion finishes, the original caller will get a list of every intersection for the given intersector. The beauty of this is that it takes very little code to implement and performance is very fast. In a lot of these collisions, we're probably going to be getting a lot of results. We're also going to want to have some way of responding to each collision, depending on what objects are colliding. For example, a player hero should pick up a floating bonus item (quad damage!), but a rocket shouldn't explode if it hits said bonus item. I created a new class to contain information about each intersection. This class contains references to the intersecting objects, the point of intersection, the normal at the point of intersection, etc. These intersection records become quite useful when you pass them to an object and tell them to handle it. For completeness and clarity, here is my intersection record class: public class IntersectionRecord { Vector3 m_position; /// /// This is the exact point in 3D space which has an intersection. /// public Vector3 Position { get { return m_position; } } Vector3 m_normal; /// /// This is the normal of the surface at the point of intersection /// public Vector3 Normal { get { return m_normal; } } Ray m_ray; /// /// This is the ray which caused the intersection /// public Ray Ray { get { return m_ray; } } Physical m_intersectedObject1; /// /// This is the object which is being intersected /// public Physical PhysicalObject { get { return m_intersectedObject1; } set { m_intersectedObject1 = value; } } Physical m_intersectedObject2; /// /// This is the other object being intersected (may be null, as in the case of a ray-object intersection) /// public Physical OtherPhysicalObject { get { return m_intersectedObject2; } set { m_intersectedObject2 = value; } } /// /// this is a reference to the current node within the octree for where the collision occurred. In some cases, the collision handler /// will want to be able to spawn new objects and insert them into the tree. This node is a good starting place for inserting these objects /// since it is a very near approximation to where we want to be in the tree. /// OctTree m_treeNode; /// /// check the object identities between the two intersection records. If they match in either order, we have a duplicate. /// ///the other record to compare against /// true if the records are an intersection for the same pair of objects, false otherwise. public override bool Equals(object otherRecord) { IntersectionRecord o = (IntersectionRecord)otherRecord; // //return (m_intersectedObject1 != null && m_intersectedObject2 != null && m_intersectedObject1.ID == m_intersectedObject2.ID); if (otherRecord == null) return false; if (o.m_intersectedObject1.ID == m_intersectedObject1.ID && o.m_intersectedObject2.ID == m_intersectedObject2.ID) return true; if (o.m_intersectedObject1.ID == m_intersectedObject2.ID && o.m_intersectedObject2.ID == m_intersectedObject1.ID) return true; return false; } double m_distance; /// /// This is the distance from the ray to the intersection point. /// You'll usually want to use the nearest collision point if you get multiple intersections. 
/// public double Distance { get { return m_distance; } } private bool m_hasHit = false; public bool HasHit { get { return m_hasHit; } } public IntersectionRecord() { m_position = Vector3.Zero; m_normal = Vector3.Zero; m_ray = new Ray(); m_distance = float.MaxValue; m_intersectedObject1 = null; } public IntersectionRecord(Vector3 hitPos, Vector3 hitNormal, Ray ray, double distance) { m_position = hitPos; m_normal = hitNormal; m_ray = ray; m_distance = distance; // m_hitObject = hitGeom; m_hasHit = true; } /// /// Creates a new intersection record indicating whether there was a hit or not and the object which was hit. /// ///Optional: The object which was hit. Defaults to null. public IntersectionRecord(Physical hitObject = null) { m_hasHit = hitObject != null; m_intersectedObject1 = hitObject; m_position = Vector3.Zero; m_normal = Vector3.Zero; m_ray = new Ray(); m_distance = 0.0f; } } Intersection with a Bounding Frustum /// /// Gives you a list of all intersection records which intersect or are contained within the given frustum area /// ///The containing frustum to check for intersection/containment with /// A list of intersection records with collisions private List GetIntersection(BoundingFrustum frustum, Physical.PhysicalType type = Physical.PhysicalType.ALL) { if (m_objects.Count == 0 && HasChildren == false) //terminator for any recursion return null; List ret = new List(); //test each object in the list for intersection foreach (Physical obj in m_objects) { //skip any objects which don't meet our type criteria if ((int)((int)type & (int)obj.Type) == 0) continue; //test for intersection IntersectionRecord ir = obj.Intersects(frustum); if (ir != null) ret.Add(ir); } //test each object in the list for intersection for (int a = 0; a < 8; a++) { if (m_childNode[a] != null && (frustum.Contains(m_childNode[a].m_region) == ContainmentType.Intersects || frustum.Contains(m_childNode[a].m_region) == ContainmentType.Contains)) { List hitList = m_childNode[a].GetIntersection(frustum); if (hitList != null) { foreach (IntersectionRecord ir in hitList) ret.Add(ir); } } } return ret; } The bounding frustum intersection list can be used to only render objects which are visible to the current camera view. I use a scene database to figure out how to render all objects in the game world. 
Here is a snippet of code from my rendering function which uses the bounding frustum of the active camera: /// /// This renders every active object in the scene database /// /// public int Render() { int triangles = 0; //Renders all visible objects by iterating through the oct tree recursively and testing for intersection //with the current camera view frustum foreach (IntersectionRecord ir in m_octTree.AllIntersections(m_cameras[m_activeCamera].Frustum)) { ir.PhysicalObject.SetDirectionalLight(m_globalLight[0].Direction, m_globalLight[0].Color); ir.PhysicalObject.View = m_cameras[m_activeCamera].View; ir.PhysicalObject.Projection = m_cameras[m_activeCamera].Projection; ir.PhysicalObject.UpdateLOD(m_cameras[m_activeCamera]); triangles += ir.PhysicalObject.Render(m_cameras[m_activeCamera]); } return triangles; } Intersection with a Ray /// /// Gives you a list of intersection records for all objects which intersect with the given ray /// ///The ray to intersect objects against /// A list of all intersections private List GetIntersection(Ray intersectRay, Physical.PhysicalType type = Physical.PhysicalType.ALL) { if (m_objects.Count == 0 && HasChildren == false) //terminator for any recursion return null; List ret = new List(); //the ray is intersecting this region, so we have to check for intersection with all of our contained objects and child regions. //test each object in the list for intersection foreach (Physical obj in m_objects) { //skip any objects which don't meet our type criteria if ((int)((int)type & (int)obj.Type) == 0) continue; if (obj.BoundingBox.Intersects(intersectRay) != null) { IntersectionRecord ir = obj.Intersects(intersectRay); if (ir.HasHit) ret.Add(ir); } } // test each child octant for intersection for (int a = 0; a < 8; a++) { if (m_childNode[a] != null && m_childNode[a].m_region.Intersects(intersectRay) != null) { List hits = m_childNode[a].GetIntersection(intersectRay, type); if (hits != null) { foreach (IntersectionRecord ir in hits) ret.Add(ir); } } } return ret; } Intersection with a list of objects This is a particularly useful recursive method for determining if a list of objects in the current node intersect with any objects in any child nodes (See: Update() method for usage). It's the method which will be used most frequently, so it's good to get this right and efficient. What we want to do is start at the root node of the tree. We compare all objects in the current node against all other objects in the current node for collision. We gather up any of those collisions as intersection records, and insert them into a list. We then pass our list of tested objects down to our child nodes. The child nodes will then test their objects against themselves, then against the objects we passed down to them. The child nodes will capture any collisions in a list, and return that list to its parent. The parent then takes the collision list received from its child nodes and appends it to its own list of collisions, finally returning it to its caller. If you count out the number of collision tests in the illustration above, you can see that we conducted 29 hit tests and recieved 4 hits. This is much better than [11*11 = 121] hit tests. private List GetIntersection(List parentObjs, Physical.PhysicalType type = Physical.PhysicalType.ALL) { List intersections = new List(); //assume all parent objects have already been processed for collisions against each other. 
//check all parent objects against all objects in our local node foreach (Physical pObj in parentObjs) { foreach (Physical lObj in m_objects) { //We let the two objects check for collision against each other. They can figure out how to do the coarse and granular checks. //all we're concerned about is whether or not a collision actually happened. IntersectionRecord ir = pObj.Intersects(lObj); if (ir != null) { intersections.Add(ir); } } } //now, check all our local objects against all other local objects in the node if (m_objects.Count > 1) { #region self-congratulation /* * This is a rather brilliant section of code. Normally, you'd just have two foreach loops, like so: * foreach(Physical lObj1 in m_objects) * { * foreach(Physical lObj2 in m_objects) * { * //intersection check code * } * } * * The problem is that this runs in O(N*N) time and that we're checking for collisions with objects which have already been checked. * Imagine you have a set of four items: {1,2,3,4} * You'd first check: {1} vs {1,2,3,4} * Next, you'd check {2} vs {1,2,3,4} * but we already checked {1} vs {2}, so it's a waste to check {2} vs. {1}. What if we could skip this check by removing {1}? * We'd have a total of 4+3+2+1 collision checks, which equates to O(N(N+1)/2) time. If N is 10, we are already doing half as many collision checks as necessary. * Now, we can't just remove an item at the end of the 2nd for loop since that would break the iterator in the first foreach loop, so we'd have to use a * regular for(int i=0;i tmp = new List(m_objects.Count); tmp.AddRange(m_objects); while (tmp.Count > 0) { foreach (Physical lObj2 in tmp) { if (tmp[tmp.Count - 1] == lObj2 || (tmp[tmp.Count - 1].IsStatic && lObj2.IsStatic)) continue; IntersectionRecord ir = tmp[tmp.Count - 1].Intersects(lObj2); if (ir != null) intersections.Add(ir); } //remove this object from the temp list so that we can run in O(N(N+1)/2) time instead of O(N*N) tmp.RemoveAt(tmp.Count-1); } } //now, merge our local objects list with the parent objects list, then pass it down to all children. foreach (Physical lObj in m_objects) if (lObj.IsStatic == false) parentObjs.Add(lObj); //parentObjs.AddRange(m_objects); //each child node will give us a list of intersection records, which we then merge with our own intersection records. for (int flags = m_activeNodes, index = 0; flags > 0; flags >>= 1, index++) if ((flags & 1) == 1) intersections.AddRange(m_childNode[index].GetIntersection(parentObjs, type)); return intersections; } ;i++)> Screenshot Demos This is a view of the game world from a distance showing the outlines for each bounding volume for the octree. This view shows a bunch of successive projectiles moving through the game world with the frequently-used nodes being preserved instead of deleted. Complete Code Sample I've attached a complete code sample of the octree class, the intersection record class, and my generic physical object class. I don't guarantee that they're all bug-free since it's all a work in progress and hasn't been rigorously tested yet.
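To close the loop, here's a rough sketch of how a tree like this might be driven each frame. The Add() method is an assumption standing in for whatever public entry point feeds the pending-insertion queue (only the private Insert() is shown above); the constructor, Update(GameTime) and AllIntersections() do appear in the snippets above, and the 256x256x256 root region matches the fix mentioned in the Insert() comments:

using System.Collections.Generic;
using Microsoft.Xna.Framework;

public class CollisionSystem
{
    //Root region: a power-of-two cube large enough to enclose the whole game world.
    OctTree m_tree = new OctTree(new BoundingBox(Vector3.Zero, new Vector3(256)));

    public void Load(IEnumerable<Physical> worldObjects)
    {
        //Bulk-load: objects sit in the pending queue and the tree is lazily built the first time it is used.
        foreach (Physical obj in worldObjects)
            m_tree.Add(obj);   //assumed public wrapper that enqueues into the pending-insertion queue

    }

    public void Frame(GameTime gameTime, BoundingFrustum cameraFrustum)
    {
        //Moves objects, prunes dead branches, and hands IntersectionRecords to the colliding objects.
        m_tree.Update(gameTime);

        //Frustum query for rendering: only objects visible to the camera come back.
        foreach (IntersectionRecord ir in m_tree.AllIntersections(cameraFrustum))
        {
            //draw ir.PhysicalObject here
        }
    }
}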