Search the Community

Showing results for tags 'Open Source'.

Found 32 results

  1. malhotraprateek

    Bullet Debug Visualization

    Hi, I have created a basic debug viewer for Bullet while working on my game engine. It uses three.js to render debug lines, and communicates with the application (i.e. the server) via a web socket. Link: bullet-visualization. Included in the repo is some reference code for creating the server (C++) using the net11 library (details on GitHub). For now, it can only draw debug lines (no contact points or text). It might be useful for decoupling in-game rendering from debug drawing. The project is MIT licensed and contributions are very welcome ☺️.
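For anyone wanting to try the same approach, here is a minimal C++ sketch of the kind of btIDebugDraw subclass such a viewer hooks into. This is not the repo's actual code: the WebSocket layer (net11 in the real project) is hidden behind a hypothetical sendFrame callback, and all other names are illustrative.

    #include <btBulletDynamicsCommon.h>
    #include <functional>
    #include <vector>

    class SocketDebugDrawer : public btIDebugDraw {
    public:
        // Invoked once per frame with a flat array of
        // [x1,y1,z1, x2,y2,z2, r,g,b] per line segment; the WebSocket
        // serialization would live behind this callback.
        std::function<void(const std::vector<float>&)> sendFrame;

        void drawLine(const btVector3& from, const btVector3& to,
                      const btVector3& color) override {
            const float seg[9] = {from.x(), from.y(), from.z(),
                                  to.x(), to.y(), to.z(),
                                  color.x(), color.y(), color.z()};
            m_lines.insert(m_lines.end(), seg, seg + 9);
        }

        // Call after dynamicsWorld->debugDrawWorld() each frame.
        void flush() {
            if (sendFrame) sendFrame(m_lines);
            m_lines.clear();
        }

        // The viewer only draws lines for now, so the rest is stubbed out.
        void drawContactPoint(const btVector3&, const btVector3&, btScalar,
                              int, const btVector3&) override {}
        void reportErrorWarning(const char*) override {}
        void draw3dText(const btVector3&, const char*) override {}
        void setDebugMode(int mode) override { m_mode = mode; }
        int getDebugMode() const override { return m_mode; }

    private:
        std::vector<float> m_lines;
        int m_mode = btIDebugDraw::DBG_DrawWireframe;
    };

Hooking it up is then just world->setDebugDrawer(&drawer), and each frame world->debugDrawWorld() followed by drawer.flush().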
  2. Can someone please program a new whacking game? The title of the game is "Whack the Mouse". I made the characters on Pixton. The weapons attached to the cat are claws and fangs. Don't forget to add the violence and gore warning. Here are the characters (the cat is the player and the mouse is the one who's getting whacked):
  3. Hello, I'm new to UE4. I'm creating an island for my game and I'm struggling to get the beach right, because I'm using an auto-landscape material that I created. Can someone teach me a way to do it? I'm going to attach some screenshots of my blueprints and my landscape (where the red arrows are pointing was supposed to be beach).
  4. It's been two months now since I started doing daily game development streams. I've been trying my best, but it is time for this to come to a close. In this article I'll talk about the various things that happened, why I'm stopping, and the future of the Leaf game. Strap in!

It's actually been slightly longer than two months, but since I missed some days due to being sick, and some others because I didn't feel like streaming – more on that later – I'll just count it as two months. In any case, in this time I've done 56 streams, almost all of them two hours long. That's a lot of hours, and I'm truly impressed that some people stuck around for almost all of them. Thank you very much! A lot happened in that time too, and I think it would be interesting to go over some of the major features and talk about them briefly.

New Features in Leaf

Slopes and Collision

Collision detection was heavily revised from the previous version. The general procedure is to scan the current chunk for hits until there are no more hits to be found. If we find more than ten hits, we assume that the player is somehow stuck in a wall and simply kill them. The number ten is obviously arbitrary, but somehow it seems sufficient and I haven't had any accidental deaths yet.

When a hit is detected, it dispatches on the type of tile or entity that was collided with. It does so in two steps: the first is a test of whether the collision will happen at all, to allow sub-tile precision, and the second is the actual collision resolution, should a full hit have been detected. The first test can be used to elide collisions with jump-through platforms, or with slopes if the player moves above the actual slope surface. The actual collision resolution typically consists of moving the player to the collision point, updating the velocity along the hit normal, and finally zipping out of the ground if necessary to avoid floating point precision issues.

The collision detection for the slopes themselves is surprisingly simple and works on the same principle as swept AABB tests: we can enlarge the slope triangle by simply moving the line towards the player by the player's half-size. Once this shift is done, we only need to do a ray-line collision test. During resolution there's some slight physics cheating going on to make the player stick to the ground when going down a slope, rather than flying off, but that's it.

Packets and File Formats

Leaf defines a multitude of file formats. These formats are typically all defined around the idea of a packet – a collection of files in a directory hierarchy. The idea of a packet allows me to define these formats directly on disk, in memory as some data structure, or encapsulated within an archive. The packet protocol isn't that complicated, and I intend to either at least put it into Trial, or put it into its own library altogether. Either way, it allows the transparent implementation of these formats regardless of backing storage.

The actual formats themselves also follow a very similar file structure: a meta.lisp file serves as a brief metadata header, which identifies the format, the version, and some authoring metadata fields. This file is in typical s-expression form and can be used to create a version object, which controls the loading and writing process of the rest of the format. In the current v0, this usually means an extra data.lisp payload file, and a number of other associated payload files like texture images.
The beauty of using generic functions with methods that specialise on both the version and the object at the same time is that it allows me to define new versions in terms of select overrides, so that I can specify new behaviour for select classes, rather than having to redo the entire de/serialisation process, or breaking compatibility altogether.

Dialogue and Quests

The dialogue and quests are implemented as very generic systems that should have the flexibility (I hope) to deal with all the story needs I might have in the future. Dialogue is written in an extended dialect of Markless. For instance, the following is a valid dialogue snippet:

    ~ Fi
    | (:happy) Well isn't this a sight for sore eyes!
    | Finally a bit of sunshine!
    - I don't like rain
      ~ Player
      | I don't mind the rain, actually.
      | Makes it easier to think.
    - Yeah!
      ~ Player
      | Yeah, it's been too long! Hopefully this isn't announcing the coming of a sandstorm.
      ! incf (favour 'fi)
    - ...
      ! decf (favour 'fi)
    ~ Fi
    | ? (< 3 (favour 'fi))
    | | So, what's our next move?
    | |?
    | | Alright, good luck out there!

The list is translated into a choice for the player to make, which can impact the dialogue later. The way this is implemented is through a syntax extension in the cl-markless parser, followed by a compiler from the Markless AST to an assembly language, and a virtual machine to execute the assembly. The user of the dialogue system only needs to implement the evaluation of commands, the display of text, and the presentation of choices.

The quest system, on the other hand, is based on node graphs. Each quest is represented as a directed graph of task nodes, each describing a task the player must fulfil through an invariant and a success condition. On success, one or more successor tasks can be unlocked. Tasks can also spawn dialogue pieces that become available as interactions with NPCs or items. The system is smart enough to allow different, competing branches, as well as parallel branches, to complete a quest. I intend to build a graph editor UI for this once Alloy is further along. Both of these systems are, again, detached enough that I'll either put them into Trial, or put them into a completely separate library altogether. I'm sure I'll need to adjust things once I actually have some written story on hand to use these systems with.

Platforming AI

The platforming AI allows characters to move along the terrain just like the player would. This is extremely useful for story reasons, so that characters can naturally move to select points, or idle around places rather than just standing still. The way this is implemented is through a node graph that describes the possible movement options from one valid position to the next. This graph is built through a number of scanline passes over the tile map that either add new nodes or connect existing nodes together in new ways. The result is a graph with nodes that can connect through walk, crawl, fall, or jump edges. A character can be moved along this graph by first running A* to find the shortest path to the target node, and then performing real-time movement through the calculated path. Generally the idea is to always move the character in the direction of the next target node until that node has been reached, at which point it's popped off the path. The jump edges already encode the necessary jump parameters to use, so when reaching a jump node the character just needs to assume the initial velocity and let standard physics do the rest.
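For illustration, here is a minimal sketch of such a typed-edge movement graph with plain A* over it. Leaf itself is written in Common Lisp; this is a C++ rendition of the idea, and every name in it is hypothetical.

    #include <algorithm>
    #include <cmath>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    enum class EdgeType { Walk, Crawl, Fall, Jump };

    struct Edge {
        int to;
        EdgeType type;
        float vx = 0, vy = 0;  // initial jump velocity; only used by Jump edges
    };

    struct Node {
        float x, y;                // position in the tile map
        std::vector<Edge> edges;   // movement options away from this node
    };

    // Plain A* over the movement graph; returns node indices from start to
    // goal, or an empty vector if the goal is unreachable.
    std::vector<int> findPath(const std::vector<Node>& g, int start, int goal) {
        const float INF = 1e30f;
        auto heuristic = [&](int i) {  // straight-line distance to the goal
            float dx = g[i].x - g[goal].x, dy = g[i].y - g[goal].y;
            return std::sqrt(dx * dx + dy * dy);
        };
        std::vector<float> cost(g.size(), INF);
        std::vector<int> parent(g.size(), -1);
        using Item = std::pair<float, int>;  // (f-score, node index)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> open;
        cost[start] = 0.0f;
        open.push({heuristic(start), start});
        while (!open.empty()) {
            int n = open.top().second;
            open.pop();
            if (n == goal) break;
            for (const Edge& e : g[n].edges) {
                float dx = g[e.to].x - g[n].x, dy = g[e.to].y - g[n].y;
                float c = cost[n] + std::sqrt(dx * dx + dy * dy);
                if (c < cost[e.to]) {
                    cost[e.to] = c;
                    parent[e.to] = n;
                    open.push({c + heuristic(e.to), e.to});
                }
            }
        }
        if (cost[goal] == INF) return {};
        std::vector<int> path;
        for (int n = goal; n != -1; n = parent[n]) path.push_back(n);
        std::reverse(path.begin(), path.end());
        return path;
    }

A real implementation would presumably also weight edges by type (a jump being costlier than a walk), but straight-line distance keeps the sketch short.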
The implementation includes a simple visualiser so that you can see how characters would move across the chunk terrain. When the chunk terrain changes, the node graph is currently just recomputed from scratch, which isn't fast; but then again, during gameplay the chunk isn't going to change anyway, so it's only really annoying during editing. I'll think about whether I want to implement incremental updates.

Lighting

Leaf has gone through two lighting systems. The old one worked through signed distance fields that were implicitly computed from a light description. New light types required new shader code to evaluate the SDF, and each light required many operations in the fragment stage, which is costly. The new system uses two passes: in the first, lights are rendered to a separate buffer. The lights are rendered like regular geometry, so we can use discrete polygons to define light areas, and use other fancy tricks like textured lights. In the second pass the fragment shader simply looks up the current fragment position in the light texture and mixes the colours together. In effect, this new system is easier to implement, more expressive, and much faster to run. Overall it's a massive win in almost every way I can imagine. There are further improvements I still want to make, such as shadow casting, dynamic daylight, and light absorption mapping to allow the light to dissipate into the ground gradually.

Alloy

Alloy is a new user interface toolkit that I've been working on as part of Leaf's development. I've been in need of a good UI toolkit that I can use within GL (and otherwise) for a while, and a lot of Leaf's features had to be stalled because I didn't have one yet. However, a lot of Alloy's development is only very distantly related to game development, and hardly at all related to the game itself. Thus I think I'll talk more about Alloy in other articles sometime.

Why I'm Stopping

I initially started this daily stuff to get myself out of a rut. At the time I wasn't doing much at all, and that bothered me a lot, so committing to a daily endeavour seemed like a good way to kick myself out of it. And it was! For a long time it worked really well. I enjoyed the streams and made good progress with the game. Unfortunately, I have the tendency to turn things like this into enormous burdens for myself. The stream turned from something I wanted to do into something I felt I had to do, and then ultimately into something I dreaded doing. This has happened before with all of my projects, especially streaming ones.

With streams I quickly feel a lot of pressure, because I get the idea that people aren't enjoying the content, that it's just a boring waste of time. Maybe it is, or maybe it isn't; I don't know. Either way, having to worry about the viewers and not just the project I'm working on, especially trying to constrain tasks to interesting little features that fit into two hours, turned into a big constraint that I can't keep up with anymore. There's a lot of interesting work left to be done, sure, but I just can't bear it anymore at the moment. Dreading the stream poisoned a lot of the rest of my days and ultimately started to hurt my productivity and well-being over the past two weeks. Maybe I'll do more streams again at some point in the future, but for now I need a break for an indeterminate amount of time.

The Future of Leaf

Leaf isn't dead, though. I intend to keep working on it on my own, and I really do want to see it finished one day, however far away that day may be.
Currently I feel like I need to focus on writing, which is a big challenge for me. I'm a very, very inexperienced writer, especially when it comes to long-form stories and world-building. There I have practically no idea how to do anything. If you are a writer, or are interested in talking shop about stories, please contact me. Other than writing, I'm probably going to mostly work on Alloy in the immediate future. I hope to have a better idea of the writing once I'm done, and that should give rise to more features to implement in Leaf directly. I'll try to keep posting updates on the blog here as things progress, in any case, and there are a few systems I'd like to elaborate on in technical articles as well. Thanks to everyone who read my summaries, watched the streams or recordings, and chatted live during this time. It means a lot to me to see people genuinely interested in what I do.
  5. Wow! Releasing an app on Google Play is not a matter of a few hours, as I learned over the last weeks! But finally Tapmoji is released! I want to share some of my experiences releasing the app.

For the last couple of months I worked on a game called Tapmoji using the Defold engine. Initially I wanted this project to be a small and simple one, but during development I decided to make a polished mobile game. So it became my biggest project so far. You can get it on Google Play (as Early Access). Please take a look at it and let me know what you like or dislike, or what issues you had! I'll appreciate every piece of feedback! For those who are interested, read on for an insight into the release process! For everyone else, thanks for playing, and come back for more news and games!

Devlog

Here are some screenshots of the current release:

Over the last couple of weeks I had three big challenges:

  • Finalize the app (gameplay, sounds, menu, backgrounds, particle effects)
  • Find a way to integrate AdMob mediation
  • Upload the app to Google Play

Finalizing the App

A month ago I released a web version of the game. Since that release I improved a couple of things (and updated the web version as well!).

Gameplay:
  • Global countdown: the player has to master as many levels as possible in 60 seconds. After each level, 5 bonus seconds are added
  • Stars: the currency the player collects
  • Ability to restart the game from the last level after paying a certain amount of stars
  • Spawning some particles when the right emoticon or an item is touched
  • At game over: showing a random sad emoticon at the top
  • At the level-complete screen: showing a random happy emoticon at the top
  • Showing a random background image at each level
  • Spawning and moving emoticons more bouncily
  • Saving the high score and the amount of stars in a save file

Ads:
  • Possibility to get 100 stars as a reward for watching an ad
  • Showing an interstitial ad after every 3rd game

Sounds:
  • As usual, all sounds are from opengameart.org
  • Background music; sounds for tapping on the right/wrong emoticon, tapping on a button, collecting items, and counting up stars at game over
  • Possibility to toggle sound in the menu

Graphics:
  • Created 5 different backgrounds with Inkscape

Showing Ads (Using AdMob)

Inserting ads into a Defold app was not an easy challenge. The Defold forums led me to Enhance, after I saw many posts regarding issues adding the Defold AdMob extension to a Defold project. It didn't work for me either. Using Enhance, you just have to paste the enhance-extension library into the Defold project and write a few lines of code to show ads. After creating an account on Enhance.co, you have to upload the apk (which contains the enhance-extension), choose which types of ads you want to show (banner, interstitial, rewarded ad) and which ad mediation network you want to connect to (e.g. Google AdMob). Enhance adds some SDKs and finally lets you download the "enhanced" apk. I wanted to use AdMob as the mediation network, so I needed to set up an AdMob account and enter the needed IDs in the associated fields during the Enhance process.

Keep in mind that Enhance will add some new permissions to the AndroidManifest.xml, depending on the SDKs that are added (e.g. the permission to track the location of the user). You can uncheck this option during the enhancing process to keep control over the permissions. I checked the enhanced apk using apktool. This tool lets you reverse-engineer the apk, which allows you to take a look into the AndroidManifest.xml to check which permissions were added.

Using Google AdMob with interstitial and rewarded ads, Enhance made my apk about 10 MB bigger (~20 MB in total).

Using Enhance I faced a problem: Enhance doesn't provide a 64-bit library for Defold. But since August 2019, every app has to support the 64-bit architecture.

Enhance defold-connector – 64-bit workaround

To make this work with Defold, you need Enhance to work with the arm64 architecture. Since August 2019, Google Play only allows you to upload apks which support 64-bit architectures. Unfortunately, the Defold enhance-extension has no android-arm64 library. I tried copying the android-armv7 library and renaming it to android-arm64, and it worked fine! Officially, Enhance doesn't support 64 bit as of now, but this workaround seems to work (for me).

Using the Defold AdMob extension

Using Enhance, the release process became longer: I had to sign my app afterwards and "zipalign" it on my own in order to upload it to the Google Play Console. That, together with the potentially unstable 64-bit workaround and my app becoming twice as big, made me look for an alternative. I decided to try the Defold AdMob extension by "Lerg". It is well documented and easy to implement, as I experienced. Using this extension, my app became just about 1 MB bigger. You can let Defold sign your app and upload the exported apk to the Google Play Console without any further steps. But in comparison to Enhance, where you don't have to pay anything for using the service, the AdMob extension serves 1% of all impressions for its creator's benefit.

Upload to Google Play

There are a few things to keep in mind when uploading an app to Google Play.

Export in 64 bit

Since August 2019 it is necessary to provide a 64-bit apk. In Defold you can just check the 64-bit option before building the apk. It is important to also check the 32-bit checkbox; this makes the app compatible with twice as many devices.

Check which Android permissions are needed

The AndroidManifest.xml provided by Defold includes some permissions which you perhaps don't need (they are explained here). For example, by keeping the BILLING permission you declare that your app uses in-app purchases. If you use in-app purchases, you have to provide your address in the Google Play Console, and it will be shown publicly in the Play Store after releasing the app. As a single developer working from home, you perhaps don't want to let everybody know where you live. Also, if you don't use push notifications, you can delete the GET_ACCOUNTS, RECEIVE and C2D_MESSAGE permissions. In Defold you have to create a new AndroidManifest file and set its path in the game.project file. You can copy and paste the content of the built-in AndroidManifest and remove the unnecessary permissions.

Creating a Google Play developer account

To create a developer account and gain access to the Google Play Console, you first have to pay $25 by credit card. Also, you have to provide your phone number to receive a Google verification code. Be prepared to enter a name which represents your company. This took me one hour, because I didn't have an official name for my one-man studio so far. At this point "Rocking Coffee" was born!

Uploading, publishing and updating the app

To upload an app, there are some required fields you have to fill in. These include a title (50 characters), a short description (80 characters), a long description (4000 characters), an icon (512×512 pixels), at least two screenshots (min. 320 px, max. 3840 px), a feature graphic (1024×500), and a link to a website with a privacy policy for the app, which will be shown on the app's Play Store page. After providing all of these, you are able to submit your store entry for review by Google. The feature graphic in particular took me some time (you can see it at the top of this post). The best part is that it isn't even shown in the Play Store as long as you don't have a trailer video. And I don't have a trailer… For creating a privacy policy page there is a tool which helps a lot: App Privacy Policy Generator.

After uploading the game and creating the store entry, I waited 5 days until it was reviewed and published by Google. This includes an age rating as well. If your app is rated as appropriate for kids, you have to ensure that the ads shown by AdMob are also appropriate for kids. You can manage this in your Google AdMob account. My app was rejected at first, because I simply forgot to apply these settings.

I uploaded the app as a beta release. I knew there would be some troubles and bugs, because it was the first time I released an app. In the end it was a good choice: I forgot to handle different screen resolutions, and my app was displayed differently on different screen sizes, especially on tablets. It took me some time to fix this. Defold provides some good solutions for it (creating display profiles).

When updating the app, it is important to increment the version code of the app and sign it with the same key as the previous version. The version code can be set in the game.project file in Defold. After updating the apk file in the Google Play Console, it takes about one day for Google to check and publish it.

In summary, it took me about a week to prepare the release and get it online! That's all! These were my experiences with publishing an Android app made with Defold on Google Play. Hopefully I can help someone with this information! I'm still working on updates and will release Tapmoji officially soon! There are far more things to come: daily challenges, booster items, Google Play Services, etc. For now, don't hesitate to check out the early access version on the Play Store and give me some feedback on what you like or dislike, or what ideas you have! Cheers!

View original post at AGameAMonth.net
  6. Hi, I have collected technical information about more than 500 open source games and put it in a GitHub repository (game entries in text format). A dynamic HTML table of the entries is created from the data; it allows searching and sorting. For every game there is a link to the homepage, to where the sources can be obtained, and to other technical details. I plan to add more games to the list and fill in more details, but that will take a while. Suggestions for additions are always welcome. A statistics page about the entries is created automatically, revealing for example popular combinations like C++ and SDL2. I also converted the CVS and SVN repositories of some old, inactive game projects to Git and put them on Gitlab.com/osgames. I hope you find the list useful.
  7. Shinmera

    Seven Weeks Later

    This weekly summary of daily progress would normally be very short, as I fell ill and had to sit out a few days of development as a result. I'm writing this from bed at the moment, though I'm already feeling a lot better. In any case, this week I "finished" the tundra tileset that I'd been frustrated over for a long time now. You can see it in the header.

Then, partly because I couldn't settle on what else to do, and partly because it seemed like an interesting, quick project, I wrote a particle system. This is what I'll talk about in a bit more detail. The system that's implemented in Trial -- the custom game engine used for Leaf -- allows for completely custom particle attributes and behaviour. Before I get into how that's handled, I'll talk about how the drawing of the particles is done.

For the drawing we consider two separate parts -- the geometry used for each particle, and the data used to distinguish one particle from another. We pack both of these parts into a single vertex array, using instancing for the vertex attributes of the latter part. This allows us to use instanced drawing and draw all of the particles in one draw call. In the particle shader we then need to make sure to add the particle's location offset, and to do whatever is necessary to render the geometry appropriately as usual. This can be done easily enough in any game engine, though it would be much more challenging to create a generic system that can easily work with any particle geometry and any rendering logic. In Trial this is almost free.

There are two parts in Trial that allow me to do this: first, the ability to inherit and combine opaque shader parts along the class hierarchy, and second, the ability to create structures that are backed by an opaque memory region while retaining the type information. The latter part is not that surprising for languages where you can cast memory and control the memory layout precisely, but nonetheless in Trial you can combine these structures through inheritance, something not typically possible without significant hassle. Trial also allows you to describe the memory layout precisely. For instance, this same system is used to represent uniform buffer objects, as well as what we're using here, which is attributes in a vertex buffer.

If you'll excuse the code dump, we'll now take a look at the actual particle system implementation. I had to use a screenshot, as GameDev does not have Lisp source highlighting, and reading it without highlighting is a pain. In any case, let's go over this real quick.

We first define a base class for all particles. This only mandates the lifetime field, which is a vector composed of the current age and the max age. This is used by the emitter to check liveness. Any other attribute of a particle is specific to the use-case, so we leave that up to the user.

Next we define our main particle-emitter class. It's called a "shader subject" in Trial, which means that it has shader code attached to the class, and can react to events in separate handler functions. Anyway, all we need for this class is to keep track of the number of live particles, the vertex array for all the particles, and the buffer we use to keep the per-particle data. In our constructor we construct the vertex array by combining the vertex attribute bindings of the particle buffer and the particle mesh. The painting logic is very light, as we just need to bind the vertex array and do an instanced draw call, using the live-particles count for our current number of instances.
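To make the instancing setup concrete, here is a rough C++/OpenGL sketch of the same idea (Trial itself is Common Lisp, and none of these names come from it): one vertex array combines the per-vertex mesh attributes with per-instance particle attributes whose divisor is 1, so a single instanced draw call renders every live particle.

    #include <GL/glew.h>   // any GL loader works
    #include <cstddef>     // offsetof

    // Per-instance data; mirrors the lifetime/location/velocity layout
    // of the standard emitter's particle described below.
    struct Particle {
        float lifetime[2];  // (current age, max age)
        float location[3];
        float velocity[3];
    };

    GLuint makeParticleVAO(GLuint meshVBO, GLuint particleVBO) {
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        // Attribute 0: mesh vertex position, advancing once per vertex.
        glBindBuffer(GL_ARRAY_BUFFER, meshVBO);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                              3 * sizeof(float), (void*)0);

        // Attributes 1..3: per-particle data, advancing once per *instance*.
        glBindBuffer(GL_ARRAY_BUFFER, particleVBO);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Particle),
                              (void*)offsetof(Particle, lifetime));
        glVertexAttribDivisor(1, 1);
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Particle),
                              (void*)offsetof(Particle, location));
        glVertexAttribDivisor(2, 1);
        glEnableVertexAttribArray(3);
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Particle),
                              (void*)offsetof(Particle, velocity));
        glVertexAttribDivisor(3, 1);
        return vao;
    }

    // Painting then reduces to a single call:
    //   glBindVertexArray(vao);
    //   glDrawArraysInstanced(GL_TRIANGLES, 0, meshVertexCount, liveParticles);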
The three functions defined afterwards specify the protocol users need to follow to actually create and update the particles throughout their lifetime. The first function fills the initial state into the passed particle instance, the second uses the info from the input particle instance to fill the update into the output particle instance, and the final function determines the number of new particles per update. These particle instances are instances of the particle class the user specifies through the particle-buffer, but their fields are backed by a common byte array. This allows us to make manipulation of the particles feel native and remain extensible, without requiring complex and expensive marshalling.

Finally we come to the bulk of the code, which is the tick update handler. This does not do too much in terms of logic, however. We simply iterate over the particle vector, checking the current lifetime. If the particle is still alive, we call the update-particle-state function. If this succeeds, we increase the write-offset into the particle vector. If it does not succeed, or the particle is dead, the write-offset remains the same, and the particle at that position will be overwritten by the next live, successful update. This in effect means that live particles are always at the beginning of the vector, allowing us to cut off the dead ones with the live-particles count. Then we simply construct as many new particles as we should without overrunning the array, and finally we upload the buffer data from RAM to the GPU by using update-buffer-data, which in effect translates to a glBufferSubData call.

Now that we have this base protocol in place, we can define a simple standard emitter, which should provide a much easier interface. Okay! Again we define a new structure, this time including the base particle so that we get the lifetime field as well. We add a location and velocity on top of this, which we'll provide for basic movement. Then we define a subclass of our emitter to provide the additional defaults. Using this subclass we can provide some basic updates that most particle systems based on it will expect: an initial location at the origin, updating the location by the velocity, increasing the lifetime by the delta time of the tick, and returning whether the particle is still alive after that.

On the painting side we provide the default handling of the position. To do so, we first pass the three standard transform matrices used in Trial as uniforms, and then define a vertex shader snippet that handles the vertex transformation. You might notice here that the second vertex input, the one for the per-particle location, does not have a location assigned. This is because we cannot know where this binding lies ahead of time. The user might have additional vertex attributes for their per-particle mesh that we don't know about. The user must later provide an additional vertex shader snippet that does define this.

So, finally, let's look at an actual use-case of this system. First we define an asset that holds our per-particle buffer data. To do this we simply pass along the name of the particle class we want to use, as well as the number of such instances to allocate in the buffer. We then use this, as well as a simple sphere mesh, to initialize our own particle emitter. Then come the particle update methods. For the initial state we calculate a random velocity within a cone region, using polar coordinates. This will cause the particles to shoot out at various angles.
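As a small aside, sampling an initial velocity inside a cone with polar coordinates looks roughly like this. Again a C++ sketch under assumed names rather than the article's actual Lisp code:

    #include <cmath>
    #include <random>

    struct Vec3 { float x, y, z; };

    // Velocity whose direction lies within halfAngle radians of the +Y axis.
    // Sampling phi uniformly biases directions toward the axis, which is
    // usually acceptable for visual effects.
    Vec3 randomConeVelocity(std::mt19937& rng, float halfAngle, float speed) {
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        const float theta = 6.2831853f * uni(rng);  // azimuth around the axis
        const float phi = halfAngle * uni(rng);     // tilt away from the axis
        const float s = std::sin(phi);
        return { speed * s * std::cos(theta),
                 speed * std::cos(phi),
                 speed * s * std::sin(theta) };
    }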
We use a hash on the current frame counter here to ensure that particles generated in the same frame get bunched together with the same initial values. We also set the lifetime to be between three and four seconds, randomly for each particle. In the update, we only take care of the velocity change, as the rest of the work is already done for us. For this we apply some weak gravity, and then check the lifetime of the particle. If it is within a certain range, we radically change the velocity of the particle in a random, spherical direction. In effect this will cause the particles, which were bunched together until now, to spread out randomly.

For our generator, we simply create a fixed number of particles every 10 frames or so. At a fixed frame rate, this should mean a steady generation of particle batches. Finally, in the two shader code snippets we provide the aforementioned vertex attribute binding location, and some simple colouring logic to make the particles look more like fireworks. The final result of this exercise is this:

Quite nice, I would say. With this we have a system that allows us to create very different particle effects with relatively little code. For Leaf, I intend to use this to create 2D sprite-based particle effects, such as sparks, dust clouds, and so forth. I'm sure I'll revisit this at a later date to explore these different application possibilities. For next week though, I feel like I really should return to working on the UI toolkit. I have made some progress in thinking about it, so I feel better equipped to tackle it now.
  8. Shinmera

    Six Weeks Later

    In this week of daily game development, I worked on two larger features. The first is pathfinding, so that NPCs, and the player character, can move to designated positions on the map. That'll largely be useful for cutscenes and for making NPCs move around a bit while idle. I might even extend it to allow creating full routes for NPCs to take, to implement daily routines or something like that.

a.webm

The other large feature is a new user interface toolkit. This is still in its infancy at the moment, so there's nothing to show for it. I expect it'll take me a few weeks to complete, so the next few updates might not be very fancy, I'm afraid. I'll try to do other, minor work on the game in between though. Mostly working on assets and such, so hopefully I'll at least have one or two screenshots to post.
  9. Shinmera

    Five Weeks Later

    And another week of daily gamedev has gone by. The improvements implemented this week are not too exciting visually. I finally implemented a proper world storage format, including for the new quest system, as well as save states for the whole shebang. The framework used for that should be general enough to survive future expansions of the game as well, or at the very least I strongly hope that's the case. Other than that, I added a crawling mechanic to allow the player to get through tighter gaps. And just today I completed an auto-tiling mechanism in the editor, which allows prototyping new levels very quickly.

edit.webm

I'm also still thinking about UI toolkit theory. Hopefully I can dig into the meat of that soon, though it'll probably be a multi-week endeavour.
  10. That didn't feel as long as it was. A month ago I promised to do daily streams of game development, and so far I've held true to that. I'm sure I won't be able to keep it up forever, but I'll try to keep going for as long as I can. In this one month alone, a lot has changed for the game!

A new architecture for map organisation was implemented, including a new save file format for it. As part of that work I also completely rewrote the tilemap rendering. The game has a lighting system now, too, based on signed distance functions to compute precise light areas. In terms of physics, collision detection was revamped to properly support slopes and moving platforms, giving much more freedom for level design.

All of the art assets that existed previously were also dropped and replaced with new ones. I'm still working on that part, since I'm not quite happy with the current set of tiles. I'll also have to add more animations, and of course repeat the animation and character design work for any NPC I might add to the game. That's a bit of a ways out though, as I'll need to think about world building and story writing first before I can really get into it. Now there's a hard challenge!

Finally, in the last week I designed a new language for writing dialogue with branching, choices, looping, and so forth. To support this I extended the syntax of Markless, and added a compiler to transform the Markless AST into a simple assembly language, which is then executed in a simple, suspendable VM. On top of that there's now a quest system that should be general enough to allow writing any kind of quest I'll need. Currently, though, it's lacking a way to conveniently write these quests, so that's a task to work on in the near future. I intend to write a simple UI to create and edit these quests. I'm not sure yet what I'll use for that, though. I'm most well-versed with Qt, but maybe I should finally cave in and give CLIM a shot. Or perhaps LTK. We'll have to see.

There are still about two months left in my summer break before university resumes. I hope to keep going with this until then, so expect further daily streams and more progress! As before, the stream still happens every day at 20:00 CEST, on https://stream.shinmera.com, or https://twitch.tv/shinmera. I've tremendously appreciated all the people who stop by in chat to watch, and even talk with me during work. It's made things so much more enjoyable for me. Really, thank you so much!

a.webm