Search the Community

Showing results for tags 'Custom'.

Found 19 results

  1. I see the playing field populating and players taking their turns... Special thanks for this particular challenge opportunity. Full disclosure: I've actually never played Doom, which made this first week a research exercise. Sorry, JohnC and crew, but wow... I'm taken aback in anticipation of what may come from the quantity of declared participants. So tally-ho, everyone in on the ride. I'm going the raycaster route to stay more in key, with an old Wolfenstein clone framework from an early gameinstitute.com course, plus the Perlin Noise and AI seminar offerings by John De Goes (12-15 years ago). One of the graphics features Doom brought to the table was textures on the ceiling and floor, a concept my chosen framework is unaware of. This time around, I'd like to do a proper AI agent type or two. Also on the to-do list is a simple sprite-animation atlas tool fed from targeted screen grabs, so I can render the art instead of doing pixel art or photo manipulation; mostly, hand-packing image sequences is such a chore. I'd like to open this project up to others. Right now I'm thinking heavy guitar loops for gameplay. It would be nice to have a sound guy of the programmer persuasion and an AI guy. Let's talk. ...but after kicking the can for more than a while, the flame piddled out. I've had this project collecting dust for a decade and thought it would be a good idea to start from Wolfenstein and transition to Doom. Then I was, umm... whatever, I've done enough 2D... moving onward. Here I'm showing the abandonment and rebirth, starting with a nice modular wall asset gem from the Unity store and a first stab at some licks. Makes me happy.
  2. I've implemented a basic version of Voxel Cone Tracing that uses a single volume texture (covering a small region around the player). But I want to have large and open environments, so I must use some cascaded (LoD'ed) variant of the algorithm. 1) How to inject sky light into the voxels and how to do it fast? (e.g. imagine a large shadowed area which is lit by the blue sky above.) I think, after voxelizing the scene I will introduce an additional compute shader pass where, from each surface voxel, I will trace cones in the direction of the surface normal until they hit the sky (cubemap), but, I'm afraid, it would be slow with Cascaded Voxel Cone Tracing. 2) How to calculate (rough) reflections from the sky (and distant objects)? If the scene consists of many "reflective" pixels, tracing cones through all cascades would destroy performance. Looks like Voxel Cone Tracing is only suited for smallish indoor scenes (like Doom 3-style cramped spaces).
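For question 1, one common approach is exactly the march the poster describes: from each surface voxel, step toward the sky along the normal, accumulating opacity, and treat whatever transmittance survives as sky visibility. Below is a minimal CPU-side Python sketch of that idea (the real thing would live in a compute shader, and all names here are illustrative, not anyone's actual API):

```python
import numpy as np

def sky_visibility(voxels, origin, direction, max_steps=64, cone_scale=0.5):
    """March a single 'cone' through a voxel opacity grid, accumulating
    occlusion until it exits the grid (reaches the sky) or saturates.
    voxels: 3D float array of per-voxel opacity in [0, 1]."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    transmittance = 1.0
    radius = 1.0
    for _ in range(max_steps):
        pos = pos + d * radius          # step size grows with the cone radius
        idx = tuple(int(c) for c in pos)
        if any(c < 0 or c >= n for c, n in zip(idx, voxels.shape)):
            return transmittance        # left the grid: remaining light is sky
        transmittance *= (1.0 - voxels[idx])
        if transmittance < 1e-3:
            return 0.0                  # fully occluded
        radius += cone_scale * radius   # widen the cone as we march
    return transmittance
```

With cascades, the march would switch to a coarser cascade as the cone radius grows, which is one way to keep the step count bounded rather than letting the cost explode with scene size.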
  3. What do we do inside a dungeon? Hi everybody. Again, it's been a long time, but now the summer is finally over and I can show you some updates to the game. After completing the collision-handling code overhaul and while filling the lower dungeon section with life, I literally wondered what the player should do inside the dungeon -- besides walking around and killing monsters. For the upper dungeon part, it was okay to use find-the-right-key tasks. For the lower part, however, I wanted slightly more RPG-like mechanics, so I added two non-monster NPCs in order to deepen the story (albeit a rather weak story). Together with the story elements, there is a puzzle that can be solved in order to gain a bonus on the sword. The puzzle consists of a couple of stones that have to be placed in the correct order (see the screenshot below). An appropriate ending The dungeon crawler challenge version of the game ended rather abruptly. In order to give the game an appropriate ending, I felt that there should be an end boss. In a raycasting engine, there is a natural size limit for sprites (since the ceiling and floor heights are fixed here, and also because the sprites rotate with the camera). In order to create a boss that is adequately daunting, I decided to create a lindworm, a dragon-like snake assembled from multiple sprites. It moves very similarly to the snake in the original Snake game. The screenshots below give a first impression. I also made a short video of the movement of the lindworm (see below). Ready for beta testing? The puzzle and the boss fight are not yet finished, but they are the last two features I will implement for this game. When those two tasks are done, I will start a short beta-testing phase in order to collect player feedback and bring this game towards a release. Thank you for reading, Carsten
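The multi-sprite lindworm movement described above boils down to follow-the-leader: move the head, then pull each body segment toward its predecessor at a fixed spacing. A small Python sketch of that idea (illustrative only, not the author's engine code):

```python
import math

def update_lindworm(segments, head_target, speed, spacing):
    """Move the head toward head_target, then pull each body segment
    toward its predecessor, keeping a fixed spacing (follow-the-leader,
    like the classic Snake game).  segments: list of [x, y], head first."""
    def move_toward(p, target, max_dist):
        dx, dy = target[0] - p[0], target[1] - p[1]
        dist = math.hypot(dx, dy)
        if dist <= max_dist or dist == 0.0:
            return [target[0], target[1]]
        s = max_dist / dist
        return [p[0] + dx * s, p[1] + dy * s]

    segments[0] = move_toward(segments[0], head_target, speed)
    for i in range(1, len(segments)):
        leader = segments[i - 1]
        dx = segments[i][0] - leader[0]
        dy = segments[i][1] - leader[1]
        dist = math.hypot(dx, dy)
        if dist > spacing:               # only pull once the slack is used up
            s = spacing / dist
            segments[i] = [leader[0] + dx * s, leader[1] + dy * s]
    return segments
```

Because each segment only ever moves toward its predecessor, the body naturally traces the head's path through corners, which is what gives the snake-like motion.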
  4. ongamex92

    Game engine devlog new video

    Hi all, here is a new video about the engine and the game that I work on in my free time after work. This time I hope nothing happens to the frame rate of the video...
  5. Hello and welcome to this week's Dev Diary! It has been a sincerely exhausting week, as learning the ropes with Blender has been anything but convenient. And here is why. Yet Some More Simple Lessons I started off with some beginner tutorials on how to make landscapes from heightmaps. I followed those tutorials to the letter, but for some reason the shortcuts did not work, even when I did exactly as the guy in the video did, and re-watched each one three to six times to catch every possible little detail I might have missed. Well, it turned out that the so-called beginner tutorials did require foreknowledge of the workings of Blender, as some skipped over which shortcuts to press at certain stages, and some outright cut off the part of the video where the method of selecting was revealed. Only the fifth tutorial video, in which the person used a visual keyboard overlay and had not cut content from the video, explained a simple way to make a real 3D image from maps from start to finish. The little basic thing that all the other videos I watched had missed was to press TAB in the Layout workspace, which turns it into "Edit Mode"; only then can you select faces of the object. The intuitive way of unwrapping the object would be a simple click in Object Mode, which would select the object "in the right way" for the "U" shortcut to work. Or better yet, Unwrap could be part of the right-click menu, as it might be one of the most used functions in Blender. The lesson here: always try to find tutorials on the user interface of more complicated software, no matter how frustrating it might be. My impression from the hype behind Blender 2.8 was that it should be even more intuitive than before, but apparently it's still nowhere near intuitive enough to start using without a more careful study of the basics - unlike SketchUp.
The Alternative In the end, I didn't manage to make a working 3D terrain with Blender even after several hours of work - until late yesterday, when I had limited success, but nothing to show just yet, due to Blender crashing when I tried to subdivide the plane mesh by 60 and then subdivide by another 60 after adding the heightmap, so that the details would come out more. But to prove a point publicly, I started googling for SketchUp plugins - and today I found what I needed: a simple tool called "Bitmap to Mesh", which turns any bitmap (or PNG) heightmap into a workable mesh. Here is the heightmap while it is processing; and here is the ready mesh. No extensive tutorials, only 6 steps:
    • Draw > Mesh From Heightmap
    • Context Menu > Mesh From Heightmap
    • Context Menu > Mesh From Bitmap
    • Choose the Start Plane
    • Choose the Width
    • Choose the Depth
    And after processing, you have a rudimentary heightmap, ready for you to tinker with to make it look more terrain-like. The above picture is just a test of this work process and does not represent the end product of my tutorial journey. As you can see, the workflow was effortless and extremely beginner-friendly, yet powerful enough that even professionals can start their miracle-working with ease. I would also like to press a point here: I found this plugin today and, before writing this Dev Diary, managed to get done what I needed in less than an hour, with nothing but a few lines of written tutorial. Blender isn't bad software, but it has a much steeper learning curve and needs a much more thorough study of tutorials, and more getting used to, before you can use it effectively. Yet it is a very powerful tool for developers of all kinds - like a Swiss Army knife for a camper, a tool for just about any job an artist or game developer could need, if they know or learn how to use it. Also, to be fair to Blender, it is a humongous task to make something so versatile as intuitive as possible.
2.8 is a huge step in the right direction, in my opinion. Conclusion I'm going to continue my attempts to create a 3D world map for the RTS game - yes, I just can't shake that desire for a 3D game, even after saying that it would make more sense to do 2D first and touting that I would try to do a 2D (or 2.5D) game as my first release project. As much as I have dear memories of retro games like the original Command and Conquer, Dune, the original Warcraft series, and even to some extent KKND (Krush Kill 'n' Destroy), I don't feel inspired by the thought of making a 2D RTS game in this day and age. Following the rationale of making a 2D game despite not feeling inspired by it doesn't seem right to me, as I want my first game to become as enjoyable and easy to approach as a first game can be. I'm not expecting a masterpiece, but it would be nice to set the bar high enough that something bigger can come out of it at some point. Some developers are more comfortable with the quantity of playable games they make, as it is easy to demand too much on the quality side - many artists are more or less perfectionists. But I feel most inspired by the idea of building a simple foundation for something much larger: something that evolves through time and effort, yet offers something for gamers and project funders from an early build. RTS as a genre would be perfect for this, as it has very simple foundations but, like other real-time genres, can also branch out in near-limitless ways. As a genre it does not appeal to as many people as action games do, but I will try to find a way to make something that feels unique. Thank you for tuning in, and I'll see you in the next one! You can check out all possible mid-week announcements about the project on these official channels; • YouTube • Facebook • Twitter • Discord • Reddit • Pinterest • SoundCloud • LinkedIn •
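As an aside on the entry above: the heightmap-to-mesh conversion that a tool like "Bitmap to Mesh" performs boils down to mapping pixel brightness to vertex height and stitching the grid into triangles. A minimal Python sketch of the idea (not the plugin's actual code; scale parameters are illustrative):

```python
def heightmap_to_mesh(heights, scale_xy=1.0, scale_z=1.0):
    """Turn a 2D grid of height samples into vertices and triangles.
    heights: list of rows of numbers (e.g. pixel brightness 0-255).
    Returns (vertices, triangles) where each grid cell becomes two tris."""
    rows, cols = len(heights), len(heights[0])
    vertices = [(x * scale_xy, y * scale_xy, heights[y][x] * scale_z)
                for y in range(rows) for x in range(cols)]
    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x
            triangles.append((i, i + 1, i + cols))             # upper-left tri
            triangles.append((i + 1, i + cols + 1, i + cols))  # lower-right tri
    return vertices, triangles
```

Every heightmap tool, from the SketchUp plugin to Blender's displace modifier, is ultimately some variation on this grid-stitching loop.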
  6. It's been two months now since I started to do daily game development streams. I've been trying my best, but it is time for this to come to a close. In this article I'll talk about the various things that happened, why I'm stopping, and the future of the Leaf game. Strap in! It's actually been slightly longer than two months, but since I missed some days due to being sick, and some others because I didn't feel like streaming – more on that later – I'll just count it as two months. In any case, in this time I've done 56 streams, almost all of them two hours long. That's a lot of hours, and I'm truly impressed that some people stuck around for almost all of them. Thank you very much! A lot happened in that time too, and I think it would be interesting to go over some of the major features and talk about them briefly. New Features in Leaf Slopes and Collision Collision detection was heavily revised from the previous version. The general procedure is to scan the current chunk for hits until there are no more hits to be found. If we have more than ten hits we assume that the player is in a wall somehow and just die. The number ten is obviously arbitrary, but somehow it seems sufficient and I haven't had any accidental deaths yet. When a hit is detected, it dispatches on the type of tile or entity that was collided with. It does so in two steps, the first is a test whether the collision will happen at all, to allow sub-tile precision, and the second is the actual collision resolution, should a full hit have been detected. The first test can be used to elide collisions with jump-through platforms or slopes if the player moves above the actual slope surface. The actual collision resolution is typically comprised of moving the player to the collision point, updating velocity along the hit normal, and finally zipping out of the ground if necessary to avoid floating point precision issues. 
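The resolution loop described above can be sketched schematically in Python: scan until no hits remain, bail out via a stuck-in-wall failsafe, and dispatch each hit through the two-step test/resolve scheme. All callback names here are invented for illustration; the actual implementation is in Lisp inside Trial:

```python
def resolve_collisions(player, scan, test, resolve, max_hits=10):
    """Repeatedly scan the current chunk for hits until none remain.
    scan(player) -> hit or None.  test(player, hit) -> bool is the cheap
    first-step test that lets slopes and jump-through platforms opt out.
    resolve(player, hit) does the actual resolution: move to the contact
    point, clip velocity along the hit normal, zip out of the ground.
    More than max_hits hits is treated as 'player is inside a wall' and
    reported as failure so the caller can kill the player."""
    for _ in range(max_hits + 1):
        hit = scan(player)
        if hit is None:
            return True             # nothing left to resolve
        if test(player, hit):       # step 1: sub-tile precision test
            resolve(player, hit)    # step 2: full collision resolution
    return False                    # failsafe tripped: assume stuck
```

The arbitrary-but-sufficient hit limit mirrors the post: it exists purely so a degenerate configuration cannot spin the loop forever.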
The collision detection of the slopes itself is surprisingly simple and works on the same principle as swept AABB tests: we can enlarge the slope triangle by simply moving the line towards the player by the player's half-size. Once this shift is done we only need to do a ray-line collision test. During resolution there's some slight physics cheating going on to make the player stick to the ground when going down a slope, rather than flying off, but that's it. Packets and File Formats Leaf defines a multitude of file formats. These formats are typically all defined around the idea of a packet – a collection of files in a directory hierarchy. The idea of a packet allows me to define these formats as both directly on disk, in-memory as some data structure, or encapsulated within an archive. The packet protocol isn't that complicated and I intend on either at least putting it into Trial, or putting it into its own library altogether. Either way, it allows the transparent implementation of these formats regardless of backing storage. The actual formats themselves also follow a very similar file structure: a meta.lisp file for a brief metadata header, which identifies the format, the version, and some authoring metadata fields. This file is in typical s-expression form and can be used to create a version object, which controls the loading and writing process of the rest of the format. In the current v0, this usually means an extra data.lisp payload file, and a number of other associated payload files like texture images. The beauty of using generic functions with methods that specialise both on the version and object at the same time is that it allows me to define new versions in terms of select overrides, so that I can specify new behaviour for select classes, rather than having to redo the entire de/serialisation process, or breaking compatibility altogether. 
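The core of the slope test described above is a 2D ray-vs-line-segment intersection, applied after shifting the slope line toward the player by the player's half-size. A self-contained sketch of that test (illustrative, not the engine's actual code):

```python
def ray_vs_segment(origin, direction, a, b):
    """Ray vs. line segment intersection in 2D.  Returns the ray parameter
    t (>= 0) of the hit, or None.  To sweep a box against a slope, shift
    the slope segment toward the box by its half-size first, then cast the
    box center along its velocity with this test."""
    ox, oy = origin
    dx, dy = direction
    ax, ay = a
    bx, by = b
    ex, ey = bx - ax, by - ay                        # segment direction
    denom = dx * ey - dy * ex                        # 2D cross product
    if denom == 0.0:
        return None                                  # parallel, no hit
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom    # distance along the ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom    # position along segment
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return t
    return None
```

Note how the Minkowski-style shift does all the heavy lifting: once the segment has been enlarged by the box's half-size, a swept-box test collapses to this single ray query.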
Dialogue and Quests The dialogue and quests are implemented as very generic systems that should have the flexibility (I hope) to deal with all the story needs I might have in the future. Dialogue is written in an extended dialect of Markless. For instance, the following is a valid dialogue snippet: ~ Fi | (:happy) Well isn't this a sight for sore eyes! | Finally a bit of sunshine! - I don't like rain ~ Player | I don't mind the rain, actually. | Makes it easier to think. - Yeah! ~ Player | Yeah, it's been too long! Hopefully this isn't announcing the coming of a sandstorm. ! incf (favour 'fi) - ... ! decf (favour 'fi) ~ Fi | ? (< 3 (favour 'fi)) | | So, what's our next move? | |? | | Alright, good luck out there! The list is translated into a choice for the player to make, which can impact the dialogue later. The way this is implemented is through a syntax extension in the cl-markless parser, followed by a compiler from the Markless AST to an assembly language, and a virtual machine to execute the assembly. The user of the dialogue system only needs to implement the evaluation of commands, the display of text, and the presentation of choices. The quest system on the other hand is based on node graphs. Each quest is represented as a directed graph of task nodes, each describing a task the player must fulfil through an invariant and a success condition. On success, one or more successor tasks can be unlocked. Tasks can also spawn dialogue pieces to become available as interactions with NPCs or items. The system is smart enough to allow different, competing branches, as well as parallel branches to complete a quest. I intend on building a graph editor UI for this once Alloy is further along. Both of these systems are, again, detached enough that I'll either put them into Trial, or put them into a completely separate library altogether. I'm sure I'll need to adjust things once I actually have some written story on hand to use these systems with. 
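The quest system's directed graph of tasks can be sketched schematically: completing an unlocked task unlocks its successors, and competing or parallel branches coexist naturally. A hypothetical miniature version (not Leaf's actual classes; invariants and dialogue hooks are omitted):

```python
class Quest:
    """Quest as a directed graph of tasks: completing a task unlocks its
    successors; the quest finishes when any designated end task is done."""
    def __init__(self, tasks, edges, start, ends):
        self.successors = {t: [] for t in tasks}
        for a, b in edges:
            self.successors[a].append(b)
        self.unlocked = {start}
        self.done = set()
        self.ends = set(ends)

    def complete(self, task):
        """Mark a task done if it is unlocked; unlock its successors."""
        if task not in self.unlocked or task in self.done:
            return False
        self.done.add(task)
        self.unlocked.update(self.successors[task])   # open new branches
        return True

    @property
    def finished(self):
        return bool(self.done & self.ends)
```

Parallel branches fall out for free: once a task with two successors completes, both branches sit in the unlocked set at once, and either (or both) can proceed independently.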
Platforming AI The platforming AI allows characters to move along the terrain just like the player would. This is extremely useful for story reasons, so that characters can naturally move to select points, or idle around places rather than just standing still. The way this is implemented is through a node graph that describes the possible movement options from one valid position to the next. This graph is built through a number of scanline passes over the tile map that either add new nodes or connect existing nodes together in new ways. The result is a graph with nodes that can connect through walk, crawl, fall, or jump edges. A character can be moved along this graph by first running A* to find a shortest path to the target node, and then performing a real-time movement through the calculated path. Generally the idea is to always move the player in the direction of the next target node until that node has been reached, in which case it's popped off the path. The jump edges already encode the necessary jump parameters to use, so when reaching a jump node the character just needs to assume the initial velocity and let standard physics do the rest. The implementation includes a simple visualiser so that you can see how characters would move across the chunk terrain. When the chunk terrain changes, the node graph is currently just recomputed from scratch which isn't fast, but then again during gameplay the chunk isn't going to change anyway so it's only really annoying during editing. I'll think about whether I want to implement incremental updates. Lighting Leaf has gone through two lighting systems. The old one worked through signed distance fields that were implicitly computed through a light description. New light types required new shader code to evaluate the SDF, and each light required many operations in the fragment stage, which is costly. The new system uses two passes, in the first lights are rendered to a separate buffer. 
The lights are rendered like regular geometry, so we can use discrete polygons to define light areas, and use other fancy tricks like textured lights. In the second pass the fragment shader simply looks up the current fragment position in the light texture and mixes the colours together. In effect this new system is easier to implement, more expressive, and much faster to run. Overall it's a massive win in almost every way I can imagine. There's further improvements I want to make still, such as shadow casting, dynamic daylights, and light absorption mapping to allow the light to dissipate into the ground gradually. Alloy Alloy is a new user interface toolkit that I've been working on as part of Leaf's development. I've been in need for a good UI toolkit that I can use within GL (and otherwise) for a while, and a lot of Leaf's features had to be stalled because I didn't have one yet. However, a lot of Alloy's development is also only very distantly related to game development itself, and hardly at all related to the game itself. Thus I think I'll talk more about Alloy in other articles sometime. Why I'm Stopping I initially started this daily stuff to get myself out of a rut. At the time I wasn't doing much at all, and that bothered me a lot, so committing to a daily endeavour seemed like a good way to kick myself out of it. And it was! For a long time it worked really well. I enjoyed the streams and made good progress with the game. Unfortunately I have the tendency to turn things like this into enormous burdens for myself. The stream turned from something I wanted to do into something I felt I had to do, and then ultimately into something I dreaded doing. This has happened before with all of my projects, especially streaming ones. With streams I quickly feel a lot of pressure because I get the idea that people aren't enjoying the content, that it's just a boring waste of time. Maybe it is, or maybe it isn't, I don't know. 
Either way, having to worry about the viewers and not just the project I'm working on, especially trying to constrain tasks to interesting little features that can fit into two hours turns into a big constraint that I can't keep up anymore. There's a lot of interesting work left to be done, sure, but I just can't bear things anymore at the moment. Dreading the stream poisoned a lot of the rest of my days and ultimately started to hurt my productivity and well-being over the past two weeks. Maybe I'll do more streams again at some point in the future, but for now I need a break for an indeterminate amount of time. The Future of Leaf Leaf isn't dead, though. I intend to keep working on it on my own, and I really do want to see it finished one day, however far away that day may be. Currently I feel like I need to focus on writing, which is a big challenge for me. I'm a very, very inexperienced writer, especially when it comes to long-form stories and world-building. There I have practically no idea on how to do anything. If you are a writer, or are interested in talking shop about stories, please contact me. Other than writing I'm probably going to mostly work on Alloy in the immediate future. I hope to have a better idea of the writing once I'm done, and that should give rise to more features to implement in Leaf directly. I'll try to keep posting updates on the blog here as things progress in any case, and there's a few systems I'd like to elaborate on in technical articles as well. Thanks to everyone who read my summaries, watched the streams or recordings, and chatted live during this time. It means a lot to me to see people genuinely interested in what I do.
  7. Hello and welcome to this week's Dev Diary! Today I'll go over some of last week's ups and downs and explain the importance of setting up your tools properly for your work style before making major changes to your graphical works. The Mistake In an earlier dev diary, I mentioned the importance of building a habit of saving different versions of your graphical work. For those who have years of experience, this is a given, and for those who have a habit of regimenting things, it comes more or less naturally. Because I have a somewhat different starting point - and despite liking to be organized - I at first forgot to do this. Time went by, and I learned to save different versions of my graphical works more frequently, which made life much easier. However - and here comes the importance of shuffling through your image-editing software's settings before doing any larger works - of course I didn't do this, as I'd had no prior need to change any settings to fit my working style, which I considered mostly to be nitpicking for the pros. I didn't underestimate its importance to the workflow, but I didn't see it as important to myself. Well, how wrong I was, yet again, and I learned it the hard way. As I had no need to change the default settings in Photoshop, I also missed a rather important thing: the autosave function. Basics of the basics, yet I overlooked it. If I like to fiddle with earlier versions just to see if I can find some new way to branch out from that earlier point, the autosave function will periodically overwrite the old file, effectively erasing that "bookmark" altogether. This, it seems, is why most of the earlier versions of some of my graphical designs are gone; and in cases where you don't like the direction you took at all and just close the file without saving, it's already too late.
The lesson to be learned from this: always make a copy of the original backup whenever you want to try branching out in a different direction from a certain version of your work. The autosave functionality is important to leave enabled, in case the software or computer crashes for any reason. The Re-Design Which leads, yet again, to the re-designs of several assets. Here is an example of several different versions of the Sapphire star; the version history goes from left to right, oldest to newest. I'm still not satisfied with the direction the re-designs went, as the first one might have been the best one overall, though it would have required a tad more work on the details. For example, the outlines should have been colored sapphire blue to hide them and make it look more gem-like, with a little transparency on the bottom layer to allow for more caustic effects. The colors used in the first iteration were just white and light teal for the slight lighting effects, with Sapphire Blue and Persian Blue as the base colors. From the second to fourth versions, only 7 different shades of Sapphire Blue were used for all the effects and base colors. EDIT: Backgrounds matter - go take a look at the Reddit page to see how these stars look against a black background, compared to white. The Editing Process The editing of the Patreon page is coming along fine, although it does take time to re-think my approach in order to write something that makes more people want to invest in the project. No marketing text is perfect, and it is impossible to write something that appeals to everyone - which should always be remembered when designing your Patreon page or the homepage for your project.
The first image you give people of you and the project is important, but I would dare to argue that it is much more important to be honest about yourself and your need to learn and develop your skills in various areas - be true to what you really know and don't know how to do. Be open about the fact that this is a learning experience for you, and just be who you are; that is a much more important factor in marketing. I'd like to call this "Natural Marketing": a way of being honest and straightforward with others, showing trustworthiness through your actions, and being honest about your own mishaps and successes. No human, alive or long since past, is or has been without any faults. Everyone is a beginner at some point, and we all advance at a different pace. Accept yourself, flaws and all, and do your best to overcome those difficulties. Only you can decide for yourself whether this is something you really want to achieve. And you can always change your marketing text later; good enough is often just fine for the first publication. This concludes the pep talk for other starting game developers, artists, and entrepreneurs who would like to earn their living as the CEO of their own life. The Conclusion I'll be honest with you: I have been suffering from sleep deprivation lately, which has been slowing down my work progress and increasing my rate of mistakes. Last week's short notice was partly because of this, and partly because I had become too engrossed in a game that I originally played to study its mechanics more closely. The game in question is Total War: Three Kingdoms. The Total War series is one of those that has inspired me to attempt creating an RTS game with similar elements. TW: Three Kingdoms is the closest strategy game to my own vision thus far, but it does not contain even nearly everything that I will try to implement in the RTS Project. I just hope the ideas work as well together in practice as they do in theory.
Thank you for tuning in, and I'll see you in the next one! You can check out all possible mid-week announcements about the project on these official channels; • YouTube • Facebook • Twitter • Discord • Reddit • Pinterest • SoundCloud • LinkedIn •
  8. Shinmera

    Seven Weeks Later

    This weekly summary of daily progress would normally be very short, as I fell ill and had to sit out a few days of development as a result. I'm writing this from bed at the moment, though I'm already feeling a lot better. In any case, this week I "finished" the tundra tileset that I'd been frustrated over for a long time now. You can see it in the header. Then, partly because I couldn't settle on what else to do, and partly because it seemed like an interesting, quick project to do, I wrote a particle system. This is what I'll talk about in a bit more detail. The system that's implemented in Trial -- the custom game engine used for Leaf -- allows for completely custom particle attributes and behaviour. Before I get into how that's handled, I'll talk about how the drawing of the particles is done. For the drawing we consider two separate parts -- the geometry used for each particle, and the data used to distinguish one particle from another. We pack both of these two parts into a singular vertex array, using instancing for the vertex attributes of the latter part. This allows us to use instanced drawing and draw all of the particles in one draw call. In the particle shader we then need to make sure to add the particle's location offset, and to do whatever is necessary to render the geometry appropriately as usual. This can be done easily enough in any game engine, though it would be much more challenging to create a generic system that can easily work with any particle geometry and any rendering logic. In Trial this is almost free. There's two parts in Trial that allow me to do this: first, the ability to inherit and combine opaque shader parts along the class hierarchy, and second, the ability to create structures that are backed by an opaque memory region, while retaining the type information. 
The latter part is not that surprising for languages where you can cast memory and control the memory layout precisely, but in Trial you can also combine these structures through inheritance, something not typically possible without significant hassle. Trial also allows you to describe the memory layout precisely. For instance, this same system is used to represent uniform buffer objects, as well as what we're using here: attributes in a vertex buffer.

If you'll excuse the code dump, we'll now take a look at the actual particle system implementation. I had to use a screenshot, as GameDev does not have Lisp source highlighting, and reading it without is a pain. In any case, let's go over this real quick.

We first define a base class for all particles. This only mandates the lifetime field, which is a vector composed of the current age and the max age. This is used by the emitter to check liveness. Any other attribute of a particle is specific to the use-case, so we leave that up to the user.

Next we define our main particle-emitter class. It's called a "shader subject" in Trial, which means that it has shader code attached to the class and can react to events in separate handler functions. All we need for this class is to keep track of the number of live particles, the vertex array for all the particles, and the buffer we use to keep the per-particle data. In our constructor we construct the vertex array by combining the vertex attribute bindings of the particle buffer and the particle mesh. The painting logic is very light: we just bind the vertex array and issue an instanced draw call, using the live-particles count as the current number of instances. The three functions defined afterwards specify the protocol users need to follow to actually create and update the particles throughout their lifetime.
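Trial's memory-backed structures are Lisp classes, but the underlying buffer-layout idea translates directly to any language with explicit struct layout. As a rough C++ sketch (the `ParticleData` struct and its field names are hypothetical, not Trial's actual names), the per-instance attributes live interleaved in one buffer, and the struct's stride and offsets are precisely what an instanced vertex setup would consume:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical per-instance particle record, interleaved in one buffer.
struct ParticleData {
    float lifetime[2]; // current age, max age
    float location[3]; // per-particle position offset
    float velocity[3];
};

// These are exactly the stride and offsets an instanced setup would hand
// to glVertexAttribPointer, paired with glVertexAttribDivisor(attrib, 1)
// so the attribute advances once per instance instead of once per vertex.
// The particle mesh's own geometry attributes keep a divisor of 0.
static_assert(sizeof(ParticleData) == 32, "stride of one instance record");
static_assert(offsetof(ParticleData, location) == 8, "location offset");
static_assert(offsetof(ParticleData, velocity) == 20, "velocity offset");
```

Because the layout is described once and shared between CPU and GPU views, the same mechanism works for vertex buffers and uniform buffer objects alike, as described above.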
The first function fills the initial state into the passed particle instance, the second uses the info from the input particle instance to fill the update into the output particle instance, and the final function determines the number of new particles per update. These particle instances are instances of the particle class the user specifies through the particle-buffer, but their fields are backed by a common byte array. This allows us to make manipulation of the particles feel native and remain extensible, without requiring complex and expensive marshalling.

Finally we come to the bulk of the code, which is the tick update handler. This does not do too much in terms of logic, however. We simply iterate over the particle vector, checking the current lifetime. If the particle is still alive, we call the update-particle-state function. If this succeeds, we increase the write-offset into the particle vector. If it does not succeed, or the particle is dead, the write-offset remains the same, and the particle at that position will be overwritten by the next live, successful update. In effect this means that live particles are always at the beginning of the vector, allowing us to cut off the dead ones with the live-particles count. Then we simply construct as many new particles as we should without overrunning the array, and finally we upload the buffer data from RAM to the GPU using update-buffer-data, which in effect translates to a glBufferSubData call.

Now that we have this base protocol in place we can define a simple standard emitter, which should provide a much easier interface. Okay! Again we define a new structure, this time including the base particle so that we get the lifetime field as well. We add a location and velocity onto this, which we'll provide for basic movement. Then we define a subclass of our emitter to provide the additional defaults.
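The write-offset compaction in the tick handler can be sketched in a few lines of C++ (the names `Particle` and `tickParticles` are mine for illustration, not Trial's). The point is that survivors are written back to the front of the array, so the returned live count doubles as the instance count for the draw call:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a per-particle record with a lifetime.
struct Particle {
    float age    = 0.0f;
    float maxAge = 0.0f;
    float y      = 0.0f;
};

// One tick: walk the vector, keep live particles packed at the front.
// Returns the new live count; everything past it is stale data.
std::size_t tickParticles(std::vector<Particle>& particles,
                          std::size_t liveCount, float dt) {
    std::size_t writeOffset = 0;
    for (std::size_t read = 0; read < liveCount; ++read) {
        Particle p = particles[read];
        p.age += dt;
        if (p.age < p.maxAge) {           // still alive after the update?
            p.y += 1.0f * dt;             // some per-particle state change
            particles[writeOffset++] = p; // write survivor to the front
        }
        // Dead particles are simply not written back; the next survivor
        // overwrites their slot, so the front stays densely packed.
    }
    return writeOffset;
}
```

New particles would then be appended starting at the returned offset, capped at the array capacity, before the whole range is uploaded in one glBufferSubData-style call.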
Using this subclass we can provide some basic updates that most particle systems based on it will expect: an initial location at the origin, updating the location by the velocity, increasing the lifetime by the delta time of the tick, and returning whether the particle is still live after that. On the painting side we provide the default handling of the position. To do so, we first pass the three standard transform matrices used in Trial as uniforms, and then define a vertex shader snippet that handles the vertex transformation. You might notice here that the second vertex input, the one for the per-particle location, does not have a location assigned. This is because we cannot know where this binding lies ahead of time: the user might have additional vertex attributes for their per-particle mesh that we don't know about. The user must later provide an additional vertex-shader snippet that does define this.

So, finally, let's look at an actual use-case of this system. First we define an asset that holds our per-particle buffer data. To do this we simply pass along the name of the particle class we want to use, as well as the number of such instances to allocate in the buffer. We then use this, as well as a simple sphere mesh, to initialize our own particle emitter.

Then come the particle update methods. For the initial state we calculate a random velocity within a cone region, using polar coordinates. This will cause the particles to shoot out at various angles. We use a hash on the current frame counter here to ensure that particles generated in the same frame get bunched together with the same initial values. We also set the lifetime to be between three and four seconds, randomly for each particle. In the update we only take care of the velocity change, as the rest of the work is already done for us. For this we apply some weak gravity, and then check the lifetime of the particle.
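The cone-region velocity from polar coordinates looks roughly like this in C++ (a hypothetical sketch; `randomInCone` is my name, and Trial's version additionally hashes the frame counter so all particles spawned in one frame share the same values):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Random unit vector within a cone of half-angle `theta` around +Y,
// built from polar coordinates: an azimuth in [0, 2*pi) and a polar
// angle in [0, theta] measured from the cone axis.
void randomInCone(float theta, float out[3]) {
    float u = (float)std::rand() / (float)RAND_MAX;
    float v = (float)std::rand() / (float)RAND_MAX;
    float phi = u * 6.2831853f; // azimuth around the axis
    float t   = v * theta;      // polar angle from the axis
    out[0] = std::sin(t) * std::cos(phi);
    out[1] = std::cos(t);       // cone axis is +Y here
    out[2] = std::sin(t) * std::sin(phi);
}
```

Note that sampling the polar angle uniformly bunches directions toward the axis; for a visually even spread over the cone cap you would sample the cosine of the angle uniformly instead, but for a fireworks burst the bunching is usually fine.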
If it is within a certain range, we radically change the velocity of the particle in a random, spherical direction. In effect this causes the particles, which were bunched together until now, to spread out randomly. For our generator, we simply create a fixed number of particles every 10 frames or so. At a fixed frame rate, this should mean a steady generation of particle batches. Finally, in the two shader code snippets we provide the aforementioned vertex attribute binding location, and some simple colouring logic to make the particles look more like fireworks. The final result of this exercise is this:

Quite nice, I would say. With this we have a system that allows us to create very different particle effects with relatively little code. For Leaf, I intend to use this to create 2D sprite-based particle effects such as sparks, dust clouds, and so forth. I'm sure I'll revisit this at a later date to explore these different application possibilities. For next week, though, I feel like I really should return to working on the UI toolkit. I have made some progress in thinking about it, so I feel better equipped to tackle it now.
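The mid-life "burst" needs a direction drawn uniformly from the unit sphere, which is a one-liner trap: picking all three components at random and normalizing biases toward the cube corners. A standard unbiased construction (again a hypothetical sketch, not Trial's actual code) draws the z component uniformly in [-1, 1] and the azimuth uniformly in [0, 2*pi):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Uniform random direction on the unit sphere, suitable for scattering
// a bunched batch of particles in all directions at once. Uniform z plus
// uniform azimuth yields a uniform distribution over the sphere surface.
void randomOnSphere(float out[3]) {
    float z   = 2.0f * (float)std::rand() / (float)RAND_MAX - 1.0f;
    float phi = 6.2831853f * (float)std::rand() / (float)RAND_MAX;
    float r   = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
    out[0] = r * std::cos(phi);
    out[1] = r * std::sin(phi);
    out[2] = z;
}
```

Scaling the result by the desired burst speed and assigning it as the new velocity inside the lifetime window gives the firework-style spread described above.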
  9. Continued adventures with the oneLoneCoder pixelGameEngine (youtube). With the current game challenge, time has started to arc over, and on the surface I'm not looking all that great. Under the hood, though, things are playing well with each other. I have most of my render system in place, most update behaviors ready, and still have some performance headroom, although I dipped under standard monitor speed (60Hz) sometime last week in test runs. So, two weeks. That's what I get to finish gameplay, leaving something for cleanup and GC blog/project setup. The original plan is still good and has been taken from concept to reality. We're here. Got balls, they go boom, they dig a hole and do a pretty splash on their way out the door. Perfect.

This last week a terrain smoothing feature was added. A carve operation, as well as the damage the particle system does, leaves pointy/jagged peaks. The smoothing provides landslide behavior and drops stray leftover pixels into the pile. As soon as I realized that, I saw a water implementation peeking its head out of the sand. So again, here we are. The new yellow object is the puncture pin. Balls and pin against the character and his pipes. If the balls get the pin down to the pipe, we start flooding for an effective game over. That's where my idea went. Did I mention "not ...all that great"? Working on that mechanic for the next day or two. (I'll be back) (back)

What I appreciate in this video is what you get just from std::rand and mod for polarity from the screen vertical position, plus a higher-resolution chaos threshold to act or not. But watching std::rand() cycle through is not a bad thing at all in my opinion, especially for fire and forget. The initial stab at it was nice. Tomorrow, another particle system type as the spawner. As of now, I set an int in my world array to the water ID and update the array like so. Pretty easy and a nice addition... now we're making progress.

if (terrain.isStable == true) // not used? always fires
{
    static int tty = terrain.nMapHeight - 1; // search row cursor location (y axis marker)
    for (int n = 0; n < 200; n++) // update this many rows of pixels
    {
        tty--; // advance to the next row for this iteration
        for (int x = 0; x < terrain.nMapWidth; x++)
        {
            // scan the single row this frame
            int index = (tty * terrain.nMapWidth) + x; // this location's data index
            int indexDown = index + terrain.nMapWidth; // the pixel below
            if (terrain.map[index] != 0)
            {
                // we're a pixel. is the cell below empty?
                if (terrain.map[indexDown] == 0)
                {
                    // fall - swap down one pixel
                    terrain.map[indexDown] = terrain.map[index];
                    terrain.map[index] = 0;
                }
                if (terrain.map[index] == 1)
                {
                    // dirt slide
                    float rnd = (float)std::rand() / (float)RAND_MAX;
                    bool bChaos = rnd > 0.95f; // determine chaos for horizontal movement
                    // is the cell down and left free?
                    if (terrain.map[indexDown - 1] != 1 && bChaos)
                    {
                        // settle left - swap (down/left) one pixel
                        terrain.map[indexDown - 1] = 1;
                        terrain.map[index] = 0;
                    }
                    // no, is the cell down and right free?
                    else if (terrain.map[indexDown + 1] != 1 && bChaos)
                    {
                        // settle right - swap (down/right) one pixel
                        terrain.map[indexDown + 1] = 1;
                        terrain.map[index] = 0;
                    }
                }
                if (terrain.map[index] == 2)
                {
                    // water slide - alternate flow direction per row
                    int sign = (tty % 2 == 0) ? -1 : 1;
                    if (terrain.map[index + sign] == 0)
                    {
                        // flow sideways - swap one pixel
                        terrain.map[index + sign] = 2;
                        terrain.map[index] = 0;
                    }
                }
            }
        }
        if (tty == 0) // top row tested. reset to bottom to restart scan.
            tty = terrain.nMapHeight - 1;
    }
}

Reaching another week, with one remaining and some fluff. How are we doing? pffttt.... oh boy. Yes and no. In a way, like that. I'm missing a new and isolated collision check, but the plan is improving. I have a lose event and restart working fine. Making progress on the win aspect, which is one trigger away.
The idea is to get at least one red ball over to the other side of the scrolling play area, where a collector of sorts awaits. My level advance logic already works. The power-up is in; I just haven't had the urge to add the cheesy AABB test, or worse, just a length proximity trigger. Or imagined what I'm going to be powering up yet. This week's visual and the few issues I'm going after right now. Picking up speed... is there enough for a safe landing? We'll see. A second video has been added showing improved gameplay and the win case. An ace fell out of my sleeve when testing goal placement for the level progression. An interesting twist, to have to un-bury your target first... from here on out I'm telling the story differently. But the block to steer and not take damage is a bonus. pffttt... everything from a step back has been a bonus with the bogus plan I had. And lastly now, character involvement. In order to satisfy the scrolling aspect, the camera follows. In order to pretend like he has purpose, a shield has been granted to sink those explosive balls deep. This will close this entry. I now have power-ups and an Easter egg to go. I plan on tool tips (gameplay tips) in some small capacity, and on the menu screen some clues on how this thing is played. Thanks for hanging out.
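For reference, the "cheesy AABB test" mentioned above really is only four comparisons; a minimal 2D sketch (the `AABB` struct and `overlaps` name are mine, not from the project):

```cpp
#include <cassert>

// Minimal axis-aligned bounding box, position plus size.
struct AABB { float x, y, w, h; };

// Two boxes overlap iff their ranges intersect on both axes.
bool overlaps(const AABB& a, const AABB& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}
```

Compared to a length-based proximity trigger, this matches rectangular sprites exactly and costs no square root, which is why it is the usual first collision check in 2D games.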