Showing results for tags 'Pathfinding'.
Found 29 results

  1. My game map is basically a 2D grid containing solid tiles (walls or other obstacles that can't be passed) and normal tiles that can be passed. I need an enemy to reach a target point (player, cover, player base, ...). But grid pathfinding algorithms only allow movement in 45-degree increments (sides or diagonals), while there is obviously a shorter way, for example moving at 60 degrees instead of a combination of diagonal and side steps, like in the image. So, how can I achieve this? The main reason is not the shorter path but the way the enemy moves: it's a wheeled vehicle, so it needs to slow down to rotate...
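One common approach to the any-angle problem described above is to run an ordinary grid search and then smooth the result with line-of-sight checks, so the final path contains straight segments at arbitrary headings. A minimal sketch, assuming a hypothetical walkable(x, y) tile query and a waypoint list produced by A*; note the Bresenham-style line walk below can cut corners on diagonal steps, so a wider vehicle may need a stricter test:

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct Point { int x, y; };

    // Hypothetical tile query supplied by the game; true if the tile can be crossed.
    bool walkable(int x, int y);

    // Walks the grid line from a to b and reports whether every visited tile is free.
    bool lineOfSight(Point a, Point b) {
        int dx = std::abs(b.x - a.x), dy = std::abs(b.y - a.y);
        int sx = (a.x < b.x) ? 1 : -1, sy = (a.y < b.y) ? 1 : -1;
        int err = dx - dy;
        while (true) {
            if (!walkable(a.x, a.y)) return false;
            if (a.x == b.x && a.y == b.y) return true;
            int e2 = 2 * err;
            if (e2 > -dy) { err -= dy; a.x += sx; }
            if (e2 <  dx) { err += dx; a.y += sy; }
        }
    }

    // Drops intermediate waypoints whenever a later waypoint is directly visible.
    std::vector<Point> smoothPath(const std::vector<Point>& path) {
        if (path.size() < 3) return path;
        std::vector<Point> out;
        out.push_back(path.front());
        std::size_t anchor = 0;
        for (std::size_t i = 2; i < path.size(); ++i) {
            if (!lineOfSight(path[anchor], path[i])) {
                out.push_back(path[i - 1]);
                anchor = i - 1;
            }
        }
        out.push_back(path.back());
        return out;
    }

The smoothed waypoints can then be fed to whatever steering handles the vehicle's slow-down-to-rotate behaviour; Theta* achieves a similar result during the search itself.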
  2. I'll jump straight into the context: I have a 2D grid map stored in a simple 2D object array. Every object and agent exists on this 2D array (so their positions are discrete and bound to their location in the array). I'm expecting agents to move quite a bit, so the grid is dynamic in nature. The game itself is fully deterministic and based on discrete operations, the goal being to make it multiplayer portable (with the lockstep method). I'd like to populate the map with large impassable structures, concave and convex, such as buildings and lakes. The agents are rigged with an FSM for decision making. I want them to be able to act on these decisions in a believable way, and one of the challenges is to give them A > B navigation that works and looks believable. I'd like to have as many active NPCs in the world as possible, and right now the biggest hurdle is finding the right pathfinding algorithm. I'm not sure about my update tick budget, but right now I'm thinking anything reliably under ~500 ms would be acceptable (I'm making the game "turn based" for testing purposes, and wish to later make it real time using a good amount of linear interpolation on the display layer). At this time I have a 100x100 map for testing, with hundreds of agents (with almost fully implemented AI, lacking pathfinding and collision detection), and a tick time of 10 ms. What pathfinding algorithm and implementation would suit this situation? I know A* would be okay-ish, but since the grid changes every tick and the number of agents is quite large, I'm not sure it would be the best (short of computing the path once, checking whether a collision occurred at each travelling step, and recomputing it then). I'm trying to keep everything simple, and I am willing to reduce the scope of the simulation regarding the number of active agents to do so. Any idea or advice? FYI, I'm not sure if I'm doing this right, but every agent has a command list, which contains a list of "todos" populated by the FSM and run each tick (the first command in line). The pathfinding algorithm would be contained in the command object, so the flexibility is quite large (it could be broken down into smaller steps).
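The "compute once, validate each step, replan only when blocked" idea mentioned above is cheap enough to be worth trying before anything fancier, and it stays deterministic for lockstep as long as every client runs the same code. A minimal sketch, where isPassable() and findPathAStar() are hypothetical stand-ins for the game's grid query and planner:

    #include <deque>

    struct Cell { int x, y; };

    bool isPassable(const Cell& c);                      // hypothetical grid query
    std::deque<Cell> findPathAStar(Cell from, Cell to);  // hypothetical planner

    struct Agent {
        Cell pos;
        Cell goal;
        std::deque<Cell> path;   // remaining waypoints; front() is the next step

        void tick() {
            if (path.empty()) path = findPathAStar(pos, goal);
            if (path.empty()) return;                // no route found this tick
            if (!isPassable(path.front())) {         // another agent moved into the way
                path = findPathAStar(pos, goal);     // replan only when actually blocked
                if (path.empty()) return;
            }
            pos = path.front();
            path.pop_front();
        }
    };

If replans become too frequent with hundreds of agents, the usual next steps are hierarchical pathfinding (plan over coarse regions, refine locally) or flow fields shared by agents heading to the same goal, since only the local neighbourhood really changes every tick.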
  3. Hello GameDev, in this blog I'll be covering the development life-cycle thus far, provide an update on dynamic structures, and introduce scripted events.

Development life-cycle. I think the time has come at last. What time is that exactly? Well, let me explain... A couple of months back I was thumbing through my blog series and I was startled. I noticed that it appeared as though I had been repeating myself for the better part of 4 years, coming up on 5 now. I wondered if maybe I was stuck in some sort of mental recursive loop, forever redoing, rebuilding, remaking. Quickly though, I realised I was reading the situation incorrectly. Even though it may seem I've been talking about the same stuff for this long, I've come to realise there have been 3 distinct phases this project has gone through, and they are:

Experimentation. At the very beginning, when I first started using THREE.js and node.js, I played around with them and worked out a rough idea of how to get certain things to work a certain way. I made different colored boxes to represent different players. Each client could move one box and node.js would coordinate the movement. I had also written my first dynamic-wall functions. I remember they were atrociously slow and the boxes could not move around them. I learned which ideas were worth pursuing and which needed to be scrapped.

Integration. Eventually a time came when I felt comfortable enough with my abilities in javascript that I decided I needed to start combining many of the different features I had made. I had boxes that could move around, walls that could be built, and eventually I wrote my first function to merge the two, getting the boxes to move around walls, that is. That too was extremely slow. Integrating these separate elements gave me a greater appreciation for logic that can be utilised across different applications, if that makes any sense.

Stability. Then in 2016 I decided it was time to kick things up a notch. I didn't know it at the time, but my first attempt at creating a robust function was probably the best thing to happen to this project. At the time I just wanted to finally make a function that wasn't going to break on me. I remember spending weeks tackling all the errors, wondering: does it ever end? But after I had that function executing like clockwork, it dawned on me just how important stable functions are, especially if I want a framework for a game that is to be dependable. So there you have it: Stable Integrated Code. Of course there has always been a gradual transition from one phase to the next, and even today I find myself experimenting; after all, experimenting can breathe fresh life into a project.

Dynamic Structures. I'm incredibly proud to show off my first polished video showing the latest and greatest dynamic structure tool. In the video, if you're interested, I build a structure from scratch so you can see how user-friendly I tried to make it.

Scripted Events. I would also like to share with you a video that showcases my first attempt at a scripted event. The video is the one I currently use on my project page, updated earlier this month. If you haven't checked it out, please do so. You can skip the first 25 seconds or so. It's an idea I came up with a few months back for promoting the project. I've been thinking that since these Simulin live in a simulated world, I really want to play on that idea. So this is just one example of a scripted event. But there is no reason why the player can't also be informed of the simulated nature of the world. For instance, the world is much too small come game time for there to be tons of wildlife roaming around, but I could have intermittent scripted events where wildlife materialise and stampede across a plain only to de-materialise once again. And I have many more ideas for scripted events.

Conclusion. I feel confident enough now and think it's time to move one full foot into game development. I've talked about transitioning in the past, but I think enough is enough with the framework. I will still work on it until it has fully incorporated all the features I want, but lately I've been getting more and more excited to just get on with the game. Parting thought: if I didn't have somebody somewhere to share all this with, I never would have come this far. Thank you all for being a supportive and curious bunch!
  4. Hello GameDevs, Context: I would consider myself a novice and am working on an RTS project to practice my C++ and game programming skills. It is a tech demo for practicing optimisation: I am aiming to have it contain as many basic agents with rudimentary AI as possible that can be controlled, and to optimise the system to allow for the largest number of these agents while still maintaining 60 FPS as a benchmark. Problem: I am aiming to implement flow fields for pathfinding and have managed a basic implementation with flow field following and no steering behaviour as of yet. However, I have run into a design decision I am unsure of, and was wondering if anyone could advise me on some potential architecture choices. When implementing goal-based vector field pathfinding, what would be the most efficient way of keeping the flow field data when manipulating many clusters of agents? Current thinking: It seems inefficient to recalculate per cluster of agents unless a new goal is set for them, so I would have to maintain the flow field at least until the goal is met. I was considering creating a data structure like a map that indexes each generated and maintained flow field based on whether some agent clusters are still trying to reach that goal, after which I would terminate it. Then update each cluster every 1 to 2 seconds in order not to throttle the CPU. Currently the algorithm for updating the flow field is rudimentary and not optimised. Each flow field currently takes up about 11MB (31x31, for testing purposes) of memory, and I am concerned that I could run into a scenario that takes up a lot of memory if enough particularly small clusters of agents are all trying to move across the map, especially when the map gets bigger. Question: Could you provide any advice on potential architecture choices that would be ideal in this scenario, or direct me to some implementations/papers? I have so far only managed to find implementations and demos of flow fields with 1 goal and 1 cluster in mind, but not put into practice in a real game with multiple clusters of units simultaneously moving around the map. Many thanks.
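One way to share and bound flow-field memory in the situation above is to cache fields keyed by goal cell and let clusters hold shared ownership, so a field is freed automatically once the last cluster heading to that goal is done. A minimal sketch under those assumptions; FlowField and buildFlowField() are hypothetical placeholders, not a known library API:

    #include <map>
    #include <memory>

    struct GoalKey {
        int x, y;
        bool operator<(const GoalKey& o) const { return x < o.x || (x == o.x && y < o.y); }
    };

    struct FlowField { /* per-cell flow directions, integration costs, ... */ };

    std::shared_ptr<FlowField> buildFlowField(GoalKey goal);   // hypothetical generator

    class FlowFieldCache {
        std::map<GoalKey, std::weak_ptr<FlowField>> cache;
    public:
        // Clusters keep the shared_ptr; when the last cluster reaches the goal and
        // drops its pointer, the field is destroyed and the weak_ptr expires.
        std::shared_ptr<FlowField> acquire(GoalKey goal) {
            auto it = cache.find(goal);
            if (it != cache.end())
                if (auto existing = it->second.lock()) return existing;
            auto field = buildFlowField(goal);
            cache[goal] = field;
            return field;
        }
    };

Two clusters ordered to the same goal then share one field, and the 1-2 second refresh can be done per cached field rather than per cluster; for very large maps, a common refinement is to only build the field sectors the agents actually touch.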
  5. Hello All, I am new to Angelscript and I want to create steering behaviors for enemy AI. I haven't found any Angelscript examples on the web for Seek, Flee, Follow, Arrive, Collision avoidance, etc. so far. I have successfully written an A* routine, but that's about it. Does anyone know of any steering behaviors written in Angelscript? Can you post a link? Since I wasn't able to find any steering behaviors written in Angelscript, I decided to try to convert a "Flee" behavior written in Processing. Unfortunately there is a command I can't seem to find in Angelscript that the original code uses: specifically, the "limit" command. I think it is equivalent to a "SCALAR" command? I am not sure. Anyone know? Thanks in advance for any help. This is the code for the "Flee" behavior below:

    class Vehicle {
        PVector location;
        PVector velocity;
        PVector acceleration;
        // Additional variable for size
        float r;
        float maxforce;
        float maxspeed;

        Vehicle(float x, float y) {
            acceleration = new PVector(0, 0);
            velocity = new PVector(0, 0);
            location = new PVector(x, y);
            r = 3.0;
            // Arbitrary values for maxspeed and force; try varying these!
            maxspeed = 4;
            maxforce = 0.1;
        }

        // Our standard "Euler integration" motion model
        void update() {
            velocity.add(acceleration);
            velocity.limit(maxspeed);   // <<<------------ THIS LINE USES THE COMMAND "LIMIT"
            location.add(velocity);
            acceleration.mult(0);
        }

        // Newton's second law; we could divide by mass if we wanted.
        void applyForce(PVector force) {
            acceleration.add(force);
        }

        // Our seek steering force algorithm
        void seek(PVector target) {
            PVector desired = PVector.sub(target, location);
            desired.normalize();
            desired.mult(maxspeed);
            PVector steer = PVector.sub(desired, velocity);
            steer.limit(maxforce);
            applyForce(steer);
        }

        void display() {
            // Vehicle is a triangle pointing in the direction of velocity; since it is
            // drawn pointing up, we rotate it an additional 90 degrees.
            float theta = velocity.heading() + PI/2;
            fill(175);
            stroke(0);
            pushMatrix();
            translate(location.x, location.y);
            rotate(theta);
            beginShape();
            vertex(0, -r*2);
            vertex(-r, r*2);
            vertex(r, r*2);
            endShape(CLOSE);
            popMatrix();
        }
    }
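For what it's worth, "limit" there is not a scalar operation: it clamps the vector's magnitude to a maximum while keeping its direction, which is how maxspeed and maxforce are enforced. Angelscript has no built-in vector type (the host application registers one), so the exact syntax will vary; a minimal sketch of the operation, shown in C++-style code that translates almost directly to Angelscript:

    #include <cmath>

    struct Vec2 { float x, y; };

    // Clamp the magnitude of v to maxLen, preserving direction.
    Vec2 limit(Vec2 v, float maxLen) {
        float len2 = v.x * v.x + v.y * v.y;
        if (len2 > maxLen * maxLen) {
            float scale = maxLen / std::sqrt(len2);
            v.x *= scale;
            v.y *= scale;
        }
        return v;
    }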
  6. Let's say, at an abstract level, the path A->B->C->D->E is valid, but the agent must choose portal #1 to reach E... Presumably the agent has chosen portal #2, goes through B, C and D, and finally ends up stuck at D, unable to move over to E... The whole computation is wasted. How do I avoid this problem? Thanks, Jack
  7. I am not sure I can ask questions about a specific library here, but here goes. I'd like to tag some polys in a navigation mesh that correspond to grass or road etc. I can give an extent to do so, or, alternatively, I can directly feed a geometry in and the polys are tagged that way. But I am looking into other approaches, such as allowing the user to tag the polys using a text file or a bitmap file (the way heightfields are done). Say I define an area map as a grayscale image with values ranging from 0-255; if, for example, the value of the first char is 0, then I can map this index to a certain place in the navigation mesh and say it is walkable ground, etc. But unlike heightfields, where you define an image and the result is some terrain, when you start off with a bitmap for an area map, what do you end up with? You see, I already had the geometry, so the area map probably doesn't make sense here, and the same goes for the text file... Any ideas? Jack
  8. When my program was generating navigation meshes for my geometry, with an extent of 5000x5000 (m) and a cell size of 5.0 meters, it had to generate 250,000 cells, which makes my computer halt. The target object is a truck, which has a radius of about 20 meters. Should I give the cell size a value of 20? I don't really want to use this navigation mesh for only one type of vehicle; I also want to use it for vehicles like cars, which have a radius of only about 5 meters... Generation speed is one problem, and memory consumption is another. Any ideas how to improve this? Thanks, Jack
  9. Hi, I was reading in Game AI Pro how they implemented Supreme Commander's pathfinding, and came to one question. For the integration of the cost field they are using the Eikonal equation for traversing the areas. They recommended the Fast Iterative Method and its parallel version for solving this equation. However, in all the basic tutorials of flow field navigation that I found, the integration of the cost field uses a simple grass/brush fire algorithm. My question is: what would be the difference if they had just used the grass fire algorithm in the game? I guess the reason was some quality/performance trade-off. Fortunately the Fast Iterative Method is very easy to implement, so I can compare the performance, but what I don't understand is what the "quality" benefit is.
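For comparison, the grass/brush fire integrator the tutorials use is essentially a breadth-first flood (or Dijkstra, once cells have different traversal costs) outward from the goal. A minimal sketch on a uniform-cost grid, assuming a hypothetical blocked(x, y) query:

    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    bool blocked(int x, int y);   // hypothetical terrain query

    // Fills cost[y * w + x] with the number of steps from the goal (4-connected).
    std::vector<float> integrateBrushfire(int w, int h, int goalX, int goalY) {
        const float INF = std::numeric_limits<float>::max();
        std::vector<float> cost(w * h, INF);
        std::queue<std::pair<int, int>> open;
        cost[goalY * w + goalX] = 0.0f;
        open.push({goalX, goalY});
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        while (!open.empty()) {
            auto [x, y] = open.front();
            open.pop();
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || blocked(nx, ny)) continue;
                if (cost[y * w + x] + 1.0f < cost[ny * w + nx]) {
                    cost[ny * w + nx] = cost[y * w + x] + 1.0f;
                    open.push({nx, ny});
                }
            }
        }
        return cost;
    }

The quality difference, as usually described, is that brushfire/Dijkstra measures distance along grid edges, so the cost field carries a 4- or 8-connected directional bias and the resulting flow directions show staircase artifacts, whereas an Eikonal solver (Fast Marching / Fast Iterative Method) approximates true continuous distances, giving smoother, straighter flow around obstacles.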
  10. Hi everyone! I have spent several months on my first hobby game project and decided to push it out to get some feedback. Would really appreciate it if you'd give it a try! It is live (and free, of course) at http://www.airline-club.com ! The game is still at a very early development stage, but I think it is quite playable. My goal is to have a simulation good enough to generate data similar to the real world's, while giving the flexibility to simulate new outcomes based on the dynamics of the conditions. For example, one can start a really successful airline in a city, which can actually alter the demand distribution so it differs from what we observe in the current world (a part I have not implemented yet, but it is on the list for more end-game content - airlines will be able to make heavy investments in a city/airport, such as funding airport expansion, city projects to enhance a city's attributes, etc.). Several screenshots: Virtual passengers taking various routes on a flight going from San Francisco to Tokyo. The main highlights are: 1. Real passengers that make decisions on what route to take based on various factors: awareness of your airline, loyalty to your airline, ticket pricing, seat class, service quality, # of connections, total travel time, etc. This is different from several games I played before, which appeared to oversimplify flight demand with no connecting flights. Having a realistic simulation here provides many different strategic options. 2. Thousands of airports with basic data driven by real geographical information (surrounding city population, income, airport scale, etc.). 3. Demand from airport to airport is calculated dynamically (not directly based on real-world data), which means various changes in game conditions (not yet implemented; conditions are static for now) will affect demand accordingly. 4. Reactive UI - not a UI expert here, but at least I tried to make the code run fast and the control flow reasonable. A quick clip of the gameplay can be found here: https://uc50f3ae07e58d5abedb49ec99d6.dl.dropboxusercontent.com/cd/0/inline/AJusVsuSNgpWPKZsgBp7Eoo_WHboR0yZkArO4N4CewWG2580AOWDvIEbEruILjNmsgoLXaUqs3uJGMojsALOgz0B8StIMe2s0XTqvY85AyUzVMKS2EDyXq6ZGzhobshfTGFJm02puIcd5zkGg7IpsGYEfcP0WWK40o6UpReN67I1ok52wI11m04pZv4Zp9JAydQ/file Many thanks for your attention again and hope to see you in the game soon o/
  11. Hi, In a game like Stronghold, with many workers going back and forth from a workplace to a drop-off point for example, would I store the path and then use it backwards so I don't have to redo the pathfinding all the time? If the moving units in the city don't block movement, very often I could just reuse the old path. Only if I run into a wall (if the player constructs a new building in the path of the old workers) would I recalculate the path. Workers in the old Age of Empires, for example, do they pathfind every time they go back to drop off gold in the storage? Thanks! Erik
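A minimal sketch of the reuse idea described above: keep the outbound path, walk it in reverse for the return trip, and only replan when a step turns out to be blocked (e.g. by a newly placed building). Tile, isPassable() and plan() are hypothetical placeholders, not any particular engine's API:

    #include <vector>

    struct Tile { int x, y; };

    bool isPassable(Tile t);                       // hypothetical grid query
    std::vector<Tile> plan(Tile from, Tile to);    // hypothetical pathfinder

    struct Worker {
        std::vector<Tile> outbound;   // workplace -> drop-off, computed once

        std::vector<Tile> returnPath(Tile dropOff, Tile workplace) {
            std::vector<Tile> back(outbound.rbegin(), outbound.rend());
            for (Tile t : back)
                if (!isPassable(t))               // a wall/building appeared on the route
                    return plan(dropOff, workplace);
            return back;                          // reuse the old path in reverse
        }
    };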
  12. Player. Replaced the ball with a real character model. At the moment I have just a few character models with two animations (run and idle): Video. The character was made with MakeHuman and the animations are from Mixamo. The workflow is straightforward and (more or less) painless: In MakeHuman, export the character as .mhx2. Import it in Blender; the MakeHuman Blender plugin has an MHX2 importer. If needed, apply transformations, but if you use MHX2 I think this is not needed. Export from Blender as .fbx. Upload the FBX file to Mixamo. Select the animation. Download the FBX file. Import it in Blender. Export the character with the Urho3D-Blender plugin. I would recommend exporting a Prefab so you can load the whole thing with materials etc. at once.

NPCs. RPGs (and other games) usually have NPCs - non-player characters; well, they are playing, but they are computer controlled. This game should be largely scriptable. There are different game formats and maps, and each should differ only in the script that is executed. A game script could look like this:

    function onStart()
      -- self is the this pointer
      -- Spawn an NPC and set the position.
      local smith = self:AddNpc("/scripts/creatures/npcs/smith.lua")
      if (smith ~= nil) then
        local x = -6.71275
        local z = 12.5906
        local y = self:GetTerrainHeight(x, z)
        smith:SetPosition(x, y, z)
        -- Here we just use the Y-axis, internally it uses Quaternions
        smith:SetRotation(180)
      end
      local merchant = self:AddNpc("/scripts/creatures/npcs/merchant.lua")
      if (merchant ~= nil) then
        local x = 4.92965
        local z = 12.2049
        local y = self:GetTerrainHeight(x, z)
        merchant:SetPosition(x, y, z)
        merchant:SetRotation(180)
      end
    end

    function onStop()
    end

    function onAddObject(object)
      print("Object added: " .. object:GetName())
    end

    function onRemoveObject(object)
      print("Object removed: " .. object:GetName())
    end

    function onPlayerJoin(player)
      player:AddEffect(empty, 1000, 0)
      print("Player joined: " .. player:GetName())
    end

    function onPlayerLeave(player)
      player:RemoveEffect(1000)
      print("Player left: " .. player:GetName())
    end

    -- Game Update
    function onUpdate(timeElapsed)
      -- print(timeElapsed)
    end

As with other scripting APIs, there are events that are called by the program (from C++), and there are objects that have methods accessible from the script. An NPC script is just as simple:

    -- Object stats/properties
    name = "Smith"
    level = 20
    modelIndex = 5     -- Smith body model
    sex = 2            -- Male
    creatureState = 1  -- Idle
    prof1Index = 1     -- Warrior
    prof2Index = 0     -- None

    function onInit()
      return true
    end

    function onUpdate(timeElapsed)
      -- Do something intelligent!
    end

    -- self was selected by creature
    function onSelected(creature)
      print(creature:GetName() .. " selected me, the " .. self:GetName() .. " :D")
    end

    -- creature collides with self
    function onCollide(creature)
      -- Testing Octree query
      local objects = self:QueryObjects(2.0)
      print(type(objects))
      for i, v in ipairs(objects) do
        print(i, v, v:GetName())
      end
    end

Navigation. Navigation is a difficult topic. Fortunately there is Recast & Detour, a library for finding paths with a navigation mesh. Currently an agent can go to a static position or follow a moving object. Technically it's the same, but the path isn't recalculated when the object is not moving. I've read somewhere that Detour can calculate incomplete paths, which would be useful for following objects, but I haven't figured that out yet. Video. Detour uses navigation meshes created with Recast. The image below shows the generated navigation mesh based on the heightfield of this map. As mentioned earlier, the heightfield is just a bitmap where brighter colors are higher altitudes than darker colors. A mesh is created from this bitmap with a custom command-line program, and Recast uses that mesh to generate the navigation mesh. To be continued...
  13. Just wrapped up organising the code. Originally, when I set out to program this project, I had the idea that would-be followers of my efforts could just copy and paste quick little bits into the base files from my Introduction to Three.js post. However, I'm not that organised and don't have the level of skill to pull that off. I've had to go through many older functions and modify this or that. So you'll have to bear with me, but if you want to stay up to date with this project and have a copy of the code as of 'today', you'll have to just download everything here: TD.zip. Current functionality: Toggle Screen Size; if you press ' f ' you'll toggle the size of the screen. Walls; if you press ' w ' you'll be able to select locations for walls. Moving your mouse along the grid, you'll see the grid line the cursor is closest to turn yellow. If you left click the mouse button you'll create a wall. If you right click on a line under a wall that already exists, the wall will be removed. Walls can be created between adjoining cells/squares/spaces. When you select a grid line that would block the path to an exit, the line will turn red, indicating that a wall can't be built there. If you try to left click, nothing will happen. Towers; if you press ' t ' you'll be able to select the locations for towers. Moving your mouse along the grid, you'll see a large 2x2 square appear in cyan if you select a location that doesn't envelop any surrounding wall. The large square will turn red if the location isn't permitted. As of right now the tower placement function doesn't take into account towers blocking the path to an exit. If you want to switch between walls and towers, you must de-select your previous choice by pressing the same keyboard key. Clear as mud? Hope not. Here's a video showing what's been added. Now that this is all done I can start adding towers and enemies. I'm going to scrap the Misty Mist Tower and instead replace it with the Tach Tower. It will shoot bullets that slow down an enemy with each successive hit. It's been done a bunch before, but unlike other games where the effect wears off, in this one the effect will be permanent. However, in order to dramatically reduce the speed of an enemy, it will have to be shot many times. Let me know what you think! Thanks for reading.
  14. So the game I'm working on is going to use rooms that are connected to each other by little holes, creating something somewhat similar to "Binding of Isaac", but with an organic room disposition rather than rooms placed on a grid with specific dimensions. To do this, I needed to search for a good level generation algorithm. I've found that the best fit is the BSP tree algorithm, which was traditionally used in old-school roguelike games.

Binary Space Partitioning Trees. The algorithm works by taking an area and splitting it in half, thus creating two new areas to be recursively processed by the algorithm. The algorithm can be run until we stumble upon an area of a specific dimension. This is what that algorithm can do: Traditionally, each area becomes a leaf, in which a random rectangular room is drawn. Afterward, each of these rooms is connected to the others using long corridors. This approach works in 2D, as the player always has a clear view of their surroundings. Here is an excellent reference on the subject: Basic BSP Dungeon generation.

Adaptations. However, because the game is in a first-person perspective, the corridors won't work, as the player can't really know their surroundings very well. We need to adapt the algorithm so that the player can feel comfortable while navigating the level. I've chosen to eliminate corridors and rooms altogether and instead use the leaves themselves: each leaf is connected to the others through small holes. Also, the BSP tree algorithm creates a web-like dungeon with no clear end or start, which is fine if you're making a traditional dungeon, but we want our levels to have a certain flow so that the player can quickly find their way out if they want to. The way I planned to do that is by transforming the BSP leaves into a kind of navmesh grid. Afterward, we just pick some positions and select the specific leaves that make up the path.

Creating the navmesh graph. First, before we can use the graph search algorithm, we need to build our graph. A BSP tree is still a binary tree, so using it to deal with connections is out of the question. We need to somehow get all the leaves created by the BSP algorithm and put them in a more flexible structure: enter the quadtree. Quadtrees are a kind of tree in which each node can have at most 4 children. This characteristic is quite useful when it comes to 2D rectangles. Here's a visual representation of a quadtree: With these kinds of trees, it's possible to get every leaf overlapping a specific rectangle. If, for a given room, we query a slightly bigger search zone, then we'll be able to find all of that room's neighbours. We can then connect everything up and finally launch our graph search using randomly picked rooms that are far enough from each other.

Do the pathfinding. I've already made a blog entry on pathfinding, so the procedure is almost the same... However, there are some differences here. One of the most important is that we add the concept of "hot" cells. When a specific cell is deemed "hot" and the graph search algorithm stumbles upon it, its cost becomes effectively infinite. That way, we tell the algorithm: "Do not pick this cell unless it's your last option". This makes for a somewhat imperfect path... but in our case, imperfect is perfect. Afterwards, we add all of the chosen rooms to a final list. All rooms in this list will be part of our level and will be rendered out later on.

Add more rooms. After we've picked the main rooms, we can then append bonus rooms to some of these main rooms if the player is lucky, not unlike hidden rooms in "Binding of Isaac"... Also, the game is sometimes going to have an "alternative path". These paths will try to be shorter than the main path and overall have more bonus rooms and loot in them. I've planned for the player to need to fulfil some requirement to enter this path. Because we already created a graph of every possible room, it's just a matter of luck whether a room has a special room connected to it.

Rendering it out. Once the room configurations were made, we now need to create the geometries and collisions of the level. Before going with the BSP approach, I tried to use cellular automata to generate cave-like structures... It wasn't controllable enough, but I've kept some of the code from it (mainly its geometry generation). Here's the cellular automata tutorial. Basically, we render each room pixel by pixel. We then cut those "holes" I've talked about earlier, and voilà. Here, I've coloured each room to give a better idea of which kind of room each one is. Blue rooms are part of the alternate path I've mentioned before. The green and red rooms represent the starting and ending rooms respectively. Yellow rooms are bonus rooms. Afterward, it's only a matter of placing props and enemies. This method of creating levels is cool. It can make the game more flexible and interesting. It also all depends on luck, which is a stat that can change in-game.
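A minimal sketch of the "hot cell" trick described above, as a plain Dijkstra search over the room graph where hot rooms carry a huge (but finite) step cost so they are only traversed when nothing else works. The names (Node, Edge, hot) are hypothetical placeholders for illustration, not the blog's actual code:

    #include <algorithm>
    #include <functional>
    #include <limits>
    #include <queue>
    #include <vector>

    struct Edge { int to; };
    struct Node { std::vector<Edge> edges; bool hot = false; };

    std::vector<int> searchWithHotCells(const std::vector<Node>& graph, int start, int goal) {
        const float HOT_PENALTY = 1.0e6f;   // effectively "infinite", but a route still exists
        std::vector<float> dist(graph.size(), std::numeric_limits<float>::max());
        std::vector<int> prev(graph.size(), -1);
        using QItem = std::pair<float, int>;
        std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> open;
        dist[start] = 0.0f;
        open.push({0.0f, start});
        while (!open.empty()) {
            auto [d, n] = open.top();
            open.pop();
            if (n == goal) break;
            if (d > dist[n]) continue;      // stale queue entry
            for (const Edge& e : graph[n].edges) {
                float step = graph[e.to].hot ? HOT_PENALTY : 1.0f;
                if (dist[n] + step < dist[e.to]) {
                    dist[e.to] = dist[n] + step;
                    prev[e.to] = n;
                    open.push({dist[e.to], e.to});
                }
            }
        }
        std::vector<int> path;
        for (int n = goal; n != -1; n = prev[n]) path.push_back(n);
        std::reverse(path.begin(), path.end());
        return path;
    }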
  15. Subscribe to our subreddit to get all the updates from the team! Introduction. In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate on procedurally generated terrain. Until our last agile sprint, only the A* and string pulling algorithms were used for this. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe how I achieved a simple way for agents to not walk into walls.

Configuration: a 3D or 2D navigation mesh, as long as the 3D one is not cyclic; navigation cells and their polygonal edges, connections (other cells), shared edges (the line segment shared by two connected cells), centroids and normals; and an A* plus string-pulled (not tested without string pulling) path consisting of waypoints on the navigation mesh.

The Algorithm. The agent is the pink low-poly humanoid and the final destination is the flag. The ideal algorithm (yet unoptimized) would be to cast an oriented rectangle between each pair of consecutive waypoints, where its width is two times the agent's radius; think of the agent's center position being in the middle of that width. However, this algorithm is too complicated and too long to develop for our game, so I thought of another algorithm, which has its drawbacks but is more suited for our game. Psst, check this article if you haven't seen it yet. The algorithm is the following (see the sketch after this list):

    1. For each waypoint, pick the current one and the next one, until the next one is the last.
    2. Iterate over the current navigation cell's edges; the current cell is defined by the agent's 3D position. To do that, you need a spatial and optimized way to determine the closest cell to a 3D point. Our way is to first have an octree to partition the navigation mesh. After that, get all the cells that are close to the point plus an extra buffer. To find the cell that is the closest to the point, for each picked cell, project the position onto it; this can be done using their centroids and normals. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the smallest one.
    3. Convert each edge to a 2D line by discarding the Y component (the UP vector).
    4. For each side, left and right (defined by the agent's position and its direction towards the next waypoint), cast a 2D line that starts from the agent's position, goes towards one of the two perpendicular directions relative to the direction to the next waypoint, and has a length of the defined radius.
    5. If there's an intersection on a connection and it's on the shared part of the connection, then continue with the connected cell's edges.
    6. If there are any intersections other than #5, create a new waypoint before the next waypoint. This new waypoint's position is the intersection's position translated by a length of two times the radius, projected along the agent's current direction as a 2D line. The same translation is applied to the next waypoint.
    7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent, with the same length as the distance between the current and next waypoint.
    8. Check for #5.
    9. If there is an intersection on a connection and it's on the unshared part of the connection, then do #6 (unconditionally).
    10. If there's an intersection on a simple edge, then translate the next waypoint as described in #6.

Here's a video of the algorithm in action.
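The side-line casts in steps 4 and 7 above reduce to 2D segment-segment intersection tests once the Y component is discarded. A minimal self-contained sketch of such a test (names are illustrative, not the article's actual code):

    #include <optional>

    struct Vec2 { float x, y; };

    // Returns the intersection point of segments a1-a2 and b1-b2, if any.
    std::optional<Vec2> segmentIntersect(Vec2 a1, Vec2 a2, Vec2 b1, Vec2 b2) {
        Vec2 r{a2.x - a1.x, a2.y - a1.y};
        Vec2 s{b2.x - b1.x, b2.y - b1.y};
        float denom = r.x * s.y - r.y * s.x;
        if (denom == 0.0f) return std::nullopt;   // parallel or collinear: treat as no hit
        float t = ((b1.x - a1.x) * s.y - (b1.y - a1.y) * s.x) / denom;
        float u = ((b1.x - a1.x) * r.y - (b1.y - a1.y) * r.x) / denom;
        if (t < 0.0f || t > 1.0f || u < 0.0f || u > 1.0f) return std::nullopt;
        return Vec2{a1.x + t * r.x, a1.y + t * r.y};
    }

Whether the hit lies on the shared or unshared part of a connection (steps 5 and 9) can then be decided by comparing the returned point against the endpoints of the shared edge.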
  16. Hello GameDev, This entry is going to be a big one for me, and it's going to cover a lot. What I plan to cover of my recent development journey is the following: 1 - Goal of this blog entry. 2 - Lessons learned using Node.js for development and testing as opposed to the Chrome console. 3 - Linear path algorithm for any surface. 4 - Dynamic pathfinding using nodes for any surface, incorporating user-created, dynamic assets. 5 - Short-term goals for the game.

-- - -- - -- - -- - -- - -- - Goal of this blog entry - -- - -- - -- - -- - -- - --

My goal for this adventure is to create a dynamic pathfinding algorithm so that: any AI that is to be moved will be able to compute the shortest path between any two points on the surface of the globe; the AI will navigate around bodies of water, vegetation, and dynamic user assets such as buildings and walls; and it will compute the path in less than 250 milliseconds. There are a few restrictions the AI will have to follow. In the image above you can see land masses that are cut off from one another via rivers and bodies of water, each uniquely colored. If an AI is on a land mass of one color, for now, it will only be able to move to a location on the same colored land mass. However, there are some land masses that take up around 50% of the globe and have very intricate river systems. So the intended goal is to be able to have an AI on one end of the larger land mass and find the shortest path to the opposite end within 250 milliseconds. Currently my pathfinding algorithm can find the shortest path in anywhere from 10 ms and up, and when I say up, I mean upwards of 30 seconds, and that's because of the way I built the algorithm, which is in the process of being optimised.

-- - -- - -- - -- - -- - -- - Lessons learned using Node.js for development and testing - -- - -- - -- - -- - -- - --

As of this writing I am using Node.js to test the efficiency of my algorithms. This has slowed down my development. I am not a programmer by trade; I've taught myself the bulk of what I know, and I often spend my time re-inventing the wheel and learning things the hard way. Last year I made the decision to move my project over to Node.js for continued development; eventually it all had to be ported over to Node.js anyway. In hindsight I would have done things differently. I would have continued to use the Chrome console for testing and development, small scale, and only after the code was proven to be robust would I then port it over to Node.js. If there is one lesson I'd like to pass on to aspiring and new programmers, it's this: use a language and development environment that allows you, the programmer, to jump into the code while it's running and follow each iteration, line by line, as it's being executed - basically debugging. It is so easy to catch errors in logic that way. Right now I'm throwing darts at a dartboard, guessing what I should be sending to the console for feedback to help me learn more about logical errors using Node.js; see learning the hard way.

-- - -- - -- - -- - -- - -- - Linear path algorithm for any surface - -- - -- - -- - -- - -- - --

In the blog entry above I go into detail explaining how I create a world. The important thing to take away from it is that every face of the world has information about all surrounding faces sharing vertex pairs. In addition, every vertex has information about the faces that use it in their draw order, and about all vertices that are adjacent to it.
An example vertex and face object would look like the following:

    Vertices[ 566 ] = {
      ID: 566,
      x: -9.101827364,
      y: 6.112948791,
      z: 0.192387718,
      connectedFaceIDs: [ 90, 93, 94, 1014, 1015, 1016 ],  // clockwise order
      adjacentVertices: [ 64, 65, 567, 568, 299, 298 ]      // clockwise order
    }

    Face[ 0 ] = {
      ID: 0,
      a: 0,
      b: 14150,
      c: 14149,
      sharedEdgeVertices: [ { a: 14150, b: 14149 }, { a: 0, b: 14150 }, { a: 14149, b: 0 } ],  // named 'cv' in previous blog post
      sharedEdgeFaceIDs: [ 1, 645, 646 ],  // named 's' in previous blog post
      drawOrder: [ 1, 0, 2 ]               // named 'l' in previous blog post
    }

It turns out the algorithm is speedy for generating shapes of large sizes. My buddy who is a Solutions Architect told me I'm a one-trick pony, HA! Anyway, this algorithm comes in handy because now, if I want to identify a linear path along all faces of a surface (marked as a white line in the picture above), I can reduce the number of faces to be tested during raycasting to the number of faces the path travels across * 2. To illustrate, imagine taking a triangular pizza slice made of two faces, back to back. The tip of the pizza slice touches the center of the shape you want to find a linear path along, and the two outer points of the slice protrude out from the surface of the shape some distance so as to entirely clear the shape. When I select my starting and ending points for the linear path, I also retrieve the face information those points fall on, respectively. Then I raycast between the sharedEdgeVertices, targeting the pizza slice. If, say, a hit happens along sharedEdgeVertices[ 2 ], then I know the next face to test in the subsequent raycast is face ID 646. I also know that since the pizza slice comes in at sharedEdgeVertices[ 2 ], it is most likely going out at sharedEdgeVertices[ 1 ] or [ 0 ]. If not [ 1 ], then I know it's 99% likely going to be [ 0 ], and vice versa. Being able to identify a linear path along any surface was the subject of my first Adventure in Robust Coding. Of course there are exceptions that need to be accounted for, such as when the pizza slice straddles the edge of a face, or when the pizza slice exits a face at a vertex. Sometimes, though, when I'm dealing with distances along the surface of a given shape where the pizza slice needs to be made up of more than one set of back-to-back faces, another problem can arise: I learned about the limitations of floating point numbers too, or at least that's what it appears to be to me. I'm sure most of you are familiar with some variation of the infinite chocolate bar puzzle. With floating point numbers I learned that you can have two faces sharing two vertices along one edge, raycast at a point that is directly between the edges of the two connecting faces, and occasionally the raycast will miss both faces. I attribute this in large part to the fact that floating point numbers only capture an approximation of a point, not the exact point. Much like in the infinite chocolate bar puzzle, where there exists a tiny gap along the slice equal in size to the removed piece, likewise, that tiny gap sometimes causes a miss for the raycast. If someone understands this better, please correct me.

-- - -- - -- - -- - -- - -- - Dynamic pathfinding using nodes for any surface - -- - -- - -- - -- - -- - --

Now that I've got the linear path algorithm working in tip-top shape, I use it in conjunction with nodes to create the pathfinding algorithm. Firstly, I identify the locations for all nodes.
I do this using a class I created called Orientation Vector; I mention it in the blog post above. When they're created, OVectors have a position vector, a pointTo vector, and an axis vector. The beauty of this class is that I can merge OVectors, which averages their position, pointTo, and axis vectors; I can rotate them along any axis; and I can move them any distance along the axis of their pointTo vector. To create the shoreline collision geometry and node collision geometry, illustrated above, and the node locations along shorelines, illustrated below, I utilise the Orientation Vector class. Firstly, the water table for the world is set to an arbitrary value, right now 1.08, so if a vertex of a given face falls below the table and one or two vertices are above the table, then I know the face is a shoreline face. Then I use simple math to determine at which two points the face meets the water and create two OVectors, each pointing at the other. Then I rotate them along their y axis by 90 and -90 degrees respectively so that they are now facing inland. Since the shoreline faces touch one another, there will be duplicate OVectors at each point along the shore. However, each OVector will have a pointTo vector relative to its sister OVector from creation. I merge the paired OVectors at each point along the shore, which averages their position, pointTo and axis. I then move them inland a small distance. The result is the blue arrows above. The blue arrows are the locations of three of the thousands of nodes created for a given world. Each node has information about the shoreline collision geometry, the node collision geometry (the geometry connecting nodes), and the node to its left and the node to its right. Each face of collision geometry is given a Node ID to refer to. So, to create the pathfinding algorithm: I first identify the linear path between the starting and ending points. I then test each segment of the linear path against the collision geometry. If I get a hit, I retrieve the Node ID. This gives me the location of the node associated with the given face of collision geometry. I then travel left and right along connecting nodes, checking to see if a new linear path to the end point is possible; if no immediate collision geometry is encountered, the process continues and is repeated as needed. Subsequently, a list of points is established, marking the beginning, the encountered nodes and the end of the line of travel. The list is then trimmed by testing linear paths between every third point; if a valid path is found, the middle point is spliced out. Then all the trimmed candidate paths are measured for distance, and the shortest one wins. Below is the code for the algorithm I currently use. It's my first attempt at using classes to create an algorithm; previously I just relied on elaborate arrays. I plan on improving the process mentioned above by keeping track of distance as each path spreads out from its starting location. Only the path which is shortest in distance will go through its next iteration. With this method, once a path to the end is found, I can bet it will be the shortest, so I won't need to compute all possible paths like I am now. The challenge I've been facing for the past two months is that sometimes the nodes end up in the water. The picture above shows a shoreline where the distance the OVectors travel would place them in the water.
Once a node is in the water, it allows the AI to move to it, and then there is no shoreline collision geometry for it to encounter that would keep it on land, so the AI just walks into the ocean. Big Booo! I've been writing variations of the same function to correct the location of the geometry shown in red and yellow below. But what a long process. I've rewritten this function time and time again. I want it to be, well, as the title of this blog states, Robust, but it's slow going. As of today's date, it's not robust, and the optimised pathfinding algorithm hasn't been written either. I'll be posting updates in this blog entry as I make progress towards my goal. I'll also mention the shortest and longest pathfinding times I achieve. Hopefully it'll be below 250 ms.

-- - -- - -- - -- - -- - -- - Short-term goals for the game - -- - -- - -- - -- - -- - --

Badly... SO BADLY I want to be focusing on game content; that's all I've been thinking about. Argh. But this all has to get wrapped up before I can. I got ahead of myself, I'm guilty of being too eager. But there is no sense building game content on top of an engine which is prone to errors. My immediate goals for the engine are as follows:

    // TO DO's //
    // Dec 26th 2017 //
    /*
     * << IN PROGRESS >> -update path node geometry so no errors occur
     * -improve path finding alg with new technique
     * -improve client AI display -only one geometry for high detail, and one for tetrahedron.
     * -create ability to select many AI at the same time by drawing a rectangle by holding mouse button.
     * -create animation server to receive a path and process animation, and test out in client with updates.
     * -re-write geometry merging function so that the client vertices and faces have a connected Target ID
     * -incorporate dynamic asset functionality into client.
     * -create a farm and begin writing AI.
     * -program model clusters
     * -synchronize server and client AI. Test how many AI and how quickly AI can be updated. Determine rough estimate of number of players the server can support.
     */

See the third-last one! That's the one, oh what a special day that'll be. I've created a Project page, please check it out. It gives my best description to date of what the game is going to be about. Originally I was going to name it 'Seed'; a family member made the logo I use as my avatar and came up with the name back in 2014. The project is no longer going to be called Seed, it's instead going to be called Unirule. [ edit: 02/02/18 Some new screenshots to show off. All the new models were created by Brandross. There are now three earth materials: clay, stone and marble. There are also many types of animals and more tree types. ] Thanks for reading and if you've got anything to comment on I welcome it all. Awoken
  17. Just released another version of DRTS. You can play it at https://distilledgames.itch.io/distilled-rts Picking Maps: I improved the UI to preview maps before starting to play with bots. Now it is easier to find a map you like and see what you are getting into before starting the game. Camera Controls: In addition to the earlier input methods, you can now also control the camera via the keyboard: to pan, use the arrow keys; for zoom, plus and minus. To improve onboarding, I added buttons for controlling the camera. This visual addition to the UI should help new players notice that the field of view can be changed. Under The Hood: To prepare for competitive games, the map generator was improved for exact symmetry of edges in generated maps. Pathfinding now yields optimal paths in more cases.
  18. There are a bunch of pathfinding implementations online. But, to be honest, I wasn't much satisfied with most of them, for one of these reasons: dynamic memory allocation in the middle of the algorithm; an algorithm that does too much (more than what is needed); too many files for just a single task. So I made this two-file library (`uastar.c` and `uastar.h`): https://github.com/ferreiradaselva/uastar No dynamic memory allocation. Straight to the point (the README.md explains how to use it). It's nothing big, but certainly useful. Path finder at work: I'm leaving this in announcements, because I probably won't add more features (it's pretty much done).
  19. This Week: This week I was primarily focused on creating updated turret models to fit the new game style I chose. I changed the ground asphalt color to a lighter one so it wouldn't overshadow the turrets. I also fixed some bugs related to the rockets' smoke trails. Other than that, there's really not much to tell, so today's blog entry will have lots of pictures and very little text. So let's get started, shall we? New Turret Models: This week I almost completely finished modeling 2 turrets, including all the upgrades, which makes 6 models in total. Laser Gun Turret: The laser gun turret shoots laser beams at ground enemies; it cannot target air units. It is cheap and should be used early in the game. Level 1, Level 2, Level 3. Rocket Turret: The rocket turret targets both ground and air entities, but it is expensive, so it shouldn't be overused. Level 1, Level 2, Level 3. Cinematic Gameplay Screenshots. Actual Gameplay Screenshots. Next Week: For the next week I am planning to continue with the remodeling of the other turrets, and if I have enough time left, I might start implementing a new turret, or I might just start fixing bugs, because there are plenty of them. That is all for now, thanks for reading!
  20. Awoken

    Node Loops

    Hello GameDev, during my efforts to accomplish my latest robust-coding challenge, I discovered a new way (for me, at least) to speed up processing and achieve much faster pathfinding results. About the technique: I created continuous node loops, and every node has information about the loop it's a member of. Each node has the following object:

        { id: 4, loop: 10, loopSize: 8, loopStart: 0, loopEnd: 7 }

    and the following functions:

        left( x ){
          if( this.id - x < this.loopStart ){
            return ( this.id - x + this.loopSize );
          } else {
            return ( this.id - x );
          }
        }

        right( x ){
          if( this.id + x > this.loopEnd ){
            return ( this.id + x - this.loopSize );
          } else {
            return ( this.id + x );
          }
        }

        getDirection( n , M ){
          var m = M || this.id;
          if( n - m > 0 ){
            if( Math.abs( n - m - this.loopSize ) < ( n - m ) ){
              return true;   // to IDa
            } else {
              return false;  // to IDb
            }
          } else {
            if( ( n - m + this.loopSize ) < Math.abs( n - m ) ){
              return false;  // to IDb
            } else {
              return true;   // to IDa
            }
          }
        }

        getNodesBetween( n , M ){
          var m = M || this.id;
          if( n - m > 0 ){
            if( Math.abs( n - m - this.loopSize ) < ( n - m ) ){
              return Math.abs( n - m - this.loopSize );  // to IDa
            } else {
              return ( n - m );                          // to IDb
            }
          } else {
            if( ( n - m + this.loopSize ) < Math.abs( n - m ) ){
              return ( n - m + this.loopSize );          // to IDb
            } else {
              return Math.abs( n - m );                  // to IDa
            }
          }
        }

    With information about its parent loop accessible to each node, informed choices are possible about which nodes should be tested on each subsequent iteration. The following animated .gif visually demonstrates the logic. 1: Select origin and destination points. 2: Check for collisions between them and determine node pairs (node pairs are nodes that are part of the same loop). 3: Determine the smallest number of nodes between the node pairs (are there fewer nodes if we travel left, or right?). 4: Check for collisions between the origin node and each incremental node of the loop (stop once a collision is found). 5: Use the last successful node tested as the new origin point. 6: Check for collisions between the destination node and each incremental node of the loop (stop once a collision is found). 7: Use the last successful node tested as the new destination point. 8: Repeat. The good: it is much faster than the blind search method I was using before, especially if both directions are searched when an obstacle is encountered. If only one node loop is encountered, it will return the shortest path with the addition of a refinement function. The bad: it doesn't return the shortest path in general. For that, it has to be used in combination with other techniques, and two new 'branches' of the function would be required for each unique node loop encountered.
  21. Hello guys, I just registered on this site; I heard from my lecturer that this is a good place to discuss certain topics, since my research topic is one that mostly programmers experienced with AI can answer. The purpose of the survey below is to understand which solutions are suitable for 2D platformer AI pathfinding and which one is easier to implement for a 2D platformer. I would appreciate it if you gave your responses via the survey link shared, and thank you for spending time answering it. Sorry if the survey is a bit hard to understand; I tried to make it as understandable as I could. Again, thank you! https://goo.gl/forms/S0etAlAAHL6S5kTI2
  22. Here are the two main files: Path: https://pastebin.com/ZkrbJ78t AStar: https://pastebin.com/EVGazFtU A few things I've done to optimize the search: 1. Rather than a dictionary to keep track of scores and parents, I use a 2D array of objects that are mapped to the grid, i.e. [x,y] represents grid point (x,y). Objects have F, G, H and Parent fields; these fields are reset to default values during each search. 2. A cheap weighted heuristic. 3. Orthogonal neighbors (searching 4 neighbors rather than 8). Paths don't have to be beautiful because the point reduction smooths them out quite nicely. In Path.cs you can see the two functions to reduce points, Incremental and Stretch. Stretch is much faster because it has the potential to reduce points all the way to the end right off the bat or early on, thus reducing calls to the LineOfSight function, but it isn't so reliable around small areas/corners. The Incremental function is reliable, but it calls the LineOfSight function for each and every point along the path, which makes the path reduction just about as slow as calculating the path itself. Line of sight is determined by getting all points between 2 points and checking that they are all flagged as Walkable. My grid is dynamic and roughly 250x250. I tried out Theta* and JPS; both were slower than my current A*. I'm trying to avoid using a navmesh or a worker thread; I suppose I'm mostly just looking for ideas to tighten up my code.
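One small, commonly used tweak to point 1 above is to avoid touching every node between searches at all: stamp each node with the id of the search that last wrote it, and treat mismatching data as "not yet initialised". A minimal sketch (shown in C++ although the files above are C#; the idea carries over directly):

    #include <vector>

    struct Node {
        float f = 0, g = 0, h = 0;
        int parentX = -1, parentY = -1;
        unsigned searchId = 0;          // which search last wrote this node
    };

    struct NodePool {
        std::vector<Node> nodes;        // width * height nodes, allocated once
        unsigned currentSearch = 0;

        void beginSearch() { ++currentSearch; }   // "clears" the whole grid in O(1)

        Node& at(int index) {
            Node& n = nodes[index];
            if (n.searchId != currentSearch) {    // stale data from an earlier search
                n = Node{};
                n.searchId = currentSearch;
            }
            return n;
        }
    };

Combined with the orthogonal neighbours and cheap heuristic already in place, this usually removes the per-search reset cost entirely.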
  23. Hello hello. I'm in the preliminary design phase for a space-based game, and I need some advice on how to approach the AI side of things. Here's the situation in a nutshell. Say I'm a space explorer with a spaceship, and I am competing with other space explorers to be the first one to discover things. I have a procedurally generated 2D top-down solar system, and to make things a little simpler, let's say all the planets in the system are static, meaning they are not orbiting their sun. But they all have gravity wells of varying strength. As a player I have to negotiate Newtonian physics around these planets, using engine thrust in the right amounts and with the right timing, to get to where I want. That part is not a problem. I'm also willing to assume non-Newtonian rotation so that the AI and the player do not need to account for applying torque to get a correct bearing. So far I have not mentioned whether this is real-time or turn-based, and that's because of my uncertainty around the AI. The problem is I'm not sure how to approach the AI side of things either way. Ideally I'd like to have an AI that can optimize a trajectory for speed and/or fuel efficiency, but I have been able to find precious little on the topic on the interwebs. The best I've found so far is the following article from a decade ago, and it does not really point to a solution: http://playtechs.blogspot.ca/2007/05/pathfinding-in-space.html If I can find a good resource on how to pull this off in real time, I'd like to go that route. At the moment my fallback is using a turn-based system for exploration and visualizing the system as a hex grid. Then, using A*, I could get AI agents to naively use thrust to come to a standstill each time they want to change trajectory, and then add extra thrust to move in the next direction, but I'm worried this would be extremely under-optimized in terms of fuel efficiency and, as a result, the reach of rival ships. I could also factor the AI ship's current velocity into the graph search, which would likely greatly increase the search space for pathfinding, but I haven't experimented with it yet to be able to say whether it's viable. Any thoughts?
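For reference, "factoring velocity into the graph search" usually means the search state becomes (position, velocity) rather than position alone, with each action being a discrete thrust choice applied for one turn; the edge cost can be the thrust magnitude, so A* then minimizes fuel rather than turn count. A minimal, hypothetical sketch of the state expansion (not a tested planner; it ignores gravity, which would be added when computing the successor's velocity):

    #include <utility>
    #include <vector>

    struct State {
        int px, py;   // hex/grid position
        int vx, vy;   // discretised velocity carried into the next turn
        bool operator==(const State& o) const {
            return px == o.px && py == o.py && vx == o.vx && vy == o.vy;
        }
    };

    // Every allowed thrust (including zero) yields a successor whose velocity changes
    // by the thrust and whose position then drifts by that new velocity.
    std::vector<State> successors(const State& s, const std::vector<std::pair<int, int>>& thrusts) {
        std::vector<State> out;
        for (auto [tx, ty] : thrusts) {
            State n;
            n.vx = s.vx + tx;
            n.vy = s.vy + ty;
            n.px = s.px + n.vx;
            n.py = s.py + n.vy;
            out.push_back(n);
        }
        return out;
    }

The search space grows roughly with the square of the maximum allowed speed, so capping speed (or coarsening the velocity discretisation) is the usual way to keep it viable.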
  24. Hello, I'm trying to implement an enemy pathfinding algorithm, but I have a problem with empty-tile collision when moving the enemy to a node. This image shows how it should move, as in the example; but it gets stuck at the last tile. It happens because the enemy collides with the right side of an "air" tile, and that node is then removed from the node list because it "collided"; it works with non-"air" tiles, of course. How do I fix this problem? Code:

    void Enemy::generateMoveToNode(AStar::Vec2i lastNode)
    {
        auto lastSave = AStar::Vec2i{ 0.0f, 0.0f };
        while (!target.empty())
        {
            if (target.back().y == lastNode.y)
            {
                lastSave = target.back();
                target.pop_back();
            }
            else
            {
                moveToNodes.push_back(lastSave);
                moveToNodes.push_back(target.back());
                generateMoveToNode(target.back());
                return;
            }
        }
        moveToNodes.push_back(lastSave);
    }

    void Enemy::updateTarget(std::shared_ptr<InputManager> inputManager)
    {
        if (moveToNodes.empty()) return;

        // Calculate half sizes.
        float halfWidthA = getSize(0) / 2.0f;
        float halfHeightA = getSize(1) / 2.0f;
        float halfWidthB = 32.0f / 2.0f;
        float halfHeightB = 32.0f / 2.0f;

        // Calculate centers.
        auto centerA = glm::vec2(getPosition(0) + halfWidthA, getPosition(1) + halfHeightA);
        auto centerB = glm::vec2((moveToNodes.front().x * 32.0f) + halfWidthB, (moveToNodes.front().y * 32.0f) + halfHeightB);

        // Calculate current and minimum-non-intersecting distances between centers.
        float distanceX = centerA.x - centerB.x;
        float distanceY = centerA.y - centerB.y;
        float minDistanceX = halfWidthA + halfWidthB;
        float minDistanceY = halfHeightA + halfHeightB;

        setKey(inputManager->getKeyBinding("Move Left"), false);
        setKey(inputManager->getKeyBinding("Move Right"), false);
        setKey(inputManager->getKeyBinding("Jump"), false);
        setKey(inputManager->getKeyBinding("Duck"), false);

        // If we are not yet overlapping the node's tile, keep steering towards it.
        if (abs(distanceX) >= minDistanceX || abs(distanceY) >= minDistanceY)
        {
            if (moveToNodes.front().y > ceil(getPosition(1) / 32.0f))
                setKey(inputManager->getKeyBinding("Jump"), true);
            else if (moveToNodes.front().y < ceil(getPosition(1) / 32.0f))
            {
                if (getCanClimb())
                    setKey(inputManager->getKeyBinding("Duck"), true);
            }
            else
            {
                if (moveToNodes.front().x < ceil(getPosition(0) / 32.0f))
                    setKey(inputManager->getKeyBinding("Move Left"), true);
                else if (moveToNodes.front().x > floor(getPosition(0) / 32.0f))
                    setKey(inputManager->getKeyBinding("Move Right"), true);
            }
            updateInput(inputManager);
            return;
        }

        // Calculate intersection depths (the node was reached).
        float depthX = distanceX > 0 ? minDistanceX - distanceX : -minDistanceX - distanceX;
        float depthY = distanceY > 0 ? minDistanceY - distanceY : -minDistanceY - distanceY;

        updateInput(inputManager);
        moveToNodes.erase(moveToNodes.begin());
    }

    generateMoveToNode: recursive function that generates all nodes. updateTarget: updates the enemy every frame, checks whether it has reached the current node, removes that node from the list, and then moves on to the next one until no nodes are left.