Showing results for tags 'Pathfinding'.

Found 24 results

  1. This Week
This week I was primarily focused on creating updated turret models to fit the new game style I chose. I changed the ground asphalt color to a lighter one so it wouldn't overshadow the turrets. I also fixed some bugs related to the rockets' smoke trails. Other than that, there's really not much to tell, so today's blog entry will have lots of pictures and very little text. So let's get started, shall we?

New Turret Models
This week I almost completely finished modeling 2 turrets, including all the upgrades, which makes 6 models in total.

Laser Gun Turret
The laser gun turret shoots laser beams at ground enemies; it cannot target air units. It is cheap and should be used early in the game.
Level 1 Level 2 Level 3

Rocket Turret
The rocket turret targets both ground and air entities, but it is expensive, so it shouldn't be overused.
Level 1 Level 2 Level 3

Cinematic Gameplay Screenshots

Actual Gameplay Screenshots

Next Week
For next week I am planning on continuing with the remodeling of the other turrets, and if I have enough time left I might start implementing a new turret, or I might just start fixing bugs, because there are plenty of them. That is all for now, thanks for reading!
  2. So the game I'm working on is going to use rooms that are connected to each other by little holes, creating something somewhat similar to "Binding of Isaac", but with an organic room layout rather than rooms placed on a grid with fixed dimensions. To do this, I needed to find a good level generation algorithm, and I've found that the best fit is the BSP tree algorithm, which was traditionally used in old-school roguelike games.

Binary Partition Trees
The algorithm works by taking an area and splitting it in half, creating two new areas to be recursively processed in the same way. The recursion runs until we reach an area of a minimum size. This is what the algorithm can do: traditionally, each area becomes a leaf in which a random rectangular room is drawn, and each of these rooms is then connected to the others using long corridors. This approach works in 2D, as the player always has a clear view of their surroundings. Here is an excellent reference on the subject: Basic BSP Dungeon generation

Adaptations
However, because the game is in a first-person perspective, corridors won't work: the player can't really keep track of their surroundings. We need to adapt the algorithm so that the player feels comfortable while navigating our level. I've chosen to eliminate corridors and rooms altogether and instead use the leaves themselves, with each leaf connected to its neighbours through small holes. Also, the BSP tree algorithm creates a web-like dungeon with no clear start or end, which is fine if you're making a traditional dungeon, but we want our levels to have a certain flow so that the player can quickly find their way out if they want to. The way I planned to do that is by transforming the BSP leaves into a kind of navmesh grid. Afterwards, we just pick some positions and select the specific leaves that make up the path.

Creating the navmesh graph
First, before we can use a graph search algorithm, we need to build our graph. A BSP tree is still a binary tree, so using it to store the connections is out of the question. We need to somehow take all the leaves created by the BSP algorithm and put them in a more flexible structure: enter the quadtree. A quadtree is a tree in which each node has at most 4 children, a characteristic that is quite useful when it comes to 2D rectangles. Here's a visual representation of a quadtree:
With these kinds of trees, it's possible to get every leaf overlapping a specific rectangle. If, for a given room, we query a slightly bigger search zone, we'll find all of that room's neighbours (a rough sketch of this lookup appears at the end of this entry). We can then connect everything up and finally launch our graph search using randomly picked rooms that are far enough from each other.

Do the pathfinding
I've already made a blog entry on pathfinding, so the procedure is almost the same... with a few differences. The most important one is that we add the concept of "hot" cells. When a cell is marked "hot" and the graph search stumbles upon it, its cost is treated as infinite. That way, we tell the algorithm: "Do not pick this cell unless it's your last option." This makes for a somewhat imperfect path... but in our case, imperfect is perfect. Afterwards, we add all of the chosen rooms to a final list. All rooms in this list will be part of our level and will be rendered out later on.
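To illustrate the "hot cell" idea just described, here is a minimal, hypothetical sketch (Python, not the project's actual code) of a Dijkstra-style search over the leaf graph, where hot cells are given a prohibitively large cost so they are only crossed as a last resort:

    import heapq

    HOT_PENALTY = 1_000_000  # effectively "infinite": only crossed if nothing else works

    def find_path(graph, start, goal, hot_cells):
        """graph: dict leaf -> list of (neighbour_leaf, distance); hot_cells: set of leaves."""
        best = {start: 0.0}
        parent = {start: None}
        queue = [(0.0, start)]
        while queue:
            cost, leaf = heapq.heappop(queue)
            if leaf == goal:
                break
            if cost > best.get(leaf, float("inf")):
                continue  # stale queue entry
            for neighbour, dist in graph[leaf]:
                step = dist + (HOT_PENALTY if neighbour in hot_cells else 0.0)
                new_cost = cost + step
                if new_cost < best.get(neighbour, float("inf")):
                    best[neighbour] = new_cost
                    parent[neighbour] = leaf
                    heapq.heappush(queue, (new_cost, neighbour))
        # Walk back from the goal to the start to recover the chosen rooms.
        path, node = [], goal
        while node is not None:
            path.append(node)
            node = parent.get(node)
        return list(reversed(path))

Because the penalty dominates every normal edge cost, the search only routes through a hot cell when no other connection reaches the goal, which is exactly the "last option" behaviour described above.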
Add more rooms
After we've picked the main rooms, we can append bonus rooms to some of them if the player is lucky, not unlike the hidden rooms in "Binding of Isaac"... Also, the game is sometimes going to have an "alternative path". These paths will try to be shorter than the main path and will generally have more bonus rooms and loot along them. I've planned for the player to need to fulfil some requirement to enter this path. Because we've already created a graph of all possible rooms, it's just a matter of luck whether a room has a special room connected to it.

Rendering it out
Once the room configuration is made, we need to create the geometry and collisions of the level. Before going with the BSP approach, I tried using cellular automata to generate cave-like structures... It wasn't controllable enough, but I've kept some of the code from it (mainly its geometry generation). Here's the cellular automata tutorial. Basically, we render each room pixel by pixel. We then cut those "holes" I talked about earlier, and voilà. Here, I've coloured each room to give a better idea of which kind each room is: blue rooms are part of the alternate path I mentioned before, the green and red rooms are the starting and ending rooms respectively, and yellow rooms are bonus rooms. Afterwards, it's only a matter of placing props and enemies. This method of creating levels is cool. It makes the game more flexible and interesting. It also all depends on luck, which is a stat that can change in-game.
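Here is the quadtree neighbour lookup sketch promised above: a rough, self-contained illustration (hypothetical class and helper names, not the game's code) of storing leaf rectangles in a quadtree and querying a slightly inflated zone around a room to find its neighbours. Axis-aligned rectangles are assumed.

    class Rect:
        def __init__(self, x, y, w, h):
            self.x, self.y, self.w, self.h = x, y, w, h
        def intersects(self, other):
            return (self.x < other.x + other.w and other.x < self.x + self.w and
                    self.y < other.y + other.h and other.y < self.y + self.h)
        def contains(self, other):
            return (other.x >= self.x and other.y >= self.y and
                    other.x + other.w <= self.x + self.w and
                    other.y + other.h <= self.y + self.h)

    class Quadtree:
        def __init__(self, bounds, capacity=4):
            self.bounds, self.capacity = bounds, capacity
            self.leaves, self.children = [], []
        def insert(self, leaf_rect):
            if not self.bounds.intersects(leaf_rect):
                return
            if not self.children and len(self.leaves) < self.capacity:
                self.leaves.append(leaf_rect)
                return
            if not self.children:
                self._split()
            for child in self.children:
                if child.bounds.contains(leaf_rect):
                    child.insert(leaf_rect)
                    return
            # Leaves straddling a child boundary stay at this level.
            self.leaves.append(leaf_rect)
        def _split(self):
            x, y, w, h = self.bounds.x, self.bounds.y, self.bounds.w / 2, self.bounds.h / 2
            self.children = [Quadtree(Rect(x, y, w, h), self.capacity),
                             Quadtree(Rect(x + w, y, w, h), self.capacity),
                             Quadtree(Rect(x, y + h, w, h), self.capacity),
                             Quadtree(Rect(x + w, y + h, w, h), self.capacity)]
        def query(self, area, found=None):
            found = [] if found is None else found
            if not self.bounds.intersects(area):
                return found
            found.extend(r for r in self.leaves if r.intersects(area))
            for child in self.children:
                child.query(area, found)
            return found

    def neighbours_of(tree, room, margin=1.0):
        # Query a slightly bigger zone around the room to catch touching leaves.
        # 'room' must be the same Rect object that was inserted into the tree.
        search = Rect(room.x - margin, room.y - margin, room.w + 2 * margin, room.h + 2 * margin)
        return [r for r in tree.query(search) if r is not room]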
  3. Awoken

    More Adventures in Robust Coding

    Hello GameDev, this entry is going to be a big one for me, and it's going to cover a lot. What I plan to cover of my recent development journey is the following:
1 - Goal of this blog entry.
2 - Lessons learned using Node.js for development and testing, as opposed to the Chrome console.
3 - Linear path algorithm for any surface.
4 - Dynamic pathfinding using nodes for any surface, incorporating user-created, dynamic assets.
5 - Short-term goals for the game.

-- - -- - -- - -- - -- - -- - Goal of this Blog entry - -- - -- - -- - -- - -- - --

My goal for this adventure is to create a dynamic pathfinding algorithm so that:
- any AI that is to be moved will be able to compute the shortest path between any two points on the surface of the globe;
- the AI will navigate around bodies of water, vegetation, and dynamic user assets such as buildings and walls;
- the path will be computed in less than 250 milliseconds.
There are a few restrictions the AI will have to follow. In the image above you can see that land masses cut off from one another by rivers and bodies of water are uniquely colored. If an AI is on a land mass of one color, for now, it will only be able to move to a location on the same-colored land mass. However, there are some land masses that take up around 50% of the globe and have very intricate river systems. So the intended goal is to be able to have an AI on one end of one of the larger land masses find the shortest path to the opposite end within 250 milliseconds. Currently my pathfinding algorithm can find the shortest path in anywhere from 10 ms and up, and when I say up, I mean upwards of 30 seconds, and that's because of the way I built the algorithm, which is in the process of being optimised.

-- - -- - -- - -- - -- - -- - Lessons learned using Node.js for development and testing - -- - -- - -- - -- - -- - --

As of this writing I am using Node.js to test the efficiency of my algorithms. This has slowed down my development. I am not a programmer by trade; I've taught myself the bulk of what I know, and I often spend my time re-inventing the wheel and learning things the hard way. Last year I made the decision to move my project over to Node.js for continued development, since eventually it all had to be ported over to Node.js anyway. In hindsight I would have done things differently: I would have continued to use the Chrome console for testing and development, small scale, and only after the code was proven to be robust would I have ported it over to Node.js. If there is one lesson I'd like to pass on to aspiring and new programmers, it's this: use a language and development environment that allows you, the programmer, to jump into the code while it's running and follow each iteration, line by line, as it's being executed; basically, debugging. It is so easy to catch errors in logic that way. Right now I'm throwing darts at a dart board, guessing what I should be sending to the console for feedback to help me learn more about logical errors using Node.js (see "learning things the hard way").

-- - -- - -- - -- - -- - -- - Linear Path algorithm for any surface - -- - -- - -- - -- - -- - --

In the blog entry above I go into detail explaining how I create a world. The important thing to take away from it is that every face of the world has information about all surrounding faces that share vertex pairs with it. In addition, every vertex has information about the faces that use it in their draw order, and about all vertices adjacent to it.
An example vertex and face object would look like the following:

Vertices[ 566 ] = {
    ID: 566,
    x: -9.101827364,
    y: 6.112948791,
    z: 0.192387718,
    connectedFaceIDs: [ 90, 93, 94, 1014, 1015, 1016 ],  // clockwise order
    adjacentVertices: [ 64, 65, 567, 568, 299, 298 ]     // clockwise order
}

Face[ 0 ] = {
    ID: 0,
    a: 0,
    b: 14150,
    c: 14149,
    sharedEdgeVertices: [ { a: 14150, b: 14149 }, { a: 0, b: 14150 }, { a: 14149, b: 0 } ], // named 'cv' in previous blog post
    sharedEdgeFaceIDs: [ 1, 645, 646 ],  // named 's' in previous blog post
    drawOrder: [ 1, 0, 2 ]               // named 'l' in previous blog post
}

Turns out the algorithm is speedy for generating shapes of large sizes. My buddy, who is a Solutions Architect, told me I'm a one-trick pony, HA! Anyway, this data comes in handy because now, if I want to identify a linear path along all faces of a surface (marked as a white line in the picture above), I can reduce the number of faces to be tested during raycasting to the number of faces the path travels across times two. To illustrate, imagine taking a triangular pizza slice made of two faces, back to back. The tip of the pizza slice touches the center of the shape you want to find a linear path along, and the two outer points of the slice protrude out from the surface of the shape some distance so as to entirely clear the shape. When I select my starting and ending points for the linear path, I also retrieve the faces those points fall on, respectively. Then I raycast between the sharedEdgeVertices, targeting the pizza slice. If, say, a hit happens along sharedEdgeVertices[ 2 ], then I know the next face to test for the subsequent raycast is face ID 646. I also know that since the pizza slice comes in at sharedEdgeVertices[ 2 ], it's most likely going out at sharedEdgeVertices[ 1 ] or [ 0 ]. If not [ 1 ], then I know it's 99% likely to be [ 0 ], and vice versa. Being able to identify a linear path along any surface was the subject of my first Adventure in Robust Coding. Of course there are exceptions that need to be accounted for, such as when the pizza slice straddles the edge of a face, or when the pizza slice exits a face at a vertex. Sometimes, though, when I'm dealing with distances along the surface of a given shape where the pizza slice needs to be made up of more than one set of back-to-back faces, another problem can arise: I learned about the limitations of floating-point numbers too, or at least that's what it appears to be to me. I'm sure most of you are familiar with some variation of the infinite chocolate bar puzzle. With floating-point numbers I learned that you can have two faces share two vertices along one edge, raycast at a point that is directly between the edges of the two connecting faces, and occasionally the raycast will miss both faces. I attribute this in large part to the fact that floating-point numbers only capture an approximation of a point, not the exact point. Much like in the infinite chocolate bar puzzle, where there exists a tiny gap along the slice equal in size to the removed piece, that tiny gap sometimes causes the raycast to miss. If someone else understands this better, please correct me.
-- - -- - -- - -- - -- - -- - Dynamic Path Finding using Nodes for any surface - -- - -- - -- - -- - -- - --

Now that I've got the linear path algorithm working in tip-top shape, I use it in conjunction with nodes to create the pathfinding algorithm. First I identify the locations for all nodes. I do this using a class I created called Orientation Vector; I mention them in the blog post above. When they're created, they have a position vector, a pointTo vector, and an axis vector. The beauty of this class is that I can merge OVectors (which averages their position, pointTo, and axis vectors), rotate them about any axis, and move them any distance along the direction of their pointTo vector. To create the shoreline collision geometry and node collision geometry illustrated above, and the node locations along shorelines illustrated below, I utilise the Orientation Vector class. First, the water table for the world is set to an arbitrary value; right now it's 1.08. If one vertex of a given face falls below the table and one or two vertices are above it, then I know the face is a shoreline face. Then I use simple math to determine the two points at which the face meets the water and create two OVectors there, each pointing at the other. Then I rotate them about their y axes by 90 and -90 degrees respectively, so that they are now facing inland. Since all the shoreline faces touch one another, there will be duplicate OVectors at each point along the shore; however, each OVector has a pointTo vector relative to its sister OVector from creation. I merge the paired OVectors at each point along the shore, which averages their position, pointTo, and axis, and I then move them inland a small distance. The result is the blue arrows above. The blue arrows are the locations of three of the thousands of nodes created for a given world. Each node has information about the shoreline collision geometry, the node collision geometry (the geometry connecting nodes), and the node to its left and the node to its right. Each face of collision geometry is given a Node ID to refer to.

So, to create the pathfinding algorithm: I first identify the linear path between the starting and ending points. I then test each segment of the linear path against the collision geometry. If I get a hit, I retrieve the Node ID, which gives me the location of the node associated with that face of collision geometry. I then travel left and right along connecting nodes, checking whether a new linear path to the end point is possible; if no immediate collision geometry is encountered, the process continues and is repeated as needed. Subsequently, a list of points is established, marking the beginning, the encountered nodes, and the end of the line of travel. The list is then trimmed by testing linear paths between every third point; if a valid path is found, the middle point is spliced out. Then all possible trimmed paths are measured for distance, and the shortest one wins. Below is the code for the algorithm I currently use. It's my first attempt at using classes to build an algorithm; previously I just relied on elaborate arrays. I plan on improving the process mentioned above by keeping track of distance as each path spreads out from its starting location. Only the path which is shortest in distance will go through its next iteration. With this method, once a path to the end is found, I can bet it will be the shortest, so I won't need to compute all possible paths like I am now. The challenge I've been facing for the past two months is that sometimes the nodes end up in the water. The picture above shows a shoreline where the distance the OVectors travel would place them in the water.
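The "test a linear path between every third point and splice out the middle one" trimming step could look roughly like the sketch below (Python, not the project's code); has_clear_linear_path is a hypothetical stand-in for the surface raycast described earlier:

    def trim_path(points, has_clear_linear_path):
        """Remove unnecessary intermediate points wherever a direct surface path exists."""
        trimmed = list(points)
        i = 0
        while i + 2 < len(trimmed):
            if has_clear_linear_path(trimmed[i], trimmed[i + 2]):
                del trimmed[i + 1]   # splice the middle point, re-test from the same index
            else:
                i += 1
        return trimmed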
Once a node is in the water, the AI is allowed to move to it; there is then no shoreline collision geometry for it to encounter that would keep it on land, and so the AI just walks into the ocean. Big Booo! I've been writing variations of the same function to correct the location of the geometry shown in red and yellow below. But what a long process. I've rewritten this function time and time again. I want it to be, well, as the title of this blog states, robust, but it's slow going. As of today's date, it's not robust, and the optimised pathfinding algorithm hasn't been written either. I'll be posting updates in this blog entry as I make progress towards my goal. I'll also mention what I achieve for the shortest and longest pathfinding times. Hopefully it'll be below 250 ms.

-- - -- - -- - -- - -- - -- - Short term goals for the game - -- - -- - -- - -- - -- - --

Badly... SO BADLY I want to be focusing on game content; that's all I've been thinking about. Argh. But this all has to get wrapped up before I can. I got ahead of myself; I'm guilty of being too eager. But there is no sense building game content on top of an engine which is prone to errors. My immediate goals for the engine are as follows:

// TO DO's //
// Dec 26th 2017 //
/*
 * << IN PROGRESS >> -update path node geometry so no errors occur
 * -improve path finding alg with new technique
 * -improve client AI display - only one geometry for high detail, and one for tetrahedron.
 * -create ability to select many AI at the same time by drawing a rectangle by holding mouse button.
 * -create animation server to receive a path and process animation, and test out in client with updates.
 * -re-write geometry merging function so that the client vertices and faces have a connected Target ID
 * -incorporate dynamic asset functionality into client.
 * -create a farm and begin writing AI.
 * -program model clusters
 * -synchronize server and client AI. Test how many AI and how quickly AI can be updated. Determine rough estimate of number of players the server can support.
 */

See the third-last one! That's the one; oh, what a special day that'll be. I've created a Project page, please check it out. It gives my best description to date of what the game is going to be about. Originally I was going to name it 'Seed'; a family member made the logo I use as my avatar and came up with the name back in 2014. The project is no longer going to be called Seed, it's instead going to be called Unirule. [ edit: 02/02/18 Some new screenshots to show off. All the new models were created by Brandross. There are now three earth materials: clay, stone and marble. There are also many types of animals and more tree types. ] Thanks for reading, and if you've got anything to comment on I welcome it all. Awoken
  4. Hi everyone! I have spent several months on my first hobby game project and decided to push it out to get some feedback. I would really appreciate it if you'd give it a try! It is live (and free, of course) at http://www.airline-club.com ! The game is still at a very early development stage, but I think it is quite playable. My goal is to have a simulation good enough to generate data similar to the real world's, while giving the flexibility to simulate new outcomes based on the dynamics of the conditions. For example, one could start a really successful airline in a city and thereby alter the demand distribution so that it differs from what we observe in our current world (a part I have not implemented yet, but it is on the list as more end-game content: airlines will be able to make heavy investments in a city/airport, such as funding airport expansion or city projects that enhance a city's attributes).

Several screenshots: virtual passengers taking various routes on a flight going from San Francisco to Tokyo.

The main highlights are:
1. Real passengers that make decisions on what route to take based on various factors: awareness of your airline, loyalty to your airline, ticket pricing, seat class, service quality, number of connections, total travel time, etc. This is different from several games I played before, which appeared to oversimplify flight demand with no connecting flights. Having a realistic simulation here provides many different strategic options.
2. Thousands of airports with basic data driven by real geographical information (surrounding city population, income, airport scale, etc.).
3. Demand from airport to airport is calculated dynamically (not directly based on real-world data), which means various changes in game conditions (not yet implemented; conditions are static for now) will affect demand accordingly.
4. Reactive UI. I'm not a UI expert, but at least I tried to make the code run fast and the control flow reasonable.

A quick clip of the gameplay can be found here: https://uc50f3ae07e58d5abedb49ec99d6.dl.dropboxusercontent.com/cd/0/inline/AJusVsuSNgpWPKZsgBp7Eoo_WHboR0yZkArO4N4CewWG2580AOWDvIEbEruILjNmsgoLXaUqs3uJGMojsALOgz0B8StIMe2s0XTqvY85AyUzVMKS2EDyXq6ZGzhobshfTGFJm02puIcd5zkGg7IpsGYEfcP0WWK40o6UpReN67I1ok52wI11m04pZv4Zp9JAydQ/file

Many thanks for your attention again, and hope to see you in the game soon o/
  5. Subscribe to our subreddit to get all the updates from the team!

Introduction
In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate a procedurally generated terrain. Until our last agile sprint, only the A* and string pulling algorithms were used for this. We recently added two new behaviours to the pathfinding: steering and wall collision avoidance. In this article, I will describe how I achieved a simple way for agents to not walk into walls.

Configuration
A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
Navigation cells and their polygonal edges, connections (other cells), shared edges (the line segment shared by two connected cells), centroids, and normals.
An A*-generated, string-pulled path consisting of waypoints on the navigation mesh (not tested without string pulling).

The Algorithm
The agent is the pink low-poly humanoid and the final destination is the flag. The ideal algorithm (yet unoptimized) would be to cast an oriented rectangle between each pair of consecutive waypoints, where the rectangle's width is two times the agent's radius and the agent's center sits in the middle of that width. However, that algorithm is too complicated, too long to develop for our game, and too heavy for what it buys us, so I thought of another algorithm, which has its own drawbacks but is better suited for our game. Psst, check this article if you haven't seen it yet.

The algorithm is the following:
1. For each waypoint, pick the current one and the next one, until the next one is the last.
2. Iterate over the edges of the current navigation cell, which is determined by the agent's 3D position. To do that, you need a fast, spatially indexed way to determine the cell closest to a 3D point. Our way is to first use an octree to partition the navigation mesh, then get all the cells that are close to the point plus an extra buffer. To find the cell that is closest to the point, cast a projection of the position onto each candidate cell; this can be done using their centroids and normals (a small sketch of this projection step appears after this entry). Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the smallest one.
3. Convert each edge to a 2D line by discarding the Y component (the UP vector).
4. For each side, left and right (which are defined by the agent's position and its direction towards the next waypoint), cast a 2D line that starts from the agent's position, goes towards one of the two directions perpendicular to the direction to the next waypoint, and has a length of the defined radius.
5. If there's an intersection on a connection and it's on the shared part of the connection, then continue with the connected cell's edges.
6. If there are any intersections other than those in #5, create a new waypoint before the next waypoint. This new waypoint's position is the intersection's position translated by two times the radius along the agent's current direction (as a 2D line). The same translation is applied to the next waypoint.
7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent, with the same length as the distance between the current and next waypoints.
8. Check for #5.
9. If there is an intersection on a connection and it's on the unshared part of the connection, then do #6 (without the condition).
10. If there's an intersection on a simple edge, then translate the next waypoint as described in #6.
Here's a video of the algorithm in action:
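Here is the small sketch of the closest-cell projection mentioned in step 2 (Python, hypothetical names, not our engine's code). It assumes plain tuples for 3D vectors and cells exposing a centroid and a unit normal, and it leaves out the snap-onto-the-cell step, which depends on the cell's polygon:

    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def closest_cell(point, candidate_cells):
        """candidate_cells: iterable of objects with .centroid and .normal (unit length).
        Returns the cell whose plane is nearest to the point; snapping the projected
        point onto the cell's polygon is left out of this sketch."""
        best_cell, best_dist = None, float("inf")
        for cell in candidate_cells:
            # Distance from the point to the cell's plane, along the cell's normal.
            dist = abs(dot(sub(point, cell.centroid), cell.normal))
            if dist < best_dist:
                best_cell, best_dist = cell, dist
        return best_cell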
  6. Just wrapped up organising the code. Originally, when I set out to program this project, I had the idea that would-be followers of my efforts could just copy and paste quick little additions to the base files from my Introduction to Three.js post. However, I'm not that organised and don't have the level of skill to pull that off. I've had to go through many older functions and modify this or that. So you'll have to bear with me, but if you want to stay up to date with this project and have a copy of the code as of today, you'll have to just download everything here: TD.zip

Current functionality:
Toggle Screen Size: if you press 'f' you'll toggle the size of the screen.
Walls: if you press 'w' you'll be able to select locations for walls. Moving your mouse along the grid, you'll see the grid line closest to the cursor turn yellow. If you left-click, you'll create a wall. If you right-click on a line under a wall that already exists, the wall will be removed. Walls can be created between adjoining cells/squares/spaces. When you select a grid line that would block the path to an exit, the line will turn red, indicating that a wall can't be built there (a small reachability sketch follows this entry); if you try to left-click, nothing will happen.
Towers: if you press 't' you'll be able to select locations for towers. Moving your mouse along the grid, you'll see a large 2x2 square appear in cyan if you select a location that doesn't overlap any surrounding wall. The large square will turn red if the location isn't permitted. As of right now the tower placement function doesn't take into account towers blocking the path to an exit. If you want to switch between walls and towers you must de-select your previous choice by pressing the same keyboard key.

Clear as mud? Hope not. Here's a video showing what's been added. Now that this is all done I can start adding towers and enemies. I'm going to scrap the Misty Mist Tower and instead replace it with the Tach Tower. It will shoot bullets that slow down an enemy with each successive hit. It's been done a bunch before, but unlike other games where the effect wears off, in this one the effect will be permanent. However, in order to dramatically reduce the speed of an enemy, it will have to be shot many times. Let me know what you think! Thanks for reading.
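The "turn the grid line red when a wall would block the path to an exit" check is essentially a reachability test. A minimal, hypothetical sketch (Python rather than the project's Three.js code, with made-up data structures) could tentatively add the wall and flood-fill from the spawn towards the exits:

    from collections import deque

    def exits_reachable(open_cells, edges_blocked, exits, spawn):
        """open_cells: set of (x, y); edges_blocked: set of frozenset({cell_a, cell_b});
        returns True if the spawn can still reach at least one exit."""
        frontier = deque([spawn])
        seen = {spawn}
        while frontier:
            x, y = frontier.popleft()
            if (x, y) in exits:
                return True
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (nxt in open_cells and nxt not in seen
                        and frozenset({(x, y), nxt}) not in edges_blocked):
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    def wall_allowed(open_cells, edges_blocked, exits, spawn, new_wall_edge):
        # Tentatively add the wall, then test whether an exit is still reachable.
        return exits_reachable(open_cells, edges_blocked | {new_wall_edge}, exits, spawn)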
  7. Awoken

    Node Loops

    Hello GameDev, during my efforts to accomplish my latest robust-coding challenge I discovered a new way (new for me, at least) to speed up processing and achieve much faster pathfinding results.

About the technique: I created continuous node loops, and every node has information about the loop it's a member of. Each node has the following object:

{
    id: 4,
    loop: 10,
    loopSize: 8,
    loopStart: 0,
    loopEnd: 7
}

and the following functions:

left( x ){
    if( this.id - x < this.loopStart ){
        return ( this.id - x + this.loopSize );
    } else {
        return ( this.id - x );
    }
}

right( x ){
    if( this.id + x > this.loopEnd ){
        return ( this.id + x - this.loopSize );
    } else {
        return ( this.id + x );
    }
}

getDirection( n , M ){
    var m = M || this.id;
    if( n - m > 0 ){
        if( Math.abs( n - m - this.loopSize ) < ( n - m ) ){
            return true;  // to IDa
        } else {
            return false; // to IDb
        }
    } else {
        if( ( n - m + this.loopSize ) < Math.abs( n - m ) ){
            return false; // to IDb
        } else {
            return true;  // to IDa
        }
    }
}

getNodesBetween( n , M ){
    var m = M || this.id;
    if( n - m > 0 ){
        if( Math.abs( n - m - this.loopSize ) < ( n - m ) ){
            return Math.abs( n - m - this.loopSize ); // to IDa
        } else {
            return ( n - m ); // to IDb
        }
    } else {
        if( ( n - m + this.loopSize ) < Math.abs( n - m ) ){
            return ( n - m + this.loopSize ); // to IDb
        } else {
            return Math.abs( n - m ); // to IDa
        }
    }
}

With information about its parent loop accessible to each node, informed choices are possible about which nodes should be tested in each subsequent iteration. The following animated .gif visually demonstrates the logic:
1: Select the origin and destination points.
2: Check for collisions between them and determine node pairs (node pairs are nodes that are part of the same loop).
3: Determine the smallest number of nodes between the node pairs (are there fewer nodes if we travel left, or right?).
4: Check for collisions between the origin node and each incremental node of the loop (stop once a collision is found).
5: Use the last successfully tested node as the new origin point.
6: Check for collisions between the destination node and each incremental node of the loop (stop once a collision is found).
7: Use the last successfully tested node as the new destination point.
8: Repeat.

The Good: it is much faster than the blind search method I was using before, especially if both directions are searched when an obstacle is encountered. If only one node loop is encountered, it will return the shortest path with the addition of a refinement function.
The Bad: it doesn't return the shortest path in general. For that it has to be used in combination with other techniques, and two new 'branches' of the function would be required for each unique node loop encountered.
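For comparison, the left/right bookkeeping above boils down to modular arithmetic on a ring of node ids. Here is a small, hypothetical sketch (Python, independent of the class above) of the same "which way around the loop is shorter?" question:

    def steps_between(loop_size, a, b):
        """Number of nodes stepped going 'right' (increasing id) vs 'left' from a to b."""
        right = (b - a) % loop_size
        left = (a - b) % loop_size
        return right, left

    def shorter_direction(loop_size, a, b):
        right, left = steps_between(loop_size, a, b)
        return "right" if right <= left else "left"

    # Example: a loop of 8 nodes, going from node 1 to node 6.
    # steps_between(8, 1, 6) -> (5, 3), so "left" is the shorter way around.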
  8. I recently worked on a pathfinding algorithm used to move an AI agent through an organically generated dungeon. It's not an easy task, but because I've already worked on Team Fortress 2 maps in the past, I already knew about navigation meshes (navmeshes) and their capabilities.

Why Not Waypoints?
As described in this paper, waypoint networks were used in video games in the past to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology evolved, computers got more memory and became faster and cheaper. In other words, there was a shift from efficiency to flexibility. In a way, navigation meshes are the evolution of waypoint networks, because they fulfill the same need but in a different way. One of the advantages of using a navigation mesh is that an agent can go anywhere inside a cell as long as the cell is convex; that is essentially the definition of convexity. It also means that the agent is not limited to a specific waypoint network, so if the destination lies outside the waypoint network it can go directly to it instead of going to the nearest point in the network. A navigation mesh can also be used by many types of agents of different sizes, rather than needing a separate waypoint network for each agent size. Using a navigation mesh also speeds up graph exploration because, technically, a navigation mesh has fewer nodes than an equivalent waypoint network (that is, a network with enough points to cover the same area as the navigation mesh).

The Navigation Mesh Graph
To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Cells are connected to each other through connections defined by the edge they share (also called a portal edge). In a navigation mesh, each cell can contain information about itself; for example, a cell may be labeled as toxic, so that only units capable of resisting that toxicity can move across it. Personally, because of my experience, I picture navigation meshes like the ones found in most Source games. However, all cells in Source's navigation meshes are rectangular; our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).

Navigation Meshes in Practice
A navigation mesh implementation is actually three algorithms:
A graph search algorithm
A string pulling algorithm
A steering/path-smoothing algorithm
In our case, we used A*, the simple stupid funnel algorithm, and a traditional steering algorithm that is still in development.

Finding Our Cells
Before doing any graph searches, we need to find two things: our starting cell and our destination cell. For example, let's use this navigation mesh. In this navigation mesh, every edge shared between two cells is also a portal edge, which will be used by the string pulling algorithm later on. Also, let's use these points as our starting and destination points: where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination. Because we already have our starting and destination points, we just need to check which cell is closest to each point, using an octree. Once we know our nearest cells, we must project the starting and destination points onto their respective closest cells. In practice, we do a simple projection of each point onto the plane of its closest cell, along that cell's normal. Before snapping a projected point, we must first know whether the projected point is outside its cell. We do this by comparing the area of the cell with the sum of the areas of the triangles formed by the point and each edge of the cell; if the latter is noticeably larger than the former, the point is outside its cell. The snapping then simply consists of interpolating between the vertices of the cell edge closest to the projected point. In terms of code, we do this:

Vector3f lineToPoint = pointToProject.subtract(start);
Vector3f line = end.subtract(start);
Vector3f returnedVector3f = new Vector3f().interpolateLocal(start, end, lineToPoint.dot(line) / line.dot(line));

In our example, the starting and destination cells are C1 and C8 respectively:

Graph Search Algorithm
A navigation mesh is essentially a 2D grid of unknown or infinite size. In a 3D game, it is common to represent a navigation mesh graph as a graph of flat polygons that aren't orthogonal to each other. There are games that use 3D navigation meshes, like games with flying AI, but in our case it's a simple grid. For this reason, the A* algorithm is probably the right solution. We chose A* because it's the most generic and flexible algorithm. Technically, we still do not know exactly how our navigation mesh will be used, so going with something more generic can have its benefits... A* works by assigning each cell a cost and a heuristic. The cost is the accumulated distance from the start, so it takes into account the cost of the cell we came from, while the heuristic estimates how far the cell is from the destination: the closer the cell is to our destination, the smaller its heuristic. The longer a path gets, the greater its combined score becomes, and the more likely it is that the path is not an optimal one. We begin the algorithm by traversing the connections of each of the current cell's neighbouring cells until we arrive at the end cell, doing a sort of exploration/flood fill. Each cell begins with an infinite score but, as we explore the mesh, the scores are updated according to the information we learn. In the beginning, our starting cell gets a cost and a heuristic of 0, because the agent is already inside it. We keep a queue of cells ordered by their scores, so that the next cell to use as the current cell is always the best candidate for an optimal path. When a cell is processed, it is moved from that queue into another one that contains the closed cells. While exploring, we also keep a reference to the connection used to move from each cell to its neighbour; this will be useful later. We do this until we end up in the destination cell. Then, we "reel" back up to our starting cell and save each cell we landed on, which gives an optimal path. A* is a very popular algorithm and its pseudocode can easily be found; even Wikipedia has a version that is easy to understand. In our example, we find that this is our path:
And here are highlighted (in pink) the traversed connections:

The String Pulling Algorithm
String pulling is the next step in the navigation mesh algorithm. Now that we have a queue of cells describing an optimal path, we have to find a queue of points that an AI agent can actually travel along. This is where string pulling is needed. String pulling is in fact not linked to characters at all: it is a metaphor. Imagine a cross, and say that you wrap a silk thread around this cross and put tension on it. You will find that the string does not follow the inner corners of the cross, but rather goes from outer corner to outer corner. This is precisely what we're doing, but with a string that goes from one point to another. There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually... ...stupidly simple. To put it simply (no pun intended), we create a funnel and check each time whether the next point is inside the funnel or not. The funnel is composed of 3 points: a central apex, a left point (called the left apex) and a right point (called the right apex). At the beginning, the tested point is on the right side, then we alternate to the left and so on until we reach our destination point (as if we were walking). When a point is in the funnel, we continue the algorithm with the other side. If the point is outside the funnel, then, depending on which side the tested point belongs to, we take the apex from the other side of the funnel and add it to the list of final waypoints (a sketch of the funnel loop appears at the end of this entry). The algorithm works correctly most of the time. However, it had a bug that adds the last point twice if none of the vertices of the last connection before the destination point were added to the list of final waypoints. We just added an if for the moment, but we could come back later and optimize it. In our case, the funnel algorithm gives this path:

The Steering Algorithm
Now that we have a list of waypoints, we can finally just run our character from point to point. But if there were walls in our geometry, Buddy would run right into a corner wall. He wouldn't be able to reach his destination because he isn't small enough to slip past the corner walls. That's the role of the steering algorithm. Our algorithm is still in heavy development, but its main gist is that we check whether the next position of the agent is still on the navigation mesh. If it isn't, we change its direction so that the agent doesn't hit the wall like an idiot. There is also a path curving algorithm, but it's still too early to know if we'll use it at all... We relied on this good document to program the steering algorithm. It's a 1999 document, but it's still interesting... With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

So, this is the navigation mesh algorithm. I must say that, throughout my research, there wasn't much pseudocode or code that described the algorithm as a whole. Only later did we realize that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one. We also tried to have a cyclic grid with orthogonal cells (i.e. cells on the walls and ceiling), but it looked like A* wasn't meant to be used in a 3D environment with flat orthogonal cells. My hypothesis is that we would need 3D cells for that kind of navigation mesh, because otherwise the heuristic value of each cell can differ from the actual 3D distance between the center of a flat cell and the destination point. So we reduced the scope of our navigation meshes, and we were able to move an AI agent through our organic dungeon. Here's a picture: each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI currently still walks into walls, but the steering is still being implemented.
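Returning to the string pulling section above, here is the promised sketch of the funnel loop. It is a hedged Python adaptation that closely follows the widely circulated Simple Stupid Funnel pseudocode by Mikko Mononen (of Recast/Detour); it is not our project's actual code. portals is a list of (left, right) 2D point tuples, with the start and destination points duplicated as the first and last portals:

    def triarea2(a, b, c):
        # Twice the signed area of triangle (a, b, c); the sign gives the winding.
        ax, ay = b[0] - a[0], b[1] - a[1]
        bx, by = c[0] - a[0], c[1] - a[1]
        return bx * ay - ax * by

    def string_pull(portals):
        """portals: [(left, right), ...] with (start, start) first and (goal, goal) last."""
        apex = left = portals[0][0]
        right = portals[0][1]
        apex_i = left_i = right_i = 0
        path = [apex]
        i = 1
        while i < len(portals):
            pl, pr = portals[i]
            # Try to tighten the right side of the funnel.
            if triarea2(apex, right, pr) <= 0.0:
                if apex == right or triarea2(apex, left, pr) > 0.0:
                    right, right_i = pr, i
                else:
                    # Right crossed over left: the left point becomes a path corner.
                    path.append(left)
                    apex, apex_i = left, left_i
                    left, right = apex, apex
                    left_i = right_i = apex_i
                    i = apex_i + 1
                    continue
            # Try to tighten the left side of the funnel.
            if triarea2(apex, left, pl) >= 0.0:
                if apex == left or triarea2(apex, right, pl) < 0.0:
                    left, left_i = pl, i
                else:
                    # Left crossed over right: the right point becomes a path corner.
                    path.append(right)
                    apex, apex_i = right, right_i
                    left, right = apex, apex
                    left_i = right_i = apex_i
                    i = apex_i + 1
                    continue
            i += 1
        path.append(portals[-1][0])
        return path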
  9. Just released another version of DRTS. You can play it at https://distilledgames.itch.io/distilled-rts

Picking Maps
I improved the UI to preview maps before starting to play with bots. Now it is easier to find a map you like and see what you are getting into before starting the game.

Camera Controls
In addition to the earlier input methods, you can now also control the camera via the keyboard: use the arrow keys to pan and plus/minus to zoom. To improve onboarding, I added buttons for controlling the camera. This visual addition to the UI should help new players notice that the field of view can be changed.

Under The Hood
To prepare for competitive games, the map generator was improved for exact symmetry of edges in generated maps. Pathfinding now yields optimal paths in more cases.
  10. Let's say that, on an abstract level, the path A->B->C->D->E is valid, but the agent must choose portal #1 to reach E... Suppose the agent has instead chosen portal #2, goes through B, C and D, and finally ends up stuck at D, unable to move over to E... The whole computation is wasted. How do I avoid this problem? Thanks, Jack
  11. I am not sure whether I can ask questions about a specific library here, but here goes. I'd like to tag some polys in a navigation mesh that correspond to grass or road, etc. I can give an extent to do so, or alternatively I can feed a geometry in and have the polys tagged that way. But I am also looking into other approaches, such as letting the user tag the polys with a text file or a bitmap file (the way heightfields are done). Say I define an area map as a grayscale image with values ranging from 0-255; if, for example, the value of the first char is 0, then I can map that index to a certain place in the navigation mesh and say it is walkable ground, etc. But unlike heightfields, where you define an image and the result is some terrain, when you start off with a bitmap for an area map, what do you end up with? You see, I already have the geometry, so the area map probably doesn't make sense here, and the same goes for the text file idea... Any ideas? Jack
  12. When my program generates a navigation mesh for my geometry with an extent of 5000x5000 (m) and a cell size of 5.0 meters, it has to generate 250,000 cells, which makes my computer halt. The target object is a truck, which has a radius of about 20 meters. Should I give the cell size a value of 20? I don't really want to use this navigation mesh for only one type of vehicle; I also want to use it for vehicles like cars, which have only about a 5 meter radius... Generation speed is one problem, and memory consumption is another. Any ideas how to improve this? Thanks, Jack
  13. Hi, I was reading in Game AI Pro how they implemented Supreme Commander's pathfinding, and I came to one question. For the integration of the cost field they use the Eikonal equation to sweep across the traversable areas, and they recommend the Fast Iterative Method and its parallel version for solving this equation. However, in all the basic flow-field navigation tutorials I found, a simple grass/brush fire algorithm is used to integrate the cost field. My question is: what would be the difference if the game had just used the grass fire algorithm? I guess the reason was some quality/performance trade-off. Fortunately the Fast Iterative Method is very easy to implement, so I can compare the performance myself, but what I don't understand is what the "quality" benefit is.
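For reference, the grass/brush fire integration those tutorials describe is just a wavefront expanding from the goal and accumulating cost. A minimal sketch, assuming a uniform grid of walkable cells where every step costs 1 (Python, not from any of the tutorials):

    from collections import deque

    def grassfire_integration(walkable, goal):
        """walkable: 2D list of booleans; goal: (x, y).
        Breadth-first wavefront from the goal; each cell ends up with its step count to the goal."""
        h, w = len(walkable), len(walkable[0])
        INF = float("inf")
        field = [[INF] * w for _ in range(h)]
        gx, gy = goal
        field[gy][gx] = 0
        frontier = deque([(gx, gy)])
        while frontier:
            x, y = frontier.popleft()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < w and 0 <= ny < h and walkable[ny][nx] and field[ny][nx] == INF:
                    field[ny][nx] = field[y][x] + 1
                    frontier.append((nx, ny))
        return field

With non-uniform per-cell costs, you either re-relax cells (which turns this into Dijkstra) or solve the Eikonal equation. Roughly speaking, the quality benefit of the Eikonal/Fast Iterative Method route is that it approximates true any-angle distances across the field rather than distances measured only along grid edges, so the integration field, and the flow directions derived from it, come out smoother and less axis-biased.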
  15. Hi In a game like Stronghold, with many workers going back and forth between a workplace and a drop-off point for example, would I store the path and then use it backwards so I don't have to redo pathfinding all the time? If the moving units in the city don't block movement, I could very often just reuse the old path. Only if I run into a wall (if the player constructs a new building in the path of the old workers) would I recalculate the path. Workers in the old Age of Empires, for example, do they pathfind every time they go back to drop off gold in the storage? Thanks! Erik
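Since the question is mostly about bookkeeping, here is a minimal sketch of the cache-and-reverse idea, assuming a grid path and two hypothetical hooks (`isWalkable`, `runAStar`) standing in for the game's own functions:

```cpp
#include <algorithm>
#include <vector>

struct Cell { int x, y; };

// Hypothetical hooks into the game; not a real engine API.
bool isWalkable(const Cell& c);
std::vector<Cell> runAStar(const Cell& from, const Cell& to);

// One worker's cached round trip between workplace and drop-off point.
struct RoundTrip {
    std::vector<Cell> outbound;   // workplace -> drop-off

    // Returns a usable path; recomputes only if the cached one got blocked
    // (e.g. the player placed a building on it).
    std::vector<Cell> getPath(const Cell& from, const Cell& to, bool towardDropOff) {
        if (outbound.empty() || !stillValid(outbound)) {
            outbound = runAStar(towardDropOff ? from : to,
                                towardDropOff ? to : from);
        }
        std::vector<Cell> path = outbound;
        if (!towardDropOff)
            std::reverse(path.begin(), path.end());   // reuse the same path backwards
        return path;
    }

    static bool stillValid(const std::vector<Cell>& path) {
        for (const Cell& c : path)
            if (!isWalkable(c)) return false;         // something was built on the route
        return true;
    }
};
```

The lazy validity scan is linear in the path length, which is cheap next to a full A* search; an alternative is to invalidate cached paths only when a newly placed building overlaps the path's bounding box.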
  16. ferreiradaselva

    uastar - Simple A* path finder in C

    There are a bunch of pathfinding implementations online. But, to be honest, I wasn't much satisfied with most of them, for one of these reasons:

    - Dynamic memory allocation in the middle of the algorithm
    - Algorithms that do too much (more than what is needed)
    - Too many files for just a single task

    So I made this two-file (`uastar.c` and `uastar.h`) library: https://github.com/ferreiradaselva/uastar

    - No dynamic memory allocation
    - Straight to the point (the README.md explains how to use it)

    It's nothing biggie, but certainly useful. Path finder at work: I'm leaving this in announcements, because I probably won't add more features (it's pretty much done).
  17. Hello guys, I just registered on this site after hearing from my lecturer that it is a good place to discuss certain topics, since for my research topic it is mostly programmers experienced with AI who can answer the survey. The purpose of the survey below is to understand which solution is suitable for 2D platformer pathfinding for AI and which one is easier to implement for a 2D platformer. I would appreciate it if you gave your responses via the survey link shared, and thank you for spending time answering it. Sorry if the survey is a bit hard to understand; I tried to make it as understandable as I could. Again, thank you! https://goo.gl/forms/S0etAlAAHL6S5kTI2
  18. Here are the two main files:

Path: https://pastebin.com/ZkrbJ78t
AStar: https://pastebin.com/EVGazFtU

A few things I've done to optimize the search:

1. Rather than a dictionary to keep track of scores and parents, I use a 2D array of objects that are mapped to the grid, i.e. [x,y] represents grid point (x,y). Objects have F, G, H and Parent fields; these fields are reset to default values during each search.
2. A cheap weighted heuristic.
3. Orthogonal neighbors (searching 4 neighbors rather than 8).

Paths don't have to be beautiful because the point reduction smooths them out quite nicely. In Path.cs you can see the two functions to reduce points, Incremental and Stretch. Stretch is much faster because it has the potential to reduce points all the way to the end right off the bat or early on, thus reducing calls to the LineOfSight function, but it isn't so reliable around small areas/corners. The Incremental function is reliable, but it calls the LineOfSight function for each and every point along the path, which makes the path reduction just about as slow as calculating the path itself. Line of sight is determined by getting all the points between two points and checking that they are all flagged as Walkable. My grid is dynamic and roughly 250x250. I tried out Theta* and JPS; both were slower than my current A*. I'm trying to avoid using a navmesh or a worker thread, so I suppose I'm mostly just looking for ideas to tighten up my code.
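For reference, the line-of-sight test described above (collect every grid point between two points and check that all of them are walkable) is usually written as an allocation-free Bresenham walk. A minimal sketch, with an assumed `Grid` type that is not the poster's actual class:

```cpp
#include <cstdlib>
#include <vector>

// Assumed grid representation; the real project uses its own Walkable flags.
struct Grid {
    int width = 0, height = 0;
    std::vector<bool> walkable;                       // walkable[y * width + x]
    bool isWalkable(int x, int y) const {
        return x >= 0 && y >= 0 && x < width && y < height && walkable[y * width + x];
    }
};

// Bresenham-style line walk: true if every cell between (x0,y0) and (x1,y1)
// is flagged walkable. The point-reduction pass calls this for each candidate
// shortcut, so keeping it early-out and allocation-free matters.
bool lineOfSight(const Grid& grid, int x0, int y0, int x1, int y1) {
    int dx = std::abs(x1 - x0), dy = -std::abs(y1 - y0);
    int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    while (true) {
        if (!grid.isWalkable(x0, y0)) return false;   // blocked cell on the segment
        if (x0 == x1 && y0 == y1) return true;        // reached the far endpoint
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```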
  19. Hello, I'm trying to implement an enemy pathfinding algorithm, but I have a problem with empty-tile collision when moving the enemy to a node. This image shows how it should move: But it gets stuck at the last tile: This happens because the enemy collides with the right side of an "air" tile, and the node is then removed from the node list because it "collided"; with non-"air" tiles it works, of course. How do I fix this problem? Code:

```cpp
void Enemy::generateMoveToNode(AStar::Vec2i lastNode)
{
    auto lastSave = AStar::Vec2i{ 0.0f, 0.0f };
    while (!target.empty())
    {
        if (target.back().y == lastNode.y)
        {
            lastSave = target.back();
            target.pop_back();
        }
        else
        {
            moveToNodes.push_back(lastSave);
            moveToNodes.push_back(target.back());
            generateMoveToNode(target.back());
            return;
        }
    }
    moveToNodes.push_back(lastSave);
}

void Enemy::updateTarget(std::shared_ptr<InputManager> inputManager)
{
    if (moveToNodes.empty()) return;

    // Calculate half sizes.
    float halfWidthA = getSize(0) / 2.0f;
    float halfHeightA = getSize(1) / 2.0f;
    float halfWidthB = 32.0f / 2.0f;
    float halfHeightB = 32.0f / 2.0f;

    // Calculate centers.
    auto centerA = glm::vec2(getPosition(0) + halfWidthA, getPosition(1) + halfHeightA);
    auto centerB = glm::vec2((moveToNodes.front().x * 32.0f) + halfWidthB,
                             (moveToNodes.front().y * 32.0f) + halfHeightB);

    // Calculate current and minimum-non-intersecting distances between centers.
    float distanceX = centerA.x - centerB.x;
    float distanceY = centerA.y - centerB.y;
    float minDistanceX = halfWidthA + halfWidthB;
    float minDistanceY = halfHeightA + halfHeightB;

    setKey(inputManager->getKeyBinding("Move Left"), false);
    setKey(inputManager->getKeyBinding("Move Right"), false);
    setKey(inputManager->getKeyBinding("Jump"), false);
    setKey(inputManager->getKeyBinding("Duck"), false);

    // If we are not intersecting at all, return (0, 0).
    if (abs(distanceX) >= minDistanceX || abs(distanceY) >= minDistanceY)
    {
        if (moveToNodes.front().y > ceil(getPosition(1) / 32.0f))
            setKey(inputManager->getKeyBinding("Jump"), true);
        else if (moveToNodes.front().y < ceil(getPosition(1) / 32.0f))
        {
            if (getCanClimb())
                setKey(inputManager->getKeyBinding("Duck"), true);
        }
        else
        {
            if (moveToNodes.front().x < ceil(getPosition(0) / 32.0f))
                setKey(inputManager->getKeyBinding("Move Left"), true);
            else if (moveToNodes.front().x > floor(getPosition(0) / 32.0f))
                setKey(inputManager->getKeyBinding("Move Right"), true);
        }
        updateInput(inputManager);
        return;
    }

    // Calculate and return intersection depths.
    float depthX = distanceX > 0 ? minDistanceX - distanceX : -minDistanceX - distanceX;
    float depthY = distanceY > 0 ? minDistanceY - distanceY : -minDistanceY - distanceY;

    updateInput(inputManager);
    moveToNodes.erase(moveToNodes.begin());
}
```

generateMoveToNode: recursive function to generate all nodes. updateTarget: called every frame to check whether the enemy has reached the current node; if so, the node is removed from the list and the next one is checked, until no nodes are left.
  20. Hi I have an A* grid-based pathfinder for a city builder I'm planning (think Banished or Stronghold). Units can walk in 8 directions from each tile, but I want more free movement. Look at the example pic below. The pathfinder finds the path in steps 0 to 19, but now I want to "simplify" the path so the unit goes straight from the node marked 0 to node 10 (and obviously also from 12 to 19; I forgot to mark this in the picture). Is there a good algorithm to "reduce" the path in this way? I understand I need to check whether there is an unbroken "line of walk" between two nodes, but if I start by checking 0 to 1, 0 to 2, 0 to 3 and so on until I find that 0 to 12 doesn't work and stop there (so I throw away the nodes marked 1 through 8), this would be VERY slow. Any ideas?
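One common way to avoid the node-by-node scan is to test from the far end first: for each kept waypoint, find the farthest node that is still directly reachable and jump straight to it. A minimal sketch, assuming a `lineOfWalk(a, b)` predicate like the one described in the post (any Bresenham-style walkability test, widened by the unit radius, would do):

```cpp
#include <vector>

struct Node { int x, y; };

// Assumed predicate: true if a unit can walk in a straight line from a to b.
bool lineOfWalk(const Node& a, const Node& b);

// Greedy "string pulling" over a grid path: from the current anchor, find the
// farthest node that is still directly reachable and jump straight to it.
// Scanning from the far end backwards means each kept waypoint stops at the
// first successful check instead of testing every intermediate node.
std::vector<Node> simplifyPath(const std::vector<Node>& path) {
    std::vector<Node> result;
    if (path.empty()) return result;

    size_t anchor = 0;
    result.push_back(path[anchor]);
    while (anchor + 1 < path.size()) {
        size_t next = anchor + 1;                       // fallback: the adjacent node
        for (size_t i = path.size() - 1; i > anchor; --i) {
            if (lineOfWalk(path[anchor], path[i])) {    // farthest visible node wins
                next = i;
                break;
            }
        }
        result.push_back(path[next]);
        anchor = next;
    }
    return result;
}
```

The worst case is still quadratic in the path length, but on typical paths each kept waypoint only pays for a handful of failed checks; if that is still too slow, any-angle searches such as Theta* produce the straightened path during the search itself.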
  21. Hello hello. I'm in the preliminary design phase for a space-based game, and I need some advice on how to approach the AI side of things. Here's the situation in a nutshell. Say I'm a space explorer with a spaceship, and I am competing with other space explorers to be the first one to discover things. I have a procedurally generated 2D top-down solar system, and to make things a little simpler, let's say all the planets in the system are static, meaning they are not orbiting their sun. But they all have gravity wells of varying strength. As a player I have to negotiate Newtonian physics around these planets, using engine thrust in the right amounts and with the right timing, to get to where I want. That part is not a problem. I'm also willing to assume non-Newtonian rotation so that the AI and the player do not need to account for applying torque to get a correct bearing. So far I have not mentioned whether this is real-time or turn-based, and that's because of my uncertainty around AI. The problem is I'm not sure how to approach the AI side of things either way. Ideally I'd like an AI that can optimize a trajectory for speed and/or fuel efficiency, but I have been able to find precious little on the topic on the interwebs. The best I've found so far is the following article from a decade ago, and it does not really point to a solution: http://playtechs.blogspot.ca/2007/05/pathfinding-in-space.html If I can find a good resource on how to pull this off in real time, I'd like to go that route. At the moment my fallback is using a turn-based system for exploration and visualizing the system as a hex grid. Then, using A*, I could get AI agents to naively assume they use thrust to come to a standstill each time they want to change trajectory, and then add extra thrust to move in the next direction, but I'm worried this would be extremely under-optimized in terms of fuel efficiency and, as a result, the reach of rival ships. I could also factor the AI ship's current velocity into the graph search, which would likely greatly increase the search space for pathfinding, but I haven't experimented with it yet to be able to say whether it's viable. Any thoughts?
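If the turn-based fallback is chosen, one way to fold velocity into the graph search (as considered above) is to make the A* state a (cell, velocity) pair and let each thrust option generate a successor. Everything below is an illustrative assumption rather than a worked-out design: the quantized velocity, the nine thrust options, and the fuel cost are all placeholders.

```cpp
#include <cstdlib>
#include <functional>

// Search state for A* over a turn-based grid: where the ship is AND how fast
// it is drifting. Two visits to the same cell with different velocities are
// different nodes, which is what makes coasting trajectories discoverable.
struct ShipState {
    int x, y;        // grid / hex cell
    int vx, vy;      // velocity in cells per turn, quantized
    bool operator==(const ShipState& o) const {
        return x == o.x && y == o.y && vx == o.vx && vy == o.vy;
    }
};

struct ShipStateHash {
    size_t operator()(const ShipState& s) const {
        // Simple hash combine; adequate for an open/closed set keyed on states.
        size_t h = std::hash<int>()(s.x);
        h = h * 31 + std::hash<int>()(s.y);
        h = h * 31 + std::hash<int>()(s.vx);
        h = h * 31 + std::hash<int>()(s.vy);
        return h;
    }
};

// One expansion step: each turn the ship may thrust by (ax, ay) in {-1, 0, 1}
// (fuel cost = |ax| + |ay|), then drifts by its velocity. Gravity from nearby
// planets could be folded into (vx, vy) here as well.
template <typename Visit>
void expand(const ShipState& s, Visit visit) {
    for (int ax = -1; ax <= 1; ++ax) {
        for (int ay = -1; ay <= 1; ++ay) {
            ShipState next;
            next.vx = s.vx + ax;
            next.vy = s.vy + ay;
            next.x = s.x + next.vx;
            next.y = s.y + next.vy;
            int fuelCost = std::abs(ax) + std::abs(ay);
            visit(next, fuelCost);   // feed into the A* open list
        }
    }
}
```

The obvious caveat is search-space growth: capping the maximum speed keeps the number of distinct velocity values manageable, and if the cost being minimized is turns rather than fuel, straight-line distance divided by that speed cap remains an admissible heuristic.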
  22. 3D Rigger Mixamo Is Making Some Changes

Founded in 2008, Mixamo has been a go-to source for 3D animation and rigging for nearly a decade. On August 28th, 2013, Mixamo launched a platform called Face Plus, designed to let developers and animators turn facial expressions recorded with even a webcam into top-notch 3D animation; this is not unlike what Holotech Studios' "FaceRig" does for streamers and video creators. Back in May 2017, Adobe (publisher of the Mixamo platform since 2015) announced the end of an era: Mixamo's Face Plus, along with several other free features, will be removed on August 22nd, 2017. In the Adobe post on the Mixamo website, they lay out what is changing and what is staying the same. Set out as bullet points in the post, they seem to be cleaning house on the "free" features, but they are keeping a small handful of their main ones, such as:

- Select and use characters from Mixamo's Character Collection
- Upload 3D characters to get them ready to animate using the Auto-Rigger
- Apply animations to characters
- Download as an .fbx or .dae

Noticeably, the list that's getting 86'ed includes major tools and the ability to save assets from their site:

- The ability to save and retrieve assets in "My Assets"
- The Face Plus facial animation plugin
- The Decimator polygon reduction tool
- The ability to download control-rig scripts
- Download types for .bvh and .biped that streamline integration with 3rd-party applications
- Mixamo forums will close and all help articles will be moved to Adobe's HelpX

Don't Worry! There's still time! They're allowing people to keep downloading the tools and plug-ins until August 22nd, as well as to use the easy download features to save their assets directly. Those that use Mixamo, utilize Face Plus, or even just do animation are highly encouraged to take advantage and download the tools in the next week. To download, sign into your (free) Adobe account at the Mixamo website and start downloading. Developers have only one week left to grab as much as they can. For a complete FAQ and how-to for the Mixamo massacre, check out the Adobe post from their Customer Care Team.

UPDATE (8/15/17): Adobe would like to stress that Mixamo itself is not going anywhere, only the Face Plus features, the tools mentioned above, and a couple of other features, to make the platform more "streamlined and a little modernized". Once the dust has cleared next week from the changes, I'll reach out to Adobe for a more guided tour of the new set-up. Additionally, I have changed the title of this article to reflect the clarification. Apologies for the inconvenience to those that have already read the article.