Showing results for tags 'Pathfinding'.

Found 33 results

  1. I'm working on a step-time strategy game (not sure if that's the proper term, but it moves in discrete time steps, about 4 frames, before pausing for input) with positively monstrous pathfinding calculations. I've solved the biggest of my problems already, but there is one algorithm that causes significant slowdown. I'm sure it could be optimized, but I'm not sure how best to do that, because my case has a unique property that isn't covered in any A* optimizations I've researched. (Or perhaps I'm just not thinking about it the right way.) My game allows the creation of formations, which are defined as positional groupings of units relative to a leader. It's extremely important to the gameplay that these formations keep their shape as well as possible while balancing the need to move smoothly through any narrow spaces that constrict their shape. The algorithm I'm asking about now is responsible for choosing a path for the leader of the formation that (1) avoids unnecessarily traveling into narrow spaces and (2) always allows a certain amount of clearance between itself and obstacles, whenever those things are possible. Where I have found typical clearance-based approaches to A* fall short is that, ultimately, the leader must still choose a path through areas even just 1 space wide if there is no other option. In other words, I can't have the algorithm disregard those nodes, but I can't have it explore them either until it has explored the entirety of the rest of the map. Here is my basic test map for this. The 5 green symbols at the top-left represent a formation led by the V in the center. The yellow circle at the center of the map is the goal.
Here is the solution that a regular A* would give, finding the shortest path:

Here is the solution I want (and the algorithm already does this, just slowly):

Here is the Python code that achieves this -- I know there are going to be some questions about what the vectors are and all of that, but basically, this is just a typical A* setup with the addition of a "c_cost", which is 0 if the given node has all the space that it wants (as defined by optimal_area), or 2 ^ the number of spaces missing from the amount of space we want.

def leader_pathfind(leader, going_to, treat_passable = [], treat_impassable = []):
    """ Seek the shortest path that attempts to avoid obstacles at a distance
        equal to that of the furthest member in the formation """
    global game_width
    global game_height
    global diagonal_movement_cost
    global formation_dict
    at_now = leader['position']
    treat_passable.append(going_to) # Prevents indefinite hanging while rerouting if an obstacle has moved to the destination
    if at_now == going_to:
        return []
    formation_obj = formation_dict[leader['id']]
    vectors = formation_obj.get_formation_vectors()
    max_dimension = 0
    for vector in vectors:
        for axis in vector:
            if abs(axis) > max_dimension:
                max_dimension = abs(axis)
    optimal_area = basic_pow(max_dimension * 2 + 1, 2)

    def clear_area(position):
        """ Check in square rings around the position for obstacles/edges of map,
            then return the total area that is passable within the square that
            first hits an obstacle/edge of map """
        if not position_is_passable(position, treat_passable = treat_passable, treat_impassable = treat_impassable):
            return sys.maxsize
        distance = 0
        hit_obstacle = False
        while not hit_obstacle:
            distance += 1
            corner_a = (position[0] - distance, position[1] - distance)
            corner_b = (position[0] + distance, position[1] + distance)
            outline = rectline(corner_a, corner_b)
            for point in outline:
                if not within_game_bounds(point):
                    hit_obstacle = True
                    break
                elif not position_is_passable(point, treat_passable = treat_passable, treat_impassable = treat_impassable):
                    hit_obstacle = True
                    break
                elif distance == max_dimension: # We have all the space we want, no need to compute further
                    hit_obstacle = True
                    break
        tent_area = rectfill(corner_a, corner_b)
        area = 0
        for point in tent_area:
            if within_game_bounds(point):
                if position_is_passable(point, treat_passable = treat_passable, treat_impassable = treat_impassable):
                    # Some stray diagonals may get counted here occasionally, but that should be acceptable for this purpose
                    area += 1
        return area

    def c(position): # c_cost is in terms of clear area relative to the optimal clearance
        area = clear_area(position)
        # Cost grows exponentially by powers of 2 for each point less than the optimal area
        c_cost = 0 if area >= optimal_area else basic_pow(2, abs(area - optimal_area))
        return c_cost

    # Traditional A* #
    def h(position):
        return basic_distance(position, going_to) / 100.0

    f_dict = {at_now: h(at_now)}
    g_dict = {at_now: 0}
    open_set = [at_now]
    closed_set = []
    get_last = {}
    found_path = False
    current = at_now
    while len(open_set) > 0:
        current = open_set[0]
        for op in open_set:
            if op in f_dict.keys():
                if f_dict[op] < f_dict[current]:
                    current = op
        open_set.remove(current)
        closed_set.append(current)
        if current == going_to:
            found_path = True
            break
        for neighbor in neighbors(current, treat_passable = treat_passable, treat_impassable = treat_impassable):
            if neighbor in closed_set:
                continue
            if not neighbor in open_set:
                open_set.append(neighbor)
            movement_cost = diagonal_movement_cost if diagonally_adjacent(current, neighbor) else 1
            tent_g = g_dict[current] + movement_cost
            tent_f = tent_g + c(neighbor) + h(neighbor)
            if not neighbor in f_dict.keys():
                g_dict[neighbor] = tent_g
                f_dict[neighbor] = tent_f
                get_last[neighbor] = current
            else:
                if tent_f < f_dict[neighbor]:
                    g_dict[neighbor] = tent_g
                    f_dict[neighbor] = tent_f
                    get_last[neighbor] = current
    if found_path:
        path = [going_to]
        current = going_to
        while current != at_now:
            current = get_last[current]
            path.append(current)
        path.reverse()
        path.pop(0)
        return path
    else:
        write_log('leader_pathfind: could not find a path from ' + str(at_now) + ' to ' + str(going_to))
        return False

Jump Point Search, which I've used elsewhere, relies on the assumption of a uniform-cost grid, so I believe I need to abstract my map down to a smaller number of abstract nodes, and I'm at a loss for how I would do that in this case. What I've read on using abstract nodes relies on things like identifying entrances/exits from abstract nodes, but that won't be useful if the algorithm chooses a node with acceptable entrance/exit points but lots of obstacles in the middle. Here is an arbitrarily chosen slice of the map with the correct path. Broken up into abstract 5x5 nodes, I am failing to see a pattern I could rely on to give the algorithm the information it needs, especially since a crucial part of the path rests right on the right edge of the two leftmost abstract nodes. To know that this is the correct path, i.e. that it has sufficient clearance, the algorithm would need to calculate the clearance from that edge, dipping into the nodes in the middle. So I can't choose nodes that will be helpful without already having the information that I need the nodes to speed up the calculation of. That's as far as I've gotten with my thought process. Can anyone make any suggestions, general or specific? Thanks in advance
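One way to speed up clear_area() is to precompute clearance for the whole map in a single pass, so the per-node ring scans become an O(1) table lookup. A minimal sketch, assuming the map is a 2D grid of booleans (the poster's helpers like position_is_passable would replace this representation):

```python
from collections import deque

def clearance_map(passable):
    """Brushfire / distance transform: one multi-source BFS gives every cell
    its Chebyshev distance to the nearest blocked cell (the map border
    counts as blocked)."""
    h, w = len(passable), len(passable[0])
    INF = float('inf')
    dist = [[INF] * w for _ in range(h)]
    queue = deque()
    # Seed with obstacle cells (distance 0)...
    for y in range(h):
        for x in range(w):
            if not passable[y][x]:
                dist[y][x] = 0
                queue.append((x, y))
    # ...then with passable border cells (distance 1 to the out-of-bounds area).
    for y in range(h):
        for x in range(w):
            if passable[y][x] and (x in (0, w - 1) or y in (0, h - 1)):
                dist[y][x] = 1
                queue.append((x, y))
    # 8-connected BFS; the queue is processed in nondecreasing distance order.
    while queue:
        x, y = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] > dist[y][x] + 1:
                    dist[ny][nx] = dist[y][x] + 1
                    queue.append((nx, ny))
    return dist
```

With this table, c(position) reduces to a lookup: a cell with clearance d is guaranteed a fully clear (2d - 1) x (2d - 1) square, so a cost like `0 if dist[y][x] > max_dimension else basic_pow(2, optimal_area - (2 * dist[y][x] - 1) ** 2)` approximates the ring-scan version at constant time per expanded node. The exact mapping from clearance radius to the area count above would need adapting to taste.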
  2. Hello everyone! I've implemented some simple mechanics in my game, but have run into a not-so-simple problem. Is there any way to change AI navigation in response to events in the game (for example, when you hit a switch)? The problem can be clearly seen in the bridge example on the screenshots below. When the bridge disappears, enemies still try to use the navigation, which leads to their fall. I made the bridge disappear by switching the material and setting no collision on it. I placed the bridge in the editor activated, and it generated navigation. But I don't know how to get rid of that navigation or replace it, for example, with a nav link. And also, how do I turn the navigation back on when the bridge is active? Here we have the bridge, and we have navigation for it. Here we don't have the bridge, but still have navigation for it.
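One engine-agnostic way to think about this (a sketch, not Unreal's actual API; in UE the usual tools are a dynamic navmesh plus nav modifier or nav link components): treat the bridge as a connection in the nav graph that the switch event enables and disables, so the planner never sees the edge while the bridge is down.

```python
class NavGraph:
    """Toy nav graph whose edges can be switched on and off by game events,
    the way a dynamic navmesh rebuild or a toggled nav link would work.
    All names here are illustrative, not engine API."""

    def __init__(self):
        self.edges = {}        # node -> set of neighbouring nodes
        self.disabled = set()  # edges currently switched off

    def connect(self, a, b):
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def set_edge_enabled(self, a, b, enabled):
        """Call this from the switch/bridge event handler."""
        key = frozenset((a, b))
        if enabled:
            self.disabled.discard(key)
        else:
            self.disabled.add(key)

    def neighbours(self, a):
        """What the pathfinder sees: disabled edges simply don't exist."""
        return [b for b in self.edges.get(a, ())
                if frozenset((a, b)) not in self.disabled]

def reachable(graph, start, goal):
    """Simple DFS reachability check over the enabled edges."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        for nxt in graph.neighbours(node):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False
```

The same toggle could drive a navmesh rebuild of just the tile under the bridge rather than a whole-map regeneration.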
  3. Hi guys, it's been some weeks since I started thinking about creating an AI for a mobile game that I play, which can be played on PC as well, so that I can use software tools. The problem is that I'm not really into AI, so I really don't know where to start or how my AI can do what I want it to do. In this first post I just want some tips about something my AI should do. I'm not going to describe the whole game, because I prefer not to, and I don't think it's necessary either. OK, so one of the things my AI should do is this: recognise the map and the border of the map (basically the area where I can tap), and recognise all the defenses in the map (which you can see, because you see the whole area from above). Just this for now. I don't really know how the AI can recognise all the different defenses in the area just by seeing them, and it needs to be precise; we are talking about millimeters. Maybe the AI can recreate the map in its software, but I don't know if I'm describing that right, so I'm just going to leave this to you; hopefully someone will clarify things for me. Thanks. Edit: I just thought about the fact that I could even recreate the map by hand, with a hypothetical software tool containing all the defenses and stats.
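For the "recognise the defenses" part, the usual starting point is template matching against screenshots. In practice you would use a library such as OpenCV with a similarity threshold; the toy sketch below does exact matching on plain 2D arrays just to show the idea:

```python
def match_template(screen, template):
    """Slide `template` over `screen` (both 2D lists of pixel values) and
    return the top-left (x, y) of every exact match. A real bot would use
    OpenCV's matchTemplate with a threshold instead of exact equality, to
    tolerate compression noise and lighting changes."""
    big_h, big_w = len(screen), len(screen[0])
    small_h, small_w = len(template), len(template[0])
    hits = []
    for y in range(big_h - small_h + 1):
        for x in range(big_w - small_w + 1):
            if all(screen[y + j][x + i] == template[j][i]
                   for j in range(small_h) for i in range(small_w)):
                hits.append((x, y))
    return hits
```

Running this once per defense sprite gives exactly the "recreate the map in software" idea: a list of defense positions that the AI can plan against.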
  4. theaaronstory

    Design Spicing up the AI (game mechanics)

    Above the usual tropes of having good design, character, specialty and synergy, I thought I'd add some [basic] special cases to the mix:

    • Angle of attack: Enemies would be able to come from all 360 degrees, and would not be limited to the ground plane.
    • Player's cone of vision: It would inform the AI where the player is facing, allowing them more choices for attack. [This could also lead to changes in player awareness.]
    • Sense of surroundings: Primarily to reduce kiting, but would be useful elsewhere. It would be concentric in design (Close > Mid > Range). Would double as a sensory limit, to call/hear sound cues. [In order to call for reinforcements, for example.]
    • Cover/fallback system: Keep ranged units out of harm's way, or get enemies to hide in tall grass to avoid detection. [Or join up with allies in order to survive.]
    • Multiple attack modes: Might have a dagger and a knife, why not use them? [Maybe both, at the same time?]
    • Reactive environment: Place traps, loot chests/resources, and take them with them, or to a camp.
    • Player tracking: Either using environmental cues (footsteps in snow), or player "crumbs" on the floor (if higher-level AI). However, these would decay eventually. [Might add aerial tracking/wind direction.] [These could also be used by the player, if the ability is available, or would refer to the lesser visual cues.]
    • Different attack tactics: Encirclement, ambush, flank or corner. [Or grab you and try to drag you somewhere else.]
    • Zone/time awareness: Example: if weak, they would not venture close to an enemy, or would stay more in the shadows. [And come out more during the night; and be more shifty.] [Or would be more active or passive.]
    • Own stamina: Yes. Separate from the base aggro timer.
    • Auto pickup (for player as well): Maybe grab a weapon or two from the floor? [But mostly for the player, to customize their experience; like having auto gold pickup.]
    • More synergy: Mobs would actively seek out tactics when near each other. [Example: one throws oil on you, the other lights you on fire.]
    • Extra scripted states of behavior: To fill in time between those "cumbersome" moments of having to fight! [Adding more randomness to enemies by giving them "jobs" to do, like eating, resting, smelling the yellow flowers, etc.]
    • Power sensitivity/scaling: If the player is too low or too high level, they would either run or be more courageous. [Much like having an XP penalty for doing so.]
    • Reduced predictability: By giving them extra mobility: dashing, or [trying to] avoid certain attacks before they happen (if smart enough).
    • Whiskers pathfinding: To give them more realistic movements.
    • Chance of unpredictability: Enemies might go into overdrive, or panic, or grab a random behavior from the pool. [From other monsters, within limits of course.]
    • Intended weaknesses: Afraid of light, or of specific events, etc.
    • Against the elements: Based on their ability to move and their environment (snow, mud, rain, etc.), their movements would also vary [greatly].
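A couple of the entries above (cone of vision, sense of surroundings) reduce to one small geometric test. A minimal sketch, with invented names, combining a field-of-view angle with a range ring:

```python
import math

def in_vision_cone(pos, facing_deg, target, fov_deg, max_range):
    """True if `target` is within `max_range` of `pos` AND inside the cone
    of half-angle fov_deg / 2 around the `facing_deg` direction."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:
        return False
    if dist == 0:
        return True
    angle = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the two headings, in [-180, 180).
    diff = (angle - facing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```

Running the same test against several max_range values gives the concentric Close > Mid > Range rings: react according to the innermost ring the target falls into.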
  5. I recently worked on a path-finding algorithm used to move an AI agent through an organically generated dungeon. It's not an easy task, but because I've already worked on Team Fortress 2 maps in the past, I already knew about navigation meshes (navmeshes) and their capabilities.

Why Not Waypoints?

As described in this paper, waypoint networks were used in video games in the past to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology has evolved, computers got more memory that became faster and cheaper. In other words, there was a shift from efficiency to flexibility. In a way, navigation meshes are the evolution of waypoint networks, because they fulfill the same need but in a different way. One of the advantages of using a navigation mesh is that an agent can go anywhere inside a cell as long as the cell is convex; that is essentially the definition of convexity. It also means that the agent is not limited to a specific waypoint network, so if the destination is outside the waypoint network, it can go directly to it instead of going to the nearest point in the network. A navigation mesh can also be used by many types of agents of different sizes, rather than needing a separate waypoint network for each agent size. Using a navigation mesh also speeds up graph exploration because, technically, a navigation mesh has fewer nodes than an equivalent waypoint network (that is, a network with enough points to cover the same area).

The Navigation Mesh Graph

To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Cells are connected to each other through connections defined by an edge shared between them (or portal edge). In a navigation mesh, each cell can contain information about itself.
For example, a cell may be labeled as toxic, and therefore only those units capable of resisting this toxicity can move across it. Personally, because of my experience, I view navigation meshes like the ones found in most Source games. However, all cells in Source's navigation meshes are rectangular. Our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).

Navigation Meshes in Practice

A navigation mesh implementation is actually three algorithms:

    • A graph navigation algorithm
    • A string pulling algorithm
    • And a steering/path-smoothing algorithm

In our case, we used A*, the simple stupid funnel algorithm and a traditional steering algorithm that is still in development.

Finding Our Cells

Before doing any graph searches, we need to find 2 things: our starting cell and our destination cell. For example, let's use this navigation mesh. In this navigation mesh, every edge that is shared between 2 cells is also a portal edge, which will be used by the string pulling algorithm later on. Also, let's use these points as our starting and destination points: where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination. Because we already have our starting point and our destination point, we just need to check which cell is closest to each point using an octree. Once we know our nearest cells, we must project the starting and destination points onto their respective closest cells. In practice, we do a simple projection of both our starting and destination points along the normal of their respective cells. Before snapping a projected point, we must first know whether the projected point is outside its cell, by finding the difference between the area of the cell and the sum of the areas of the triangles formed by that point and each edge of the cell. If the latter is noticeably larger than the former, the point is outside its cell.
The snapping then simply consists of interpolating between the vertices of the edge of the cell closest to the projected point. In terms of code, we do this:

    Vector3f lineToPoint = pointToProject.subtract(start);
    Vector3f line = end.subtract(start);
    Vector3f returnedVector3f = new Vector3f().interpolateLocal(start, end, lineToPoint.dot(line) / line.dot(line));

In our example, the starting and destination cells are C1 and C8 respectively.

Graph Search Algorithm

A navigation mesh is actually a 2D grid of an unknown or infinite size. In a 3D game, it is common to represent a navigation mesh graph as a graph of flat polygons that aren't orthogonal to each other. There are games that use 3D navigation meshes, like games with flying AI, but in our case it's a simple grid. For this reason, the A* algorithm is probably the right solution. We chose A* because it's the most generic and flexible algorithm. Technically, we still do not know how our navigation mesh will be used, so going with something more generic can have its benefits... A* works by assigning a cost and a heuristic to each cell. The heuristic estimates how close the cell is to our destination, while the cost accumulates along the path: each cell's cost builds on the cost of the previous cell. This means that the longer a path is, the greater its total cost, and the less likely it is to be an optimal one. We begin the algorithm by traversing the connections of each of the neighboring cells of the current cell until we arrive at the destination cell, doing a sort of exploration/filling. Each cell begins with an infinite cost but, as we explore the mesh, it's updated according to the information we learn. In the beginning, our starting cell gets a cost and a heuristic of 0 because the agent is already inside it. We keep a priority queue of cells, best candidate first.
This means that the next cell to use as the current cell is the best candidate for an optimal path. When a cell is processed, it is moved from that queue into another one that contains the closed cells. While exploring, we also keep a reference to the connection used to move from the current cell to its neighbor. This will be useful later. We do this until we end up in the destination cell. Then we "reel" back up to our starting cell, saving each cell we landed on, which gives an optimal path. A* is a very popular algorithm and its pseudocode can easily be found; even Wikipedia has a version that is easy to understand. In our example, we find that this is our path, and here are highlighted (in pink) the traversed connections.

The String Pulling Algorithm

String pulling is the next step in the navigation mesh algorithm. Now that we have a queue of cells that describes an optimal path, we have to find a queue of points that an AI agent can travel to. This is where string pulling is needed. String pulling is in fact not linked to characters at all: it is rather a metaphor. Imagine a cross. Let's say that you wrap a silk thread around this cross and put tension on it. You will find that the string does not follow the inner corners of the cross, but rather goes from corner to corner. This is precisely what we're doing, but with a string that goes from one point to another. There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually... ...stupidly simple. To put it simply (no pun intended), we create a funnel and check each time whether the next point is in the funnel or not. The funnel is composed of 3 points: a central apex, a left point (called the left apex) and a right point (called the right apex). At the beginning, the tested point is on the right side, then we alternate to the left and so on until we reach our point of destination.
(As if we were walking.) When a point is in the funnel, we continue the algorithm with the other side. If the point is outside the funnel, then depending on which side the tested point belongs to, we take the apex from the other side of the funnel and add it to the list of final waypoints. The algorithm works correctly most of the time. However, it had a bug that added the last point twice if none of the vertices of the last connection before the destination point were added to the list of final waypoints. We just added an if for the moment, but we may come back later to clean it up. In our case, the funnel algorithm gives this path:

The Steering Algorithm

Now that we have a list of waypoints, we could simply run our character from point to point. But if there were walls in our geometry, then Buddy would run right into a corner wall. He wouldn't be able to reach his destination because he isn't small enough to avoid the corner walls. That's the role of the steering algorithm. Our algorithm is still in heavy development, but its main gist is that we check whether the next position of the agent is outside the navigation mesh. If that's the case, then we change its direction so that the agent doesn't hit the wall like an idiot. There is also a path-curving algorithm, but it's still too early to know if we'll use it at all... We relied on this good document to program the steering algorithm. It's a 1999 document, but it's still interesting... With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

So, this is the navigation mesh algorithm. I must say that, throughout my research, there wasn't much pseudocode or code that described the algorithm as a whole. Only then did we realize that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one. We also tried to have a cyclic grid with orthogonal cells (i.e. cells on the walls and ceiling), but it looked like A* wasn't intended to be used in a 3D environment with flat orthogonal cells. My hypothesis is that we would need 3D cells for that kind of navigation mesh; otherwise the heuristic value of each cell can be off, depending on the actual 3D length between the center of a flat cell and the destination point. So we reduced the scope of our navigation meshes, and we were able to move an AI agent through our organic dungeon. Here's a picture: each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI currently still walks into walls, but the steering is still being implemented.
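The funnel pass described in the article can be sketched in 2D roughly as follows. This is a toy version, not the article's actual code: it assumes each portal is given as a (left, right) pair of points as seen walking the path, and it includes a guard against the "last point added twice" bug the article mentions.

```python
def cross(a, b, c):
    """2D cross product of (b - a) x (c - a); > 0 when c is left of a->b."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def string_pull(portals, start, goal):
    """Simple stupid funnel over a list of (left, right) portal point pairs,
    ordered from the start cell to the goal cell."""
    ps = [(start, start)] + list(portals) + [(goal, goal)]
    path = [start]
    apex = left = right = start
    apex_i = left_i = right_i = 0
    i = 1
    while i < len(ps):
        new_left, new_right = ps[i]
        # Try to tighten the right side of the funnel.
        if cross(apex, right, new_right) >= 0:
            if apex == right or cross(apex, left, new_right) <= 0:
                right, right_i = new_right, i
            else:
                # Right crossed over left: the left point becomes a waypoint
                # and the funnel restarts from it.
                path.append(left)
                apex, apex_i = left, left_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        # Try to tighten the left side of the funnel.
        if cross(apex, left, new_left) <= 0:
            if apex == left or cross(apex, right, new_left) >= 0:
                left, left_i = new_left, i
            else:
                path.append(right)
                apex, apex_i = right, right_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        i += 1
    # Guard against appending the destination twice.
    if path[-1] != goal:
        path.append(goal)
    return path
```

A corridor that allows a straight line yields just the endpoints, while a doorway off the straight line pulls the path onto the doorway's corner, exactly the taut-string behavior described above.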
  6. Hello GameDevs, Context: I would consider myself a novice and am working on an RTS project to practice my C++ and game programming skills. It is a tech demo for practicing optimisation in the future: I am aiming to have it contain as many basic agents with rudimentary AI as possible, all controllable, and to optimise the system to allow for the largest number of these agents while still maintaining 60 FPS as a benchmark. Problem: I am aiming to implement flow fields for pathfinding and have managed a basic implementation with flow-field following and no steering behaviour as of yet. However, I have run into a design decision I am unsure of, and was wondering if anyone could advise me on some potential architecture choices. When implementing goal-based vector field pathfinding, what would be the most efficient way of keeping the flow field data when manipulating many clusters of agents? Current Thinking: It seems inefficient to recalculate per cluster of agents unless a new goal is set for them, so I would have to maintain the flow field at least until the goal is met. I was considering creating a data structure like a map that indexes each generated and maintained flow field by whether some agent clusters are still trying to reach that goal, after which I would terminate it. I would then update each cluster every 1 to 2 seconds in order not to throttle the CPU. Currently the algorithm for updating the flow field is rudimentary and not optimised. Each flow field currently takes up about 11MB (31x31 for testing purposes) of memory, and I am concerned that I could run into a scenario that takes up a lot of memory if enough particularly small clusters of agents are all trying to move across the map, especially when the map gets bigger. Question: Could you provide any advice on potential architecture choices that would be ideal in this scenario, or direct me to some implementations/papers?
I have so far only managed to find implementations and demos of flow fields with 1 goal and 1 cluster in mind, not put into practice in a real game with multiple clusters of units simultaneously moving around the map. Many thanks.
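One common architecture for the "many clusters, shared goals" case is a goal-keyed, reference-counted cache: clusters that share a goal share one field, a field is computed only when its goal first appears, and it is freed when the last cluster using it arrives or retargets. A sketch (names are illustrative, not from any particular engine):

```python
class FlowFieldCache:
    """Goal-keyed flow field sharing. `compute_field` is the expensive
    goal -> field function; fields are shared across clusters and freed
    when no cluster references them anymore."""

    def __init__(self, compute_field):
        self.compute_field = compute_field
        self.fields = {}    # goal -> field
        self.refcount = {}  # goal -> number of clusters using it

    def acquire(self, goal):
        """A cluster starts moving toward `goal`; computes the field at most once."""
        if goal not in self.fields:
            self.fields[goal] = self.compute_field(goal)
            self.refcount[goal] = 0
        self.refcount[goal] += 1
        return self.fields[goal]

    def release(self, goal):
        """A cluster arrived or was retargeted; free the field when unused."""
        self.refcount[goal] -= 1
        if self.refcount[goal] == 0:
            del self.fields[goal]
            del self.refcount[goal]
```

This caps memory at one field per distinct active goal rather than one per cluster, which directly addresses the "many small clusters" worry; the staggered 1-2 second cluster updates can then be scheduled independently of field lifetime.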
  7. Hello All, I am new to AngelScript and I want to create steering behaviors for enemy AI. I haven't found any AngelScript examples on the web for Seek, Flee, Follow, Arrive, Collision Avoidance, etc. so far. I have successfully written an A* routine, but that's about it. Does anyone know of any steering behaviors written in AngelScript? Can you post a link? Since I wasn't able to find any, I decided to try to convert a "Flee" behavior written in C. Unfortunately, there is a command the C code uses that I can't seem to find in AngelScript: specifically, the "limit" command. I think it is equivalent to a "SCALAR" command? I am not sure. Anyone know? Thanks in advance for any help. This is the code for the behavior below:

class Vehicle {
  PVector location;
  PVector velocity;
  PVector acceleration;
  // Additional variable for size
  float r;
  float maxforce;
  float maxspeed;

  Vehicle(float x, float y) {
    acceleration = new PVector(0, 0);
    velocity = new PVector(0, 0);
    location = new PVector(x, y);
    r = 3.0;
    // Arbitrary values for maxspeed and force; try varying these!
    maxspeed = 4;
    maxforce = 0.1;
  }

  // Our standard "Euler integration" motion model
  void update() {
    velocity.add(acceleration);
    velocity.limit(maxspeed); // <<<------------ THIS LINE USES THE COMMAND "LIMIT"
    location.add(velocity);
    acceleration.mult(0);
  }

  // Newton's second law; we could divide by mass if we wanted.
  void applyForce(PVector force) {
    acceleration.add(force);
  }

  // Our seek steering force algorithm
  void seek(PVector target) {
    PVector desired = PVector.sub(target, location);
    desired.normalize();
    desired.mult(maxspeed);
    PVector steer = PVector.sub(desired, velocity);
    steer.limit(maxforce);
    applyForce(steer);
  }

  void display() {
    // Vehicle is a triangle pointing in the direction of velocity;
    // since it is drawn pointing up, we rotate it an additional 90 degrees.
    float theta = velocity.heading() + PI/2;
    fill(175);
    stroke(0);
    pushMatrix();
    translate(location.x, location.y);
    rotate(theta);
    beginShape();
    vertex(0, -r*2);
    vertex(-r, r*2);
    vertex(r, r*2);
    endShape(CLOSE);
    popMatrix();
  }
}
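For what it's worth, that code looks like Processing (Java), where PVector.limit() simply clamps a vector's magnitude: if the vector is longer than the maximum, it is scaled down to that length; otherwise it is left unchanged. AngelScript has no built-in equivalent, so a port would write the helper itself. Shown here in Python as a language-neutral sketch:

```python
import math

def limit(vec, max_len):
    """Clamp a 2D vector's magnitude to max_len (what PVector.limit does)."""
    x, y = vec
    mag = math.hypot(x, y)
    if mag <= max_len or mag == 0:
        return (x, y)
    scale = max_len / mag
    return (x * scale, y * scale)
```

In AngelScript the same few lines would become a method on whatever vector class the script layer registers.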
  8. Let's say, on an abstract level, that the path A->B->C->D->E is valid, but the agent must choose portal #1 to reach E... Suppose the agent has instead chosen portal #2 and travels to B, C and D, finally finding itself stuck at D, unable to move on to E... The whole computation is wasted. How do I avoid this problem? Thanks, Jack
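One common fix (roughly what hierarchical approaches like HPA* do) is to make the portals themselves the abstract nodes, and to add an abstract edge between two portals only after verifying that a concrete path between them exists inside their shared region. A plan that commits to the wrong portal then never gets generated in the first place. A sketch with invented names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Portal:
    name: str
    regions: frozenset  # the two abstract regions this portal joins

def build_abstract_graph(regions, portals, concrete_path_exists):
    """Abstract graph over portals: an edge between two portals of the same
    region exists only if `concrete_path_exists(a, b, region)` confirms an
    actual low-level path between them inside that region."""
    graph = {p: set() for p in portals}
    for region in regions:
        ps = [p for p in portals if region in p.regions]
        for i, a in enumerate(ps):
            for b in ps[i + 1:]:
                if concrete_path_exists(a, b, region):
                    graph[a].add(b)
                    graph[b].add(a)
    return graph
```

The intra-region checks are done once when the map (or a region) changes, so the per-query search only ever walks edges that are known to be traversable.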
  9. I am not sure whether I can ask questions about a specific library here. I'd like to tag some polys in a navigation mesh that correspond to grass or road, etc. I can give an extent to do so, or alternatively I can directly feed a geometry in and the polys are tagged that way. But I am looking into other approaches, such as allowing the user to tag the polys using a text file or a bitmap file (the way heightfields are done). Say I define an area map which is a grayscale image whose values range from 0-255; if, for example, the value of the first char is 0, then I can map this index to a certain place in the navigation mesh and say this is walkable ground, etc. But unlike heightfields, where you define an image and the result is some terrain, when you start off with a bitmap for an area map, what do you end up with? You see, I already had the geometry, so the area map probably doesn't make sense here, the same way as the text file approach... Any ideas? Jack
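To make the bitmap idea concrete: since the geometry already exists, an area map would not generate anything new; it would only be a lookup that assigns an area type to each existing poly, for example by sampling the image under each poly's centroid. A hypothetical sketch (names invented; a library like Recast instead marks areas with convex volumes over the geometry):

```python
def tag_polys(polys, area_map, world_w, world_h, value_to_area):
    """Assign an area tag to each navmesh poly by sampling a grayscale
    'area map' at the poly centroid.
    polys: list of vertex lists [(x, y), ...] in world units.
    area_map: 2D list of 0-255 values covering world_w x world_h.
    value_to_area: dict mapping pixel value -> area tag."""
    rows, cols = len(area_map), len(area_map[0])
    tags = []
    for verts in polys:
        cx = sum(x for x, _ in verts) / len(verts)
        cy = sum(y for _, y in verts) / len(verts)
        # Map world-space centroid to a pixel, clamped to the image.
        px = min(cols - 1, int(cx / world_w * cols))
        py = min(rows - 1, int(cy / world_h * rows))
        tags.append(value_to_area.get(area_map[py][px], 'walkable'))
    return tags
```

So what you "end up with" is simply a per-poly area flag (grass, road, ...) that the query-time filters can weight or exclude, exactly as if the polys had been tagged by hand or by extent.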
  10. When my program was generating navigation meshes for my geometry, with an extent of 5000x5000 (m) and a cell size of 5.0 meters, it had to generate 250,000 cells, which makes my computer halt. The target object is a truck, which has a radius of about 20 meters. Should I give the cell size a value of 20? I don't really want to use this navigation mesh for one type of vehicle only; I want to use it for vehicles like cars, which are only about 5 meters in radius... The generation speed is one problem, and the memory consumption is another. Any ideas how to improve this? Thanks, Jack
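Two levers usually help here. First, cell count grows with the square of the resolution, so doubling the cell size quarters the work; the cell size should track the smallest agent (the cars), with larger agents handled at query time via their own radius rather than baked into the grid. Second, tiled generation builds only the tiles near agents instead of the whole extent up front. A back-of-the-envelope sketch (names are illustrative):

```python
def grid_cells(extent_m, cell_m):
    """Cells needed for a square extent at a given cell size."""
    side = int(extent_m / cell_m)
    return side * side

def tiles_to_build(agent_positions, extent_m, tile_m, radius_tiles):
    """Tiled generation: return only the tiles within a radius of some
    agent, instead of baking the whole extent at once."""
    wanted = set()
    tiles_per_side = int(extent_m / tile_m)
    for (x, y) in agent_positions:
        tx, ty = int(x / tile_m), int(y / tile_m)
        for dx in range(-radius_tiles, radius_tiles + 1):
            for dy in range(-radius_tiles, radius_tiles + 1):
                nx, ny = tx + dx, ty + dy
                if 0 <= nx < tiles_per_side and 0 <= ny < tiles_per_side:
                    wanted.add((nx, ny))
    return wanted
```

For a 5000 m extent, 5 m cells give a 1000x1000 grid (a million cells) while 20 m cells give 250x250 (62,500), which is why the full-map bake stalls; building a handful of 500 m tiles around the active vehicles keeps both generation time and memory bounded.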
  11. Hi, I was reading in Game AI Pro how they implemented Supreme Commander's path finding, and came to one question. For the integration of the cost field they use the Eikonal equation for traversing the areas, and they recommended the Fast Iterative Method and its parallel version for solving it. However, in all the basic tutorials of flow field navigation that I have found, the integration of the cost field uses a simple grass/brush fire algorithm. My question is: what would be the difference if they had used just the grass fire algorithm in the game? I guess the reason was some quality/performance trade-off. Fortunately the Fast Iterative Method is very easy to implement, so I can compare the performance; but what I don't understand is what the "quality" benefit is.
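The quality difference is the distance metric. A grass/brushfire integration propagates in grid steps, so it yields Manhattan (4-connected) or octile (8-connected) distances, and the resulting gradient bends flow toward axis and diagonal directions. An Eikonal solver (Fast Marching / Fast Iterative Method) approximates true Euclidean geodesic distances, so the vectors point along straight lines through open space. A small sketch showing the distortion on an empty field:

```python
from collections import deque

def brushfire(grid_n, source):
    """4-connected grass-fire integration on an empty grid_n x grid_n field."""
    INF = float('inf')
    dist = [[INF] * grid_n for _ in range(grid_n)]
    sx, sy = source
    dist[sy][sx] = 0
    queue = deque([source])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < grid_n and 0 <= ny < grid_n and dist[ny][nx] == INF:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

# Distance to the far corner of an 11x11 empty field from (0, 0):
# the grass fire reports the Manhattan value 20, while the true
# (Eikonal) distance is the straight-line 10 * sqrt(2) ~= 14.14.
```

That gap is exactly what shows up visually as units marching in L-shaped or 45-degree segments instead of walking straight at the goal; the Eikonal/FIM solve buys smoother, shorter-looking flow at extra solver cost.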
  12. Hello GameDev. In this blog I'll be covering the development life-cycle thus far, provide an update on dynamic structures, and introduce scripted events.

Development life-cycle

I think the time has come at last. What time is that exactly? Well, let me explain... A couple of months back I was thumbing through my blog series and I was startled. I noticed that it appeared as though I had been repeating myself for the better part of 4 years, coming up on 5 now. I wondered if maybe I was stuck in some sort of mental recursive loop or something, forever redoing, rebuilding, remaking. Quickly though, I realised I was reading the situation incorrectly. Even though it may seem I've been talking about the same stuff all this time, I've come to realise the project has actually gone through 3 distinct phases:

Experimentation. At the very beginning, when I first started using THREE.js and node.js, I played around with them and worked out a rough idea of how to get certain things to work a certain way. I made different colored boxes to represent different players; each client could move one box, and node.js would coordinate the movement. I had also written my first dynamic-wall functions. I remember they were atrociously slow, and the boxes could not move around them. I learned which ideas were worth pursuing and which needed to be scrapped.

Integration. Eventually a time came when I felt comfortable enough with my abilities in javascript that I decided I needed to start combining many of the different features I had made. I had boxes that could move around and walls that could be built, and eventually I wrote my first function to merge the two, getting the boxes to move around the walls, that is. That too was extremely slow. Integrating these separate elements gave me a greater appreciation for logic that can be reused across different applications, if that makes any sense.

Stability. Then in 2016 I decided it was time to kick things up a notch.
I didn't know it at the time, but my first attempt at creating a robust function was probably the best thing to happen to this project. At the time I just wanted to finally write a function that wasn't going to break on me. I remember spending weeks tackling all the errors, wondering: does it ever end? But after I had that function executing like clockwork, it dawned on me just how important stable functions are, especially if I want the framework for my game to be dependable. So there you have it: Stable Integrated Code. Of course, there has always been a gradual transition from one phase to the next, and even today I find myself experimenting; after all, experimenting can breathe fresh life into a project.

Dynamic Structures

I'm incredibly proud to show off my first polished video of the latest and greatest dynamic structure tool. In the video, if you're interested, I build a structure from scratch so you can see how user-friendly I tried to make it.

Scripted Events

I would also like to share a video that showcases my first attempt at a scripted event. It's the one I currently use on my project page, updated earlier this month; if you haven't checked it out, please do, and you can skip the first 25 seconds or so. It's an idea I came up with a few months back for promoting the project. Since these simulin live in a simulated world, I really want to play on that idea, and this is just one example of a scripted event. There's also no reason why the player can't be informed of the simulated nature of the world. For instance, come game time the world is much too small for tons of wildlife to be roaming around, but I could have intermittent scripted events where wildlife materialise, stampede across a plain, and de-materialise once again. And I have many more ideas for scripted events.
Conclusion

I feel confident enough now, and I think it's time to move fully into game development. I've talked about transitioning in the past, but I think enough is enough with the framework. I will still work on it until it has fully incorporated all the features I want, but lately I've been getting more and more excited to just get on with the game. Parting thought: if I didn't have somebody somewhere to share all this with, I would never have come this far. Thank you all for being a supportive and curious bunch!