thecheeselover

Member
  • Content Count

    174
  • Joined

  • Last visited

  • Days Won

    1

thecheeselover last won the day on May 30

thecheeselover had the most liked content!

Community Reputation

625 Good

2 Followers

About thecheeselover

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Art
    Business
    Programming


  1. thecheeselover

    Image Recognition by Equilateral Encoding

    I believe that as long as you have the right categories (so the right colors), then yes, it is better. Ideally, I want to finish this book so I can read the next one, which seems very interesting to me: Artificial Intelligence for Humans, Vol 2: Nature-Inspired Algorithms. Here are its chapters:

    Chapter 1: Population and Scoring
    Chapter 2: Crossover and Mutation
    Chapter 3: Genetic Algorithms
      - TSP: Genetic Algorithm
    Chapter 4: Genetic Programming
    Chapter 5: Speciation
    Chapter 6: Particle Swarm Optimization
      - Flocking in 2D
      - Flocking in 3D
    Chapter 7: Ant Colony Optimization
    Chapter 8: Cellular Automata
      - Conway's Game of Life
    Chapter 9: Artificial Life
    Chapter 10: Modeling Problems

    I have an interest in nature-inspired algorithms, such as genetic programming. That will come when I finish a game, lol. With cocaine and hookers.
  2. thecheeselover

    (Updated) Struggling With Remembering What I've Learned.

    It really depends on what you prefer. Back in 2012, XNA was nice for C#. For Java, I'd recommend LibGDX.
  3. thecheeselover

    (Updated) Struggling With Remembering What I've Learned.

    May I suggest doing something extremely simple, like Pong? The only problem with Unity is that you cannot start learning C# with it and fully understand what's going on under the hood, because Unity is such a gigantic game engine. If you want to learn how to code and not just program video games (and coding knowledge always applies to game development, contrary to what some closed-minded people believe), maybe you should start with the basics and then move into game development by slowly making bigger games with bigger engines. I personally do not understand why some people think they can make games alone without any relevant programming knowledge, such as what collections are and how they differ from each other. What I mean is that I believe programming should be learnt on the side or beforehand. My final advice would be to use a small 2D engine to make games. That would force you to learn how to program properly and to understand what you do. I hope you will figure out a way to learn what you want and make the games you want.
  4. Context

     I recently started reading the book Artificial Intelligence for Humans, Volume 1: Fundamental Algorithms by Jeff Heaton. Upon finishing the chapter Normalizing Data, I decided to practice what I had just learnt, hoping to finally understand 100% of a particular algorithm: equilateral encoding.

     Equilateral Encoding

     The purpose of this algorithm is to normalize N categories by creating an equilateral triangle of N - 1 dimensions. Each vertex in that triangle is a category, and so, because of the properties of the equilateral triangle, it's easy to construct this structure with medians of 1 unit. To select a category, all you need is a location vector of N - 1 dimensions that is ideally inside the equilateral triangle.

     For example, consider the vertex C and the point c. The vertex C is 100% of a specified category, whilst the point c is 0% of that category. The vertices A and B are also 0% of C's category. In other words, the further away a location vector is from a vertex along its median axis, the less it belongs to that vertex's category. Also, if the mentioned axis distance is greater than the length of a median (all medians have the same length in an equilateral triangle, which is the range of values in this context), then the location vector absolutely does not belong to the specified category.

     The reason we don't just select a category directly is that this algorithm was originally designed for machine learning. It quantifies qualitative data, shares the blame better and uses less memory than simpler algorithms, as it uses N - 1 dimensions.

     The Application

     First off, here's the link to my application: https://github.com/thecheeselover/image-recognition-by-equilateral-encoding

     How It Works

     To know how similar two images are, a comparison must be done. Using the equilateral encoding algorithm, a pixel-versus-pixel comparison is the only way to do so, and thus shapes do not count towards the similarities between the two images.

     What categories should we use to compare images? Well, colors of course! Here are the colors I specified as categories:

     - Red
     - Green
     - Blue
     - White
     - Black

     The algorithm selects the closest category for each pixel, according to the difference between the pixel's color and the category's color, and saves its location vector in an array. Then, once both images are loaded, it compares them pixel by pixel by computing the median distance, as mentioned before, between each pair of pixels:

     1. Project the first pixel's vector onto the other one and compute the Euclidean distance to get the median distance.
     2. Sum all computed median distances.

     This way, it would also work if there were location vectors not exactly on a vertex, even though that doesn't happen with the comparison algorithm just described.

     Preliminaries

     Of course, this can be really slow for immense images, so downscale the images to a maximum width and height, 32 pixels in my case. Also, remove the transparency, as it is not a color component in itself. If both images do not have the same aspect ratio, and therefore not the same downscaled dimensions, then consider using extremely different location vectors, and so simply different categories, for "empty" pixels versus non-empty pixels. (A small sketch of such a downscaling step is included after the code listings below.)

     The Code

     The equilateral encoding table:

package org.benoitdubreuil.iree.model;

import org.benoitdubreuil.iree.utils.MathUtils;

/**
 * The image data for the image recognition by equilateral encoding.
 * <p>
 * See Jeff Heaton, the author of this algorithm, and his book that contains everything about it:
 * <a href="https://www.heatonresearch.com/book/aifh-vol1-fundamental.html">Artificial Intelligence for Humans, Vol 1: Fundamental Algorithms</a>
 * <p>
 * The base source code for the equilateral encoding algorithm:
 * <a href="https://github.com/jeffheaton/aifh/blob/master/vol1/java-examples/src/main/java/com/heatonresearch/aifh/normalize/Equilateral.java">Equilateral.java</a>
 */
public class EquilateralEncodingTable {

    // The algorithm with only one category does not make sense
    private static final int MIN_CATEGORIES_FOR_ENCODING = 2;

    private int m_categoryCount;
    private int m_dimensionCount;
    private double m_lowestValue;
    private double m_highestValue;
    private double m_valueRange;
    private double[][] m_categoriesCoordinates;

    public EquilateralEncodingTable(int categoryCount, double lowestValue, double highestValue) {
        if (categoryCount < MIN_CATEGORIES_FOR_ENCODING) {
            throw new ArrayIndexOutOfBoundsException("Not enough categories for equilateral encoding");
        }

        this.m_categoryCount = categoryCount;
        this.m_dimensionCount = computeDimensionCount(categoryCount);
        this.m_lowestValue = lowestValue;
        this.m_highestValue = highestValue;
        this.m_valueRange = highestValue - lowestValue;
        this.m_categoriesCoordinates = computeEquilateralEncodingTable();
    }

    /**
     * Encodes a supplied index and gets the equilaterally encoded coordinates at that index.
     *
     * @param equilateralCoordsIndex The index at which the equilaterally encoded coordinates are.
     *
     * @return The equilaterally encoded coordinates.
     */
    public double[] encode(int equilateralCoordsIndex) {
        return m_categoriesCoordinates[equilateralCoordsIndex];
    }

    /**
     * Decodes the supplied coordinates by finding the closest equilaterally encoded coordinates.
     *
     * @param coordinates The coordinates that need to be decoded. They do not need to be equilaterally encoded; the goal is simply to find the closest equilaterally encoded
     *                    coordinates.
     *
     * @return The index at which the closest equilaterally encoded coordinates are.
     */
    public int decode(double[] coordinates) {
        double closestDistance = Double.POSITIVE_INFINITY;
        int closestEquilateralCoordsIndex = -1;

        for (int i = 0; i < m_categoriesCoordinates.length; ++i) {
            double dist = computeDistance(coordinates, i);

            if (dist < closestDistance) {
                closestDistance = dist;
                closestEquilateralCoordsIndex = i;
            }
        }

        return closestEquilateralCoordsIndex;
    }

    /**
     * Computes the Euclidean distance between the supplied coordinates and the equilaterally encoded coordinates at the supplied index.
     *
     * @param coordinates            Coordinates of the first n-dimensional vector.
     * @param equilateralCoordsIndex Index for the equilaterally encoded coordinates.
     *
     * @return The Euclidean distance between the two vectors.
     */
    public double computeDistance(double[] coordinates, int equilateralCoordsIndex) {
        return MathUtils.computeDistance(coordinates, m_categoriesCoordinates[equilateralCoordsIndex]);
    }

    /**
     * Computes the equilateral encoding table, which is used as a lookup table for the data.
     *
     * @return The equilateral encoding table.
     */
    private double[][] computeEquilateralEncodingTable() {
        double negativeReciprocalOfN;
        double scalingFactor;

        final double[][] matrix = new double[m_categoryCount][m_dimensionCount];

        matrix[0][0] = -1;
        matrix[1][0] = 1.0;

        if (m_categoryCount > 2) {
            for (int dimension = 2; dimension < m_categoryCount; ++dimension) {
                // scale the matrix so far
                scalingFactor = dimension;
                negativeReciprocalOfN = Math.sqrt(scalingFactor * scalingFactor - 1.0) / scalingFactor;

                for (int coordinate = 0; coordinate < dimension; ++coordinate) {
                    for (int oldDimension = 0; oldDimension < dimension - 1; ++oldDimension) {
                        matrix[coordinate][oldDimension] *= negativeReciprocalOfN;
                    }
                }

                scalingFactor = -1.0 / scalingFactor;

                for (int coordinate = 0; coordinate < dimension; ++coordinate) {
                    matrix[coordinate][dimension - 1] = scalingFactor;
                }

                for (int coordinate = 0; coordinate < dimension - 1; ++coordinate) {
                    matrix[dimension][coordinate] = 0.0;
                }

                matrix[dimension][dimension - 1] = 1.0;

                // Scale the matrix
                for (int row = 0; row < matrix.length; ++row) {
                    for (int col = 0; col < matrix[0].length; ++col) {
                        double min = -1;
                        double max = 1;

                        matrix[row][col] = ((matrix[row][col] - min) / (max - min)) * m_valueRange + m_lowestValue;
                    }
                }
            }
        }

        return matrix;
    }

    public static int computeDimensionCount(int categoryCount) {
        return categoryCount - 1;
    }

    public int getCategoryCount() {
        return m_categoryCount;
    }

    public int getDimensionCount() {
        return m_dimensionCount;
    }

    public double getLowestValue() {
        return m_lowestValue;
    }

    public double getHighestValue() {
        return m_highestValue;
    }

    public double getValueRange() {
        return m_valueRange;
    }

    public double[][] getCategoriesCoordinates() {
        return m_categoriesCoordinates;
    }
}

     The mathematics utilities:

package org.benoitdubreuil.iree.utils;

public final class MathUtils {

    /**
     * Computes the Euclidean distance between the supplied vectors.
     *
     * @param lhsCoordinates Coordinates of the first n-dimensional vector.
     * @param rhsCoordinates Coordinates of the second n-dimensional vector.
     *
     * @return The Euclidean distance between the two vectors.
     */
    public static double computeDistance(double[] lhsCoordinates, double[] rhsCoordinates) {
        double result = 0;

        for (int i = 0; i < rhsCoordinates.length; ++i) {
            result += Math.pow(lhsCoordinates[i] - rhsCoordinates[i], 2);
        }

        return Math.sqrt(result);
    }

    /**
     * Normalizes the supplied vector.
     *
     * @param vector The vector to normalize.
     *
     * @return The same array, but normalized.
     */
    public static double[] normalizeVector(double[] vector) {
        double squaredLength = 0;

        for (int dimension = 0; dimension < vector.length; ++dimension) {
            squaredLength += vector[dimension] * vector[dimension];
        }

        if (squaredLength != 1.0 && squaredLength != 0.0) {
            double reciprocalLength = 1.0 / Math.sqrt(squaredLength);

            for (int dimension = 0; dimension < vector.length; ++dimension) {
                vector[dimension] *= reciprocalLength;
            }
        }

        return vector;
    }

    /**
     * Negates the vector.
     *
     * @param vector The vector to negate.
     *
     * @return The same array, but each coordinate negated.
     */
    public static double[] negateVector(double[] vector) {
        for (int dimension = 0; dimension < vector.length; ++dimension) {
            vector[dimension] = -vector[dimension];
        }

        return vector;
    }

    /**
     * Multiplies the vector by a scalar.
     *
     * @param vector The vector to multiply.
     * @param scalar The scalar by which to multiply the vector.
     *
     * @return The same array, but each coordinate multiplied by the scalar.
     */
    public static double[] multVector(double[] vector, double scalar) {
        for (int dimension = 0; dimension < vector.length; ++dimension) {
            vector[dimension] *= scalar;
        }

        return vector;
    }

    /**
     * Computes the dot product of the two supplied points.
     *
     * @param lhsVector The first point.
     * @param rhsVector The second point.
     *
     * @return The dot product of the two points.
     */
    public static double dotProductVector(double[] lhsVector, double[] rhsVector) {
        double dotResult = 0;

        for (int dimension = 0; dimension < lhsVector.length; ++dimension) {
            dotResult += lhsVector[dimension] * rhsVector[dimension];
        }

        return dotResult;
    }

    private MathUtils() {
    }
}

     The image data that shows how to encode pixels:

package org.benoitdubreuil.iree.model;

import org.benoitdubreuil.iree.controller.ControllerIREE;
import org.benoitdubreuil.iree.gui.ImageGUIData;
import org.benoitdubreuil.iree.pattern.observer.IObserver;
import org.benoitdubreuil.iree.pattern.observer.Observable;

import java.awt.*;
import java.awt.image.BufferedImage;

public class ImageData extends Observable<ImageData> implements IObserver<ImageGUIData> {

    private double[][] m_encodedPixelData;

    /**
     * Encodes the pixel data of the supplied image data.
     *
     * @param newValue The new value of the observed.
     */
    @Override
    public void observableChanged(ImageGUIData newValue) {
        if (newValue.getDownScaled() != null) {
            encodePixelData(newValue);
            modelChanged(this);
        }
    }

    private void encodePixelData(ImageGUIData imageGUIData) {
        EquilateralEncodingTable encodingTable = ControllerIREE.getInstance().getEncodingTable();
        EquilateralEncodingCategory[] categories = EquilateralEncodingCategory.values();
        BufferedImage image = imageGUIData.getDownScaled();
        int width = image.getWidth();
        int height = image.getHeight();

        m_encodedPixelData = new double[width * height][];

        for (int x = 0; x < width; ++x) {
            for (int y = 0; y < height; ++y) {
                int originalPixel = image.getRGB(x, height - 1 - y);

                int r = (originalPixel >> 16) & 0xff;
                int g = (originalPixel >> 8) & 0xff;
                int b = originalPixel & 0xff;

                int minColorDistance = 255 * 3;
                int minColorDistanceCategory = 0;

                for (int category = 0; category < encodingTable.getCategoryCount(); ++category) {
                    Color categoryColor = categories[category].getColor();

                    int colorDistance = Math.abs(r - categoryColor.getRed()) + Math.abs(g - categoryColor.getGreen()) + Math.abs(b - categoryColor.getBlue());

                    if (colorDistance < minColorDistance) {
                        minColorDistance = colorDistance;
                        minColorDistanceCategory = category;
                    }
                }

                m_encodedPixelData[x * height + y] = encodingTable.encode(minColorDistanceCategory).clone();
            }
        }
    }

    public int getPixelCount() {
        return m_encodedPixelData.length;
    }

    double[][] getEncodedPixelData() {
        return m_encodedPixelData;
    }
}

     The actual image data recognition algorithm:

package org.benoitdubreuil.iree.model;

import org.benoitdubreuil.iree.controller.ControllerIREE;
import org.benoitdubreuil.iree.utils.MathUtils;

public final class ImageDataRecognition {

    /**
     * Compares two images and returns how similar they are.
     *
     * @param lhs The first image to compare.
     * @param rhs The second image to compare.
     *
     * @return Inclusively from 0, totally different, to 1, the same.
     */
    public static double compareImages(ImageData lhs, ImageData rhs) {
        double meanDistance = 0;

        double[][] lhsEncodedPixelData = lhs.getEncodedPixelData();
        double[][] rhsEncodedPixelData = rhs.getEncodedPixelData();

        if (lhsEncodedPixelData != null && rhsEncodedPixelData != null) {
            EquilateralEncodingTable table = ControllerIREE.getInstance().getEncodingTable();
            int maxPixelCount = Math.max(lhs.getPixelCount(), rhs.getPixelCount());
            double emptyPixelMeanValue = 1.0;

            for (int pixel = 0; pixel < maxPixelCount; ++pixel) {
                double[] lhsVector;
                double[] rhsVector;

                if (pixel < lhs.getPixelCount()) {
                    lhsVector = lhsEncodedPixelData[pixel];
                } else {
                    meanDistance += emptyPixelMeanValue;
                    continue;
                }

                if (pixel < rhs.getPixelCount()) {
                    rhsVector = rhsEncodedPixelData[pixel];
                } else {
                    meanDistance += emptyPixelMeanValue;
                    continue;
                }

                double rhsDotLhs = MathUtils.dotProductVector(rhsVector, lhsVector);
                double[] rhsProjectedOntoLhs = MathUtils.multVector(lhsVector.clone(), rhsDotLhs);

                meanDistance += MathUtils.computeDistance(lhsVector, rhsProjectedOntoLhs) / table.getValueRange() / maxPixelCount;
            }
        }

        return meanDistance;
    }

    private ImageDataRecognition() {
    }
}
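     The downscaling mentioned in the Preliminaries is not part of the listings above. Here is a minimal sketch of how it could be done with plain AWT; the 32-pixel cap matches the article, but the class and method names are hypothetical and not from the repository:

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public final class ImageDownscaler {

    private static final int MAX_SIZE = 32; // maximum width and height, as in the article

    /**
     * Downscales the image so that neither dimension exceeds MAX_SIZE and drops the alpha channel,
     * since transparency is not a color component for the encoding.
     */
    public static BufferedImage downscale(BufferedImage source) {
        double scale = Math.min(1.0, (double) MAX_SIZE / Math.max(source.getWidth(), source.getHeight()));
        int width = Math.max(1, (int) (source.getWidth() * scale));
        int height = Math.max(1, (int) (source.getHeight() * scale));

        // TYPE_INT_RGB has no alpha, which also removes the transparency.
        BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = result.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(source, 0, 0, width, height, null);
        g.dispose();

        return result;
    }

    private ImageDownscaler() {
    }
}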
  5. thecheeselover

    Tons of ideas, don’t know computers...

    From your post, you really seem like an "idea guy" who cannot contribute to a project. Please read the following link to understand what I mean. What can you contribute to the project?
  6. thecheeselover

    Voxel Traversal Algorithm (Ray Casting)

    I didn't know that algorithm. However, the result of Bresenham's method is the first of the two possible choices, which can traverse diagonally. For my algorithm, I need the second choice, because I don't want the ray cast to select voxels diagonally, as it might then select hidden voxels.
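    For reference, here is a minimal 2D sketch of that second kind of traversal (in the style of Amanatides & Woo), which steps one axis at a time and therefore never crosses a cell diagonally. It is only an illustration, not the code from my voxel engine:

public final class VoxelTraversal2D {

    /** Visits every cell crossed by the segment from (x0, y0) to (x1, y1), one axis step at a time. */
    public static void traverse(double x0, double y0, double x1, double y1, CellVisitor visitor) {
        int x = (int) Math.floor(x0);
        int y = (int) Math.floor(y0);
        int endX = (int) Math.floor(x1);
        int endY = (int) Math.floor(y1);

        double dirX = x1 - x0;
        double dirY = y1 - y0;
        int stepX = dirX > 0 ? 1 : -1;
        int stepY = dirY > 0 ? 1 : -1;

        // Parametric distance along the ray to the next vertical / horizontal grid line,
        // and the distance between two consecutive grid lines on each axis.
        double tMaxX = dirX != 0 ? ((x + (stepX > 0 ? 1 : 0)) - x0) / dirX : Double.POSITIVE_INFINITY;
        double tMaxY = dirY != 0 ? ((y + (stepY > 0 ? 1 : 0)) - y0) / dirY : Double.POSITIVE_INFINITY;
        double tDeltaX = dirX != 0 ? stepX / dirX : Double.POSITIVE_INFINITY;
        double tDeltaY = dirY != 0 ? stepY / dirY : Double.POSITIVE_INFINITY;

        visitor.visit(x, y);

        while (x != endX || y != endY) {
            if (tMaxX < tMaxY) {
                // The ray crosses a vertical grid line first: step in X only.
                tMaxX += tDeltaX;
                x += stepX;
            } else {
                // Otherwise step in Y only; the two axes are never stepped at once.
                tMaxY += tDeltaY;
                y += stepY;
            }

            visitor.visit(x, y);
        }
    }

    public interface CellVisitor {
        void visit(int x, int y);
    }

    private VoxelTraversal2D() {
    }
}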
  7. Subscribe to our subreddit to get all the updates from the team!

     Introduction

     In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate onto a procedurally generated terrain. Until our last agile sprint, only the A* and string pulling algorithms were used to do so. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe how I achieved a simple way for agents to not walk into walls.

     Configuration

     - A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
     - Navigation cells and their: polygonal edges, connections (other cells), shared edges (the line segment where two connected cells intersect), centroids and normals.
     - A path generated by A* and string pulling (not tested without string pulling), consisting of waypoints on the navigation mesh.

     The Algorithm

     The agent is the pink low-poly humanoid and the final destination is the flag.

     The ideal algorithm (yet unoptimized) would be to cast an oriented rectangle between each pair of consecutive waypoints, where its width is two times the agent's radius. Think of the agent's center position as being in the middle of that width. However, this algorithm is too complicated, too long to develop for our game and too heavy for what it brings, so I thought of another algorithm, which actually has its drawbacks but is more suited for our game. Psst, check this article if you haven't seen it yet.

     The algorithm is the following (a minimal sketch of the 2D intersection test used in steps 4 to 10 is included after this list):

     1. For each waypoint, pick the current one and the next one, until the next one is the last.
     2. Iterate over the current navigation cell's edges. The current cell is defined by the agent's 3D position. To find it, you need a spatial and optimized way to determine the closest cell to a 3D point. Our way to do it is to first have an octree that partitions the navigation mesh. After that, get all the cells that are close to the point, plus an extra buffer. To find the cell that is the closest to the point, for each picked cell, project the position onto it; this can be done using the cells' centroids and normals. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the smallest one.
     3. Convert each edge to a 2D line by discarding the Y component (the UP vector).
     4. For each side, left and right, which are defined by the agent's position and its direction towards the next waypoint, cast a 2D line that starts from the agent's position, goes towards one of the two directions perpendicular to the direction to the next waypoint, and has a length of the defined radius.
     5. If there's an intersection on a connection and it's on the shared part of the connection, then continue with the connected cell's edges.
     6. If there is any intersection other than in #5, create a new waypoint before the next waypoint. This new one's position is defined by the intersection's position, translated by a length of two times the radius along the agent's current direction taken as a 2D line. The same translation is applied to the next waypoint.
     7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent and with the same length as the distance between the current and next waypoint.
     8. Check for #5.
     9. If there is an intersection on a connection and it's on the unshared part of the connection, then do #6 (ignoring its condition).
     10. If there's an intersection with a simple edge, then translate the next waypoint as described in #6.
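     Here is a minimal, self-contained sketch of the 2D segment intersection test that the side-line and forward-line casts above rely on. It is an illustration only; the class name and the way edges would be fed into it are assumptions, not our actual navigation code:

public final class Segment2D {

    /**
     * Returns the intersection point of segments AB and CD, or null if they do not intersect.
     * The Y (UP) component is assumed to have been discarded already, so these are plane coordinates.
     */
    public static double[] intersect(double ax, double ay, double bx, double by,
                                     double cx, double cy, double dx, double dy) {
        double rX = bx - ax, rY = by - ay; // direction of AB
        double sX = dx - cx, sY = dy - cy; // direction of CD
        double denominator = rX * sY - rY * sX;

        if (Math.abs(denominator) < 1e-9) {
            return null; // parallel or collinear segments
        }

        double t = ((cx - ax) * sY - (cy - ay) * sX) / denominator; // parameter along AB
        double u = ((cx - ax) * rY - (cy - ay) * rX) / denominator; // parameter along CD

        if (t < 0 || t > 1 || u < 0 || u > 1) {
            return null; // the lines cross outside one of the segments
        }

        return new double[]{ax + t * rX, ay + t * rY};
    }

    private Segment2D() {
    }
}

     Returning the intersection point (and not just a boolean) is what lets the new waypoint be placed relative to it, as described in step 6.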
Here's a video of the algorithm in action:
  8. Subscribe to our subreddit to get all the updates from the team!

     Last month, I made a pretty simple dungeon generator algorithm. It's an organic brute force algorithm, in the sense that the rooms and corridors aren't carved into a grid and that it stops when an area doesn't fit in the graph.

     Here's the algorithm (a minimal sketch in Java is included at the end of this post):

     1. Start from the center (0, 0) in 2D
     2. Generate a room
     3. Choose a side to extend to
     4. Attach a corridor to that side
     5. If it doesn't fit, stop the generation
     6. Attach a room at the end of the corridor
     7. If it doesn't fit, stop the generation
     8. Repeat from steps 3 to 7 until enough rooms are generated

     It allowed us to test out our pathfinding algorithm (A* & string pulling). Here are some pictures of the output in 2D and 3D:
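     For illustration, here is a minimal, self-contained sketch of that loop, using axis-aligned rectangles and an overlap test as the "does it fit" check. The sizes and the fit test are assumptions; this is not our actual generator code:

import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class DungeonGeneratorSketch {

    private final Random random = new Random();
    private final List<Rectangle> placed = new ArrayList<>();

    public List<Rectangle> generate(int targetRoomCount) {
        // 1-2. Start from the center (0, 0) and generate the first room.
        Rectangle room = new Rectangle(-4, -4, 8, 8);
        placed.add(room);

        for (int i = 1; i < targetRoomCount; ++i) {
            // 3. Choose a side to extend to: 0 = +X, 1 = -X, 2 = +Y, 3 = -Y.
            int side = random.nextInt(4);
            int dx = side == 0 ? 1 : side == 1 ? -1 : 0;
            int dy = side == 2 ? 1 : side == 3 ? -1 : 0;

            // 4-5. Attach a corridor to that side; stop the generation if it does not fit.
            Rectangle corridor = extend(room, dx != 0 ? 6 : 2, dy != 0 ? 6 : 2, dx, dy);
            if (!fits(corridor)) {
                break;
            }
            placed.add(corridor);

            // 6-7. Attach a room at the end of the corridor; stop if it does not fit.
            Rectangle next = extend(corridor, 8, 8, dx, dy);
            if (!fits(next)) {
                break;
            }
            placed.add(next);

            // 8. Repeat from the newly attached room.
            room = next;
        }

        return placed;
    }

    /** Places a w x h rectangle against the given side of "from", centered on the other axis. */
    private Rectangle extend(Rectangle from, int w, int h, int dx, int dy) {
        int x = dx > 0 ? from.x + from.width : dx < 0 ? from.x - w : from.x + from.width / 2 - w / 2;
        int y = dy > 0 ? from.y + from.height : dy < 0 ? from.y - h : from.y + from.height / 2 - h / 2;
        return new Rectangle(x, y, w, h);
    }

    /** An area "fits" if it does not overlap anything already placed. */
    private boolean fits(Rectangle candidate) {
        for (Rectangle other : placed) {
            if (candidate.intersects(other)) {
                return false;
            }
        }
        return true;
    }
}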
  9. thecheeselover

    Unit Vision

    Subscribe to our subreddit to get all the updates from the team!

    First off, here's a video that shows the unit vision in action:

    So, what is the unit vision? It's a simple mechanism that notifies a unit when another unit enters its vision field. It also takes into account whether the vision is blocked by entities. This is how it is implemented, step by step:

    1. A cone ghost control is attached to the unit's head
    2. All units intersecting with the cone's AABB fire events (AABB because of how Bullet Physics works)
    3. Cast a ray towards the visible unit and then adjust the angle so that it fits in the cone
    4. If the ray cast touches the supposedly visible unit, then it is truly visible

    Using the debug view of Bullet Physics in the jMonkey Engine 3.1, we're able to see what the vision cone actually looks like. When we are inside the cone, we can't see it because of culling. However, we can see the enemy's arm moving, which is a test I did for when a unit sees another unit. Behind a box, the enemy does not move its arm because he can't see me. But when I leave my hiding spot, he can see me again.
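    For reference, here is a minimal sketch of the cone part of such a test in plain Java. In the game, the cone is a Bullet Physics ghost control and the occlusion check is a physics ray cast; this sketch only covers the angle-and-range check, and all names in it are hypothetical:

public final class VisionConeCheck {

    /**
     * Returns true if the target position is inside the vision cone defined by the eye position,
     * the normalized look direction, the cone half-angle and the view range. In the actual game,
     * a physics ray cast from the eye to the target would then check for occluders.
     */
    public static boolean isInsideVisionCone(double[] eye, double[] lookDir, double[] target,
                                             double halfAngleRadians, double range) {
        double dx = target[0] - eye[0];
        double dy = target[1] - eye[1];
        double dz = target[2] - eye[2];
        double distance = Math.sqrt(dx * dx + dy * dy + dz * dz);

        if (distance > range || distance == 0.0) {
            return false;
        }

        // Cosine of the angle between the look direction and the direction to the target.
        double cosAngle = (dx * lookDir[0] + dy * lookDir[1] + dz * lookDir[2]) / distance;

        return cosAngle >= Math.cos(halfAngleRadians);
    }

    private VisionConeCheck() {
    }
}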
  10. thecheeselover

    Zone generation

    Subscribe to our subreddit to get all the updates from the team!

    I have integrated the zone separation with my implementation of the Marching Cubes algorithm. Now I have been working on zone generation. A level is separated in the following way:

    1. Shrink the zone map to exactly fit an integer number of Chunk2Ds, which are 32² m².
    2. For each Chunk2D, analyse all zones inside its boundaries and determine all possible heights for Chunk3Ds, which are 32³ m³. Imagine this as a three-dimensional array stored as a hash map: we are trying to figure out all the Chunk3D keys for a given Chunk2D.
    3. Create and generate a Chunk3D for each height found.
    4. Execute the Marching Cubes algorithm to assemble the geometry of each Chunk3D.

    In our game, we want levels to look and feel like a certain world. The first world we are creating is the savanna. Even though each Chunk3D is generated using 3D noise, I made a noise module that maps 3D noise into 2D, to be able to apply 2D perturbation to the terrain. I also tried some funkier procedural noises: an arch!

    The important thing with procedural generation is to have a certain level of control over it. With the new zone division system, I have achieved a minimum of that for my game.
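    Here is a small sketch of step 2, collecting the vertical Chunk3D keys for one Chunk2D column. The Zone stand-in and its height query are assumptions for illustration; the real zone data is not shaped like this:

import java.util.HashSet;
import java.util.Set;

public class ChunkColumnAnalyzer {

    public static final int CHUNK_SIZE = 32; // Chunk2Ds are 32 x 32 m, Chunk3Ds are 32 x 32 x 32 m

    /** Minimal stand-in for a zone: a vertical extent inside a chunk column, in metres. */
    public static class Zone {
        public final double minHeight;
        public final double maxHeight;

        public Zone(double minHeight, double maxHeight) {
            this.minHeight = minHeight;
            this.maxHeight = maxHeight;
        }
    }

    /**
     * For one Chunk2D, collects the vertical chunk indices (the Chunk3D "keys")
     * covered by the zones intersecting that column.
     */
    public static Set<Integer> collectChunk3DKeys(Iterable<Zone> zonesInColumn) {
        Set<Integer> keys = new HashSet<>();

        for (Zone zone : zonesInColumn) {
            int lowest = (int) Math.floor(zone.minHeight / CHUNK_SIZE);
            int highest = (int) Math.floor(zone.maxHeight / CHUNK_SIZE);

            for (int key = lowest; key <= highest; ++key) {
                keys.add(key);
            }
        }

        return keys;
    }
}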
  11. thecheeselover

    Zone division

    Subscribe to our subreddit to get all the updates from the team!

    A friend and I are making a rogue-lite retro procedural game. As in many procedural rogue-lite games, it will have rooms to complete, but also the notion of zones. The difference between a zone and a room is that a zone is open air whilst a room is not. Rooms are mainly connected by corridors, while zones are mostly naturally connected / separated by rivers and mountains.

    Because we want levels with zones to be generated, we need to tame the beast that is procedural generation. How can we generate each zone itself and also clearly divide them?

    Until now, I had only been using the Java noise library called Joise, which is the Java community port of JTippetts' Accidental Noise Library. I needed the zone data to be generated with basis function modules, i.e. Perlin noise, but in contrast I needed a more structured approach for the zone division. The Joise library does have a cell noise module, which is a Worley noise. It looks like this, depending on its 4 parameters (1, 0, 0, 0):

    Using math modules, I was able to morph that noise into something that looks like a Voronoi diagram. Here's what a Voronoi diagram should look like (never mind the colors, the important parts are the cell edges and the cell centers):

    A more aesthetic version:

    The Worley noise that I had morphed into a Voronoi-like diagram did not include the cell centers, did not include metadata about the edges and was not deterministic enough, in the sense that the edges could sometimes be around 60 pixels wide. I then searched for a Java Voronoi library and found one called Voronoi-Java. With it, I was able to generate simple Voronoi diagrams:

    Relaxed: 1 iteration
    Relaxed: 2 iterations

    The relaxation concept is actually Lloyd's algorithm, fortunately included within the library.

    Now how can I make that diagram respect my level generation mechanics? Well, if we can limit an approximate number of cells within a certain resolution, that would be a good start. The biggest problem here is that the relaxation reduces the number of cells within a restricted resolution (contrary to the global resolution), so we need to keep that in mind. To do that, I define a constant for the total number of sites / cells. Here's my code:

    private Voronoi createVoronoiDiagram(int resolution) {
        Random random = new Random();
        Stream<Point> gen = Stream.generate(() -> new Point(random.nextDouble() * resolution, random.nextDouble() * resolution));

        return new Voronoi(gen.limit(VORONOI_SITE_COUNT).collect(Collectors.toList())).relax().relax().relax();
    }

    A brief pseudo-code of the algorithm would be the following:

    1. Create the Voronoi diagram
    2. Find the centermost zone
    3. Select X zones while there are zones that respect the selection criteria
    4. Draw the border map
    5. Draw the smoothed border map

    The selection criteria are applied to each edge that is connected to only one selected zone. Here are the selection criteria:

    - Is connected to a closed zone, i.e. one whose edges all form a polygon
    - Has two vertices
    - Is inclusively within the resolution's boundaries

    Here's the result of a drawn border map!

    In this graph, I have a restricted number of cells that follow multiple criteria, and I know each edge and each cell's center point. To draw the smoothed border map, the following actions must be taken: emit colors from the already drawn pixels and then apply a Gaussian blur. Personally, I use the JH Labs Java Image Filters library for the Gaussian blur.
    With color emission only:

    With color emission and a Gaussian blur:

    You may ask yourself why we created a smoothed border map. There's a simple reason: we want the borders to be gradual instead of abrupt. Let's say we want rivers or streams between zones. This gradual border will allow us to progressively increase the depth of the river, making it look more natural in contrast with the adjacent zones.

    All that's left is to flood fill each selected cell and apply that to a zone map.
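    As an illustration of the smoothing pass described above, here is a minimal sketch of the blur step using the JH Labs GaussianFilter; the blur radius and the wrapper class are assumptions, not the values used in our generator:

import java.awt.image.BufferedImage;

import com.jhlabs.image.GaussianFilter;

public final class BorderMapSmoother {

    /**
     * Blurs the border map produced by the color emission step.
     * The GaussianFilter class comes from the JH Labs image filters library mentioned above.
     */
    public static BufferedImage smooth(BufferedImage borderMap) {
        GaussianFilter blur = new GaussianFilter(8.0f); // blur radius in pixels, assumed
        return blur.filter(borderMap, null);            // a null destination lets the filter allocate one
    }

    private BorderMapSmoother() {
    }
}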
  12. Subscribe to our subreddit to get all the updates from the team!

      Recently, I've been tackling more organic low-poly terrains.

      The default way of creating indices for a 3D geometry is the following (credits):

      A way to create simple differences, which makes the geometry slightly more complicated and thus more organic, is to vertically swap the indices of each adjacent quad. In other words, each quad adjacent to a centered quad is its vertical mirror.

      Finally, by not sharing the vertices, and hence by creating two triangles per quad, this is the result with a coherent noise generator (Joise): it is called flat shading.
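      Here is a minimal sketch of index generation where the diagonal of each quad is mirrored relative to its neighbours, in a checkerboard pattern. The row-major vertex layout and the names are assumptions; for flat shading, the vertices would then be duplicated per triangle instead of being shared through these indices:

public final class TerrainIndexBuilder {

    /**
     * Builds triangle indices for a (width x height) vertex grid. Adjacent quads use mirrored
     * diagonals, which is what makes the triangulation less regular and the terrain more organic.
     */
    public static int[] buildIndices(int width, int height) {
        int[] indices = new int[(width - 1) * (height - 1) * 6];
        int i = 0;

        for (int y = 0; y < height - 1; ++y) {
            for (int x = 0; x < width - 1; ++x) {
                int bottomLeft = y * width + x;
                int bottomRight = bottomLeft + 1;
                int topLeft = bottomLeft + width;
                int topRight = topLeft + 1;

                if ((x + y) % 2 == 0) {
                    // Diagonal from bottom-left to top-right.
                    indices[i++] = bottomLeft;  indices[i++] = bottomRight; indices[i++] = topRight;
                    indices[i++] = bottomLeft;  indices[i++] = topRight;    indices[i++] = topLeft;
                } else {
                    // Mirrored diagonal, from bottom-right to top-left.
                    indices[i++] = bottomLeft;  indices[i++] = bottomRight; indices[i++] = topLeft;
                    indices[i++] = bottomRight; indices[i++] = topRight;    indices[i++] = topLeft;
                }
            }
        }

        return indices;
    }

    private TerrainIndexBuilder() {
    }
}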
  13. I integrated LibGDX's Behavior Tree AI module into our roguelite, vaporwave and procedural game, which is made using the jMonkey Engine 3.1. I then created the following behavior:

      - Follow the enemy
      - Search the enemy's last recorded position
      - Return back to the initial chase position
      - Stop the chase after a certain amount of time
      - Attacking the enemy resets the timer

      Here's the behavior tree logic. Beware that it's possible there are still bugs.

subtree name:"notIsReturningToInitialChasePosition"
  invert
    isReturningToInitialChasePosition

subtree name:"notIsMovingToFollowingEnemyLastPosition"
  invert
    isMovingToFollowingEnemyLastPosition

subtree name:"notHaveSpottedEnemies"
  invert
    haveSpottedEnemies

subtree name:"followEnemy"
  selector
    parallel
      lookOutForEnemies
      alwaysSucceed
        lookAtClosestSeenEnemy
      alwaysSucceed
        (isPlayingAnimation animation:"look_around") stopAnimation animation:"look_around"
      followClosestSeenEnemy
    parallel policy:"selector"
      alwaysFail
        wait seconds:10
      attackClosestSeenEnemy

root
  selector
    $followEnemy
    (isFollowingEnemyAlive) (knowFollowingEnemyLastPosition) ($notHaveSpottedEnemies) sequence
      resetLookAt
      moveToFollowingEnemyLastPosition
      parallel policy:"selector"
        alwaysFail
          playAnimation animation:"look_around"
        lookOutForEnemies
        alwaysFail
          resetLookAt
    returnToInitialChasePosition

      I'm new to AI programming and still have a lot of work to do to make better behavior trees. I think it's really easy to make spaghetti behaviors. Do you have any tutorials on how to create a "good" behavior tree?
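      For reference, here is a minimal sketch of how such a tree file could be loaded with gdx-ai's BehaviorTreeParser and bound to a blackboard object. The file name, the generic blackboard type and the choice of the Reader overload of parse are assumptions; the custom leaf tasks (followClosestSeenEnemy, lookOutForEnemies, ...) must also be resolvable by the parser, e.g. through import directives in the tree file:

import com.badlogic.gdx.ai.btree.BehaviorTree;
import com.badlogic.gdx.ai.btree.utils.BehaviorTreeParser;

import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;

public final class BehaviorTreeLoader {

    /** Parses a behavior tree text file and binds it to the object used as its blackboard. */
    public static <E> BehaviorTree<E> load(String path, E blackboard) throws IOException {
        BehaviorTreeParser<E> parser = new BehaviorTreeParser<>();

        try (Reader reader = new FileReader(path)) {
            return parser.parse(reader, blackboard);
        }
    }

    private BehaviorTreeLoader() {
    }
}

      Once loaded, the tree would be driven by calling tree.step() on every AI tick.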
  14. thecheeselover

    Voxel Colony Simulation / Survival

    Album for Voxel Colony Simulation / Survival
  15. Today, I finally finished the following features:

      - Add new solid voxels at the selected position
      - Destroy voxels at the selected position
      - Generate chunks on the fly
      - Remove chunks on the fly

      Here is a video demonstrating those features: