Showing results for tags 'Algorithm'.

Found 185 results

  1. I'm very interested in the snow physics in Red Dead Redemption 2. Here is a demo video from YouTube. Does anyone know the techniques behind this snow physics? For example, what is the shading model of the snow, and how do they create the snow deformation?
  2. Hello everyone, it's my first time posting on this forum. I'm very excited to meet you. Anyway, my first question is about procedurally generating a world. I'm looking at how to implement a procedurally generated world, and placing objects afterwards seems to be a big challenge. Let's say I have a big object that doesn't fit inside a chunk, or that spawns on the border between chunks. How would you go about generating the whole thing using the nearby chunks? With terrain it's pretty easy because the terrain is not hardcoded, so the algorithm is entirely free. But when I want to place hardcoded objects like castles or trees, I can't just cut the castle in half - or can I? What's a way to approach this problem? I'm using LWJGL/OpenGL in Java for a 2D game.
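A common answer to this kind of question is to make placement deterministic: derive every large object's origin purely from the world seed and a coarse region grid, so any chunk can independently recompute which objects overlap it and generate just its own slice of the castle. A minimal sketch of the idea (all names and the hash constants are illustrative, not from the post):

```
import java.util.Random;

// Sketch: deterministic placement of large objects. Each chunk checks the
// regions in its neighborhood for object origins; because placement derives
// only from the world seed and the region coordinates, every chunk computes
// the same answer and can build its own slice of an object that straddles
// a chunk border.
public class LargeObjectPlacer {
    private final long worldSeed;
    private final int regionSize; // in tiles; should exceed the largest object

    public LargeObjectPlacer(long worldSeed, int regionSize) {
        this.worldSeed = worldSeed;
        this.regionSize = regionSize;
    }

    /** Deterministic per-region RNG: the same (rx, rz) always gives the same stream. */
    private Random regionRandom(int rx, int rz) {
        // arbitrary odd constants; any decent mixing of the coordinates works
        long h = worldSeed ^ (rx * 341873128712L + rz * 132897987541L);
        return new Random(h);
    }

    /** World-space origin of the castle in this region, or null if the region has none. */
    public int[] castleOrigin(int rx, int rz) {
        Random rng = regionRandom(rx, rz);
        if (rng.nextInt(10) != 0) return null; // roughly 1 region in 10 gets a castle
        return new int[] {
                rx * regionSize + rng.nextInt(regionSize),
                rz * regionSize + rng.nextInt(regionSize)
        };
    }
}
```

When a chunk generates, it queries every region within the largest object's footprint and, if an origin falls close enough, stamps only the tiles of the object that intersect the chunk; the neighbouring chunks reach the same answer and produce the other half.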
  3. Hi guys, I've got a little problem which is irritating me, as there must be a simple solution, but Google isn't necessarily being my friend right now. In my new game, I've got a set of projectiles for which I need to check for collisions with typically rectangular shapes (but which aren't axis aligned). My general approach is to create an axis-aligned bounding box for both the projectile's movement and the target object. If there's a match on the broad phase, then I want to check whether the projectile collides with any of the edges of the collision shape and then report the first collision (i.e. if the projectile would have gone through multiple lines then I want to know the first collision so I can display the explosion animation in the right place). I represent my shapes as a series of line segments (there's a future question about colliding with semi-circles and circles, but that's later). Can anyone 'refresh my memory' on how to test for the intersection of two line segments? Thanks Steve
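The standard parametric test fits this exactly, because the parameter along the projectile's path also orders the hits. A sketch (names are mine): segment 1 is \( P + t\vec{r} \) for \( t \in [0,1] \), segment 2 is \( Q + u\vec{s} \) for \( u \in [0,1] \), and the 2D cross product solves for t and u.

```
// Segment-segment intersection via 2D cross products.
// t = (Q-P) x s / (r x s), u = (Q-P) x r / (r x s); a hit needs both in [0,1].
public final class SegmentIntersect {

    /** 2D cross product (the z component of the 3D cross). */
    private static double cross(double ax, double ay, double bx, double by) {
        return ax * by - ay * bx;
    }

    /**
     * Returns t in [0,1] along (p1->p2) where it crosses (q1->q2), or -1 if no hit.
     * Keep the smallest t over all edges to find the first collision.
     */
    public static double intersect(double p1x, double p1y, double p2x, double p2y,
                                   double q1x, double q1y, double q2x, double q2y) {
        double rx = p2x - p1x, ry = p2y - p1y;
        double sx = q2x - q1x, sy = q2y - q1y;
        double denom = cross(rx, ry, sx, sy);
        if (denom == 0) return -1; // parallel or collinear: no single crossing point
        double qpx = q1x - p1x, qpy = q1y - p1y;
        double t = cross(qpx, qpy, sx, sy) / denom;
        double u = cross(qpx, qpy, rx, ry) / denom;
        return (t >= 0 && t <= 1 && u >= 0 && u <= 1) ? t : -1;
    }
}
```

A t of 0.25, for instance, means the hit happened a quarter of the way along the projectile's movement for this frame, which is exactly what's needed to place the explosion.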
  4. Hello there, let's say I am looking into designing a game similar to Guitar Hero for mobile. If a player plays niche songs from obscure bands, what is the best way to register their high score on a global server? I'm interested in resource efficiency and response times. Thanks!
  5. I'm writing a little game for visually impaired people, but I'm having a hard time getting the mouse position. Let me explain: I need to know where in the table the mouse cursor is, without a click, and then I want to play a sound. That sound would be different for every position. Any thoughts? Thanks in advance! E.g., when the mouse is over the 1st box the audio "a1" would be played; when it's over the 2nd box, "a2", and so on. I tried this:

mouse_x, mouse_y = get_Position()
if mouse_x and mouse_y == map[x][y] then
    if map[x][y] == 0.1 then
        Audio:play()

But it runs in a loop and the sound keeps playing forever!
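The endless loop happens because the play call fires on every frame while the cursor sits in the same box. The usual fix is edge-triggering: remember the last cell the cursor was in and only play when the cell changes. A sketch of the idea (the cell size and the audio lookup are placeholders for whatever the game actually uses):

```
// Edge-triggered hover sound: fires once when the cursor enters a new cell,
// then stays silent until the cursor moves to a different cell.
public class HoverSound {
    private static final int CELL_WIDTH = 64, CELL_HEIGHT = 64; // assumed layout

    private int lastCol = -1, lastRow = -1;

    public void update(int mouseX, int mouseY) {
        int col = mouseX / CELL_WIDTH;   // which box the cursor is over
        int row = mouseY / CELL_HEIGHT;
        if (col != lastCol || row != lastRow) { // true only on cell change
            lastCol = col;
            lastRow = row;
            playSoundFor(col, row);             // e.g. "a1", "a2", ...
        }
    }

    private void playSoundFor(int col, int row) {
        // look up and play the clip mapped to this cell (API-specific)
    }
}
```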
  6. Hello! When I implemented SSR I encountered the problem of artifacts. Screenshots here. Code:

#version 330 core

uniform sampler2D normalMap; // in world space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform sampler2D positionMap; // in world space
uniform mat4 projection, view;
uniform vec3 cameraPosition;

in vec2 texCoord;

layout (location = 0) out vec4 fragColor;

void main()
{
    mat4 vp = projection * view;
    vec3 position = texture(positionMap, texCoord).xyz;
    vec3 normal = texture(normalMap, texCoord).xyz;
    vec4 coords;
    vec3 viewDir = normalize(position - cameraPosition);
    vec3 reflected = reflect(viewDir, normal);
    float L = 0.5;
    vec3 newPos;

    for (int i = 0; i < 10; i++)
    {
        newPos = position + reflected * L;
        coords = vp * vec4(newPos, 1.0);
        coords.xy = 0.5 + 0.5 * coords.xy / coords.w;
        newPos = texture(positionMap, coords.xy).xyz;
        L = length(position - newPos);
    }

    float fresnel = 0.0 + 2.8 * pow(1 + dot(viewDir, normal), 4);
    L = clamp(L * 0.1, 0, 1);
    float error = (1 - L);
    vec3 color = texture(colorMap, coords.xy).xyz;
    fragColor = mix(texture(colorMap, texCoord), vec4(color, 1.0), texture(reflectionStrengthMap, texCoord).r);
}

I will be grateful for help!
  7. Hello! During the implementation of SSLR, I ran into a problem: only objects that are far from the reflecting surface are reflected. For example, as seen in the screenshot, this is a lamp and angel wings. I give the code and screenshots below.

#version 330 core

uniform sampler2D normalMap; // in view space
uniform sampler2D depthMap; // in view space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform mat4 projection;
uniform mat4 inv_projection;

in vec2 texCoord;

layout (location = 0) out vec4 fragColor;

vec3 calcViewPosition(in vec2 texCoord)
{
    // Combine UV & depth into XY & Z (NDC)
    vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);

    // Convert from (0, 1) range to (-1, 1)
    vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);

    // Undo perspective transformation to bring into view space
    vec4 ViewPosition = inv_projection * ScreenSpacePosition;
    ViewPosition.y *= -1;

    // Perform perspective divide and return
    return ViewPosition.xyz / ViewPosition.w;
}

vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth)
{
    dir *= 0.25f;

    for (int i = 0; i < 20; i++)
    {
        hitCoord += dir;

        vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;

        float depth = calcViewPosition(projectedCoord.xy).z;
        dDepth = hitCoord.z - depth;

        if (dDepth < 0.0)
            return projectedCoord.xy;
    }

    return vec2(-1.0);
}

void main()
{
    vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 viewPos = calcViewPosition(texCoord);

    // Reflection vector
    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

    // Ray cast
    vec3 hitPos = viewPos;
    float dDepth;
    float minRayStep = 0.1f;
    vec2 coords = rayCast(reflected * minRayStep, hitPos, dDepth);

    if (coords != vec2(-1.0))
        fragColor = mix(texture(colorMap, texCoord), texture(colorMap, coords), texture(reflectionStrengthMap, texCoord).r);
    else
        fragColor = texture(colorMap, texCoord);
}

Screenshots: colorMap, normalMap, depthMap. I will be grateful for help!
  8. Hello all, I have made a simple shadow map shader with a small problem in my implementation. I know something is missing, because it draws shadow on polygons not facing the light when it should not. Since my knowledge of the available shader functions is limited, I cannot spot the problem myself. I put this under the DX11 HLSL tag, though GLSL and any hints or tips are appreciated and welcome ^_^y IMAGE attached. CODE:

// Shadow color applying
//------------------------------------------------------------------------------------------------------------------

//*>> Pixel position in light space
//
float4 m_LightingPos = mul(IN.WorldPos3D, __SM_LightViewProj);

//*>> Shadow texture coordinates
//
float2 m_ShadowTexCoord = 0.5 * m_LightingPos.xy / m_LightingPos.w + float2(0.5, 0.5);
m_ShadowTexCoord.y = 1.0f - m_ShadowTexCoord.y;

//*>> Shadow map depth
//
float m_ShadowDepth = tex2D(ShadowMapSampler, m_ShadowTexCoord).r;

//*>> Pixel depth
//
float m_PixelDepth = (m_LightingPos.z / m_LightingPos.w) - 0.001f;

//*>> Pixel depth in front of the shadow map depth, then apply shadow color
//
if (m_PixelDepth > m_ShadowDepth)
{
    m_ColorView *= float4(0.5, 0.5, 0.5, 0);
}

// Final color
//------------------------------------------------------------------------------------------------------------------
return m_ColorView;
  9. Hey, so I have got this asteroid-type game, and today I encountered a new issue while testing it. Two asteroids were close to each other and I shot a bullet at them. The asteroids were so close that a single bullet could collide with both of them. It did, and my game crashed right there. I figured out it happened because two asteroids and one bullet collided in the same frame. This is the code -

```
void Collision::DoCollisions(Game *game) const
{
    for (ColliderList::const_iterator colliderAIt = colliders_.begin(), end = colliders_.end();
         colliderAIt != end; ++colliderAIt)
    {
        ColliderList::const_iterator colliderBIt = colliderAIt;
        for (++colliderBIt; colliderBIt != end; ++colliderBIt)
        {
            Collider *colliderA = *colliderAIt;
            Collider *colliderB = *colliderBIt;
            if (CollisionTest(colliderA, colliderB))
            {
                game->DoCollision(colliderA->entity, colliderB->entity);
            }
        }
    }
}
```

```
void Game::DoCollision(GameEntity *a, GameEntity *b)
{
    Ship *player = static_cast<Ship *>(a == player_ ? a : (b == player_ ? b : 0));
    Bullet *bullet = static_cast<Bullet *>(IsBullet(a) ? a : (IsBullet(b) ? b : 0));
    Asteroid *asteroid = static_cast<Asteroid *>(IsAsteroid(a) ? a : (IsAsteroid(b) ? b : 0));
    Bullet *bulletMode = static_cast<Bullet *>(IsBulletMode(a) ? a : (IsBulletMode(b) ? b : 0));

    if (player && asteroid)
    {
        player->playerCollided = true;
        //AsteroidHit(asteroid);
        //DeletePlayer();
    }

    if (bullet && asteroid)
    {
        collidedBullets.push_back(bullet);
        collidedAsteroid.push_back(asteroid);
        //AsteroidHit(asteroid);
        //DeleteBullet();
    }

    if (bulletMode && asteroid)
    {
        collidedBulletMode.push_back(bulletMode);
        collidedAsteroid.push_back(asteroid);
    }
}
```

```
void Game::CollisionResponse()
{
    if (player_->playerCollided == true)
    {
        DeletePlayer();
    }
    else
    {
        if (!collidedAsteroid.empty())
        {
            for (AsteroidList::const_iterator collidedAsteroidIt = collidedAsteroid.begin(), end = collidedAsteroid.end();
                 collidedAsteroidIt != end; ++collidedAsteroidIt)
            {
                AsteroidHit(*collidedAsteroidIt);
            }
            collidedAsteroid.clear();
        }

        if (!collidedBullets.empty())
        {
            for (BulletList::const_iterator bulletIt = collidedBullets.begin(), end = collidedBullets.end();
                 bulletIt != end; ++bulletIt)
            {
                DeleteBullet(*bulletIt);
            }
            collidedBullets.clear();
        }

        if (!collidedBulletMode.empty())
        {
            for (BulletList::const_iterator bulletIt = collidedBulletMode.begin(), end = collidedBulletMode.end();
                 bulletIt != end; ++bulletIt)
            {
                DeleteBulletMode(*bulletIt);
            }
            collidedBulletMode.clear();
        }
    }
}
```

In Game::DoCollision(), whenever an asteroid and a bullet collide, the collided objects are collected in collidedAsteroid and collidedBullets respectively. When two asteroids collided with the same bullet, the two asteroids were collected safely in collidedAsteroid, but the single bullet was collected in collidedBullets twice, so when the deletion was happening, the second iteration couldn't find the respective bullet and the game crashed. How am I supposed to approach this problem now? Thanks
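One common fix is to collect doomed entities in a set instead of a list, so scheduling the same bullet twice is harmless. Sketched here in Java (the post is C++, where the equivalent would be a std::set or a contains-check before push_back):

```
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.function.Consumer;

// Deferred deletion with duplicate protection: an entity that collides with
// two things in the same frame is only scheduled for deletion once.
class DeferredDeletion<E> {
    private final Set<E> doomed = new LinkedHashSet<>(); // insertion order kept, duplicates ignored

    void markForDeletion(E entity) {
        doomed.add(entity); // adding the same bullet twice is a no-op
    }

    void flush(Consumer<E> deleter) {
        for (E e : doomed) deleter.accept(e);
        doomed.clear();
    }
}
```

Another option is to give each entity a dead flag, set it on the first hit, and have the collision pass skip entities already flagged dead for the rest of the frame.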
  10. An array of points (the green dots) connected into a curved line - it is nothing more than an array of points arranged in insertion order. As shown in the figure above, my goal is for the player to start from "S" and choose a point that is closest to the player and is naturally reachable (i.e., I can't walk through the wall). But because the shortest distance between the two points is "W", it suddenly moves through to the other side, which is definitely not what we want. What I actually want is "N". Any hints on this? I look forward to your reply, thank you. Of course, I have used the cross product to determine whether a chosen point is on the left side or the right side of my current location; points on the right side of my position are chosen as candidates, and then I pick the one with the minimum distance compared with the others.
  11. Gnollrunner

    Mountain Ranges

    For this entry we implemented the ubiquitous ridged multi-fractal function. It's not so interesting in and of itself, but it does highlight a few features that were included in our voxel engine. First, as we mentioned, being a voxel engine it supports full 3D geometry (caves, overhangs and so forth) and not just height-maps. However, if we look at a typical world, these features are the exception rather than the rule. It therefore makes sense to optimize the height-map portion of our terrain functions. This is especially true since our voxels are vertically aligned, which means that there will be many places where the same height calculation is repeated. Even if we look at a single voxel, nearly the same calculation is used for a lower corner and its corresponding upper corner, the only difference being the subtraction from the voxel vertex position.

    Enter the unit sphere! In our last entry we talked about explicit voxels, with edges and faces and vertexes. However, all edges and faces are not created equal. Horizontal faces (in our case the triangular faces) and horizontal edges contain a special pointer that references their corresponding parts in a unit sphere. The unit sphere can be thought of as residing in the center of each planet. Like our world octree, it is formed from a subdivided icosahedron, only it is not extruded and is organized into a quadtree instead of an octree, being more 2D in nature. Vertexes in our unit sphere can be used to cache height-map function values to avoid repeated calculations. We also use our unit sphere to help with the horizontal part of our voxel subdivision operation. By referencing the unit sphere we only have to multiply a unit sphere vertex by a height value to generate voxel vertex coordinates. Finally, our unit sphere is also used to provide coordinates during the ghost-walking process we talked about in our first entry. Without it, our ghost-walking would be more computationally expensive, as it would have to calculate spherical coordinates on each iteration instead of just calculating heights, which are quite simple to calculate as they are all generated by simply averaging two other heights.

    Ownership of unit sphere faces is a bit complex. Ostensibly they are owned by all voxel faces that reference them (and therefore add to their reference counter). However, this presents a bit of a problem, as they are also used in ghost-walking, which happens every LOD/re-chunking iteration, and in fact they may or may not end up being referenced by voxel faces, depending on whether mesh geometry is found. Even if no geometry is found we may want to keep them for the next ghost-walk search. To solve this problem, we implemented undead objects. Unit sphere faces can become undead, and can even be created that way if they are built by the ghost-walker. When they are undead they are kept in a special list which keeps them pseudo-alive. They also have an undead life value associated with them. When they are touched by the ghost-walker that value is renewed; however, if they go untouched for a few iterations, they become truly dead and are destroyed.

    Picture time again..... So here is our ridged multi-fractal in wireframe. We'll flip it around to show our level transitions. Here's a place that needs a bit of work: the chunk level transitions are correct, but they are probably a bit more complex than they need to be. We use a very general voxel tessellation algorithm, since we have to handle various combinations of vertical and horizontal transitions. We will probably optimize this later, especially for the common cases, but for now it serves its purpose. Next up we are going to try to add threads. We plan to use a separate thread (or threads) for the LOD/re-chunk operations, and another one for the graphics.
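    A toy illustration of the caching trick described above (my own sketch, not the author's code): evaluate the expensive height function once per unit-sphere vertex, then derive every vertically aligned voxel corner from the cached value by scaling.

```
import java.util.HashMap;
import java.util.Map;

// Memoize the height function per unit-sphere vertex; every voxel corner
// stacked above that vertex reuses the cached value.
class HeightCache {
    interface HeightFunction { double heightAt(double x, double y, double z); }

    private final Map<Long, Double> cache = new HashMap<>();
    private final HeightFunction ridgedMultifractal;

    HeightCache(HeightFunction f) { this.ridgedMultifractal = f; }

    /** unitVertexId identifies a vertex of the subdivided icosahedron. */
    double height(long unitVertexId, double ux, double uy, double uz) {
        return cache.computeIfAbsent(unitVertexId,
                id -> ridgedMultifractal.heightAt(ux, uy, uz)); // evaluated once per vertex
    }

    /** Voxel corner = unit-sphere vertex scaled by planet radius plus cached height. */
    double[] voxelCorner(long id, double ux, double uy, double uz, double radius) {
        double r = radius + height(id, ux, uy, uz);
        return new double[] { ux * r, uy * r, uz * r };
    }
}
```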
  12. Hi! How is XP calculated? For example, to reach level 4 the player should reach 200 XP, and to reach level 5 the player requires 450 XP. Is there a formula? Is there a tutorial available online? Any source where I can learn how XP is calculated? Can you give me a simple example from any existing game where the XP increases whether the player wins or loses? Thanks!
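There is no single standard formula: each game authors its own XP curve, often as a hand-tuned table. One common family, though, is a quadratic threshold curve, and as a worked example (mine, not from any particular game) the two numbers in the question fit one exactly. Assume \( XP(n) = an^2 + bn \) and substitute the two given points:

\( 16a + 4b = 200 \qquad 25a + 5b = 450 \)

Dividing by 4 and 5 respectively gives \( 4a + b = 50 \) and \( 5a + b = 90 \), so \( a = 40 \), \( b = -110 \), i.e. \( XP(n) = 40n^2 - 110n \). Check: \( XP(4) = 640 - 440 = 200 \) and \( XP(5) = 1000 - 550 = 450 \). Note that this particular fit goes negative below level 3, which is exactly why real games usually keep the whole curve in a lookup table or use an exponential form and tune it by playtesting. As for win/lose, games typically award a fixed amount of XP per match and simply scale it down to a smaller consolation value on a loss.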
  13. Context

I recently started reading the book Artificial Intelligence for Humans, Volume 1: Fundamental Algorithms by Jeff Heaton. Upon finishing the chapter Normalizing Data, I decided to practice what I had just learnt, to hopefully finally understand 100% of a special algorithm: equilateral encoding.

Equilateral Encoding

The purpose of this algorithm is to normalize N categories by creating an equilateral triangle of N - 1 dimensions. Each vertex in that triangle is a category, and so, because of the properties of the equilateral triangle, it's easy to construct this structure with medians of 1 unit. For selecting a category, all you need is a location vector of N - 1 dimensions that's ideally inside the equilateral triangle. For example, consider the vertex C and the point c. The vertex C is 100% of a specified category, whilst the point c is 0% of that category. The vertices A and B are also 0% of C's category. In other words, the further away a location vector is from a vertex along its median axis, the less it belongs to the vertex's category. Also, if the mentioned axis distance is greater than the length of a median (all medians have the same length in an equilateral triangle, which is the range of values in this context), then the location vector absolutely does not belong to the specified category. The reason we don't just select a category directly is that this algorithm is originally for machine learning. It quantifies qualitative data, shares the blame better and uses less memory than simpler algorithms, as it uses N - 1 dimensions.

The Application

First off, here's the link to my application: https://github.com/thecheeselover/image-recognition-by-equilateral-encoding

How It Works

To be able to know the similarities between two images, a comparison must be done. Using the equilateral encoding algorithm, a pixel-versus-pixel comparison is the only way to do so, and thus shapes do not count towards the similarities between the two images. What category should we use to compare images? Well, colors of course! Here are the colors I specified as categories: red, green, blue, white and black. The algorithm will select the closest category for each pixel according to the differences between both colors and save its location vector in an array. Then, when both images are loaded, compare each image's pixels against each other by computing the median distance, as mentioned before, between both of them:

  • Simply project the first pixel's vector onto the other one and compute the Euclidean distance to get the median distance.
  • Sum all computed median distances.

This way, it would also work if there were location vectors not totally on a vertex, even though that doesn't happen for the comparison algorithm just described.

Preliminaries

Of course, this can be really slow for immense images, so downscale the images to a maximum width and height, 32 pixels in my case. Also, remove the transparency, as it is not a color component in itself. If both images do not have the same aspect ratio, and so not the same downscaled dimensions, then consider using extremely different location vectors, and so just different categories, for "empty" pixels versus non-empty pixels.

The Code

The equilateral encoding table:

package org.benoitdubreuil.iree.model;

import org.benoitdubreuil.iree.utils.MathUtils;

/**
 * The image data for the image recognition by equilateral encoding.
 * <p>
 * See the author Jeff Heaton of this algorithm and his book that contains everything about it:
 * <a href="https://www.heatonresearch.com/book/aifh-vol1-fundamental.html">Artificial Intelligence for Humans, Vol 1: Fundamental Algorithms</a>
 * <p>
 * The base source code for the equilateral encoding algorithm:
 * <a href="https://github.com/jeffheaton/aifh/blob/master/vol1/java-examples/src/main/java/com/heatonresearch/aifh/normalize/Equilateral.java">Equilateral.java</a>
 */
public class EquilateralEncodingTable {

    // The algorithm with only one category does not make sense
    private static final int MIN_CATEGORIES_FOR_ENCODING = 2;

    private int m_categoryCount;
    private int m_dimensionCount;
    private double m_lowestValue;
    private double m_highestValue;
    private double m_valueRange;
    private double[][] m_categoriesCoordinates;

    public EquilateralEncodingTable(int categoryCount, double lowestValue, double highestValue) {
        if (categoryCount < MIN_CATEGORIES_FOR_ENCODING) {
            throw new ArrayIndexOutOfBoundsException("Not enough categories for equilateral encoding");
        }

        this.m_categoryCount = categoryCount;
        this.m_dimensionCount = computeDimensionCount(categoryCount);
        this.m_lowestValue = lowestValue;
        this.m_highestValue = highestValue;
        this.m_valueRange = highestValue - lowestValue;
        this.m_categoriesCoordinates = computeEquilateralEncodingTable();
    }

    /**
     * Encodes a supplied index and gets the equilaterally encoded coordinates at that index.
     *
     * @param equilateralCoordsIndex The index at which the equilaterally encoded coordinates are.
     *
     * @return The equilaterally encoded coordinates.
     */
    public double[] encode(int equilateralCoordsIndex) {
        return m_categoriesCoordinates[equilateralCoordsIndex];
    }

    /**
     * Decodes the supplied coordinates by finding its closest equilaterally encoded coordinates.
     *
     * @param coordinates The coordinates that need to be decoded. They do not need to be equilateral, as the goal is simply to find the closest
     *                    equilaterally encoded coordinates.
     *
     * @return The index at which the closest equilaterally encoded coordinates are.
     */
    public int decode(double[] coordinates) {
        double closestDistance = Double.POSITIVE_INFINITY;
        int closestEquilateralCoordsIndex = -1;

        for (int i = 0; i < m_categoriesCoordinates.length; ++i) {
            double dist = computeDistance(coordinates, i);

            if (dist < closestDistance) {
                closestDistance = dist;
                closestEquilateralCoordsIndex = i;
            }
        }

        return closestEquilateralCoordsIndex;
    }

    /**
     * Computes the Euclidean distance between the supplied coordinates and the equilaterally encoded coordinates at the supplied index.
     *
     * @param coordinates            Coordinates of the first n-dimensional vector.
     * @param equilateralCoordsIndex Index for the equilaterally encoded coordinates.
     *
     * @return The Euclidean distance between the two vectors.
     */
    public double computeDistance(double[] coordinates, int equilateralCoordsIndex) {
        return MathUtils.computeDistance(coordinates, m_categoriesCoordinates[equilateralCoordsIndex]);
    }

    /**
     * Computes the equilateral encoding table, which is used as a lookup table for the data.
     *
     * @return The equilateral encoding table.
     */
    private double[][] computeEquilateralEncodingTable() {
        double negativeReciprocalOfN;
        double scalingFactor;

        final double[][] matrix = new double[m_categoryCount][m_dimensionCount];

        matrix[0][0] = -1;
        matrix[1][0] = 1.0;

        if (m_categoryCount > 2) {
            for (int dimension = 2; dimension < m_categoryCount; ++dimension) {
                // scale the matrix so far
                scalingFactor = dimension;
                negativeReciprocalOfN = Math.sqrt(scalingFactor * scalingFactor - 1.0) / scalingFactor;

                for (int coordinate = 0; coordinate < dimension; ++coordinate) {
                    for (int oldDimension = 0; oldDimension < dimension - 1; ++oldDimension) {
                        matrix[coordinate][oldDimension] *= negativeReciprocalOfN;
                    }
                }

                scalingFactor = -1.0 / scalingFactor;

                for (int coordinate = 0; coordinate < dimension; ++coordinate) {
                    matrix[coordinate][dimension - 1] = scalingFactor;
                }

                for (int coordinate = 0; coordinate < dimension - 1; ++coordinate) {
                    matrix[dimension][coordinate] = 0.0;
                }

                matrix[dimension][dimension - 1] = 1.0;

                // Scale the matrix
                for (int row = 0; row < matrix.length; ++row) {
                    for (int col = 0; col < matrix[0].length; ++col) {
                        double min = -1;
                        double max = 1;
                        matrix[row][col] = ((matrix[row][col] - min) / (max - min)) * m_valueRange + m_lowestValue;
                    }
                }
            }
        }

        return matrix;
    }

    public static int computeDimensionCount(int categoryCount) {
        return categoryCount - 1;
    }

    public int getCategoryCount() {
        return m_categoryCount;
    }

    public int getDimensionCount() {
        return m_dimensionCount;
    }

    public double getLowestValue() {
        return m_lowestValue;
    }

    public double getHighestValue() {
        return m_highestValue;
    }

    public double getValueRange() {
        return m_valueRange;
    }

    public double[][] getCategoriesCoordinates() {
        return m_categoriesCoordinates;
    }
}

The mathematics utilities:

package org.benoitdubreuil.iree.utils;

public final class MathUtils {

    /**
     * Computes the Euclidean distance between the supplied vectors.
     *
     * @param lhsCoordinates Coordinates of the first n-dimensional vector.
     * @param rhsCoordinates Coordinates of the second n-dimensional vector.
     *
     * @return The Euclidean distance between the two vectors.
     */
    public static double computeDistance(double[] lhsCoordinates, double[] rhsCoordinates) {
        double result = 0;

        for (int i = 0; i < rhsCoordinates.length; ++i) {
            result += Math.pow(lhsCoordinates[i] - rhsCoordinates[i], 2);
        }

        return Math.sqrt(result);
    }

    /**
     * Normalizes the supplied vector.
     *
     * @param vector The vector to normalize.
     *
     * @return The same array, but normalized.
     */
    public static double[] normalizeVector(double[] vector) {
        double squaredLength = 0;

        for (int dimension = 0; dimension < vector.length; ++dimension) {
            squaredLength += vector[dimension] * vector[dimension];
        }

        if (squaredLength != 1.0 && squaredLength != 0.0) {
            double reciprocalLength = 1.0 / Math.sqrt(squaredLength);

            for (int dimension = 0; dimension < vector.length; ++dimension) {
                vector[dimension] *= reciprocalLength;
            }
        }

        return vector;
    }

    /**
     * Negates the vector.
     *
     * @param vector The vector to negate.
     *
     * @return The same array, but each coordinate negated.
     */
    public static double[] negateVector(double[] vector) {
        for (int dimension = 0; dimension < vector.length; ++dimension) {
            vector[dimension] = -vector[dimension];
        }

        return vector;
    }

    /**
     * Multiplies the vector by a scalar.
     *
     * @param vector The vector to multiply.
     * @param scalar The scalar by which to multiply the vector.
     *
     * @return The same array, but each coordinate multiplied by the scalar.
     */
    public static double[] multVector(double[] vector, double scalar) {
        for (int dimension = 0; dimension < vector.length; ++dimension) {
            vector[dimension] *= scalar;
        }

        return vector;
    }

    /**
     * Computes the dot product of the two supplied vectors.
     *
     * @param lhsVector The first vector.
     * @param rhsVector The second vector.
     *
     * @return The dot product of the two vectors.
     */
    public static double dotProductVector(double[] lhsVector, double[] rhsVector) {
        double dotResult = 0;

        for (int dimension = 0; dimension < lhsVector.length; ++dimension) {
            dotResult += lhsVector[dimension] * rhsVector[dimension];
        }

        return dotResult;
    }

    private MathUtils() {
    }
}

The image data that shows how to encode pixels:

package org.benoitdubreuil.iree.model;

import org.benoitdubreuil.iree.controller.ControllerIREE;
import org.benoitdubreuil.iree.gui.ImageGUIData;
import org.benoitdubreuil.iree.pattern.observer.IObserver;
import org.benoitdubreuil.iree.pattern.observer.Observable;

import java.awt.*;
import java.awt.image.BufferedImage;

public class ImageData extends Observable<ImageData> implements IObserver<ImageGUIData> {

    private double[][] m_encodedPixelData;

    /**
     * Encodes the pixel data of the supplied image data.
     *
     * @param newValue The new value of the observed.
     */
    @Override
    public void observableChanged(ImageGUIData newValue) {
        if (newValue.getDownScaled() != null) {
            encodePixelData(newValue);
            modelChanged(this);
        }
    }

    private void encodePixelData(ImageGUIData imageGUIData) {
        EquilateralEncodingTable encodingTable = ControllerIREE.getInstance().getEncodingTable();
        EquilateralEncodingCategory[] categories = EquilateralEncodingCategory.values();
        BufferedImage image = imageGUIData.getDownScaled();
        int width = image.getWidth();
        int height = image.getHeight();

        m_encodedPixelData = new double[width * height][];

        for (int x = 0; x < width; ++x) {
            for (int y = 0; y < height; ++y) {
                int originalPixel = image.getRGB(x, height - 1 - y);

                int r = (originalPixel >> 16) & 0xff;
                int g = (originalPixel >> 8) & 0xff;
                int b = originalPixel & 0xff;

                int minColorDistance = 255 * 3;
                int minColorDistanceCategory = 0;

                for (int category = 0; category < encodingTable.getCategoryCount(); ++category) {
                    Color categoryColor = categories[category].getColor();

                    int colorDistance = Math.abs(r - categoryColor.getRed())
                            + Math.abs(g - categoryColor.getGreen())
                            + Math.abs(b - categoryColor.getBlue());

                    if (colorDistance < minColorDistance) {
                        minColorDistance = colorDistance;
                        minColorDistanceCategory = category;
                    }
                }

                m_encodedPixelData[x * height + y] = encodingTable.encode(minColorDistanceCategory).clone();
            }
        }
    }

    public int getPixelCount() {
        return m_encodedPixelData.length;
    }

    double[][] getEncodedPixelData() {
        return m_encodedPixelData;
    }
}

The actual image data recognition algorithm:

package org.benoitdubreuil.iree.model;

import org.benoitdubreuil.iree.controller.ControllerIREE;
import org.benoitdubreuil.iree.utils.MathUtils;

public final class ImageDataRecognition {

    /**
     * Compares two images and returns how similar they are.
     *
     * @param lhs The first image to compare.
     * @param rhs The second image to compare.
     *
     * @return Inclusively from 0, totally different, to 1, the same.
     */
    public static double compareImages(ImageData lhs, ImageData rhs) {
        double meanDistance = 0;
        double[][] lhsEncodedPixelData = lhs.getEncodedPixelData();
        double[][] rhsEncodedPixelData = rhs.getEncodedPixelData();

        if (lhsEncodedPixelData != null && rhsEncodedPixelData != null) {
            EquilateralEncodingTable table = ControllerIREE.getInstance().getEncodingTable();
            int maxPixelCount = Math.max(lhs.getPixelCount(), rhs.getPixelCount());
            double emptyPixelMeanValue = 1.0;

            for (int pixel = 0; pixel < maxPixelCount; ++pixel) {
                double[] lhsVector;
                double[] rhsVector;

                if (pixel < lhs.getPixelCount()) {
                    lhsVector = lhsEncodedPixelData[pixel];
                }
                else {
                    meanDistance += emptyPixelMeanValue;
                    continue;
                }

                if (pixel < rhs.getPixelCount()) {
                    rhsVector = rhsEncodedPixelData[pixel];
                }
                else {
                    meanDistance += emptyPixelMeanValue;
                    continue;
                }

                double rhsDotLhs = MathUtils.dotProductVector(rhsVector, lhsVector);
                double[] rhsProjectedOntoLhs = MathUtils.multVector(lhsVector.clone(), rhsDotLhs);

                meanDistance += MathUtils.computeDistance(lhsVector, rhsProjectedOntoLhs) / table.getValueRange() / maxPixelCount;
            }
        }

        return meanDistance;
    }

    private ImageDataRecognition() {
    }
}
  14. I'm trying to figure out how to design the vegetation/detail system for my procedural chunked-LOD based planet renderer. While I've found a lot of papers talking about how to homogeneously scatter things over a planar or spherical surface, I couldn't find much info about how the locations of objects are actually encoded. There seems to be a common approach that involves using some sort of texture mapping where different layers of vegetation/rocks/trees etc. define what and where things are placed, but I can't figure out how this is actually rendered. I guess that for billboards, these textures could be sampled from the vertex shader, and a geometry shader could then be used to draw the texture onto a generated quad? What about near trees or rocks that need a 3D model instead? Is this handled on the CPU? Is there a specific solution that works better with a chunked-LOD approach? Thanks in advance!
  15. I want to be able to display an amount of gold at the top of the screen that changes based on how much the player buys. I can just remove and reload the Pane object in my screen controller class, but it seems like there must be a way for me to update only one part of a Pane without needing to reload the entire thing. How can I modify a variable and change its displayed value in a window without reloading the entire Pane object?

Main class:

package application;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.layout.Pane;
import javafx.scene.text.Font;
import javafx.scene.text.Text;
import javafx.stage.Stage;

public class Main extends Application {

    // Fonts
    public static final String IMPER_FONT = "./res/fonts/imperator.ttf";

    // Global Images
    // used for screen titles
    public static final ImageView TITLE_IMG = makeImage("./res/images/title.png");
    public static final ImageView HIDEOUT_IMG = makeImage("./res/images/hideout.png");
    public static final ImageView TREASURY_IMG = makeImage("./res/images/treasury.png");

    // Global Game Stats
    public static int Gold = 1250;
    public static LinkList<Object> Equipment = new LinkList<>();

    // Panes
    private static Pane mainMenu = MainMenu.getPane();
    private static Pane hideout = Hideout.getPane();
    private static Pane treasury = Treasury.getPane();

    // Screen Controller
    private static Scene scene = new Scene(mainMenu, 1600, 900);
    private static ScreenController control = new ScreenController(scene);

    /**
     * Creates an ImageView object
     *
     * @param path A String that represents the filepath to the image
     * @return ImageView An ImageView object with the appropriate image
     */
    private static ImageView makeImage(String path) {
        FileInputStream input;
        try {
            input = new FileInputStream(path);
            ImageView title = new ImageView(new Image(input));
            return title;
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        return null;
    }

    /**
     * Creates a Text object with specified font file-path, message, and size.
     *
     * @param path the file path of the font file
     * @param message the text of the Text object
     * @param size the size of the font
     * @return Text The appropriate Text object.
     */
    public static Text makeText(String path, String message, double size) {
        FileInputStream input;
        try {
            input = new FileInputStream(new File(path));
            Font font = Font.loadFont(input, size);
            Text msg = new Text(message);
            msg.setFont(font);
            return msg;
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        return null;
    }

    @Override
    public void start(Stage stage) {
        // Add panes to ScreenController object
        control.addScreen("main menu", mainMenu);
        control.addScreen("hideout", hideout);
        control.addScreen("treasury", treasury);

        // create the scene and display it
        stage.setScene(scene);
        stage.show();
    }

    /**
     * Allows other classes to use the same ScreenController object.
     *
     * @return ScreenController The object used to switch screens
     */
    public static ScreenController getControl() {
        return control;
    }

    public static void main(String[] args) {
        launch(args);
    }
}

ScreenController class:

package application;

import java.util.HashMap;

import javafx.scene.Scene;
import javafx.scene.layout.Pane;

public class ScreenController {

    private HashMap<String, Pane> screenMap = new HashMap<>();
    private Scene main;

    public ScreenController(Scene main) {
        this.main = main;
    }

    protected void addScreen(String name, Pane pane) {
        screenMap.put(name, pane);
    }

    protected void removeScreen(String name) {
        screenMap.remove(name);
    }

    protected void activate(String name) {
        main.setRoot(screenMap.get(name));
    }
}
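One idiomatic JavaFX way to do this, sketched here under the assumption that the gold counter is a Text node somewhere in the treasury Pane: replace the static int with an IntegerProperty and bind the node's text to it. The node then re-renders itself whenever the value changes, with no Pane reloading at all.

```
import javafx.beans.property.IntegerProperty;
import javafx.beans.property.SimpleIntegerProperty;
import javafx.scene.text.Text;

// Replace "public static int Gold" with an observable property and bind the
// on-screen Text node to it; the binding keeps the label in sync.
public class Wallet {
    public static final IntegerProperty gold = new SimpleIntegerProperty(1250);

    /** Call once while building the treasury screen. */
    public static Text makeGoldCounter() {
        Text counter = new Text();
        counter.textProperty().bind(gold.asString("Gold: %d")); // auto-updates on change
        return counter;
    }
}

// Elsewhere, when the player buys something:
//     Wallet.gold.set(Wallet.gold.get() - price);  // the bound Text re-renders itself
```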
  16. Sup dudes and dudettes! I'm in the process of implementing an animation state machine and am currently making a 2D blendspace state for it. I think I've figured out how to blend the different clips together given an [x, y] coordinate, but I have one problem I'm not sure how to solve: matching the blended clips' animation speeds. Say you have your run-of-the-mill twin-stick character locomotion blendspace, where max Y, zero X means running straight forward, and max Y, max X (in either direction) means running at an angle (thus blending run_forward with run_strafe). In this case the animations' speeds probably match, so there's no worry. However, say I'm halfway up Y, meaning I'm "jogging", in the sense that I'm halfway between walk_forward and run_forward, and my X is at some arbitrary point. How would I blend these animations together so that their speeds match? Would it be as simple as 'lerping' the animation speed of the walk towards the speed of the run and scaling the speeds of all the clips to match this speed? Sorry if the question is poorly written.
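Essentially yes, and the standard way to express it is in normalized time: blend the clip durations with the same weights used for the poses, then advance one shared phase. A sketch (engine-agnostic, names are mine; it assumes the clips are phase-aligned so that, e.g., left-foot-down happens at the same normalized time in each clip):

```
// Blend clip durations by the pose weights, then drive every clip from one
// shared normalized phase so their strides stay in sync.
class BlendSpacePlayback {
    double phase; // normalized time in [0,1), shared by every blended clip

    /** durations[i] is clip i's length in seconds; weights[i] its blend weight (sums to 1). */
    void advance(double dt, double[] durations, double[] weights) {
        double blendedDuration = 0;
        for (int i = 0; i < durations.length; i++)
            blendedDuration += weights[i] * durations[i]; // weighted-average cycle length
        if (blendedDuration <= 0) return;                 // degenerate weights: do nothing
        phase = (phase + dt / blendedDuration) % 1.0;
        // Sample each clip at time = phase * durations[i]; the walk is sped up
        // and the run slowed down by exactly the blended amount.
    }
}
```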
  17. I am stuck on a bit of a body motion problem, and I was wondering if anyone would be able to offer a fresh perspective on it. I have a body in motion, governed by a realistic physics engine. It is a rocket during ascent (launch) into orbit. The rocket has main engine thrusters at its end, 4 of them (just like the Apollo Saturn V), and I am able to gimbal (pivot) these engines to change the thrust geometry. One of these modes is a "roll" configuration, where 2 engines pivot slightly forward and 2 engines slightly backward, thus producing torque and an axial (longitudinal) roll of the rocket body. Everything works well, as expected. The problem I have is that I need an algorithm that will let me produce an exact number of degrees of roll. So for example, I'd like to roll the rocket by 60 degrees. At first, this seemed trivial:

  • Initiate the roll by gimbaling the engines
  • The rocket starts rolling
  • At the halfway point, 30 degrees, reverse the gimbal positions so that the roll is counteracted
  • The rocket slows its roll
  • The rocket stops at exactly 60 degrees (since an equal but opposite torque was applied from exactly the halfway point)

Under ideal conditions this would work. I tried the algorithm above, and the rocket stops the roll as expected, but way off the 60 degree mark, more like around 47 degrees. Then I realized that I had several other factors to contend with:

  • The rocket is losing mass - fuel is being expended as the launch progresses. This means that the moment of inertia is changing with time.
  • There is small but not negligible damping created by the atmosphere, and that is changing as well, as the atmosphere gets thinner.

So the results I got were correct: the rocket lost a bit of mass while it was turning, so less torque was required to stop the roll. Since I applied the same amount of torque, the rocket stopped rolling earlier than the 60 degree mark. I am not sure how to tackle this. Somehow, I need to calculate that "point" in the roll, in degrees, at which to switch the thrust geometry, so as to wind up exactly at my desired roll amount. In the 0-60 degree roll example above, that mark would be somewhere higher than 30 degrees. Any thoughts on this?
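One way to sidestep the time-varying inertia entirely is to stop predicting the switch point in advance and instead re-evaluate it every physics tick: with the current roll rate \( \omega \) and the deceleration \( \alpha = \tau / I(t) \) currently achievable, the rocket needs \( \theta_{stop} = \omega^2 / (2\alpha) \) to brake to zero; the moment \( \theta_{stop} \) catches up with the remaining angle, reverse the gimbal. A sketch of that bang-bang loop (my own, with all inputs assumed to be read from the simulation each tick):

```
// Closed-loop roll maneuver: re-check every tick whether the roll can still
// be stopped within the remaining angle, using the *current* inertia.
class RollAutopilot {
    enum Phase { ACCELERATE, BRAKE, DONE }
    Phase phase = Phase.ACCELERATE;

    /**
     * @param remaining angle still to roll (target - current), rad
     * @param omega     current roll rate, rad/s
     * @param torque    torque the gimbaled engines can apply, N*m
     * @param inertia   current roll moment of inertia (falls as fuel burns), kg*m^2
     * @return gimbal command in [-1, 1]
     */
    double update(double remaining, double omega, double torque, double inertia) {
        double alpha = torque / inertia;                  // deceleration achievable right now
        double stopAngle = omega * omega / (2.0 * alpha); // angle needed to brake to zero
        switch (phase) {
            case ACCELERATE:
                if (stopAngle >= remaining) phase = Phase.BRAKE; // reverse the gimbal now
                return 1.0;
            case BRAKE:
                if (omega <= 0.0 || remaining <= 0.0) { phase = Phase.DONE; return 0.0; }
                return -1.0;
            default:
                return 0.0;
        }
    }
}
```

Because \( \alpha \) is recomputed from the live inertia, the mass loss is absorbed automatically, and the atmospheric damping only shrinks the true stopping distance, so the scheme errs on the safe side; a small proportional controller at the end can trim out the last degree or two of residual error.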
  18. So I am trying to prototype a 2D platformer-like game in Unity and trying to figure out the best method that I should be investigating for controlling the player character. I see generally 2 different ways to handle this: either using Rigidbody2D or using raycasts. The list of things that I am looking to do right now is:

  • some sort of gravity-like effect
  • moving side to side
  • jumping
  • double jumping
  • being able to walk up slopes of a certain angle (but preventing it after a certain angle, and having the user slide down if they are on that angle or larger)
  • being able to hang off an edge and pull up

I'm sure some things might come up later in development if I get that far, but these are what I want to get working at least at a very basic level before I move on from the character controller for the prototype (if you watch this video for about 15 - 20 seconds, you will see generally most of the functionality I am looking to achieve: https://www.youtube.com/watch?v=6rzTnx6IHqM&start=375). Now, I started with the Rigidbody2D since I do want to have some level of gravity in the game (not just for the player but for items, enemies, etc.), and I can get the first 4 working pretty easily, but the 5th is more troublesome (and I have not gotten to the 6th). I can move up angles (I am setting Rigidbody2D.velocity for movement, as it seems to be the most recommended), but for example, if I am walking up an angle and then stop, my character jumps up a little (I'm guessing because of the forward velocity it has: when I stop applying horizontal velocity for moving side to side, there is still extra vertical velocity). So I have 2 questions:

  • Should I be looking at Rigidbody2D or manual raycasts?
  • Whichever way you recommend going, do you have any resources that you would recommend looking at for reference (video tutorials, articles, etc.)?
  19. So I have a program that involves rendering a few models and some tessellated terrain, and allowing the user to navigate through it using WASD (R and F for vertical) keys and a mouselook camera. I also have it rendering two triangles at the front of the screen that I am attempting to use as the basis for some post-processing raymarching stuff. I also have a depth framebuffer texture that I render from the player's camera perspective to get a sense of how far each pixel's ray must be cast. For reference, here's the part of my fragment shader that deals with the raymarching. First, some functions to help set things up:

float LinearizeDepth(in vec2 uv)
{
    float zNear = 0.1;  // TODO: Replace by the zNear of your perspective projection
    float zFar = 250.0; // TODO: Replace by the zFar of your perspective projection
    float depth = texture2D(texture0, uv).x;
    float z_n = 2.0 * depth - 1.0;
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
    return z_e;
}

float mapSphere(vec3 eyePos)
{
    return length(eyePos) - 1.0;
}

float doMap(vec3 eyePos)
{
    float total = 999999;
    for (int x = 0; x < 5; x++)
    {
        for (int y = 0; y < 5; y++)
        {
            eyePos += vec3(x * 10, y * 10, 0);
            total = min(total, mapSphere(eyePos));
            eyePos -= vec3(x * 10, y * 10, 0);
        }
    }
    return total;
}

Below is the part of main that deals with the two triangles applied to the screen:

if (skyDrawingSky == 1)
{
    // vert is the 2D coords from -1 to 1 of the two triangles rendered to the screen for the raymarching shader
    vec2 uv = (vert.xy + 1.0) * 0.5;

    // get the depth of a particular pixel's view from a depth buffer rendered to texture0
    float depth = LinearizeDepth(uv);

    // I thought adding a 3rd dimension and then normalizing would be equivalent to a perspective matrix,
    // at which point I just multiply those 'perspective coordinates' by the inverse of my actual camera
    // view matrix that I use for the polygonal graphics
    vec4 r = normalize(vec4(-vert.x, -vert.y, 1.0, 0.0));
    r = r * camAngleMain;
    vec3 rayDirection = r.xyz;
    vec3 rayOrigin = -camPositionMain;

    float distance = 0;
    float total = 0;
    for (int i = 0; i < 64; i++)
    {
        vec3 pos = rayOrigin + rayDirection * distance;
        float value = doMap(pos);
        distance += value;
        if (distance >= depth)
            break;
        else
            total += clamp(1 - value, 0, 1);
    }
    color = vec4(1, 1, 1, total);
    return;
}

Edit: changing the raymarching camera GLSL code to

vec4 r = normalize(vec4(vert.x, vert.y, 1.0, 0.0));
r = r * camProjectionMain;
r = r * camAngleMain;
vec3 rayDirection = r.xyz;
vec3 rayOrigin = camPositionMain;

doesn't really change anything, but it's easier to read. My main problem is that for some reason vertically panning the camera results in vertical distortion of objects as they approach the top and bottom of the view/screen. If you look at the attached image, the white orbs are part of the raymarching shader; everything else is tessellated/polygonal graphics. In the first two images, the orbs can clearly be seen to be above or below the sand-colored heightmap. In the last image, the position of the orbs is a bit off, such that they now clip through the terrain. I can't figure out why this is. My camera seems to *almost* work, but not quite. My aspect ratio at the moment is a perfect square, and I multiply my raymarching screen coordinates (2D from -1 to 1) by the inverse of my camera view matrix, so I don't really know why this would happen. Is there anything I'm doing wrong with my implementation here?
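A possible explanation, offered as a hypothesis rather than a verified fix: normalizing (x, y, 1) builds rays for a camera whose image plane sits at distance 1, i.e. a fixed 90-degree field of view, regardless of the projection used to rasterize the terrain. A ray that matches the rasterizer for vertical FOV \( \theta \) and aspect ratio \( a \) is

\( \vec{d}_{view} = \mathrm{normalize}\big(\, x \, a \tan(\theta/2), \; y \tan(\theta/2), \; -1 \,\big) \)

or, equivalently, the NDC point unprojected through the inverse projection matrix:

\( \vec{d}_{view} = \mathrm{normalize}\big( (P^{-1} \cdot (x, y, -1, 1))_{xyz} \big), \qquad \vec{d}_{world} = (V^{-1} \cdot (\vec{d}_{view}, 0))_{xyz} \)

If the FOV baked into the raymarch rays differs from the one in the projection matrix, the marched objects will agree with the polygons near the screen center and drift increasingly toward the top and bottom edges, which matches the described symptom.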
  20. Gnollrunner

    Bumpy World

    After a LOT of bug fixes, and some algorithm changes, our octree marching prisms algorithm is now in a much better state. We added a completely new way of determining chunk level transitions, but before discussing it we will first talk a bit more about our voxel octree.

    Our octree is very explicit. By that we mean it is built up of actual geometric components. First we have voxel vertexes (not to be confused with mesh vertexes) for the corners of voxels. Then we have voxel edges that connect them. Then we have voxel faces, which are made up of a set of voxel edges (either three or four), and finally we have the voxels themselves, which reference our faces. Currently we support prismatic voxels, since they make the best looking world; however, the lower level constructs are designed to also support the more common cubic voxels. In addition to our octree of voxels, voxel faces are kept in quadtrees, while voxel edges are organized into binary trees. Everything is pretty much interconnected, and there is a reference counting system that handles deallocation of unused objects.

    So why go through all this trouble? The answer is that by doing things this way we can avoid traversing the octree when building meshes using our marching prisms algorithms. For instance, if there is a mesh edge on a voxel face, since that face is referenced by the voxels on either side of it, we can easily connect together mesh triangles generated in both voxels. The same goes for voxel edges. A mesh vertex on a voxel edge is shared by all voxels that reference it. So in short, seamless meshes are built in place with little effort. This is important, since meshes will be constantly recalculated for LOD as a player moves around.

    This brings us to chunking. As we talked about in our first entry, a chunk is nothing more than a sub-section of the octree. Most importantly, we need to know where there are up and down chunk transitions. Here our face quadtrees and edge binary trees help us out. From the top of any chunk we can quickly traverse the quadtrees and binary trees and tag faces and edges as transition objects. The algorithm is quite simple, since we know there will only be one level difference between chunks, and therefore if there is a level difference, one level will be odd and the other even. So we can tag our edges and faces with up to two chunk levels in a 2-element array indexed by the last bit of the chunk level. After going down the borders of each chunk, border objects will have one of two states: they will be tagged with a single level, or with two levels, one being one higher than the other. From this we can now generate transition voxels with no more need to look at a voxel's neighboring voxels.

    One more note about our explicit voxels: since they are in fact explicit, there is no requirement that they form a regular grid. As we said before, our world grid is basically wrapped around a sphere, which gives us fairly uniform terrain no matter where you are on the globe. Hopefully in the future we can also use this versatility to build trees.

    OK, so it's picture time......... We added some 3D simplex noise to get something that isn't a simple sphere. Hopefully in our next entry we will try a multi-fractal.
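    The parity trick in that chunking paragraph is compact enough to show directly; a minimal illustration (my own sketch, not the blog's code):

```
// Because adjacent chunks differ by at most one LOD level, the two levels
// touching a border face or edge always have different parities, so a
// 2-element array indexed by (level & 1) can hold both tags without clashes.
class BorderTag {
    final int[] levels = { -1, -1 }; // -1 means "not tagged at this parity"

    void tag(int chunkLevel) {
        levels[chunkLevel & 1] = chunkLevel;
    }

    /** True when two different chunk levels touch this face or edge. */
    boolean isTransition() {
        return levels[0] >= 0 && levels[1] >= 0;
    }
}
```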
  21. Hello, I am currently drawing an FFT ocean into a texture, including Dx, Dy and Dz. As I need the real height at a point for another algorithm, I am passing the points through a vertex shader as follows:

#version 330 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoords;

// Displacement and normal map
uniform sampler2D displacementMap;
uniform mat4 MVPMatrix;
uniform int N;
uniform vec2 gridLowerLeftCorner;

out float displacedHeight;

void main()
{
    // Displace the original position by the amount in the texture
    vec3 displacedVertex = texture(displacementMap, texCoords).xyz + position;

    // Scale vertex to the 0 -> 1 range
    vec2 waterCellIndices = vec2((displacedVertex.x - gridLowerLeftCorner.x) / N,
                                 (displacedVertex.z - gridLowerLeftCorner.y) / N);

    // scale it to -1 -> 1
    waterCellIndices = (waterCellIndices * 2.0) - 1.0;

    displacedHeight = displacedVertex.y;
    gl_Position = vec4(waterCellIndices, 0, 1);
}

This works correctly (it writes the correct height at a given point). The issue is that some points, due to the Dx and Dz displacement, will get outside the clip space. These points should instead wrap around, as the ocean is a collection of tiles. As you can see in the attached file, the edges would fit together perfectly inside the white square if they wrapped around (this is the clip space dimensions from RenderDoc). Is there any way I could wrap this texture around (in reality, wrap the clip space positions around) so it all stays inside the viewport correctly? I tried to wrap around in the vertex shader by checking the boundaries and wrapping, but it doesn't work when a triangle has at least one vertex inside the viewport and others outside. Many thanks, André
  22. Hello everyone, my name is Valerio and I'm an expert in business development and product innovation. At the moment I'm working for a big gambling company, and I'm also working on a side project to launch a startup aimed at the whole gaming world. I have an idea and I would like the help of this community to validate it. Thanks to machine learning it is possible to predict user behavior. For example, from the tests I have done, I can assure you that it is possible to predict, with an accuracy between 80 and 90 percent (depending on the quality of the data), which users will use a certain app next week. My idea for the service is to create a Software as a Service, with a monthly fee, which allows developers and business owners of a game to load tables of data into the software and receive the desired predictive output. For example, thanks to this software you might know which users will play next week and which ones will not, or analyze more specific metrics, like who will make a purchase and who will not, who will use a particular feature and who will not, and so on. With this information, the team that manages the app can set up marketing activities to increase engagement, such as sending push notifications only to those who you know will not play next week, or special offers to those who will not make a purchase. Here are the questions I need you to answer:

  • Do you find this service useful?
  • If so, how much would you pay per month for such a service?

Thank you all for your participation, Valerio.
  23. Hello! I'm currently developing a top-down RPG styled game with an Entity Component System architecture, and as the game grows in features, so do my game entities, that is, item models, enemy prototypes, etc. Those definitions are currently in JSON files, but at the end of the day I still have long factory classes that read from those files and populate the entities with their respective components and properties in the most naive way. Reading through a presentation about Techniques and Strategies for Data-driven design in Game Development (slides 80-93) (warning: big pdf file), there is this "prototyping approach" where you can build up each game entity from multiple prototypes. I find this really interesting; however, the presentation doesn't mention any implementation details, and I'm totally in the dark. I don't know how powerful this system should be. By the way, I'm using Java and LibGDX's engine. My first idea is making a robust prototype-instancing factory where, given a JSON file, it will be able to return an entity populated with its corresponding components. For example:

Enemies.json

{
    "skeleton" : {
        "id" : 0,
        "components" : {
            "HealthComponent" : {
                "totalHealth" : 100
            },
            "TextureComponent" : {
                "pathToTexture" : "assets/skeleton.png"
            }
        }
    }
}

If I wanted to instantiate a skeleton entity, I would read its prototype, iterate over its components, and somehow instantiate them correctly. With this approach I have the following issues:

  • It will most likely involve using Java reflection to instance entities from a JSON file. This is a topic that I know little about, and it will probably end up in dark magic code.
  • Some instance properties can't be prototyped and will have to be passed as parameters to the factory. For example, when creating an enemy entity, an (x, y) position will have to be provided. Suddenly creating instances is not so straightforward.
  • How powerful should this system be? Should it have a "recursive" behavior where you can extend a prototype with another and so on? This sounds a little bit like dependency injection. Am I reinventing the wheel? Is there anything already done that I can make use of?

Even though it's still in its infancy, here is a short demo (under one minute) of my game. Thank you!
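One reflection-free way to approach this, sketched under the assumption of plain map-shaped JSON data (Component and the two record types are stand-ins for the poster's own LibGDX components): register one small factory lambda per component name. Adding a component type then means adding one registration line, and typos in the JSON fail loudly at load time instead of producing dark magic.

```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// A registry of per-component factories keyed by the names used in the JSON.
class PrototypeFactory {
    interface Component {}
    record HealthComponent(int totalHealth) implements Component {}
    record TextureComponent(String pathToTexture) implements Component {}

    private final Map<String, Function<Map<String, Object>, Component>> registry = new HashMap<>();

    PrototypeFactory() {
        // Registration replaces reflection: one explicit line per component type.
        registry.put("HealthComponent",
                p -> new HealthComponent(((Number) p.get("totalHealth")).intValue()));
        registry.put("TextureComponent",
                p -> new TextureComponent((String) p.get("pathToTexture")));
    }

    /** prototype maps component names to their property blocks, as in Enemies.json. */
    List<Component> instantiate(Map<String, Map<String, Object>> prototype) {
        List<Component> components = new ArrayList<>();
        prototype.forEach((name, props) -> {
            Function<Map<String, Object>, Component> factory = registry.get(name);
            if (factory == null) throw new IllegalArgumentException("Unknown component: " + name);
            components.add(factory.apply(props));
        });
        return components;
    }
}
```

Per-instance data like position then stays out of the prototype entirely: apply the prototype first, then set a position component from the spawn parameters. Prototype inheritance ("extends") can be a plain map merge performed before instantiate() is called.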
  24. While making a roguelike game, procedural generation has to be quick, yet intriguing enough for the generated level to be fun to just pick up and play. There are many ways to generate and lay out procedurally generated rooms. In The Binding Of Isaac, for example, you have many different types of regular room presets. The generator just picks a preset based on things like room placement and size. Because those rooms are always of fixed size, this is a nice compromise. By having handmade presets, the generated level is somewhat believable (i.e. there are no gaps or obstacles below a room door or secret room and whatnot). Another example would be Nuclear Throne. The game takes a different approach to procedural generation by keeping it relatively simple. Because it's not room-based like The Binding Of Isaac, there are more things like caves and large open areas. The gameplay also plays into this, as the player needs to eliminate every enemy to spawn a portal to the next level. Because my game is somewhat more inspired by The Binding of Isaac, the right way to procedurally generate rooms would be to use presets, and this is how I make special rooms. However, there's a big difference between The Binding Of Isaac and my game: my regular rooms aren't always the same size. This means that rather than having presets of regular rooms as well as special rooms, I need something more flexible and, most importantly, dynamic.

The anatomy of a Room

In my game, as I've said in a previous post, levels are big two-dimensional arrays from which the level geometry is generated. Every room of a level is made using a BSP tree. I won't go into much detail on how rooms are generated, but in essence, we create a grid from which we trace a path between two rooms and sparsely attach bonus rooms along the way. Because that level generation already gives me room sizes and the like, I could reuse the same idea for room layouts. Within rooms, I've also set special anchor points from which props (or more precisely, prop formations; more on that later) can be generated.

Basic Layouts

The idea here is to have room layout presets. Each preset contains an array of prop formations, and each of these formations is linked to a specific anchor point. A formation has a two-dimensional boolean array that indicates whether or not there should be a prop in a given cell. Let's take, for example, a diamond array. The dimension of the array depends on its room's dimensions. Here's how it's done:

\( size = \left\lceil \frac{2\max(RoomSize_{x}, RoomSize_{y})}{3} \right\rceil \)

For example, a 12 by 9 room gives \( size = \lceil 24/3 \rceil = 8 \). In order to change the array's content, we actually use common image manipulation algorithms...

Bresenham's Line Algorithm

The first algorithm used is Bresenham's line algorithm. Its purpose is simply to render a line, described by two bitmap points, onto a raster image. To put it simply, we get the deviation (delta, or "d" for short) in both X and Y for the described line and compare the two. Depending on which is bigger, we increment the point on that axis and colour it in. Here's the implementation:

public void TraceLine(Vector2Int p0, Vector2Int p1)
{
    int dx = Mathf.Abs(p1.x - p0.x), sx = p0.x < p1.x ? 1 : -1;
    int dy = Mathf.Abs(p1.y - p0.y), sy = p0.y < p1.y ? 1 : -1;
    int err = (dx > dy ? dx : -dy) / 2, e2;

    while (true)
    {
        // Mark the current cell, stopping once we reach the end point
        m_propArray[p0.x][p0.y] = true;
        if (p0.x == p1.x && p0.y == p1.y) { break; }

        // Step along whichever axis has accumulated the most error
        e2 = err;
        if (e2 > -dx) { err -= dy; p0.x += sx; }
        if (e2 < dy) { err += dx; p0.y += sy; }
    }
}

Midpoint Circle Algorithm

The midpoint circle algorithm is an algorithm used to render a circle onto an image. The idea is somewhat similar to Bresenham's line algorithm, but rather than drawing a line, we draw a circle. To do this we also need, for simplicity's sake, to divide the circle into 8 pieces, called octants. We can do this because circles are always symmetric. (It's also a nice way to unroll loops.) Here's the implementation:

private void TraceCircle(Vector2Int center, int r, AbstractPropFormation formation)
{
    int d = (5 - r * 4) / 4;
    int x = 0;
    int y = r;

    do
    {
        // Mirror the current point into all eight octants, checking that each
        // pixel is within the bounds of the image before setting it
        if (IsValidPoint(center + new Vector2Int(x, y))) { formation.m_propArray[center.x + x][center.y + y] = true; }
        if (IsValidPoint(center + new Vector2Int(x, -y))) { formation.m_propArray[center.x + x][center.y - y] = true; }
        if (IsValidPoint(center + new Vector2Int(-x, y))) { formation.m_propArray[center.x - x][center.y + y] = true; }
        if (IsValidPoint(center + new Vector2Int(-x, -y))) { formation.m_propArray[center.x - x][center.y - y] = true; }
        if (IsValidPoint(center + new Vector2Int(y, x))) { formation.m_propArray[center.x + y][center.y + x] = true; }
        if (IsValidPoint(center + new Vector2Int(y, -x))) { formation.m_propArray[center.x + y][center.y - x] = true; }
        if (IsValidPoint(center + new Vector2Int(-y, x))) { formation.m_propArray[center.x - y][center.y + x] = true; }
        if (IsValidPoint(center + new Vector2Int(-y, -x))) { formation.m_propArray[center.x - y][center.y - x] = true; }

        // Update the decision variable and advance to the next pixel
        if (d < 0)
        {
            d += 2 * x + 1;
        }
        else
        {
            d += 2 * (x - y) + 1;
            y--;
        }
        x++;
    } while (x <= y);
}

Flood Fill Algorithm

This one is quite a classic, but it's still useful nevertheless. The idea is to progressively fill an enclosed section of an image with a specific colour. The implementation uses a coordinate queue rather than recursion, for performance. We also fill the image in a west-east orientation: basically, we fill the westmost pixel first, the eastmost second, and finally go north-south.
Here's the implementation:

public void Fill(Vector2Int point)
{
    Queue<Vector2Int> q = new Queue<Vector2Int>();
    q.Enqueue(point);

    while (q.Count > 0)
    {
        Vector2Int currentPoint = q.Dequeue();
        if (!m_propArray[currentPoint.x][currentPoint.y])
        {
            Vector2Int westPoint = currentPoint, eastPoint = new Vector2Int(currentPoint.x + 1, currentPoint.y);

            // Sweep west from the starting point, queuing any unfilled
            // neighbours to the north and south along the way
            while ((westPoint.x >= 0) && !m_propArray[westPoint.x][westPoint.y])
            {
                m_propArray[westPoint.x][westPoint.y] = true;
                if ((westPoint.y > 0) && !m_propArray[westPoint.x][westPoint.y - 1])
                {
                    q.Enqueue(new Vector2Int(westPoint.x, westPoint.y - 1));
                }
                if ((westPoint.y < m_propArray[westPoint.x].Length - 1) && !m_propArray[westPoint.x][westPoint.y + 1])
                {
                    q.Enqueue(new Vector2Int(westPoint.x, westPoint.y + 1));
                }
                westPoint.x--;
            }

            // Then sweep east and do the same
            while ((eastPoint.x <= m_propArray.Length - 1) && !m_propArray[eastPoint.x][eastPoint.y])
            {
                m_propArray[eastPoint.x][eastPoint.y] = true;
                if ((eastPoint.y > 0) && !m_propArray[eastPoint.x][eastPoint.y - 1])
                {
                    q.Enqueue(new Vector2Int(eastPoint.x, eastPoint.y - 1));
                }
                if ((eastPoint.y < m_propArray[eastPoint.x].Length - 1) && !m_propArray[eastPoint.x][eastPoint.y + 1])
                {
                    q.Enqueue(new Vector2Int(eastPoint.x, eastPoint.y + 1));
                }
                eastPoint.x++;
            }
        }
    }
}

Formation Shapes

Each formation also has a specific shape. These shapes simply define the content of the formation array, and we can build them using the previously mentioned algorithms. There are 9 different types of shapes as of now (a small sketch of one of them follows at the end of this post):

- Vertical line: a simple vertical line with a width of one
- Horizontal line: a simple horizontal line with a width of one
- Diamond: a rather nice diamond shape, especially pretty in corners
- Circle: rendered using the midpoint circle algorithm; especially pretty in the center of rooms
- Cross: a simple cross shape, i.e. a vertical and a horizontal line aligned at the center
- X shape: an "X"-shaped cross, i.e. two perpendicular diagonal lines aligned at the center
- Triangle: an isosceles triangle
- Square: a solid block; every cell of the formation is simply true
- Checkers: a nice variation of the square shape in which every other cell is false

There might be more types of shapes as time goes by, but that's about it for now.

Placing props

Once the array is set, we simply need to place the actual props in the room. Each formation has an actual prop type, i.e. rocks, ferns, etc. (For simplicity's sake, let's say that every prop is a 1x1x1 m cube; this simplifies the later steps.) In order to find their positions, we simply align the array's center with the formation's specified anchor point. For each prop formation, we then instantiate a prop for each true cell while checking whether or not the prop would end up outside its room. Afterwards, we do a precise position check to make sure no props are either partially or fully outside the room. Finally, we make sure no room connections are obstructed by props. And voilà, we have a nicely decorated room. (A minimal sketch of this placement step also follows below.)

In Game Screenshots

Here's a couple of screenshots of what it looks like in-game.
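To make the formation arrays more concrete, here is a minimal Java sketch of how the diamond shape could be encoded. The post's own code is C# and builds its shapes with the algorithms above; this sketch instead uses an equivalent Manhattan-distance test for the diamond outline, and diamondFormation is a hypothetical name, not from the original post.

// A diamond formation: cells whose Manhattan distance from the
// centre equals the radius form the diamond's outline.
static boolean[][] diamondFormation(int size) {
    boolean[][] props = new boolean[size][size];
    int r = size / 2;

    for (int x = 0; x < size; ++x) {
        for (int y = 0; y < size; ++y) {
            if (Math.abs(x - r) + Math.abs(y - r) == r) {
                props[x][y] = true;
            }
        }
    }
    return props;
}

For size = 7 this marks (3,0), (0,3), (6,3), (3,6) and the diagonal steps between them; a fill routine like the one above could then turn the outline into a solid diamond.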
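And here is a rough Java sketch of the placement step described under "Placing props", using the same 1x1x1 m cube assumption. The Room interface, its contains/isConnection checks, and spawnProp are hypothetical stand-ins for the game's own bounds checks and instantiation code, not names from the original post.

interface Room {
    boolean contains(int x, int y);     // is this cell inside the room?
    boolean isConnection(int x, int y); // is this cell a door cell?
}

class PropPlacer {
    void placeFormation(boolean[][] formation, int anchorX, int anchorY, Room room) {
        // Align the centre of the formation array with the anchor point
        int originX = anchorX - formation.length / 2;
        int originY = anchorY - formation[0].length / 2;

        for (int x = 0; x < formation.length; ++x) {
            for (int y = 0; y < formation[x].length; ++y) {
                if (!formation[x][y]) continue;

                int worldX = originX + x;
                int worldY = originY + y;

                // Skip cells that would fall outside the room or block a door
                if (room.contains(worldX, worldY) && !room.isConnection(worldX, worldY)) {
                    spawnProp(worldX, worldY);
                }
            }
        }
    }

    // Stand-in for the game's actual prop instantiation
    void spawnProp(int x, int y) {
        System.out.println("prop at (" + x + ", " + y + ")");
    }
}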
  25. I have programmed an implementation of the Separating Axis Theorem to handle collisions between 2D convex polygons. It is written in Processing and can be viewed on Github here. There are a couple of issues with it that I would like some help in resolving. In the construction of Polygon objects, you specify the width and height of the polygon and the initial rotation offset by which the vertices will be placed around the polygon. If the rotation offset is 0, the first vertex is placed directly to the right of the object. If higher or lower, the first vertex is placed clockwise or counter-clockwise, respectively, around the circumference of the object by the rotation amount. The rest of the vertices follow at a consistent offset of TWO_PI / number of vertices. While this places the vertices at the correct angle around the polygon, the problem is that if the rotation is anything other than 0, the width and height of the polygon are no longer the values specified. They are reduced because the vertices are placed around the polygon using the sin and cos functions, which often return values other than 1 or -1. Of course, when the half width and half height are multiplied by a sin or cos value other than 1 or -1, they are reduced. This is my issue. How can I place an arbitrary number of vertices at an arbitrary rotation around the polygon, while maintaining both the intended shape specified by the number of vertices (triangle, hexagon, octagon) and the intended width and height of the polygon as specified by the parameter values in the constructor?

The Polygon code:

class Polygon {
  PVector position;
  PShape shape;
  int w, h, halfW, halfH;
  color c;
  ArrayList<PVector> vertexOffsets;

  Polygon(PVector position, int numVertices, int w, int h, float rotation) {
    this.position = position;
    this.w = w;
    this.h = h;
    this.halfW = w / 2;
    this.halfH = h / 2;
    this.c = color(255);
    vertexOffsets = new ArrayList<PVector>();

    if (numVertices < 3) numVertices = 3;

    shape = createShape();
    shape.beginShape();
    shape.fill(255);
    shape.stroke(255);

    // Place the vertices evenly around the polygon, starting at the given
    // rotation offset, and remember each one's offset from the centre
    for (int i = 0; i < numVertices; ++i) {
      PVector vertex = new PVector(position.x + cos(rotation) * halfW, position.y + sin(rotation) * halfH);
      shape.vertex(vertex.x, vertex.y);
      rotation += TWO_PI / numVertices;
      PVector vertexOffset = vertex.sub(position);
      vertexOffsets.add(vertexOffset);
    }
    shape.endShape(CLOSE);
  }

  void move(float x, float y) {
    position.set(x, y);
    for (int i = 0; i < shape.getVertexCount(); ++i) {
      PVector vertexOffset = vertexOffsets.get(i);
      shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
    }
  }

  void rotate(float angle) {
    for (int i = 0; i < shape.getVertexCount(); ++i) {
      PVector vertexOffset = vertexOffsets.get(i);
      vertexOffset.rotate(angle);
      shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
    }
  }

  void setColour(color c) {
    this.c = c;
  }

  void render() {
    shape.setFill(c);
    shape(shape);
  }
}

My other issue is that when two polygons with three vertices each collide, they are not always moved out of collision smoothly by the Minimum Translation Vector returned by the SAT algorithm. The polygon moved out of collision by the MTV does not rest against the other polygon as it should; it instead jumps back a small distance. I find this very strange, as I have been unable to replicate this behaviour when resolving collisions between polygons of other vertex counts, and I cannot find the flaw in the implementation, though it must be there. What could be causing this incorrect collision resolution, which from my testing appears to occur only between polygons of three vertices? Any help you can provide on these issues would be greatly appreciated. Thank you.
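On the first question, one possible approach (a sketch under stated assumptions, not the only answer): place the vertex offsets on a unit circle first, measure their bounding box, then rescale and recentre them so that the bounding box is exactly w by h. buildVertexOffsets is a hypothetical helper written in Processing-style Java; note that when w and h differ, the non-uniform rescale slightly distorts the regular polygon, which is unavoidable if the bounding box must match exactly.

ArrayList<PVector> buildVertexOffsets(int numVertices, int w, int h, float rotation) {
  ArrayList<PVector> offsets = new ArrayList<PVector>();
  float minX = Float.MAX_VALUE, maxX = -Float.MAX_VALUE;
  float minY = Float.MAX_VALUE, maxY = -Float.MAX_VALUE;

  // First pass: place the vertices on a unit circle and track their bounding box
  for (int i = 0; i < numVertices; ++i) {
    float angle = rotation + i * TWO_PI / numVertices;
    PVector v = new PVector(cos(angle), sin(angle));
    minX = min(minX, v.x); maxX = max(maxX, v.x);
    minY = min(minY, v.y); maxY = max(maxY, v.y);
    offsets.add(v);
  }

  // Second pass: rescale and recentre so the bounding box is exactly w x h
  float cx = (minX + maxX) / 2, cy = (minY + maxY) / 2;
  for (PVector v : offsets) {
    v.x = (v.x - cx) * w / (maxX - minX);
    v.y = (v.y - cy) * h / (maxY - minY);
  }
  return offsets;
}

The constructor could then build the PShape from position plus these offsets instead of computing cos/sin inline.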