
Showing results for tags 'Algorithm'.
Found 170 results

Algorithm Raycasts or Rigidbody2D for 2D Platformer in Unity
3dmodelerguy posted a topic in General and Gameplay Programming
So I am trying to prototype a 2D platformer-like game in Unity and trying to figure out the best method that I should be investigating for controlling the player character. I see generally 2 different ways to handle this, which is either using Rigidbody2D or using raycasts. The list of things that I am looking to do right now:

- has some sort of gravity-like effect
- moving side to side
- jumping
- double jumping
- being able to walk up slopes of a certain angle (but prevent it after a certain angle and have the user slide down if they are on that angle or larger)
- be able to hang off an edge and pull up

I'm sure some things might come up later in development if I get that far, but these are what I want to get working at least at a very basic level before I move on from the character controller for the prototype (if you watch this video for about 15-20 seconds, you will see generally most of the functionality I am looking to achieve: https://www.youtube.com/watch?v=6rzTnx6IHqM&start=375). Now I started with the Rigidbody2D since I do want to have some level of gravity in the game (not just for the player but for items, enemies, etc.) and I can get the first 4 working pretty easily, but the 5th is more troublesome (and I have not gotten to the 6th). I can move up angles (I am setting Rigidbody2D.velocity for movement as it seems to be the most recommended) but, for example, if I am walking up an angle and then stop, my character jumps up a little (I am guessing because of the forward velocity it has; when I stop applying horizontal velocity for moving side to side, there is still extra vertical velocity). So I have 2 questions: Should I be looking at Rigidbody2D or manual raycasts? Either way that you recommend going, do you have any resources that you would recommend looking at for reference (video tutorials, articles, etc.)? 
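A common fix for that hop is to drive the character along the slope's surface tangent instead of with a purely horizontal velocity, so stopping leaves no leftover vertical component. This is a plain-Java sketch of just the math (the names are mine, not Unity API); in Unity the normal would come from a ground raycast such as Physics2D.Raycast, and the result would be fed into Rigidbody2D.velocity:

```java
public class SlopeVelocity {
    // Project the desired horizontal speed onto the slope's surface tangent so
    // the character follows the incline instead of launching off it.
    // (normalX, normalY) must be a normalized ground normal; input is -1..1.
    public static double[] velocityAlongSlope(double normalX, double normalY,
                                              double input, double speed) {
        // Tangent = normal rotated -90 degrees: (ny, -nx). For an upward-facing
        // normal this tangent points in the +x direction along the surface.
        double tx = normalY;
        double ty = -normalX;
        return new double[] { tx * input * speed, ty * input * speed };
    }
}
```

On flat ground (normal (0, 1)) this degenerates to plain horizontal movement; on a 45° slope the same speed is split evenly between x and y, so releasing the input leaves no residual vertical velocity.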
OpenGL Wrap around clip space points created in the vertex shader
Andr posted a topic in Graphics and GPU Programming
Hello, I am currently drawing an FFT ocean into a texture including Dx, Dy and Dz. As I need the real height at a point for another algorithm, I am passing the points through a vertex shader as follows:

#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoords;

// Displacement and normal map
uniform sampler2D displacementMap;
uniform mat4 MVPMatrix;
uniform int N;
uniform vec2 gridLowerLeftCorner;

out float displacedHeight;

void main()
{
    // Displace the original position by the amount in the texture
    vec3 displacedVertex = texture(displacementMap, texCoords).xyz + position;
    // Scale vertex to the 0 -> 1 range
    vec2 waterCellIndices = vec2((displacedVertex.x - gridLowerLeftCorner.x)/N,
                                 (displacedVertex.z - gridLowerLeftCorner.y)/N);
    // Scale it to -1 -> 1
    waterCellIndices = (waterCellIndices * 2.0) - 1.0;
    displacedHeight = displacedVertex.y;
    gl_Position = vec4(waterCellIndices, 0, 1);
}

This works correctly (it writes the correct height at a given point). The issue is that some points, due to the Dx and Dz displacement, will get outside the clip space. These points should instead wrap around, as the ocean is a collection of tiles. As you can see in the attached file, the edges fit together perfectly inside the white square if they would wrap around (this is the clip space dimensions from RenderDoc). Is there any way I could wrap around this texture (in reality wrap around the clip space positions) so it all stays inside the viewport correctly? I tried to wrap around in the vertex shader by checking the boundaries and wrapping around, but it doesn't work when a triangle has at least one vertex inside the viewport and others outside. Many thanks, André 
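For what it's worth, per-vertex wrapping in the shader can't work on its own: a triangle straddling the border gets its vertices wrapped to opposite sides, which is exactly the artifact described. The usual fix is to wrap whole tiles (e.g. draw the tile mesh again at offset positions) so every triangle stays contiguous. The wrap itself is just a floored modulo into [-1, 1); a Java sketch of that coordinate math (hypothetical helper, same result GLSL's mod() would give):

```java
public class ClipWrap {
    // Wrap a coordinate into the [-1, 1) clip-space range using a floored
    // modulo: wrap(x) = mod(x + 1, 2) - 1.
    public static double wrap(double x) {
        double m = (x + 1.0) % 2.0;
        if (m < 0) m += 2.0;   // Java's % is truncated; emulate GLSL's floored mod
        return m - 1.0;
    }
}
```

A point displaced to x = 1.5 wraps to -0.5, i.e. it re-enters from the opposite edge of the tile, which matches the tiling behaviour of the FFT ocean.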
After a LOT of bug fixes, and some algorithm changes, our octree marching prisms algorithm is now in a much better state. We added a completely new way of determining chunk level transitions, but before discussing it we will first talk a bit more about our voxel octree. Our octree is very explicit. By that we mean it is built up of actual geometric components. First we have voxel vertexes (not to be confused with mesh vertexes) for the corners of voxels. Then we have voxel edges that connect them. Then we have voxel faces, which are made up of a set of voxel edges (either three or four), and finally we have the voxels themselves, which reference our faces. Currently we support prismatic voxels, since they make the best looking world; however, the lower level constructs are designed to also support the more common cubic voxels. In addition to our octree of voxels, voxel faces are kept in quadtrees, while voxel edges are organized into binary trees. Everything is pretty much interconnected and there is a reference counting system that handles deallocation of unused objects. So why go through all this trouble? The answer is that by doing things this way we can avoid traversing the octree when building meshes using our marching prisms algorithms. For instance, if there is a mesh edge on a voxel face, since that face is referenced by the voxels on either side of it, we can easily connect together mesh triangles generated in both voxels. The same goes for voxel edges. A mesh vertex on a voxel edge is shared by all voxels that reference it. So in short, seamless meshes are built in place with little effort. This is important since meshes will be constantly recalculated for LOD as a player moves around. This brings us to chunking. As we talked about in our first entry, a chunk is nothing more than a subsection of the octree. Most importantly, we need to know where there are up and down chunk transitions. Here our face quadtrees and edge binary trees help us out. 
From the top of any chunk we can quickly traverse the quadtrees and binary trees and tag faces and edges as transition objects. The algorithm is quite simple since we know there will only be one level difference between chunks, and therefore if there is a level difference, one level will be odd and the other even. So we can tag our edges and faces with up to two chunk levels in a 2 element array indexed by the last bit of the chunk level. After going down the borders of each chunk, border objects will now have one of two states: they will be tagged with a single level, or with two levels, one being one higher than the other. From this we can now generate transition voxels with no more need to look at a voxel's neighboring voxels. One more note about our explicit voxels: since they are in fact explicit, there is no requirement that they form a regular grid. As we said before, our world grid is basically wrapped around a sphere, which gives us fairly uniform terrain no matter where you are on the globe. Hopefully in the future we can also use this versatility to build trees. Ok, so it's picture time......... We added some 3D simplex noise to get something that isn't a simple sphere. Hopefully in our next entry we will try a multifractal.
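The two-slot tagging described above fits in a few lines; TransitionTag and its fields are hypothetical names, only the last-bit indexing comes from the entry:

```java
public class TransitionTag {
    // One slot per chunk-level parity: adjacent chunks differ by exactly one
    // level, so one is odd and one is even, and the last bit picks the slot.
    final int[] levels = { -1, -1 };

    public void tag(int chunkLevel) {
        levels[chunkLevel & 1] = chunkLevel;
    }

    // A border face or edge is a transition when both parity slots got tagged,
    // i.e. it was visited from chunks of two different levels.
    public boolean isTransition() {
        return levels[0] != -1 && levels[1] != -1;
    }
}
```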

Hello everyone, my name is Valerio and I'm an expert in business development and product innovation. At the moment I'm working for a big gambling company and I'm also working on a side project to launch a startup aimed at the whole gaming world. I have an idea and I would like the help of this community to validate it. Thanks to machine learning it is possible to predict user behavior. For example, from the tests I have done, I can assure you that it is possible to predict, with an accuracy between 80 and 90 percent (depending on the quality of the data), which users will use a certain app next week. My idea of the service is to create a Software as a Service, with a monthly fee, which allows developers and business owners of a game to load tables of data into the software and receive the desired predictive output. For example, thanks to this software you might know which users will play and which ones will not play next week, or analyze more specific metrics, like who will make a purchase and who will not, who will use a particular feature and who will not, and so on. With this information, the team that manages the app can set up marketing activities to increase engagement, such as sending push notifications only to those who you know will not play next week, or special offers to those who will not make a purchase. Here are the questions I need you to answer: Do you find this service useful? If so, how much would you pay per month for such a service? Thank you all for your participation, Valerio.


Algorithm Scalable Data Driven design for RPG
babyjesus posted a topic in General and Gameplay Programming
Hello! I'm currently developing a top-down RPG styled game with an Entity Component System architecture and, as the game grows in features, so do my game entities, that is, item models, enemy prototypes, etc. Those definitions are currently in JSON files but at the end of the day I still have long factory classes that read from those files and populate the entities with their respective components and properties in the most naive way. Reading through a presentation about Techniques and Strategies for Data-driven design in Game Development (slides 80–93) (warning: big pdf file) there is this "prototyping approach" where you can build up each game entity from multiple prototypes. I find this really interesting; however, the presentation doesn't mention any implementation details and I'm totally in the dark. I don't know how powerful this system should be. By the way, I'm using Java and LibGDX's engine. My first idea is making a robust prototype-instancing factory where, given a JSON file, it will be able to return an entity populated with its corresponding components. For example:

Enemies.json
{
    "skeleton" : {
        "id" : 0,
        "components" : {
            "HealthComponent" : {
                "totalHealth" : 100
            },
            "TextureComponent" : {
                "pathToTexture" : "assets/skeleton.png"
            }
        }
    }
}

If I wanted to instantiate a Skeleton entity, I would read its prototype, iterate over its components and somehow instantiate them correctly. With this approach I have the following issues: It will most likely involve using Java reflection to instantiate entities from a JSON file. This is a topic that I know little about and it will probably end up in dark magic code. Some instance properties can't be prototyped and will have to be passed as parameters to the factory. For example, when creating an enemy entity, an (x, y) position will have to be provided. Suddenly creating instances is not so straightforward. How powerful should this system be? 
Should it have this "recursive" behavior where you can extend a prototype with another and so on? This sounds a little bit like dependency injection. Am I reinventing the wheel? Is there anything already done that I can make use of? Even though it's still in its infancy, here is a short demo (under one minute) of my game. Thank you! 
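Not an authoritative answer, but a minimal sketch of the prototype-instancing factory in plain Java may help frame the reflection question. It swaps Class.forName for an explicit class registry (less dark magic, no classpath surprises) and takes an already-parsed property map as input; in the real project LibGDX's JsonReader would produce that map, and the component classes here are stand-ins:

```java
import java.util.HashMap;
import java.util.Map;

public class PrototypeFactory {
    // Stand-in components; in the real game these come from the ECS.
    public static class HealthComponent { public int totalHealth; }
    public static class TextureComponent { public String pathToTexture; }

    // Explicit registry: the JSON component name maps to a concrete class.
    static final Map<String, Class<?>> REGISTRY = new HashMap<>();
    static {
        REGISTRY.put("HealthComponent", HealthComponent.class);
        REGISTRY.put("TextureComponent", TextureComponent.class);
    }

    // Instantiate one component from its name and a property map (the parsed
    // equivalent of one "components" entry in Enemies.json). Public fields
    // are filled via reflection; Field.set handles int/boolean unboxing.
    public static Object build(String name, Map<String, Object> props) throws Exception {
        Class<?> clazz = REGISTRY.get(name);
        if (clazz == null) throw new IllegalArgumentException("Unknown component: " + name);
        Object component = clazz.getDeclaredConstructor().newInstance();
        for (Map.Entry<String, Object> e : props.entrySet()) {
            clazz.getField(e.getKey()).set(component, e.getValue());
        }
        return component;
    }
}
```

Per-instance data like the (x, y) position would simply be extra parameters to a higher-level create-entity method, applied after the prototyped components are built.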
Algorithm Flexible Room Layout algorithm
jbdev posted a blog entry in Projects Of Some Degree Of Interest
While making a roguelike game, procedural generation has to be quick and yet intriguing enough for the generated level to be fun to just pick up and play. There are many ways to generate and lay out procedurally generated rooms. In The Binding Of Isaac, for example, you have many different types of regular room presets. The generator just picks a preset based on things like room placement and size. Because those rooms are always of fixed size, this is a nice compromise. By having handmade presets the generated level is somewhat believable (i.e. there are no gaps or obstacles below a room door or secret room and whatnot). Another example would be Nuclear Throne. The game takes a different approach to procedural generation by keeping it relatively simple. Because it's not room-based like The Binding Of Isaac, there are more things like caves and large open areas. The gameplay also plays into this, as the player needs to eliminate every enemy to spawn a portal to the next level. Because my game is somewhat more inspired by The Binding of Isaac, the right way to procedurally generate rooms would be to use presets, and this is how I make special rooms. However, there's a big difference between The Binding Of Isaac and my game: my regular rooms aren't always the same size. This means that rather than having presets of regular rooms as well as special rooms I need something more flexible and, most importantly, dynamic.

The anatomy of a Room

In my game, as I've said in a previous post, levels are big two-dimensional arrays from which the level geometry is generated. Every room of a level is made using a BSP tree. I won't go into much detail on how rooms are generated, but in essence, we create a grid from which we trace a path between two rooms and sparsely attach bonus rooms along the way. Because I already have room sizes and whatnot with that level generation, I could reuse the same idea for room layouts. 
Within rooms, I've also set special anchor points from which props (or more precisely, prop formations, more on that later...) could be generated.

Basic Layouts

The idea here is to have room layout presets. Within those, presets are an array of prop formations, and each of these formations is linked to a specific anchor point. A formation has a two-dimensional boolean array that indicates whether or not there should be a prop here. Let's take, for example, a diamond array: The dimension of the array depends on its room's dimensions. Here's how it's done:

\( size = \left\lceil \frac{2 \left( max(RoomSize_{x}, RoomSize_{y}) \right)}{3} \right\rceil \)

In order to change the array's content we actually use common image manipulation algorithms...

Bresenham's Line Algorithm

The first used algorithm is Bresenham's Line Algorithm. Its purpose is to simply render a line described by two bitmap points onto a raster image. To put it simply, we get the deviation (delta, or "d" for short) in both X and Y of each point of the described line and compare both of them. Depending on the biggest, we simply increment the point on that axis and colour it in. Here's the implementation:

public void TraceLine(Vector2Int p0, Vector2Int p1)
{
    int dx = Mathf.Abs(p1.x - p0.x), sx = p0.x < p1.x ? 1 : -1;
    int dy = Mathf.Abs(p1.y - p0.y), sy = p0.y < p1.y ? 1 : -1;
    int err = (dx > dy ? dx : -dy) / 2, e2;

    while (true)
    {
        m_propArray[p0.x][p0.y] = true;
        if (p0.x == p1.x && p0.y == p1.y)
        {
            break;
        }
        e2 = err;
        if (e2 > -dx)
        {
            err -= dy;
            p0.x += sx;
        }
        if (e2 < dy)
        {
            err += dx;
            p0.y += sy;
        }
    }
}

Midpoint Circle Algorithm

The midpoint circle algorithm is an algorithm used to render a circle onto an image. The idea is somewhat similar to Bresenham's Line Algorithm, but rather than drawing a line we draw a circle. To do this we also need, for simplicity's sake, to divide the circle into 8 pieces, called octants. We can do this because circles are always symmetric. 
(It's also a nice way to unroll loops.) Here's the implementation:

private void TraceCircle(Vector2Int center, int r, AbstractPropFormation formation)
{
    int d = (5 - r * 4) / 4;
    int x = 0;
    int y = r;

    do
    {
        // Ensure the index is in range before setting (depends on your image implementation).
        // In this case we check if the pixel location is within the bounds of the image before setting the pixel.
        if (IsValidPoint(center + new Vector2Int(x, y)))   { formation.m_propArray[center.x + x][center.y + y] = true; }
        if (IsValidPoint(center + new Vector2Int(x, -y)))  { formation.m_propArray[center.x + x][center.y - y] = true; }
        if (IsValidPoint(center + new Vector2Int(-x, y)))  { formation.m_propArray[center.x - x][center.y + y] = true; }
        if (IsValidPoint(center + new Vector2Int(-x, -y))) { formation.m_propArray[center.x - x][center.y - y] = true; }
        if (IsValidPoint(center + new Vector2Int(y, x)))   { formation.m_propArray[center.x + y][center.y + x] = true; }
        if (IsValidPoint(center + new Vector2Int(y, -x)))  { formation.m_propArray[center.x + y][center.y - x] = true; }
        if (IsValidPoint(center + new Vector2Int(-y, x)))  { formation.m_propArray[center.x - y][center.y + x] = true; }
        if (IsValidPoint(center + new Vector2Int(-y, -x))) { formation.m_propArray[center.x - y][center.y - x] = true; }

        if (d < 0)
        {
            d += 2 * x + 1;
        }
        else
        {
            d += 2 * (x - y) + 1;
            y--;
        }
        x++;
    } while (x <= y);
}

Flood Fill Algorithm

This is quite a classic, but it's still useful nevertheless. The idea is to progressively fill a section of an image with a specific colour. The implementation uses a coordinate queue rather than recursion for optimization's sake. We also try to fill the image using a west-east orientation. Basically, we fill the westmost pixel first, eastmost second, and finally go north-south. 
Here's the implementation:

public void Fill(Vector2Int point)
{
    Queue<Vector2Int> q = new Queue<Vector2Int>();
    q.Enqueue(point);

    while (q.Count > 0)
    {
        Vector2Int currentPoint = q.Dequeue();
        if (!m_propArray[currentPoint.x][currentPoint.y])
        {
            Vector2Int westPoint = currentPoint, eastPoint = new Vector2Int(currentPoint.x + 1, currentPoint.y);
            while ((westPoint.x >= 0) && !m_propArray[westPoint.x][westPoint.y])
            {
                m_propArray[westPoint.x][westPoint.y] = true;
                if ((westPoint.y > 0) && !m_propArray[westPoint.x][westPoint.y - 1])
                {
                    q.Enqueue(new Vector2Int(westPoint.x, westPoint.y - 1));
                }
                if ((westPoint.y < m_propArray[westPoint.x].Length - 1) && !m_propArray[westPoint.x][westPoint.y + 1])
                {
                    q.Enqueue(new Vector2Int(westPoint.x, westPoint.y + 1));
                }
                westPoint.x--;
            }
            while ((eastPoint.x <= m_propArray.Length - 1) && !m_propArray[eastPoint.x][eastPoint.y])
            {
                m_propArray[eastPoint.x][eastPoint.y] = true;
                if ((eastPoint.y > 0) && !m_propArray[eastPoint.x][eastPoint.y - 1])
                {
                    q.Enqueue(new Vector2Int(eastPoint.x, eastPoint.y - 1));
                }
                if ((eastPoint.y < m_propArray[eastPoint.x].Length - 1) && !m_propArray[eastPoint.x][eastPoint.y + 1])
                {
                    q.Enqueue(new Vector2Int(eastPoint.x, eastPoint.y + 1));
                }
                eastPoint.x++;
            }
        }
    }
}

Formation Shapes

Each formation also has a specific shape. These shapes simply define the content of the formation array. We can build these shapes using the previously mentioned algorithms. There are 9 different types of shapes as of now.

Vertical line - A simple vertical line of a width of one
Horizontal line - A simple horizontal line of a width of one
Diamond - A rather nice diamond shape, especially pretty in corners
Circle - The circle is rendered using the midpoint circle algorithm. Especially pretty in the center of rooms
Cross - A simple cross shape, i.e. a vertical and a horizontal line aligned at the center
X Shape - An "X" shaped cross, i.e. two perpendicular diagonal lines aligned at the center
Triangle - An isosceles triangle
Square - A solid block. 
Every cell of the formation is essentially true.
Checkers - A nice variation of the square shape. Every other cell is false.

There might be more types of shapes as time goes by, but that's about it for now.

Placing Props

Once the array is set, we simply need to place the actual props in the room. Each formation is of an actual type, i.e. rocks, ferns, etc. (For simplicity's sake, let's say that every prop is a 1x1x1m cube. This will simplify future steps.) In order to find their position, we simply align the array's center to the formation's specified anchor point. For each prop formation, we then instantiate props for each true cell while checking whether or not the prop would be outside its room. Afterwards, we do a precise position check to make sure no props are either partially or fully outside a room. Finally, we make sure no room connections are obstructed with props. And voilà, we have a nicely decorated room.

In-Game Screenshots

Here's a couple of screenshots of what it looks like in-game 
For this entry we implemented the ubiquitous ridged multifractal function. It's not so interesting in and of itself, but it does highlight a few features that were included in our voxel engine. First, as we mentioned, being a voxel engine, it supports full 3D geometry (caves, overhangs and so forth) and not just heightmaps. However, if we look at a typical world these features are the exception rather than the rule. It therefore makes sense to optimize the heightmap portion of our terrain functions. This is especially true since our voxels are vertically aligned. This means that there will be many places where the same height calculation is repeated. Even if we look at a single voxel, nearly the same calculation is used for a lower corner and its corresponding upper corner. The only difference being the subtraction from the voxel vertex position. ...... Enter the unit sphere! In our last entry we talked about explicit voxels, with edges and faces and vertexes. However, all edges and faces are not created equal. Horizontal faces (in our case the triangular faces) and horizontal edges contain a special pointer that references their corresponding parts in a unit sphere. The unit sphere can be thought of as residing in the center of each planet. Like our world octree, it is formed from a subdivided icosahedron, only it is not extruded and is organized into a quadtree instead of an octree, being more 2D in nature. Vertexes in our unit sphere can be used to cache heightmap function values to avoid repeated calculations. We also use our unit sphere to help with the horizontal part of our voxel subdivision operation. By referencing the unit sphere we only have to multiply a unit sphere vertex by a height value to generate voxel vertex coordinates. Finally, our unit sphere is also used to provide coordinates during the ghostwalking process we talked about in our first entry. 
Without it, our ghostwalking would be more computationally expensive, as it would have to calculate spherical coordinates on each iteration instead of just calculating heights, which are quite simple to calculate as they are all generated by simply averaging two other heights. Ownership of unit sphere faces is a bit complex. Ostensibly they are owned by all voxel faces that reference them (and therefore add to their reference counter). However, this presents a bit of a problem, as they are also used in ghostwalking, which happens every LOD/rechunking iteration, and in fact they may or may not end up being referenced by voxel faces, depending on whether mesh geometry is found. Even if no geometry is found we may want to keep them for the next ghostwalk search. To solve this problem, we implemented undead objects. Unit sphere faces can become undead, and can even be created that way if they are built by the ghostwalker. When they are undead they are kept in a special list which keeps them pseudo-alive. They also have an undead life value associated with them. When they are touched by the ghostwalker that value is renewed. However, if after a few iterations they are untouched, they become truly dead and are destroyed. Picture time again..... So here is our ridged multifractal in wireframe. We'll flip it around to show our level transition........ Here's a place that needs a bit of work. The chunk level transitions are correct but they are probably a bit more complex than they need to be. We use a very general voxel tessellation algorithm since we have to handle various combinations of vertical and horizontal transitions. We will probably optimize this later, especially for the common cases, but for now it serves its purpose. Next up we are going to try to add threads. We plan to use a separate thread (or threads) for the LOD/rechunk operations, and another one for the graphics.
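As a rough illustration of the terrain function itself (not the engine's actual code), here is a common Musgrave-style ridged multifractal in Java, with a simple hash-based value noise standing in for the real basis function:

```java
public class RidgedMultifractal {
    // Deterministic hash-based value noise at integer lattice points, in (-1, 1].
    static double lattice(int x, int y) {
        int n = x + y * 57;
        n = (n << 13) ^ n;
        return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
    }

    static double smooth(double t) { return t * t * (3 - 2 * t); }

    // Bilinearly interpolated value noise; stays within [-1, 1].
    static double valueNoise(double x, double y) {
        int xi = (int) Math.floor(x), yi = (int) Math.floor(y);
        double fx = smooth(x - xi), fy = smooth(y - yi);
        double a = lattice(xi, yi),     b = lattice(xi + 1, yi);
        double c = lattice(xi, yi + 1), d = lattice(xi + 1, yi + 1);
        double top = a + (b - a) * fx, bot = c + (d - c) * fx;
        return top + (bot - top) * fy;
    }

    // Ridged multifractal: sharp ridges come from squaring (offset - |noise|);
    // each octave is weighted by the previous octave's signal so detail
    // concentrates on the ridges.
    public static double ridged(double x, double y, int octaves,
                                double lacunarity, double gain, double offset) {
        double sum = 0, freq = 1, amp = 0.5, prev = 1;
        for (int i = 0; i < octaves; i++) {
            double n = offset - Math.abs(valueNoise(x * freq, y * freq));
            n *= n;
            sum += n * amp * prev;
            prev = n;
            freq *= lacunarity;
            amp *= gain;
        }
        return sum;  // stays in [0, 1] for offset = 1 and gain <= 0.5
    }
}
```

Because the basis function is deterministic, the same (x, y) always yields the same height, which is exactly the property that makes the unit-sphere caching described above worthwhile.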

I have programmed an implementation of the Separating Axis Theorem to handle collisions between 2D convex polygons. It is written in Processing and can be viewed on Github here. There are a couple of issues with it that I would like some help in resolving. In the construction of Polygon objects, you specify the width and height of the polygon and the initial rotation offset by which the vertices will be placed around the polygon. If the rotation offset is 0, the first vertex is placed directly to the right of the object. If higher or lower, the first vertex is placed clockwise or counter-clockwise, respectively, around the circumference of the object by the rotation amount. The rest of the vertices follow by a consistent offset of TWO_PI / number of vertices. While this places the vertices at the correct angle around the polygon, the problem is that if the rotation is anything other than 0, the width and height of the polygon are no longer the values specified. They are reduced because the vertices are placed around the polygon using the sin and cos functions, which often return values other than 1 or -1. Of course, when the half width and half height are multiplied by a sin or cos value other than 1 or -1, they are reduced. This is my issue. How can I place an arbitrary number of vertices at an arbitrary rotation around the polygon, while maintaining both the intended shape specified by the number of vertices (triangle, hexagon, octagon), and the intended width and height of the polygon as specified by the parameter values in the constructor? 
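One approach that keeps the intended width and height for any rotation offset: place the vertices exactly as you already do, then rescale them so the measured bounding box matches the requested size. A plain-Java sketch (not a drop-in for the Processing class; the method name and array layout are mine):

```java
public class PolygonVertices {
    // Place numVertices around an ellipse of half-extents halfW/halfH starting
    // at 'rotation', then rescale each axis so the actual bounding box equals
    // 2*halfW by 2*halfH. Assumes numVertices >= 3 (non-degenerate extents).
    public static double[][] place(int numVertices, double halfW, double halfH, double rotation) {
        double[][] v = new double[numVertices][2];
        double minX = Double.MAX_VALUE, maxX = -Double.MAX_VALUE;
        double minY = Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
        for (int i = 0; i < numVertices; ++i) {
            v[i][0] = Math.cos(rotation) * halfW;
            v[i][1] = Math.sin(rotation) * halfH;
            minX = Math.min(minX, v[i][0]); maxX = Math.max(maxX, v[i][0]);
            minY = Math.min(minY, v[i][1]); maxY = Math.max(maxY, v[i][1]);
            rotation += 2 * Math.PI / numVertices;
        }
        // Rescale so the measured extents match the requested ones; the shape
        // (vertex angles) is preserved, only the size is corrected.
        double sx = 2 * halfW / (maxX - minX), sy = 2 * halfH / (maxY - minY);
        for (double[] p : v) { p[0] *= sx; p[1] *= sy; }
        return v;
    }
}
```

In the Processing constructor, the same idea would mean computing all vertex offsets first, then scaling them before calling shape.vertex().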
The Polygon code:

class Polygon
{
    PVector position;
    PShape shape;
    int w, h, halfW, halfH;
    color c;
    ArrayList<PVector> vertexOffsets;

    Polygon(PVector position, int numVertices, int w, int h, float rotation)
    {
        this.position = position;
        this.w = w;
        this.h = h;
        this.halfW = w / 2;
        this.halfH = h / 2;
        this.c = color(255);
        vertexOffsets = new ArrayList<PVector>();
        if (numVertices < 3) numVertices = 3;

        shape = createShape();
        shape.beginShape();
        shape.fill(255);
        shape.stroke(255);
        for (int i = 0; i < numVertices; ++i)
        {
            PVector vertex = new PVector(position.x + cos(rotation) * halfW, position.y + sin(rotation) * halfH);
            shape.vertex(vertex.x, vertex.y);
            rotation += TWO_PI / numVertices;
            PVector vertexOffset = vertex.sub(position);
            vertexOffsets.add(vertexOffset);
        }
        shape.endShape(CLOSE);
    }

    void move(float x, float y)
    {
        position.set(x, y);
        for (int i = 0; i < shape.getVertexCount(); ++i)
        {
            PVector vertexOffset = vertexOffsets.get(i);
            shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
        }
    }

    void rotate(float angle)
    {
        for (int i = 0; i < shape.getVertexCount(); ++i)
        {
            PVector vertexOffset = vertexOffsets.get(i);
            vertexOffset.rotate(angle);
            shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
        }
    }

    void setColour(color c)
    {
        this.c = c;
    }

    void render()
    {
        shape.setFill(c);
        shape(shape);
    }
}

My other issue is that when two polygons with three vertices each collide, they are not always moved out of collision smoothly by the Minimum Translation Vector returned by the SAT algorithm. The polygon moved out of collision by the MTV does not rest against the other polygon as it should; it instead jumps back a small distance. I find this very strange, as I have been unable to replicate this behaviour when resolving collisions between polygons of other vertex quantities and I cannot find the flaw in the implementation, though it must be there. 
What could be causing this incorrect collision resolution, which from my testing appears to only occur between polygons of three vertices? Any help you can provide on these issues would be greatly appreciated. Thank you.

Algorithm Reading RVA memory address of a given exported function demangled name
Iris_Technologies posted a topic in General and Gameplay Programming
Suppose I don't have any linker at hand but I am calling an exported function from a C++ DLL on Windows, i.e. sqrt from mvcrt14.dll. How would I get just the Relative Virtual Address of sqrt from that DLL, to simulate what the linker does, and convert this call into a call to that RVA in the hex-coded generated .exe file? Likewise, how would I read the RVA from the Mac, Android, iOS and Linux library formats? 
I'm struggling to find the correct way to make a ray intersect with a curve that is swept along an axis. In theory I thought it should be simple. I decompose the problem into components: find the intersection on the curve (cross section), which is easy; extrude that intersection point into an axis-aligned line; solve the slope intersection to that line to get the final offset. To be sure I got it right, I'm starting with a swept 45° line centered on the origin (line-plane intersection with a dot product would be more efficient, but remember I'm trying to validate sweeping along a line). The line equation is origine + directionVector * t. The line-to-line intersection equation is t = (origine1 - origine2)/(directionVector2 - directionVector1), assuming they are never parallel. So let line2dIntersection(directionVector1, origine1, directionVector2, origine2). Assuming the ray starts on the xy plane (pseudo code), I first compute the cross section into xz:

intersection1 = line2dIntersection(rayDir.xz, vector2(origine.x, 0), vector2(1, 1).normalize(), vector2(0, 0));
result.xz = raydir.xz * intersection1;

Then find the slope sweep offset into yz:

intersection2 = line2dIntersection(rayDir.yz, vector2(origine.y, 0), vector2(1, 0).normalize(), vector2(0, result.z));
result.y = raydir.y * intersection2;

But all my results are garbage. What am I doing wrong? Where is the leap of logic I made?
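For comparison, here is the 2D line-line intersection written out with the cross-product form; the scalar division in the formula above collapses the vectors to a single component and loses the sign information, which may be where the garbage comes from. Parameter names follow the post's pseudocode, and the never-parallel assumption is kept:

```java
public class Line2D {
    // Returns t such that origine1 + directionVector1 * t lies on line 2.
    // t = cross(origine2 - origine1, directionVector2) / cross(directionVector1, directionVector2)
    // where cross((ax, ay), (bx, by)) = ax*by - ay*bx.
    public static double line2dIntersection(double d1x, double d1y, double o1x, double o1y,
                                            double d2x, double d2y, double o2x, double o2y) {
        double denom = d1x * d2y - d1y * d2x;   // cross(d1, d2); non-zero if not parallel
        return ((o2x - o1x) * d2y - (o2y - o1y) * d2x) / denom;
    }
}
```

For example, a ray from (0, 0) with direction (1, 1) meets the vertical line through (2, 0) with direction (0, 1) at t = 2, i.e. at the point (2, 2).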

Pathfinding Navigation Mesh : Wall Collision Avoidance
thecheeselover posted a blog entry in 3D, AI, procedural generation and black jack
Subscribe to our subreddit to get all the updates from the team!

Introduction

In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate onto a procedurally generated terrain. To do so, only the A* and string pulling algorithms were used until our last agile sprint. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe how I achieved a simple way for agents to not walk into walls.

Configuration

- A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
- Navigation cells and their: polygonal edges, connections (other cells), shared edges (the line intersecting between two connected cells), centroids and normals.
- An A* and string pulled (not tested without string pulling) generated path consisting of waypoints on the navigation mesh.

The Algorithm

The agent is the pink low-poly humanoid and the final destination is the flag.

The ideal algorithm (yet unoptimized) would be to cast an oriented rectangle between each consecutive waypoint, where its width is two times the radius. Think of the agent's center position being in the middle of the width. Anyway, this algorithm is too complicated, too long to develop for our game, too big for nothing, and so I thought about another algorithm, which has its drawbacks actually. However, it's more suited for our game. Psss, check this article if you haven't seen it yet.

The algorithm is the following:

1. For each waypoint, pick the current one and the next one until the next one is the last.
2. Iterate over the current navigation cell's edges, which is defined by the agent's 3D position. To do that, you need a spatial and optimized way to determine the closest cell to a 3D point. Our way to do it is to first have an octree to partition the navigation mesh. After that, get all the cells that are close to the point plus an extra buffer. To find the cell that is the closest to the point, for each picked cell, cast a projection of the position onto each one of them. This can be done using their centroids and normals. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the smallest one.
3. Convert each edge to a 2D line by discarding the Y component (UP vector).
4. For each side, left and right, which are defined by the agent's position and direction towards the next waypoint, cast a 2D line that starts from the agent's position, that goes towards one of the two perpendicular directions related to the direction to the next waypoint and that has a length of the defined radius.
5. If there's an intersection on a connection and it's on the shared part of the connection, then continue with the connected cell's edges.
6. If there are any intersections other than #5, create a new waypoint before the next waypoint. This new one's position is defined by the intersection's position translated by a length of two times the radius and projected towards the agent's current direction as a 2D line. The same translation is applied to the next waypoint.
7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going towards the same direction as the agent and of the same length as the distance between the current and next waypoint.
8. Check for #5.
9. If there is an intersection on a connection and it's on the unshared part of the connection, then do #6 (no if).
10. If there's an intersection on a simple edge, then translate the next waypoint as described in #6.

Here's a video of the algorithm in action : 
Simple organic and brute force dungeon generation
thecheeselover posted a blog entry in 3D, AI, procedural generation and black jack
Last month, I made a pretty simple dungeon generator algorithm. It's an organic brute force algorithm, in the sense that the rooms and corridors aren't carved into a grid and that it stops when an area doesn't fit in the graph. Here's the algorithm : Start from the center (0, 0) in 2D Generate a room Choose a side to extend to Attach a corridor to that side If it doesn't fit, stop the generation Attach a room at the end of the corridor If it doesn't fit, stop the generation Repeat steps 3 to 7 until enough rooms are generated It allowed us to test out our pathfinding algorithm (A* & string pulling). Here are some pictures of the output in 2D and 3D :
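The loop described above can be sketched roughly like this. All names are hypothetical and the room/corridor fitting test is abstracted into a predicate, since the post doesn't show how areas are tested against the graph.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

// Sketch of the organic brute-force generator: grow a chain of
// room -> corridor -> room from the origin and stop as soon as a piece
// no longer fits. Rooms/corridors are reduced to grid points for brevity.
public class DungeonSketch {
    public static List<int[]> generate(Random rng, int maxRooms, Predicate<int[]> fits) {
        List<int[]> placed = new ArrayList<>();
        int x = 0, y = 0;                          // 1. start from the center (0, 0)
        int[] room = {x, y};                       // 2. generate a room
        if (!fits.test(room)) return placed;
        placed.add(room);
        while (placed.size() < maxRooms) {
            int side = rng.nextInt(4);             // 3. choose a side to extend to
            int dx = (side == 0 ? 1 : side == 1 ? -1 : 0);
            int dy = (side == 2 ? 1 : side == 3 ? -1 : 0);
            int[] corridor = {x + dx, y + dy};     // 4. attach a corridor to that side
            if (!fits.test(corridor)) break;       // 5. if it doesn't fit, stop
            int[] next = {x + 2 * dx, y + 2 * dy}; // 6. attach a room at the corridor's end
            if (!fits.test(next)) break;           // 7. if it doesn't fit, stop
            placed.add(corridor);
            placed.add(next);
            x = next[0]; y = next[1];              // 8. repeat until enough rooms
        }
        return placed;
    }
}
```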
A friend and I are making a roguelite retro procedural game. As in many procedural roguelite games, it will have rooms to complete but also the notion of zones. The difference between a zone and a room is that a zone is open air whilst a room is not. Rooms are connected mainly by corridors, while zones are mostly naturally connected / separated by rivers and mountains. Because we want levels with zones to be generated, we need to tame the beast that is procedural generation. How can we generate each zone itself and also clearly divide them? Until now, I had only been using the Java noise library called Joise, which is the Java community port of JTippetts' Accidental Noise Library. I needed the zone data to be generated with basis function modules, i.e. Perlin noise, but in contrast I needed a more structured approach for the zone division. The Joise library does have a cell noise module that is a Worley noise. It looks like this, depending on its 4 parameters (1, 0, 0, 0) : Using math modules, I was able to morph that noise into something that looks like a Voronoi diagram. Here's what a Voronoi diagram should look like (never mind the colors; the important parts are the cell edges and the cell centers) : A more aesthetic version : The Worley noise that I had morphed into a Voronoi-like diagram did not include the cell centers, did not include metadata about the edges, and was not deterministic enough, in the sense that sometimes the edges would be around 60 pixels wide. I then searched for a Java Voronoi library and found one called Voronoi-Java. With this, I was able to generate simple Voronoi diagrams : Relaxed : 1 iteration Relaxed : 2 iterations The relaxation concept is actually Lloyd's algorithm, fortunately included within the library. Now how can I make that diagram respect my level generation mechanics?
Well, if we can limit an approximate number of cells within a certain resolution, that would be a good start. The biggest problem here is that the relaxation reduces the number of cells within a restricted resolution (contrary to the global resolution), and so we need to keep that in mind. To do that, I define a constant for the total number of sites / cells. Here's my code : private Voronoi createVoronoiDiagram(int resolution) { Random random = new Random(); Stream<Point> gen = Stream.generate(() -> new Point(random.nextDouble() * resolution, random.nextDouble() * resolution)); return new Voronoi(gen.limit(VORONOI_SITE_COUNT).collect(Collectors.toList())).relax().relax().relax(); } A brief pseudocode of the algorithm would be the following : Create the Voronoi diagram Find the centermost zone Select X number of zones while there are zones that respect the selection criteria Draw the border map Draw the smoothed border map The selection criteria are applied to each edge that is connected to only one selected zone. Here are the selection criteria : Is connected to a closed zone, i.e. one whose edges all form a polygon Does have two vertices Is inclusively within the resolution's boundaries Here's the result of a drawn border map! In this graph, I have a restricted number of cells that follow multiple criteria, and I know each edge and each cell's center point. To draw the smoothed border map, the following actions must be taken : emit colors from already drawn pixels and then apply a Gaussian blur. Personally, I use the JH Labs Java Image Filters library for the Gaussian blur. With color emission only : With color emission and a Gaussian blur : You may ask yourself why we have created a smoothed border map. There's a simple reason for this, which is that we want the borders to be gradual instead of abrupt. Let's say we want rivers or streams between zones.
This gradual border will allow us to progressively increase the depth of the river and make it look more natural in contrast with the adjacent zones. All that's left is to flood fill each selected cell and apply that to a zone map.
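The Lloyd relaxation used above is conceptually tiny: assign every sample point to its nearest site, move each site to the centroid of its assigned points, repeat. A discrete sketch (grid sampling instead of exact cell centroids, hypothetical names, not the Voronoi-Java implementation):

```java
// One Lloyd relaxation step, approximated on a sample grid: each site moves
// to the centroid of the grid points closest to it, which evens out cell
// sizes - exactly why the relaxed diagrams above look more regular.
public class LloydStep {
    public static double[][] relax(double[][] sites, int resolution) {
        double[] sx = new double[sites.length], sy = new double[sites.length];
        int[] count = new int[sites.length];
        for (int y = 0; y < resolution; y++) {
            for (int x = 0; x < resolution; x++) {
                // Assign this sample point to its nearest site.
                int best = 0; double bestD = Double.POSITIVE_INFINITY;
                for (int i = 0; i < sites.length; i++) {
                    double dx = x - sites[i][0], dy = y - sites[i][1];
                    double d = dx * dx + dy * dy;
                    if (d < bestD) { bestD = d; best = i; }
                }
                sx[best] += x; sy[best] += y; count[best]++;
            }
        }
        // Move each site to the centroid of its assigned samples.
        double[][] moved = new double[sites.length][2];
        for (int i = 0; i < sites.length; i++) {
            moved[i][0] = count[i] > 0 ? sx[i] / count[i] : sites[i][0];
            moved[i][1] = count[i] > 0 ? sy[i] / count[i] : sites[i][1];
        }
        return moved;
    }
}
```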

Haha! Take a look at what I uncovered the other day while digging through my books... It even came with a good old floppy disk! (I couldn't even use it if I wanted to now.) This book was my very first on 3D graphics programming, before 3D graphics cards were even a thing (for the mainstream masses, anyway). Published in 1994, it's a book that served as a really good primer to understanding 3D geometry and programming, even if I never bothered with the later chapters on curved geometry. All the code samples were in C (not even C++) and there was no mention of things such as texture mapping (although it did cover shadows, reflections and refraction!). It took you right from the beginning: from the equation of a straight line, translations, matrices etc. to projections (the viewing pyramid) and clipping algorithms. Great stuff! I still find it useful now as I reread the introductory chapters and refresh myself on a few basics. My other go-to book is "Real-Time Collision Detection" by Christer Ericson - an absolute diamond of a book! Do you guys and girls have any books that you rely on or hold in high regard for your game development? It would be interesting to know if anyone has ever seen or read the Georg Glaeser book above. Anyway, I'll soon have another update on The Berg, but it's bed time for me now. Night all.

Why doesn't my flood fill work with all the sprites?
Naitsirc posted a topic in For Beginners's Forum
I wrote a flood fill algorithm in Java to get the bounds of each sprite in a spritesheet. The problem is that it works fine with some sprites but not with the rest of them. For example: https://i.gyazo.com/cdbdb0ce40a46445ca8e7b62176ab442.png I put a red square around some of the errors. This is the flood fill method:

private static Rectangle floodFill(int[] pixels, int x0, int y0, int width, int height) {
    Rectangle frame = new Rectangle(x0, y0, 1, 1);
    Deque<Point> queue = new LinkedList<>();
    queue.addLast(new Point(x0, y0));
    while(!queue.isEmpty()) {
        Point p = queue.poll();
        final int x = p.x;
        final int y = p.y;
        if(x < 0 || x >= width || y < 0 || y >= height) {
            continue;
        }
        // Is not part of a sprite, or has been visited before
        if(pixels[x+y*width] == 0) {
            continue;
        }
        pixels[x+y*width] = 0; // Mark as visited
        // Update bounds
        if(x < frame.x) {
            frame.x = x;
        } else if(x > frame.x + frame.width) {
            frame.width++;
        }
        if(y < frame.y) {
            frame.y = y;
        } else if(y > frame.y + frame.height) {
            frame.height++;
        }
        queue.add(new Point(x-1, y));
        queue.add(new Point(x-1, y-1));
        queue.add(new Point(x-1, y+1));
        queue.add(new Point(x+1, y));
        queue.add(new Point(x+1, y-1));
        queue.add(new Point(x+1, y+1));
        queue.add(new Point(x, y-1));
        queue.add(new Point(x, y+1));
    }
    return frame;
}

The flood fill method is called from here:

private static List<Rectangle> split(BufferedImage image) {
    List<Rectangle> sprites = new ArrayList<>();
    int[] pixels = ((DataBufferInt)image.getRaster().getDataBuffer()).getData().clone();
    final int width = image.getWidth();
    final int height = image.getHeight();
    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            if(pixels[x+y*width] != 0) {
                Rectangle r = floodFill(pixels, x, y, width, height);
                sprites.add(r);
            }
        }
    }
    return sprites;
}

What it does is visit each pixel, and if it is not equal to zero (the background color), then it is a sprite, and the flood fill method gets its bounds.
I have searched for a solution and I've tried many different implementations, but I couldn't find one that works. Can someone help me? I think I am missing something but I can't see it. Thanks!
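One thing worth checking in code like the above is the bounds update: growing a Rectangle one pixel at a time is fragile once the fill visits pixels in an arbitrary order, and moving frame.x left without also growing frame.width silently shrinks the right edge. A more robust pattern, shown here only as a sketch, is to track min/max coordinates and build the rectangle at the end:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: track min/max extents during a flood fill instead of mutating a
// Rectangle incrementally. The fill itself mirrors the 8-connected queue
// approach from the question.
public class BoundsFill {
    // Returns {minX, minY, maxX, maxY} of the connected non-zero region at (x0, y0).
    public static int[] bounds(int[] pixels, int x0, int y0, int width, int height) {
        int minX = x0, minY = y0, maxX = x0, maxY = y0;
        Deque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{x0, y0});
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            int x = p[0], y = p[1];
            if (x < 0 || x >= width || y < 0 || y >= height) continue;
            if (pixels[x + y * width] == 0) continue; // background or visited
            pixels[x + y * width] = 0;                // mark as visited
            minX = Math.min(minX, x); maxX = Math.max(maxX, x);
            minY = Math.min(minY, y); maxY = Math.max(maxY, y);
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    if (dx != 0 || dy != 0) queue.add(new int[]{x + dx, y + dy});
        }
        return new int[]{minX, minY, maxX, maxY};
    }
}
```

From the extents, the sprite rectangle is simply (minX, minY, maxX - minX + 1, maxY - minY + 1).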
How to do projective Texturing and save the piece of projected image into the mesh's texture?
multiappple posted a topic in Graphics and GPU Programming
Hi, guys. I need to project a picture from a projector (maybe a camera) onto some meshes and save the result into the mesh's texture according to the mesh's unfolded UVs. It's just like a light map, which encodes the lighting info into a texture, except it would store the projection info instead. The following picture is an example (but it just projects, without writing into a texture). I noticed Blender actually has a function that allows you to draw a texture onto a mesh, but I have no idea how to save those projected pixels into the mesh's texture. I think maybe I could implement this if I had a better understanding of how light maps are produced. Any advice or materials that can help me out? (Any idea, any platform, or reference?)
Algorithm Reverse Engineering .RAW files from PS2 game
Adeilton Alves posted a topic in General and Gameplay Programming
Hello everyone, I'm new here and sorry if this isn't the right place to ask, but I asked in a few forums around the internet and no one has helped with it yet. I have been trying to mod this game for years, but I'm still stuck with the raw files from RACJIN games. Raw Files [ Mod edit: Removed ] I would like to identify the compression algorithm used to compress these files so that they can be decompressed and analyzed. Game: Naruto Uzumaki Chronicles 2... A.K.A. Naruto Konoha Spirits in Japan.
I'm looking for an algorithm that I can use to remove vertices that are within a margin of error of each other in a triangular mesh - pretty much like Blender's "Remove Doubles" feature, if anyone is familiar with it. I think the issue isn't just removing the doubles, but also how I would handle the face indices once I remove the "duplicate" vertices.
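A common approach (similar in spirit to what Blender does, though this is only a sketch) is to quantize each vertex to a grid of cell size epsilon, merge vertices that land in the same cell, and remap the index buffer through an old-to-new table - which is exactly how the face indices stay valid. All names below are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of "remove doubles" by grid quantization: vertices whose coordinates
// round to the same epsilon-sized cell are merged, and the index buffer is
// remapped so faces keep referring to valid vertices.
public class WeldVertices {
    public static class Result {
        public final List<double[]> vertices = new ArrayList<>();
        public final int[] indices;
        Result(int n) { indices = new int[n]; }
    }

    public static Result weld(double[][] verts, int[] indices, double epsilon) {
        Map<String, Integer> cellToNew = new HashMap<>();
        int[] oldToNew = new int[verts.length];
        Result out = new Result(indices.length);
        for (int i = 0; i < verts.length; i++) {
            // Quantize to epsilon-sized cells; vertices in the same cell merge.
            String key = Math.round(verts[i][0] / epsilon) + ","
                       + Math.round(verts[i][1] / epsilon) + ","
                       + Math.round(verts[i][2] / epsilon);
            Integer idx = cellToNew.get(key);
            if (idx == null) {
                idx = out.vertices.size();
                out.vertices.add(verts[i]);
                cellToNew.put(key, idx);
            }
            oldToNew[i] = idx;
        }
        // Remap faces; a production version would also drop triangles that
        // became degenerate (two or more equal indices after the remap).
        for (int i = 0; i < indices.length; i++) out.indices[i] = oldToNew[indices[i]];
        return out;
    }
}
```

One known limitation of the single-grid variant: two vertices closer than epsilon can still fall into neighboring cells and not merge; checking the neighboring cells as well fixes that.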

3D What's the real tech behind the GPU triangles Clipping
iGrfx posted a topic in Graphics and GPU Programming
I've learned that triangle clipping in the rasterization process is usually done with the Sutherland–Hodgman algorithm. I also found a technique called "guard-band clipping". I'm writing a software rasterizer, so I want to know what techniques GPUs actually use - I want to implement them for study. Thanks! Updated: also, what's the proper algorithm for triangulating the clipped polygon?
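For reference, Sutherland–Hodgman clips a polygon against one plane at a time; the sketch below shows a single clip edge in 2D (the building block that a full clipper repeats for every plane of the view volume). It's a study sketch, not how any particular GPU does it in hardware.

```java
import java.util.ArrayList;
import java.util.List;

// Sutherland-Hodgman, one clip edge at a time: keep inside vertices and
// insert intersection points where edges cross the clip boundary. The
// boundary here is the half-plane x >= xMin, which is enough to show the
// structure; a full clipper repeats this per clip plane.
public class ClipSketch {
    public static List<double[]> clipAgainstXMin(List<double[]> poly, double xMin) {
        List<double[]> out = new ArrayList<>();
        int n = poly.size();
        for (int i = 0; i < n; i++) {
            double[] a = poly.get(i), b = poly.get((i + 1) % n);
            boolean aIn = a[0] >= xMin, bIn = b[0] >= xMin;
            if (aIn) out.add(a);
            if (aIn != bIn) { // edge crosses the boundary: emit the intersection
                double t = (xMin - a[0]) / (b[0] - a[0]);
                out.add(new double[]{xMin, a[1] + t * (b[1] - a[1])});
            }
        }
        return out;
    }
}
```

Note a clipped triangle can gain vertices (up to one extra per clip plane), which is where the triangulation question comes in: the output polygon is convex, so fanning from its first vertex is sufficient.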
Effect: Area Light Shadows Part 1: PCSS
Vilem Otte posted an article in Graphics and GPU Programming
Welcome to the first part of multiple effect articles about soft shadows. In recent days I've been working on area light support in my own game engine, which is critical for one of the game concepts I'd like to eventually do (if time allows me to do so). For each area light, it is crucial to have proper soft shadows with a proper penumbra. For motivation, let's have the following screenshot with 3 area lights of various sizes: Fig. 01 - PCSS variant that allows for perfectly smooth, large-area light shadows Let's start the article with a comparison of the following 2 screenshots - one with shadows and one without: Fig. 02 - Scene from the default viewpoint lit without any shadows (left) and with shadows (right) This is the scene we're going to work with, and for the sake of simplicity, all of the comparison screenshots will be from this exact same viewpoint with 2 different scene configurations. Let's start with the definition of how shadows are created, given a scene and a light we're viewing it under. Shadow umbra will be present at each position where there is no direct visibility between the given position and any existing point on the light. Shadow penumbra will be present at each position where there is visibility of some points on the light, yet not all of them. No shadow is everywhere where there is full direct visibility between each point on the light and the position. Most games tend to simplify: instead of defining a light as an area or volume, it gets defined as an infinitely small point, which gives us a few advantages: For a single point, it is possible to define visibility in a binary way - either in shadow or not in shadow From a single point, a projection of the scene can easily be constructed in such a way that the definition of shadow becomes trivial (either a position is occluded by other objects in the scene from the light's point of view, or it isn't) From here, one can follow into the idea of shadow mapping - which is the basic technique that all the others used here build on.
Standard Shadow Mapping Trivial, yet it should be mentioned here. inline float ShadowMap(Texture2D<float2> shadowMap, SamplerState shadowSamplerState, float3 coord) { return shadowMap.SampleLevel(shadowSamplerState, coord.xy, 0.0f).x < coord.z ? 0.0f : 1.0f; } Fig. 03 - code snippet for standard shadow mapping, where the depth map (the stored 'distance' from the light's point of view) is compared against the calculated 'distance' between the point we're computing right now and the given light position. The word 'distance' may either mean actual distance, or more likely just the value on the z-axis in the light's point-of-view basis. This is well known to everyone here, giving us the basic results we all know, like: Fig. 04 - Standard Shadow Mapping This can be simply explained with the following image: Fig. 05 - Each rendered pixel calculates whether its 'depth' from the light is greater than what is written in the 'depth' map from the light (represented as a yellow dot); white lines represent the computation for each pixel. Percentage-Closer Filtering (PCF) To make shadows more visually appealing, adding soft edges is a must. This is done by simply performing NxN tests with offsets. For the sake of improved visual quality I've used shadow mapping with a bilinear filter (which requires resolving 4 samples), along with 5x5 PCF filtering: Fig. 06 - Percentage-closer filtering (PCF) results in nice soft-edged shadows; sadly the shadow is uniformly soft everywhere Clearly, none of the above techniques does any penumbra/umbra calculation, and therefore they're not really useful for area lights.
For the sake of completeness, I'm adding basic PCF source code (for the sake of optimization, feel free to improve it for your uses): inline float ShadowMapPCF(Texture2D<float2> tex, SamplerState state, float3 projCoord, float resolution, float pixelSize, int filterSize) { float shadow = 0.0f; float2 grad = frac(projCoord.xy * resolution + 0.5f); for (int i = -filterSize; i <= filterSize; i++) { for (int j = -filterSize; j <= filterSize; j++) { float4 tmp = tex.Gather(state, projCoord.xy + float2(i, j) * float2(pixelSize, pixelSize)); tmp.x = tmp.x < projCoord.z ? 0.0f : 1.0f; tmp.y = tmp.y < projCoord.z ? 0.0f : 1.0f; tmp.z = tmp.z < projCoord.z ? 0.0f : 1.0f; tmp.w = tmp.w < projCoord.z ? 0.0f : 1.0f; shadow += lerp(lerp(tmp.w, tmp.z, grad.x), lerp(tmp.x, tmp.y, grad.x), grad.y); } } return shadow / (float)((2 * filterSize + 1) * (2 * filterSize + 1)); } Fig. 07 - PCF filtering source code Representing this with an image: Fig. 08 - Image representing PCF; specifically, the pixel with a straight line and a star at the end also calculates the shadow in neighboring pixels (e.g. performing additional samples). The resulting shadow is then a weighted sum of the results of all the samples for a given pixel. While the idea is quite basic, it is clear that using larger kernels would end up in slow computation. There are ways to perform separable filtering of shadow maps using a different approach to resolve where the shadow is (Variance Shadow Mapping, for example). They do introduce additional problems though. Percentage-Closer Soft Shadows To understand the problem in both previous techniques, let's replace the point light with an area light in our sketch image. Fig. 09 - Using an area light introduces penumbra and umbra. The size of the penumbra is dependent on multiple factors - the distance between receiver and light, the distance between blocker and light, and the light size (shape).
To calculate plausible shadows like in the schematic image, we need to calculate the distance between receiver and blocker, and the distance between receiver and light. PCSS is a two-pass algorithm that calculates the average blocker distance as the first step - using this value to calculate the penumbra size, and then performing some kind of filtering (often PCF, or jittered PCF, for example). In short, PCSS computation will look similar to this: float ShadowMapPCSS(...) { float averageBlockerDistance = PCSS_BlockerDistance(...); // If there isn't any average blocker distance - it means that there is no blocker at all if (averageBlockerDistance < 1.0) { return 1.0f; } else { float penumbraSize = estimatePenumbraSize(averageBlockerDistance, ...) float shadow = ShadowPCF(..., penumbraSize); return shadow; } } Fig. 10 - Pseudocode of PCSS shadow mapping The first problem is to determine the correct average blocker calculation - and as we want to limit the search size for the average blocker, we simply pass in an additional parameter that determines the search size. The actual average blocker is calculated by searching the shadow map for depth values smaller than the receiver's. In my case I used the following estimation of blocker distance: // Input parameters are: // tex - Input shadow depth map // state - Sampler state for shadow depth map // projCoord - holds projection UV coordinates, and depth for receiver (~further compared against shadow depth map) // searchUV - input size for blocker search // rotationTrig - input parameter for random rotation of kernel samples inline float2 PCSS_BlockerDistance(Texture2D<float2> tex, SamplerState state, float3 projCoord, float searchUV, float2 rotationTrig) { // Perform N samples with predefined offset and random rotation, scale by input search size int blockers = 0; float avgBlocker = 0.0f; for (int i = 0; i < (int)PCSS_SampleCount; i++) { // Calculate sample offset (technically anything can be used here - standard NxN kernel, random samples with scale, etc.)
float2 offset = PCSS_Samples[i] * searchUV; offset = PCSS_Rotate(offset, rotationTrig); // Compare given sample depth with receiver depth; if it puts the receiver into shadow, this sample is a blocker float z = tex.SampleLevel(state, projCoord.xy + offset, 0.0f).x; if (z < projCoord.z) { blockers++; avgBlocker += z; } } // Calculate average blocker depth avgBlocker /= blockers; // To solve cases where there are no blockers - we output 2 values - average blocker depth and no. of blockers return float2(avgBlocker, (float)blockers); } Fig. 11 - Average blocker estimation for PCSS shadow mapping For the penumbra size calculation - first - we assume that blocker and receiver are planar and parallel. The actual penumbra size is then based on similar triangles, determined as: penumbraSize = lightSize * (receiverDepth - averageBlockerDepth) / averageBlockerDepth This size is then used as the input kernel size for the PCF (or similar) filter. In my case I again used rotated kernel samples. Note: Depending on the sample positioning, one can achieve different area light shapes. The result gives quite correct shadows, with the downside of requiring a lot of processing power for noiseless shadows (a lot of samples) and large kernel sizes (which also require a large blocker search size). Generally this is a very good technique for small to mid-sized area lights, yet large area lights will cause problems. Fig. 12 - PCSS shadow mapping in practice As the article is already quite large, I will describe the 2 other techniques available in my current game engine build (the first is a variant of PCSS that utilizes mip maps and allows for a slightly larger light size without impacting performance that much, and the second is a sort of back-projection technique) in another article, which may eventually come out.
Anyway, allow me to at least show a short video of the first technique in action: Note: This article was originally published as a blog entry right here at GameDev.net, and has been reproduced here as a featured article with the kind permission of the author. You might also be interested in our recently featured article on Contact-hardening Soft Shadows Made Fast.
Hi guys, I'm new here, and I really hope my question won't sound utterly stupid. I'd like to know whether it's better to use a PRNG or a regular RNG if you are trying to program your own video slot machine. Actually, I haven't even clearly understood the difference between the two =D Second question: which developer should I rely on? I'm following this guide; they talk about an RNG, but not which one or where to find it. Thank you in advance :)
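On the terminology: in software you almost always get a PRNG, which is deterministic from a seed; "true" RNGs draw from hardware or OS entropy. For anything with real money involved you'd want a cryptographically strong source (and note that real slot machines are typically subject to certification rules beyond the RNG choice). In Java, for instance, the difference looks like this:

```java
import java.security.SecureRandom;
import java.util.Random;

// java.util.Random is a seeded PRNG: the same seed replays the same sequence,
// which is great for testing but predictable. SecureRandom is seeded from
// OS entropy and is the usual choice when outcomes must not be predictable.
public class ReelSpin {
    public static int spin(Random rng, int symbolsPerReel) {
        return rng.nextInt(symbolsPerReel); // uniform in [0, symbolsPerReel)
    }

    public static void main(String[] args) {
        System.out.println(spin(new Random(42), 10));     // reproducible
        System.out.println(spin(new SecureRandom(), 10)); // unpredictable
    }
}
```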

First a brief description of what we're trying to achieve. We're basically trying for a single-shard MMO that will accommodate all players on a giant planet, say 2000 km radius. In order to achieve this we will need... well... a giant planet. This presents a problem, since the classical way to store worlds is triangles on disk, and a world this big simply wouldn't fit on a player's local computer. Therefore we won't generate world geometry on disk; we will instead define the majority of the terrain with functions and realize them into a voxel engine to get our meshes, using a classic marching algorithm... well, actually a not-so-classic marching algorithm, since it uses prisms instead of cubes. Our voxel engine will necessarily make use of octrees to allow for the many levels of LOD and the relatively low memory usage we need to achieve this. The work so far... So below we have some test code that generates a sphere using the initial code of our Dragon Lands game engine...

void CDLClient::InitTest2()
{
   // Create virtual heap
   m_pHeap = new(MDL_VHEAP_MAX, MDL_VHEAP_INIT, MDL_VHEAP_HASH_MAX) CDLVHeap();
   CDLVHeap *pHeap = m_pHeap.Heap();

   // Create the universe
   m_pUniverse = new(pHeap) CDLUniverseObject(pHeap);

   // Create the graphics interface
   CDLDXWorldInterface *pInterface = new(pHeap) CDLDXWorldInterface(this);

   // Create spherical world
   double fRad = 4.0;
   CDLValuatorSphere *pNV = new(pHeap) CDLValuatorSphere(fRad);
   CDLSphereObjectView *pSO = new(pHeap) CDLSphereObjectView(pHeap, fRad, 1.1, 10, 7, pNV);
   //pSO->SetViewType(EDLViewType::Voxels);
   pSO->SetGraphicsInterface(pInterface);

   // Create an astral reference from the universe to the world and attach it to the universe
   CDLReferenceAstral *pRef = new(pHeap) CDLReferenceAstral(m_pUniverse(), pSO);
   m_pUniverse->PushReference(pRef);

   // Create the camera
   m_pCamera = new(pHeap) CDLCameraObject(pHeap, FDL_PI/4.0, this->GetWidth(), this->GetHeight());
   m_pCamera->SetGraphicsInterface(pInterface);

   // Create a world-tracking reference from the universe to the camera
   double fMinDist = 0.0;
   double fMaxDist = 32.0;
   m_pBoom = new(pHeap) CDLReferenceFollow(m_pUniverse(), m_pCamera(), pSO, 16.0, fMinDist, fMaxDist);
   m_pUniverse->PushReference(m_pBoom());

   // Set zoom speed in the client
   this->SetZoom(fMinDist, fMaxDist, 3.0);

   // Create the god object (build point for LOD calculations)
   m_pGod = new(pHeap) CDLGodObject(pHeap);

   // Create a reference for the god object and attach it to the universe
   CDLReferenceDirected *pGodRef = new(pHeap) CDLReferenceDirected(m_pUniverse(), m_pGod());
   pGodRef->SetPosition(CDLDblPoint(0.0, 0.0, 6.0), CDLDblVector(0.0, 0.0, 1.0), CDLDblVector(0.0, 1.0, 0.0));
   m_pCamera->PushReference(pGodRef);

   // Set the main camera and god object for the universe
   m_pUniverse->SetMainCamera(m_pCamera());
   m_pUniverse->SetMainGod(m_pGod());

   // Load and compile the vertex shader
   CDLUString clVShaderName = L"VS_DLDX_Test.hlsl";
   m_pVertexShader = new(pHeap) CDLDXShaderVertexPC(this, clVShaderName, false, 0, 1);

   // Attach the camera to the vertex shader
   m_pVertexShader->UseConstantBuffer(0, static_cast<CDLDXConstantBuffer *>(m_pCamera->GetViewData()));

   // Create the pixel shader
   CDLUString clPShaderName = L"PS_DLDX_Test.hlsl";
   m_pPixelShader = new(pHeap) CDLDXShaderPixelGeneral(this, clPShaderName, false, 0, 0);

   // Create a rasterizer state and set to wireframe
   m_pRasterizeState = new(pHeap) CDLDXRasterizerState(this);
   m_pRasterizeState->ModifyState().FillMode = D3D11_FILL_WIREFRAME;

   m_pUniverse()->InitFromMainCamera();
   m_pUniverse()->WorldUpdateFromMainGod();
}

And here's what it generates: OK, so it looks pretty basic, but it actually is a lot more complex than it looks... What it looks like is a subdivided icosahedron. This is only partially true. The icosahedron is extruded to form 20 prisms. Then each of those is subdivided into 8 sub-prisms, and its children are subdivided, and so forth. Subdivision only takes place where there is function data (i.e. the function changes sign).
This presents a big problem: how do we know where sign changes exist? Well, we don't, so we have to go down the tree and look for them at the resolution we are currently displaying. Normally that would require us to build the trees all the way down just to look for sign changes, but since this is expensive we don't want to do it, so instead we do a "ghost walk". We build a lightweight tree structure which we use to find where data exists. We then use that structure to build the actual tree where voxels are needed. So I guess the obvious question is: why is this any better than simply subdividing an icosahedron and perturbing the vertexes to get mountains, oceans and the like? The answer is that this system will support full-on 3D geometry. You can have caves, tunnels, overhangs, really whatever you can come up with. It's not just limited to height maps. Here's what our octree (or rather 20 octrees) looks like: OK, it's a bit odd looking, but so far so good. Now we come to the dreaded voxel LOD and... gulp... chunking... So, as would be expected, we need a higher density of voxels close to the camera and a lower density farther away. We implement this by defining chunks. Each chunk is a prism somewhere in the tree(s) and contains sub-prism voxels. Chunks farther away from the camera are larger and those closer to the camera are smaller, the same as with the voxels they contain. In the following picture we set the god object (the object that determines our point for LOD calculation) to a fixed point so we can turn the world around and look at the chunk transitions. However, normally we would attach it to the camera or the player's character so all chunking and LOD would be based on the player's perspective. As you can see, there are transitions from higher-level chunks to lower-level chunks and their corresponding voxels.
In order to do this efficiently we divide the voxels in each chunk into two groups, center voxels and border voxels, and so each chunk can have two meshes that it builds. We do this so that if one chunk is subdivided into smaller chunks, we only need to update the border mesh of neighboring chunks, and so there is less work to be done and less data to send back down to the GPU. Actual voxel-level transitions are done by special algorithms that handle each possible case. The whole transition code was probably the hardest thing to implement, since there are a lot of possible cases. What else... Oh yeah, normals... You can't see it with wireframe, but we do generate normals. We basically walk around mesh vertexes in the standard way and add up the normals of the faces. There are a couple of complexities, however. First we have to account for vertexes along chunk boundaries, and second we wanted some way to support ridge lines, like you might find in a mountain range. So each edge in a mesh can be set to sharp, which causes duplicate vertexes with different normals to be generated. Guess that's it for now. We'll try to post more stuff when we have simplex noise in and can generate mountains and such. Also we have some ideas for trees and houses, but we'll get to that later.
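The "smaller chunks near the god object" rule described above boils down to a subdivide-if-close test applied recursively. A toy version of that decision, with hypothetical names and cubes instead of prisms, might look like this:

```java
// Toy LOD rule: keep subdividing a node while its size is large relative to
// its distance from the LOD focus point ("god object"). Real engines add
// hysteresis and neighbor/transition constraints on top of a rule like this.
public class LodRule {
    // Returns how many times a node of the given size would be subdivided
    // before the size-vs-distance ratio says "stop".
    public static int depthAt(double[] center, double size, double[] focus, int maxDepth) {
        double dx = center[0] - focus[0], dy = center[1] - focus[1], dz = center[2] - focus[2];
        double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
        if (maxDepth == 0 || size < dist) return 0; // far away (or depth cap): stop here
        // Close by: this node would be split; descend one conceptual level.
        return 1 + depthAt(center, size * 0.5, focus, maxDepth - 1);
    }
}
```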

Simply put, my function compares two AABBs for collision detection, and when they are around the same size it works fine, but I noticed that if I greatly decrease the size of one of them (or augment the size of the other), then the collision isn't detected correctly; I have to make them intersect for a collision to be registered, rather than having it register when they are at least in direct contact, which is the intended behavior. Below is my code.

local function DetectCollision(a, b) -- AABB to AABB
    local collisionX = (a.Position.X + a.Size.X) >= b.Position.X and (b.Position.X + b.Size.X) >= a.Position.X
    local collisionY = (a.Position.Y + a.Size.Y) >= b.Position.Y and (b.Position.Y + b.Size.Y) >= a.Position.Y
    local collisionZ = (a.Position.Z + a.Size.Z) >= b.Position.Z and (b.Position.Z + b.Size.Z) >= a.Position.Z
    return collisionX and collisionY and collisionZ
end

EDIT: To be more specific, the issues start to occur when I cut the size of one of the AABBs in half. For instance, if I had two cubes where one's size is 12 on all axes and the other's is 6 on all axes, then the collision will not register. Upon debugging, I noticed that only one of the collision bools becomes false. This seems to depend on which axis the smaller bounding box approaches the bigger one from, so if I moved the smaller AABB away from the bigger one on the y-axis, then collisionY will be false.
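One classic cause of exactly this symptom is a mismatch between how the box is stored and how the test reads it: the test above assumes Position is the minimum corner, but if Position is actually the box's center (as it is in many engines), a min-corner test only appears to work while both boxes are similar sizes. A center/half-extent version of the same test, sketched in Java here just to keep it engine-neutral, looks like this:

```java
// AABB overlap when Position is the box CENTER: compare the distance between
// centers against the sum of half-extents on each axis. Touching faces count
// as contact because the comparison is inclusive.
public class AabbCenter {
    public static boolean intersects(double[] centerA, double[] sizeA,
                                     double[] centerB, double[] sizeB) {
        for (int axis = 0; axis < 3; axis++) {
            double gap = Math.abs(centerA[axis] - centerB[axis]);
            if (gap > (sizeA[axis] + sizeB[axis]) * 0.5) return false;
        }
        return true;
    }
}
```

With the 12-cube and 6-cube from the EDIT, the boxes touch when their centers are (12 + 6) / 2 = 9 apart on an axis, which this version registers as contact.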

Linear interpolation (sometimes called 'lerp' or 'mix') is a really handy function for creative coding, game development and generative art. The function interpolates within the range [start..end] based on a 't' parameter, where 't' is typically within a [0..1] range. For example, divide 'loop time' by 'loop duration' and you get a 't' value between 0.0 and 1.0. Now you can map this 't' value to a new range, such as `lerp(20, 50, t)` to gradually increase a circle's radius, or `lerp(20, 10, t)` to gradually decrease its line thickness. Another example: you can use linear interpolation to smoothly animate from one coordinate to another. Define a start point (x1, y1) and end point (x2, y2), then interpolate the 'x' and 'y' dimensions separately to find the computed point in between. Or use linear interpolation to spring toward a moving target. Each frame, interpolate from the current value to the target value with a small 't' parameter, such as 0.05. It's like saying: walk 5% toward the target each frame. A more advanced example, but built on the same concept, is interpolating from one color (red) to another (blue). To do this, we interpolate the (R, G, B) or (H, S, L) channels of the color individually, just like we would with a 2D or 3D coordinate. Another quick example is to choose a random point along a line segment. There are lots of ways to use linear interpolation, and lots more types of interpolation (cubic, bilinear, etc). These concepts also lead nicely into areas like: curves, splines and parametric equations. Source code for each of these examples is available here: https://gist.github.com/mattdesl/3675c85a72075557dbb6b9e3e04a53d9 About the author: Matt DesLauriers is a creative coder and generative artist based in London. He combines code and emergent systems to make art for the web, print media, and physical installations. 
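The uses described above, in compact form (Java used here for concreteness; the original thread's examples are language-agnostic):

```java
// Linear interpolation and two of the uses described above: remapping a
// normalized t into a new range, and springing toward a target each frame.
public class Lerp {
    public static double lerp(double start, double end, double t) {
        return start + (end - start) * t;
    }

    // "Walk 5% toward the target each frame."
    public static double springStep(double current, double target) {
        return lerp(current, target, 0.05);
    }

    // Animate between two coordinates by interpolating each dimension separately.
    public static double[] lerpPoint(double x1, double y1, double x2, double y2, double t) {
        return new double[]{lerp(x1, x2, t), lerp(y1, y2, t)};
    }
}
```

For example, lerp(20, 50, 0.5) gives 35, halfway through the radius animation from the article.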
Note: This brief introduction to lerp was originally published as a Twitter thread and is republished here with the kind permission of the original author. [Wayback Machine Archive]

Hi! How is XP calculated? For example, to reach level 4 the player needs 200 XP, and to reach level 5 the player needs 450 XP. Is there a formula? Is there a tutorial available online? Any source where I can learn how XP is calculated? Can you give me a simple example from any existing game where the XP increases whether the player wins or loses? Thanks!
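There's no single formula; games typically pick a polynomial or exponential curve and tune it by hand. As it happens, the two data points in the question (level 4 at 200 XP, level 5 at 450 XP) fit a simple shifted quadratic, shown below purely as an illustration of how such a curve is expressed, not as the formula of any particular game:

```java
// One curve that happens to pass through the question's two data points
// (level 4 -> 200 XP, level 5 -> 450 XP): XP(level) = 50 * (level - 2)^2.
// Many games use quadratics or exponentials like this, tuned per level band.
public class XpCurve {
    public static int xpForLevel(int level) {
        int n = level - 2;
        return 50 * n * n;
    }
}
```

The point is the shape: each level costs more than the last, so progress naturally slows at higher levels.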
