
Showing results for tags 'Algorithm'.
Found 158 results

I have programmed an implementation of the Separating Axis Theorem to handle collisions between 2D convex polygons. It is written in Processing and can be viewed on Github here. There are a couple of issues with it that I would like some help in resolving. In the construction of Polygon objects, you specify the width and height of the polygon and the initial rotation offset by which the vertices will be placed around the polygon. If the rotation offset is 0, the first vertex is placed directly to the right of the object. If higher or lower, the first vertex is placed clockwise or counterclockwise, respectively, around the circumference of the object by the rotation amount. The rest of the vertices follow at a consistent offset of TWO_PI / number of vertices. While this places the vertices at the correct angle around the polygon, the problem is that if the rotation is anything other than 0, the width and height of the polygon are no longer the values specified. They are reduced because the vertices are placed around the polygon using the sin and cos functions, which often return values other than 1 or -1. Of course, when the half width and half height are multiplied by a sin or cos value other than 1 or -1, they are reduced. This is my issue. How can I place an arbitrary number of vertices at an arbitrary rotation around the polygon, while maintaining both the intended shape specified by the number of vertices (triangle, hexagon, octagon) and the intended width and height of the polygon as specified by the parameter values in the constructor?
The Polygon code:

class Polygon {
    PVector position;
    PShape shape;
    int w, h, halfW, halfH;
    color c;
    ArrayList<PVector> vertexOffsets;

    Polygon(PVector position, int numVertices, int w, int h, float rotation) {
        this.position = position;
        this.w = w;
        this.h = h;
        this.halfW = w / 2;
        this.halfH = h / 2;
        this.c = color(255);
        vertexOffsets = new ArrayList<PVector>();

        if (numVertices < 3) numVertices = 3;

        shape = createShape();
        shape.beginShape();
        shape.fill(255);
        shape.stroke(255);

        for (int i = 0; i < numVertices; ++i) {
            PVector vertex = new PVector(position.x + cos(rotation) * halfW,
                                         position.y + sin(rotation) * halfH);
            shape.vertex(vertex.x, vertex.y);
            rotation += TWO_PI / numVertices;

            PVector vertexOffset = vertex.sub(position);
            vertexOffsets.add(vertexOffset);
        }

        shape.endShape(CLOSE);
    }

    void move(float x, float y) {
        position.set(x, y);
        for (int i = 0; i < shape.getVertexCount(); ++i) {
            PVector vertexOffset = vertexOffsets.get(i);
            shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
        }
    }

    void rotate(float angle) {
        for (int i = 0; i < shape.getVertexCount(); ++i) {
            PVector vertexOffset = vertexOffsets.get(i);
            vertexOffset.rotate(angle);
            shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
        }
    }

    void setColour(color c) {
        this.c = c;
    }

    void render() {
        shape.setFill(c);
        shape(shape);
    }
}

My other issue is that when two polygons with three vertices each collide, they are not always moved out of collision smoothly by the Minimum Translation Vector returned by the SAT algorithm. The polygon moved out of collision by the MTV does not rest against the other polygon as it should; it instead jumps back a small distance. I find this very strange, as I have been unable to replicate this behaviour when resolving collisions between polygons of other vertex quantities, and I cannot find the flaw in the implementation, though it must be there.
What could be causing this incorrect collision resolution, which from my testing appears to only occur between polygons of three vertices? Any help you can provide on these issues would be greatly appreciated. Thank you.
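One way to keep the requested width and height regardless of the rotation offset is to generate the vertices on a unit circle first, then rescale each axis so the bounding extents match the requested size. A minimal C++ sketch of that idea (names and types are mine, not from the posted Processing code):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Sketch: place numVertices around a unit circle starting at `rotation`,
// then rescale X and Y independently so the polygon's bounding box is
// exactly w by h, whatever the rotation offset is.
std::vector<Vec2> makePolygon(int numVertices, float w, float h, float rotation) {
    const float TWO_PI = 6.28318530718f;
    std::vector<Vec2> pts(numVertices);
    float minX = 1e9f, maxX = -1e9f, minY = 1e9f, maxY = -1e9f;
    for (int i = 0; i < numVertices; ++i) {
        float a = rotation + i * TWO_PI / numVertices;
        pts[i] = { std::cos(a), std::sin(a) };
        minX = std::min(minX, pts[i].x); maxX = std::max(maxX, pts[i].x);
        minY = std::min(minY, pts[i].y); maxY = std::max(maxY, pts[i].y);
    }
    // Recenter and rescale each axis so the extents span exactly w by h.
    for (Vec2& p : pts) {
        p.x = (p.x - (minX + maxX) * 0.5f) * (w / (maxX - minX));
        p.y = (p.y - (minY + maxY) * 0.5f) * (h / (maxY - minY));
    }
    return pts;
}
```

The offsets returned here are relative to the polygon's center, so they would slot into the `vertexOffsets` role from the posted code.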

Algorithm Reverse Engineering .RAW files from PS2 game
Adeilton Alves posted a topic in General and Gameplay Programming
Hello everyone, I'm new here, and sorry if this isn't the right place to ask, but I asked in a few forums around the internet and no one has helped with it yet. I have been trying to mod this game for years, but I'm still stuck on the raw files from RACJIN games. Raw files: [ Mod edit: Removed ] I would like to identify the compression algorithm used to compress these files so that they can be decompressed and analyzed. Game: Naruto Uzumaki Chronicles 2, a.k.a. Naruto Konoha Spirits in Japan.
I'm looking for an algorithm that I can use to remove vertices that are close to each other within a margin of error in a triangular mesh, pretty much like Blender's "Remove Doubles" feature, if anyone is familiar with it. I think the issue isn't just removing the doubles, but also how I would handle the face indices once I remove the "duplicate" vertices.
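For reference, a common way to implement such a pass is to snap vertices to an epsilon-sized grid, keep one representative per cell, and remap the index buffer, dropping triangles that collapse. A rough C++ sketch (all names assumed; note that pure grid snapping can miss pairs straddling a cell boundary, which robust implementations handle by also checking neighboring cells):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };

// Sketch of a "remove doubles" pass: snap each vertex to an epsilon-sized
// grid, keep the first vertex seen in each cell, and remap the triangle
// indices. Triangles that collapse (two indices equal) are dropped.
void weldVertices(std::vector<Vec3>& verts, std::vector<uint32_t>& indices, float eps) {
    std::map<std::tuple<long, long, long>, uint32_t> cells;
    std::vector<uint32_t> remap(verts.size());
    std::vector<Vec3> welded;
    for (size_t i = 0; i < verts.size(); ++i) {
        auto key = std::make_tuple(std::lround(verts[i].x / eps),
                                   std::lround(verts[i].y / eps),
                                   std::lround(verts[i].z / eps));
        auto it = cells.find(key);
        if (it == cells.end()) {
            cells[key] = (uint32_t)welded.size();
            remap[i] = (uint32_t)welded.size();
            welded.push_back(verts[i]);
        } else {
            remap[i] = it->second;   // duplicate: reuse the canonical index
        }
    }
    std::vector<uint32_t> newIndices;
    for (size_t t = 0; t + 2 < indices.size(); t += 3) {
        uint32_t a = remap[indices[t]], b = remap[indices[t + 1]], c = remap[indices[t + 2]];
        if (a != b && b != c && a != c) {  // skip degenerate triangles
            newIndices.push_back(a);
            newIndices.push_back(b);
            newIndices.push_back(c);
        }
    }
    verts = welded;
    indices = newIndices;
}
```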

3D What's the real tech behind the GPU triangles Clipping
iGrfx posted a topic in Graphics and GPU Programming
I've learned that triangle clipping in the rasterization process usually uses the Sutherland–Hodgman algorithm. I also found an approach called "guard-band clipping". I'm writing a software rasterizer, so I want to know which technique GPUs actually use; I want to implement it for study. Thanks! Updated: what's the proper triangulation algorithm for the clipped polygon?
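For reference, Sutherland–Hodgman clips the polygon against one plane at a time. Here is a sketch in C++ of a single pass against the half-plane x >= 0 (a full clipper repeats this per clip plane; guard-band rasterizers typically only need the near plane plus scissoring):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct V2 { float x, y; };

// Sketch of one Sutherland-Hodgman pass: clip a polygon against the
// half-plane x >= 0. For each edge, emit the inside endpoint, and emit
// the intersection point whenever the edge crosses the plane.
std::vector<V2> clipAgainstX0(const std::vector<V2>& poly) {
    std::vector<V2> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        V2 a = poly[i], b = poly[(i + 1) % poly.size()];
        bool aIn = a.x >= 0.0f, bIn = b.x >= 0.0f;
        if (aIn) out.push_back(a);
        if (aIn != bIn) {
            float t = a.x / (a.x - b.x);            // where the edge crosses x = 0
            out.push_back({ 0.0f, a.y + t * (b.y - a.y) });
        }
    }
    return out;
}
```

Each pass can add at most one vertex per clip plane, so the result stays convex and can be triangulated trivially as a fan from the first vertex.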
Effect: Area Light Shadows Part 1: PCSS
Vilem Otte posted an article in Graphics and GPU Programming
Welcome to the first part of multiple effect articles about soft shadows. In recent days I've been working on area light support in my own game engine, which is critical for one of the game concepts I'd like to eventually do (if time allows me to do so). For each area light, it is crucial to have proper soft shadows with proper penumbra. For motivation, let's have the following screenshot with 3 area lights of various sizes:

Fig. 01 - PCSS variant that allows for perfectly smooth, large-area light shadows

Let's start the article by comparing the following 2 screenshots - one with shadows and one without:

Fig. 02 - Scene from default viewpoint lit with light without any shadows (left) and with shadows (right)

This is the scene we're going to work with, and for the sake of simplicity, all of the comparison screenshots will be from this exact same viewpoint with 2 different scene configurations. Let's start with the definition of how shadows are created, given a scene and a light which we're viewing. Shadow umbra will be present at each position where there is no direct visibility between the given position and any existing point on the light. Shadow penumbra will be present at each position where there is visibility of some point on the light, yet not all of them. No shadow is everywhere where there is full direct visibility between each point on the light and the position. Most games tend to simplify: instead of defining a light as an area or volume, it gets defined as an infinitely small point. This gives us a few advantages: For a single point, it is possible to define visibility in a binary way - either in shadow or not in shadow. From a single point, a projection of the scene can easily be constructed in such a way that the definition of shadow becomes trivial (either a position is occluded by other objects in the scene from the light's point of view, or it isn't). From here, one can follow into the idea of shadow mapping - which is the basic technique all the others used here build on.
Standard Shadow Mapping

Trivial, yet it should be mentioned here.

inline float ShadowMap(Texture2D<float2> shadowMap, SamplerState shadowSamplerState, float3 coord)
{
    return shadowMap.SampleLevel(shadowSamplerState, coord.xy, 0.0f).x < coord.z ? 0.0f : 1.0f;
}

Fig. 03 - code snippet for standard shadow mapping, where the depth map (the stored 'distance' from the light's point of view) is compared against the calculated 'distance' between the point we're computing right now and the given light position. The word 'distance' may either mean actual distance, or more likely just the value on the z-axis in the light's view basis.

This is well known to everyone here, giving us the basic results we all know, like:

Fig. 04 - Standard Shadow Mapping

This can be simply explained with the following image:

Fig. 05 - Each rendered pixel calculates whether its 'depth' from the light's point of view is greater than what is written in the 'depth' map from the light's point of view (represented as a yellow dot); the white lines represent the computation for each pixel.

Percentage-Closer Filtering (PCF)

To make shadows more visually appealing, adding soft edges is a must. This is done by simply performing NxN tests with offsets. For the sake of improved visual quality I've used shadow mapping with a bilinear filter (which requires resolving 4 samples), along with 5x5 PCF filtering:

Fig. 06 - Percentage-closer filtering (PCF) results in nice soft-edged shadows; sadly the shadow is uniformly soft everywhere

Clearly, none of the above techniques does any penumbra/umbra calculation, and therefore they're not really useful for area lights.
For the sake of completeness, I'm adding basic PCF source code (for the sake of optimization, feel free to improve it for your uses):

inline float ShadowMapPCF(Texture2D<float2> tex, SamplerState state, float3 projCoord, float resolution, float pixelSize, int filterSize)
{
    float shadow = 0.0f;
    float2 grad = frac(projCoord.xy * resolution + 0.5f);

    for (int i = -filterSize; i <= filterSize; i++)
    {
        for (int j = -filterSize; j <= filterSize; j++)
        {
            float4 tmp = tex.Gather(state, projCoord.xy + float2(i, j) * float2(pixelSize, pixelSize));
            tmp.x = tmp.x < projCoord.z ? 0.0f : 1.0f;
            tmp.y = tmp.y < projCoord.z ? 0.0f : 1.0f;
            tmp.z = tmp.z < projCoord.z ? 0.0f : 1.0f;
            tmp.w = tmp.w < projCoord.z ? 0.0f : 1.0f;
            shadow += lerp(lerp(tmp.w, tmp.z, grad.x), lerp(tmp.x, tmp.y, grad.x), grad.y);
        }
    }

    return shadow / (float)((2 * filterSize + 1) * (2 * filterSize + 1));
}

Fig. 07 - PCF filtering source code

Representing this with an image:

Fig. 08 - Image representing PCF: the pixel with the straight line and star at the end also calculates shadow in neighboring pixels (i.e. it performs additional samples). The resulting shadow is then a weighted sum of the results of all the samples for a given pixel.

While the idea is quite basic, it is clear that using larger kernels would end up in slow computation. There are ways to perform separable filtering of shadow maps using a different approach to resolving where the shadow is (Variance Shadow Mapping, for example). They do introduce additional problems, though.

Percentage-Closer Soft Shadows

To understand the problem with both previous techniques, let's replace the point light with an area light in our sketch image.

Fig. 09 - Using an area light introduces penumbra and umbra. The size of the penumbra depends on multiple factors - the distance between receiver and light, the distance between blocker and light, and the light size (shape).
To calculate plausible shadows like in the schematic image, we need to calculate the distance between receiver and blocker, and the distance between receiver and light. PCSS is a two-pass algorithm: it calculates the average blocker distance as the first step, uses this value to calculate the penumbra size, and then performs some kind of filtering (often PCF, or jittered PCF, for example). In short, the PCSS computation will look similar to this:

float ShadowMapPCSS(...)
{
    float averageBlockerDistance = PCSS_BlockerDistance(...);

    // If there isn't any average blocker distance - it means that there is no blocker at all
    if (averageBlockerDistance < 1.0)
    {
        return 1.0f;
    }
    else
    {
        float penumbraSize = estimatePenumbraSize(averageBlockerDistance, ...);
        float shadow = ShadowPCF(..., penumbraSize);
        return shadow;
    }
}

Fig. 10 - Pseudocode of PCSS shadow mapping

The first problem is determining a correct average blocker calculation - and as we want to limit the search size for the average blocker, we simply pass in an additional parameter that determines the search size. The actual average blocker is calculated by searching the shadow map for depth values smaller than that of the receiver. In my case I used the following estimation of blocker distance:

// Input parameters are:
// tex - Input shadow depth map
// state - Sampler state for shadow depth map
// projCoord - holds projection UV coordinates, and depth for receiver (~further compared against shadow depth map)
// searchUV - input size for blocker search
// rotationTrig - input parameter for random rotation of kernel samples
inline float2 PCSS_BlockerDistance(Texture2D<float2> tex, SamplerState state, float3 projCoord, float searchUV, float2 rotationTrig)
{
    // Perform N samples with predefined offset and random rotation, scale by input search size
    int blockers = 0;
    float avgBlocker = 0.0f;
    for (int i = 0; i < (int)PCSS_SampleCount; i++)
    {
        // Calculate sample offset (technically anything can be used here - standard NxN kernel, random samples with scale, etc.)
        float2 offset = PCSS_Samples[i] * searchUV;
        offset = PCSS_Rotate(offset, rotationTrig);

        // Compare given sample depth with receiver depth; if it puts the receiver into shadow, this sample is a blocker
        float z = tex.SampleLevel(state, projCoord.xy + offset, 0.0f).x;
        if (z < projCoord.z)
        {
            blockers++;
            avgBlocker += z;
        }
    }

    // Calculate average blocker depth
    avgBlocker /= blockers;

    // To solve cases where there are no blockers - we output 2 values - average blocker depth and no. of blockers
    return float2(avgBlocker, (float)blockers);
}

Fig. 11 - Average blocker estimation for PCSS shadow mapping

For the penumbra size calculation, we first assume that blocker and receiver are planar and parallel. The actual penumbra size is then based on similar triangles, determined as:

penumbraSize = lightSize * (receiverDepth - averageBlockerDepth) / averageBlockerDepth

This size is then used as the input kernel size for the PCF (or similar) filter. In my case I again used rotated kernel samples. Note: depending on the sample positioning, one can achieve different area light shapes. The result gives quite correct shadows, with the downside of requiring a lot of processing power for noiseless shadows (a lot of samples) and large kernel sizes (which also require a large blocker search size). Generally this is a very good technique for small to mid-sized area lights, yet large area lights will cause problems.

Fig. 12 - PCSS shadow mapping in practice

As the article is already quite large, and as the 2 other techniques in my current game engine build (the first a variant of PCSS that utilizes mip maps and allows for a slightly larger light size without impacting performance that much, and the second a sort of back-projection technique) deserve their own treatment, I will leave those two for another article, which may eventually come out.
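The similar-triangles estimate above can be written as a tiny helper; a sketch in C++, where the function name matches the pseudocode and everything else is assumed:

```cpp
#include <cassert>
#include <cmath>

// The similar-triangles estimate from the article: with a planar blocker
// and receiver, penumbra width scales with light size and the relative
// depth gap between the receiver and the average blocker.
float estimatePenumbraSize(float lightSize, float receiverDepth, float avgBlockerDepth) {
    return lightSize * (receiverDepth - avgBlockerDepth) / avgBlockerDepth;
}
```

Note how the penumbra shrinks to zero as the blocker approaches the receiver, and grows with the light size, matching the schematic in Fig. 09.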
Anyway, allow me to at least show a short video of the first technique in action: Note: This article was originally published as a blog entry right here at GameDev.net, and has been reproduced here as a featured article with the kind permission of the author. You might also be interested in our recently featured article on Contact-hardening Soft Shadows Made Fast.
Hi guys, I'm new here, and I really hope my question won't sound utterly stupid. I'd like to know whether it's better to use a PRNG or a regular RNG if you are trying to program your own video slot machine. Actually, I haven't even clearly understood the difference between the two =D Second question: which developer should I rely on? I'm following this guide; they talk about an RNG, but not which one or where to find it. Thank you in advance :)

First a brief description of what we're trying to achieve. We're basically trying for a single-shard MMO that will accommodate all players on a giant planet, say 2000 km radius. In order to achieve this we will need .... well .... a giant planet. This presents a problem, since the classical way to store worlds is triangles on disk, and a world this big simply wouldn't fit on a player's local computer. Therefore we won't generate world geometry on disk; we will instead define the majority of the terrain with functions and realize them into a voxel engine to get our meshes, using a classic marching algorithm ....... well, actually a not so classic marching algorithm, since it uses prisms instead of cubes. Our voxel engine will necessarily make use of octrees to allow for the many levels of LOD and the relatively low memory usage we need to achieve this.

The work so far...... So below we have some test code that generates a sphere using the initial code of our Dragon Lands game engine.....

void CDLClient::InitTest2()
{
    // Create virtual heap
    m_pHeap = new(MDL_VHEAP_MAX, MDL_VHEAP_INIT, MDL_VHEAP_HASH_MAX) CDLVHeap();
    CDLVHeap *pHeap = m_pHeap.Heap();

    // Create the universe
    m_pUniverse = new(pHeap) CDLUniverseObject(pHeap);

    // Create the graphics interface
    CDLDXWorldInterface *pInterface = new(pHeap) CDLDXWorldInterface(this);

    // Create spherical world
    double fRad = 4.0;
    CDLValuatorSphere *pNV = new(pHeap) CDLValuatorSphere(fRad);
    CDLSphereObjectView *pSO = new(pHeap) CDLSphereObjectView(pHeap, fRad, 1.1, 10, 7, pNV);
    //pSO->SetViewType(EDLViewType::Voxels);
    pSO->SetGraphicsInterface(pInterface);

    // Create an astral reference from the universe to the world and attach it to the universe
    CDLReferenceAstral *pRef = new(pHeap) CDLReferenceAstral(m_pUniverse(), pSO);
    m_pUniverse->PushReference(pRef);

    // Create the camera
    m_pCamera = new(pHeap) CDLCameraObject(pHeap, FDL_PI/4.0, this->GetWidth(), this->GetHeight());
    m_pCamera->SetGraphicsInterface(pInterface);

    // Create a world tracking reference from the universe to the camera
    double fMinDist = 0.0;
    double fMaxDist = 32.0;
    m_pBoom = new(pHeap) CDLReferenceFollow(m_pUniverse(), m_pCamera(), pSO, 16.0, fMinDist, fMaxDist);
    m_pUniverse->PushReference(m_pBoom());

    // Set zoom speed in the client
    this->SetZoom(fMinDist, fMaxDist, 3.0);

    // Create the god object (build point for LOD calculations)
    m_pGod = new(pHeap) CDLGodObject(pHeap);

    // Create a reference for the god object and attach it to the universe
    CDLReferenceDirected *pGodRef = new(pHeap) CDLReferenceDirected(m_pUniverse(), m_pGod());
    pGodRef->SetPosition(CDLDblPoint(0.0, 0.0, 6.0), CDLDblVector(0.0, 0.0, 1.0), CDLDblVector(0.0, 1.0, 0.0));
    m_pCamera->PushReference(pGodRef);

    // Set the main camera and god object for the universe
    m_pUniverse->SetMainCamera(m_pCamera());
    m_pUniverse->SetMainGod(m_pGod());

    // Load and compile the vertex shader
    CDLUString clVShaderName = L"VS_DLDX_Test.hlsl";
    m_pVertexShader = new(pHeap) CDLDXShaderVertexPC(this, clVShaderName, false, 0, 1);

    // Attach the camera to the vertex shader
    m_pVertexShader->UseConstantBuffer(0, static_cast<CDLDXConstantBuffer *>(m_pCamera->GetViewData()));

    // Create the pixel shader
    CDLUString clPShaderName = L"PS_DLDX_Test.hlsl";
    m_pPixelShader = new(pHeap) CDLDXShaderPixelGeneral(this, clPShaderName, false, 0, 0);

    // Create a rasterizer state and set to wireframe
    m_pRasterizeState = new(pHeap) CDLDXRasterizerState(this);
    m_pRasterizeState->ModifyState().FillMode = D3D11_FILL_WIREFRAME;

    m_pUniverse()->InitFromMainCamera();
    m_pUniverse()->WorldUpdateFromMainGod();
}

And here's what it generates:

Ok, so it looks pretty basic, but it actually is a lot more complex than it looks. What it looks like is a subdivided icosahedron. This is only partially true. The icosahedron is extruded to form 20 prisms. Then each of those is subdivided into 8 sub-prisms, and its children are subdivided, and so forth. Subdivision only takes place where there is function data (i.e. the function changes sign).
This presents a big problem. How do we know where sign changes exist? Well, we don't, so we have to go down the tree and look for them at the resolution we are currently displaying. Normally that would require us to build the trees all the way down just to look for sign changes, but since this is expensive we don't want to do it, so instead we do a "ghost walk". We build a lightweight tree structure which we use to find where data exists. We then use that structure to build the actual tree where voxels are needed. So I guess the obvious question is: why is this any better than simply subdividing an icosahedron and perturbing the vertexes to get mountains, oceans and the like? The answer is that this system will support full-on 3D geometry. You can have caves, tunnels, overhangs, really whatever you can come up with. It's not just limited to height maps. Here's what our octree (or rather 20 octrees) looks like: Ok, it's a bit odd looking, but so far so good. Now we come to the dreaded voxel LOD and .....gulp.... chunking. So as would be expected, we need a higher density of voxels close to the camera and a lower density farther away. We implement this by defining chunks. Each chunk is a prism somewhere in the tree(s) and contains sub-prism voxels. Chunks farther away from the camera are larger and those closer to the camera are smaller, the same as with the voxels they contain. In the following picture we set the god object (the object that determines our point for LOD calculation) to a fixed point so we can turn the world around and look at the chunk transitions. However, normally we would attach it to the camera or the player's character, so all chunking and LOD would be based on the player's perspective. As you can see, there are transitions from higher-level chunks to lower-level chunks and their corresponding voxels.
In order to do this efficiently we divide the voxels in each chunk into two groups, center voxels and border voxels, and so each chunk can have two meshes that it builds. We do this so that if one chunk is subdivided into smaller chunks, we only need to update the border mesh of neighboring chunks, so there is less work to be done and less data to send back down to the GPU. Actual voxel-level transitions are done by special algorithms that handle each possible case. The whole transition code was probably the hardest thing to implement, since there are a lot of possible cases. What else..... Oh yeah, normals....... You can't see it with wireframe, but we do generate normals. We basically walk around mesh vertexes in the standard way and add up the normals of the faces. There are a couple of complexities, however. First, we have to account for vertexes along chunk boundaries, and second, we wanted some way to support ridge lines, like you might find in a mountain range. So each edge in a mesh can be set to sharp, which causes duplicate vertexes with different normals to be generated. Guess that's it for now. We'll try to post more stuff when we have simplex noise in and can generate mountains and such. Also we have some ideas for trees and houses, but we'll get to that later.
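The normal generation described above (accumulate each face normal at its three vertices, then normalize) can be sketched like this; all names are illustrative, not from the Dragon Lands code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Sketch of smooth-normal generation: accumulate each face normal into
// its three vertices, then normalize. (Sharp "ridge" edges would instead
// duplicate vertices so the sums stay separate, as the post describes.)
std::vector<V3> computeNormals(const std::vector<V3>& v, const std::vector<uint32_t>& idx) {
    std::vector<V3> n(v.size(), V3{0, 0, 0});
    for (size_t t = 0; t + 2 < idx.size(); t += 3) {
        // Unnormalized face normal; its length weights large faces more heavily.
        V3 fn = cross(sub(v[idx[t + 1]], v[idx[t]]), sub(v[idx[t + 2]], v[idx[t]]));
        for (int k = 0; k < 3; ++k) {
            n[idx[t + k]].x += fn.x;
            n[idx[t + k]].y += fn.y;
            n[idx[t + k]].z += fn.z;
        }
    }
    for (V3& p : n) {
        float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        if (len > 0) { p.x /= len; p.y /= len; p.z /= len; }
    }
    return n;
}
```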

Simply put, my function compares two AABBs for collision detection, and when they are around the same size it works fine, but I noticed that if I greatly decreased the size of one of them (or augmented the size of the other), the collision wouldn't be detected correctly; I would have to have them intersect for a collision to be registered, rather than having it register when they are at least in direct contact, which is the intended behavior. Below is my code.

local function DetectCollision(a, b) -- AABB to AABB
    local collisionX = (a.Position.X + a.Size.X) >= b.Position.X and (b.Position.X + b.Size.X) >= a.Position.X
    local collisionY = (a.Position.Y + a.Size.Y) >= b.Position.Y and (b.Position.Y + b.Size.Y) >= a.Position.Y
    local collisionZ = (a.Position.Z + a.Size.Z) >= b.Position.Z and (b.Position.Z + b.Size.Z) >= a.Position.Z
    return collisionX and collisionY and collisionZ
end

EDIT - To be more specific, the issues start to occur when I cut the size of one of the AABBs in half. For instance, if I had two cubes where one's size is 12 on all axes and the other is 6 on all axes, then the collision will not register. Upon debugging, I noticed that only one of the collision bools becomes false. This seems to depend on what axis the smaller bounding box moves from in relation to the bigger one, so if I moved the smaller AABB away from the bigger one on the y-axis, then collisionY will be false.
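One common cause of "touching but not colliding" is floating-point error in the face positions, which a small tolerance absorbs. Here is the same test as a C++ sketch with an explicit epsilon (the struct layout and names are illustrative, not from the posted Lua):

```cpp
#include <cassert>

struct AABB { float px, py, pz, sx, sy, sz; };  // min corner (p*) and size (s*)

// Sketch of the same per-axis AABB test with a tolerance: two boxes whose
// faces are within `eps` of touching still count as colliding, which avoids
// float-precision misses when one box is much smaller than the other.
bool intersects(const AABB& a, const AABB& b, float eps) {
    return a.px + a.sx + eps >= b.px && b.px + b.sx + eps >= a.px &&
           a.py + a.sy + eps >= b.py && b.py + b.sy + eps >= a.py &&
           a.pz + a.sz + eps >= b.pz && b.pz + b.sz + eps >= a.pz;
}
```

The same one-line change (adding `eps` to each `>=` comparison) applies directly to the Lua version.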

Linear interpolation (sometimes called 'lerp' or 'mix') is a really handy function for creative coding, game development and generative art. The function interpolates within the range [start..end] based on a 't' parameter, where 't' is typically within a [0..1] range. For example, divide 'loop time' by 'loop duration' and you get a 't' value between 0.0 and 1.0. Now you can map this 't' value to a new range, such as `lerp(20, 50, t)` to gradually increase a circle's radius, or `lerp(20, 10, t)` to gradually decrease its line thickness. Another example: you can use linear interpolation to smoothly animate from one coordinate to another. Define a start point (x1, y1) and end point (x2, y2), then interpolate the 'x' and 'y' dimensions separately to find the computed point in between. Or use linear interpolation to spring toward a moving target. Each frame, interpolate from the current value to the target value with a small 't' parameter, such as 0.05. It's like saying: walk 5% toward the target each frame. A more advanced example, but built on the same concept, is interpolating from one color (red) to another (blue). To do this, we interpolate the (R, G, B) or (H, S, L) channels of the color individually, just like we would with a 2D or 3D coordinate. Another quick example is to choose a random point along a line segment. There are lots of ways to use linear interpolation, and lots more types of interpolation (cubic, bilinear, etc). These concepts also lead nicely into areas like: curves, splines and parametric equations. Source code for each of these examples is available here: https://gist.github.com/mattdesl/3675c85a72075557dbb6b9e3e04a53d9 About the author: Matt DesLauriers is a creative coder and generative artist based in London. He combines code and emergent systems to make art for the web, print media, and physical installations. 
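The basics above can be written down directly; a minimal sketch (in C++ here, though the same shape works in any language):

```cpp
#include <cassert>

// The interpolation described above: t = 0 gives start, t = 1 gives end,
// values in between are proportionally mixed.
float lerp(float start, float end, float t) {
    return start + (end - start) * t;
}

// Interpolating a color channel-by-channel, just like a 2D/3D coordinate.
struct RGB { float r, g, b; };
RGB lerpColor(RGB a, RGB b, float t) {
    return { lerp(a.r, b.r, t), lerp(a.g, b.g, t), lerp(a.b, b.b, t) };
}
```

The "spring toward a moving target" trick from the thread is then just `value = lerp(value, target, 0.05f)` once per frame.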
Note: This brief introduction to lerp was originally published as a Twitter thread and is republished here with the kind permission of the original author. [Wayback Machine Archive]

Hi! How is XP calculated? For example, to reach level 4 the player should reach 200 XP, and to reach level 5 the player requires 450 XP. Is there a formula? Is there a tutorial available online? Any source where I can learn how XP is calculated? Can you give me a simple example from any existing game where the XP increases whether the player wins or loses? Thanks!
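There is no single standard formula; a very common pattern is a polynomial (often quadratic) curve, tuned per game. As an illustration only, the constants below are hypothetical, but this particular quadratic happens to reproduce the two values in the question (200 XP for level 4, 450 XP for level 5):

```cpp
#include <cassert>

// Hypothetical example curve: total XP required grows quadratically with
// level. 50 * (level - 2)^2 yields 0, 50, 200, 450, 800, ... for levels
// 2, 3, 4, 5, 6 - matching the 200/450 figures in the question.
int xpRequired(int level) {
    int k = level - 2;
    return 50 * k * k;
}
```

Real games tune such a curve (or use a hand-authored table) so that the pacing feels right, rather than deriving it from any fixed rule.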

Algorithm Assembler endianness and Linkers - How do they work in detail
Iris_Technologies posted a topic in General and Gameplay Programming
I had some doubts about hex formats (assembler output) and linkers:
1. So, I disassembled a raw binary (no ELF, PE, etc... headers) of x64 assembly code and I got this result:

0: 66 89 c8             mov ax,cx
3: e8 00 00 00 00       call 8 <gh>

0000000000000008 <gh>:
8: 66 89 c2             mov dx,ax

I understand how the byte offset works ('66' is byte 0, '89' is byte 1, 'c8' is byte 2, and at 3 the call instruction starts, which is why '3:' is there), but, by that logic, shouldn't 'call gh' be translated to 'e8 00 00 00 00 00 00 00 08' instead of 'e8 00 00 00 00', since the byte offset of the first instruction of gh, which is 'mov dx,ax', is 8 and the output is 64 bits?
2. Using the example above, if the endianness is little-endian, how would the assembler swap the bytes - per instruction? Like:

Original, without endianness:
{ 66 89 c8, e8 00 00 00 00 (in case that would be correct and I'm wrong in question 1), 66 89 c2 }

to:
{ c8 89 66, 00 00 00 00 e8, c2 89 66 }

3. And then, would big-endian be like the original, unswapped code of question 2?
4. Suppose that I mark gh as .globl; would the assembler then create a map table file where gh corresponds to 'e8 00 00 00 00' (again, in case that would be correct and I'm wrong in question 1), so that when another object file calls gh, the linker looks into that map file and translates call gh into 'e8 00 00 00 00'?
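On question 1: the E8 call opcode takes a 32-bit displacement relative to the end of the call instruction, stored little-endian, which is why the encoding stays 5 bytes even in 64-bit output (and why, for question 2, assemblers only byte-swap multi-byte immediates and displacements, never whole instructions). A C++ sketch of the encoding (the function name is mine):

```cpp
#include <cassert>
#include <cstdint>

// x86-64 `call rel32` (opcode E8) stores a 32-bit signed displacement,
// little-endian: target address minus the address of the NEXT instruction.
// That is why `call gh` in the question is 5 bytes, not 9 - the operand is
// relative, not the absolute 64-bit address of gh.
void encodeCallRel32(uint64_t callAddr, uint64_t target, uint8_t out[5]) {
    int32_t disp = (int32_t)(target - (callAddr + 5));  // 5 = size of E8 + rel32
    out[0] = 0xE8;
    out[1] = (uint8_t)(disp & 0xFF);           // least significant byte first
    out[2] = (uint8_t)((disp >> 8) & 0xFF);
    out[3] = (uint8_t)((disp >> 16) & 0xFF);
    out[4] = (uint8_t)((disp >> 24) & 0xFF);
}
```

In the dump above, the call sits at offset 3 and gh at offset 8, so the displacement is 8 - (3 + 5) = 0, giving exactly the bytes `e8 00 00 00 00`.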
Algorithm Game collectible highlighting
GameEngineer_gi posted a topic in General and Gameplay Programming
Games often highlight collectibles with halos or, in the case of Wolfenstein, with a cycling white stripe. It's hard to see here with a still image, but picture the white stripe moving across the object's texture. I assume it's done in the collectible's shader code, and perhaps it's much simpler than I thought, but am I on the right track here? (images from "Wolfenstein New Colossus")
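You are on the right track: a common way to get this effect is a time-scrolled diagonal band computed per pixel in the collectible's fragment shader and added on top of the base color. Written here in C++ as a sketch of the shader math; every name and parameter is illustrative, not from any particular engine:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the fragment-shader math behind a sweeping highlight: a
// diagonal band in UV space whose position is driven by time. Evaluate
// per pixel and add `weight * stripeColor` on top of the sampled texture.
float stripeWeight(float u, float v, float time, float speed, float width) {
    float phase = u + v - time * speed;           // diagonal coordinate, scrolling over time
    float d = phase - std::floor(phase);          // wrap into [0, 1) so the stripe repeats
    return d < width ? 1.0f - d / width : 0.0f;   // bright band with a linear falloff
}
```

In an actual shader the same expression is one or two lines using `frac()`/`smoothstep()`; the wrap makes the stripe cycle across the surface the way the Wolfenstein pickups do.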
Algorithm The power of the vector cross product, or how to make a realistic vision field
jbdev posted a blog entry in Projects Of Some Degree Of Interest
In the previous iteration of our game, we decided to use an actual cone as a way to make an AI "see". This implementation was hazardous, and it quickly became one of the hardest things to implement. We eventually were able to code it all, but the results were really static and not very realistic. Because of the reboot, I took the time to actually identify what constraints one's vision has.

The visual field

First of all, a cone isn't really the best in terms of collision checking. It required a special collider and could potentially have become a bottleneck in the future when multiple AIs roam the level. In actuality, the visual field can be represented as a 3D piece of a sphere (or more like a sector of a sphere). So we're gonna use a sphere in the new version. It's cleaner and more efficient that way. Here's how I've done it:

foreach (Collider otherCollider in Physics.OverlapSphere(m_head.transform.position, m_visionDistance / 2, ~LayerMask.GetMask("Entity", "Ignore Raycast"), QueryTriggerInteraction.Ignore))
{
    // Do your things here
}

Pretty simple, really... Afterwards (not unlike our previous endeavour), we can just do a simple raycast to see if the AI's vision is obstructed:

// Do a raycast
RaycastHit hit;
if (Physics.Raycast(m_head.position, (otherPosition - m_head.position).normalized, out hit, m_visionDistance, ~LayerMask.GetMask("Entity", "Ignore Raycast"), QueryTriggerInteraction.Ignore)
    && hit.collider == otherCollider)
{
    // We can see the other without any obstacles
}

But with that came another problem: if we use a sphere as a visual field, then the AI can surely see behind its back. Enter the cross product.

The vector cross product

The cross product is a vector operation that is quite useful.
Here's the actual operation that takes place:

\(\mathbf{c} = \mathbf{a} \times \mathbf{b} = ( \mathbf{a}_{y}\mathbf{b}_{z} - \mathbf{a}_{z}\mathbf{b}_{y},\; \mathbf{a}_{z}\mathbf{b}_{x} - \mathbf{a}_{x}\mathbf{b}_{z},\; \mathbf{a}_{x}\mathbf{b}_{y} - \mathbf{a}_{y}\mathbf{b}_{x} )\)

This actually makes a third vector, which is said to be "orthogonal" to the two others. This is a visual representation of that vector: As you can see, this is pretty cool. It looks like the translation gizmo of many 3D editors. But this operation is more useful than creating 3D gizmos. It can actually help us in our objective.

Interesting properties

One of the most interesting properties of the cross product is actually its magnitude. Depending on the angle between our two a and b vectors, the magnitude of the resulting vector changes. Here's a nice visualization of it: As you can see, this property can be useful for many things... including determining the position of a third vector compared to two other vectors. There's a catch, however: the order of our a and b vectors matters. We need to make sure that we don't make a mistake, as this can easily induce many bugs in our code.

The funnel algorithm

In one of my articles, I've explained how pathfinding kinda works. I've said that the navigation mesh algorithm is actually an amalgamation of different algorithms. One of these algorithms is the funnel algorithm, with which we do the string pulling. When the funnel algorithm is launched, we basically do a variation of the cross product operation in order to find whether a certain point lies inside a given triangle described by left and right apexes. This is particularly useful, as we can then apply a nice string pulling on the identified path.
Here's the actual code:

public static float FunnelCross2D(Vector3 tip, Vector3 vertexA, Vector3 vertexB)
{
    return (vertexB.x - tip.x) * (vertexA.z - tip.z) - (vertexA.x - tip.x) * (vertexB.z - tip.z);
}

With this function, we get a float. The float in question (or more particularly its sign) indicates whether the tip is to the left or to the right of the line described by vertexA and vertexB (as long as the order of those vectors is counter-clockwise; otherwise, the sign is inverted).

Application

Now, with that FunnelCross2D function, we can actually attack our problem head-on. With the function, we can essentially tell whether a given point is behind or in front of an AI. Here's how I've managed to do it:

if ( FunnelCross2D((otherTransform.position - m_head.position).normalized, m_head.right, -m_head.right) > 0 )
{
    // otherTransform is in front of us
}

Because this is Unity, we have access to directional vectors for each Transform object. This is useful because we can then plug these vectors into our FunnelCross2D function and voilà: we now have a way to tell if another entity is behind or in front of our AI. But wait, there's more!

Limit the visual angle

Most people are aware that our visual field has a limited viewing angle. It happens that, for humans, the viewing angle is about 114°. The problem is that, right now, our AI's viewing angle is actually 180°. Not really realistic if you ask me. Thankfully, we have our trusty FunnelCross2D function to help with that. Let's take another look at the nice cross product animation from before: If you noticed, the magnitude is actually cyclic in its property: when the angle between a and b is 90°, the magnitude of the resulting vector of the cross product is exactly 1. The closer the angle gets to 180° or 0°, the closer our magnitude gets to 0. This means that for a given magnitude (except for 1), there are actually 2 possible a and b vector configurations.
So, we can then try to find the actual magnitude of the cross given a certain angle. Afterwards, we can store the result in memory.

m_visionCrossLimit = FunnelCross2D(new Vector3(Mathf.Cos((Mathf.PI / 2) - (m_visionAngle / 2)), 0, Mathf.Sin((Mathf.PI / 2) - (m_visionAngle / 2))).normalized, m_head.right, -m_head.right);

Now we can just go back to our if and change some things:

if ( FunnelCross2D((otherTransform.position - m_head.position).normalized, m_head.right, -m_head.right) > m_visionCrossLimit ) { // otherTransform is in our visual field }

And there we have it! The AI only reacts to enemies in its visual field.

Conclusion

In conclusion, you can see how I've managed to simulate a 3D visual field using the trustworthy cross product. But the fun doesn't end there! We can apply this to many different situations. For example, I've implemented the same thing but in order to limit neck rotations. It's just like previously, but with another variable and some other fancy code and whatnot... The cross product is indeed a valuable tool in the game developer's toolset. No doubt about it.
C++ Physics  How to calculate the next position of a ball rolling against a slope?
jeanmilost posted a topic in Math and Physics
For a small C/C++ game project, I want to add some physics on a ball, to allow it to roll against the slopes of a 3d world, in the same way as the example visible in the image below: However I have very few skills in physics, and I have some trouble understanding which physical laws should be applied, and how concepts like energy, force and acceleration should be used here. As a beginner, I imagined the situation as follows: Where: N (in green) is the normal of the current polygon above the ball, Fg (in red) is the gravitational force, which should be constant, Ff (also in red) is the frictional force, which should be constant, P (in blue) is the current ball position, and P' (also in blue) is the next point to calculate. I also wrote a dedicated function to calculate the physics on my ball, which is called every time a frame is about to be rendered. When this function is called, I know: the elapsed time, the current ball position, which polygon is currently below the ball (from which I can extract the N normal shown above, and thus the slope), and the constant forces (Fg and Ff shown above). From these values I tried to calculate the next ball position (P' shown above) based on the physics law F = m * a. This gave the following code:

#define M_CSR_Gravitation 9.81f

typedef struct { CSR_Vector3 m_AccumulatedForce; float m_Mass; } CSR_Body;
typedef struct { CSR_Vector3 m_Position; CSR_Sphere m_Geometry; CSR_Body m_Body; } CSR_Ball;

CSR_Ball g_Ball;

void csrPhysicsGravity(float mass, CSR_Vector3* pF)
{
    // the formula for the gravitation is F = m * g
    pF->m_X = 0.0f;
    pF->m_Y = -mass * M_CSR_Gravitation;
    pF->m_Z = 0.0f;
}

void ApplyPhysics(float elapsedTime, CSR_Plane groundPlane, const CSR_Vector3& frictionForce)
{
    float dirAngleX;
    float dirAngleZ;
    float velocityAngle;
    CSR_Vector3 planeNormal;
    CSR_Vector3 gravity;
    CSR_Vector3 xDir;
    CSR_Vector3 zDir;
    CSR_Vector3 up;
    CSR_Vector3 dropForce;
    CSR_Vector3 motionForce;
    CSR_Vector3 acceleration;
    CSR_Vector3 prevCenter;

    planeNormal.m_X = groundPlane.m_A;
    planeNormal.m_Y = groundPlane.m_B;
    planeNormal.m_Z = groundPlane.m_C;

    // calculate the gravity force applied on the ball
    csrPhysicsGravity(g_Ball.m_Body.m_Mass, &gravity);

    xDir.m_X = 1.0f; xDir.m_Y = 0.0f; xDir.m_Z = 0.0f;

    // calculate the angle to drift against the x axis
    csrVec3Dot(&xDir, &planeNormal, &dirAngleX);

    zDir.m_X = 0.0f; zDir.m_Y = 0.0f; zDir.m_Z = 1.0f;

    // calculate the angle to drift against the z axis
    csrVec3Dot(&zDir, &planeNormal, &dirAngleZ);

    up.m_X = 0.0f; up.m_Y = 1.0f; up.m_Z = 0.0f;

    // calculate the drift velocity (the steeper is the slope, the higher is the fall speed)
    csrVec3Dot(&up, &planeNormal, &velocityAngle);

    // calculate the drift offsets to apply on the next frame
    dropForce.m_X = dirAngleX * velocityAngle * gravity.m_Y;
    dropForce.m_Y = 0.0f;
    dropForce.m_Z = dirAngleZ * velocityAngle * gravity.m_Y;

    motionForce.m_X = g_Ball.m_Body.m_AccumulatedForce.m_X + dropForce.m_X - frictionForce.m_X;
    motionForce.m_Y = g_Ball.m_Body.m_AccumulatedForce.m_Y + dropForce.m_Y - frictionForce.m_Y;
    motionForce.m_Z = g_Ball.m_Body.m_AccumulatedForce.m_Z + dropForce.m_Z - frictionForce.m_Z;

    g_Ball.m_Body.m_AccumulatedForce = motionForce;

    acceleration.m_X = (motionForce.m_X / g_Ball.m_Body.m_Mass) * elapsedTime;
    acceleration.m_Y = (motionForce.m_Y / g_Ball.m_Body.m_Mass) * elapsedTime;
    acceleration.m_Z = (motionForce.m_Z / g_Ball.m_Body.m_Mass) * elapsedTime;

    prevCenter = g_Ball.m_Geometry.m_Center;

    g_Ball.m_Geometry.m_Center.m_X += acceleration.m_X * elapsedTime;
    g_Ball.m_Geometry.m_Center.m_Y += acceleration.m_Y * elapsedTime;
    g_Ball.m_Geometry.m_Center.m_Z += acceleration.m_Z * elapsedTime;
}

It works very roughly. The ball actually rolls down the slope, but never stops. And if I try to throw the ball from the bottom of the slope, it accelerates instead of slowing down. Obviously something is pretty wrong, but what? As I said above, I'm not an expert in physics and I think I tend to confuse notions such as acceleration, force and energy.
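To make the distinction between those notions concrete, here is the textbook chain reduced to 1D: force divided by mass gives an acceleration, the acceleration accumulates into a velocity, and the velocity moves the position (semi-implicit Euler). This is an illustration of the concepts, not a drop-in fix for the code above:

```c
#include <assert.h>
#include <math.h>

/* One integration step: force -> acceleration -> velocity -> position.
   Semi-implicit Euler: update the velocity first, then the position. */
typedef struct { float pos, vel, mass; } Body1D;

static void integrate(Body1D* b, float force, float dt) {
    float accel = force / b->mass;   /* F = m * a  =>  a = F / m */
    b->vel += accel * dt;            /* the velocity is the accumulated state */
    b->pos += b->vel * dt;           /* the position moves by velocity * time */
}
```

The key difference from the code above is that the *velocity* is what gets accumulated between frames, not the force; a constant force then produces a steadily growing velocity, and friction can be applied as a force opposing the current velocity, so the ball actually slows down and stops.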
However I tried to resolve this system by myself, without success. So I would be thankful if somebody could explain to me:

- which concepts of physics are involved to resolve the above mentioned system
- how I should correct my code to reach the above described objective (of course as long as the code does not have to be rewritten entirely)
- and/or an example of such an algorithm, even if written in pseudocode or in another programming language

IMPORTANT NOTE: I know that there are many physics engines, commercial or open-source, like e.g. Bullet, Box2d or Havok, which I may use to resolve the above issue. THIS IS NOT MY OBJECTIVE, I don't want to use them, at least for now. I WANT TO LEARN AND UNDERSTAND how to resolve this issue before using such engines. And I think that the proposed problem is simple enough to serve this purpose.
Voxel Traversal Algorithm (Ray Casting)
thecheeselover posted a blog entry in 3D, AI, procedural generation and black jack
For more information and updates about the game, which is a voxel colony simulation / survival, please subscribe to r/CheesyGames.

World Generation

This is a simple world made of chunks of 32³ voxels. The world itself is not static : as you can see on the top of the image, chunks only exist if there is at least one solid voxel in them. In other words, the world is dynamic and can contain as many chunks as the player's computer can handle. In this particular screenshot, the world is generated with the simple vectorial gradient equation that I invented in high school but which I suppose already existed. Here's the basic equation : \(\textit{voxel value} = \frac{ \overrightarrow{\textit{position}} \cdot \overrightarrow{\textit{gradient}}}{ \overrightarrow{\textit{gradient}} \cdot \overrightarrow{\textit{gradient}} }\) That's the equation I came up with and remembered. The gradient · gradient can be simplified to the squared magnitude (length) of the gradient : \(\textit{voxel value} = \frac{ \overrightarrow{\textit{position}} \cdot \overrightarrow{\textit{gradient}}}{ \left\| \overrightarrow{\textit{gradient}} \right\|^{2} }\) In conclusion, this gives an N-dimensional gradient which yields a single decimal number.

Voxel Traversal Algorithm

As for the voxel traversal algorithm, I decided to go with the most popular one, which was made by John Amanatides and Andrew Woo. As much as I like research papers, I also despise them because they lack simplicity, examples and full source code. That's why I had to google implementations of it, and later on I remembered that I had actually already implemented this algorithm a few years ago.

Summary

The simplest way to understand the algorithm is to imagine a line in a 3D world made of blocks. Which blocks does the line touch? Then, in which order are they touched based on the line's start and end positions?
The goal is to iteratively traverse the blocks that are touched by the line. More simply, the logic of the algorithm can be summed up by making a distinction between the ray's direction's components. Those three define the importance of their axes in terms of how many blocks need to be traversed in what direction. Think of this with integers : two people are running to reach a goal; the fastest runs at 5 km/h, while the slowest runs at 1 km/h. For each time step, i.e. an hour, how many kilometers has each runner traveled? The ratio is 5 : 1, so, to merge the analogy, a ray would traverse 5 blocks on the X axis for each block on the Y axis. Of course, it's more complicated than that, as there are more parameters to it, especially because of exceptions such as what to do when the components are equal to one another.

Implementation

The first thing to know about my implementation is that each voxel index is centered within the voxel itself. In other words, the voxel at the position (0, 0, 0) starts at (-0.5, -0.5, -0.5) inclusively and ends at (0.5, 0.5, 0.5) exclusively. This is for a cube extent of 1, naturally. The original research paper doesn't work that way, as it starts at the lowest corner, i.e. the voxel spans from (0, 0, 0) to (1, 1, 1). Without any further delay, here is the class for the VoxelRay :

import com.cheesygames.colonysimulation.math.MathExt; import com.cheesygames.colonysimulation.math.vector.Vector3i; import com.cheesygames.colonysimulation.world.World; import com.jme3.math.Vector3f; import com.jme3.scene.plugins.blender.math.Vector3d; import java.util.function.Function;

/** * Ray for ray casting inside a voxel world. Each voxel is considered as a cube within this ray. A ray consists of a starting position, a direction and a length. The voxel distance * is computed once the method {@link #rayCastLocal(double, Function, Vector3i)} or {@link #rayCast(double, Function)} is called.
*/ public class VoxelRay { private Vector3d m_start; private Vector3d m_offsettedStart; private Vector3d m_direction; private double m_length; private int m_voxelDistance; private boolean m_wasStopped; /** * Constructs an invalid {@link VoxelRay} as its direction and length are null. The setters must be called after constructing a {@link VoxelRay} with this constructor. */ public VoxelRay() { this.m_start = new Vector3d(); this.m_offsettedStart = new Vector3d(); this.m_direction = new Vector3d(); this.m_length = 0; } /** * Constructs a {@link VoxelRay} from two points : start and end. * * @param start The absolute starting position of the ray. * @param end The absolute ending position of the ray. */ public VoxelRay(Vector3d start, Vector3d end) { this.m_start = new Vector3d(start); this.m_offsettedStart = new Vector3d(); this.m_direction = end.subtract(start); this.m_length = m_direction.length(); this.m_direction.normalizeLocal(); } /** * Constructs a {@link VoxelRay} from two points : start and end. * * @param start The absolute starting position of the ray. * @param end The absolute ending position of the ray. */ public VoxelRay(Vector3f start, Vector3f end) { this.m_start = new Vector3d(start); this.m_offsettedStart = new Vector3d(); this.m_direction = new Vector3d(end).subtractLocal(m_start); this.m_length = m_direction.length(); this.m_direction.normalizeLocal(); } /** * Constructs a {@link VoxelRay} from a start, a direction and a length. * * @param start The absolute starting position of the ray. * @param direction The direction of the ray. Must be normalized. * @param length The length of the ray. */ public VoxelRay(Vector3d start, Vector3d direction, double length) { this.m_start = new Vector3d(start); this.m_offsettedStart = new Vector3d(); this.m_direction = new Vector3d(direction); this.m_length = length; } /** * Constructs a {@link VoxelRay} from a start, a direction and a length. * * @param start The absolute starting position of the ray.
* @param direction The direction of the ray. Must be normalized. * @param length The length of the ray. */ public VoxelRay(Vector3f start, Vector3f direction, float length) { this.m_start = new Vector3d(start); this.m_offsettedStart = new Vector3d(); this.m_direction = new Vector3d(direction); this.m_length = length; } /** * Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a voxel is traversed. * This allows the lambda to stop the algorithm from continuing its loop at any time. * * @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of {@link #getVoxelDistance()}. The * supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a reference. The return value {@link Boolean} defines if * the algorithm should stop. * * @see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf">A Fast Voxel Traversal Algorithm</a> */ public void rayCast(Function<Vector3i, Boolean> onTraversingVoxel) { rayCastLocal(World.VOXEL_HALF_EXTENT, onTraversingVoxel, new Vector3i()); } /** * Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a voxel is traversed. * This allows the lambda to stop the algorithm from continuing its loop at any time. * * @param voxelHalfExtent The half extent (radius) of a voxel. * @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of {@link #getVoxelDistance()}. The * supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a reference. The return value {@link Boolean} defines if * the algorithm should stop. * * @see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf">A Fast Voxel Traversal Algorithm</a> */ public void rayCast(double voxelHalfExtent, Function<Vector3i, Boolean> onTraversingVoxel) { rayCastLocal(voxelHalfExtent, onTraversingVoxel, new Vector3i()); } /** * Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a voxel is traversed. * This allows the lambda to stop the algorithm from continuing its loop at any time. * <p> * This method is local because the parameter voxelIndex is locally changed to avoid creating a new instance of {@link Vector3i}. * * @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of {@link #getVoxelDistance()}. The * supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a reference. The return value {@link Boolean} defines if * the algorithm should stop. * @param voxelIndex The voxel index to locally modify in order to traverse voxels. This parameter exists simply to avoid creating a new {@link Vector3i} instance. * * @see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf">A Fast Voxel Traversal Algorithm</a> */ public void rayCastLocal(Function<Vector3i, Boolean> onTraversingVoxel, Vector3i voxelIndex) { rayCastLocal(World.VOXEL_HALF_EXTENT, onTraversingVoxel, voxelIndex); } /** * Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a voxel is traversed. * This allows the lambda to stop the algorithm from continuing its loop at any time. * <p> * This method is local because the parameter voxelIndex is locally changed to avoid creating a new instance of {@link Vector3i}. * * @param voxelHalfExtent The half extent (radius) of a voxel.
* @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of {@link #getVoxelDistance()}. The * supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a reference. The return value {@link Boolean} defines if * the algorithm should stop. * @param voxelIndex The voxel index to locally modify in order to traverse voxels. This parameter exists simply to avoid creating a new {@link Vector3i} instance. * * @see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf">A Fast Voxel Traversal Algorithm</a> */
public void rayCastLocal(double voxelHalfExtent, Function<Vector3i, Boolean> onTraversingVoxel, Vector3i voxelIndex) {
    assert !Double.isNaN(voxelHalfExtent);
    assert !Double.isNaN(m_start.x);
    assert !Double.isNaN(m_start.y);
    assert !Double.isNaN(m_start.z);
    assert !Double.isNaN(m_direction.x);
    assert !Double.isNaN(m_direction.y);
    assert !Double.isNaN(m_direction.z);
    assert !Double.isNaN(m_length);

    m_wasStopped = false;
    final double voxelExtent = voxelHalfExtent * 2;

    // The id of the first/current voxel hit by the ray.
    m_offsettedStart.set(m_start).addLocal(voxelHalfExtent, voxelHalfExtent, voxelHalfExtent);
    VoxelWorldUtils.getVoxelIndexNoOffsetLocal(voxelExtent, m_offsettedStart, voxelIndex);

    computeVoxelDistance(voxelExtent, voxelIndex);
    assert !Double.isNaN(m_voxelDistance);

    // In which direction the voxel ids are incremented.
    int stepX = (int) MathExt.getSignZeroPositive(m_direction.x);
    int stepY = (int) MathExt.getSignZeroPositive(m_direction.y);
    int stepZ = (int) MathExt.getSignZeroPositive(m_direction.z);

    // Distance along the ray to the next voxel border from the current position (tMaxX, tMaxY, tMaxZ).
    double nextVoxelBoundaryX = (voxelIndex.x + (MathExt.getNegativeSign(stepX) + 1)) * voxelExtent;
    double nextVoxelBoundaryY = (voxelIndex.y + (MathExt.getNegativeSign(stepY) + 1)) * voxelExtent;
    double nextVoxelBoundaryZ = (voxelIndex.z + (MathExt.getNegativeSign(stepZ) + 1)) * voxelExtent;

    // tMaxX, tMaxY, tMaxZ : distance until the next intersection with a voxel border,
    // i.e. the value of t at which the ray crosses the first voxel boundary on that axis.
    double tMaxX = (m_direction.x != 0) ? (nextVoxelBoundaryX - m_offsettedStart.x) / m_direction.x : Double.MAX_VALUE;
    double tMaxY = (m_direction.y != 0) ? (nextVoxelBoundaryY - m_offsettedStart.y) / m_direction.y : Double.MAX_VALUE;
    double tMaxZ = (m_direction.z != 0) ? (nextVoxelBoundaryZ - m_offsettedStart.z) / m_direction.z : Double.MAX_VALUE;

    // tDeltaX, tDeltaY, tDeltaZ : how far along the ray we must move for that component
    // to equal the width of a voxel, i.e. how we traverse the grid on that axis.
    // Can only be MAX_VALUE if we never go in that direction.
    double tDeltaX = (m_direction.x != 0) ? stepX * voxelExtent / m_direction.x : Double.MAX_VALUE;
    double tDeltaY = (m_direction.y != 0) ? stepY * voxelExtent / m_direction.y : Double.MAX_VALUE;
    double tDeltaZ = (m_direction.z != 0) ? stepZ * voxelExtent / m_direction.z : Double.MAX_VALUE;

    if (onTraversingVoxel.apply(voxelIndex)) {
        m_wasStopped = true;
        return;
    }

    int traversedVoxelCount = 0;
    while (++traversedVoxelCount < m_voxelDistance) {
        if (tMaxX < tMaxY && tMaxX < tMaxZ) {
            voxelIndex.x += stepX;
            tMaxX += tDeltaX;
        }
        else if (tMaxY < tMaxZ) {
            voxelIndex.y += stepY;
            tMaxY += tDeltaY;
        }
        else {
            voxelIndex.z += stepZ;
            tMaxZ += tDeltaZ;
        }

        if (onTraversingVoxel.apply(voxelIndex)) {
            m_wasStopped = true;
            break;
        }
    }
}

/** * Computes the voxel distance, a.k.a. the number of voxels to traverse, for the ray cast. * * @param voxelExtent The extent of a voxel, which is the equivalent for a cube of a sphere's diameter. * @param startIndex The starting position's index.
*/
private void computeVoxelDistance(double voxelExtent, Vector3i startIndex) {
    m_voxelDistance = 1
        + MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.x + m_direction.x * m_length) - startIndex.x)
        + MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.y + m_direction.y * m_length) - startIndex.y)
        + MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.z + m_direction.z * m_length) - startIndex.z);
}

public Vector3d getStart() { return m_start; } public Vector3d getDirection() { return m_direction; } public double getLength() { return m_length; } public int getVoxelDistance() { return m_voxelDistance; } public void setStart(Vector3d start) { m_start.set(start); } public void setStart(Vector3f start) { m_start.set(start); } /** * Sets the direction. * * @param direction The direction to set to the ray. Must be normalized. */ public void setDirection(Vector3d direction) { m_direction.set(direction); } /** * Sets the direction. * * @param direction The direction to set to the ray. Must be normalized. */ public void setDirection(Vector3f direction) { m_direction.set(direction); } /** * Sets the length of the ray. * * @param length The new length of the ray. Must be positive. */ public void setLength(double length) { m_length = length; } /** * Sets the end position of the ray, which is not a real variable but a way to set the direction and the length at the same time. The start position does matter for this * method. * * @param end Where the ray ends. */ public void setEnd(Vector3d end) { m_direction.set(end).subtractLocal(m_start); m_length = m_direction.length(); m_direction.normalizeLocal(); } /** * Gets if the voxel ray cast was stopped by the "onTraversingVoxel" method call. * * @return True if the voxel ray cast was stopped by the "onTraversingVoxel" method call, false otherwise.
*/ public boolean wasStopped() { return m_wasStopped; } } Here are the external static methods : /** * Gets the voxel index of the specified position. This method supposes that the parameter position is already offsetted with + voxel half extent. This method is local because the * supplied voxel index will be locally modified and returned. * * @param voxelExtent The extent of a voxel, which is the equivalent to a cube of a sphere's diameter. * @param position The position to get the voxel index from. Must already be offsetted with + voxel half extent * @param voxelIndex Where to store the voxel index. * * @return The voxel index parameter that is set to the supplied position's voxel index. */ public static Vector3i getVoxelIndexNoOffsetLocal(double voxelExtent, Vector3d position, Vector3i voxelIndex) { return voxelIndex.set(getVoxelIndexNoOffset(voxelExtent, position.x), getVoxelIndexNoOffset(voxelExtent, position.y), getVoxelIndexNoOffset(voxelExtent, position.z)); } /** * Gets the sign of the supplied number. The method being "zero positive" means that the sign of zero is 1. * * @param number The number to get the sign from. * * @return The number's sign. */ public static long getSignZeroPositive(double number) { assert !Double.isNaN(number); return getNegativeSign(number) | 1; } /** * Gets the negative sign of the supplied number. So, in other words, if the number is negative, -1 is returned but if the number is positive or zero, then zero is returned. It * does not check if the parameter is NaN. * * @param number The number to get its negative sign. * * @return -1 if the number is negative, 0 otherwise.
*/ public static long getNegativeSign(double number) { assert !Double.isNaN(number); return Double.doubleToRawLongBits(number) >> BIT_COUNT_EXCLUDING_SIGN_64; }

The important parts to adjust the algorithm to fit my voxel boundaries are the following :

m_offsettedStart.set(m_start).addLocal(voxelHalfExtent, voxelHalfExtent, voxelHalfExtent);

It is mandatory to add the half extent to the starting position.

double nextVoxelBoundaryX = (voxelIndex.x + (MathExt.getNegativeSign(stepX) + 1)) * voxelExtent; double nextVoxelBoundaryY = (voxelIndex.y + (MathExt.getNegativeSign(stepY) + 1)) * voxelExtent; double nextVoxelBoundaryZ = (voxelIndex.z + (MathExt.getNegativeSign(stepZ) + 1)) * voxelExtent;

What the MathExt method call does could be programmed as : (stepX >= 0 ? 1 : 0). I can't express how delightful it is when everything starts to fit and work properly :') Here are some screenshots :
From the album: Voxel Colony Simulation / Survival

Algorithm BSP trees, or creating a randomly generated level
jbdev posted a blog entry in Projects Of Some Degree Of Interest
So the game I'm working on is going to use rooms that are connected to each other by little holes, creating something somehow similar to "Binding of Isaac", but with organic room disposition rather than rooms placed on a grid with specific dimensions. To do this, I needed to search for a good level generation algorithm. I've found that the best algorithm is the BSP tree algorithm, which was traditionally used in old-school roguelike games.

Binary Partition Trees

The algorithm works by taking an area and splitting it in half, thus creating two new areas to be recursively processed by the algorithm. The algorithm can be run until we stumble upon an area of a specific dimension. This is what that algorithm can do: Traditionally, each area becomes a leaf, in which a random rectangular room is drawn. Afterward, each of these rooms is connected to the others using long corridors. This approach works in 2D, as the player always has a clear view of his surroundings. Here is an excellent reference on the subject: Basic BSP Dungeon generation

Adaptations

However, because the game is in a first-person perspective, the corridors won't work, as the player can't really know his surroundings very well. We need to adapt the algorithm so that the player can feel comfortable while navigating our level. I've chosen to eliminate corridors and rooms altogether and instead use the leaves themselves. Instead, each leaf is connected to the others through small holes. Also, the BSP tree algorithm creates a web-like dungeon with no clear end or start, which is fine if you're making a traditional dungeon, but we want our levels to have a certain flow so that players can quickly find their way out if they want to. The way I planned to do that is by transforming the BSP leaves into a kind of navmesh grid. Afterward, we just pick some positions and select the specific leaves that make up the path.
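The recursive split described above fits in a few lines; here is a minimal sketch (halving instead of random split positions, a fixed alternation of axes, and the MIN_SIZE value are all simplifications I made for the example):

```c
#include <assert.h>

typedef struct { int x, y, w, h; } Rect;

#define MIN_SIZE 8  /* stop splitting below this dimension (assumed value) */

/* Recursively split `area` in half, alternating axes, and append the
   resulting leaves to `leaves`. Returns the new leaf count. */
static int bsp_split(Rect area, int vertical, Rect* leaves, int count) {
    if ((vertical ? area.w : area.h) < 2 * MIN_SIZE) {
        leaves[count++] = area;  /* too small to split: it becomes a leaf */
        return count;
    }
    if (vertical) {
        int half = area.w / 2;
        Rect left  = { area.x, area.y, half, area.h };
        Rect right = { area.x + half, area.y, area.w - half, area.h };
        count = bsp_split(left, 0, leaves, count);
        count = bsp_split(right, 0, leaves, count);
    } else {
        int half = area.h / 2;
        Rect top    = { area.x, area.y, area.w, half };
        Rect bottom = { area.x, area.y + half, area.w, area.h - half };
        count = bsp_split(top, 1, leaves, count);
        count = bsp_split(bottom, 1, leaves, count);
    }
    return count;
}
```

A real generator would randomize both the split position and the split axis so the leaves come out uneven; the deterministic halving here just keeps the sketch easy to follow.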
Creating the navmesh graph

First, before we can use the graph search algorithm, we need to build our graph. BSP trees are still binary trees, so using those to deal with connections is out of the question. We need to somehow get all the leaves created by the BSP tree algorithm and put them in a more flexible structure: enter the quadtree. Quadtrees are a kind of tree in which each node can have at most 4 children. This characteristic is quite useful when it comes to 2D rectangles. Here's a visual representation of a quadtree: With these kinds of trees, it's possible to get every overlapping leaf for a specific rectangle. If, for a given room, we query a slightly bigger search zone, then we'll be able to find all of the given room's neighbours. We can then connect everything up and finally launch our graph search using randomly picked rooms that are far enough from each other.

Do the pathfinding

I've already made a blog entry on pathfinding, so the procedure is almost the same... However, there are some differences here... One of the most important differences is that we add the concept of "hot" cells. When a specific cell is deemed "hot" and the graph search algorithm stumbles upon it, then its cost will be infinite. That way, we can tell the algorithm this: "Do not pick this cell unless it's your last option". This makes for a somewhat imperfect path... But in our case, imperfect is perfect. Afterwards, we add all of the chosen rooms to a final list. All rooms in this list will be part of our level and will be rendered out later on.

Add more rooms

After we've picked the main rooms, we can then append bonus rooms to some of these main rooms if the player is lucky, not unlike hidden rooms in "Binding of Isaac"... Also, the game is going to sometimes have an "alternative path". These paths will try to be shorter than the main path and overall have more bonus rooms and loot to them. I've planned that the player needs to fulfil some requirement to enter this path.
Because we already created a graph of every possible room, it's just a matter of luck to see if a room has a special room connected to it.

Rendering it out

Once the room configurations were made, we now need to create the geometry and collisions of the level. Before going with the BSP approach, I tried to use cellular automata to generate cave-like structures... It wasn't controllable enough, but I've kept some of the code from it (mainly its geometry generation). Here's the cellular automata tutorial. Basically, we render each room pixel by pixel. We then cut those "holes" I talked about earlier and voilà. Here, I've coloured each room to give a better idea of what kind of room each one is. Blue rooms are part of the alternate path I mentioned before. The green and red rooms represent the starting and ending rooms respectively. Yellow rooms are bonus rooms. Afterward, it's only a matter of placing props and enemies. This method of creating levels is cool. It can make the game more flexible and interesting. It also all depends on luck, which is a stat that can change in-game.
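As an aside, the neighbour query described in the navmesh section (inflate a room's rectangle slightly, then collect every leaf that overlaps it) can be sketched without the quadtree, which only exists to speed that same test up; the function names here are made up for the illustration:

```c
#include <assert.h>

typedef struct { int x, y, w, h; } Rect;

/* Axis-aligned rectangle overlap test. */
static int overlaps(Rect a, Rect b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

/* Find the neighbours of leaves[self] by testing a slightly inflated copy of
   its rectangle against every other leaf. A quadtree answers the same query
   without scanning the whole list. */
static int find_neighbours(const Rect* leaves, int count, int self,
                           int* out, int out_max) {
    Rect q = leaves[self];
    q.x -= 1; q.y -= 1; q.w += 2; q.h += 2;  /* the "slightly bigger" search zone */
    int n = 0;
    for (int i = 0; i < count && n < out_max; ++i)
        if (i != self && overlaps(q, leaves[i]))
            out[n++] = i;
    return n;
}
```

Adjacent leaves share an edge, so they don't overlap each other; inflating the query rectangle by one unit is what makes edge-sharing leaves register as neighbours.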
Hey! I am using the sweep and prune algorithm for broad-phase collision detection in a 3D environment, and the boxes that contain the obstacles have to be built before checking for the collisions of their projections on all three axes. So, can someone help me with how to start building the boxes around the obstacles, whose coordinates and faces are known to me, but not their shape?

Hello Everyone, So I have built a few basic games both in C++ and in Unreal, but am wanting to do something different. I absolutely love roguelike games like Binding of Isaac. So for this next game, I wanted to try to build a game like Binding of Isaac in Unreal. I believe I understand how to do a large portion of the UI, Gameplay, Camera, etc. The part I am struggling with is the world generation. I have found a few resources regarding this topic, but I am not sure how to fully implement it due to the lack of resources addressing this topic. I have a few questions: Is it better to build off of a Perlin noise type algorithm, or are there simpler models to do this with? What algorithms currently exist to make sure the rooms fit together well without overlapping during generation? Should the rooms be built as blueprint segments or be built in real time using code? Lastly, are there any great resources out there that I may have missed regarding random room generation in Unreal? Thank you guys for your time!

Algorithm Catmull-Rom for Quaternions?
turanszkij posted a topic in General and Gameplay Programming
I am having a bit of a problem with my camera interpolations. I am using a Catmull-Rom interpolation between several camera transforms. The transforms have different locations and quaternion rotations. The Catmull-Rom works perfectly for the locations, and nearly perfectly for the quaternions, except when the cameras need to rotate around something in a full circle; then there is always a point at which a full backward rotation occurs, instead of the short path (after the 3/4 circle rotation point). I am using the DirectXMath library, which has an XMQuaternionSquad method, which has the exact same parameter list that XMVectorCatmullRom has, but for quaternions. It makes use of quaternion slerp functions in its implementation. Using this, the animation starts out somewhat good, but quickly breaks (jumps) when the chain of rotations is stepped (I have a big array of camera transforms, but only 4 of them are interpolated at once, so when the interpolation factor is > 1, then the transform chain is offset by 1). The MSDN documentation for the quaternion squad function also says that the XMQuaternionSquadSetup function needs to be used before it. But I had no success; the animation rotation keeps breaking when the chain is offset. This is how I am trying right now: (a, b, c, d are the transforms on the spline, t is a float between 0 and 1. With Catmull-Rom, the resulting location is always between b and c)

XMVECTOR Q1, Q2, Q3;
XMQuaternionSquadSetup( &Q1, &Q2, &Q3, XMLoadFloat4(&a->rotation), XMLoadFloat4(&b->rotation), XMLoadFloat4(&c->rotation), XMLoadFloat4(&d->rotation) );
XMVECTOR R = XMQuaternionSquad( XMQuaternionNormalize(XMLoadFloat4(&a->rotation)), XMQuaternionNormalize(Q1), XMQuaternionNormalize(Q2), XMQuaternionNormalize(Q3), t );
R = XMQuaternionNormalize(R);

Anyone had any luck getting this to work?
Algorithm Voxel meshes and degenerate triangles.
Gnollrunner posted a topic in General and Gameplay Programming
So I'm running into this problem where I sometimes get little sliver polygons when generating meshes with marching prisms. After thinking about it, it's even possible to get single-point triangles. The main issue is that when I try to calculate normals for such triangles, the result is inaccurate or sometimes even impossible to compute. I came up with this idea (which I'm sure is not new) of simply collapsing one edge of any triangle where this happens. This would necessarily destroy the triangles on the other side of the edge as follows: I think this should work OK, but before implementing it I was wondering if there is some other standard way of doing this, especially when dealing with marching cubes or algorithms of that nature.
