
Showing results for tags 'Algorithm'.
Found 150 results

Algorithm Assembler, endianness and linkers: how do they work in detail?
Iris_Technologies posted a topic in General and Gameplay Programming
I had some doubts about hex formats (assembler output) and linkers:

1. I disassembled a raw binary (no ELF, PE, etc. headers) of x64 assembly code and got this result:

   0:                  66 89 c8        mov ax,cx
   3:                  e8 00 00 00 00  call 8 <gh>
   0000000000000008 <gh>:
   8:                  66 89 c2        mov dx,ax

I understand how the byte offset works ('66' is byte 0, '89' is byte 1, 'c8' is byte 2, and the call instruction starts at byte 3, which is why '3:' is there). But, by that logic, shouldn't 'call gh' be translated to 'e8 00 00 00 00 00 00 00 08' instead of 'e8 00 00 00 00', since the byte offset of the first instruction of gh, which is 'mov dx, ax', is 8 and the output is 64 bits?

2. Using the example above, if the endianness is little-endian, how would the assembler swap the bytes? Per instruction? Like: original, without endianness: { 66 89 c8 e8 00 00 00 00 (in case that is correct and I'm wrong in question 1) 66 89 c2 } to { c8 89 66 00 00 00 00 e8 c2 89 66 }?

3. And then, would big-endian look like the original, unswapped code from question 2?

4. Suppose that I mark gh as .globl. Would the assembler then create a map table file where gh corresponds to 'e8 00 00 00 00' (again, in case that is correct and I'm wrong in question 1), so that the linker will look into such a map file, and if another object file calls gh, the linker will translate call gh as 'e8 00 00 00 00' as well?
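A worked illustration of what the listing shows (a sketch, not part of the original post): the e8 opcode takes a signed 32-bit displacement relative to the end of the call instruction itself, not an absolute 64-bit address, which is why the encoding stays 5 bytes.

```python
def encode_call_rel32(call_offset, target_offset):
    """Encode an x86-64 near call (opcode 0xE8) as a signed 32-bit
    displacement relative to the *next* instruction, stored little-endian."""
    CALL_LENGTH = 5  # 1 opcode byte + 4 displacement bytes
    rel32 = target_offset - (call_offset + CALL_LENGTH)
    return bytes([0xE8]) + rel32.to_bytes(4, "little", signed=True)

# The call in the listing sits at offset 3 and targets gh at offset 8:
# displacement = 8 - (3 + 5) = 0, hence the bytes e8 00 00 00 00.
print(encode_call_rel32(3, 8).hex())  # e800000000
```

This also shows how little-endian applies: only multi-byte immediates and displacements are stored least-significant byte first; the instruction bytes themselves are never reordered.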
Algorithm Game collectible highlighting
GameEngineer_gi posted a topic in General and Gameplay Programming
Games often highlight collectibles with halos, or, in the case of Wolfenstein, with a cycling white stripe. It's hard to see here with a still image, but picture the white stripe moving across the object's texture. I assume it's done in the collectible's shader code, and perhaps it's much simpler than I thought, but am I on the right track here? (images from "Wolfenstein New Colossus") 
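I can't speak for the game's actual shader, but a common way to get such a cycling stripe is to compare a texture coordinate against a band whose center scrolls with time and wraps around. The core math, sketched in plain Python (the `speed` and `width` parameters are made up for illustration):

```python
def stripe_intensity(coord, time, speed=0.5, width=0.1):
    """Intensity of a cycling highlight stripe at a normalized texture
    coordinate. The band's center scrolls with time and wraps around,
    which is what makes the stripe sweep across the object repeatedly."""
    center = (time * speed) % 1.0
    # shortest wrapped distance between the coordinate and the band center
    d = abs(coord - center)
    d = min(d, 1.0 - d)
    # 1.0 at the band center, fading linearly to 0.0 at its edge
    return max(0.0, 1.0 - d / width)
```

In a fragment shader this value would be added on top of the sampled texture color; driving `time` from a uniform makes the stripe cycle.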
Algorithm The power of the vector cross product, or how to make a realistic vision field
jbdev posted a blog entry in Projects Of Some Degree Of Interest
In the previous iteration of our game, we decided to use an actual cone as a way to make an AI "see". This implementation was hazardous, and it quickly became one of the hardest things to implement. We eventually were able to code it all, but the results were really static and not very realistic. Because of the reboot, I took the time to actually identify what constraints one's vision has.

The visual field

First of all, a cone isn't really the best in terms of collision checking. It required a special collider and could potentially have become a bottleneck later on, when multiple AIs roam the level. In actuality, the visual field can be represented as a 3D piece of a sphere (or, more precisely, a sector of a sphere). So we're going to use a sphere in the new version. It's cleaner and more efficient that way. Here's how I've done it:

foreach (Collider otherCollider in Physics.OverlapSphere(m_head.transform.position, m_visionDistance / 2, ~LayerMask.GetMask("Entity", "Ignore Raycast"), QueryTriggerInteraction.Ignore))
{
    // Do your things here
}

Pretty simple, really... Afterwards (not unlike our previous endeavour), we can just do a simple raycast to see if the AI's vision is obstructed:

// Do a raycast
RaycastHit hit;
if (Physics.Raycast(m_head.position, (otherPosition - m_head.position).normalized, out hit, m_visionDistance, ~LayerMask.GetMask("Entity", "Ignore Raycast"), QueryTriggerInteraction.Ignore) && hit.collider == otherCollider)
{
    // We can see the other without any obstacles
}

But with that came another problem: if we use a sphere as a visual field, then the AI can surely see behind its back. Enter the cross product.

The vector cross product

The cross product is a vector operation that is quite useful. 
Here's the actual operation that takes place:

\(\mathbf{c} = \mathbf{a} \times \mathbf{b} = ( \mathbf{a}_{y}\mathbf{b}_{z} - \mathbf{a}_{z}\mathbf{b}_{y},\; \mathbf{a}_{z}\mathbf{b}_{x} - \mathbf{a}_{x}\mathbf{b}_{z},\; \mathbf{a}_{x}\mathbf{b}_{y} - \mathbf{a}_{y}\mathbf{b}_{x} )\)

This operation actually produces a third vector, which is said to be "orthogonal" to the two others. This is a visual representation of that vector: as you can see, this is pretty cool. It looks like the translation gizmo of many 3D editors. But this operation is more useful than creating 3D gizmos. It can actually help us with our objective.

Interesting Properties

One of the most interesting properties of the cross product is actually its magnitude. Depending on the angle between our two vectors a and b, the magnitude of the resulting vector changes. Here's a nice visualization of it: as you can see, this property can be useful for many things... including determining the position of a third vector relative to two other vectors. However, there's a catch: the order of our a and b vectors matters. We need to make sure that we don't make a mistake, as this can easily introduce many bugs in our code.

The funnel algorithm

In one of my articles, I've explained how pathfinding roughly works. I've said that the navigation mesh algorithm is actually an amalgamation of different algorithms. One of these algorithms is the funnel algorithm, with which we actually do the string pulling. When the funnel algorithm is run, we basically do a variation of the cross product operation in order to find whether a certain point lies inside a given triangle described by left and right apexes. This is particularly useful, as we can then apply a nice string pulling to the identified path. 
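The magnitude property described above (for unit vectors, |a × b| equals the sine of the angle between them, peaking at 90° and vanishing at 0° and 180°) can be checked numerically. A standalone Python sketch, separate from the article's Unity code:

```python
import math

def cross(a, b):
    # component formula: c = a x b
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def magnitude(v):
    return math.sqrt(sum(c * c for c in v))

a = (1.0, 0.0, 0.0)
for angle_deg in (0.0, 45.0, 90.0, 135.0, 180.0):
    t = math.radians(angle_deg)
    b = (math.cos(t), 0.0, math.sin(t))  # unit vector at angle t from a
    # for unit vectors, |a x b| == sin(angle between a and b)
    print(angle_deg, round(magnitude(cross(a, b)), 3))
```

Swapping the arguments flips the sign of every component, which is exactly the order-matters catch mentioned above.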
Here's the actual code:

public static float FunnelCross2D(Vector3 tip, Vector3 vertexA, Vector3 vertexB)
{
    return (vertexB.x - tip.x) * (vertexA.z - tip.z) - (vertexA.x - tip.x) * (vertexB.z - tip.z);
}

With this function, we get a float. That float (or, more particularly, its sign) indicates whether the tip is to the left or to the right of the line described by vertexA and vertexB (as long as the order of those vertices is counterclockwise; otherwise, the sign is inverted).

Application

Now, with that FunnelCross2D function, we can actually attack our problem head-on. With the function, we can essentially tell whether a given point is behind or in front of an AI. Here's how I've managed to do it:

if ( FunnelCross2D((otherTransform.position - m_head.position).normalized, m_head.right, -m_head.right) > 0 )
{
    // otherTransform is in front of us
}

Because this is Unity, we have access to directional vectors for each Transform object. This is useful because we can then plug these vectors into our FunnelCross2D function and voilà: we now have a way to tell if another entity is behind or in front of our AI. But wait, there's more!

Limit the visual angle

Most people are aware that our visual field has a limited viewing angle. It happens that, for humans, the viewing angle is about 114°. The problem is that, right now, our AI's viewing angle is actually 180°. Not really realistic, if you ask me. Thankfully, we have our trusty FunnelCross2D function to help with that. Let's take another look at the nice cross product animation from before: if you noticed, the magnitude is actually cyclic in its behaviour: when the angle between a and b is 90°, the magnitude of the resulting cross product vector is exactly 1. The closer the angle gets to 180° or 0°, the closer our magnitude gets to 0. This means that for a given magnitude (except for 1), there are actually 2 possible configurations of the a and b vectors. 
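To see the sign behaviour concretely, here is a Python transcription of the same XZ-plane cross product (the component formula is from the post; pairing the head's right vector with its negation to define the left/right line is my reading of the call, and the test vectors are made up):

```python
def funnel_cross_2d(tip, vertex_a, vertex_b):
    """2D cross product on the XZ plane. The sign tells on which side of
    the line through vertex_a and vertex_b the tip lies."""
    return ((vertex_b[0] - tip[0]) * (vertex_a[2] - tip[2])
            - (vertex_a[0] - tip[0]) * (vertex_b[2] - tip[2]))

right = (1.0, 0.0, 0.0)       # stand-in for m_head.right
neg_right = (-1.0, 0.0, 0.0)  # stand-in for -m_head.right

forward = (0.0, 0.0, 1.0)     # a direction in front of the head
backward = (0.0, 0.0, -1.0)   # a direction behind the head

print(funnel_cross_2d(forward, right, neg_right))   # positive: in front
print(funnel_cross_2d(backward, right, neg_right))  # negative: behind
```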
So, we can then try to find the actual magnitude of the cross product for a given angle. Afterwards, we can store the result in memory.

m_visionCrossLimit = FunnelCross2D(new Vector3(Mathf.Cos((Mathf.PI / 2) - (m_visionAngle / 2)), 0, Mathf.Sin((Mathf.PI / 2) - (m_visionAngle / 2))).normalized, m_head.right, -m_head.right);

Now we can just go back to our if and change some things:

if ( FunnelCross2D((otherTransform.position - m_head.position).normalized, m_head.right, -m_head.right) > m_visionCrossLimit )
{
    // otherTransform is in our visual field
}

And we did it! The AI only reacts to enemies in its visual field.

Conclusion

In conclusion, you can see how I've managed to simulate a 3D visual field using the trustworthy cross product. But the fun doesn't end there! We can apply this to many different situations. For example, I've implemented the same thing to limit neck rotations. It's just like before, but with another variable and some other fancy code. The cross product is indeed a valuable tool in the game developer's toolset. No doubt about it. 
Hi! How is XP calculated? For example, to reach level 4, the player should reach 200 XP, and to reach level 5, the player requires 450 XP. Is there a formula? Is there a tutorial available online? Any source where I can learn how XP is calculated? Can you give me a simple example from any existing game where the XP increases whether the player wins or loses? Thanks!
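There is no single formula; games typically use hand-tuned tables or polynomial/exponential curves. As it happens, the two values quoted in the question fit a simple quadratic exactly, which makes a handy illustration (this curve is made up from those two points, not taken from any particular game):

```python
def xp_required(level):
    """XP threshold to reach a given level. The two data points in the
    question (200 XP for level 4, 450 XP for level 5) happen to fit this
    quadratic; real games tune such curves by hand or via lookup tables."""
    return 50 * (level - 2) ** 2

# print the first few thresholds of this illustrative curve
for level in range(2, 7):
    print(level, xp_required(level))
```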

C++ Physics - How to calculate the next position of a ball rolling against a slope?
jeanmilost posted a topic in Math and Physics
For a small C/C++ game project, I want to add some physics to a ball, to allow it to roll against the slopes of a 3D world, in the same way as in the example visible in the image below. However, I have very few skills in physics, and I have some trouble understanding which physical laws should be applied, and how concepts like energy, force and acceleration should be used here. As a beginner, I imagined the situation as follows:

Where:
- N (in green) is the normal of the polygon currently below the ball
- Fg (in red) is the gravitational force, which should be constant
- Ff (also in red) is the frictional force, which should be constant
- P (in blue) is the current ball position
- P' (also in blue) is the next point to calculate

I also wrote a dedicated function to calculate the physics of my ball, which is called every time a frame is about to be rendered. When this function is called, I know:

- the elapsed time
- the current ball position
- which polygon is currently below the ball (from which I can extract the N normal shown above, and thus the slope)
- the constant forces (Fg and Ff shown above)

From these values I tried to calculate the next ball position (P' shown above), based on the physics law F = m * a. This gave the following code:

#define M_CSR_Gravitation 9.81f

typedef struct
{
    CSR_Vector3 m_AccumulatedForce;
    float       m_Mass;
} CSR_Body;

typedef struct
{
    CSR_Vector3 m_Position;
    CSR_Sphere  m_Geometry;
    CSR_Body    m_Body;
} CSR_Ball;

CSR_Ball g_Ball;

void csrPhysicsGravity(float mass, CSR_Vector3* pF)
{
    // the formula for the gravitation is F = m * g
    pF->m_X = 0.0f;
    pF->m_Y = -(mass * M_CSR_Gravitation);
    pF->m_Z = 0.0f;
}

void ApplyPhysics(float elapsedTime, CSR_Plane groundPlane, const CSR_Vector3& frictionForce)
{
    float       dirAngleX;
    float       dirAngleZ;
    float       velocityAngle;
    CSR_Vector3 planeNormal;
    CSR_Vector3 gravity;
    CSR_Vector3 xDir;
    CSR_Vector3 zDir;
    CSR_Vector3 up;
    CSR_Vector3 dropForce;
    CSR_Vector3 motionForce;
    CSR_Vector3 acceleration;
    CSR_Vector3 prevCenter;

    planeNormal.m_X = groundPlane.m_A;
    planeNormal.m_Y = groundPlane.m_B;
    planeNormal.m_Z = groundPlane.m_C;

    // calculate the gravity force applied on the ball
    csrPhysicsGravity(g_Ball.m_Body.m_Mass, &gravity);

    xDir.m_X = 1.0f;
    xDir.m_Y = 0.0f;
    xDir.m_Z = 0.0f;

    // calculate the angle to drift against the x axis
    csrVec3Dot(&xDir, &planeNormal, &dirAngleX);

    zDir.m_X = 0.0f;
    zDir.m_Y = 0.0f;
    zDir.m_Z = 1.0f;

    // calculate the angle to drift against the z axis
    csrVec3Dot(&zDir, &planeNormal, &dirAngleZ);

    up.m_X = 0.0f;
    up.m_Y = 1.0f;
    up.m_Z = 0.0f;

    // calculate the drift velocity (the steeper the slope, the higher the fall speed)
    csrVec3Dot(&up, &planeNormal, &velocityAngle);

    // calculate the drift offsets to apply on the next frame
    dropForce.m_X = dirAngleX * velocityAngle * gravity.m_Y;
    dropForce.m_Y = 0.0f;
    dropForce.m_Z = dirAngleZ * velocityAngle * gravity.m_Y;

    motionForce.m_X = g_Ball.m_Body.m_AccumulatedForce.m_X + dropForce.m_X - frictionForce.m_X;
    motionForce.m_Y = g_Ball.m_Body.m_AccumulatedForce.m_Y + dropForce.m_Y - frictionForce.m_Y;
    motionForce.m_Z = g_Ball.m_Body.m_AccumulatedForce.m_Z + dropForce.m_Z - frictionForce.m_Z;

    g_Ball.m_Body.m_AccumulatedForce = motionForce;

    acceleration.m_X = (motionForce.m_X / g_Ball.m_Body.m_Mass) * elapsedTime;
    acceleration.m_Y = (motionForce.m_Y / g_Ball.m_Body.m_Mass) * elapsedTime;
    acceleration.m_Z = (motionForce.m_Z / g_Ball.m_Body.m_Mass) * elapsedTime;

    prevCenter = g_Ball.m_Geometry.m_Center;

    g_Ball.m_Geometry.m_Center.m_X += acceleration.m_X * elapsedTime;
    g_Ball.m_Geometry.m_Center.m_Y += acceleration.m_Y * elapsedTime;
    g_Ball.m_Geometry.m_Center.m_Z += acceleration.m_Z * elapsedTime;
}

It works very roughly. The ball actually rolls down the slope, but never stops. And if I try to throw the ball from the bottom of the slope, it accelerates instead of slowing down. Obviously something is pretty wrong, but what? As I said above, I'm not an expert in physics, and I think I tend to confuse notions such as acceleration, force and energy. 
However, I tried to resolve this system by myself, without success. So I would be thankful if somebody could explain to me:

- which concepts of physics are involved in resolving the above-mentioned system
- how I should correct my code to reach the above-described objective (of course, as long as the code does not have to be rewritten entirely)
- and/or an example of such an algorithm, even if written in pseudocode or in another programming language

IMPORTANT NOTE: I know that there are many physics engines, commercial or open-source, like e.g. Bullet, Box2D or Havok, which I could use to resolve the above issue. THIS IS NOT MY OBJECTIVE, I don't want to use them, at least for now. I WANT TO LEARN AND UNDERSTAND how to resolve this issue before using such engines. And I think that the proposed problem is simple enough to serve for this purpose. 
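Not the poster's code, but a minimal sketch of the approach replies usually point at: keep a velocity state (rather than accumulating force), derive acceleration from the slope-tangential component of gravity, integrate with semi-implicit Euler, and apply friction opposite to the velocity so the ball can actually stop. All names and the friction model are assumptions for illustration:

```python
import math

GRAVITY = 9.81

def step_ball(position, velocity, normal, dt, mu=0.3):
    """One physics step for a ball on a plane with unit normal `normal`.
    Returns the new (position, velocity). Illustrative only."""
    g = (0.0, -GRAVITY, 0.0)
    # tangential part of gravity: a = g - (g . n) n
    g_dot_n = sum(gi * ni for gi, ni in zip(g, normal))
    accel = [gi - g_dot_n * ni for gi, ni in zip(g, normal)]
    # kinetic friction: decelerate opposite to the current velocity,
    # with magnitude proportional to the normal force
    speed = math.sqrt(sum(v * v for v in velocity))
    if speed > 1e-6:
        f_mag = mu * abs(g_dot_n)
        accel = [a - f_mag * v / speed for a, v in zip(accel, velocity)]
    # semi-implicit Euler: update velocity first, then position with it
    velocity = [v + a * dt for v, a in zip(velocity, accel)]
    position = [p + v * dt for p, v in zip(position, velocity)]
    return position, velocity
```

Note the key difference from the posted code: acceleration is F / m (no extra multiplication by the elapsed time), the time step enters once per integration stage, and friction opposes motion instead of being a constant vector, which is why a thrown ball decelerates here instead of speeding up.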
Voxel Traversal Algorithm (Ray Casting)
thecheeselover posted a blog entry in 3D, AI, procedural generation and black jack
For more information and updates about the game, which is a voxel colony simulation / survival game, please subscribe to r/CheesyGames.

World Generation

This is a simple world made of chunks of 32³ voxels. The world itself is not static: as you can see at the top of the image, chunks only exist if there is at least one solid voxel in them. In other words, the world is dynamic and can contain as many chunks as the player's computer can handle. In this particular screenshot, the world is generated with the simple vectorial gradient equation that I invented in high school but which I suppose already existed. Here's the basic equation:

\(\textit{voxel value} = \frac{ \overrightarrow{\textit{position}} \cdot \overrightarrow{\textit{gradient}}}{ \overrightarrow{\textit{gradient}} \cdot \overrightarrow{\textit{gradient}} }\)

That's the equation I came up with and remembered. The gradient · gradient denominator can be simplified to the squared magnitude (length) of the gradient:

\(\textit{voxel value} = \frac{ \overrightarrow{\textit{position}} \cdot \overrightarrow{\textit{gradient}}}{ \left\| \overrightarrow{\textit{gradient}} \right\|^{2} }\)

In conclusion, this takes an N-dimensional gradient and yields a single decimal number.

Voxel Traversal Algorithm

As for the voxel traversal algorithm, I decided to go with the most popular one, which was made by John Amanatides and Andrew Woo. As much as I like research papers, I also despise them, because they often lack simplicity, examples and full source code. That's why I had to google implementations of it, and later on I remembered that I had actually already implemented this algorithm a few years ago.

Summary

The simplest way to understand the algorithm is to imagine a line in a 3D world made of blocks. Which blocks does the line touch? And in which order are they touched, based on the line's start and end positions? 
The goal is to traverse iteratively the blocks that are touched by the line. More simply, the logic of the algorithm can be summed up by making a distinction between the ray's direction's components. Those three components define the importance of their axes in terms of how many blocks need to be traversed in which direction. Think of it with integers: two people are running towards a goal; the fastest runs at 5 km/h, while the slowest runs at 1 km/h. For each time step, i.e. an hour, how many kilometers has each runner traveled? The ratio is 5 : 1, so, to merge the analogy, at each step a ray would traverse 5 blocks on the X axis for 1 block on the Y axis. Of course, it's more complicated than that, as there are more parameters to it, especially because of exceptions such as what to do when the components are equal to one another.

Implementation

The first thing to know about my implementation is that each voxel index is centered within the voxel itself. In other words, the voxel at index (0, 0, 0) starts at (-0.5, -0.5, -0.5) inclusively and ends at (0.5, 0.5, 0.5) exclusively. This is for a cube extent of 1, naturally. The original research paper doesn't work that way, as it starts at the lowest corner, i.e. the voxel spans from (0, 0, 0) to (1, 1, 1). Without any further delay, here is the class for the VoxelRay:

import com.cheesygames.colonysimulation.math.MathExt;
import com.cheesygames.colonysimulation.math.vector.Vector3i;
import com.cheesygames.colonysimulation.world.World;
import com.jme3.math.Vector3f;
import com.jme3.scene.plugins.blender.math.Vector3d;

import java.util.function.Function;

/**
 * Ray for ray casting inside a voxel world. Each voxel is considered as a cube within this ray. A ray consists of a starting position, a direction and a length.
 * The voxel distance is computed once the method {@link #rayCastLocal(double, Function, Vector3i)} or {@link #rayCast(double, Function)} is called.
 */
public class VoxelRay {

    private Vector3d m_start;
    private Vector3d m_offsettedStart;
    private Vector3d m_direction;
    private double m_length;
    private int m_voxelDistance;
    private boolean m_wasStopped;

    /**
     * Constructs an invalid {@link VoxelRay} as its direction and length are null. The setters must be called after constructing a {@link VoxelRay} with this constructor.
     */
    public VoxelRay() {
        this.m_start = new Vector3d();
        this.m_offsettedStart = new Vector3d();
        this.m_direction = new Vector3d();
        this.m_length = 0;
    }

    /**
     * Constructs a {@link VoxelRay} from two points: start and end.
     *
     * @param start The absolute starting position of the ray.
     * @param end   The absolute ending position of the ray.
     */
    public VoxelRay(Vector3d start, Vector3d end) {
        this.m_start = new Vector3d(start);
        this.m_offsettedStart = new Vector3d();
        this.m_direction = end.subtract(start);
        this.m_length = m_direction.length();
        this.m_direction.normalizeLocal();
    }

    /**
     * Constructs a {@link VoxelRay} from two points: start and end.
     *
     * @param start The absolute starting position of the ray.
     * @param end   The absolute ending position of the ray.
     */
    public VoxelRay(Vector3f start, Vector3f end) {
        this.m_start = new Vector3d(start);
        this.m_offsettedStart = new Vector3d();
        this.m_direction = new Vector3d(end).subtractLocal(m_start);
        this.m_length = m_direction.length();
        this.m_direction.normalizeLocal();
    }

    /**
     * Constructs a {@link VoxelRay} from a start, a direction and a length.
     *
     * @param start     The absolute starting position of the ray.
     * @param direction The direction of the ray. Must be normalized.
     * @param length    The length of the ray.
     */
    public VoxelRay(Vector3d start, Vector3d direction, double length) {
        this.m_start = new Vector3d(start);
        this.m_offsettedStart = new Vector3d();
        this.m_direction = new Vector3d(direction);
        this.m_length = length;
    }

    /**
     * Constructs a {@link VoxelRay} from a start, a direction and a length.
     *
     * @param start     The absolute starting position of the ray.
     * @param direction The direction of the ray. Must be normalized.
     * @param length    The length of the ray.
     */
    public VoxelRay(Vector3f start, Vector3f direction, float length) {
        this.m_start = new Vector3d(start);
        this.m_offsettedStart = new Vector3d();
        this.m_direction = new Vector3d(direction);
        this.m_length = length;
    }

    /**
     * Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a
     * voxel is traversed. This allows the lambda to stop the algorithm's loop at any time.
     *
     * @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of
     *                          {@link #getVoxelDistance()}. The supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a
     *                          reference. The {@link Boolean} return value defines if the algorithm should stop.
     *
     * @see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf">A Fast Voxel Traversal Algorithm</a>
     */
    public void rayCast(Function<Vector3i, Boolean> onTraversingVoxel) {
        rayCastLocal(World.VOXEL_HALF_EXTENT, onTraversingVoxel, new Vector3i());
    }

    /**
     * Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a
     * voxel is traversed. This allows the lambda to stop the algorithm's loop at any time.
     *
     * @param voxelHalfExtent   The half extent (radius) of a voxel.
     * @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of
     *                          {@link #getVoxelDistance()}. The supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a
     *                          reference. The {@link Boolean} return value defines if the algorithm should stop.
     *
     * @see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf">A Fast Voxel Traversal Algorithm</a>
     */
    public void rayCast(double voxelHalfExtent, Function<Vector3i, Boolean> onTraversingVoxel) {
        rayCastLocal(voxelHalfExtent, onTraversingVoxel, new Vector3i());
    }

    /**
     * Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a
     * voxel is traversed. This allows the lambda to stop the algorithm's loop at any time.
     * <p>
     * This method is local because the parameter voxelIndex is locally changed to avoid creating a new instance of {@link Vector3i}.
     *
     * @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of
     *                          {@link #getVoxelDistance()}. The supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a
     *                          reference. The {@link Boolean} return value defines if the algorithm should stop.
     * @param voxelIndex        The voxel index to locally modify in order to traverse voxels. This parameter exists simply to avoid creating a new
     *                          {@link Vector3i} instance.
     *
     * @see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf">A Fast Voxel Traversal Algorithm</a>
     */
    public void rayCastLocal(Function<Vector3i, Boolean> onTraversingVoxel, Vector3i voxelIndex) {
        rayCastLocal(World.VOXEL_HALF_EXTENT, onTraversingVoxel, voxelIndex);
    }

    /**
     * Casts the ray from its starting position towards its direction whilst keeping in mind its length. A lambda parameter is supplied and called each time a
     * voxel is traversed. This allows the lambda to stop the algorithm's loop at any time.
     * <p>
     * This method is local because the parameter voxelIndex is locally changed to avoid creating a new instance of {@link Vector3i}.
     *
     * @param voxelHalfExtent   The half extent (radius) of a voxel.
     * @param onTraversingVoxel The operation to execute when traversing a voxel. This method is called the same number of times as the value of
     *                          {@link #getVoxelDistance()}. The supplied {@link Vector3i} parameter is not a new instance but a local instance, so it is a
     *                          reference. The {@link Boolean} return value defines if the algorithm should stop.
     * @param voxelIndex        The voxel index to locally modify in order to traverse voxels. This parameter exists simply to avoid creating a new
     *                          {@link Vector3i} instance.
     *
     * @see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.3443&rep=rep1&type=pdf">A Fast Voxel Traversal Algorithm</a>
     */
    public void rayCastLocal(double voxelHalfExtent, Function<Vector3i, Boolean> onTraversingVoxel, Vector3i voxelIndex) {
        assert !Double.isNaN(voxelHalfExtent);
        assert !Double.isNaN(m_start.x);
        assert !Double.isNaN(m_start.y);
        assert !Double.isNaN(m_start.z);
        assert !Double.isNaN(m_direction.x);
        assert !Double.isNaN(m_direction.y);
        assert !Double.isNaN(m_direction.z);
        assert !Double.isNaN(m_length);

        m_wasStopped = false;
        final double voxelExtent = voxelHalfExtent * 2;

        // This is the id of the first/current voxel hit by the ray.
        m_offsettedStart.set(m_start).addLocal(voxelHalfExtent, voxelHalfExtent, voxelHalfExtent);
        VoxelWorldUtils.getVoxelIndexNoOffsetLocal(voxelExtent, m_offsettedStart, voxelIndex);

        computeVoxelDistance(voxelExtent, voxelIndex);
        assert !Double.isNaN(m_voxelDistance);

        // In which direction the voxel ids are incremented.
        int stepX = (int) MathExt.getSignZeroPositive(m_direction.x);
        int stepY = (int) MathExt.getSignZeroPositive(m_direction.y);
        int stepZ = (int) MathExt.getSignZeroPositive(m_direction.z);

        // Distance along the ray to the next voxel border from the current position (tMaxX, tMaxY, tMaxZ).
        double nextVoxelBoundaryX = (voxelIndex.x + (MathExt.getNegativeSign(stepX) + 1)) * voxelExtent;
        double nextVoxelBoundaryY = (voxelIndex.y + (MathExt.getNegativeSign(stepY) + 1)) * voxelExtent;
        double nextVoxelBoundaryZ = (voxelIndex.z + (MathExt.getNegativeSign(stepZ) + 1)) * voxelExtent;

        // tMaxX, tMaxY, tMaxZ - distance until the next intersection with a voxel border,
        // i.e. the value of t at which the ray crosses the first vertical voxel boundary
        double tMaxX = (m_direction.x != 0) ? (nextVoxelBoundaryX - m_offsettedStart.x) / m_direction.x : Double.MAX_VALUE;
        double tMaxY = (m_direction.y != 0) ? (nextVoxelBoundaryY - m_offsettedStart.y) / m_direction.y : Double.MAX_VALUE;
        double tMaxZ = (m_direction.z != 0) ? (nextVoxelBoundaryZ - m_offsettedStart.z) / m_direction.z : Double.MAX_VALUE;

        // tDeltaX, tDeltaY, tDeltaZ - how far along the ray we must move for that component
        // of the movement to equal the width of a voxel.
        // Can only be MAX_VALUE if we never go in that direction.
        double tDeltaX = (m_direction.x != 0) ? stepX * voxelExtent / m_direction.x : Double.MAX_VALUE;
        double tDeltaY = (m_direction.y != 0) ? stepY * voxelExtent / m_direction.y : Double.MAX_VALUE;
        double tDeltaZ = (m_direction.z != 0) ? stepZ * voxelExtent / m_direction.z : Double.MAX_VALUE;

        if (onTraversingVoxel.apply(voxelIndex)) {
            m_wasStopped = true;
            return;
        }

        int traversedVoxelCount = 0;
        while (++traversedVoxelCount < m_voxelDistance) {
            if (tMaxX < tMaxY && tMaxX < tMaxZ) {
                voxelIndex.x += stepX;
                tMaxX += tDeltaX;
            } else if (tMaxY < tMaxZ) {
                voxelIndex.y += stepY;
                tMaxY += tDeltaY;
            } else {
                voxelIndex.z += stepZ;
                tMaxZ += tDeltaZ;
            }

            if (onTraversingVoxel.apply(voxelIndex)) {
                m_wasStopped = true;
                break;
            }
        }
    }

    /**
     * Computes the voxel distance, a.k.a. the number of voxels to traverse, for the ray cast.
     *
     * @param voxelExtent The extent of a voxel, which is for a cube the equivalent of a sphere's diameter.
     * @param startIndex  The starting position's index.
     */
    private void computeVoxelDistance(double voxelExtent, Vector3i startIndex) {
        m_voxelDistance = 1
            + MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.x + m_direction.x * m_length) - startIndex.x)
            + MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.y + m_direction.y * m_length) - startIndex.y)
            + MathExt.abs(VoxelWorldUtils.getVoxelIndexNoOffset(voxelExtent, m_offsettedStart.z + m_direction.z * m_length) - startIndex.z);
    }

    public Vector3d getStart() {
        return m_start;
    }

    public Vector3d getDirection() {
        return m_direction;
    }

    public double getLength() {
        return m_length;
    }

    public int getVoxelDistance() {
        return m_voxelDistance;
    }

    public void setStart(Vector3d start) {
        m_start.set(start);
    }

    public void setStart(Vector3f start) {
        m_start.set(start);
    }

    /**
     * Sets the direction.
     *
     * @param direction The direction to set to the ray. Must be normalized.
     */
    public void setDirection(Vector3d direction) {
        m_direction.set(direction);
    }

    /**
     * Sets the direction.
     *
     * @param direction The direction to set to the ray. Must be normalized.
     */
    public void setDirection(Vector3f direction) {
        m_direction.set(direction);
    }

    /**
     * Sets the length of the ray.
     *
     * @param length The new length of the ray. Must be positive.
     */
    public void setLength(double length) {
        m_length = length;
    }

    /**
     * Sets the end position of the ray, which is not a real variable but a way to set the direction and the length at the same time. The start position does
     * matter for this method.
     *
     * @param end Where the ray ends.
     */
    public void setEnd(Vector3d end) {
        m_direction.set(end).subtractLocal(m_start);
        m_length = m_direction.length();
        m_direction.normalizeLocal();
    }

    /**
     * Gets if the voxel ray cast was stopped by the "onTraversingVoxel" method call.
     *
     * @return True if the voxel ray cast was stopped by the "onTraversingVoxel" method call, false otherwise.
     */
    public boolean wasStopped() {
        return m_wasStopped;
    }
}

Here are the external static methods:

/**
 * Gets the voxel index of the specified position. This method supposes that the parameter position is already offsetted with + voxel half extent. This method
 * is local because the supplied voxel index will be locally modified and returned.
 *
 * @param voxelExtent The extent of a voxel, which is for a cube the equivalent of a sphere's diameter.
 * @param position    The position to get the voxel index from. Must already be offsetted with + voxel half extent.
 * @param voxelIndex  Where to store the voxel index.
 *
 * @return The voxel index parameter that is set to the supplied position's voxel index.
 */
public static Vector3i getVoxelIndexNoOffsetLocal(double voxelExtent, Vector3d position, Vector3i voxelIndex) {
    return voxelIndex.set(getVoxelIndexNoOffset(voxelExtent, position.x), getVoxelIndexNoOffset(voxelExtent, position.y), getVoxelIndexNoOffset(voxelExtent, position.z));
}

/**
 * Gets the sign of the supplied number. The method being "zero positive" means that the sign of zero is 1.
 *
 * @param number The number to get the sign from.
 *
 * @return The number's sign.
 */
public static long getSignZeroPositive(double number) {
    assert !Double.isNaN(number);
    return getNegativeSign(number) | 1;
}

/**
 * Gets the negative sign of the supplied number. So, in other words, if the number is negative, -1 is returned, but if the number is positive or zero, then
 * zero is returned. It does not check if the parameter is NaN.
 *
 * @param number The number to get its negative sign from.
 *
 * @return -1 if the number is negative, 0 otherwise.
 */
public static long getNegativeSign(double number) {
    assert !Double.isNaN(number);
    return Double.doubleToRawLongBits(number) >> BIT_COUNT_EXCLUDING_SIGN_64;
}

The important parts that adjust the algorithm to fit my voxel boundaries are the following:

m_offsettedStart.set(m_start).addLocal(voxelHalfExtent, voxelHalfExtent, voxelHalfExtent);

It is mandatory to add the half extent to the starting position.

double nextVoxelBoundaryX = (voxelIndex.x + (MathExt.getNegativeSign(stepX) + 1)) * voxelExtent;
double nextVoxelBoundaryY = (voxelIndex.y + (MathExt.getNegativeSign(stepY) + 1)) * voxelExtent;
double nextVoxelBoundaryZ = (voxelIndex.z + (MathExt.getNegativeSign(stepZ) + 1)) * voxelExtent;

What the MathExt method call does could also be programmed as (stepX >= 0 ? 1 : 0). I don't know how to express how delightful it is when everything starts to fit and work properly :') Here are some screenshots: 
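The same Amanatides & Woo stepping logic can be cross-checked with a compact sketch (simplified to a 2D grid with corner-anchored unit cells, unlike the centered voxels above; names are my own):

```python
import math

def voxel_traversal(start, end):
    """Amanatides & Woo style grid traversal of a 2D segment over
    corner-anchored unit cells. Returns every cell the segment visits,
    in order."""
    x, y = int(math.floor(start[0])), int(math.floor(start[1]))
    end_x, end_y = int(math.floor(end[0])), int(math.floor(end[1]))
    dx, dy = end[0] - start[0], end[1] - start[1]
    step_x = 1 if dx >= 0 else -1
    step_y = 1 if dy >= 0 else -1
    # t at which the ray crosses the first vertical / horizontal cell border
    t_max_x = ((x + (step_x > 0)) - start[0]) / dx if dx != 0 else math.inf
    t_max_y = ((y + (step_y > 0)) - start[1]) / dy if dy != 0 else math.inf
    # t needed to cross one whole cell along each axis
    t_delta_x = step_x / dx if dx != 0 else math.inf
    t_delta_y = step_y / dy if dy != 0 else math.inf
    cells = [(x, y)]
    while (x, y) != (end_x, end_y):
        # step along whichever axis reaches its next border first
        if t_max_x < t_max_y:
            x += step_x
            t_max_x += t_delta_x
        else:
            y += step_y
            t_max_y += t_delta_y
        cells.append((x, y))
    return cells
```

The `t_max` / `t_delta` pair plays the same role as `tMaxX`/`tDeltaX` in the Java class; only the voxel anchoring and the stopping condition differ.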
From the album: Voxel Colony Simulation / Survival

Algorithm BSP trees, or creating a randomly generated level
jbdev posted a blog entry in Projects Of Some Degree Of Interest
So the game I'm working on is going to use rooms that are connected to each other by little holes, creating something somewhat similar to "Binding of Isaac", but with organic room disposition rather than rooms placed on a grid with specific dimensions. To do this, I needed to search for a good level generation algorithm. I've found that the best fit is the BSP tree algorithm, which was traditionally used in old-school roguelike games. Binary Space Partition Trees The algorithm works by taking an area and splitting it in half, thus creating two new areas to be recursively processed by the algorithm. The algorithm can be run until we stumble upon an area of a specific dimension. This is what that algorithm can do: Traditionally, each area becomes a leaf, within which a random rectangular room is drawn. Afterward, each of these rooms is connected to the others using long corridors. This approach works in 2D, as the player always has a clear view of his surroundings. Here is an excellent reference on the subject: Basic BSP Dungeon generation Adaptations However, because the game is in a first-person perspective, the corridors won't work, as the player can't really know his surroundings very well. We need to adapt the algorithm so that the player can feel comfortable while navigating our level. I've chosen to eliminate corridors and rooms altogether and instead use the leaves themselves. Each leaf is connected to the others through small holes. Also, the BSP tree algorithm creates a web-like dungeon with no clear end or start, which is fine if you're making a traditional dungeon, but we want our levels to have a certain flow so that the player can quickly find their way out if they want to. The way I planned to do that is by transforming the BSP leaves into a kind of navmesh grid. Afterward, we just pick some positions and select the specific leaves that make up the path. 
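The split-until-small-enough recursion described above is easy to sketch. This is a minimal Python illustration (my own, not the author's code) that splits along the longer axis and returns the leaf areas:

```python
import random

def bsp_split(area, min_size, rng=None):
    """Recursively split a rectangle (x, y, w, h) until pieces are small enough."""
    rng = rng or random.Random(42)
    x, y, w, h = area
    # Stop when neither axis can yield two halves of at least min_size: a leaf.
    if w < 2 * min_size and h < 2 * min_size:
        return [area]
    if w >= h:  # split the longer axis so leaves stay roughly square
        cut = rng.randint(min_size, w - min_size)
        halves = [(x, y, cut, h), (x + cut, y, w - cut, h)]
    else:
        cut = rng.randint(min_size, h - min_size)
        halves = [(x, y, w, cut), (x, y + cut, w, h - cut)]
    return [leaf for half in halves for leaf in bsp_split(half, min_size, rng)]
```

The leaves always tile the original area exactly, which is what makes them usable directly as rooms.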
Creating the navmesh graph First, before we can use the graph search algorithm, we need to build our graph. BSP trees are still binary trees, so using those to deal with connections is out of the question. We need to somehow get all the leaves created by the BSP tree algorithm and put them in a more flexible structure: enter the quadtree. Quadtrees are a kind of tree whose nodes can have at most 4 children. This characteristic is quite useful when it comes to 2D rectangles. Here's a visual representation of a quadtree: With these kinds of trees, it's possible to get every leaf overlapping a specific rectangle. If, for a given room, we query a slightly bigger search zone, then we'll be able to find all of the given room's neighbours. We can then connect everything up and finally launch our graph search using randomly picked rooms that are far enough from each other. Do the pathfinding I've already made a blog entry on pathfinding, so the procedure is almost the same... However, there are some differences here... One of the most important differences is that we add the concept of "hot" cells. When a specific cell is deemed "hot" and the graph search algorithm stumbles upon it, then its cost will be infinite. That way, we can say to the algorithm: "Do not pick this cell unless it's your last option". This makes for a somewhat imperfect path... But in our case, imperfect is perfect. Afterwards, we add all of the chosen rooms to a final list. All rooms in this list will be part of our level and will be rendered out later on. Add more rooms After we've picked the main rooms, we can then append bonus rooms to some of these main rooms if the player is lucky, not unlike hidden rooms in "Binding of Isaac"... Also, the game is going to sometimes have an "alternative path". These paths will try to be shorter than the main path and overall have more bonus rooms and loot. I've planned that the player needs to fulfil some requirement to enter this path. 
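The "slightly bigger search zone" neighbour query described above boils down to a padded AABB overlap test. A small Python sketch (brute-force pairing instead of a real quadtree — that only changes the lookup cost, not the adjacency test itself):

```python
def overlaps_padded(a, b, pad=1):
    """True when rectangle a (x, y, w, h), grown by pad on every side, overlaps b."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax - pad < bx + bw and bx < ax + aw + pad and
            ay - pad < by + bh and by < ay + ah + pad)

def build_neighbour_graph(leaves, pad=1):
    """Connect every pair of leaves that touch or overlap once padded."""
    graph = {i: set() for i in range(len(leaves))}
    for i in range(len(leaves)):
        for j in range(i + 1, len(leaves)):
            if overlaps_padded(leaves[i], leaves[j], pad):
                graph[i].add(j)
                graph[j].add(i)
    return graph
```

The padding is what makes leaves that merely share an edge count as neighbours, which is exactly what the hole-punching between rooms needs.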
Because we already created a graph of each possible room, it's just a matter of luck to see if a room has a special room connected to it. Rendering it out Once the room configurations were made, we now need to create the geometry and collisions of the level. Before going with the BSP approach, I tried to use cellular automata to generate cave-like structures... It wasn't controllable enough, but I've kept some of the code from it (mainly its geometry generation). Here's the cellular automata tutorial Basically, we render each room pixel by pixel. We then cut those "holes" I've talked about earlier, and voilà. Here, I've coloured each room to give a better idea of which kind each room is. Blue rooms are part of the alternate path I've mentioned before. The green and red rooms represent the starting and ending rooms respectively. Yellow rooms are bonus rooms. Afterward, it's only a matter of placing props and enemies. This method of creating levels is cool. It can make the game more flexible and interesting. It also all depends on luck, which is a stat that can change in-game. 
Hey! I am using the sweep and prune algorithm as the broad phase for collision detection in a 3D environment, and the boxes that contain the obstacles have to be built before checking for the collisions of their projections on all three axes. So, can someone help me with how to start building the boxes around the obstacles, whose coordinates and faces are known to me, but not the shape?
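For what it's worth, since the obstacle vertex coordinates are known, a broad-phase box can simply be the per-axis minima and maxima of those vertices, whatever the actual shape turns out to be. A sketch (the helper name is mine, not from any library):

```python
def aabb_from_points(points):
    """Smallest axis-aligned box enclosing a set of (x, y, z) vertices."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

The (min, max) pair per axis is exactly what sweep and prune projects onto each axis.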

Hello Everyone, So I have built a few basic games both in C++ and in Unreal, but am wanting to do something different. I absolutely love roguelike games like Binding of Isaac. So for this next game, I wanted to try to build a game like Binding of Isaac in Unreal. I believe I understand how to do a large portion of the UI, gameplay, camera, etc. The part I am struggling with is the world generation. I have found a few resources regarding this topic, but I am not sure how to fully implement them due to the lack of resources addressing this topic. I have a few questions: Is it better to build off of a Perlin-noise-type algorithm, or are there simpler models to do this with? What algorithms currently exist to make sure the rooms fit together well without overlap on generation? Should the rooms be built as blueprint segments or be built in real time using code? Lastly, are there any great resources out there that I may have missed regarding random room generation in Unreal? Thank you guys for your time!

Algorithm Catmull-Rom for Quaternions?
turanszkij posted a topic in General and Gameplay Programming
I am having a bit of a problem with my camera interpolations. I am using a Catmull-Rom interpolation between several camera transforms. The transforms have different locations and quaternion rotations. The Catmull-Rom works perfectly for the locations, and nearly perfectly for the quaternions, except when the cameras need to rotate around something in a full circle; then there is always a point at which a full backward rotation occurs, instead of the short path (after the 3/4 circle rotation point). I am using the DirectXMath library, which has an XMQuaternionSquad method with the exact same parameter list as XMVectorCatmullRom, but for quaternions. It makes use of quaternion slerp functions in its implementation. Using this, the animation starts out somewhat well, but quickly breaks (jumps) when the chain of rotations is stepped (I have a big array of camera transforms, but only 4 of them are interpolated at once, so when the interpolation factor is > 1, the transform chain is offset by 1). The MSDN documentation for the quaternion squad function also says that the XMQuaternionSquadSetup function needs to be used before it. But I had no success; the animation rotation keeps breaking when the chain is offset. This is how I am trying right now: (a, b, c, d are the transforms on the spline, t is a float between 0 and 1. With Catmull-Rom, the result location is always between b and c) XMVECTOR Q1, Q2, Q3; XMQuaternionSquadSetup( &Q1, &Q2, &Q3, XMLoadFloat4(&a->rotation), XMLoadFloat4(&b->rotation), XMLoadFloat4(&c->rotation), XMLoadFloat4(&d->rotation) ); XMVECTOR R = XMQuaternionSquad( XMQuaternionNormalize(XMLoadFloat4(&a->rotation)), XMQuaternionNormalize(Q1), XMQuaternionNormalize(Q2), XMQuaternionNormalize(Q3), t ); R = XMQuaternionNormalize(R); Anyone had any luck getting this to work? 
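Not a DirectXMath-specific answer, but one common cause of the "long way around" jump when the window of four keys is shifted is sign inconsistency: q and -q encode the same rotation, and slerp-based interpolators take the long arc whenever two adjacent keys have a negative dot product. A possible fix (this is a guess at the cause, not a confirmed diagnosis) is to make the whole keyframe chain sign-consistent once, before any windowing; sketched in Python with plain (x, y, z, w) tuples:

```python
def align(prev, q):
    """Flip q into the same hemisphere as prev when their dot product is negative."""
    dot = sum(p * c for p, c in zip(prev, q))
    return tuple(-c for c in q) if dot < 0.0 else q

def make_sign_consistent(quats):
    """Walk the chain once so every key is on the short arc from the previous one."""
    fixed = [quats[0]]
    for q in quats[1:]:
        fixed.append(align(fixed[-1], q))
    return fixed
```

Doing this over the full array (rather than per window of four) keeps the correction stable when the window is offset by one.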
Algorithm Voxel meshes and degenerate triangles.
Gnollrunner posted a topic in General and Gameplay Programming
So I'm running into this problem where I sometimes get little sliver polygons when generating meshes with marching prisms. After thinking about it, it's even possible to get single-point triangles. The main issue is that when I try to calculate normals for such triangles, the result is inaccurate or sometimes even impossible to compute. I came up with this idea (which I'm sure is not new) of simply collapsing one edge of any triangle where this happens. This would necessarily destroy the triangles on the other side of the edge as follows: I think this should work OK, but before implementing it I was wondering if there is some other standard way of doing this, especially when dealing with marching cubes or algorithms of that nature. 
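Whichever repair is used, the first step is deciding when a triangle counts as a sliver. One way to do that, sketched in Python (the threshold is illustrative): treat a triangle as degenerate when the cross product used for its normal is tiny relative to its edge lengths.

```python
import math

def is_degenerate(a, b, c, eps=1e-6):
    """True when the triangle's normal is unreliable: the cross product's
    magnitude (twice the area) is tiny compared to the longest squared edge."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    area2 = math.sqrt(sum(k * k for k in cross))
    longest2 = max(sum(k * k for k in u), sum(k * k for k in v),
                   sum((c[i] - b[i]) ** 2 for i in range(3)))
    return area2 <= eps * longest2
```

Making the test relative to edge length (rather than an absolute area) keeps it scale-independent, so it flags both slivers and single-point triangles.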
This is my first post to this forum and I have tried to search for an answer before posting. I apologize in advance if it has already been answered. I am coding a rather simple "breakout" type of game but need rather accurate collision, also for fast-moving objects. Basically the moving objects are always circles, and the objects that change the directions of the circles are always AABBs. I know how to calculate the closest point from a circle to an AABB and I know how to use line segment to AABB (slabs) collision checks. My idea is to take the current circle position, add the velocity vector and put a new circle in the new position, and use line segments parallel with the velocity vector for the "edges" between the original position and the displaced position. However, my question is how to get the fraction of the movement to the collision accurately, so that within the same time step I can go to the intersection and, for the rest of the movement within that time step, move in the new direction. The player will be able to increase the speed, so the movement between time steps might be quite big. With a pure slab test I would get the fraction, but it would be hard to describe the circle at the end of the sweep with pure line segments without needing too many and getting poor performance. Would it be a decent solution to do a binary search on the velocity vector until it is not colliding but the closest point is within an allowed maximum? Are there better algorithms to accomplish this?
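The binary-search idea is workable; the only overlap test it needs is "clamp the centre to the box, compare squared distance with r²". A 2D sketch (it assumes the circle misses at t=0 and still overlaps at t=1, so it does not handle tunnelling clean through a thin box):

```python
def circle_hits_aabb(cx, cy, r, box):
    """Clamp the centre to the box to get the closest point, then compare distances."""
    x0, y0, x1, y1 = box
    px = min(max(cx, x0), x1)
    py = min(max(cy, y0), y1)
    return (cx - px) ** 2 + (cy - py) ** 2 <= r * r

def toi_bisect(cx, cy, r, vx, vy, box, iters=32):
    """Binary-search the fraction t in [0, 1] of the velocity at which the
    circle first overlaps the box; returns None if even t=1 misses."""
    if not circle_hits_aabb(cx + vx, cy + vy, r, box):
        return None
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if circle_hits_aabb(cx + vx * mid, cy + vy * mid, r, box):
            hi = mid
        else:
            lo = mid
    return hi
```

With 32 halvings the interval shrinks to about 2⁻³² of the step, which is far below any visible error, and each iteration is only a clamp and a dot product, so the cost is modest even at high speeds.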

One of the main goals for QLMesh was to add some new formats I have been working with quite often, like Photoshop files or bdf fonts. For 3D it is LDraw formats and DAZ Studio models. LDraw is one of my favourites. I am currently working on extending Assimp to support .ldr and .mpd files. One of the major challenges is actually not drawing but embedding the library definitions into the plugin. The original library is about 250MB (compressed to ~40MB). That's quite large for a Quicklook plugin. I started to work on some heavy compression/optimization and the current result is: -rw-r--r-- 1 piecuchp staff 40M May 12 17:18 parts.db -rw-r--r-- 1 piecuchp staff 2.2M May 12 17:18 parts.db.gz That's much better. 2MB can be easily embedded into the plugin, e.g. using an assembler module like this: bits 64 section .rodata global _ldrawlib global _ldrawlib_end global _ldrawlib_size _ldrawlib: incbin "parts.db.gz" _ldrawlib_end: _ldrawlib_size: dd $-_ldrawlib and later built with e.g. nasm: /opt/local/bin/nasm -fmacho64 ldraw_lib.asm -o ldraw_lib.o PS1 Sometimes less is more. Working on reading the gzip stream, I had to remove one of the compression optimisations. The uncompressed file is slightly bigger, but the compressed one much smaller: -rw-r--r-- 1 piecuchp staff 41M Jun 17 12:03 parts.db -rw-r--r-- 1 piecuchp staff 1.5M Jun 17 12:03 parts.db.gz PS2 Sadly, this is not the end of the story. I had to increase the precision of the float numbers in the database (it is now 17 bits - sign:8bit:8bit) - it increased the size but also significantly affected the compression ratio: -rw-r--r-- 1 piecuchp staff 67M Jul 11 08:55 parts.db -rw-r--r-- 1 piecuchp staff 41M Jul 11 08:55 parts.db.gz Seems like I am gonna have to live with such a database for a while.

Algorithm Best solution to initialise resources and dependencies in a game engine?
Martin Brentnall posted a topic in General and Gameplay Programming
A game engine loads a game, and a game contains many resources (textures, models, sounds, scripts, game objects, etc.), and each resource may be dependent on other resources (e.g. a game object might require a model and a script). So let's say resource B requires resource A. How can we be certain that resource A is available at the moment resource B is loaded? Here are the possible solutions I've tried: 1. Only allow resource types to be dependent one-way. A game object may be dependent on a model, but a model may not be dependent on a game object. If this is strictly enforced, then we can always load the models first, so we know all the models are available once we start loading the game objects that might use them. Maybe this is all it takes for a lot of games; maybe my game just has weird requirements or a strange technical design, but I've ended up with a lot of interdependency between various resource types that makes this approach impossible. 2. Sort resources in order of dependency. If resources are sorted so that a resource is only saved after its dependencies are saved, then when the project is loaded, the dependencies of a resource are always loaded before the resource that needs them. This can be a pain to implement. Aside from that, in practice I wrote a lot of my game project file by hand because my tools weren't yet developed, so I constantly had to manually sort resources. I'm also not a fan of requiring order in data files unless it serves a functional purpose (e.g. to define Z-order sorting of game objects in a 2D game). 3. Use "proxy" resources. If a resource isn't available yet, a "proxy" implementation of that resource is returned, and a reference to the real thing is added to the proxy object once the missing resource becomes available. I started doing this when I got tired of manually sorting resources as my game grew, but I hated having a lot of "proxy" classes strewn across my engine. 
I also doubt that constantly calling through proxy objects does wonders for performance either. 4. Repeating initialisation. This idea was to have the initialisation function on each resource return false to report lack of resource availability; the initialisation loop would then just repeat until every resource initialisation returned true. It worked, but I didn't really like this solution, since repeating initialisation actions in some resources could possibly lead to bugs or unintended effects if the repetition wasn't anticipated, which made it feel very error-prone. 5. Multi-phase initialisation. Resources are loaded in multiple phases, e.g.: all resources are created in the first phase, then all resources are initialised in the second phase. This is my current approach. It sounds very simple, but I've found it somewhat more complicated in practice. There are actually five phases in my current engine: Resources such as textures, models, and game objects are registered to the engine. Game object instances (e.g. player objects) are created and registered to the engine. This is a separate step because the context in which game object instances exist is dependent on a world resource that must be known before the world contents can be parsed. The game object instances are also regarded as resources (mostly to be used by event scripts). All loaded resources are initialised with any required resource references. The actual game content is loaded, such as terrain, pickups, enemies, player, etc. By this point, all types of game object, instances, and other resources have been fully initialised. Any initialisation requiring OpenGL is performed (e.g. textures, models, etc.). In order to enable the screen to continue rendering while the previous phases are performed (which may take several seconds), the loading is performed on a second thread. But since OpenGL functions can only be called on the main thread, these operations must be deferred until this phase. 
So the order of resources no longer matters, proxy classes aren't required, and we have full flexibility to allow any resource type to reference any other resource type. The downside is that each relevant phase must be implemented for each resource, and perhaps this isn't very intuitive for an unfamiliar developer (the engine is intended for open source distribution eventually, so I think the API design is quite important). I guess the use of multiple phases also makes the loading time slightly longer than the first three solutions, but I haven't actually measured a difference. Anyway, I originally defined interface functions to be called for each phase, but in the interest of simplicity and brevity, I've settled on a solution that uses std::function callbacks, which looks something like this (extreme simplification): /** * A texture resource that is procedurally generated using two colour resources. */ class MyTexture : public ITexture { private: // Colours used by the texture. IColour* cColourA; IColour* cColourB; GLuint cTextureId; // etc. public: MyTexture(const DOMNode& node, IProjectRegistry* registry) { // Phase 1: Make "this" object available to the project as a named texture. registry->registerTexture(node.getAttribute("name"), this); // Phase 2: Only applicable to world and game object resources. // Phase 3: Callback to initialise the texture with named colour resources registry->initReferences([this, &node](IResources* resources) { cColourA = resources->getColour(node.getAttribute("colourA")); cColourB = resources->getColour(node.getAttribute("colourB")); }); // Phase 4: Only applicable to world and game object resources. // Phase 5: Callback to make sure OpenGL stuff happens in the main thread. registry->initMainThread([this]() { // Do OpenGL stuff (allocate texture, render-to-texture, etc.) 
}); } /***********************\ * Implements ITexture * \***********************/ void set() { glBindTexture(GL_TEXTURE_2D, cTextureId); } }; Actually, I don't normally include the "phase" comments above, since I find it clear enough without them. /** * A texture resource that is procedurally generated using two colour resources. */ class MyTexture : public ITexture { private: // Colours used by the texture. IColour* cColourA; IColour* cColourB; GLuint cTextureId; // etc. public: MyTexture(const DOMNode& node, IProjectRegistry* registry) { registry->registerTexture(node.getAttribute("name"), this); registry->initReferences([this, &node](IResources* resources) { cColourA = resources->getColour(node.getAttribute("colourA")); cColourB = resources->getColour(node.getAttribute("colourB")); }); registry->initMainThread([this]() { // Do OpenGL stuff (allocate texture, render-to-texture, etc.) }); } /***********************\ * Implements ITexture * \***********************/ void set() { glBindTexture(GL_TEXTURE_2D, cTextureId); } }; So based on my current experience, I'm pretty satisfied with this. But I still wonder if there are better or standard ways to do this? Is this a common problem? How does your game (engine) manage resource initialisation? Do you restrict resource dependencies? Also, is this a good approach with regards to API design of a game engine intended for open source distribution? 
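One note on solution 2: the order doesn't have to be maintained by hand or encoded in the data file at all; the loader can recover a valid order from declared dependencies at load time with a topological sort. A Python sketch of Kahn's algorithm (resource names are illustrative):

```python
def load_order(deps):
    """deps maps resource -> set of resources it needs; returns an order in
    which every dependency precedes its dependants, or raises on a cycle."""
    pending = {r: set(d) for r, d in deps.items()}
    order = []
    while pending:
        # Resources whose dependencies are all satisfied can load now.
        ready = [r for r, d in pending.items() if not d]
        if not ready:
            raise ValueError("circular resource dependency")
        for r in sorted(ready):  # sorted only for deterministic output
            order.append(r)
            del pending[r]
        for d in pending.values():
            d.difference_update(ready)
    return order
```

The cycle detection is a bonus: a circular dependency fails loudly at load time instead of silently producing a half-initialised resource.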
Hi all. I am trying to write a collision detection & response routine for my 3D game. I am approximating my player's geometry with an ellipsoid, and I am detecting the swept ellipsoid's collision with convex shapes. If there is an intersection, I do a bisection test to get the time of impact. However, now I want to get the contact normal, but I'm not sure how to do that. I tried using the last search direction as the separating vector, which turns out to be inaccurate. Most people suggest EPA, but I don't know if it works with quadric surfaces. Since one of the objects is an analytical shape in my case, is there any simpler way to get the contact normal, or is EPA the way to go? Thanks a lot!

Algorithm Something's wrong with my Game of Life algorithm, I just can't figure out
Master thief posted a topic in General and Gameplay Programming
I've been trying different algorithms, and just yesterday I adapted one from the Graphics Programmer's Black Book (chapter 17), and it works... but doesn't wrap around the edges like the other algorithms do. It wraps vertically, but not horizontally. I'm doing the wrapping by using an extra outer border of cells all around, where each cell gets a copy of the opposite inner border. I've been trying for hours to figure out why it isn't wrapping around, but I got nowhere so far. Meanwhile I also burned out. If someone else could take a look and see if they could figure out what's wrong, I'd appreciate it a lot. A fresh pair of eyes might see better than mine. I don't know if I should paste the code right here, as it's a little long (some 200 lines), so meanwhile it's in this repo right here. It's a simple console app; it works on the Windows console (I don't know about the Linux terminal). There are two generation algorithms there for comparison, and you can easily switch using the algo variable. The SUM one works fine; the BITS one is the one that doesn't. Not sure what else to say. I tried commenting the code for clarity. Well, if someone has 5 minutes to spare, I'll greatly appreciate it. Thanks in advance.  
EDIT: A specific symptom that I noticed (that I didn't think to mention earlier) is that when using the BITS algorithm (which uses bit manipulation, hence the name of the flag), the cells on the outer edges don't seem to be affected by this part of the code that kills a cell, or by the equivalent part that revives a cell (specifically the "[i-1]" and "[i+1]" lines, which should affect the cells to the sides of the cell being considered): # if it's alive if cellmaps[prev][j][i] & 0x01: # kill it if it doesn't have 2 or 3 neighbors if (n != 2) and (n != 3): cellmaps[curr][j][i] &= ~0x01 alive_cells -= 1 # inform neighbors this cell is dead cellmaps[curr][ j-1 ][ i-1 ] -= 2 cellmaps[curr][ j-1 ][ i ] -= 2 cellmaps[curr][ j-1 ][ i+1 ] -= 2 cellmaps[curr][ j ][ i-1 ] -= 2 cellmaps[curr][ j ][ i+1 ] -= 2 cellmaps[curr][ j+1 ][ i-1 ] -= 2 cellmaps[curr][ j+1 ][ i ] -= 2 cellmaps[curr][ j+1 ][ i+1 ] -= 2 The actual effect is that the leftmost and rightmost edges are always clear (actually, as depicted, the ones on the opposite side to where the actual glider is are affected, but not the ones next to the glider). when a glider approaches the edge, for example, this edge should have 1 cell And the opposite side should v like this not be like this, but this V v V V  . . . .  . . . . . . . . . . . .  . . # .  . . # . . . . . . . . .  . # . .  # # . . . . . # . . # #  . # # .  . # # . . . . # . . . #  . . . .  . . . . . . . . . . . .  . . . .  . . . . . . . . . . . . 
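I can't run the repo from here, but one classic gotcha with the ghost-border + incremental-neighbour-count combination is the write path: the BITS algorithm doesn't just read neighbours, it writes count changes into them, and changes written into the left/right ghost columns never reach the real cells on the opposite edge unless they are explicitly folded back in. This is only a hypothesis about the symptom above; a sketch of such a fold-back pass (names and layout are mine, not from the repo):

```python
def fold_ghost_columns(cellmap):
    """Add neighbour-count changes accumulated in the ghost columns back into
    the real cells on the opposite edge, then clear the ghosts. Assumes
    column 0 mirrors column w-2 and column w-1 mirrors column 1."""
    w = len(cellmap[0])
    for row in cellmap:
        row[w - 2] += row[0]   # left ghost's writes belong to the right edge
        row[1] += row[w - 1]   # right ghost's writes belong to the left edge
        row[0] = row[w - 1] = 0
    return cellmap
```

If that's the bug, it would also explain why vertical wrap still works when whole ghost rows are copied while ghost columns only receive a one-time copy before the pass.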
Hi, My apologies if I am posting in the wrong section. I want to code a card game where the player needs to make pairs, triplets or quadruples to earn points. I need some help/guidance for the AI, to make it compute the best strategy based on the hand it draws and the cards on the table. If possible, I'd like to raise the difficulty by teaching the AI to count the cards in some manner to predict the player's moves / mimic real players. The rules: 1. The game has 40 cards; in the first round, 4 cards are dealt and 3 cards go to each player after that. 2. Points can be earned by making a match: 1 point = a pair, 5 points = a triplet, 6 points = a quadruple. I don't know if this is similar to some existing pattern; the ones I found were specific to some popular card games like blackjack. Thanks
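Whatever search the AI ends up doing (expectimax, Monte Carlo, or simple greedy play), it will need an evaluation of captured cards; with the point values given above (pair = 1, triplet = 5, quadruple = 6) that is just a multiplicity count per rank. A sketch:

```python
from collections import Counter

def score_matches(cards):
    """Score captured cards by rank multiplicity: a pair scores 1,
    a triplet 5, a quadruple 6; unmatched cards score nothing."""
    points = {2: 1, 3: 5, 4: 6}
    return sum(points.get(n, 0) for n in Counter(cards).values())
```

Card counting then amounts to tracking which ranks are exhausted (a Counter of seen cards subtracted from 4 per rank), which bounds what the opponent can still pair up.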

This is a technical article about how I implemented the fluid in my game "Invasion of the Liquid Snatchers!", which was my entry for the fifth annual "Week of Awesome" game development competition here at GameDev.net. One of the biggest compliments I've received about the game is when people think the fluid simulation is some kind of soft-body physics or a true fluid simulation. But it isn't! The simulation is achieved using Box2D doing regular hard-body collisions using lots of little (non-rotating) circle-shaped bodies. The illusion of soft-body particles is achieved in the rendering. The Rendering Process Each particle is drawn using a texture of a white circle that is opaque in the center but fades to fully transparent at the circumference: These are drawn to an RGBA8888 off-screen texture (using a 'framebuffer' in OpenGL parlance) and I 'tint' to the intended color of the particle (tinting is something that LibGDX can do out-of-the-box with its default shader). It is crucial to draw each ball larger than it is represented in Box2D. Physically speaking these balls will not overlap (because it's a hard-body simulation after all!), yet in the rendering we do need these balls to overlap and blend together. The blending is non-trivial as there are a few requirements we have to take into account:
- The RGB color channels should blend together when particles of different colors overlap.
- ... but we don't want colors to saturate towards white.
- ... and we don't want them to darken when we blend with the initially black background color.
- The alpha channel should accumulate additively to indicate the 'strength' of the liquid at each pixel.
All of that can be achieved in GLES2.0 using this blending technique: glClearColor(0, 0, 0, 0); glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE); Putting all that together gets us a texture of lots of blurry colored balls: Next up is to contribute this to the main back buffer as a full-screen quad using a custom shader. 
The shader treats the alpha channel of the texture as a 'potential field'; the higher the value, the stronger the field is at that fragment. The shader compares the strength of the field to a threshold: where the field strength is strong enough, we snap the alpha to 1.0 to manifest some liquid; where the field strength is too weak, we snap the alpha to 0.0 (or we could just discard the fragment) to avoid drawing anything. For the final game I went a little further and also included a small window around that threshold to smoothly blend between 0 and 1 in the alpha channel; this softens and effectively anti-aliases the fluid boundary. Here's the shader: varying vec2 v_texCoords; uniform sampler2D u_texture; // field values above this are 'inside' the fluid, otherwise we are 'outside'. const float threshold = 0.6; // +/- this window around the threshold for a smooth transition around the boundary. const float window = 0.1; void main() { vec4 col = texture2D(u_texture, v_texCoords); float fieldStrength = col.a; col.a = smoothstep(threshold - window, threshold + window, fieldStrength); gl_FragColor = col; } This gives us a solid edge boundary where pixels are either lit or not lit by the fluid. Here is the result after we apply the shader: Things are looking a lot more liquid-like now! The way this works is that when particles come within close proximity of each other their potential fields start to add up; once the field strength is high enough, the shader will start lighting up pixels between the two particles. This gives us the 'globbing together' effect which really makes it look like a fluid. Since the fluid is comprised of thousands of rounded shapes, it tends to leave gaps against the straight-edged tilemap. So the full-screen quad is, in fact, scaled up to be just a little bit larger than the screen and is drawn behind the main scene elements. This helps to ensure that the liquid really fills up any corners and crevices. 
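The thresholding in that shader is easy to reason about off-GPU. Here is the same maths in Python, with `smoothstep` re-implemented to match the GLSL builtin (clamped Hermite ramp):

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth in between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def liquid_alpha(field, threshold=0.6, window=0.1):
    """The shader's thresholding on the CPU: weak fields vanish, strong
    fields saturate, with an anti-aliased band around the boundary."""
    return smoothstep(threshold - window, threshold + window, field)
```

With the article's constants, any summed field below 0.5 draws nothing and anything above 0.7 is fully solid liquid; only the narrow 0.5-0.7 band produces the soft edge.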
Here is the final result: And that's all there is for the basic technique behind it! Extra Niceties I do a few other subtle tricks which help to make the fluids feel more believable... Each particle has an age and a current speed. I weight these together into a 'froth-factor' value between 0 and 1 that is used to lighten the color of a particle. This means that younger or faster-moving particles are whiter than older or stationary parts of the fluid. The idea is to allow us to see particles mixing into a larger body of fluid. The stationary 'wells' where fluid collects are always a slightly darker shade compared to the fluid particles. This guarantees that we can see the particles 'mixing' when they drop into the wells. Magma particles are all different shades of dark red selected randomly at spawn time. This started out as a bug where magma and oil particles were being accidentally mixed together, but it looked so cool that I decided to make it happen deliberately! When I remove a particle from the simulation it doesn't just pop out of existence; instead, I fade it away. This gets further disguised by the 'potential field' shader, which makes it look like the fluid drains or shrinks away more naturally. So, on the whole, the fading is not directly observable. Performance Optimisations As mentioned in my post-mortem of the game, I had to dedicate some time to making the simulation CPU- and memory-performant: The 'wells' that receive the fluid are really just colored rectangles that "fill up". They are not simulated. It means I can remove particles from the simulation once they are captured by the wells and just increment the fill level of the well. If particles slow down below a threshold then they are turned into non-moving static bodies. Statics are not exactly very fluid-like, but they perform much better in Box2D than thousands of dynamic bodies because they don't respond to forces. 
I also trigger their decay at that point too, so they don't hang around in this state for long enough for the player to notice. All particles will eventually decay. I set a max lifetime of 20 seconds. This is also to prevent the player from just flooding the level and cheating their way through the game. To keep Java's Garbage Collector from stalling the gameplay, I had to avoid doing memory allocations per-particle where possible. Mainly this is for things like allocating temporary Vector2 objects or Color objects. So I factored these out into singular long-lived instances and just (re)set their state per-particle. Note: This article was originally published on the author's blog, and is reproduced here with the author's kind permission.

Hello, I am trying to make a GeometryUtil class that has methods to draw point, line, polygon etc. I am trying to make a method to draw a circle. There are many ways to draw a circle. I have found two ways. The one way:

public static void drawBresenhamCircle(PolygonSpriteBatch batch, int centerX, int centerY, int radius, ColorRGBA color) {
    int x = 0, y = radius;
    int d = 3 - 2 * radius;
    while (y >= x) {
        drawCirclePoints(batch, centerX, centerY, x, y, color);
        if (d <= 0) {
            d = d + 4 * x + 6;
        } else {
            y--;
            d = d + 4 * (x - y) + 10;
        }
        x++;
        //drawCirclePoints(batch,centerX,centerY,x,y,color);
    }
}

private static void drawCirclePoints(PolygonSpriteBatch batch, int centerX, int centerY, int x, int y, ColorRGBA color) {
    drawPoint(batch, centerX + x, centerY + y, color);
    drawPoint(batch, centerX - x, centerY + y, color);
    drawPoint(batch, centerX + x, centerY - y, color);
    drawPoint(batch, centerX - x, centerY - y, color);
    drawPoint(batch, centerX + y, centerY + x, color);
    drawPoint(batch, centerX - y, centerY + x, color);
    drawPoint(batch, centerX + y, centerY - x, color);
    drawPoint(batch, centerX - y, centerY - x, color);
}

The other way:

public static void drawCircle(PolygonSpriteBatch target, Vector2 center, float radius, int lineWidth, int segments, int tintColorR, int tintColorG, int tintColorB, int tintColorA) {
    Vector2[] vertices = new Vector2[segments];
    double increment = Math.PI * 2.0 / segments;
    double theta = 0.0;
    for (int i = 0; i < segments; i++) {
        vertices[i] = new Vector2((float) Math.cos(theta) * radius + center.x, (float) Math.sin(theta) * radius + center.y);
        theta += increment;
    }
    drawPolygon(target, vertices, lineWidth, segments, tintColorR, tintColorG, tintColorB, tintColorA);
}

In the render loop:

polygonSpriteBatch.begin();
Bitmap.drawBresenhamCircle(polygonSpriteBatch, 500, 300, 200, ColorRGBA.Blue);
Bitmap.drawCircle(polygonSpriteBatch, new Vector2(500, 300), 200, 5, 50, 255, 0, 0, 255);
polygonSpriteBatch.end();

I am trying to choose one of them. 
So I thought that I should go with the one that does not involve heavy calculations and is efficient and faster. It is said that the use of floating point numbers, trigonometric operations, etc. slows things down a bit. What do you think would be the best method to use? When I compared the two by measuring the time taken from the start of each method to the end, it showed that the second one is faster. (I think I am doing something wrong here.) Please help! Thank you.
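One thing worth checking before timing anything is what each method actually produces: a single-call measurement on the JVM is easily dominated by JIT warm-up and by the cost of the `drawPoint`/`drawPolygon` calls rather than the maths. For reference, here is a Python transcription of the integer Bresenham/midpoint variant above, handy for verifying that the plotted points really do stay on the circle:

```python
def bresenham_circle_points(cx, cy, r):
    """Integer midpoint/Bresenham circle: walk one octant and mirror each
    (x, y) pair into the other seven, as drawCirclePoints does."""
    pts, x, y, d = set(), 0, r, 3 - 2 * r
    while y >= x:
        for sx, sy in ((x, y), (y, x)):  # the pair and its swapped mirror
            pts |= {(cx + sx, cy + sy), (cx - sx, cy + sy),
                    (cx + sx, cy - sy), (cx - sx, cy - sy)}
        if d <= 0:
            d += 4 * x + 6
        else:
            y -= 1
            d += 4 * (x - y) + 10
        x += 1
    return pts
```

Every generated point sits within one pixel of the ideal radius, which is the correctness property the trig-based version only approximates once `segments` is large enough.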
Hey all, I've been trying to work out how LittleBigPlanet handles its objects for a while now. For those unaware, LittleBigPlanet has a building component where you can build 2D-ish objects (there are 2 to 16 2D layers that you can build on). There are a number of shaped brushes to do this with, from basic squares and circles to teardrops and eye shapes. There's a decent video showing this off, actually. Anyways, I've been trying to work out how this works for a while now. My current thought is that it might be along the lines of storing a list of object corners and then drawing an object within those bounds. This makes the most sense to me because the engine has a corner editor for making more advanced shapes, and because some of the restrictions in the engine are based around corners. Of course, that could also be completely wrong and it's something else entirely. What are your thoughts?
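The corner-list idea is plausible: a shape is then just an ordered list of corner points, and everything else (area, winding direction, triangulation for rendering) can be derived from that list. Purely for illustration (a sketch of working with such a representation, not anything from LittleBigPlanet), the shoelace formula gives signed area and winding straight from an ordered corner list:

```python
def signed_area(corners):
    """Shoelace formula over an ordered (x, y) corner list: positive for
    counter-clockwise winding, negative for clockwise."""
    n = len(corners)
    total = 0.0
    for i in range(n):
        x0, y0 = corners[i]
        x1, y1 = corners[(i + 1) % n]
        total += x0 * y1 - x1 * y0
    return total / 2.0
```

Corner-count restrictions like the ones you mention would fall out naturally from such a representation, since every edit is just an insertion, deletion, or move in that list.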
