
Showing results for tags 'Algorithm'.
Found 138 results

Hello Everyone, So I have built a few basic games both in C++ and in Unreal, but am wanting to do something different. I absolutely love roguelike games like The Binding of Isaac, so for this next game I wanted to try to build a game like it in Unreal. I believe I understand how to do a large portion of the UI, gameplay, camera, etc. The part I am struggling with is the world generation. I have found a few resources regarding this topic, but I am not sure how to fully implement it due to the lack of resources addressing it. I have a few questions: Is it better to build off of a Perlin-noise-type algorithm, or are there simpler models to do this with? What algorithms currently exist to make sure the rooms fit together well without overlapping on generation? Should the rooms be built as Blueprint segments or be built in real time using code? Lastly, are there any great resources out there that I may have missed regarding random room generation in Unreal? Thank you guys for your time!
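For what it's worth, Isaac-style maps are usually grid-based rather than noise-based: rooms occupy integer grid cells, so overlap is impossible by construction. A minimal plain-Python sketch (not Unreal code; the function name and shape are my own illustration):

```python
import random

def generate_layout(room_count, seed=None):
    """Place rooms on an integer grid by a random walk from the start room.

    Each room is one grid cell, so rooms can never overlap; corridors/doors
    can later be placed between any two adjacent occupied cells.
    """
    rng = random.Random(seed)
    rooms = {(0, 0)}          # start room at the origin
    frontier = [(0, 0)]       # rooms that may still grow a neighbour
    while len(rooms) < room_count and frontier:
        x, y = rng.choice(frontier)
        neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        free = [c for c in neighbours if c not in rooms]
        if not free:
            frontier.remove((x, y))   # fully surrounded, stop growing here
            continue
        new = rng.choice(free)
        rooms.add(new)
        frontier.append(new)
    return rooms
```

Because every new room is attached to an existing one, the layout is always connected; in an engine each occupied cell would then be populated with a prefab room (a Blueprint segment would work fine for that).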

Algorithm CatmullRom for Quaternions?
turanszkij posted a topic in General and Gameplay Programming
I am having a bit of a problem with my camera interpolations. I am using a Catmull-Rom interpolation between several camera transforms. The transforms have different locations and quaternion rotations. The Catmull-Rom works perfectly for the locations, and nearly perfectly for the quaternions, except when the cameras need to rotate around something in a full circle: there is always a point at which a full backward rotation occurs instead of the short path (after the 3/4 circle rotation point). I am using the DirectXMath library, which has an XMQuaternionSquad method with the exact same parameter list as XMVectorCatmullRom, but for quaternions. It makes use of quaternion slerp functions in its implementation. Using this, the animation starts out somewhat good, but quickly breaks (jumps) when the chain of rotations is stepped (I have a big array of camera transforms, but only 4 of them are interpolated at once, so when the interpolation factor is > 1, the transform chain is offset by 1). The MSDN documentation for XMQuaternionSquad also says that the XMQuaternionSquadSetup function needs to be used before it. But I had no success; the animation rotation keeps breaking when the chain is offset. This is how I am trying right now (a, b, c, d are the transforms on the spline, t is a float between 0 and 1; with Catmull-Rom, the result location is always between b and c): XMVECTOR Q1, Q2, Q3; XMQuaternionSquadSetup( &Q1, &Q2, &Q3, XMLoadFloat4(&a->rotation), XMLoadFloat4(&b->rotation), XMLoadFloat4(&c->rotation), XMLoadFloat4(&d->rotation) ); XMVECTOR R = XMQuaternionSquad( XMQuaternionNormalize(XMLoadFloat4(&a->rotation)), XMQuaternionNormalize(Q1), XMQuaternionNormalize(Q2), XMQuaternionNormalize(Q3), t ); R = XMQuaternionNormalize(R); Anyone had any luck getting this to work?
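A common cause of exactly this kind of jump is hemisphere mismatch: q and -q encode the same rotation, so when the 4-key window shifts, a key can land on the "far" hemisphere and slerp/squad then takes the long arc. One usual fix is to pre-process the whole key array once, flipping each quaternion to lie in the same hemisphere as its predecessor. A plain-Python sketch of that idea (quaternions as (x, y, z, w) tuples; not DirectXMath code):

```python
def align_chain(quats):
    """Flip each quaternion onto the same hemisphere as its predecessor.

    Flipping the sign of all four components changes nothing visually,
    but keeps slerp/squad taking the short arc when the key window shifts.
    """
    out = [quats[0]]
    for q in quats[1:]:
        prev = out[-1]
        dot = sum(a * b for a, b in zip(prev, q))
        # negative dot -> q is on the opposite hemisphere from prev
        out.append(tuple(-c for c in q) if dot < 0 else q)
    return out
```

In the DirectXMath setup above, the equivalent would be aligning the stored rotations before ever calling XMQuaternionSquadSetup, so every window of four keys is already consistent.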
Algorithm Voxel meshes and degenerate triangles.
Gnollrunner posted a topic in General and Gameplay Programming
So I'm running into this problem where I sometimes get little sliver polygons when generating meshes with marching prisms. After thinking about it, it's even possible to get single-point triangles. The main issue is that when I try to calculate normals for such triangles, it's basically inaccurate or sometimes even impossible. I came up with this idea (which I'm sure is not new) of simply collapsing one edge of any triangle where this happens. This would necessarily destroy the triangles on the other side of the edge as follows: I think this should work OK, but before implementing it I was wondering if there is some other standard way of doing this, especially when dealing with marching cubes or algorithms of that nature.
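Whatever repair strategy is used, the first step is detecting the slivers. Since the normal comes from the cross product of two edges, a near-zero cross product is exactly the "inaccurate or impossible" case; a plain-Python sketch of that test (vertices as 3-tuples; the epsilon is an assumption to tune per mesh scale):

```python
def is_degenerate(a, b, c, eps=1e-9):
    """True if triangle abc is a sliver or a point (area ~ 0).

    The cross product of two edges has length equal to twice the
    triangle's area; when it vanishes, the normal is undefined.
    """
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    area = 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5
    return area < eps
```

The edge-collapse idea then becomes: for each triangle flagged by this test, merge its shortest edge's two vertices and drop the (now zero-area) triangles sharing that edge, which is essentially the standard mesh-decimation edge-collapse operation restricted to degenerates.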
This is my first post to this forum and I have tried to search for an answer before posting. I do apologize in advance if it has already been answered. I am coding a rather simple "breakout" type of game but need rather accurate collision, also for fast-moving objects. Basically the moving object is always a circle, and the objects that change the directions of the circles are always AABBs. I know how to calculate the closest point from a circle to an AABB, and I know how to use line segment to AABB (slabs) collision checks. My idea is, from the current circle position, to add the velocity vector and put a new circle in the new position, and use line segments parallel with the velocity vector for the "edges" between the original position and the displaced position. However, my question is how to get the fraction of the movement to the collision accurately, so that within the same time step I can go to the intersection and, for the rest of the movement within that time step, move in the new direction. The player will be able to increase the speed, so the movement between time steps might be quite big. With a pure slab test I would get the fraction, but it would be hard to describe the circle at the end of the sweep with pure line segments without getting too many and getting poor performance. Would it be a decent solution to do a binary search on the velocity vector until it is not colliding but the closest point is within an allowed maximum? Are there better algorithms to accomplish this?
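The binary-search idea is workable and cheap, since the circle-vs-AABB overlap test is just the closest-point distance. A plain-Python sketch (my own function names; note the caveat that this naive version assumes the end position still overlaps the box, so a step long enough to tunnel clean through would additionally need a swept/slab pre-check):

```python
def closest_dist_sq(cx, cy, box):
    """Squared distance from a point to an AABB given as (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    dx = max(xmin - cx, 0.0, cx - xmax)
    dy = max(ymin - cy, 0.0, cy - ymax)
    return dx * dx + dy * dy

def time_of_impact(pos, vel, radius, box, iters=32):
    """Bisect t in [0, 1] for the first contact of a moving circle with an AABB.

    Returns None when even the full step does not reach the box.
    Assumes the circle starts the step outside the box.
    """
    def hit(t):
        return closest_dist_sq(pos[0] + vel[0] * t, pos[1] + vel[1] * t,
                               box) <= radius * radius
    if not hit(1.0):
        return None
    lo, hi = 0.0, 1.0
    for _ in range(iters):      # 32 halvings -> ~1e-10 precision on t
        mid = 0.5 * (lo + hi)
        if hit(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

With the fraction t in hand, you advance the circle to t, reflect the remaining (1 - t) of the velocity off the contacted face, and repeat within the same time step until the motion is used up.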

One of the main goals for QLMesh was to add some new formats I have been working with quite often, like Photoshop files or BDF fonts. For 3D it is the LDraw formats and DAZ Studio models. LDraw is one of my favourites. I am currently working on extending Assimp to support .ldr and .mpd files. One of the major challenges is actually not the drawing but embedding the library definitions into the plugin. The original library is about 250MB (compressed to ~40MB). That's quite large for a QuickLook plugin. I started to work on some heavy compression/optimisation and the current result is: -rw-r--r--  1 piecuchp  staff   40M May 12 17:18 parts.db -rw-r--r--  1 piecuchp  staff  2.2M May 12 17:18 parts.db.gz That's much better. 2MB can be easily embedded into the plugin, e.g. using an assembler module like this: bits 64 section .rodata global _ldrawlib global _ldrawlib_end global _ldrawlib_size _ldrawlib: incbin "parts.db.gz" _ldrawlib_end: _ldrawlib_size: dd $-_ldrawlib and later built with e.g. nasm: /opt/local/bin/nasm -f macho64 ldraw_lib.asm -o ldraw_lib.o PS1 Sometimes less is more. Working on reading the gzip stream, I had to remove one of the compression optimisations. The uncompressed file is slightly bigger, but the compressed one much smaller: -rw-r--r--  1 piecuchp  staff   41M Jun 17 12:03 parts.db -rw-r--r--  1 piecuchp  staff  1.5M Jun 17 12:03 parts.db.gz PS2 Sadly, this is not the end of the story. I had to increase the precision of the float numbers in the database (it is now 17 bits - sign:8bit:8bit), which increased the size but also significantly affected the compression ratio: -rw-r--r--  1 piecuchp  staff  67M Jul 11 08:55 parts.db -rw-r--r--  1 piecuchp  staff  41M Jul 11 08:55 parts.db.gz Seems like I'm gonna have to live with such a database for a while.

Algorithm Game collectible highlighting
GameEngineer_gi posted a topic in General and Gameplay Programming
Games often highlight collectibles with halos or, in the case of Wolfenstein, with a cycling white stripe. It's hard to see here with a still image, but picture the white stripe moving across the object texture. I assume it's done in the collectible's shader code, and perhaps it's much simpler than I thought, but am I on the right track here? (images from "Wolfenstein New Colossus")


Algorithm Best solution to initialise resources and dependencies in a game engine?
Martin Brentnall posted a topic in General and Gameplay Programming
A game engine loads a game, and a game contains many resources (textures, models, sounds, scripts, game objects, etc.), and each resource may be dependent on other resources (e.g. a game object might require a model and a script). So let's say resource B requires resource A. How can we be certain that resource A is available at the moment resource B is loaded? Here are the possible solutions I've tried: 1. Only allow resource types to be dependent one-way. A game object may be dependent on a model, but a model may not be dependent on a game object. If this is strictly enforced, then we can always load the models first, so we know all the models are available once we start loading the game objects that might use them. Maybe this is all it takes for a lot of games; maybe my game just has weird requirements or a strange technical design, but I've ended up with a lot of interdependency between various resource types that makes this approach impossible. 2. Sort resources in order of dependency. If resources are sorted so that a resource is only saved after its dependencies are saved, then when the project is loaded, the dependencies of a resource are always loaded before the resource that needs them. This can be a pain to implement. Aside from that, in practice I wrote a lot of my game project file by hand because my tools weren't yet developed, so I constantly had to manually sort resources. I'm also not a fan of requiring order in data files unless it serves a functional purpose (e.g. to define Z-order sorting of game objects in a 2D game). 3. Use "proxy" resources. If a resource isn't available yet, a "proxy" implementation of that resource is returned, and a reference to the real thing is added to the proxy object once the missing resource becomes available. I started doing this when I got tired of manually sorting resources as my game grew, but I hated having a lot of "proxy" classes strewn across my engine.
I also doubt that constantly calling through proxy objects does wonders for performance either. 4. Repeating initialisation. This idea was to have the initialisation function on each resource return false to report lack of resource availability, then the initialisation loop would just repeat until every resource initialisation returned true. It worked, but I didn't really like this solution, since repeating initialisation actions in some resources could possibly lead to bugs or unintended effects if the repetition wasn't anticipated, which made it feel very error-prone. 5. Multi-phase initialisation. Resources are loaded in multiple phases, e.g.: all resources are created in the first phase, then all resources are initialised in the second phase. This is my current approach. It sounds very simple, but I've found it somewhat more complicated in practice. There are actually five phases in my current engine: Resources such as textures, models, and game objects are registered to the engine. Game object instances (e.g. player objects) are created and registered to the engine. This is a separate step because the context in which game object instances exist is dependent on a world resource that must be known before the world contents can be parsed. The game object instances are also regarded as resources (mostly to be used by event scripts). All loaded resources are initialised with any required resource references. The actual game content is loaded, such as terrain, pickups, enemies, player, etc.; by this point, all types of game object, instances, and other resources have been fully initialised. Any initialisation requiring OpenGL is performed (e.g. textures, models, etc.). In order to enable the screen to continue rendering while the previous phases are performed (which may take several seconds), the loading is performed on a second thread. But since OpenGL functions can only be called on the main thread, these operations must be deferred until this phase.
So the order of resources no longer matters, proxy classes aren't required, and we have full flexibility to allow any resource type to reference any other resource type. The downside is that each relevant phase must be implemented for each resource, and perhaps this isn't very intuitive for an unfamiliar developer (the engine is intended for open source distribution eventually, so I think the API design is quite important). I guess the use of multiple phases also makes the loading time slightly longer than the first three solutions, but I haven't actually measured a difference. Anyway, I originally defined interface functions to be called for each phase, but in the interest of simplicity and brevity, I've settled on a solution that uses std::function callbacks, which looks something like this (extreme simplification): /** * A texture resource that is procedurally generated using two colour resources. */ class MyTexture : public ITexture { private: // Colours used by the texture. IColour* cColourA; IColour* cColourB; GLuint cTextureId; // etc. public: MyTexture(const DOMNode& node, IProjectRegistry* registry) { // Phase 1: Make "this" object available to the project as a named texture. registry->registerTexture(node.getAttribute("name"), this); // Phase 2: Only applicable to world and game object resources. // Phase 3: Callback to initialise the texture with named colour resources registry->initReferences([this, &node](IResources* resources) { cColourA = resources->getColour(node.getAttribute("colourA")); cColourB = resources->getColour(node.getAttribute("colourB")); }); // Phase 4: Only applicable to world and game object resources. // Phase 5: Callback to make sure OpenGL stuff happens in the main thread. registry->initMainThread([this]() { // Do OpenGL stuff (allocate texture, render-to-texture, etc.) }); } /***********************\ * Implements ITexture * \***********************/ void set() { glBindTexture(GL_TEXTURE_2D, cTextureId); } }; Actually, I don't normally include the "phase" comments above, since I find it clear enough without them. /** * A texture resource that is procedurally generated using two colour resources. */ class MyTexture : public ITexture { private: // Colours used by the texture. IColour* cColourA; IColour* cColourB; GLuint cTextureId; // etc. public: MyTexture(const DOMNode& node, IProjectRegistry* registry) { registry->registerTexture(node.getAttribute("name"), this); registry->initReferences([this, &node](IResources* resources) { cColourA = resources->getColour(node.getAttribute("colourA")); cColourB = resources->getColour(node.getAttribute("colourB")); }); registry->initMainThread([this]() { // Do OpenGL stuff (allocate texture, render-to-texture, etc.) }); } /***********************\ * Implements ITexture * \***********************/ void set() { glBindTexture(GL_TEXTURE_2D, cTextureId); } }; So based on my current experience, I'm pretty satisfied with this. But I still wonder if there are better or standard ways to do this? Is this a common problem? How does your game (engine) manage resource initialisation? Do you restrict resource dependencies? Also, is this a good approach with regards to the API design of a game engine intended for open source distribution?
Hi all. I am trying to write a collision detection & response routine for my 3D game. I am approximating my player's geometry with an ellipsoid, and I am detecting the swept ellipsoid's collision with convex shapes. If there is an intersection, I do a bisection test to get the time of impact. However, now I want to get the contact normal, but I'm not sure how to do that. I tried using the last search direction as the separating vector, which turns out to be inaccurate. Most people suggest EPA, but I don't know if it works with quadric surfaces. Since one of the objects is an analytical shape in my case, is there any simpler way to get the contact normal, or is EPA the way to go? Thanks a lot!
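Since the ellipsoid is analytic, one simpler option (an assumption about your setup, not a full substitute for EPA) is: at the bisected time of impact, find the contact point on the ellipsoid surface and take the gradient of the ellipsoid's implicit function there, which is the outward normal. A plain-Python sketch for an axis-aligned ellipsoid in its local frame:

```python
def ellipsoid_normal(p, radii):
    """Outward unit normal of an axis-aligned ellipsoid at surface point p.

    For f(x, y, z) = x^2/a^2 + y^2/b^2 + z^2/c^2 - 1, the gradient
    (2x/a^2, 2y/b^2, 2z/c^2) is perpendicular to the surface and points out.
    """
    a, b, c = radii
    g = (2 * p[0] / (a * a), 2 * p[1] / (b * b), 2 * p[2] / (c * c))
    n = (g[0] ** 2 + g[1] ** 2 + g[2] ** 2) ** 0.5
    return (g[0] / n, g[1] / n, g[2] / n)
```

For a world-space oriented ellipsoid you would transform the contact point into the ellipsoid's local frame, evaluate this, then rotate the normal back; the equivalent trick is the classic "unit sphere space" approach where the ellipsoid is scaled to a sphere and the normal is just (contact point - center) before un-scaling with the inverse-transpose.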

Algorithm Something's wrong with my Game of Life algorithm, and I just can't figure out what
Master thief posted a topic in General and Gameplay Programming
I've been trying different algorithms, and just yesterday I adapted one from the Graphics Programmer's Black Book (the chapter 17), and it works... but doesn't wrap around the edges like the other algorithms do. It does vertically, but not horizontally. I'm doing the wrapping by using an extra outer border of cells all around, that each gets a copy of the opposite inner border. I've been trying for hours to figure out why it isn't wrapping around but I got nowhere so far. Meanwhile I also burned out. If someone else could take a look and see if they could figure out what's wrong, I'd appreciate it a lot. A fresh pair of eyes might see better than mine. I don't know if I should paste the code right here, as it's a little long (some 200 lines), so meanwhile it's in this repo right here. It's a simple console app, works on the windows console (don't know about the linux terminal). There's two generation algorithms there for comparison, and you can easily switch using the algo variable. The SUM works fine, the BITS is the one that doesn't. Not sure what else to say. I tried commenting the code for clarity. Well, if someone has 5 minutes to spare, I'll greatly appreciate it. Thanks in advance.  
EDIT: A specific symptom that I noticed (that I didn't think to mention earlier) is that when using the BITS algorithm (which uses bit manipulation, hence the name of the flag), the cells on the outer edges don't seem to be affected by this part of the code that kills a cell, or by the equivalent part that revives a cell (specifically the "[i-1]" and "[i+1]" lines, which should affect the cells to the sides of the cell being considered): # if it's alive if cellmaps[prev][j][i] & 0x01: # kill it if it doesn't have 2 or 3 neighbors if (n != 2) and (n != 3): cellmaps[curr][j][i] &= ~0x01 alive_cells -= 1 # inform neighbors this cell is dead cellmaps[curr][ j-1 ][ i-1 ] -= 2 cellmaps[curr][ j-1 ][ i ] -= 2 cellmaps[curr][ j-1 ][ i+1 ] -= 2 cellmaps[curr][ j ][ i-1 ] -= 2 cellmaps[curr][ j ][ i+1 ] -= 2 cellmaps[curr][ j+1 ][ i-1 ] -= 2 cellmaps[curr][ j+1 ][ i ] -= 2 cellmaps[curr][ j+1 ][ i+1 ] -= 2 The actual effect is that the leftmost and rightmost edges are always clear (actually, as depicted, the ones on the opposite side to where the actual glider is are affected, but not the ones next to the glider). When a glider approaches the edge, for example:

this edge should have 1 cell   And the opposite side should
v        like this             not be like this,   but this
V        v                     V                   v
. . . .   . . . .   . . . .   . . . .
. . # .   . . # .   . . . .   . . . .
. # . .   # # . .   . . . #   . . # #
. # # .   . # # .   . . . #   . . . #
. . . .   . . . .   . . . .   . . . .
. . . .   . . . .   . . . .   . . . .
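For comparison, the wrapping scheme described above (an outer border holding a copy of the opposite inner border) can be sketched in a few lines; the two usual pitfalls are refreshing only the rows (which matches the "wraps vertically but not horizontally" symptom) and refreshing the border after the neighbour counts have already been read. This is my own illustrative layout, not the repo's code:

```python
def copy_wrap_borders(grid):
    """Fill the 1-cell outer border so the inner field wraps toroidally.

    grid is a list of rows; the playable field is rows/cols 1..-2.
    Must be called at the start of every generation, before any
    neighbour counting reads the border cells.
    """
    h, w = len(grid), len(grid[0])
    for x in range(1, w - 1):            # top/bottom borders copy opposite inner rows
        grid[0][x] = grid[h - 2][x]
        grid[h - 1][x] = grid[1][x]
    for y in range(h):                   # left/right borders (corners included)
        grid[y][0] = grid[y][w - 2]
        grid[y][w - 1] = grid[y][1]
```

Running the columns loop over the full height after the rows loop is what fills the four corner cells correctly, which a rows-only (or rows-last) version silently gets wrong.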
Hi, my apologies if I am posting in the wrong section. I want to code a card game where the player needs to make pairs, triplets or quadruples to earn points. I need some help/guidance for the AI, to make it compute the best strategy based on the hand it draws and the cards on the table, and take decisions. If possible, I'd like to raise the difficulty by teaching the AI to count cards in some manner to predict the player's moves / mimic real players. 1. The game has 40 cards; in the 1st round 4 cards are distributed, and 3 cards for each player after that. 2. Points can be earned by making a match: 1 point = a pair, 5 points = a triplet, 6 points = a quadruple. I don't know if this is similar to some existing pattern; the ones I found were specific to some popular card games like blackjack. Thanks.
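A workable baseline AI for this kind of game is greedy expected-value: enumerate the legal moves, score the resulting matches, and pick the maximum; card counting then only refines the scores. The scoring itself is just counting equal ranks. A plain-Python sketch using the point values from the post (the function name and card representation are my own):

```python
from collections import Counter

# scoring from the post: pair = 1, triplet = 5, quadruple = 6
SCORES = {2: 1, 3: 5, 4: 6}

def hand_score(ranks):
    """Total points for all pairs/triplets/quadruples among the given ranks."""
    return sum(SCORES.get(n, 0) for n in Counter(ranks).values())
```

On top of this, a simple counting AI keeps a Counter of every rank seen so far (table + its own hand + captured cards); with 40 cards and 4 copies of each rank, 4 minus the seen count of a rank is exactly how many copies the opponents can still hold, which is what the "predict the player's moves" heuristic would weight by.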

This is a technical article about how I implemented the fluid in my game “Invasion of the Liquid Snatchers!”, which was my entry for the fifth annual "Week of Awesome" game development competition here at GameDev.net. One of the biggest compliments I’ve received about the game is when people think the fluid simulation is some kind of soft-body physics or a true fluid simulation. But it isn’t! The simulation is achieved using Box2D doing regular hard-body collisions using lots of little (non-rotating) circle-shaped bodies. The illusion of soft-body particles is achieved in the rendering. The Rendering Process Each particle is drawn using a texture of a white circle that is opaque in the center but fades to fully transparent at the circumference: These are drawn to an RGBA8888 off-screen texture (using a ‘framebuffer’ in OpenGL parlance) and I ‘tint’ to the intended color of the particle (tinting is something that LibGDX can do out-of-the-box with its default shader). It is crucial to draw each ball larger than it is represented in Box2D. Physically speaking these balls will not overlap (because it’s a hard-body simulation after all!) yet in the rendering we do need these balls to overlap and blend together. The blending is non-trivial as there are a few requirements we have to take into account:
- The RGB color channels should blend together when particles of different colors overlap.
- ... but we don’t want colors to saturate towards white.
- ... and we don’t want them to darken when we blend with the initially black background color.
- The alpha channel should accumulate additively to indicate the ‘strength’ of the liquid at each pixel.
All of that can be achieved in GLES 2.0 using this blending technique: glClearColor(0, 0, 0, 0); glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE); Putting all that together gets us a texture of lots of blurry colored balls: Next up is to contribute this to the main backbuffer as a full-screen quad using a custom shader.
The shader treats the alpha channel of the texture as a ‘potential field’; the higher the value, the stronger the field is at that fragment. The shader compares the strength of the field to a threshold: where the field strength is strong enough, we snap the alpha to 1.0 to manifest some liquid; where the field strength is too weak, we snap the alpha to 0.0 (or we could just discard the fragment) to avoid drawing anything. For the final game I went a little further and also included a small window around that threshold to smoothly blend between 0 and 1 in the alpha channel; this softens and effectively anti-aliases the fluid boundary. Here’s the shader: varying vec2 v_texCoords; uniform sampler2D u_texture; // field values above this are 'inside' the fluid, otherwise we are 'outside'. const float threshold = 0.6; // +/- this window around the threshold for a smooth transition around the boundary. const float window = 0.1; void main() { vec4 col = texture2D(u_texture, v_texCoords); float fieldStrength = col.a; col.a = smoothstep(threshold - window, threshold + window, fieldStrength); gl_FragColor = col; } This gives us a solid edge boundary where pixels are either lit or not lit by the fluid. Here is the result after we apply the shader: Things are looking a lot more liquid-like now! The way this works is that when particles come within close proximity of each other their potential fields start to add up; once the field strength is high enough the shader will start lighting up pixels between the two particles. This gives us the ‘globbing together’ effect which really makes it look like a fluid. Since the fluid is comprised of thousands of rounded shapes it tends to leave gaps against the straight-edged tilemap. So the full-screen quad is, in fact, scaled up to be just a little bit larger than the screen and is drawn behind the main scene elements. This helps to ensure that the liquid really fills up any corners and crevices.
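If you want to sanity-check the thresholding away from the GPU, the shader's alpha rule is easy to port; here is a plain-Python version of GLSL's smoothstep with the same constants as the shader above:

```python
def smoothstep(e0, e1, x):
    """GLSL-style smoothstep: 0 below e0, 1 above e1, smooth Hermite between."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def fluid_alpha(field_strength, threshold=0.6, window=0.1):
    """The shader's alpha rule: solid liquid inside, nothing outside,
    and a soft anti-aliased ramp within +/- window of the threshold."""
    return smoothstep(threshold - window, threshold + window, field_strength)
```

Plotting fluid_alpha over [0, 1] shows exactly the behaviour described: flat 0, a short S-curve around 0.6, then flat 1, which is why isolated weak particles vanish while overlapping ones glob together.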
Here is the final result: And that’s all there is for the basic technique behind it! Extra Niceties I do a few other subtle tricks which help to make the fluids feel more believable… Each particle has an age and a current speed. I weight these together into a ‘froth-factor’ value between 0 and 1 that is used to lighten the color of a particle. This means that younger or faster-moving particles are whiter than older or stationary parts of the fluid. The idea is to allow us to see particles mixing into a larger body of fluid. The stationary ‘wells’ where fluid collects are always a slightly darker shade compared to the fluid particles. This guarantees that we can see the particles ‘mixing’ when they drop into the wells. Magma particles are all different shades of dark red selected randomly at spawn time. This started out as a bug where magma and oil particles were being accidentally mixed together, but it looked so cool that I decided to make it happen deliberately! When I remove a particle from the simulation it doesn’t just pop out of existence; instead, I fade it away. This gets further disguised by the ‘potential field’ shader, which makes it look like the fluid drains or shrinks away more naturally. So, on the whole, the fading is not directly observable. Performance Optimisations As mentioned in my postmortem of the game, I had to dedicate some time to making the simulation CPU- and memory-performant: The ‘wells’ that receive the fluid are really just colored rectangles that “fill up”. They are not simulated. It means I can remove particles from the simulation once they are captured by the wells and just increment the fill-level of the well. If particles slow down below a threshold then they are turned into non-moving static bodies. Statics are not exactly very fluid-like, but they perform much better in Box2D than thousands of dynamic bodies because they don’t respond to forces.
I also trigger their decay at that point too, so they don’t hang around in this state for long enough for the player to notice. All particles will eventually decay. I set a max lifetime of 20 seconds. This is also to prevent the player from just flooding the level and cheating their way through the game. To keep Java’s garbage collector from stalling the gameplay I had to avoid doing memory allocations per-particle where possible. Mainly this is for things like allocating temporary Vector2 objects or Color objects. So I factored these out into singular long-lived instances and just (re)set their state per-particle. Note: This article was originally published on the author's blog, and is reproduced here with the author's kind permission.

Hello, I am trying to make a GeometryUtil class that has methods to draw points, lines, polygons, etc. I am trying to make a method to draw a circle. There are many ways to draw a circle. I have found two ways. The first way: public static void drawBresenhamCircle(PolygonSpriteBatch batch, int centerX, int centerY, int radius, ColorRGBA color) { int x = 0, y = radius; int d = 3 - 2 * radius; while (y >= x) { drawCirclePoints(batch, centerX, centerY, x, y, color); if (d <= 0) { d = d + 4 * x + 6; } else { y--; d = d + 4 * (x - y) + 10; } x++; //drawCirclePoints(batch,centerX,centerY,x,y,color); } } private static void drawCirclePoints(PolygonSpriteBatch batch, int centerX, int centerY, int x, int y, ColorRGBA color) { drawPoint(batch, centerX + x, centerY + y, color); drawPoint(batch, centerX - x, centerY + y, color); drawPoint(batch, centerX + x, centerY - y, color); drawPoint(batch, centerX - x, centerY - y, color); drawPoint(batch, centerX + y, centerY + x, color); drawPoint(batch, centerX - y, centerY + x, color); drawPoint(batch, centerX + y, centerY - x, color); drawPoint(batch, centerX - y, centerY - x, color); } The other way: public static void drawCircle(PolygonSpriteBatch target, Vector2 center, float radius, int lineWidth, int segments, int tintColorR, int tintColorG, int tintColorB, int tintColorA) { Vector2[] vertices = new Vector2[segments]; double increment = Math.PI * 2.0 / segments; double theta = 0.0; for (int i = 0; i < segments; i++) { vertices[i] = new Vector2((float) Math.cos(theta) * radius + center.x, (float) Math.sin(theta) * radius + center.y); theta += increment; } drawPolygon(target, vertices, lineWidth, segments, tintColorR, tintColorG, tintColorB, tintColorA); } In the render loop: polygonSpriteBatch.begin(); Bitmap.drawBresenhamCircle(polygonSpriteBatch,500,300,200,ColorRGBA.Blue); Bitmap.drawCircle(polygonSpriteBatch,new Vector2(500,300),200,5,50,255,0,0,255); polygonSpriteBatch.end(); I am trying to choose one of them.
So I thought that I should go with the one that does not involve heavy calculations and is efficient and faster. It is said that the use of floating-point numbers, trigonometric operations, etc. slows things down a bit. What do you think would be the best method to use? When I compared the code by measuring the time taken from the start of the method to the end, it showed that the second one is faster (I think I am doing something wrong here). Please help! Thank you.
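One likely reason the measurement looks wrong is that a single wall-clock sample of one call is mostly noise (and the Bresenham version issues eight draw calls per step, so the draw cost dominates the arithmetic anyway). A fairer comparison times just the point generation over many repetitions. A plain-Python sketch of both generators and a timeit harness (illustrative, not the Java code):

```python
import math
import timeit

def circle_points_trig(radius, segments):
    """Vertex-ring approach: one cos/sin pair per segment (floating point)."""
    step = 2.0 * math.pi / segments
    return [(math.cos(i * step) * radius, math.sin(i * step) * radius)
            for i in range(segments)]

def circle_points_midpoint(radius):
    """Bresenham/midpoint approach: integer-only arithmetic, 8-way symmetry."""
    pts = []
    x, y, d = 0, radius, 3 - 2 * radius
    while y >= x:
        pts.extend([(x, y), (-x, y), (x, -y), (-x, -y),
                    (y, x), (-y, x), (y, -x), (-y, -x)])
        if d <= 0:
            d += 4 * x + 6
        else:
            y -= 1
            d += 4 * (x - y) + 10
        x += 1
    return pts

# Time many repetitions instead of one call; a single run is mostly noise.
t_trig = timeit.timeit(lambda: circle_points_trig(200, 50), number=2000)
t_mid = timeit.timeit(lambda: circle_points_midpoint(200), number=2000)
```

Also note the two methods do different amounts of work: with segments=50 the trig version emits 50 vertices regardless of radius, while the midpoint version emits roughly 8 * radius / sqrt(2) points for a radius-200 circle, so "faster" depends on what quality you need, not just on integer vs floating-point math.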

Hey all, I've been trying to work out how LittleBigPlanet handles its objects for a while now. For those unaware, LittleBigPlanet has a building component where you can build 2D-ish objects (there are 2 - 16 2D layers that you can build on). There are a number of shaped brushes to do this with, from basic squares and circles to teardrops and eye shapes. There's a decent video showing this off, actually. Anyways, I've been trying to work out how this works for a while now. My current thought is that it might be along the lines of storing a list of object corners and then drawing an object within those bounds. This makes the most sense to me because the engine has a corner editor for making more advanced shapes, and because some of the restrictions in the engine are based around corners. Of course, that could also be completely wrong and it's something else entirely. What are your thoughts?

Algorithm examples of how pickups are handled
ethancodes posted a topic in General and Gameplay Programming
I'm wondering if anyone has any examples or tips/ideas on how to handle pickups in a game. The game is an Arkanoid-style game. I'm going to have at least 5 different pickup types, and they are going to be in a queue, where only one can be active at a time. Once one pickup is expended, the next one should automatically start up. Some of the pickups have an immediate effect, such as ball speed. Others will activate when the pickup is hit, but don't actually do anything until the ball hits the paddle. Those types of pickups have a limited number of shots instead of a time limit. What I'm trying to figure out is what kind of structure I should have for a system like this. I'm just curious how these things are handled in other games, especially if anyone has any examples; that would be great. Thank you!
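One common structure for this is a plain FIFO queue of pickup records where only the front record is "active", with two expiry paths: a timer ticked every frame, or a shot counter decremented on paddle hits. A plain-Python sketch of that shape (all names are my own illustration, not from any engine):

```python
from collections import deque

class PickupQueue:
    """Pickups queue up; only the front one is active. A pickup expires
    either by time limit or by a number of paddle hits (shots), whichever
    kind it is; the next pickup then starts automatically."""

    def __init__(self):
        self.queue = deque()

    def add(self, name, time_left=0.0, shots_left=0):
        self.queue.append({"name": name, "time": time_left,
                           "shots": shots_left})

    @property
    def active(self):
        return self.queue[0]["name"] if self.queue else None

    def tick(self, dt):
        """Call once per frame; only timed pickups count down."""
        if self.queue and self.queue[0]["time"] > 0:
            self.queue[0]["time"] -= dt
            if self.queue[0]["time"] <= 0:
                self.queue.popleft()      # expired -> next pickup starts

    def on_paddle_hit(self):
        """Call when the ball hits the paddle; only shot-based pickups care."""
        if self.queue and self.queue[0]["shots"] > 0:
            self.queue[0]["shots"] -= 1
            if self.queue[0]["shots"] == 0:
                self.queue.popleft()
```

In a real game each record would also carry apply/remove callbacks (e.g. set ball speed on activation, restore it on expiry), which is essentially the same queue with an effect object per entry instead of a bare name.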
Algorithm STL and use, anyone else help me?
scullsold posted a topic in General and Gameplay Programming
Hi, I read some tutorials on the STL, as many people on this forum say it's faster than most self-written container classes, and somehow I can't even declare a list in VC .NET 2003... the compiler says: Compiling... VertexManager.cpp c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2143: syntax error : missing ';' before '<' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2238: unexpected token(s) preceding ';' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2143: syntax error : missing ';' before '<' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2238: unexpected token(s) preceding ';' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(35) : error C2143: syntax error : missing ';' before '<' c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(35) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers my code: class CVertexManager { private: list<VertListEntry> VertList; list<VertGroup> VertGroup; list<int> TextureChange; CVertexManager(); public: ~CVertexManager(); void addEntry(VertListEntry Entry); static CVertexManager& Instance() { static CVertexManager TheOneAndOnly; return CVertexManager; } }; btw what does the list.insert function want as the first parameter? It says something with iterator... what is that? Can I just pass an int as the number where I want to have the new entry? regards, m4gnus
WebGL How to get RGB colors from image (with glsl)
Mihumihu posted a topic in Graphics and GPU Programming
Hi, I'm trying to solve a problem where I need to get all the colors from an image. I see only one way: walk through the raster data in a loop and collect all the bytes, but I'm sure there is a better way to get the colors from an image. I'm thinking about some sort of collecting the colors into a result texture... Is this an ordinary situation? Could you help me? I didn't find anything on the internet... Thanks. 
Simple organic and brute force dungeon generation
thecheeselover posted a blog entry in 3D, AI, procedural generation and black jack
Subscribe to our subreddit to get all the updates from the team! Last month, I made a pretty simple dungeon generator algorithm. It's an organic brute-force algorithm, in the sense that the rooms and corridors aren't carved into a grid and that it stops when an area doesn't fit in the graph. Here's the algorithm:

1. Start from the center (0, 0) in 2D
2. Generate a room
3. Choose a side to extend to
4. Attach a corridor to that side
5. If it doesn't fit, stop the generation
6. Attach a room at the end of the corridor
7. If it doesn't fit, stop the generation
8. Repeat steps 3 to 7 until enough rooms are generated

It allowed us to test out our pathfinding algorithm (A* & string pulling). Here are some pictures of the output in 2D and 3D : 
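The loop above can be sketched like this. Room and corridor sizes, and the use of `java.awt.Rectangle` for the "does it fit" overlap test, are placeholder assumptions, not the generator's actual code:

```java
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the organic brute-force generator: grow a room-corridor-room
// chain from the center and stop as soon as a new area no longer fits.
public class OrganicDungeon {
    final List<Rectangle> areas = new ArrayList<>();
    final Random rng;

    OrganicDungeon(long seed) { rng = new Random(seed); }

    boolean fits(Rectangle candidate) {
        for (Rectangle placed : areas)
            if (placed.intersects(candidate)) return false; // doesn't fit
        return true;
    }

    // Returns the number of rooms actually generated.
    int generate(int wantedRooms) {
        Rectangle room = new Rectangle(-5, -5, 10, 10); // steps 1-2: room at (0, 0)
        areas.add(room);
        int rooms = 1;
        while (rooms < wantedRooms) {
            int side = rng.nextInt(4); // step 3: 0=east 1=west 2=south 3=north
            Rectangle corridor, next;
            if (side < 2) { // horizontal: 6x2 corridor, then a 10x10 room
                int cx = side == 0 ? room.x + room.width : room.x - 6;
                corridor = new Rectangle(cx, room.y + room.height / 2 - 1, 6, 2);
                int nx = side == 0 ? corridor.x + 6 : corridor.x - 10;
                next = new Rectangle(nx, corridor.y + 1 - 5, 10, 10);
            } else {        // vertical: 2x6 corridor, then a 10x10 room
                int cy = side == 2 ? room.y + room.height : room.y - 6;
                corridor = new Rectangle(room.x + room.width / 2 - 1, cy, 2, 6);
                int ny = side == 2 ? corridor.y + 6 : corridor.y - 10;
                next = new Rectangle(corridor.x + 1 - 5, ny, 10, 10);
            }
            if (!fits(corridor)) break; // steps 4-5: attach corridor or stop
            areas.add(corridor);
            if (!fits(next)) break;     // steps 6-7: attach room or stop
            areas.add(next);
            rooms++;
            room = next;                // step 8: repeat from the new room
        }
        return rooms;
    }
}
```

Note that `Rectangle.intersects` only reports strictly positive overlap, so a corridor touching a room edge still "fits", which is exactly what attaching needs.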
Subscribe to our subreddit to get all the updates from the team! A friend and I are making a roguelite retro procedural game. As in many procedural roguelite games, it will have rooms to complete but also the notion of zones. The difference between a zone and a room is that a zone is open air whilst a room is not. Rooms are connected mainly by corridors while zones are mostly naturally connected / separated by rivers and mountains. Because we want levels with zones to be generated, we need to tame the beast that is procedural generation. How can we generate each zone itself and also clearly divide them? Until now, I had only been using the Java noise library called Joise, which is the Java community port of JTippetts' Accidental Noise Library. I needed the zone data to be generated with basis function modules, i.e. Perlin noise, but in contrast I needed a more structured approach for the zone division. The Joise library does have a cell noise module that is a Worley noise. It looks like this depending on its 4 parameters (1, 0, 0, 0): Using math modules, I was able to morph that noise into something that looks like a Voronoi diagram. Here's what a Voronoi diagram should look like (never mind the colors; the important parts are the cell edges and the cell centers): A more aesthetic version: The Worley noise that I had morphed into a Voronoi-like diagram did not include the cell centers, did not include metadata about the edges and was not deterministic enough, in the sense that sometimes the edges would be around 60 pixels wide. I then searched for a Java Voronoi library and found this one called VoronoiJava. With this, I was able to generate simple Voronoi diagrams: Relaxed: 1 iteration Relaxed: 2 iterations The relaxation concept is actually Lloyd's algorithm, fortunately included within the library. Now how can I make that diagram respect my level generation mechanics? 
Well, if we can limit an approximate number of cells within a certain resolution, that would be a good start. The biggest problem here is that the relaxation reduces the number of cells within a restricted resolution (contrary to the global resolution), and so we need to keep that in mind. To do that, I define a constant for the total number of sites / cells. Here's my code:

private Voronoi createVoronoiDiagram(int resolution) {
    Random random = new Random();
    Stream<Point> gen = Stream.generate(() -> new Point(random.nextDouble() * resolution, random.nextDouble() * resolution));
    return new Voronoi(gen.limit(VORONOI_SITE_COUNT).collect(Collectors.toList())).relax().relax().relax();
}

A brief pseudocode of the algorithm would be the following:

1. Create the Voronoi diagram
2. Find the centermost zone
3. Select X number of zones while there are zones that respect the selection criteria
4. Draw the border map
5. Draw the smoothed border map

The selection criteria are applied for each edge that is connected to only one selected zone. Here are the selection criteria:

- Is connected to a closed zone, i.e. all its edges form a polygon
- Does have two vertices
- Is inclusively within the resolution's boundaries

Here's the result of a drawn border map! In this graph, I have a restricted number of cells that follow multiple criteria, and I know each edge and each cell's center point. To draw the smoothed border map, the following actions must be taken: emit colors from already drawn pixels and then apply a Gaussian blur. Personally, I use the JH Labs Java Image Filters library for the Gaussian blur. With color emission only: With color emission and a Gaussian blur: You may ask yourself why we have created a smoothed border map. There's a simple reason for this, which is that we want the borders to be gradual instead of abrupt. Let's say we want rivers or streams between zones. 
This gradual border will allow us to progressively increase the depth of the river, making it look more natural in contrast with the adjacent zones. All that's left is to flood each selected cell and apply that to a zone map.
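Flooding each selected cell can be done with a plain flood fill. The sketch below assumes a grid encoding (0 = empty, -1 = border pixel) that is my own illustration, not the project's actual data layout:

```java
import java.util.ArrayDeque;

// BFS/DFS flood fill: stamp a zone id over every pixel reachable from a
// cell's center without crossing a border pixel (-1) or leaving the grid.
public class ZoneFlood {
    public static int flood(int[][] map, int startX, int startY, int zoneId) {
        int w = map.length, h = map[0].length;
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{startX, startY});
        int filled = 0;
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int x = p[0], y = p[1];
            // skip out-of-bounds, border, and already-stamped pixels
            if (x < 0 || y < 0 || x >= w || y >= h || map[x][y] != 0) continue;
            map[x][y] = zoneId;
            filled++;
            stack.push(new int[]{x + 1, y});
            stack.push(new int[]{x - 1, y});
            stack.push(new int[]{x, y + 1});
            stack.push(new int[]{x, y - 1});
        }
        return filled; // number of pixels claimed by this zone
    }
}
```

Running this once per selected cell, seeded at the cell's center with a distinct zone id, produces the zone map.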

Pathfinding Navigation Mesh : Wall Collision Avoidance
thecheeselover posted a blog entry in 3D, AI, procedural generation and black jack
Subscribe to our subreddit to get all the updates from the team!

Introduction
In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate onto a procedurally generated terrain. To do so, only the A* and string pulling algorithms were used until our last agile sprint. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe how I achieved a simple way for agents to not walk into walls.

Configuration
- A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
- Navigation cells and their: polygonal edges, connections (other cells), shared edges (the line intersecting between two connected cells), centroids and normals.
- An A* and string pulled (not tested without string pulling) generated path consisting of waypoints on the navigation mesh.

The Algorithm
The agent is the pink low-poly humanoid and the final destination is the flag. The ideal (yet unoptimized) algorithm would be to cast an oriented rectangle between each pair of consecutive waypoints, where its width is two times the agent's radius. Think of the agent's center position being in the middle of the width. Anyway, this algorithm is too complicated, too long to develop for our game, too big for nothing, and so I thought about another algorithm, which has its drawbacks actually. However, it's more suited for our game. Psss, check this article if you haven't seen it yet.

The algorithm is the following:

1. For each waypoint, pick the current one and the next one until the next one is the last.
2. Iterate over the current navigation cell's edges; the current cell is defined by the agent's 3D position. To do that, you need a spatial and optimized way to determine the closest cell to a 3D point. Our way to do it is to first have an octree to partition the navigation mesh. After that, get all the cells that are close to the point plus an extra buffer. To find the cell that is the closest to the point, for each picked cell, cast a projection of the position onto each one of them. This can be done using their centroids and normals. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the smallest one.
3. Convert each edge to a 2D line by discarding the Y component (UP vector).
4. For each side, left and right (which are defined by the agent's position and direction towards the next waypoint), cast a 2D line that starts from the agent's position, goes towards one of the two perpendicular directions relative to the direction to the next waypoint, and has a length of the defined radius.
5. If there's an intersection on a connection and it's on the shared part of the connection, then continue with the connected cell's edges.
6. If there are any intersections other than #5, create a new waypoint before the next waypoint. This new one's position is defined by the intersection's position translated by a length of two times the radius and projected towards the agent's current direction as a 2D line. The same translation is applied to the next waypoint.
7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going towards the same direction as the agent and of the same length as the distance between the current and next waypoint.
8. Check for #5.
9. If there is an intersection on a connection and it's on the unshared part of the connection, then do #6 (no if).
10. If there's an intersection on a simple edge, then translate the next waypoint as described in #6.

Here's a video of the algorithm in action : 
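The closest-cell projection described in step 2 can be sketched as follows. Plain double[] vectors stand in for the engine's types, and the snap-into-polygon step is left as a comment because it depends on the cell representation:

```java
// Find the candidate cell closest to a point: project the point onto each
// cell's plane (via centroid and unit normal) and keep the cell with the
// shortest projection vector.
public class ClosestCell {
    public static class Cell {
        public final double[] centroid, normal; // normal assumed unit length
        public Cell(double[] centroid, double[] normal) {
            this.centroid = centroid;
            this.normal = normal;
        }
    }

    static double[] projectOntoPlane(double[] p, Cell c) {
        double d = 0; // signed distance from p to the cell's plane
        for (int i = 0; i < 3; i++) d += (p[i] - c.centroid[i]) * c.normal[i];
        double[] q = new double[3];
        for (int i = 0; i < 3; i++) q[i] = p[i] - d * c.normal[i];
        return q; // a full version would snap q into the cell's polygon here
    }

    public static Cell closest(double[] p, Cell[] candidates) {
        Cell best = null;
        double bestSq = Double.POSITIVE_INFINITY;
        for (Cell c : candidates) {
            double[] q = projectOntoPlane(p, c);
            double sq = 0; // squared length of the projection vector p - q
            for (int i = 0; i < 3; i++) sq += (p[i] - q[i]) * (p[i] - q[i]);
            if (sq < bestSq) { bestSq = sq; best = c; }
        }
        return best;
    }
}
```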
Decals in tiled forward render (Forward+)
Nikita Sidorenko posted a topic in Graphics and GPU Programming
I'm making a renderer just for fun (C++, OpenGL) and want to add decal support. Here's what I found:

A couple of slides from Doom: http://advances.realtimerendering.com/s2016/Siggraph2016_idTech6.pdf
Decals, but deferred: http://martindevans.me/gamedevelopment/2015/02/27/DrawingStuff… spaceDecals/
No implementation details here: https://turanszkij.wordpress.com/2017/10/12/forwarddecalrendering/

As I see it, there should be a list of decals for each tile, same as for light sources. But what to do next? Let's assume that all decals are packed into a spritesheet, and a decal will substitute the diffuse and normal.

- What data should be stored for each decal on the GPU?
- The articles above describe decals as OBBs. Why OBB if decals seem to be flat?
- How to actually render a decal during the object render pass (since it's forward)? Is it projected somehow? I don't understand this part completely.

Are there any papers for this topic? 
SwingTwist Interpolation (Sterp), An Alternative to Slerp
MingLun "Allen" Chou posted a topic in Math and Physics
Here is the original blog post. Edit: Sorry, I can't get embedded LaTeX to display properly. The pinned tutorial post says I have to do it in plain HTML without embedded images? I actually tried embedding prerendered equations and they seemed fine when editing, but once I submitted the post it just turned into a huge mess. So... until I can find a proper way to fix this, please refer to the original blog post for formatted formulas. I've replaced the original LaTeX mess in this post with something at least more readable. Any advice on fixing this is appreciated. This post is part of my Game Math Series. Source files are on GitHub. Shortcut to sterp implementation. Shortcut to code used to generate animations in this post.

An Alternative to Slerp
Slerp, spherical linear interpolation, is an operation that interpolates from one orientation to another, using a rotational axis paired with the smallest angle possible. Quick note: Jonathan Blow explains here how you should avoid using slerp, if normalized quaternion linear interpolation (nlerp) suffices. Long story short, nlerp is faster but does not maintain constant angular velocity, while slerp is slower but maintains constant angular velocity; use nlerp if you're interpolating across small angles or you don't care about constant angular velocity; use slerp if you're interpolating across large angles and you care about constant angular velocity. But for the sake of using a more commonly known and used building block, the rest of this post will only mention slerp; replacing all following occurrences of slerp with nlerp would not change the validity of this post. In general, slerp is considered superior to interpolating individual components of Euler angles, as the latter method usually yields orientational sways. But sometimes slerp might not be ideal. Look at the image below showing two different orientations of a rod. 
On the left is one orientation, and on the right is the resulting orientation of rotating around the axis shown as a cyan arrow, where the pivot is at one end of the rod. If we slerp between the two orientations, this is what we get: Mathematically, slerp takes the "shortest rotational path". The quaternion representing the rod's orientation travels along the shortest arc on a 4D hypersphere. But, given the rod's elongated appearance, the rod's moving end seems to be deviating from the shortest arc on a 3D sphere. My intended effect here is for the rod's moving end to travel along the shortest arc in 3D, like this: The difference is more obvious if we compare them side-by-side: This is where swing-twist decomposition comes in.

Swing-Twist Decomposition
Swing-twist decomposition is an operation that splits a rotation into two concatenated rotations, swing and twist. Given a twist axis, we would like to separate out the portion of a rotation that contributes to the twist around this axis, and what's left behind is the remaining swing portion. There are multiple ways to derive the formulas, but this particular one by Michaele Norel seems to be the most elegant and efficient, and it's the only one I've come across that does not involve any use of trigonometry functions. I will first show the formulas now and paraphrase his proof later: Given a rotation represented by a quaternion R = [W_R, vec{V_R}] and a twist axis vec{V_T}, combine the scalar part from R with the projection of vec{V_R} onto vec{V_T} to form a new quaternion: T = [W_R, proj_{vec{V_T}}(vec{V_R})]. We want to decompose R into a swing component and a twist component. Let S denote the swing component, so we can write R = S T. The swing component is then calculated by multiplying R with the inverse (conjugate) of T: S = R T^{-1}. Beware that S and T are not yet normalized at this point. It's a good idea to normalize them before use, as unit quaternions are just cuter. 
Below is my code implementation of swing-twist decomposition. Note that it also takes care of the singularity that occurs when the rotation to be decomposed represents a 180-degree rotation.

public static void DecomposeSwingTwist
(
  Quaternion q,
  Vector3 twistAxis,
  out Quaternion swing,
  out Quaternion twist
)
{
  Vector3 r = new Vector3(q.x, q.y, q.z);

  // singularity: rotation by 180 degrees
  if (r.sqrMagnitude < MathUtil.Epsilon)
  {
    Vector3 rotatedTwistAxis = q * twistAxis;
    Vector3 swingAxis = Vector3.Cross(twistAxis, rotatedTwistAxis);

    if (swingAxis.sqrMagnitude > MathUtil.Epsilon)
    {
      float swingAngle = Vector3.Angle(twistAxis, rotatedTwistAxis);
      swing = Quaternion.AngleAxis(swingAngle, swingAxis);
    }
    else
    {
      // more singularity:
      // rotation axis parallel to twist axis
      swing = Quaternion.identity; // no swing
    }

    // always twist 180 degrees on singularity
    twist = Quaternion.AngleAxis(180.0f, twistAxis);
    return;
  }

  // meat of swing-twist decomposition
  Vector3 p = Vector3.Project(r, twistAxis);
  twist = new Quaternion(p.x, p.y, p.z, q.w);
  twist = Normalize(twist);
  swing = q * Quaternion.Inverse(twist);
}

Now that we have the means to decompose a rotation into swing and twist components, we need a way to use them to interpolate the rod's orientation, replacing slerp.

Swing-Twist Interpolation
Replacing slerp with the swing and twist components is actually pretty straightforward. Let Q_0 and Q_1 denote the quaternions representing the rod's two orientations we are interpolating between. Given the interpolation parameter t, we use it to find "fractions" of the swing and twist components and combine them together. Such fractions can be obtained by performing slerp from the identity quaternion, Q_I, to the individual components. So we replace: Slerp(Q_0, Q_1, t) with: Slerp(Q_I, S, t) Slerp(Q_I, T, t) From the rod example, we choose the twist axis to align with the rod's longest side. 
Let's look at the effect of the individual components Slerp(Q_I, S, t) and Slerp(Q_I, T, t) as t varies over time below, swing on the left and twist on the right: And as we concatenate these two components together, we get a swing-twist interpolation that rotates the rod such that its moving end travels in the shortest arc in 3D. Again, here is a side-by-side comparison of slerp (left) and swing-twist interpolation (right): I decided to name my swing-twist interpolation function sterp. I think it's cool because it sounds like it belongs to the function family of lerp and slerp. Here's to hoping that this name catches on. And here's my code implementation:

public static Quaternion Sterp
(
  Quaternion a,
  Quaternion b,
  Vector3 twistAxis,
  float t
)
{
  Quaternion deltaRotation = b * Quaternion.Inverse(a);

  Quaternion swingFull;
  Quaternion twistFull;
  QuaternionUtil.DecomposeSwingTwist
  (
    deltaRotation,
    twistAxis,
    out swingFull,
    out twistFull
  );

  Quaternion swing = Quaternion.Slerp(Quaternion.identity, swingFull, t);
  Quaternion twist = Quaternion.Slerp(Quaternion.identity, twistFull, t);

  return twist * swing;
}

Proof
Lastly, let's look at the proof for the swing-twist decomposition formulas. All that needs to be proven is that the swing component S does not contribute to any rotation around the twist axis, i.e. the rotational axis of S is orthogonal to the twist axis. 
Let vec{V_{R_para}} denote the component of vec{V_R} parallel to vec{V_T}, which can be obtained by projecting vec{V_R} onto vec{V_T}:

vec{V_{R_para}} = proj_{vec{V_T}}(vec{V_R})

Let vec{V_{R_perp}} denote the component of vec{V_R} orthogonal to vec{V_T}:

vec{V_{R_perp}} = vec{V_R} - vec{V_{R_para}}

So the scalar-vector form of T becomes:

T = [W_R, proj_{vec{V_T}}(vec{V_R})] = [W_R, vec{V_{R_para}}]

Using the quaternion multiplication formula, here is the scalar-vector form of the swing quaternion:

S = R T^{-1}
  = [W_R, vec{V_R}] [W_R, -vec{V_{R_para}}]
  = [W_R^2 - vec{V_R} ‧ (-vec{V_{R_para}}), vec{V_R} X (-vec{V_{R_para}}) + W_R vec{V_R} + W_R (-vec{V_{R_para}})]
  = [W_R^2 - vec{V_R} ‧ (-vec{V_{R_para}}), vec{V_R} X (-vec{V_{R_para}}) + W_R (vec{V_R} - vec{V_{R_para}})]
  = [W_R^2 - vec{V_R} ‧ (-vec{V_{R_para}}), vec{V_R} X (-vec{V_{R_para}}) + W_R vec{V_{R_perp}}]

Take notice of the vector part of the result:

vec{V_R} X (-vec{V_{R_para}}) + W_R vec{V_{R_perp}}

This is a vector parallel to the rotational axis of S. Both vec{V_R} X (-vec{V_{R_para}}) and vec{V_{R_perp}} are orthogonal to the twist axis vec{V_T}, so we have shown that the rotational axis of S is orthogonal to the twist axis. Hence, we have proven that the formulas for S and T are valid for swing-twist decomposition.

Conclusion
That's all. Given a twist axis, I have shown how to decompose a rotation into a swing component and a twist component. Such decomposition can be used for swing-twist interpolation, an alternative to slerp that interpolates between two orientations, which can be useful if you'd like some point on a rotating object to travel along the shortest arc. I like to call such interpolation sterp. Sterp is merely an alternative to slerp, not a replacement. Also, slerp is definitely more efficient than sterp. Most of the time slerp should work just fine, but if you find unwanted orientational sway on an object's moving end, you might want to give sterp a try. 
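The orthogonality argument above can also be checked numerically. This sketch uses plain double[] quaternions (w, x, y, z), not the Unity/DirectXMath types used elsewhere in this thread:

```java
// Numerical check of the swing-twist proof: the swing quaternion's rotation
// axis (its vector part) is orthogonal to the twist axis, and swing * twist
// recomposes the original rotation.
public class SwingTwistCheck {
    // Hamilton product of quaternions stored as {w, x, y, z}.
    static double[] mul(double[] a, double[] b) {
        return new double[] {
            a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
            a[0]*b[1] + b[0]*a[1] + a[2]*b[3] - a[3]*b[2],
            a[0]*b[2] + b[0]*a[2] + a[3]*b[1] - a[1]*b[3],
            a[0]*b[3] + b[0]*a[3] + a[1]*b[2] - a[2]*b[1]
        };
    }

    static double[] normalize(double[] q) {
        double n = Math.sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
        return new double[]{q[0]/n, q[1]/n, q[2]/n, q[3]/n};
    }

    static double[] conjugate(double[] q) {
        return new double[]{q[0], -q[1], -q[2], -q[3]};
    }

    // Decompose q into {swing, twist} around the unit twist axis t,
    // following T = [W_R, proj_t(V_R)] and S = R * T^-1.
    static double[][] decompose(double[] q, double[] t) {
        double d = q[1]*t[0] + q[2]*t[1] + q[3]*t[2]; // V_R . V_T
        double[] twist = normalize(new double[]{q[0], d*t[0], d*t[1], d*t[2]});
        double[] swing = mul(q, conjugate(twist));
        return new double[][]{swing, twist};
    }
}
```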
two right square prism in collision(AABB), how to check which faces are colliding?
Hanseul Shin posted a topic in Math and Physics
Thanks to @Randy Gaul, I successfully implemented cube/cube collision detection and response:

1. Subtract the centers of the two AABBs to get a 3D vector a.
2. If the x component of a is the biggest, this designates a face on each AABB.
3. If x is pointing in the same (or exact opposite) direction as the normal (of a face), the two AABBs are colliding on those faces.

But these steps only work if both colliders are cubes, because in a right square prism the half-lengths differ along each axis. I'd like to check which faces are colliding for two right square prisms. Please help! Thank you! 
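This isn't from the original thread, but a common generalization is to normalize each component of the center-to-center vector by the combined half-extents along that axis before comparing, so an elongated box doesn't bias the result towards its long axis:

```java
// Pick the collision axis (and hence the face pair) for two AABBs whose
// half-extents differ per axis: compare each component of the center
// difference scaled by the sum of half-extents along that axis.
public class AabbFace {
    // d: centerB - centerA; halfA/halfB: per-axis half-extents of each box
    public static int collisionAxis(double[] d, double[] halfA, double[] halfB) {
        int axis = 0;
        double best = -1;
        for (int i = 0; i < 3; i++) {
            double normalized = Math.abs(d[i]) / (halfA[i] + halfB[i]);
            if (normalized > best) { best = normalized; axis = i; }
        }
        // face normal points along +axis on A if d[axis] > 0, else -axis
        return axis;
    }
}
```

For cubes this reduces to the "biggest component" rule from the thread, since all the divisors are equal.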
I've been digging around online and can't seem to find any formulas for 3D mesh simplification. I'm not sure where to start but I generally want to know how I could make a function that takes in an array of vertices, indices, and a float/double for the decimation rate. And could I preserve the general shape of the object too? Thanks for the help! P.S. I was hoping to do something with Quadric Error / Quadric Edge Collapse if that's possible.

Behavior Steering behaviors: Seeking and Arriving
miimii1205 posted a blog entry in Projects Of Some Degree Of Interest
Steering behaviors are used to maneuver AI agents in a 3D environment. With these behaviors, agents are able to better react to changes in their environment. While the navigation mesh algorithm is ideal for planning a path from one point to another, it can't really deal with dynamic objects such as other agents. This is where steering behaviors can help.

What are steering behaviors?
Steering behaviors are an amalgam of different behaviors that are used to organically manage the movement of an AI agent. For example, behaviors such as obstacle avoidance, pursuit and group cohesion are all steering behaviors... Steering behaviors are usually applied in a 2D plane: it is sufficient, and easier to implement and understand. (However, I can think of some use cases that require the behaviors to be in 3D, like in games where the agents fly to move.) One of the most important of all steering behaviors is the seeking behavior. We also added the arriving behavior to make the agent's movement a whole lot more organic. Steering behaviors are described in this paper.

What is the seeking behavior?
The seeking behavior is the idea that an AI agent "seeks" to have a certain velocity (vector). To begin, we'll need to have 2 things:

- An initial velocity (a vector)
- A desired velocity (also a vector)

First, we need to find the velocity needed for our agent to reach a desired point... This is usually a subtraction of the agent's current position from the desired position. \(\overrightarrow{d} = (x_{t},y_{t},z_{t}) - (x_{a},y_{a},z_{a})\) Here, a represents our agent and t our target; d is the desired velocity. Secondly, we must also find the agent's current velocity, which is usually already available in most game engines. Next, we need to find the vector difference between the desired velocity and the agent's current velocity. It literally gives us a vector that yields the desired velocity when we add it to the agent's current velocity. We will call it the "steering velocity". 
\(\overrightarrow{s} = \overrightarrow{d} - \overrightarrow{c}\) Here, s is our steering velocity, c is the agent's current velocity and d is the desired velocity. After that, we truncate our steering velocity to a length called the "steering force". Finally, we simply add the steering velocity to the agent's current velocity.

// truncateVectorLocal truncates a vector to a given length
Vector3f currentDirection = aiAgentMovementControl.getWalkDirection();
Vector3f wantedDirection = targetPosition.subtract(aiAgent.getWorldTranslation()).normalizeLocal().setY(0).multLocal(maxSpeed);

// We steer towards our wanted direction
Vector3f steeringVector = MathExt.truncateVectorLocal(wantedDirection.subtract(currentDirection), steeringForce);
Vector3f newCurrentDirection = MathExt.truncateVectorLocal(currentDirection.addLocal(steeringVector.divideLocal(mass)), maxSpeed);

This computation is done frame by frame: this means that the steering velocity becomes weaker and weaker as the agent's current velocity approaches the desired one, creating a kind of interpolation curve.

What is the arriving behavior?
The arriving behavior is the idea that an AI agent that "arrives" near its destination will gradually slow down until it gets there. We already have a list of waypoints, returned by the navigation mesh algorithm, that the agent must cross to reach its destination. When it has passed the second-to-last waypoint, we activate the arriving behavior. When the behavior is active, we check the distance between the destination and the current position of the agent and change its maximum speed accordingly. 
// This is the initial maxSpeed
float maxSpeed = unitMovementControl.getMoveSpeed();
// It's the last waypoint
float distance = aiAgent.getWorldTranslation().distance(nextWaypoint.getCenter());
float rampedSpeed = aiAgentMovementControl.getMoveSpeed() * (distance / slowingDistanceThreshold);
float clippedSpeed = Math.min(rampedSpeed, aiAgentMovementControl.getMoveSpeed());
// This is our new maxSpeed
maxSpeed = clippedSpeed;

Essentially, we slow down the agent until it gets to its destination.

The future?
As I'm writing this, we've chosen to split the implementation of the steering behaviors and implement only the bare necessities, as we have no empirical evidence that we'll need all of them. Therefore, we only implemented the seeking and arriving behaviors, delaying the rest of the behaviors to an indeterminate time in the future. So, when (or if) we need them, we'll already have a solid and stable foundation to build upon.

More links
- Understanding Steering Behaviors: Seek
- Steering Behaviors · libgdx/gdxai Wiki
- Understanding Steering Behaviors: Collision Avoidance 
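The MathExt.truncateVectorLocal helper the snippets above rely on is not shown in the post; with plain arrays, an equivalent might look like this (the name, signature and in-place behavior are assumptions):

```java
// Clamp a 3D vector's length to maxLength, modifying it in place and
// returning it for chaining. Vectors already short enough are untouched.
public class VecUtil {
    public static double[] truncateLocal(double[] v, double maxLength) {
        double sq = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
        if (sq > maxLength * maxLength) {
            double scale = maxLength / Math.sqrt(sq);
            for (int i = 0; i < 3; i++) v[i] *= scale;
        }
        return v;
    }
}
```

Truncating (rather than normalizing and rescaling) is what lets the steering velocity shrink naturally as the current velocity approaches the desired one.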
Algorithm Javascript collision detection function isn't working
BigBadMick posted a topic in General and Gameplay Programming
Hey everybody, I'm currently working on a simple HTML5 game and my JavaScript collision detection function isn't working. The game features a little man that runs from side to side at the bottom of the screen, while a meteor falls from the sky. The function is supposed to detect a collision between meteor and man. In the routine, the top left corner of the man is at (player.x, player.y) and the top left corner of the meteor is at (meteor.x, meteor.y). The man is 25 pixels wide by 35 pixels tall. The meteor is 50 pixels wide by 50 pixels tall. Any idea where I've screwed up in this function?

// =============================================================================
// Check for a collision between the 50 x 50 meteor and the 25 wide x 35 tall
// main character
//
// Main character is drawn at 540 and is 35 tall, so the top of the character
// is at y = 540 and the bottom is at y = 575.
//
// Function returns 1 if there has been a collision between the main
// character and the meteor, otherwise it returns 0. 
// =============================================================================
function check_for_meteor_player_collision () {

    // edge positions for player and meteor
    var player_top = player.y;
    var player_bottom = player.y + 34;
    var player_left = player.x;
    var player_right = player.x + 24;

    var meteor_top = meteor.y;
    var meteor_bottom = meteor.y + 49;
    var meteor_left = meteor.x;
    var meteor_right = meteor.x + 49;

    var vertical_overlap = 0;
    var horizontal_overlap = 0;

    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // Check for vertical overlap
    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    // Check if meteor bottom overlaps player
    if ((meteor_bottom >= player_top) && (meteor_bottom <= player_bottom)) {
        vertical_overlap = 1;
    }

    // Check if meteor top overlaps player
    if ((meteor_top >= player_top) && (meteor_top <= player_bottom)) {
        vertical_overlap = 1;
    }

    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // Check for horizontal overlap
    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    // Check if meteor left side overlaps player
    if ((meteor_left >= player_left) && (meteor_left <= player_right)) {
        horizontal_overlap = 1;
    }

    // Check if meteor right side overlaps player
    if ((meteor_right >= player_left) && (meteor_right <= player_right)) {
        horizontal_overlap = 1;
    }

    // console.log("vertical_overlap = " + vertical_overlap);
    // console.log("horizontal_overlap = " + horizontal_overlap)

    // If we've got both a vertical overlap and a horizontal overlap,
    // we've got a collision
    if ((vertical_overlap == 1) && (horizontal_overlap == 1)) {
        return 1;
    }

    // if we've fallen through, we haven't detected a collision
    return 0;
}
// =============================================================================
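For what it's worth (not part of the original post): the edge-inside-interval checks above miss the case where the 50-pixel-wide meteor completely spans the narrower 25-pixel player on an axis; then neither meteor edge lies inside the player's interval and no overlap is reported. The standard interval-overlap test also covers containment. A sketch in Java using the post's sizes:

```java
// Standard AABB overlap test: two rectangles overlap exactly when their
// x-intervals overlap AND their y-intervals overlap, which handles the
// containment case the edge checks miss.
public class Collision {
    public static boolean rectsOverlap(double ax, double ay, double aw, double ah,
                                       double bx, double by, double bw, double bh) {
        return ax < bx + bw && ax + aw > bx   // horizontal intervals overlap
            && ay < by + bh && ay + ah > by;  // vertical intervals overlap
    }

    // 1 on collision, 0 otherwise, mirroring the original return convention;
    // 25x35 player, 50x50 meteor as in the post
    public static int checkMeteorPlayerCollision(double px, double py,
                                                 double mx, double my) {
        return rectsOverlap(px, py, 25, 35, mx, my, 50, 50) ? 1 : 0;
    }
}
```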
