
Showing results for tags 'Algorithm'.
Found 126 results

Algorithm STL and use, anyone else help me?
scullsold posted a topic in General and Gameplay Programming
Hi, I read some tutorials on the STL, since many people on this forum say it's faster than most self-written container classes, but somehow I can't even declare a list in VC .NET 2003. The compiler says:

```
Compiling...
VertexManager.cpp
c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2143: syntax error : missing ';' before '<'
c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers
c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(33) : error C2238: unexpected token(s) preceding ';'
c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2143: syntax error : missing ';' before '<'
c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers
c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(34) : error C2238: unexpected token(s) preceding ';'
c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(35) : error C2143: syntax error : missing ';' before '<'
c:\Dokumente und Einstellungen\gregor\Eigene Dateien\nGin\VertexManager.h(35) : error C2501: 'CVertexManager::list' : missing storage-class or type specifiers
```

my code:

```cpp
class CVertexManager
{
private:
    list<VertListEntry> VertList;
    list<VertGroup>     VertGroup;
    list<int>           TextureChange;

    CVertexManager();

public:
    ~CVertexManager();

    void addEntry(VertListEntry Entry);

    static CVertexManager& Instance()
    {
        static CVertexManager TheOneAndOnly;
        return CVertexManager;
    }
};
```

Btw, what does the list insert function want as its first parameter? It says something about an iterator... what is that? Can I just pass an int for the position where I want the new entry?

regards, m4gnus
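In case it helps later readers, here is a minimal sketch of what those errors point at: `list` lives in namespace `std`, so the header needs `#include <list>` and the `std::` prefix (or a using-declaration), and `insert` takes an iterator (a position inside the container) rather than an integer index:

```cpp
#include <list>

// Builds the list {1, 2, 3} using an iterator-based insert. The key
// points for the errors above: <list> must be included, and the type
// must be spelled std::list (which is why the compiler treats
// 'CVertexManager::list' as an unknown name).
std::list<int> buildList() {
    std::list<int> values;
    values.push_back(1);
    values.push_back(3);

    // insert() wants an iterator, not an index: an iterator is a
    // position in the container. Step one element past begin()...
    std::list<int>::iterator it = values.begin();
    ++it;

    // ...and insert 2 before that position, giving 1, 2, 3.
    values.insert(it, 2);
    return values;
}
```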
Decals in tiled forward render (Forward+)
Nikita Sidorenko posted a topic in Graphics and GPU Programming
I'm making a renderer just for fun (C++, OpenGL) and want to add decal support. Here's what I found:

- A couple of slides from Doom: http://advances.realtimerendering.com/s2016/Siggraph2016_idTech6.pdf
- Decals, but deferred: http://martindevans.me/gamedevelopment/2015/02/27/DrawingStuff… spaceDecals/
- No implementation details here: https://turanszkij.wordpress.com/2017/10/12/forwarddecalrendering/

As I see it, there should be a list of decals for each tile, same as for light sources. But what to do next? Let's assume that all decals are packed into a spritesheet and that a decal substitutes the diffuse and normal. My questions:

- What data should be stored for each decal on the GPU?
- The articles above describe decals as OBBs. Why OBBs, if decals seem to be flat?
- How do I actually render a decal during the object render pass (since it's forward)? Is it projected somehow? I don't understand this part completely.

Are there any papers on this topic?
Pathfinding Navigation Mesh : Wall Collision Avoidance
thecheeselover posted a blog entry in 3D, AI, procedural generation and black jack
Introduction

In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate a procedurally generated terrain. Until our last agile sprint, only the A* and string pulling algorithms were used. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe how I achieved a simple way for agents to not walk into walls.

Configuration

- A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
- Navigation cells and their: polygonal edges, connections (other cells), shared edges (the line segment where two connected cells touch), centroids and normals.
- An A*-generated, string-pulled path (not tested without string pulling) consisting of waypoints on the navigation mesh.

The Algorithm

The agent is the pink low-poly humanoid and the final destination is the flag.

The ideal (yet unoptimized) algorithm would be to cast an oriented rectangle between each pair of consecutive waypoints, where the rectangle's width is two times the agent's radius; think of the agent's center position as being in the middle of that width. However, this algorithm is too complicated and too long to develop for our game, so I thought of another algorithm. It has its drawbacks, but it's more suited to our game. Psst, check this article if you haven't seen it yet.

The algorithm is the following:

1. For each waypoint, pick the current one and the next one, until the next one is the last.
2. Iterate over the current navigation cell's edges. The current cell is the one defined by the agent's 3D position, so you need a spatially optimized way to determine the cell closest to a 3D point. Our way is to first partition the navigation mesh with an octree, then gather all the cells that are close to the point plus an extra buffer. For each gathered cell, project the position onto it; this can be done using the cells' centroids and normals. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the smallest one.
3. Convert each edge to a 2D line by discarding the Y component (the UP vector).
4. For each side, left and right, which are defined by the agent's position and its direction towards the next waypoint, cast a 2D line that starts from the agent's position, goes towards one of the two perpendicular directions relative to the direction to the next waypoint, and has a length of the defined radius.
5. If there's an intersection on a connection and it's on the shared part of the connection, then continue with the connected cell's edges.
6. If there are any intersections other than #5, create a new waypoint before the next waypoint. This new waypoint's position is the intersection's position translated by a length of two times the radius, projected along the agent's current direction as a 2D line. The same translation is applied to the next waypoint.
7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent, with the same length as the distance between the current and next waypoint.
8. Check for #5.
9. If there is an intersection on a connection and it's on the unshared part of the connection, then do #6 (without the "if").
10. If there's an intersection on a simple edge, then translate the next waypoint as described in #6.

Here's a video of the algorithm in action:
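The closest-cell lookup described above (projecting the agent's position onto nearby cells using their centroids and normals) can be sketched like this; `Vec3` is a hypothetical stand-in for the engine's vector type, and the snap of the projected point back inside the cell polygon is omitted:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance from a point to the plane of a navigation cell, defined by the
// cell's centroid and unit normal. Comparing this distance across the
// cells returned by the octree query gives the closest cell; the real
// version would also snap the projection onto the cell's polygon first.
double distanceToCellPlane(Vec3 point, Vec3 centroid, Vec3 unitNormal) {
    return std::fabs(dot(sub(point, centroid), unitNormal));
}
```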
Swing-Twist Interpolation (Sterp), An Alternative to Slerp
MingLun "Allen" Chou posted a topic in Math and Physics
Here is the original blog post.

Edit: Sorry, I can't get embedded LaTeX to display properly. The pinned tutorial post says I have to do it in plain HTML without embedded images? I actually tried embedding pre-rendered equations and they seemed fine when editing, but once I submitted the post it just turned into a huge mess. So... until I can find a proper way to fix this, please refer to the original blog post for formatted formulas. I've replaced the original LaTeX mess in this post with something at least more readable. Any advice on fixing this is appreciated.

This post is part of my Game Math Series. Source files are on GitHub. Shortcut to sterp implementation. Shortcut to code used to generate animations in this post.

An Alternative to Slerp

Slerp, spherical linear interpolation, is an operation that interpolates from one orientation to another, using a rotational axis paired with the smallest angle possible.

Quick note: Jonathan Blow explains here how you should avoid using slerp if normalized quaternion linear interpolation (nlerp) suffices. Long story short, nlerp is faster but does not maintain constant angular velocity, while slerp is slower but maintains constant angular velocity; use nlerp if you're interpolating across small angles or you don't care about constant angular velocity, and use slerp if you're interpolating across large angles and you do care. But for the sake of using a more commonly known and used building block, the rest of this post will only mention slerp; replacing all following occurrences of slerp with nlerp would not change the validity of this post.

In general, slerp is considered superior to interpolating the individual components of Euler angles, as the latter method usually yields orientational sways. But sometimes slerp might not be ideal. Look at the image below showing two different orientations of a rod.
On the left is one orientation, and on the right is the resulting orientation of rotating around the axis shown as a cyan arrow, where the pivot is at one end of the rod. If we slerp between the two orientations, this is what we get:

Mathematically, slerp takes the "shortest rotational path": the quaternion representing the rod's orientation travels along the shortest arc on a 4D hypersphere. But, given the rod's elongated appearance, the rod's moving end seems to deviate from the shortest arc on a 3D sphere. My intended effect here is for the rod's moving end to travel along the shortest arc in 3D, like this:

The difference is more obvious if we compare them side by side:

This is where swing-twist decomposition comes in.

Swing-Twist Decomposition

Swing-twist decomposition is an operation that splits a rotation into two concatenated rotations: swing and twist. Given a twist axis, we would like to separate out the portion of a rotation that contributes to the twist around this axis; what's left behind is the remaining swing portion. There are multiple ways to derive the formulas, but this particular one by Michaele Norel seems to be the most elegant and efficient, and it's the only one I've come across that does not involve any trigonometric functions. I will first show the formulas now and paraphrase his proof later.

Given a rotation represented by a quaternion R = [W_R, \vec{V_R}] and a twist axis \vec{V_T}, combine the scalar part of R with the projection of \vec{V_R} onto \vec{V_T} to form a new quaternion:

T = [W_R, proj_{\vec{V_T}}(\vec{V_R})]

We want to decompose R into a swing component and a twist component. Let S denote the swing component, so we can write R = S T. The swing component is then calculated by multiplying R with the inverse (conjugate) of T:

S = R T^{-1}

Beware that S and T are not yet normalized at this point. It's a good idea to normalize them before use, as unit quaternions are just cuter.
Below is my code implementation of swing-twist decomposition. Note that it also takes care of the singularity that occurs when the rotation to be decomposed represents a 180-degree rotation.

```csharp
public static void DecomposeSwingTwist
(
  Quaternion q,
  Vector3 twistAxis,
  out Quaternion swing,
  out Quaternion twist
)
{
  Vector3 r = new Vector3(q.x, q.y, q.z);

  // singularity: rotation by 180 degrees
  if (r.sqrMagnitude < MathUtil.Epsilon)
  {
    Vector3 rotatedTwistAxis = q * twistAxis;
    Vector3 swingAxis = Vector3.Cross(twistAxis, rotatedTwistAxis);

    if (swingAxis.sqrMagnitude > MathUtil.Epsilon)
    {
      float swingAngle = Vector3.Angle(twistAxis, rotatedTwistAxis);
      swing = Quaternion.AngleAxis(swingAngle, swingAxis);
    }
    else
    {
      // more singularity: rotation axis parallel to twist axis
      swing = Quaternion.identity; // no swing
    }

    // always twist 180 degrees on singularity
    twist = Quaternion.AngleAxis(180.0f, twistAxis);
    return;
  }

  // meat of swing-twist decomposition
  Vector3 p = Vector3.Project(r, twistAxis);
  twist = new Quaternion(p.x, p.y, p.z, q.w);
  twist = Normalize(twist);
  swing = q * Quaternion.Inverse(twist);
}
```

Now that we have the means to decompose a rotation into swing and twist components, we need a way to use them to interpolate the rod's orientation, replacing slerp.

Swing-Twist Interpolation

Replacing slerp with the swing and twist components is actually pretty straightforward. Let Q_0 and Q_1 denote the quaternions representing the rod's two orientations we are interpolating between. Given the interpolation parameter t, we use it to find "fractions" of the swing and twist components and combine them together. Such fractions can be obtained by performing slerp from the identity quaternion, Q_I, to the individual components. So we replace:

Slerp(Q_0, Q_1, t)

with:

Slerp(Q_I, S, t) Slerp(Q_I, T, t)

From the rod example, we choose the twist axis to align with the rod's longest side.
Let's look at the effect of the individual components Slerp(Q_I, S, t) and Slerp(Q_I, T, t) as t varies over time below, swing on the left and twist on the right:

And as we concatenate these two components together, we get a swing-twist interpolation that rotates the rod such that its moving end travels along the shortest arc in 3D. Again, here is a side-by-side comparison of slerp (left) and swing-twist interpolation (right):

I decided to name my swing-twist interpolation function sterp. I think it's cool because it sounds like it belongs to the function family of lerp and slerp. Here's to hoping that this name catches on. And here's my code implementation:

```csharp
public static Quaternion Sterp
(
  Quaternion a,
  Quaternion b,
  Vector3 twistAxis,
  float t
)
{
  Quaternion deltaRotation = b * Quaternion.Inverse(a);

  Quaternion swingFull;
  Quaternion twistFull;
  QuaternionUtil.DecomposeSwingTwist
  (
    deltaRotation,
    twistAxis,
    out swingFull,
    out twistFull
  );

  Quaternion swing = Quaternion.Slerp(Quaternion.identity, swingFull, t);
  Quaternion twist = Quaternion.Slerp(Quaternion.identity, twistFull, t);

  return twist * swing;
}
```

Proof

Lastly, let's look at the proof for the swing-twist decomposition formulas. All that needs to be proven is that the swing component S does not contribute to any rotation around the twist axis, i.e. the rotational axis of S is orthogonal to the twist axis.
Let \vec{V_{R_para}} denote the component of \vec{V_R} parallel to \vec{V_T}, which can be obtained by projecting \vec{V_R} onto \vec{V_T}:

\vec{V_{R_para}} = proj_{\vec{V_T}}(\vec{V_R})

Let \vec{V_{R_perp}} denote the component of \vec{V_R} orthogonal to \vec{V_T}:

\vec{V_{R_perp}} = \vec{V_R} - \vec{V_{R_para}}

So the scalar-vector form of T becomes:

T = [W_R, proj_{\vec{V_T}}(\vec{V_R})] = [W_R, \vec{V_{R_para}}]

Using the quaternion multiplication formula, here is the scalar-vector form of the swing quaternion:

S = R T^{-1}
  = [W_R, \vec{V_R}] [W_R, -\vec{V_{R_para}}]
  = [W_R^2 - \vec{V_R} \cdot (-\vec{V_{R_para}}), \vec{V_R} \times (-\vec{V_{R_para}}) + W_R \vec{V_R} + W_R (-\vec{V_{R_para}})]
  = [W_R^2 - \vec{V_R} \cdot (-\vec{V_{R_para}}), \vec{V_R} \times (-\vec{V_{R_para}}) + W_R (\vec{V_R} - \vec{V_{R_para}})]
  = [W_R^2 - \vec{V_R} \cdot (-\vec{V_{R_para}}), \vec{V_R} \times (-\vec{V_{R_para}}) + W_R \vec{V_{R_perp}}]

Take notice of the vector part of the result:

\vec{V_R} \times (-\vec{V_{R_para}}) + W_R \vec{V_{R_perp}}

This is a vector parallel to the rotational axis of S. Both \vec{V_R} \times (-\vec{V_{R_para}}) and \vec{V_{R_perp}} are orthogonal to the twist axis \vec{V_T}, so we have shown that the rotational axis of S is orthogonal to the twist axis. Hence, we have proven that the formulas for S and T are valid for swing-twist decomposition.

Conclusion

That's all. Given a twist axis, I have shown how to decompose a rotation into a swing component and a twist component. Such decomposition can be used for swing-twist interpolation, an alternative to slerp that interpolates between two orientations, which can be useful if you'd like some point on a rotating object to travel along the shortest arc. I like to call such interpolation sterp.

Sterp is merely an alternative to slerp, not a replacement. Also, slerp is definitely more efficient than sterp. Most of the time slerp should work just fine, but if you find unwanted orientational sway on an object's moving end, you might want to give sterp a try.
Two right square prisms in collision (AABB): how to check which faces are colliding?
Hanseul Shin posted a topic in Math and Physics
Thanks to @Randy Gaul, I successfully implemented cube/cube collision detection and response:

1. Subtract the centers of the two AABBs to get a 3D vector a.
2. If the x component of a is the biggest, it designates a face on each AABB.
3. If x points in the same (or exactly opposite) direction as a face's normal, the two AABBs are colliding on those faces.

But these steps only work if the two colliders are cubes, because in a right square prism the half-extents differ from axis to axis. I'd like to check which faces are in contact between two right square prisms. Please help! Thank you!
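One way to generalize the test to boxes that aren't cubes (a sketch of a common approach, not necessarily what @Randy Gaul had in mind): normalize each component of the center-to-center offset by the combined half-extents on that axis before picking the biggest one, so a long, thin prism doesn't always report its long-axis faces:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Returns the axis (0 = x, 1 = y, 2 = z) of the colliding face pair of
// two AABBs. Dividing each component of the center offset by the
// combined half-extents on that axis makes the comparison fair when the
// half-extents differ, which is exactly where the cube-only version breaks.
int collidingFaceAxis(Vec3 centerDelta, Vec3 halfExtentsA, Vec3 halfExtentsB) {
    double nx = std::fabs(centerDelta.x) / (halfExtentsA.x + halfExtentsB.x);
    double ny = std::fabs(centerDelta.y) / (halfExtentsA.y + halfExtentsB.y);
    double nz = std::fabs(centerDelta.z) / (halfExtentsA.z + halfExtentsB.z);
    if (nx >= ny && nx >= nz) return 0; // contact on the +x or -x faces
    if (ny >= nz) return 1;             // contact on the +y or -y faces
    return 2;                           // contact on the +z or -z faces
}
```

The sign of the corresponding centerDelta component then tells which of the two faces on each box is involved, as in the original cube version.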
I've been digging around online and can't seem to find any formulas for 3D mesh simplification. I'm not sure where to start but I generally want to know how I could make a function that takes in an array of vertices, indices, and a float/double for the decimation rate. And could I preserve the general shape of the object too? Thanks for the help! P.S. I was hoping to do something with Quadric Error / Quadric Edge Collapse if that's possible.
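Regarding the P.S.: the core of Garland-Heckbert quadric edge collapse is cheap to write down, so here is a sketch of just the error term (one building block of a full simplifier, with illustrative types). Each triangle's plane (a, b, c, d), with ax + by + cz + d = 0 and (a, b, c) unit length, yields a quadric Q = p p^T; a vertex accumulates the quadrics of its incident triangles, and the cost of moving it to (x, y, z) is v^T Q v with v = (x, y, z, 1). Edges are then collapsed cheapest-first until the decimation rate is reached, which is what preserves the general shape.

```cpp
#include <array>

using Quadric = std::array<std::array<double, 4>, 4>;

// Quadric of a plane (a, b, c, d): the outer product p p^T.
Quadric planeQuadric(double a, double b, double c, double d) {
    double p[4] = {a, b, c, d};
    Quadric q{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            q[i][j] = p[i] * p[j];
    return q;
}

// Squared distance-to-plane cost of placing a vertex at (x, y, z):
// v^T Q v with v = (x, y, z, 1). Summing the quadrics of all incident
// triangle planes gives a vertex's total error matrix.
double quadricError(const Quadric& q, double x, double y, double z) {
    double v[4] = {x, y, z, 1.0};
    double e = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            e += v[i] * q[i][j] * v[j];
    return e;
}
```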

Behavior Steering behaviors: Seeking and Arriving
miimii1205 posted a blog entry in Projects Of Some Degree Of Interest
Steering behaviors are used to maneuver AI agents in a 3D environment. With these behaviors, agents are able to better react to changes in their environment. While the navigation mesh algorithm is ideal for planning a path from one point to another, it can't really deal with dynamic objects such as other agents. This is where steering behaviors can help.

What are steering behaviors?

Steering behaviors are an amalgam of different behaviors that are used to organically manage the movement of an AI agent. For example, behaviors such as obstacle avoidance, pursuit and group cohesion are all steering behaviors... Steering behaviors are usually applied in a 2D plane: it is sufficient, and easier to implement and understand. (However, I can think of some use cases that require the behaviors to be in 3D, like games where the agents fly.) One of the most important of all steering behaviors is the seeking behavior. We also added the arriving behavior to make the agent's movement a whole lot more organic. Steering behaviors are described in this paper.

What is the seeking behavior?

The seeking behavior is the idea that an AI agent "seeks" to have a certain velocity (vector). To begin, we'll need 2 things:

- An initial velocity (a vector)
- A desired velocity (also a vector)

First, we need to find the velocity needed for our agent to reach a desired point. This is usually a subtraction of the desired position and the current position of the agent.

\(\overrightarrow{d} = (x_{t},y_{t},z_{t}) - (x_{a},y_{a},z_{a})\)

Here, a represents our agent, t our target, and d is the desired velocity.

Secondly, we must also find the agent's current velocity, which is usually already available in most game engines. Next, we need to find the vector difference between the desired velocity and the agent's current velocity. It literally gives us the vector that yields the desired velocity when added to the agent's current velocity. We will call it the "steering velocity".
\(\overrightarrow{s} = \overrightarrow{d} - \overrightarrow{c}\)

Here, s is our steering velocity, c is the agent's current velocity and d is the desired velocity.

After that, we truncate our steering velocity to a length called the "steering force". Finally, we simply add the steering velocity to the agent's current velocity.

```java
// truncateVectorLocal truncates a vector to a given length
Vector3f currentDirection = aiAgentMovementControl.getWalkDirection();
Vector3f wantedDirection = targetPosition.subtract(aiAgent.getWorldTranslation())
                                         .normalizeLocal()
                                         .setY(0)
                                         .multLocal(maxSpeed);

// We steer toward our wanted direction
Vector3f steeringVector = truncateVectorLocal(wantedDirection.subtract(currentDirection), steeringForce);
Vector3f newCurrentDirection = MathExt.truncateVectorLocal(
        currentDirection.addLocal(
                MathExt.truncateVectorLocal(wantedDirection.subtract(currentDirection), m_steeringForce)
                       .divideLocal(m_mass)),
        maxSpeed);
```

This computation is done frame by frame: the steering velocity becomes weaker and weaker as the agent's current velocity approaches the desired one, creating a kind of interpolation curve.

What is the arriving behavior?

The arriving behavior is the idea that an AI agent that "arrives" near its destination will gradually slow down until it gets there. We already have the list of waypoints returned by the navigation mesh algorithm that the agent must cross to reach its destination. When it has passed the second-to-last waypoint, we activate the arriving behavior. While the behavior is active, we check the distance between the destination and the agent's current position and change its maximum speed accordingly.
```java
// This is the initial maxSpeed
float maxSpeed = unitMovementControl.getMoveSpeed();

// It's the last waypoint
float distance = aiAgent.getWorldTranslation().distance(nextWaypoint.getCenter());
float rampedSpeed = aiAgentMovementControl.getMoveSpeed() * (distance / slowingDistanceThreshold);
float clippedSpeed = Math.min(rampedSpeed, aiAgentMovementControl.getMoveSpeed());

// This is our new maxSpeed
maxSpeed = clippedSpeed;
```

Essentially, we slow the agent down until it reaches its destination.

The future?

As I'm writing this, we've chosen to split up the implementation of the steering behaviors and implement only the bare necessities, as we have no empirical evidence that we'll need all of them. Therefore, we only implemented the seeking and arriving behaviors, delaying the rest of the behaviors to an indeterminate time in the future. So, when (or if) we need them, we'll already have a solid and stable foundation to build upon.

More links

- Understanding Steering Behaviors: Seek
- Steering Behaviors · libgdx/gdx-ai Wiki
- Understanding Steering Behaviors: Collision Avoidance
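The snippets above lean on a truncateVectorLocal helper from the author's MathExt class. A plausible stand-alone version (illustrative names, not the engine's actual API) simply clamps a vector's length while keeping its direction:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Clamp v's length to maxLength, keeping its direction. Vectors that are
// already short enough (and the zero vector) are returned unchanged.
Vec3 truncate(Vec3 v, double maxLength) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    if (len <= maxLength || len == 0.0) return v;
    double s = maxLength / len;
    return {v.x * s, v.y * s, v.z * s};
}
```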
Algorithm Javascript collision detection function isn't working
BigBadMick posted a topic in General and Gameplay Programming
Hey everybody, I'm currently working on a simple HTML5 game and my JavaScript collision detection function isn't working. The game features a little man that runs from side to side at the bottom of the screen while a meteor falls from the sky. The function is supposed to detect a collision between the meteor and the man. In the routine, the top left corner of the man is at (player.x, player.y) and the top left corner of the meteor is at (meteor.x, meteor.y). The man is 25 pixels wide by 35 pixels tall. The meteor is 50 pixels wide by 50 pixels tall. Any idea where I've screwed up in this function?

```javascript
// =============================================================================
// Check for a collision between the 50 x 50 meteor and the 25 wide x 35 tall
// main character
//
// Main character is drawn at 540 and is 35 tall, so the top of the character
// is at y = 540 and the bottom is at y = 575.
//
// Function returns 1 if there has been a collision between the main
// character and the meteor, otherwise it returns 0.
// =============================================================================
function check_for_meteor_player_collision() {
    // edge positions for player and meteor
    var player_top = player.y;
    var player_bottom = player.y + 34;
    var player_left = player.x;
    var player_right = player.x + 24;

    var meteor_top = meteor.y;
    var meteor_bottom = meteor.y + 49;
    var meteor_left = meteor.x;
    var meteor_right = meteor.x + 49;

    var vertical_overlap = 0;
    var horizontal_overlap = 0;

    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // Check for vertical overlap
    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    // Check if meteor bottom overlaps player
    if ((meteor_bottom >= player_top) && (meteor_bottom <= player_bottom)) {
        vertical_overlap = 1;
    }

    // Check if meteor top overlaps player
    if ((meteor_top >= player_top) && (meteor_top <= player_bottom)) {
        vertical_overlap = 1;
    }

    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // Check for horizontal overlap
    // ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    // Check if meteor left side overlaps player
    if ((meteor_left >= player_left) && (meteor_left <= player_right)) {
        horizontal_overlap = 1;
    }

    // Check if meteor right side overlaps player
    if ((meteor_right >= player_left) && (meteor_right <= player_right)) {
        horizontal_overlap = 1;
    }

    // console.log("vertical_overlap = " + vertical_overlap);
    // console.log("horizontal_overlap = " + horizontal_overlap)

    // If we've got both a vertical overlap and a horizontal overlap,
    // we've got a collision
    if ((vertical_overlap == 1) && (horizontal_overlap == 1)) {
        return 1;
    }

    // if we've fallen through, we haven't detected a collision
    return 0;
}
// =============================================================================
```
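One observation that may help: the edge-inside checks above miss the case where the bigger box completely contains the smaller one's span on an axis. The meteor is 50 px tall and the player only 35 px, so the meteor can straddle the player vertically with neither its top nor its bottom edge inside the player's range, leaving vertical_overlap at 0 (and likewise horizontally, since 50 px > 25 px). The standard fix is a direct interval-overlap test per axis; here is a small C++ sketch of the idea, easy to port back to JavaScript:

```cpp
// Two closed intervals overlap iff each one's min is not past the
// other's max. Unlike checking whether one interval's endpoints lie
// inside the other, this also catches full containment.
bool intervalsOverlap(int minA, int maxA, int minB, int maxB) {
    return minA <= maxB && minB <= maxA;
}

// AABB overlap for boxes given as top-left corner plus size, using
// inclusive pixel spans like the function above (hence the -1).
bool boxesCollide(int ax, int ay, int aw, int ah,
                  int bx, int by, int bw, int bh) {
    return intervalsOverlap(ax, ax + aw - 1, bx, bx + bw - 1) &&
           intervalsOverlap(ay, ay + ah - 1, by, by + bh - 1);
}
```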
Algorithm Map recommendation for facetoface RPG application
Armando Neto posted a topic in General and Gameplay Programming
INTRODUCTION - For a university course I'm making an application to assist face-to-face (tabletop) RPG sessions. It has rooms in which the game master and players can manage tokens. I want to add one more thing: a map that all the players in the room can see and move their "miniatures" on, and where the master can place monsters. No animation is needed; it would just replace the paper map normally used in face-to-face sessions.

PROBLEM - To build the map described above, I looked for some tools but didn't find any good ones, so I'm asking for API recommendations. If you've played tabletop RPGs you know more or less what I want; I'll leave an image from Roll20, which is roughly similar, plus some images of how I want it to look on a phone for those who have never played. I want the master himself to set the image used for the map (a photo, or maybe a tiled map). Web solutions that I can embed in the application are fine too; literally anything that does what I want will help. If you can help me, I'll be grateful. Sorry for the awful English.
Algorithm What's the best way to clean as much fog of war possible in the next step?
Januario posted a topic in General and Gameplay Programming
Hey guys! I'm basically working on an explorer right now. It should, as the name suggests, explore the entire map as efficiently as possible. Your character has a vision of 10x10 tiles around it; the map is much bigger, 100x100. However, it can't just go straight from one corner to another, because:

- The tiles can be occupied or unoccupied.
- You can add weights to the tiles, so feel free to use this to your advantage (say, adding an extra weight to a visited tile so you can compare visited against non-visited ones).
- You can use the pathfinder I'm using, based on the A* algorithm.

I could be wrong, but by basic logic I assumed that the fastest way to explore the entire thing is answering the question: "What is the nearest tile I can walk on that is unoccupied and reveals as much fog of war (unvisited tiles) as possible?"

My questions are:

1) Is my question correct? Is that really the best way to explore the entire map?
2) If so, what's the best way to know which tile would reveal the most fog of war?

Once I get the tile that reveals the most fog of war, I just send the pathfinder to it. But I'm having trouble finding a good way to achieve that. :'( I hope you guys can help me on this one! Thank you.
Algorithm "Chiseled" random paths algorithm
Boris The Brave posted a topic in General and Gameplay Programming
Hey guys, I made a new procedural generation algorithm for laying out paths on a grid in an organic way. Maybe you'll find it interesting. https://www.boristhebrave.com/2018/04/28/randompathsviachiseling/ 
Simple organic and brute force dungeon generation
thecheeselover posted a blog entry in 3D, AI, procedural generation and black jack
Last month, I made a pretty simple dungeon generator algorithm. It's an organic brute-force algorithm, in the sense that the rooms and corridors aren't carved into a grid and that it stops when an area doesn't fit in the graph. Here's the algorithm:

1. Start from the center (0, 0) in 2D
2. Generate a room
3. Choose a side to extend to
4. Attach a corridor to that side
5. If it doesn't fit, stop the generation
6. Attach a room at the end of the corridor
7. If it doesn't fit, stop the generation
8. Repeat steps 3 to 7 until enough rooms are generated

It allowed us to test our pathfinding algorithm (A* & string pulling). Here are some pictures of the output in 2D and 3D:
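The steps above can be sketched as a control-flow skeleton. This deterministic toy version always extends to the right and stubs the "doesn't fit" test as a simple bounds check; the real generator picks random sides and sizes:

```cpp
#include <vector>

struct Rect { int x, y, w, h; };

// Stub for "does this area fit in the graph": here, just a bounds check
// against a square world of half-size worldHalf.
static bool fits(const Rect& r, int worldHalf) {
    return r.x >= -worldHalf && r.x + r.w <= worldHalf &&
           r.y >= -worldHalf && r.y + r.h <= worldHalf;
}

std::vector<Rect> generate(int worldHalf) {
    std::vector<Rect> pieces;
    Rect room{-2, -2, 4, 4};                // steps 1-2: room at the center
    if (!fits(room, worldHalf)) return pieces;
    pieces.push_back(room);
    for (;;) {
        Rect corridor{room.x + room.w, -1, 3, 2};     // steps 3-4
        if (!fits(corridor, worldHalf)) break;        // step 5
        Rect next{corridor.x + corridor.w, -2, 4, 4}; // step 6
        if (!fits(next, worldHalf)) break;            // step 7
        pieces.push_back(corridor);
        pieces.push_back(next);
        room = next;                                  // step 8: repeat
    }
    return pieces;
}
```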
I'm working on an endless wave-based arkanoid/space-invaders style hybrid. Bricks spawn above the screen and move down line by line; you must destroy all bricks before they hit the bottom of the screen. There will be multiple types of bricks and random power-up spawns. Currently I am just using a simple log function that takes the current wave as a parameter. This works to increase the number of bricks spawned each wave, but I want to make it much more sophisticated. Here is a list of everything that should be affected by the increase in difficulty:

1. Number of bricks
2. Types of bricks (1-hit bricks, 2-hit bricks, 3-hit bricks, etc.)
3. Speed at which the bricks move down the screen
4. How often power-ups spawn

The biggest problem here is that I can't just increase all of these with each new wave; if I did, it would quickly become far too difficult. What I would like is an algorithm that gives some variance in how the increase is split among all four. Say one wave we put 60% of the increase into the number of bricks, 20% into power-up spawns, 10% into brick types and 10% into brick speed; on the next wave those percentages are all moved around, and we now have, say, 105% to work with, so the overall difficulty has increased as well. The brick types also need to shift so that if someone has made it to a certain wave, wave 50 for example, there are no longer any 1-hit bricks; we now have just 2-4 hit bricks, and if they're crazy good and make it all the way to wave 100, it's just 3-5 hit bricks, etc. If anybody has any good ideas or suggestions on this, I'd appreciate anything you've got! Thanks!
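A hedged sketch of the budget idea described above: treat each wave as a total difficulty budget that grows a few percent per wave (matching the "105%" example), draw a random weight per knob, and normalize the weights so the four shares always sum to the budget. The 5% growth rate, the four knobs, and the weight range are arbitrary choices meant only to show the shape of it:

```cpp
#include <array>
#include <cmath>
#include <random>

// Split a wave's difficulty budget among four knobs: brick count, brick
// types, brick speed, power-up rate. The seed makes a wave reproducible.
std::array<double, 4> splitDifficulty(int wave, unsigned seed) {
    double budget = 100.0 * std::pow(1.05, wave); // ~5% more per wave
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> dist(0.1, 1.0); // no zero shares
    std::array<double, 4> shares{};
    double sum = 0.0;
    for (double& s : shares) { s = dist(rng); sum += s; }
    for (double& s : shares) s = budget * s / sum; // shares sum to budget
    return shares;
}
```

A cap or floor per knob (for example clamping brick speed, or mapping the "types" share to a sliding 2-4 / 3-5 hit range at higher waves) can be layered on top of this.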

I decided to implement light shafts using http://sirkan.iit.bme.hu/~szirmay/lightshaft_link.htm. So far I've only managed to implement the shadow map. Can anyone help me implement this in D3D11? (I mean the steps; I can do the rest.) I'm new to all these shadow maps and such.

3D Stable way to process spline cross section orientation
51mon posted a topic in Graphics and GPU Programming
Hey, I'm dealing with ribbons following the shape of multiple spline segments. It's straightforward to compute the direction at any point along the spline. However, the ribbon also has a flat shape, and I'm struggling to find a way to compute the angle of the ribbon in the plane perpendicular to the direction. To illustrate what I mean, here's a piece of code that almost worked:

```hlsl
float3x3 rotMtxFromSpline;
rotMtxFromSpline[1] = normalize(splineDir);
rotMtxFromSpline[0] = normalize(cross(float3(1, 0, 0), rotMtxFromSpline[1]));
rotMtxFromSpline[2] = cross(rotMtxFromSpline[0], rotMtxFromSpline[1]);

// Rotate rotMtxFromSpline[0] in the rotMtxFromSpline[0]-rotMtxFromSpline[2] plane
// to align with the float3(0, 0, 1) dir
rotMtxFromSpline[0] = normalize(
    dot(rotMtxFromSpline[0], float3(0, 0, 1)) * rotMtxFromSpline[0] +
    dot(rotMtxFromSpline[2], float3(0, 0, 1)) * rotMtxFromSpline[2]);
rotMtxFromSpline[2] = cross(rotMtxFromSpline[0], rotMtxFromSpline[1]);
```

The problem with this code is that when the spline segment becomes perpendicular to the (0, 0, 1) dir, the orientation switches from one side to the other very easily. The approach above is a kind of global approach, and I'm wondering if there's a way to attach some info to each spline segment to remedy the issue. Anyhow, I wanted to post this question in case anyone has had a similar problem that they solved, or knows of a web resource dealing with this issue. Thanks!
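One standard remedy for exactly this flipping problem is parallel transport: instead of rebuilding the frame from a fixed world axis at every sample, carry the previous sample's normal forward by the minimal rotation that takes the previous tangent to the current one. Each frame then depends only on its predecessor, so nothing flips when the tangent crosses (0, 0, 1). A sketch in plain C++ vector math using Rodrigues' rotation formula (tangents assumed unit length):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 add3(Vec3 a, Vec3 b, Vec3 c) {
    return {a.x + b.x + c.x, a.y + b.y + c.y, a.z + b.z + c.z};
}

// Rotate the previous frame's normal by the minimal rotation taking
// prevTangent to tangent. Seed the first frame any way you like (e.g.
// the cross-product construction in the snippet above); every later
// frame follows from the previous one, with no fixed world axis involved.
Vec3 transportNormal(Vec3 normal, Vec3 prevTangent, Vec3 tangent) {
    Vec3 axis = cross(prevTangent, tangent);
    double s2 = dot(axis, axis);            // sin^2 of the bend angle
    if (s2 < 1e-12) return normal;          // tangents (nearly) parallel
    double c = dot(prevTangent, tangent);   // cos of the bend angle
    double s = std::sqrt(s2);
    Vec3 k = scale(axis, 1.0 / s);          // unit rotation axis
    // Rodrigues: v' = v cos + (k x v) sin + k (k . v)(1 - cos)
    return add3(scale(normal, c),
                scale(cross(k, normal), s),
                scale(k, dot(k, normal) * (1.0 - c)));
}
```

One drawback worth knowing: the resulting frame depends on the path taken along the spline, so closed loops may need a correction twist distributed over the ribbon.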
Geometric stiffness term in Stable Constrained Dynamics
coderchris posted a topic in Math and Physics
This is in reference to "Stable Constrained Dynamics": https://hal.inria.fr/hal-01157835/document, equations (18) / (21). I'm having trouble understanding how to build this "geometric stiffness" term K = (∂J^T / ∂x) λ, where J is the constraint Jacobian and λ holds the constraint force magnitudes. What I do know: based on its usage in (21), K should be a (3n x 3n) matrix in 3D, where n is, say, the number of particles. What I'm confused about: the Jacobian J is a (C x 3n) matrix, where C is the number of constraints, and λ is (C x 1). This doesn't seem to work out in terms of the matrix dimensions... What am I missing here? If I consider only a single constraint, the dimensions do work out: λ ends up a scalar and K ultimately (3n x 3n). However, that leads to the question of how to build K such that it contains all of the individual constraint K's (one K for each constraint, I guess)?
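For what it's worth, one reading that makes the dimensions work out (and matches the per-constraint guess at the end of the post) is to differentiate the assembled product $J^{T}\lambda$ with $\lambda$ held fixed:

```latex
K \;=\; \frac{\partial\!\left(J^{T}\lambda\right)}{\partial x}
  \;=\; \sum_{c=1}^{C} \lambda_{c}\,\frac{\partial J_{c}^{T}}{\partial x}
```

Here $J_{c}$ denotes the $c$-th row of $J$ (a $1 \times 3n$ row vector), so each $\partial J_{c}^{T}/\partial x$ is $3n \times 3n$, $\lambda_{c}$ is the scalar multiplier of constraint $c$, and the $\lambda$-weighted sum over all $C$ constraints is again $3n \times 3n$. (This is a hedged reading of the notation, not a claim about the paper's exact assembly.)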
Hello, I am trying to make a GeometryUtil class that has methods to draw points, lines, polygons, etc. Right now I am writing a method to draw a circle. There are many ways to draw a circle; I have found two. The first way:

public static void drawBresenhamCircle(PolygonSpriteBatch batch, int centerX, int centerY, int radius, ColorRGBA color) {
    int x = 0, y = radius;
    int d = 3 - 2 * radius;
    while (y >= x) {
        drawCirclePoints(batch, centerX, centerY, x, y, color);
        if (d <= 0) {
            d = d + 4 * x + 6;
        } else {
            y--;
            d = d + 4 * (x - y) + 10;
        }
        x++;
        //drawCirclePoints(batch, centerX, centerY, x, y, color);
    }
}

private static void drawCirclePoints(PolygonSpriteBatch batch, int centerX, int centerY, int x, int y, ColorRGBA color) {
    drawPoint(batch, centerX + x, centerY + y, color);
    drawPoint(batch, centerX - x, centerY + y, color);
    drawPoint(batch, centerX + x, centerY - y, color);
    drawPoint(batch, centerX - x, centerY - y, color);
    drawPoint(batch, centerX + y, centerY + x, color);
    drawPoint(batch, centerX - y, centerY + x, color);
    drawPoint(batch, centerX + y, centerY - x, color);
    drawPoint(batch, centerX - y, centerY - x, color);
}

The other way:

public static void drawCircle(PolygonSpriteBatch target, Vector2 center, float radius, int lineWidth, int segments, int tintColorR, int tintColorG, int tintColorB, int tintColorA) {
    Vector2[] vertices = new Vector2[segments];
    double increment = Math.PI * 2.0 / segments;
    double theta = 0.0;
    for (int i = 0; i < segments; i++) {
        vertices[i] = new Vector2((float) Math.cos(theta) * radius + center.x, (float) Math.sin(theta) * radius + center.y);
        theta += increment;
    }
    drawPolygon(target, vertices, lineWidth, segments, tintColorR, tintColorG, tintColorB, tintColorA);
}

In the render loop:

polygonSpriteBatch.begin();
Bitmap.drawBresenhamCircle(polygonSpriteBatch, 500, 300, 200, ColorRGBA.Blue);
Bitmap.drawCircle(polygonSpriteBatch, new Vector2(500, 300), 200, 5, 50, 255, 0, 0, 255);
polygonSpriteBatch.end();

I am trying to choose one of them.
So I thought I should go with the one that does not involve heavy calculations and is the more efficient and faster of the two. It is said that the use of floating-point numbers, trigonometric operations, etc. slows things down a bit. What do you think would be the best method to use? When I compared the two by measuring the time taken from the start of the method to the end, the second one came out faster (I think I am doing something wrong here). Please help! Thank you.

Hi, guys! I have a rather abstract question, because I don't know from which side to approach its solution, so I would appreciate any information. My task is to create a simple game that generates floor plans, and I am following this excellent algorithm (https://www.hindawi.com/journals/ijcgt/2010/624817/). At the moment I use squarified treemaps (http://www.win.tue.nl/~vanwijk/stm.pdf), and there are no problems there: I create a nested array whose elements are rooms with sizes. The problems start when I try to represent the generated "rooms" as edges and vertices (steps a, b, c, d in the attached picture). That representation would give me access to these elements as separate "entities" in future versions of the game. I have no experience with graphs (and do I even need graphs?) and am totally stuck at this step. How can I represent the room walls as trees (or graphs?) here? Calculate the sizes of the squares (rooms) and convert their sides to vectors? Then, in a loop, find shared vectors (same position in x or y) and mark them as shared walls? Instinct tells me that more elegant and efficient ways exist. Anyway, thanks for any information about this.
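Since squarified-treemap rooms are axis-aligned rectangles, the "shared vectors" idea from the post can be done without any graph machinery: a shared wall is just a pair of opposite edges at the same coordinate whose intervals overlap. A hedged C++ sketch (the `Room` struct and function names are made up for illustration):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: rooms from a squarified treemap are axis-aligned
// rectangles, so a shared wall is a pair of opposite edges at the same
// coordinate whose intervals overlap.
struct Room { float x, y, w, h; };

// Returns true (and the overlap interval) if a's right edge touches b's
// left edge. The symmetric horizontal case works the same way on y.
bool sharedVerticalWall(const Room& a, const Room& b,
                        float& lo, float& hi, float eps = 1e-4f) {
    if (std::fabs((a.x + a.w) - b.x) > eps) return false;  // edges not aligned
    lo = std::max(a.y, b.y);
    hi = std::min(a.y + a.h, b.y + b.h);
    return hi - lo > eps;  // intervals must actually overlap
}
```

Running this over all room pairs gives the wall segments; if a graph is wanted later, each detected segment becomes an edge and its endpoints become vertices.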

Why the sudden boom in marching cubes?
Scouting Ninja posted a topic in GDNet Lounge
So what is up? What did I miss? I finally got my PC working and was immediately bombarded with emails asking about marching cubes. I get on GameDev.net and it's even here: people are researching marching cubes. All of the gaming forums are abuzz with players' theories on how marching cubes work, and developers are looking for teams to build marching-cubes games. So why the spike? It reminds me of when Minecraft released.
Intention
This article is intended to give a brief look into the logistics of machine learning. Do not expect to become an expert in the field just by reading this. However, I hope the article goes into just enough detail to spark your interest in learning more about AI and how it can be applied to various fields, such as games. Once you finish reading the article, I recommend looking at the resources posted below. If you have any questions, feel free to message me on Twitter @adityaXharsh.

How Neural Networks Work
Neural networks work by receiving inputs, sending outputs, and performing self-corrections based on the difference between the output and the expected output, also known as the cost. Neural networks are composed of neurons, which in turn compose layers, or collections of neurons. For example, there is an input layer and an output layer. In between these two layers, there are layers known as hidden layers, which allow for more complex and nuanced behavior by the neural network. A neural network can be thought of as a multi-tier cake: the first tier of the cake represents the input, the tiers in between (or lack thereof) represent the hidden layers, and the last tier represents the output.

The two mechanisms of learning are forward propagation and backward propagation. Forward propagation uses linear algebra to calculate what the activation of each neuron of the next layer should be, and then pushes, or propagates, those values forward. Backward propagation uses calculus to determine what values in the network need to be changed in order to bring the output closer to the expected output.

Forward Propagation
As can be seen from the gif above, each layer is composed of multiple neurons, and each neuron is connected to every neuron of the following and previous layers, except for the input and output layers, which are not surrounded by layers on both sides.
To put it simply, a neural network represents a collection of activations, weights, and biases. They can be defined as:
Activation: a value representing how strongly a neuron is firing.
Weight: how strong the connection between two neurons is. It affects how much of the activation is propagated onto the next layer.
Bias: a minimum threshold for whether or not the current neuron's activation and weight should affect the next neuron's activation.
Each neuron has an activation and a bias. Every connection between neurons is represented as a weight. The activations, weights, biases, and connections can be represented using matrices. Activations are calculated using this formula, where a is the previous layer's activation vector, W the weight matrix, and b the bias vector: a' = σ(Wa + b). After the inner portion of the function has been computed, the resulting matrix gets pumped into a special function known as the sigmoid function, defined as σ(x) = 1 / (1 + e^(-x)). The sigmoid function is handy since its output is locked to the range between zero and one. This process is repeated until the activations of the output neurons have been calculated.

Backward Propagation
The process of a neural network performing self-correction is referred to as backward propagation, or backprop. This article will not go into detail about backprop, since it can be a confusing topic. To summarize, the algorithm uses a technique from calculus known as gradient descent: given a surface in a very high-dimensional space, the direction of change that minimizes the error must be found. The goal of using gradient descent is to modify the weights and biases such that the error in the network approaches zero. Furthermore, you can find the cost, or error, of a network using the squared-error formula C = Σ (output − expected)². Unlike forward propagation, which runs from input to output, backward propagation goes from output to input: for every activation, find the error in that neuron and how much of a role it played in the error of the output, and adjust accordingly.
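The layer update described above can be sketched in a few lines. This is a minimal C++ illustration of a' = σ(Wa + b), not code from the library mentioned later in the article:

```cpp
#include <cmath>
#include <vector>

// One forward-propagation step: out = sigmoid(W * a + b).
float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

std::vector<float> forwardLayer(const std::vector<std::vector<float>>& W,
                                const std::vector<float>& a,
                                const std::vector<float>& b) {
    std::vector<float> out(W.size());
    for (size_t i = 0; i < W.size(); ++i) {
        float z = b[i];                       // start with the bias
        for (size_t j = 0; j < a.size(); ++j)
            z += W[i][j] * a[j];              // weighted sum of inputs
        out[i] = sigmoid(z);                  // squash into (0, 1)
    }
    return out;
}
```

Chaining `forwardLayer` once per hidden layer and once for the output layer is the whole forward pass.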
This technique uses concepts such as the chain rule, partial derivatives, and multivariate calculus; therefore, it's a good idea to brush up on one's calculus skills.

High-Level Algorithm
1. Initialize the weight and bias matrices for all layers to random decimal numbers between -1 and 1.
2. Propagate the input through the network.
3. Compare the output with the expected output.
4. Backward-propagate the correction back into the network.
5. Repeat this for N training samples.

Source Code
If you're interested in looking into the guts of a neural network, check out AI Chan! It's a simple-to-integrate machine-learning library I wrote in C++. Feel free to learn from it and use it in your own projects. https://bitbucket.org/mrsaturnsan/aichan/

Resources
http://neuralnetworksanddeeplearning.com/
https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A
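As a toy illustration of the gradient-descent update at the heart of the algorithm above (a single weight and a single training sample, not a full network):

```cpp
#include <cmath>

// Gradient descent in miniature: the "network" is one weight w, the
// prediction is w * x, and the cost is (w * x - target)^2. Each epoch
// steps w against the gradient of the cost (computed via the chain rule).
float train(float x, float target, float w, float lr, int epochs) {
    for (int i = 0; i < epochs; ++i) {
        float output = w * x;
        float dCost_dw = 2.0f * (output - target) * x;  // d/dw of (wx - t)^2
        w -= lr * dCost_dw;                             // descend the gradient
    }
    return w;
}
```

With more weights the same idea applies per parameter, with partial derivatives flowing backward layer by layer.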

3D Level set method not generating correct results
Thomas Harper posted a topic in Graphics and GPU Programming
Hi GameDev team, I am trying to re-implement the fluid simulation from the following paper: http://movement.stanford.edu/courses/cs448-01-spring/papers/foster.pdf. However, I am stuck with the level set method, as it generates very strange results. The water should move much faster than the level set actually propagates, so I think there is a bug in my level set propagation; however, I have tried for several months now and I just can't get it right. In the lower image you can see the problem. The scene is a breaking dam: while the inserted particles move quite fast, my level set seems to stick to the wall and does not propagate correctly. As the particles are moving correctly, I assume the error lies in my level set propagation; the level set clings to the wall for a very long time, which looks strange and incorrect. I have inserted my level set propagation code below. If anybody could give me a hint on what I am doing wrong, please let me know. If you require more information, just let me know and I will reply as soon as possible. Thank you for your support!

void solverFedkiw::levelSetExtensionVelocity() {
    int xSize = 1 + newgrid->getXSize(), ySize = 1 + newgrid->getYSize(), zSize = 1 + newgrid->getZSize();
    int i, j, k;
    enum { CLOSE, FAR, ACCEPTED };
    std::multimap<float, Index> distances; //! distances of close cells, kept sorted by the multimap

    //! Algorithm taken from "The Fast Construction of Extension Velocities in Level Set Methods"
    //! by Adalsteinsson and Sethian
    for (int direction = 0; direction < 3; direction++) {
        int dx = 0, dy = 0, dz = 0;
        if (direction == 0) dx = 1;
        else if (direction == 1) dy = 1;
        else if (direction == 2) dz = 1;

        //! Initialize all values to FAR. These will become "all other points" -> page 8
        for (i = 0; i < xSize; i++)
            for (j = 0; j < ySize; j++)
                for (k = 0; k < zSize; k++)
                    sets[i][j][k] = FAR;

        //! Build narrow band
        for (i = 1; i < xSize - 1; i++) {
            for (j = 1; j < ySize - 1; j++) {
                for (k = 1; k < zSize - 1; k++) {
                    if ((newgrid->getCellType(i, j, k) == grid::WALL) || (newgrid->getCellType(i, j, k) == grid::HOLDER))
                        continue;

                    //! Test whether the left neighbour is different (i.e. change of cell type)
                    if (newgrid->getCellType(i, j, k) != newgrid->getCellType(i - dx, j - dy, k - dz)) {
                        sets[i][j][k] = ACCEPTED;
                        newgrid->setDistance(i, j, k, 0);

                        //! All neighbours of the on-surface cells are set to CLOSE
                        if (sets[i + 1][j][k] == FAR) sets[i + 1][j][k] = CLOSE;
                        if (sets[i - 1][j][k] == FAR) sets[i - 1][j][k] = CLOSE;
                        if (sets[i][j + 1][k] == FAR) sets[i][j + 1][k] = CLOSE;
                        if (sets[i][j - 1][k] == FAR) sets[i][j - 1][k] = CLOSE;
                        if (sets[i][j][k + 1] == FAR) sets[i][j][k + 1] = CLOSE;
                        if (sets[i][j][k - 1] == FAR) sets[i][j][k - 1] = CLOSE;
                    }
                }
            }
        }

        //! Put the close ones in the map
        for (i = 1; i < xSize - 1; ++i)
            for (j = 1; j < ySize - 1; ++j)
                for (k = 1; k < zSize - 1; ++k)
                    if (sets[i][j][k] == CLOSE) {
                        distances.insert(std::pair<float, Index>(1, Index(i, j, k)));
                        newgrid->setDistance(i, j, k, 1);
                    }

        //! Perform marching algorithm according to page 8
        while (!distances.empty()) {
            //! begin() returns an iterator; its second element is the index (the first is a float holding the distance)
            Index idx = (distances.begin())->second;

            //! Move the point into ACCEPTED and remove it from distances
            sets[idx.i][idx.j][idx.k] = ACCEPTED;
            distances.erase(distances.begin());

            if (newgrid->getCellType(idx.i, idx.j, idx.k) != grid::AIR)
                continue;

            //! Recompute the velocities according to page 13
            float totalDistance = 0, weightedSpeed = 0, distance;

            //! Test whether a neighbour is inside the fluid
            if (sets[idx.i + 1][idx.j][idx.k] == ACCEPTED) {
                distance = fabs(newgrid->getDistance(idx.i + 1, idx.j, idx.k) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0) weightedSpeed += newgrid->getU(idx.i + 1, idx.j, idx.k) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i + 1, idx.j, idx.k) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i + 1, idx.j, idx.k) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i - 1][idx.j][idx.k] == ACCEPTED) {
                distance = fabs(newgrid->getDistance(idx.i - 1, idx.j, idx.k) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0) weightedSpeed += newgrid->getU(idx.i - 1, idx.j, idx.k) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i - 1, idx.j, idx.k) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i - 1, idx.j, idx.k) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i][idx.j + 1][idx.k] == ACCEPTED) {
                distance = fabs(newgrid->getDistance(idx.i, idx.j + 1, idx.k) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0) weightedSpeed += newgrid->getU(idx.i, idx.j + 1, idx.k) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i, idx.j + 1, idx.k) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i, idx.j + 1, idx.k) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i][idx.j - 1][idx.k] == ACCEPTED) {
                distance = fabs(newgrid->getDistance(idx.i, idx.j - 1, idx.k) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0) weightedSpeed += newgrid->getU(idx.i, idx.j - 1, idx.k) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i, idx.j - 1, idx.k) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i, idx.j - 1, idx.k) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i][idx.j][idx.k + 1] == ACCEPTED) {
                distance = fabs(newgrid->getDistance(idx.i, idx.j, idx.k + 1) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0) weightedSpeed += newgrid->getU(idx.i, idx.j, idx.k + 1) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i, idx.j, idx.k + 1) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i, idx.j, idx.k + 1) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i][idx.j][idx.k - 1] == ACCEPTED) {
                distance = fabs(newgrid->getDistance(idx.i, idx.j, idx.k - 1) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0) weightedSpeed += newgrid->getU(idx.i, idx.j, idx.k - 1) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i, idx.j, idx.k - 1) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i, idx.j, idx.k - 1) * distance;
                totalDistance += distance;
            }

            if (direction == 0) newgrid->setU(idx.i, idx.j, idx.k, weightedSpeed / totalDistance);
            else if (direction == 1) newgrid->setV(idx.i, idx.j, idx.k, weightedSpeed / totalDistance);
            else if (direction == 2) newgrid->setW(idx.i, idx.j, idx.k, weightedSpeed / totalDistance);

            //! Move all neighbouring points of the trial point that are in FAR to CLOSE
            float dist = newgrid->getDistance(idx.i, idx.j, idx.k);
            if (newgrid->getCellType(idx.i + 1, idx.j, idx.k) != grid::WALL && newgrid->getCellType(idx.i + 1, idx.j, idx.k) != grid::HOLDER && sets[idx.i + 1][idx.j][idx.k] == FAR) {
                newgrid->setDistance(idx.i + 1, idx.j, idx.k, dist + 1);
                sets[idx.i + 1][idx.j][idx.k] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i + 1, idx.j, idx.k)));
            }
            if (newgrid->getCellType(idx.i - 1, idx.j, idx.k) != grid::WALL && newgrid->getCellType(idx.i - 1, idx.j, idx.k) != grid::HOLDER && sets[idx.i - 1][idx.j][idx.k] == FAR) {
                newgrid->setDistance(idx.i - 1, idx.j, idx.k, dist + 1);
                sets[idx.i - 1][idx.j][idx.k] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i - 1, idx.j, idx.k)));
            }
            if (newgrid->getCellType(idx.i, idx.j + 1, idx.k) != grid::WALL && newgrid->getCellType(idx.i, idx.j + 1, idx.k) != grid::HOLDER && sets[idx.i][idx.j + 1][idx.k] == FAR) {
                newgrid->setDistance(idx.i, idx.j + 1, idx.k, dist + 1);
                sets[idx.i][idx.j + 1][idx.k] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i, idx.j + 1, idx.k)));
            }
            if (newgrid->getCellType(idx.i, idx.j - 1, idx.k) != grid::WALL && newgrid->getCellType(idx.i, idx.j - 1, idx.k) != grid::HOLDER && sets[idx.i][idx.j - 1][idx.k] == FAR) {
                newgrid->setDistance(idx.i, idx.j - 1, idx.k, dist + 1);
                sets[idx.i][idx.j - 1][idx.k] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i, idx.j - 1, idx.k)));
            }
            if (newgrid->getCellType(idx.i, idx.j, idx.k + 1) != grid::WALL && newgrid->getCellType(idx.i, idx.j, idx.k + 1) != grid::HOLDER && sets[idx.i][idx.j][idx.k + 1] == FAR) {
                newgrid->setDistance(idx.i, idx.j, idx.k + 1, dist + 1);
                sets[idx.i][idx.j][idx.k + 1] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i, idx.j, idx.k + 1)));
            }
            if (newgrid->getCellType(idx.i, idx.j, idx.k - 1) != grid::WALL && newgrid->getCellType(idx.i, idx.j, idx.k - 1) != grid::HOLDER && sets[idx.i][idx.j][idx.k - 1] == FAR) {
                newgrid->setDistance(idx.i, idx.j, idx.k - 1, dist + 1);
                sets[idx.i][idx.j][idx.k - 1] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i, idx.j, idx.k - 1)));
            }
        }
    }
}
For an RPG with online components like player(s) vs. player(s), and client-server for arenas. Any links, articles, posts, books, tips, best practices, or resources? Thank you.
Edit: multi-platform: Windows, OS X, and Linux.
Edit 2: links
https://www.reddit.com/r/PUBATTLEGROUNDS/comments/7p0n18/the_truth_of_online_game_cheating/
https://stackoverflow.com/questions/960499/how-to-prevent-cheating-in-our-multiplayer-games
https://stackoverflow.com/questions/13826786/how-to-secure-client-side-anti-cheat?noredirect=1&lq=1
https://stackoverflow.com/questions/15760077/android-board-like-networked-multiplayer-game-anti-cheating?noredirect=1&lq=1
https://stackoverflow.com/questions/143231/protection-against-automation/143241#143241
https://en.wikipedia.org/wiki/PunkBuster
https://en.wikipedia.org/wiki/NProtect_GameGuard
https://en.wikipedia.org/wiki/Valve_Anti-Cheat
https://en.wikipedia.org/wiki/Cheating_in_online_games#Anti-cheating_methods_and_limitations
https://en.wikipedia.org/wiki/Executable_compression
https://en.wikipedia.org/wiki/Blizzard_Entertainment#Technology
https://esports.easyanticheat.net/features/
https://nakedsecurity.sophos.com/2017/03/10/how-online-gamers-use-malware-to-cheat/
https://www.google.ca/search?q=Anti-cheating+techniques+for+games&client=firefox-b&dcr=0&ei=faDGWqe7IpC6tQXqlKu4Dw&start=0&sa=N&biw=946&bih=885

Hello! This is a complicated subject, I think, but how would you compute the contact area between a sphere and a triangle? I thought of using the GJK algorithm combined with the EPA algorithm, but that could be the wrong way.
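Not a full answer, but a hedged starting point before reaching for GJK/EPA: in the simple case where the contact circle lies entirely inside the triangle, the problem reduces to the sphere against the triangle's plane, and the contact patch is the disc where the plane cuts the sphere. A minimal sketch (names are illustrative; clipping the disc against the triangle edges is the hard part this deliberately skips):

```cpp
#include <cmath>

// Hedged sketch: if a sphere of the given radius penetrates the triangle's
// plane by `depth` (and the contact circle lies fully inside the triangle),
// the contact patch is a disc of radius sqrt(r^2 - h^2), where h is the
// distance from the sphere center to the plane.
float spherePlaneContactArea(float radius, float depth) {
    if (depth <= 0.0f) return 0.0f;              // not touching the plane
    float h = radius - depth;                    // center-to-plane distance
    float r2 = radius * radius - h * h;          // contact circle radius squared
    return r2 > 0.0f ? 3.14159265358979f * r2 : 0.0f;
}
```

When the contact circle overlaps a triangle edge, the disc would additionally need to be clipped against the triangle, which is where a more general approach (or GJK/EPA for the contact point and depth) comes in.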

For my simple path tracer written in C I need a function that calculates a random, diffuse reflection vector (for a diffuse/Lambert material). This is my function header: struct Ray diffuse(const struct Sphere s, const struct Point hitPoint) where s is the intersected sphere and hitPoint is the point where a ray intersected the sphere. Of course there are several projects with source code available on GitHub, but most of the time they do far more than I can understand at once. I don't want to just copy their code (I want to understand it myself), and I did not get any useful results from Google. I don't know where to start.
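A hedged sketch of the core step (the `Vec3` type and function names are placeholders, not the poster's API): sample a uniformly distributed direction on the hemisphere around the surface normal, where the normal at `hitPoint` on a sphere is simply `(hitPoint - s.center)` normalized. One simple method is rejection sampling of the unit ball, then flipping the direction into the normal's hemisphere:

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Uniformly distributed direction on the hemisphere around `normal`
// (which must be unit length). Rejection-sample a point in the unit
// ball, normalize it onto the sphere, then flip it into the hemisphere.
Vec3 randomHemisphereDir(Vec3 normal, std::mt19937& rng) {
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
    for (;;) {
        Vec3 d = {dist(rng), dist(rng), dist(rng)};
        float len2 = d.x*d.x + d.y*d.y + d.z*d.z;
        if (len2 < 1e-6f || len2 > 1.0f) continue;   // reject outside the ball
        float inv = 1.0f / std::sqrt(len2);
        d = {d.x*inv, d.y*inv, d.z*inv};
        // Flip into the normal's hemisphere so the ray leaves the surface.
        float cosTheta = d.x*normal.x + d.y*normal.y + d.z*normal.z;
        if (cosTheta < 0.0f) d = {-d.x, -d.y, -d.z};
        return d;
    }
}
```

The returned direction becomes the new ray's direction, with the new ray's origin at `hitPoint` (nudged slightly along the normal to avoid self-intersection). Cosine-weighted sampling is the usual refinement for Lambertian materials, but uniform hemisphere sampling is the easiest correct starting point.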

3D Radiosity normal map math question
Dushan Boshnyak posted a topic in Graphics and GPU Programming
Hi, I'm trying to implement the radiosity normal mapping algorithm from the Valve paper. I've made a shader, baked some light maps, and got the effect to work more or less: it showed some surfaces both lightmapped and normal mapped, and it looked kind of nice. This old demo is currently offline, but you can see the results. I wanted to revisit the tangent code that made the sphere render janky when I noticed that this never quite worked as expected. It's noticeable when I use the flat (128, 128, 255) normal map and the lights seem to "diverge". I tried to visualize the math by mocking the light maps with three dot products between a directional light and the basis vectors. The n dot l on the yellow normal never seems to match the result of sampling and weighting from the basis directions. The first image, where the light is completely grazing the surface, illustrates well where it fails: the two other basis directions (green and blue) get a clamped 0, but 1/3 of the contribution comes from the red direction, where there is some value. Baking a regular light map, this surface would be black. This cyan distribution plot looks just like the banding shape I get if I have a light close to a surface: a light cylinder intersecting a wall, for example, wouldn't shine uniformly in all directions, but along these three dominant ones. The results in the paper's screenshots look much better. My wild guesses: either the weighting math needs to be adjusted somehow (maybe by scaling the normal), or the lights need to have negative values, or something else. Is there a way to render a regular light map on non-normal-mapped geometry the same way as a radiosity normal map, with a flat (128, 128, 255) normal map? (The plot above is without the clamp on the basis/normal dot products.)
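To make the observation above concrete, here is a small C++ mock (not the Valve shader) of the reconstruction the post describes: per-basis "lightmap" values faked as clamped dots with a directional light, blended by normalized weights of the shading normal against the standard HL2 basis. Whether the weights are squared is one of the knobs the post is questioning; squaring is assumed here.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// The three Half-Life 2 basis directions in tangent space.
static const Vec3 kBasis[3] = {
    { std::sqrt(2.0f/3.0f),  0.0f,                  1.0f/std::sqrt(3.0f)},
    {-1.0f/std::sqrt(6.0f),  1.0f/std::sqrt(2.0f),  1.0f/std::sqrt(3.0f)},
    {-1.0f/std::sqrt(6.0f), -1.0f/std::sqrt(2.0f),  1.0f/std::sqrt(3.0f)},
};

// Mock of the reconstruction: per-basis lightmap values (clamped dots
// with a directional light) blended by squared, normalized weights of
// the shading normal against the basis.
float reconstruct(Vec3 normal, Vec3 lightDir) {
    float weights[3], lightmap[3], wsum = 0.0f, result = 0.0f;
    for (int i = 0; i < 3; ++i) {
        weights[i] = std::max(0.0f, dot(normal, kBasis[i]));
        weights[i] *= weights[i];                       // squared weighting
        lightmap[i] = std::max(0.0f, dot(kBasis[i], lightDir));
        wsum += weights[i];
    }
    for (int i = 0; i < 3; ++i)
        result += weights[i] * lightmap[i] / wsum;
    return result;
}
```

With the flat normal (0, 0, 1), every weight is 1/3, so the result is the plain average of the three clamped basis dots. That reproduces exactly what the post observes: for a grazing light the average is nonzero (one basis still sees the light) even though n dot l is zero, and for a light along the normal the average is 1/sqrt(3) rather than 1. The per-basis clamp happening before the blend is the source of the discrepancy.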
