
LifeIsGood
Member
Content count: 19
Community Reputation: 202 Neutral
Rank: Member
Role: Programmer
Interests: Art, Programming
GitHub: https://github.com/GoGreenMI
Hi! First of all, this is still very basic. I just managed to get my first results running, but I figured I would share my thought process and what I have done so far. I'm not that well versed when it comes to math, so I'm definitely looking forward to hearing your opinions. This thread is quite long, so feel free to just jump to the heading that interests you.

Recently, @Boris The Brave released a really nice tutorial on Marching Cubes and Dual Contouring. A few years ago, I was already messing around with those algorithms but was lacking the required programming knowledge back then. When Boris' tutorial came up, it rekindled my interest in this field. Also, let me apologize for my poor Paint skills in advance.

Classic Dual Contouring

The heart of the dual contouring algorithm is solving the problem of finding the best-fitting vertex for the surface inside a cell. The original paper solves this with a rather complex method (at least to us non-mathematicians): solving and minimizing the Quadratic Error Function (QEF). Most projects simply reuse the original paper's QEF implementation and/or (slightly) adjust it to their needs. Usually, this works perfectly fine and yields pretty good results. However, I was curious whether I could find a way to simplify the problem.

My approach (2D)

Looking at the problem in 2D simplifies it significantly, so let's do that first. Here's the information we have:

- We know the exact position of a sign change on one or more edges.
- We know the normal of the surface at that exact position.

So, how do we find the best vertex for that cell? The normal is obviously orthogonal to our surface, so what does the surface look like at that point? Of course, it's simply an edge orthogonal to the normal! That means there are only 2 possibilities in 2D!
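To make the "only 2 possibilities" concrete, here is a small Python sketch (my own illustration, not code from the post): the two candidate tangent directions are simply the two 90-degree rotations of the normal.

```python
# Sketch: in 2D, the only two directions orthogonal to a normal (nx, ny)
# are its two 90-degree rotations.

def perpendiculars(nx, ny):
    """Return the two vectors orthogonal to the normal (nx, ny)."""
    return (-ny, nx), (ny, -nx)

# For a normal pointing along +x, the candidate tangents point along +y and -y:
up, down = perpendiculars(1.0, 0.0)
```

Both results are inverses of one another, which is exactly the ambiguity the post resolves by nudging the point toward the nearest sign change.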
(The two are essentially the same, as they are just inverses of one another.) One of the easiest ways to decide which of those 2 directional vectors we need is to simply move our point a bit in each direction; the point that ends up closer to the next sign change gives the direction we are looking for. Now, all we need to do to find the best-fitting vertex is to find the intersection of those 2 directional vectors! (That is, we want to turn those 2 vectors into lines.) Since we know that this cell contains 2 or more sign changes, there is either going to be an intersection of those 2 lines inside the cell, or none at all. And that's it! That's what our actual surface looks like.

So, here's all we need to do, summarized:

- For each sign change, calculate the 2 orthogonals to the normal.
- For each other sign change, do the same.
- Look for an intersection between the 2 resulting lines.

(Essentially, we are looping over all sign changes twice.) In the end, we simply take the median of all found intersection points (required if there are more than just 2 sign changes).

My approach (3D)

In 3D, there is an infinite number of vectors orthogonal to our normal, so how do we pick the one representing the edge of our surface? In 2D, we can represent the course of a surface at a specific point with the tangent line. In 3D, the same principle exists; this time, it is a tangent plane. So, instead of 2 vectors as in 2D (the 2 vectors orthogonal to our normal), we have 4 this time: the vectors u and v describing the tangent plane, as well as their inverses. We can apply the same principle as in 2D to find the one vector representing our surface edge at this point. Now, all we have to do is again find the intersection of 2 lines (or more specifically, line segments).
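The summarized 2D steps can be sketched in Python as follows (my own code, not the post's implementation; note that for intersecting full tangent lines, the sign of the tangent direction doesn't matter, so one perpendicular per sign change suffices):

```python
# Sketch of the 2D vertex placement: intersect the tangent lines through
# each sign-change point and take the component-wise median of the hits.
from statistics import median

def intersect(p1, d1, p2, d2, eps=1e-9):
    """Intersection of lines p1 + t*d1 and p2 + s*d2, or None if parallel."""
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < eps:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / cross
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def cell_vertex(hermite):
    """hermite: list of (point, normal) pairs, one per sign change."""
    # Tangent line through each point: direction is the rotated normal.
    tangents = [(p, (-n[1], n[0])) for p, n in hermite]
    hits = []
    for i in range(len(tangents)):
        for j in range(i + 1, len(tangents)):
            hit = intersect(tangents[i][0], tangents[i][1],
                            tangents[j][0], tangents[j][1])
            if hit is not None:
                hits.append(hit)
    if not hits:
        return None
    return (median(h[0] for h in hits), median(h[1] for h in hits))

# Two sign changes on a unit cell whose surface corner sits at (0.5, 0.5):
v = cell_vertex([((0.5, 0.0), (1.0, 0.0)),   # bottom edge, normal +x
                 ((0.0, 0.5), (0.0, 1.0))])  # left edge, normal +y
```

With the two example sign changes above, the tangent lines are x = 0.5 and y = 0.5, so the recovered vertex is their corner intersection.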
Results in 2D

Circle, from left to right: the complete surface, Hermite data (sign changes and normals), resulting vertices (points of intersection).
Square, from left to right: the complete surface, Hermite data (sign changes and normals), resulting vertices (points of intersection).

Results in 3D

Left: my approach. Right: the result from Boris The Brave's implementation. https://youtu.be/LtZCa359exU

What ways can I optimize a vector magnitude check?
LifeIsGood replied to Scouting Ninja's topic in Math and Physics
Yes. The dot product is defined as

    public static float Dot (Vector3 lhs, Vector3 rhs)
    {
        return lhs.x * rhs.x + lhs.y * rhs.y + lhs.z * rhs.z;
    }

and the magnitude as

    public float magnitude
    {
        get
        {
            float result = x * x + y * y + z * z;
            return (float)Math.Sqrt(result);
        }
    }

So, as you can see, taking the dot product of a vector with itself yields the same result as the squared magnitude. By the way, Unity already provides a property for its vector types called sqrMagnitude.
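A quick numeric check of that identity (a Python sketch mirroring the Unity C# above, not Unity's own code):

```python
# dot(v, v) equals the squared magnitude, so no sqrt is needed
# when you only compare lengths.
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def magnitude(v):
    return math.sqrt(dot(v, v))

v = (1.0, 2.0, 2.0)
# dot(v, v) is 9.0 and magnitude(v) is 3.0
```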
Marching Cubes and Dual Contouring Tutorial
LifeIsGood replied to Boris The Brave's topic in Math and Physics
I don't think that's accurate. Take this example: if he generates the terrain using an SDF, the SDF is not going to be accurate anymore once you alter the mesh, is it? So, somehow he is still able to calculate the normal without the original SDF.
Marching Cubes and Dual Contouring Tutorial
LifeIsGood replied to Boris The Brave's topic in Math and Physics
@Boris The Brave When using Marching Cubes, one can create arbitrary meshes by altering the density data randomly. However, since we need the normal to perform dual contouring, how would one go about doing that? Right now, you're using a signed distance function and calculating the normal from that function. What if you're not using a signed distance function? How do you go about calculating the normal at the point of a sign change?
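For context on the "normal from the SDF" part of the question: a common way to get that normal is the normalized gradient of the distance/density function, approximated with central differences. A hedged Python sketch (my own illustration; with sampled density data, f would be an interpolated grid lookup rather than an analytic function):

```python
# Surface normal as the normalized finite-difference gradient of an SDF.
import math

def normal(f, x, y, z, h=1e-4):
    dx = f(x + h, y, z) - f(x - h, y, z)
    dy = f(x, y + h, z) - f(x, y - h, z)
    dz = f(x, y, z + h) - f(x, y, z - h)
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

# Unit-sphere SDF: the normal at (1, 0, 0) should point along +x.
sphere = lambda x, y, z: math.sqrt(x * x + y * y + z * z) - 1.0
n = normal(sphere, 1.0, 0.0, 0.0)
```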
Hi, I'm currently reading up on GPU voxelization as presented in the Voxel Cone Tracing paper and have a hard time understanding it. I feel like there's so much information missing. They talk about fixed-size voxel grids all the time (e.g. 512^3) and how this voxel grid is supposed to cover the whole scene. What I can't seem to find in those papers is how this voxel grid is actually created. The paper states that the voxel grid is implemented using an octree, and every child is 1/8th the size of its parent. So, how do they generate this octree big enough to cover the whole scene geometry? How do I determine the origin of this octree? Is a single voxel of a fixed size (e.g. 1x1x1) if the voxel grid has a fixed resolution like 512^3? As far as I have understood it, the resolution of the voxel grid is a measure of detail, not of the actual size of the scene; otherwise, there would be no guarantee that the whole scene is covered. Am I misunderstanding the concept, and all they really do is create (basically) a 512^3 voxel grid around the camera, so that only the geometry inside this volume is voxelized?
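One common reading of the fixed-resolution setup, sketched as back-of-the-envelope Python (my numbers, not the paper's): a 512^3 grid corresponds to an octree of depth log2(512) = 9, and the leaf voxel size then follows from the scene extent, not the other way around.

```python
# Relationship between grid resolution, octree depth, and voxel size.
import math

def octree_depth(resolution):
    """Number of octree subdivisions needed for a cubic grid of this resolution."""
    return int(math.log2(resolution))

def voxel_size(scene_extent, resolution):
    """Edge length of one leaf voxel when the grid spans the whole scene."""
    return scene_extent / resolution

depth = octree_depth(512)        # 9 subdivisions
leaf = voxel_size(256.0, 512)    # 0.5 units per voxel for a 256-unit scene
```

Under this reading, resolution is indeed a measure of detail: the same 512^3 grid stretched over a larger scene just produces larger (coarser) voxels.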

EntityComponent systems
LifeIsGood replied to matt77hias's topic in General and Gameplay Programming
Unity is actually reworking its EC system right now. You might want to have a look at this. Also, the code is open source (or it's going to be soon):
Marching Cubes with Multiple Materials
LifeIsGood replied to Outliner's topic in For Beginners's Forum
The paper on Dual Contouring also provides a solution for handling multiple materials.
Flow fields all having yaw, pitch and roll components?
LifeIsGood replied to lucky6969b's topic in Artificial Intelligence
What is the Perlin noise used for in your implementation? Normally, a single field cell simply holds a vector which is used as the direction to move in. I wrote a basic flow fields implementation for Unity a few weeks ago:
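A minimal sketch of that "cell holds a direction vector" idea (my own Python, not the linked Unity project): an agent looks up the cell it is in and moves along the stored vector.

```python
# Minimal flow-field step: sample the agent's cell, move along its vector.

def step(agent, field, cell_size, dt):
    """Advance an agent one tick along the flow field."""
    x, y = agent
    col = int(x // cell_size)
    row = int(y // cell_size)
    dx, dy = field[row][col]          # per-cell direction vector
    return (x + dx * dt, y + dy * dt)

# A 2x2 field that pushes everything to the right:
field = [[(1.0, 0.0), (1.0, 0.0)],
         [(1.0, 0.0), (1.0, 0.0)]]
pos = step((0.5, 0.5), field, cell_size=1.0, dt=0.25)
```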
Artworks for feedback (Cows VS Vikings)
LifeIsGood replied to RedoxEnt's topic in Your Announcements
Looks great 
Pathfinding in city builders. Store the paths?
LifeIsGood replied to suliman's topic in Artificial Intelligence
Normally, it depends on the algorithm you're using (though you can alter those in whatever way you want). Let's take flow fields as an example: for a single path, the grid is only calculated once. That is, as long as you don't need a different path, you just don't need to update the grid.
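That "calculate once, reuse until the goal changes" idea can be sketched as a cache keyed by goal (hypothetical names, my own Python):

```python
# Recompute a flow field only the first time a goal is requested.

class FlowFieldCache:
    def __init__(self, compute_field):
        self._compute = compute_field   # goal -> field (the expensive part)
        self._fields = {}
        self.computes = 0               # for illustration: count recomputes

    def get(self, goal):
        if goal not in self._fields:
            self._fields[goal] = self._compute(goal)
            self.computes += 1
        return self._fields[goal]

cache = FlowFieldCache(lambda goal: {"goal": goal})
a = cache.get((3, 4))
b = cache.get((3, 4))   # served from the cache, no recompute
```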
Most obsessive you've been about a game?, flip side of grumpyOldDude's question.
LifeIsGood replied to Awoken's topic in GDNet Lounge
Same here. My favorite game was (and still is) Ratchet & Clank. On my first PC I used to play SpellForce a lot.
I can absolutely agree. My friends and I were just talking about that; it feels like the game industry is running out of ideas. Everything just plays similarly. They constantly improve the graphics (which is cool, I really like nice graphics), but it feels like they're going backwards in creativity and gameplay. I bought a PS4 two years ago and have only played a few hours since.

Making a Minecraft clone.
LifeIsGood replied to DeadlyForest's topic in General and Gameplay Programming
Perlin noise is just pseudo-randomness. You would basically (mostly) just generate a heightmap using a noise algorithm, which describes caves, flatland, or mountains. This is probably one of the best Minecraft-like tutorials out there: https://forum.unity3d.com/threads/tutorialcvoxelterrainwithinfiniteterrainsavingandloading.293340/
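To show the heightmap idea concretely, here is a toy Python sketch (my own hash-based noise, deliberately simpler than Perlin; a real generator would use Perlin or simplex noise instead):

```python
# Deterministic pseudo-random heights per column, like a 1D heightmap.

def noise(x, seed=1337):
    """Deterministic pseudo-random value in [0, 1) for integer x."""
    n = (x * 374761393 + seed * 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n ^ (n >> 16)) / 2**32

def heightmap(width, max_height=32):
    """One terrain height per column; blocks are filled up to that height."""
    return [int(noise(x) * max_height) for x in range(width)]

heights = heightmap(16)
```

The key property is determinism: the same seed always yields the same terrain, which is what makes "random" worlds reproducible.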
Anyone here a selftaught graphics programmer?
LifeIsGood replied to ISDCaptain01's topic in Graphics and GPU Programming
@turanszkij Oh, that's so cool to see you here; I just found your videos through my YouTube recommendations a few weeks ago! Back to the topic: I guess I don't have as much to show as most others here, but I'm currently "working" on becoming a self-taught graphics programmer :) I've been messing around with the Vulkan API and photorealistic rendering for five months now. However, I have no experience with the "standard" way of doing video game graphics (that is, rasterization), but rather with ray tracing, and more specifically path tracing, since New Year. I'm currently trying to port my CPU path tracer to the GPU when I have time... I hope to get some small graphics programming jobs on Upwork or similar portals in the near future ;)
What does it mean for a group of units to reach a point?
LifeIsGood replied to Dr_Asik's topic in Artificial Intelligence
That's actually one of the prime use cases of Continuum Crowds! In CC, the space is discretized into a grid, so a goal can't be a single point in space after all. Mostly, each group's goal contains not a single cell but multiple cells. In addition, CC defines its grid cells with special attributes. One of those is the density at that specific cell: every person in a group contributes density to the cell it currently occupies as well as to the surrounding cells. So, people will tend to avoid each other due to overly high densities at specific cells.
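The density contribution can be sketched like this (my own grid layout and weights, not the paper's exact density formula): each agent deposits full density into its own cell and a smaller amount into the neighbouring cells.

```python
# Splat agent density into a grid: own cell gets 1.0, neighbours get less.

def splat_density(agents, rows, cols, neighbour_weight=0.25):
    grid = [[0.0] * cols for _ in range(rows)]
    for (r, c) in agents:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    w = 1.0 if (dr, dc) == (0, 0) else neighbour_weight
                    grid[rr][cc] += w
    return grid

grid = splat_density([(1, 1)], rows=3, cols=3)
# centre cell gets 1.0; each of the 8 neighbours gets 0.25
```

Path costs then rise with density, which is what makes agents steer around crowded cells.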
