GameGeazer

Members
  • Content count: 72
Community Reputation

1090 Excellent

About GameGeazer

  • Rank: Member
  1. C++ IDEs - a rant

    Eeeww midnight black background and neon text. Grooosssss. Pastels homey, pastels are where it's at.
  2. Tessellation On The Gpu

    "Of course it's possible, that's exactly what transform feedback (stream output in D3D) does."

    Oops, I didn't know about transform feedback, thanks for correcting me.
  3. Tessellation On The Gpu

    No, to my knowledge it isn't possible to retrieve geometry generated by a geometry shader. Data generated in a graphics pipeline is purged after completion, so keeping changes made by a vertex or tessellation shader isn't possible either. You'll need to make your changes either using a compute shader or on the CPU.
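    For the compute-shader route, here's a rough sketch in Unity C# (the language I use in my other posts). The shader asset, the "GenerateVertices" kernel, and the buffer layout are all made-up names for illustration; the point is just that a compute kernel can write generated geometry into a buffer that you can read back or keep bound for rendering.

        // Hypothetical sketch: generate vertices in a compute kernel, then read them back.
        // The compute shader asset, kernel name, and buffer layout are assumptions.
        using UnityEngine;

        public class ComputeReadbackSketch : MonoBehaviour
        {
            public ComputeShader voxelGen;      // assumed to contain a "GenerateVertices" kernel
            private const int MaxVertices = 65536;

            void Start()
            {
                // One float3 (Vector3) per vertex, 12 bytes each.
                var vertexBuffer = new ComputeBuffer(MaxVertices, sizeof(float) * 3);

                int kernel = voxelGen.FindKernel("GenerateVertices");
                voxelGen.SetBuffer(kernel, "_Vertices", vertexBuffer);
                voxelGen.Dispatch(kernel, MaxVertices / 64, 1, 1);   // kernel assumed to use 64 threads per group

                // GetData stalls until the GPU finishes; fine for tooling, costly if done every frame.
                var vertices = new Vector3[MaxVertices];
                vertexBuffer.GetData(vertices);

                vertexBuffer.Release();
            }
        }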
  4. Storing Signed Distance Fields

    "Are you running out of disc space? Video memory?"

    I'm more concerned with the method than memory at this point.

    "Are you trying to save bandwidth for rendering? Are you really bandwidth bound, or fetch bound by the TMUs?"

    Yeah, bandwidth-wise I can use a 512^3 grid in real time at around 20 fps using a very naive approach. Now that the proof of concept actually runs, I'm looking for ways to improve.

    "Are you trying to voxelize in real time? Or streaming static data? Transcoding involved?"

    Yes. As of now, for each edit, I walk over each cell of the grid and (if the cell is close enough) apply the changes, i.e. set the voxel to a material if it's close enough to the surface (a rough sketch of that loop follows at the end of this post). I haven't heard of streaming static data or transcoding. Is either approach worth looking into?

    "Interactive visualization of scientific data (1-10 fps)? Pre-visualization of cinematic rendering (<1 fps)? CAD editor (>10 fps)? Game (60 fps)?"

    Aiming for real-time game performance.

    Thanks for the article on SVOs! I found a great paper from NVIDIA about building tree nodes in parallel. It's taking me some time to transition into GPU programming haha, this stuff is such a crazy different way of thinking.
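    For reference, the naive edit loop described above looks roughly like this. The grid layout, the sphere-shaped edit, and the material byte are hypothetical stand-ins; an obvious improvement would be restricting the loops to the edit's bounding box instead of the whole grid.

        // Rough sketch of the naive approach: walk every cell and set the material if the
        // cell is close enough to the edit's surface. Grid layout and names are assumptions.
        using UnityEngine;

        public static class NaiveEditSketch
        {
            public static void ApplySphereEdit(byte[,,] grid, float cellSize,
                                               Vector3 center, float radius,
                                               byte material, float shellThickness)
            {
                for (int x = 0; x < grid.GetLength(0); ++x)
                for (int y = 0; y < grid.GetLength(1); ++y)
                for (int z = 0; z < grid.GetLength(2); ++z)
                {
                    Vector3 cellCenter = new Vector3(x + 0.5f, y + 0.5f, z + 0.5f) * cellSize;
                    float distanceToSurface = Mathf.Abs(Vector3.Distance(cellCenter, center) - radius);

                    // Only cells close enough to the sphere's surface receive the material.
                    if (distanceToSurface <= shellThickness)
                        grid[x, y, z] = material;
                }
                // Cheap improvement: compute the edit's bounding box in cell coordinates and
                // restrict the three loops to that range instead of the whole 512^3 grid.
            }
        }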
  5. Voxel LOD?

    Try to stay away from special-case scenarios such as the blocks being in an "L" formation. Hawkblood was recommending having multiple grids: one for terrain that will have LOD and another for voxels that should keep their resolution.

    From the sound of it you're generating cubic terrain as well... cubes. What you should be doing is stepping over chunks of voxels (16 x 16 x 16 bricks or something) and extracting geometry along the isosurface. Or rather, only generate the faces of each cube that are visible. The link I posted last explains this step by step.

    Here are a few goals. Shoot for them in order and you'll learn a lot in the process!

    1. Extract voxels as six-sided cubes in a 16 x 16 x 16 grid.
    2. Generate a 16 x 16 heightmap using Perlin noise.
    3. Use the heightmap to fill your grid and then extract! (A small sketch of this step follows the code at the end of this post.)
    4. Extract only the visible faces of the cubes and place the vertices into a single mesh, using the culling method in this article: https://0fps.net/2012/06/30/meshing-in-a-minecraft-game/
    5. Learn about octrees and implement the data structure.
    6. Store a 16 x 16 x 16 brick of voxels as an octree leaf node.
    7. When extracting a chunk, check the neighboring brick entries for transitions.

    Here's a naive implementation from a couple of years ago (don't judge me on the code haha). It's very verbose; I hope it helps ideas in your head click together. Man, I was obsessed with pools. Give a man a hammer...

    public class CubicChunkExtractor
    {
        private VoxelMaterialAtlas materialAtlas;

        public CubicChunkExtractor(VoxelMaterialAtlas materialAtlas)
        {
            this.materialAtlas = materialAtlas;
        }

        public void Extract(BrickTree brickTree, Vector3i brickWorld,
            ref List<Color> colors, ref List<Vector3> vertices, ref List<Vector3> normals,
            ref List<Vector2> uv, ref List<int> indices,
            ref Pool<Color> colorPool, ref Pool<Vector2> vector2Pool, ref Pool<Vector3> vector3Pool)
        {
            int xOffset = brickTree.BrickDimensionX * brickWorld.x;
            int yOffset = brickTree.BrickDimensionY * brickWorld.y;
            int zOffset = brickTree.BrickDimensionZ * brickWorld.z;
            ColorUtil colorUtil = new ColorUtil();
            int normalDirection;

            for (int x = 0; x < brickTree.BrickDimensionX; ++x)
            {
                for (int y = 0; y < brickTree.BrickDimensionY; ++y)
                {
                    for (int z = 0; z < brickTree.BrickDimensionZ; ++z)
                    {
                        int trueX = x + xOffset;
                        int trueY = y + yOffset;
                        int trueZ = z + zOffset;

                        VoxelMaterial voxel = materialAtlas.GetVoxelMaterial(brickTree.GetVoxelAt(trueX, trueY, trueZ));
                        VoxelMaterial voxelPlusX = materialAtlas.GetVoxelMaterial(brickTree.GetVoxelAt(trueX + 1, trueY, trueZ));
                        VoxelMaterial voxelPlusY = materialAtlas.GetVoxelMaterial(brickTree.GetVoxelAt(trueX, trueY + 1, trueZ));
                        VoxelMaterial voxelPlusZ = materialAtlas.GetVoxelMaterial(brickTree.GetVoxelAt(trueX, trueY, trueZ + 1));

                        if (CheckForTransition(voxel, voxelPlusX, out normalDirection))
                        {
                            AddQuadX(voxel, x, y, z, normalDirection, ref colors, ref vertices, ref normals, ref uv, ref indices, ref colorPool, ref vector2Pool, ref vector3Pool, colorUtil);
                        }
                        if (CheckForTransition(voxel, voxelPlusY, out normalDirection))
                        {
                            AddQuadY(voxel, x, y, z, normalDirection, ref colors, ref vertices, ref normals, ref uv, ref indices, ref colorPool, ref vector2Pool, ref vector3Pool, colorUtil);
                        }
                        if (CheckForTransition(voxel, voxelPlusZ, out normalDirection))
                        {
                            AddQuadZ(voxel, x, y, z, normalDirection, ref colors, ref vertices, ref normals, ref uv, ref indices, ref colorPool, ref vector2Pool, ref vector3Pool, colorUtil);
                        }
                    }
                }
            }
        }

        private bool CheckForTransition(VoxelMaterial start, VoxelMaterial end, out int normalDirection)
        {
            bool containsStart = start.stateOfMatter == StateOfMatter.GAS;
            normalDirection = Convert.ToInt32(!containsStart);
            return containsStart != (end.stateOfMatter == StateOfMatter.GAS);
        }

        private void AddQuadX(VoxelMaterial voxel, int x, int y, int z, int normalDirection,
            ref List<Color> colors, ref List<Vector3> vertices, ref List<Vector3> normals,
            ref List<Vector2> uv, ref List<int> indices,
            ref Pool<Color> colorPool, ref Pool<Vector2> vector2Pool, ref Pool<Vector3> vector3Pool, ColorUtil colorUtil)
        {
            int vertexIndex = vertices.Count;

            Color color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.7f, 0.1f, 0.4f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.7f, 0.1f, 0.4f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.7f, 0.1f, 0.4f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.7f, 0.1f, 0.4f); colors.Add(color);

            Vector3 fish = vector3Pool.Catch(); fish.Set(x + 1, y, z); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x + 1, y + 1, z); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x + 1, y, z + 1); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x + 1, y + 1, z + 1); vertices.Add(fish);

            fish = vector3Pool.Catch(); fish.Set(normalDirection, 0, 0);
            normals.Add(fish); normals.Add(fish); normals.Add(fish); normals.Add(fish);

            Vector2 smallFish = vector2Pool.Catch(); smallFish.Set(0, 0); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(1, 0); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(0, 1); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(1, 1); uv.Add(smallFish);

            if (voxel.stateOfMatter == StateOfMatter.GAS)
            {
                indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 1); indices.Add(vertexIndex);
                indices.Add(vertexIndex + 1); indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 3);
            }
            else
            {
                indices.Add(vertexIndex); indices.Add(vertexIndex + 1); indices.Add(vertexIndex + 2);
                indices.Add(vertexIndex + 3); indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 1);
            }
        }

        private void AddQuadY(VoxelMaterial voxel, int x, int y, int z, int normalDirection,
            ref List<Color> colors, ref List<Vector3> vertices, ref List<Vector3> normals,
            ref List<Vector2> uv, ref List<int> indices,
            ref Pool<Color> colorPool, ref Pool<Vector2> vector2Pool, ref Pool<Vector3> vector3Pool, ColorUtil colorUtil)
        {
            int vertexIndex = vertices.Count;

            Color color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.1f, 0.1f, 0.1f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.1f, 0.1f, 0.1f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.1f, 0.1f, 0.1f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.1f, 0.1f, 0.1f); colors.Add(color);

            Vector3 fish = vector3Pool.Catch(); fish.Set(x, y + 1, z); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x + 1, y + 1, z); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x, y + 1, z + 1); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x + 1, y + 1, z + 1); vertices.Add(fish);

            fish = vector3Pool.Catch(); fish.Set(0, normalDirection, 0);
            normals.Add(fish); normals.Add(fish); normals.Add(fish); normals.Add(fish);

            Vector2 smallFish = vector2Pool.Catch(); smallFish.Set(0, 0); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(1, 0); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(0, 1); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(1, 1); uv.Add(smallFish);

            if (voxel.stateOfMatter == StateOfMatter.GAS)
            {
                indices.Add(vertexIndex + 2); indices.Add(vertexIndex); indices.Add(vertexIndex + 1);
                indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 1); indices.Add(vertexIndex + 3);
            }
            else
            {
                indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 1); indices.Add(vertexIndex);
                indices.Add(vertexIndex + 1); indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 3);
            }
        }

        private void AddQuadZ(VoxelMaterial voxel, int x, int y, int z, int normalDirection,
            ref List<Color> colors, ref List<Vector3> vertices, ref List<Vector3> normals,
            ref List<Vector2> uv, ref List<int> indices,
            ref Pool<Color> colorPool, ref Pool<Vector2> vector2Pool, ref Pool<Vector3> vector3Pool, ColorUtil colorUtil)
        {
            int vertexIndex = vertices.Count;

            Color color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.7f, 0.1f, 0.4f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.7f, 0.1f, 0.4f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.7f, 0.1f, 0.4f); colors.Add(color);
            color = colorPool.Catch(); colorUtil.Set(ref color, voxel.color, 0.7f, 0.1f, 0.4f); colors.Add(color);

            Vector3 fish = vector3Pool.Catch(); fish.Set(x, y, z + 1); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x + 1, y, z + 1); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x, y + 1, z + 1); vertices.Add(fish);
            fish = vector3Pool.Catch(); fish.Set(x + 1, y + 1, z + 1); vertices.Add(fish);

            fish = vector3Pool.Catch(); fish.Set(0, 0, normalDirection);
            normals.Add(fish); normals.Add(fish); normals.Add(fish); normals.Add(fish);

            Vector2 smallFish = vector2Pool.Catch(); smallFish.Set(0, 0); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(1, 0); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(0, 1); uv.Add(smallFish);
            smallFish = vector2Pool.Catch(); smallFish.Set(1, 1); uv.Add(smallFish);

            if (voxel.stateOfMatter == StateOfMatter.GAS)
            {
                indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 1); indices.Add(vertexIndex);
                indices.Add(vertexIndex + 1); indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 3);
            }
            else
            {
                indices.Add(vertexIndex + 2); indices.Add(vertexIndex); indices.Add(vertexIndex + 1);
                indices.Add(vertexIndex + 2); indices.Add(vertexIndex + 1); indices.Add(vertexIndex + 3);
            }
        }
    }
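    As an aside, the heightmap steps (goals 2 and 3 above) can be as small as the sketch below. Mathf.PerlinNoise is Unity's built-in 2D noise; the scale value and the "0 = air, 1 = solid" encoding are arbitrary choices for illustration.

        // Minimal sketch of goals 2-3: sample Perlin noise into a 16x16 heightmap and use it
        // to fill a 16x16x16 grid of solid/air voxels. Scale and encoding are assumptions.
        using UnityEngine;

        public static class HeightmapFillSketch
        {
            public static byte[,,] FillChunkFromHeightmap(int size = 16, float noiseScale = 0.1f)
            {
                var voxels = new byte[size, size, size];   // 0 = air, 1 = solid

                for (int x = 0; x < size; ++x)
                {
                    for (int z = 0; z < size; ++z)
                    {
                        // Unity's 2D Perlin noise returns a value in [0, 1].
                        float height01 = Mathf.PerlinNoise(x * noiseScale, z * noiseScale);
                        int columnHeight = Mathf.RoundToInt(height01 * (size - 1));

                        for (int y = 0; y <= columnHeight; ++y)
                            voxels[x, y, z] = 1;
                    }
                }
                return voxels;
            }
        }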
  6. Voxel LOD?

    Usually when performing LOD with voxels you sample the grid at a lower resolution: 1x1x1 becomes 2x2x2 becomes 4x4x4 (a minimal downsampling sketch follows below). With cubic geometry you'll run into issues preserving the integrity of the mesh (a low-res tree may not look like a tree), and seams will crop up between resolution transitions.

    The first thing I would do is make sure you're minimizing the number of triangles in your geometry to begin with. The greedy method is explained in this article: https://0fps.net/2012/06/30/meshing-in-a-minecraft-game/

    Then perhaps you could generate geometry in different-sized chunks at different distances, something like 64x64x64 chunks super far away and 16x16x16 chunks close up, in order to reduce draw calls.

    Honestly, I wouldn't bother with cubic LOD; the geometry should already be fairly low-res.
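    Here's the downsampling idea as a tiny sketch. The "any solid child makes the parent solid" rule is just one possible choice; a majority vote or a material histogram are equally reasonable.

        // Sketch of building one LOD level by sampling the grid at half resolution.
        // The merge rule (last non-empty child wins) is an arbitrary illustrative choice.
        public static class LodDownsampleSketch
        {
            public static byte[,,] Downsample(byte[,,] fine)
            {
                int size = fine.GetLength(0) / 2;
                var coarse = new byte[size, size, size];

                for (int x = 0; x < size; ++x)
                for (int y = 0; y < size; ++y)
                for (int z = 0; z < size; ++z)
                {
                    byte value = 0;
                    for (int dx = 0; dx < 2; ++dx)
                    for (int dy = 0; dy < 2; ++dy)
                    for (int dz = 0; dz < 2; ++dz)
                    {
                        byte child = fine[x * 2 + dx, y * 2 + dy, z * 2 + dz];
                        if (child != 0) value = child;   // any solid child makes the parent solid
                    }
                    coarse[x, y, z] = value;
                }
                return coarse;
            }
        }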
  7. Storing Signed Distance Fields

    Yeah, the goal is to use functions to carve and model a more complex function using the primitive shapes. I'm keeping them around in order to rebuild the DistanceField at multiple resolutions. The highest resolution I'm extracting at is around 900x900x900, and storing the field as a byte array takes up a rather large amount of memory. But I can't think of a real reason to keep them around at run time once the DistanceField has been built. Thanks! I'll look into nixing them. A material consists of a local position, normal, and roughness compacted into 4 bytes, and then another 4 bytes for color. I'm splatting them, one point per voxel.

    "Shouldn't a signed distance field store signed distances, so an "edit" should write signed distances into it? The only time you'd write 0 into it would be if a 3D cell happens to lie exactly on the surface being defined. What you're describing sounds more like painting voxel values? With actual SDF data, you then have the operators of: Union = min(F1, F2), Subtraction = max(-F1, F2), Intersection = max(F1, F2) (and more complex ones, like "smooth union" :))

    In general, yep, you've got to apply each "edit" one at a time, as a data-parallel, serial list of operations. However, you can group together some repeated sets of "edits" into sets that can be done in parallel. E.g. if you have a series of Union "edits", they could be implemented using an atomic-min function, allowing multiple concurrent edits. If they were followed by a Subtraction operator, you'd have to make sure that this union group was finished before starting the Subtraction edit, though. Likewise if you then had a contiguous group of Intersection edits, the whole contiguous group of intersections could be done in parallel by using an atomic-max function :)"

    I guess it really isn't a distance field haha, since I'm not storing, well... distances. The grids I'm using are fairly high resolution (900x900x900), so I was trying to cut down on memory consumption by storing only the points that lie close enough to the surface in a KD-tree.

    I just came up with an idea: maybe I could store the (x, y, z) coordinates of points that are close enough to the surface in the KD-tree, and when querying the distance from a point, perform a nearest-neighbor search on the tree, find the (x, y, z) coordinate close enough to be considered on the surface, and then subtract the point I'm querying from that? There would be a little computational overhead, but memory-wise that should be much more efficient than the grid I'm currently using.

    Thank you! Batching distance fields and using atomic instructions to pick the highest one sounds like a road worth walking down.

    If anyone else has thoughts I'd love to hear them.
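    For anyone skimming, the operators quoted above translate almost directly into code. Here's a tiny sketch with a sphere primitive thrown in for concreteness; nothing here is specific to my implementation.

        // Sketch of the SDF combine operators quoted above, plus a sphere primitive.
        using UnityEngine;

        public static class SdfOperatorSketch
        {
            // Distance from point p to a sphere's surface (negative inside, positive outside).
            public static float Sphere(Vector3 p, Vector3 center, float radius)
                => Vector3.Distance(p, center) - radius;

            public static float Union(float f1, float f2)        => Mathf.Min(f1, f2);
            public static float Subtraction(float f1, float f2)  => Mathf.Max(-f1, f2);
            public static float Intersection(float f1, float f2) => Mathf.Max(f1, f2);
        }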
  8. Storing Signed Distance Fields

    Hello everyone, I was wondering if anyone knows a good way of going about storing signed distance fields in a way that can be parsed well on the GPU.

    Here's a description of my current implementation:

    A SignedDistanceFunction consists of:
    - a transform stored as a mat4x4
    - a signed distance function (i.e. sphere, torus, etc.)

    A SignedDistanceField is a 3D byte array. The stored value indicates the material.

    SignedDistanceFunctions are applied like paint brushes to the SignedDistanceFields, in order, in parallel on the GPU. There are three different options:
    - Carve - sets the byte index to 0
    - Place - sets the byte index to a value
    - Paint - changes the value of non-zero indexes

    An Edit is one of the above three operations (a small sketch of these follows below).

    With this method each SignedDistanceFunction needs to be applied to the SignedDistanceField in an iterative fashion, since an index might be written to by one Edit but deleted by the next. For large numbers of edits this could become a performance issue.

    I thought that applying the functions backwards (most recent modification first) and marking the changed cells as not needing computation (a flag in a byte array) might be a decent solution, but the functions are still being applied iteratively instead of in parallel.

    Does anyone have any thoughts on this? If there is a completely different way of approaching distance fields I'd love to hear it!
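    To make the three operations concrete, here's a rough per-cell sketch. EditType and the "isInside" flag (which you'd get by transforming the cell's position through the inverse of the function's mat4x4 and evaluating the primitive) are hypothetical names matching the description above, not the actual code.

        // Sketch of the Carve / Place / Paint edits applied to one cell of the 3D byte field.
        // EditType, isInside, and material are illustrative stand-ins.
        public enum EditType { Carve, Place, Paint }

        public static class SdfEditSketch
        {
            public static void ApplyEditToCell(ref byte cell, EditType type, bool isInside, byte material)
            {
                if (!isInside) return;   // the cell is outside this function's shape

                switch (type)
                {
                    case EditType.Carve: cell = 0; break;                         // set the byte to 0
                    case EditType.Place: cell = material; break;                  // set the byte to a material
                    case EditType.Paint: if (cell != 0) cell = material; break;   // only recolor non-zero cells
                }
            }
        }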
  9. The article you posted compares Unity 4 with Unreal 4. Unity 5 introduced a number of features such as physically based rendering and many 2D engine components, not to mention a completely different pricing model.
  10. From what I gather you're trying to time your sprites in a way that the feet don't "slide" along the ground?

    You'll need to keep track of the time passed between frames (the delta). The delta can be calculated by subtracting the previous frame's system time from the current frame's system time.

    Each frame in the animation will cover a certain range: 0-25 ms, 26-50 ms, 51-60 ms, etc. You can either assume the time is uniform or assign different lengths to each frame, but keep track of the total time the animation takes to play out; in this case 60 ms (totalAnimationTime).

    Steps (a small sketch follows below):
    1. Each time the sprite is updated, pass in the frame delta. Keep track of the total delta time the sprite has been in the animation (deltaTotal).
    2. Find the local delta, i.e. how far you are into the animation loop: localDelta = deltaTotal % totalAnimationTime (modulo finds the remainder).
    3. If the local delta is 0-25 ms, render the first frame; 26-50 ms, the second; 51-60 ms, the third; etc.
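    Here are those steps as a minimal sketch, assuming uniform frame lengths; the class name and the idea of returning a frame index are just one way to package it.

        // Sketch of the steps above: accumulate delta time, wrap it with modulo, and map the
        // local time to a frame index. Assumes uniform frame lengths.
        public class SpriteAnimationSketch
        {
            private readonly float totalAnimationTime;   // e.g. 0.060f seconds for the 60 ms example
            private readonly int frameCount;
            private float deltaTotal;                    // total time spent in this animation

            public SpriteAnimationSketch(float totalAnimationTime, int frameCount)
            {
                this.totalAnimationTime = totalAnimationTime;
                this.frameCount = frameCount;
            }

            // Call once per rendered frame with the time elapsed since the previous frame.
            public int Update(float delta)
            {
                deltaTotal += delta;
                float localDelta = deltaTotal % totalAnimationTime;           // where we are in the loop
                return (int)(localDelta / totalAnimationTime * frameCount);   // 0 .. frameCount - 1
            }
        }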
  11. It doesn't look like Unity allows you to create a different renderer. Unity supports compute shaders, so you could implement one through those, but I'm betting a path tracer isn't something you need; they're still not well suited to real-time applications.
  12. Frustum Culling Question

    For the static objects, the easiest option is to build one of the trees Tangletail mentioned as the scene is being loaded; but if your scene is dynamic, then those trees will have to be rebuilt every frame. With dynamic scenes your best bet is to use the GPU, and NVIDIA has a great paper on the subject. The examples they give are in CUDA because, well... that's their thing. But the algorithms could be implemented in OpenCL or compute shaders all the same.

    http://devblogs.nvidia.com/parallelforall/thinking-parallel-part-iii-tree-construction-gpu/

    BVH trees are also a great tool.
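    Whatever tree you build, each node ends up being tested against the six frustum planes. Here's a rough sketch of that building block using the box's "positive vertex"; the plane convention (unit normal plus distance, negative side = outside) is an assumption, so adjust the sign to match your matrices.

        // Sketch of the per-node test a BVH / octree traversal relies on: test the node's AABB
        // against six frustum planes. Plane convention (n·p + d >= 0 means inside) is assumed.
        using UnityEngine;

        public static class FrustumCullSketch
        {
            public struct CullPlane { public Vector3 normal; public float distance; }

            public static bool IsBoxOutside(CullPlane[] frustum, Vector3 boxMin, Vector3 boxMax)
            {
                foreach (var plane in frustum)
                {
                    // Pick the box corner farthest along the plane normal (the "positive vertex").
                    Vector3 positive = new Vector3(
                        plane.normal.x >= 0f ? boxMax.x : boxMin.x,
                        plane.normal.y >= 0f ? boxMax.y : boxMin.y,
                        plane.normal.z >= 0f ? boxMax.z : boxMin.z);

                    // If even the farthest corner is behind this plane, the whole box is outside.
                    if (Vector3.Dot(plane.normal, positive) + plane.distance < 0f)
                        return true;
                }
                return false;   // intersecting or fully inside
            }
        }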
  13. Frustum Culling Question

    If you don't factor in the model's transformation, how would you know if it's in your frustum? Nothing you wrote sounds out of the ordinary; Model-View-Projection matrices have "MVP" as an acronym for a reason!
  14. C++ SFML and Box2D tutorials?

    The thing is, physics and graphics are independent of each other. In Box2D each entity in the world has properties such as translation and rotation; use these properties to decide where to render and how to orient your sprites. Whether you render using SFML, SDL, or your own library is arbitrary. Also, Box2D coordinates are not pixel coordinates; they're "meters", even though dimensions don't exist in the digital world (a small meters-to-pixels sketch follows below).

    Are you having any specific issues, such as entities not being spaced correctly?
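    The meters-to-pixels bookkeeping is really just one scale factor. A rough sketch: the 32 pixels-per-meter constant, the bodyX/bodyY inputs, and the top-left screen origin are all placeholders for whatever your Box2D binding and renderer actually use.

        // Sketch of turning a Box2D body's position (meters) into sprite coordinates (pixels).
        // PixelsPerMeter is arbitrary; the Y flip assumes a screen origin in the top-left corner.
        using UnityEngine;

        public static class Box2DScaleSketch
        {
            public const float PixelsPerMeter = 32f;

            public static Vector2 BodyToScreen(float bodyX, float bodyY, float screenHeight)
            {
                float pixelX = bodyX * PixelsPerMeter;
                float pixelY = screenHeight - bodyY * PixelsPerMeter;   // Box2D's Y axis points up, screen Y points down
                return new Vector2(pixelX, pixelY);
            }
        }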
  15. FBO only renders to the first target

    Oh, I just assumed, since incrementing from ATTACHMENT0 was the root of my error, that they couldn't be. Must have been another small bug that accidentally got tossed out along with that code.