george7378

Members
  • Content count: 302
Community Reputation: 1441 Excellent

About george7378

  • Rank: Member

Personal Information

  • Interests
    Art
    Design
    Programming
  1. 3D Neighbour nodes on a cube

    Thanks again for the inputs! I decided to add properties to each root node which describe the rotations that need to be applied in order to extract the correct neighbour node in each direction. These are passed down to any child nodes which also touch the root border, and the end result is something that requires minimal 'if' statements and does most of the work itself, provided you set the root nodes up properly.
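    A minimal sketch of the idea (the type and field names here are hypothetical, not the actual code from the project):

        // Each cube face (root node) stores, per edge, which face lies across
        // it and how many quarter-turns the direction frame rotates when
        // crossing. Child nodes touching a root border inherit the link.
        enum Direction { Up, Right, Down, Left }

        class RootLink
        {
            public int NeighbourFace;  // index of the face across this edge
            public int RotationSteps;  // quarter-turns applied when crossing

            // Translate a direction expressed in this face's frame into the
            // neighbouring face's frame.
            public Direction Rotate(Direction d)
            {
                return (Direction)(((int)d + RotationSteps) & 3);
            }
        }

        class CubeFace
        {
            // One link per edge, indexed by Direction; set up once per face
            // when the six root nodes are created.
            public RootLink[] Links = new RootLink[4];
        }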
  2. Hi again, another quadtree question - this time about neighbour finding in a set of 6 planar quadtrees forming a cube. At the moment, I've got a single quadtree in the XZ plane, so finding neighbouring nodes of the same level is easy - e.g. if a top-left node wants to know its left-hand neighbour, all it has to do is get a reference to the top-right child of its parent's left-hand neighbour. This still works for nodes contained on the same side of the cube, but it fails if I want to know a neighbour across the boundary between two different quadtrees in the cube. It doesn't work because the definition of up, left, down or right depends on which border you're querying across, so asking for your left-hand neighbour using the above logic will only return the correct cell for some of the faces. Can anyone think of a way to get this simple system to work for a cube, or will it have to be more complicated? I can't think of a way to make it work without having horrible edge cases resulting in a very inflexible system. Perhaps I will just have to ignore neighbours across the boundaries and hence have small gaps in the terrain there. Thanks for the help
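    For reference, a minimal sketch of the same-level lookup described above for the planar case (assuming a hypothetical QuadTreeNode with parent/child links; the child index layout is an assumption):

        // Child indices: 0 = bottom-left, 1 = top-left,
        //                2 = bottom-right, 3 = top-right.
        class QuadTreeNode
        {
            public QuadTreeNode Parent;
            public QuadTreeNode[] Children = new QuadTreeNode[4];
            public int ChildIndex; // which child of Parent this node is

            // Same-level neighbour to the left, or null at the quadtree edge.
            public QuadTreeNode LeftNeighbour()
            {
                if (Parent == null) return null;

                // A right-hand child's left neighbour is its sibling.
                if (ChildIndex == 2) return Parent.Children[0];
                if (ChildIndex == 3) return Parent.Children[1];

                // Otherwise find the parent's left neighbour and take its
                // right-hand child at the matching vertical position (this
                // may be null if the neighbour isn't subdivided that far).
                QuadTreeNode parentLeft = Parent.LeftNeighbour();
                if (parentLeft == null) return null;
                return ChildIndex == 0 ? parentLeft.Children[2] : parentLeft.Children[3];
            }
        }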
  3. Thanks for the reply - yes, I thought about what would happen if I used the same arbitrary vectors for every position, which is why I choose between the X and Y directions depending on the Y component of the position. I was thinking this would ensure that the tangents always form an orthonormal basis around the point on the sphere. It will result in different tangents for different positions, but I was thinking this wouldn't matter as long as I was taking samples across two perpendicular directions in the tangent plane.
  4. A quick addition - I tried creating my own adaptation:

        public Vector3 GetNormalFromFiniteOffset(Vector3 location, float sampleOffset)
        {
            Vector3 normalisedLocation = Vector3.Normalize(location);

            // Pick a helper axis that is never parallel to the radial direction
            Vector3 arbitraryUnitVector = Math.Abs(normalisedLocation.Y) > 0.999f ? Vector3.UnitX : Vector3.UnitY;

            // Build an orthonormal tangent basis around the point
            Vector3 tangentVector1 = Vector3.Cross(arbitraryUnitVector, normalisedLocation);
            tangentVector1.Normalize();
            Vector3 tangentVector2 = Vector3.Cross(tangentVector1, normalisedLocation);
            tangentVector2.Normalize();

            // Sample the height field along both tangent directions
            float hL = GetHeight(location - tangentVector1*sampleOffset);
            float hR = GetHeight(location + tangentVector1*sampleOffset);
            float hD = GetHeight(location - tangentVector2*sampleOffset);
            float hU = GetHeight(location + tangentVector2*sampleOffset);

            Vector3 normal = 2*normalisedLocation + (hL - hR)*tangentVector1 + (hD - hU)*tangentVector2;
            normal.Normalize();

            return normal;
        }

    I can't test it yet, but I wonder if anyone thinks this looks like a decent approach, or whether there are obvious issues with how I'm doing this? Thanks again!
  5. Hi everyone, I'm currently adapting a planar quadtree terrain system to handle spherical terrain (i.e. I want to render planets as opposed to just an endless terrain stretching in the X-Z plane). At the moment, I use this algorithm to calculate the normal for a given point on the terrain from the height information (the heights are generated using simplex noise):

        public Vector3 GetNormalFromFiniteOffset(float x, float z, float sampleOffset)
        {
            float hL = GetHeight(x - sampleOffset, z);
            float hR = GetHeight(x + sampleOffset, z);
            float hD = GetHeight(x, z - sampleOffset);
            float hU = GetHeight(x, z + sampleOffset);

            Vector3 normal = new Vector3(hL - hR, 2, hD - hU);
            normal.Normalize();

            return normal;
        }

    The above works fine for my planar quadtree, but of course it won't work for spherical terrain because it assumes the Y direction is always up. I guess I need to move the calculation into the plane which lies at a tangent to the sphere at the point I'm evaluating. I was wondering if anyone knows of a good or proven way to transform this calculation for any point on a sphere, or if there's a better way to calculate the normal for a point on a sphere which has been radially displaced using a noise-based height field? I'm using XNA by the way. Thanks very much for looking!
  6. 3D Adapting planar quadtree for spheres

    Thanks for the reply - makes sense! For the second point, I guess that means a given LOD in the quadtree will have a slightly differing vertex density across the sphere due to the deformation when the tiles are projected onto the sphere? I suppose it's a minor problem though.
  7. Adapting planar quadtree for sphere

    Hi everyone, I'm quite used to working with quadtrees for flat, 'infinite' terrain where I just have a coherent noise function that returns a height for a given (x, z) position, and quadtree cells are split based on camera distance to their centre (where the centre takes the height into account as well). I'd like to expand this to work for spherical planets using the common method of having 6 quadtrees which form a cube, then normalising the vertex positions to make a sphere. I have a few questions though which none of the tutorials for this seem to answer:
    - What coordinates should I use to sample my noise function? The raw (x, y, z) coordinates of the vertex positions on the cube, or the same coordinates after I've 'spherified' them?
    - What distance should I use to decide whether quadtree nodes should split/merge? The camera distance to the node in its cube position, or should I work out where the node's centre is on the sphere and then use the distance to that?
    Thanks for the help
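    A minimal sketch of the cube-to-sphere mapping in question, using XNA types (the noise function here is a hypothetical placeholder):

        using Microsoft.Xna.Framework;

        static class CubeSphere
        {
            // Project a point on the cube onto the unit sphere by normalising,
            // then displace it radially by a noise-based height. Sampling the
            // noise with the spherified position keeps the height field
            // continuous across face boundaries.
            public static Vector3 SpherifyAndDisplace(Vector3 cubePosition, float radius)
            {
                Vector3 spherePosition = Vector3.Normalize(cubePosition);
                float height = SampleNoise(spherePosition); // hypothetical noise function
                return spherePosition * (radius + height);
            }

            static float SampleNoise(Vector3 p)
            {
                return 0.0f; // placeholder for e.g. simplex noise of p
            }
        }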
  8. Hi everyone, I have a water (pixel) shader which works like this:
    - Sample a normal map twice using two sets of moving coordinates; average these to get the normal.
    - Sample reflection and refraction maps using screen-space coordinates which are perturbed slightly according to the normal.
    - Add a 'murky' colour to the refraction colour based on depth.
    - Interpolate between the reflection and refraction maps based on the Fresnel term (i.e. how shallow the viewing angle is).
    - Add specular highlights using the sampled normal.
    http://imgur.com/ALYmL53.jpg
    http://imgur.com/2cGc8kg.jpg
    I'm liking the results, but I feel like there should be a diffuse lighting element too, so that waves created by the normal map can be seen when you aren't looking in the direction of the Sun. This is simple enough on a solid object (I'm already doing it on the terrain in the above pictures), but I'm not sure of the most accurate way to do it for water. Should I apply a diffuse factor to both reflections and refractions? Should I do it the same way as I would for solid objects? Anyone with experience creating their own water shader, some input would be very helpful :) Thanks.
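    One possible approach, sketched CPU-side in C# with XNA types for clarity (this is an assumption about the technique, not code from the shader above): apply the diffuse term only to the refraction/murky component, since the reflected environment is sampled from an already-lit scene.

        using Microsoft.Xna.Framework;

        static class WaterShading
        {
            // Combine refraction and reflection as described above, with a
            // diffuse factor applied only to the water body. N is the sampled
            // water normal, L the direction towards the Sun (both unit length).
            public static Vector3 ShadeWater(Vector3 refraction, Vector3 reflection,
                                             Vector3 N, Vector3 L, float fresnel)
            {
                float diffuse = MathHelper.Clamp(Vector3.Dot(N, L), 0f, 1f);

                // Ambient floor so the water never goes completely black.
                float lighting = 0.3f + 0.7f*diffuse;
                Vector3 litRefraction = refraction*lighting;

                // Reflection is left unlit: it samples an already-lit scene.
                return Vector3.Lerp(litRefraction, reflection, fresnel);
            }
        }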
  9. What are you working on?

    I recently picked up a failed attempt from last year to create an 'infinite' procedural terrain. Really pleased with how it's going this time round: https://www.youtube.com/watch?v=K8gIssDpm04
  10. Depth in water shader

    I think the issue is that the depth doesn't have enough resolution when stored in the alpha channel. The effect seems to work when the camera is right next to the water plane, but since my far clip plane is pretty distant, it breaks down when the camera is further away from the water. Not sure of the best way to continue with this - perhaps I should calculate the 'fog factor' in the terrain shader based on actual world positions, but then I'd have to add a ray-plane intersection to calculate the distance to the water. EDIT: I was already clipping against the water plane when drawing the refraction map, so I just decided to use the distance below the clip plane and pass that through in the alpha channel (after dividing by a scale factor to get it into a reasonable range). Produces a decent result.
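    A sketch of the encoding described in the edit, written in C# for illustration (depthScale is a hypothetical tuning constant, not a value from the project):

        using Microsoft.Xna.Framework;

        static class WaterDepthEncoding
        {
            // When rendering the refraction map, encode each terrain pixel's
            // depth below the water plane into the (otherwise unused) alpha
            // channel, scaled into a usable 0..1 range.
            public static float EncodeDepthBelowWater(float waterPlaneHeight, float terrainHeight, float depthScale)
            {
                return MathHelper.Clamp((waterPlaneHeight - terrainHeight)/depthScale, 0f, 1f);
            }

            // In the water shader, the murky colour can then be blended in
            // directly using the value decoded from the alpha channel.
            public static Vector3 ApplyMurk(Vector3 refractionColour, Vector3 murkyColour, float encodedDepth)
            {
                return Vector3.Lerp(refractionColour, murkyColour, encodedDepth);
            }
        }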
  11. Depth in water shader

    Thanks for the reply. I think for XNA the projection gives a z-coordinate from 0 to 1, with 0 being the near plane (?): https://msdn.microsoft.com/en-us/library/bb195672.aspx
  12. Hi everyone, I have a planar water shader which I've been using for quite a while. It follows the standard format of interpolating between refraction/reflection maps to create the overall colour, and using normal maps to perturb the texture coordinates and create specular reflection. It produces decent results, but I'd like to add some depth-based effects, such as weighting towards a 'murky' colour for pixels in the refraction map which have a larger volume of water between them and the camera. Something a bit like this: https://i.ytimg.com/vi/UkskiSza4p0/maxresdefault.jpg
    I tried the following approach to determine the water depth for each pixel of the refraction map:
    - As I'm rendering terrain under the water, the alpha channel is not going to be used (i.e. I won't have translucent terrain). So, in the terrain pixel shader, I used the alpha channel to return the screen-space depth, i.e.:

        // ...pixel shader code to calculate the terrain colour here...
        return float4(finalTerrainColour.rgb, psInput.ScreenPos.z/psInput.ScreenPos.w);

    - Then in the water shader, when I sample my refraction map containing the terrain rendered below the water, I find the difference between the alpha channel (where the terrain depth is stored) and the depth of the current pixel on the water plane, i.e.:

        // ...pixel shader code to sample refraction map colour...
        refractionMapColour = lerp(refractionMapColour, murkyWaterColour, saturate(depthDropoffScale*(refractionMapColour.a - psInput.ScreenPos.z/psInput.ScreenPos.w)));

    This didn't seem to work and produced some strange results - for example, when the camera was just above the water plane, it would show the 'murky' colour only, even though the terrain was quite a way below the water (and hence refractionMapColour.a should have been much larger than the depth of the pixel on the water plane). So, can anyone spot an obvious problem with the way I tried above, and are there any better ways of going about such depth-based effects? Thanks for the help :)
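    A likely culprit, consistent with the resolution issue mentioned in the 'Depth in water shader' posts above: if the refraction map is an 8-bit-per-channel render target, the alpha channel quantises z/w to 256 levels, and with a distant far plane most of the scene lands in the top few levels. A small C# illustration, assuming hypothetical near/far plane values:

        using System;

        static class DepthQuantisation
        {
            static void Main()
            {
                // Hypothetical near/far planes; a distant far plane pushes
                // most projected z/w values very close to 1.
                float near = 1f, far = 10000f;

                foreach (float viewZ in new[] { 50f, 100f, 500f, 1000f })
                {
                    // Projected depth for a standard D3D-style perspective projection.
                    float zOverW = (far/(far - near))*(1f - near/viewZ);

                    // Stored in an 8-bit alpha channel, this quantises to 256
                    // levels, so distant depths become indistinguishable.
                    byte stored = (byte)(zOverW*255f);
                    Console.WriteLine($"viewZ={viewZ}: z/w={zOverW:F6}, alpha byte={stored}");
                }
            }
        }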
  13. Hi everyone, I'm pretty happy with my system for making an 'infinite' terrain using quadtree level of detail. At the moment though, the terrain is bare with no vegetation. I'd like to place trees, etc... on my landscape in areas where the slope is sufficiently small but I'm having trouble coming up with the best way to do it. Here's my best idea right now: When a new quadtree node is created, if the slope at the central point is below a certain value, this node is eligible for vegetation. Choose a set number of random locations within the node and add trees. This means that a maximum number of trees exists per node and that larger, less detailed nodes will have a lower density of trees. However, there are problems, for example in large nodes, the centre point may not represent the terrain that the node encompasses. Also, it would be nice to have a system that will place the trees in the same 'random' locations every time a node in a particular location with a particular level of detail is created. Perhaps you can share your ideas for how to automatically place trees? Thanks in advance :)
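    A minimal sketch of the deterministic placement idea (the field and parameter names here are hypothetical): hash the node's grid position and LOD level into a seed, so recreating the same node always yields the same 'random' tree positions. Checking the slope at each candidate position, rather than only at the node centre, would also address the problem with large nodes.

        using System;
        using System.Collections.Generic;

        static class TreePlacement
        {
            // Derive a repeatable seed from the node's position and depth, so
            // the same node always produces the same tree locations.
            public static List<(float X, float Z)> PlaceTrees(int nodeX, int nodeZ, int lodLevel,
                                                              float originX, float originZ,
                                                              float sideLength, int treeCount)
            {
                var rng = new Random(HashCombine(nodeX, nodeZ, lodLevel));

                var positions = new List<(float X, float Z)>();
                for (int i = 0; i < treeCount; i++)
                {
                    positions.Add((originX + (float)rng.NextDouble()*sideLength,
                                   originZ + (float)rng.NextDouble()*sideLength));
                }
                return positions;
            }

            static int HashCombine(int a, int b, int c)
            {
                unchecked
                {
                    int h = 17;
                    h = h*31 + a;
                    h = h*31 + b;
                    h = h*31 + c;
                    return h;
                }
            }
        }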
  14. Hi everyone, I'm looking at the following article for creating terrain with a binary triangle tree: https://www.gamedev.net/resources/_/technical/graphics-programming-and-theory/binary-triangle-trees-and-terrain-tessellation-r806# I understand the 'split' code in there, but I was wondering if anyone could provide some similar pseudo code for the 'merge' operation that should happen when a triangle no longer needs its two children? Thanks very much!
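    A sketch of how the merge might mirror the article's split (the node type and field names are assumptions, not pseudo code from the article). The key constraints are that a triangle can only merge once its children are leaves, and it must merge together with its base neighbour so no cracks appear along the shared edge:

        // Hypothetical binary triangle tree node.
        class BinTriNode
        {
            public BinTriNode LeftChild, RightChild;
            public BinTriNode BaseNeighbour;

            public bool IsLeaf { get { return LeftChild == null; } }

            // A triangle may only merge if its children are leaves, and its
            // base neighbour's children (if any) are leaves too - otherwise
            // merging would create a crack along the shared edge.
            public bool CanMerge()
            {
                if (IsLeaf || !LeftChild.IsLeaf || !RightChild.IsLeaf) return false;
                if (BaseNeighbour != null && !BaseNeighbour.IsLeaf &&
                    (!BaseNeighbour.LeftChild.IsLeaf || !BaseNeighbour.RightChild.IsLeaf))
                    return false;
                return true;
            }

            // Merge this triangle and its base neighbour as a pair (the
            // reverse of the forced split described in the article).
            public void Merge()
            {
                if (!CanMerge()) return;
                LeftChild = RightChild = null;
                if (BaseNeighbour != null)
                {
                    BaseNeighbour.LeftChild = null;
                    BaseNeighbour.RightChild = null;
                }
            }
        }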
  15. Hi everyone, I really like the idea of creating a simulation of a (virtually) unlimited terrain using something like a Perlin noise algorithm to determine the height of any (x, z) location and applying the height to a grid of vertices. Doing this requires some sort of dynamic LOD system to simplify the distant terrain while making nearby sections appear more detailed depending on camera distance. I have a rough working version right now which is built something like this:
    - The terrain consists of a quadtree where each node represents a single tile. A tile contains four vertices and may contain four immediate child tiles of the same type. This allows the quadtree to be stored by creating a set of 'base' tiles and adding/removing children recursively as required. My terrain tiles are defined like this:

        struct TerrainCell
        {
            float sideLength;
            D3DXVECTOR3 centre;
            TerrainVertex vertices[4]; //bottom-left, top-left, bottom-right, top-right
            vector<TerrainCell> children;

            TerrainCell(D3DXVECTOR2 in_centre, float in_sideLength)
            {
                //Logic to determine locations of vertices using the centre and sideLength.
            }
        };

    - Each frame, I traverse the ENTIRE quadtree starting at the base cells. For each cell I add or remove child cells depending on the camera distance. If a cell needs to become more detailed, I add the four children if they do not already exist, then process the children recursively in the same way. Eventually, I reach a point where a given cell decides it should NOT split. At this point, I add a reference to that cell to a 'render queue' so that I know to draw it on this frame. Note: the logic used to determine whether cells should split is simple, and it is currently governed by this function:

        bool ShouldCellSplit(TerrainCell *cell)
        {
            return D3DXVec3Length(&(cell->centre - mainCam.pos)) < 10*cell->sideLength;
        }

    - Once this frame's render queue is built, I loop through all the cells in there and add them sequentially to a vertex buffer. The size of the buffer is hard-coded - at the moment I have made it large enough to contain 1000 cells (each with 4 vertices), i.e. vertBufferSize below is 1000. It is created using the following parameters when the program starts:

        d3ddev->CreateVertexBuffer(vertBufferSize*4*sizeof(TerrainVertex), D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY, 0, D3DPOOL_DEFAULT, &vertexBuffer, 0);
    Here is how I then add and render the cells:

        unsigned numTilesProcessed = 0;
        for (unsigned rqc = 0; rqc < renderQueue.size(); rqc++) //Loop through all the cells in the render queue
        {
            void* pVoid;
            if (numTilesProcessed >= vertBufferSize) //The vertex buffer is full - need to render the contents and empty it
            {
                if (FAILED(d3ddev->SetVertexDeclaration(vertexDecleration))){return false;} //A declaration matching the content of a TerrainVertex
                if (FAILED(d3ddev->SetStreamSource(0, vertexBuffer, 0, sizeof(TerrainVertex)))){return false;}

                for (unsigned prt = 0; prt < vertBufferSize; prt++) //Draw the tiles one by one
                {
                    if (FAILED(d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, prt*4, 2))){return false;}
                }

                if (FAILED(vertexBuffer->Lock(0, 0, (void**)&pVoid, D3DLOCK_DISCARD))){return false;}
                numTilesProcessed = 0;
            }
            else //This will occur if we have not reached the buffer's size limit yet
            {
                if (FAILED(vertexBuffer->Lock(numTilesProcessed*4*sizeof(TerrainVertex), 4*sizeof(TerrainVertex), (void**)&pVoid, D3DLOCK_NOOVERWRITE))){return false;}
            }

            memcpy(pVoid, renderQueue[rqc]->vertices, 4*sizeof(TerrainVertex));
            if (FAILED(vertexBuffer->Unlock())){return false;}

            numTilesProcessed += 1;
        }

    (There is also some more very similar code afterwards to handle the final batch of tiles, which probably won't fill the vertex buffer up to its capacity.)
    ...and that's pretty much it! The thing is, I came up with this algorithm having had little experience of terrain rendering or quadtrees before, so I suspect that people more experienced in this area will see ways to improve it or optimise certain parts. Perhaps it's ugly and could be entirely re-written in a much cleverer way! So what do you think? Thanks for taking a look - any comments would be much appreciated :)