george7378

  1. A quick addition - I tried creating my own adaptation:

public Vector3 GetNormalFromFiniteOffset(Vector3 location, float sampleOffset)
{
    Vector3 normalisedLocation = Vector3.Normalize(location);
    Vector3 arbitraryUnitVector = Math.Abs(normalisedLocation.Y) > 0.999f ? Vector3.UnitX : Vector3.UnitY;

    Vector3 tangentVector1 = Vector3.Cross(arbitraryUnitVector, normalisedLocation);
    tangentVector1.Normalize();
    Vector3 tangentVector2 = Vector3.Cross(tangentVector1, normalisedLocation);
    tangentVector2.Normalize();

    float hL = GetHeight(location - tangentVector1*sampleOffset);
    float hR = GetHeight(location + tangentVector1*sampleOffset);
    float hD = GetHeight(location - tangentVector2*sampleOffset);
    float hU = GetHeight(location + tangentVector2*sampleOffset);

    Vector3 normal = 2*normalisedLocation + (hL - hR)*tangentVector1 + (hD - hU)*tangentVector2;
    normal.Normalize();

    return normal;
}

I can't test it yet, but I wonder if anyone thinks this looks like a decent approach, or are there obvious issues with how I'm doing this? (A sketch of the GetHeight overload this code assumes follows below.) Thanks again!
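For concreteness, a minimal sketch of that assumed GetHeight - 3D simplex noise sampled along the sphere direction. SimplexNoise3D, NoiseScale and HeightScale are illustrative placeholders, not the real implementation:

using System;
using Microsoft.Xna.Framework;

static class HeightField
{
    const float NoiseScale = 4f;     //illustrative constant
    const float HeightScale = 0.05f; //illustrative constant

    //Placeholder for a real 3D simplex noise implementation returning roughly [-1, 1].
    static float SimplexNoise3D(float x, float y, float z) { return 0f; }

    //The GetHeight overload the adaptation above assumes: the noise is sampled
    //along the normalised direction, so the height depends only on where the
    //point sits on the sphere, not on the sample position's distance from the centre.
    public static float GetHeight(Vector3 location)
    {
        Vector3 p = Vector3.Normalize(location)*NoiseScale;
        return HeightScale*SimplexNoise3D(p.X, p.Y, p.Z);
    }
}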
  2. Hi everyone, I'm currently adapting a planar quadtree terrain system to handle spherical terrain (i.e. I want to render planets as opposed to just an endless terrain stretching in the X-Z plane). At the moment, I use this algorithm to calculate the normal for a given point on the terrain from the height information (the heights are generated using simplex noise):

public Vector3 GetNormalFromFiniteOffset(float x, float z, float sampleOffset)
{
    float hL = GetHeight(x - sampleOffset, z);
    float hR = GetHeight(x + sampleOffset, z);
    float hD = GetHeight(x, z - sampleOffset);
    float hU = GetHeight(x, z + sampleOffset);

    Vector3 normal = new Vector3(hL - hR, 2, hD - hU);
    normal.Normalize();

    return normal;
}

The above works fine for my planar quadtree, but of course it won't work for spherical terrain because it assumes the Y direction is always up. I guess I need to move the calculation into the plane which lies at a tangent to the sphere at the point I'm evaluating. I was wondering if anyone knows of a good or proven way to transform this calculation for any point on a sphere, or if there's a better way to calculate the normal for a point on a sphere which has been radially displaced using a noise-based height field? I'm using XNA, by the way. Thanks very much for looking!
  3. Adapting planar quadtree for spheres

    Thanks for the reply - makes sense! For the second point, I guess that means a given LOD in the quadtree will have a slightly differing vertex density across the sphere due to the deformation when the tiles are projected onto the sphere? I suppose it's a minor problem though.
  4. Adapting planar quadtree for sphere

Hi everyone, I'm quite used to working with quadtrees for flat, 'infinite' terrain where I just have a coherent noise function that returns a height for a given x, z position, and quadtree cells are split based on camera distance to their centre (where the centre takes the height into account as well). I'd like to expand this to work for spherical planets using the common method of having 6 quadtrees which form a cube, then normalising the vertex positions to make a sphere. I have a few questions though which none of the tutorials for this seem to answer (a rough sketch of the mapping in question follows this post):

- What coordinates should I use to sample my noise function? The raw (x, y, z) coordinates of the vertex positions on the cube, or the same coordinates after I've 'spherified' them?
- What distance should I use to decide if quadtree nodes should split/merge? The camera's distance to the node in its cube position, or should I work out where the node's centre is on the sphere and use the distance to that?

Thanks for the help
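For concreteness, a hedged sketch of the mapping and split test in question, using XNA types. One common choice is to sample the noise with the spherified direction and to measure split distance against the on-sphere centre; GetHeight, the radius parameter and the factor 10 are all illustrative:

using Microsoft.Xna.Framework;

static class SphereMapping
{
    //Illustrative stand-in for the noise-based height function (not the real one).
    static float GetHeight(Vector3 sphereDirection) { return 0f; }

    //Project a point on the cube onto the sphere, then displace it radially by
    //the sampled height. Sampling the noise with the spherified direction keeps
    //the feature size consistent across each cube face.
    public static Vector3 Spherify(Vector3 cubePosition, float radius)
    {
        Vector3 dir = Vector3.Normalize(cubePosition);
        return dir*(radius + GetHeight(dir));
    }

    //Split test against the node centre's on-sphere position, so the LOD metric
    //matches what the camera actually sees after projection.
    public static bool ShouldNodeSplit(Vector3 nodeCentreOnCube, float sideLength,
                                       Vector3 cameraPos, float radius)
    {
        return Vector3.Distance(Spherify(nodeCentreOnCube, radius), cameraPos)
               < 10f*sideLength; //the factor 10 is illustrative
    }
}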
  5. Hi everyone, I have a water (pixel) shader which works like this:

- Sample a normal map twice using two sets of moving coordinates. Average these to get the normal.
- Sample reflection and refraction maps using screen-space coordinates which are perturbed slightly according to the normal.
- Add a 'murky' colour to the refraction colour based on depth.
- Interpolate between the reflection and refraction maps based on the Fresnel term (i.e. how shallow the viewing angle is).
- Add specular highlights using the sampled normal.

http://imgur.com/ALYmL53.jpg
http://imgur.com/2cGc8kg.jpg

I'm liking the results, but I feel like there should be a diffuse lighting element too, so that waves created by the normal map can be seen when you aren't looking in the direction of the Sun. This is simple enough on a solid object (I'm already doing it on the terrain in the above pictures), but I'm not sure of the most accurate way to do it for water. Should I apply a diffuse factor to both reflections and refractions? Should I do it the same way as I would for solid objects? Anyone with experience creating their own water shader, some input would be very helpful :) Thanks. (A sketch of the kind of diffuse term I mean follows below.)
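A minimal sketch of the kind of diffuse term in question, written with XNA vector types rather than HLSL. Applying the N·L factor to the refraction side only is just one option (on the reasoning that the reflection already carries the sky's own lighting), and every name here is illustrative:

using Microsoft.Xna.Framework;

static class WaterShading
{
    //Combine refraction and reflection as before, but darken the refraction
    //(water body) side by a standard N.L diffuse factor with an ambient floor.
    public static Vector3 Shade(Vector3 normal, Vector3 toSun,
        Vector3 refractionColour, Vector3 reflectionColour,
        float fresnel, float ambient)
    {
        float diffuse = MathHelper.Clamp(Vector3.Dot(normal, toSun), 0f, 1f);
        Vector3 litRefraction = refractionColour*(ambient + (1f - ambient)*diffuse);
        return Vector3.Lerp(litRefraction, reflectionColour, fresnel);
    }
}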
  6. What are you working on?

    I recently picked up a failed attempt from last year to create an 'infinite' procedural terrain. Really pleased with how it's going this time round: https://www.youtube.com/watch?v=K8gIssDpm04
  7. Depth in water shader

    I think the issue is that the depth doesn't have enough resolution when stored in the alpha channel. The effect seems to work when the camera is right next to the water plane, but since my far clip plane is pretty distant, it causes problems when the camera is further from the water. I'm not sure of the best way to continue with this - perhaps I should calculate the 'fog factor' in the terrain shader based on actual world positions, but then I'd have to add a ray-plane intersection to calculate the distance to the water. EDIT: I was already clipping when drawing the refraction map, so I just decided to use the distance below the clip plane and pass that through in the alpha channel (after dividing by a scale factor to get it into a reasonable range). This produces a decent result.
  8. Depth in water shader

    Thanks for the reply. I think for XNA the projection gives a z-coordinate from 0 to 1, with 0 being the near plane (?): https://msdn.microsoft.com/en-us/library/bb195672.aspx
  9. Hi everyone, I have a planar water shader which I've been using for quite a while. It follows the standard format of interpolating between refraction/reflection maps to create the overall colour, and using normal maps to perturb the texture coordinates and create specular reflection. It produces decent results, but I'd like to add some depth-based effects, such as weighting towards a 'murky' colour for pixels in the refraction map which have a larger volume of water between them and the camera. Something a bit like this: https://i.ytimg.com/vi/UkskiSza4p0/maxresdefault.jpg

I tried the following approach to determine the water depth for each pixel of the refraction map:

- As I'm rendering terrain under the water, the alpha channel is not going to be used (i.e. I won't have translucent terrain). So, in the terrain pixel shader, I used the alpha channel to return the screen-space depth, i.e:

// ...pixel shader code to calculate the terrain colour here...
return float4(finalTerrainColour.rgb, psInput.ScreenPos.z/psInput.ScreenPos.w);

- Then in the water shader, when I sample my refraction map containing the terrain rendered below the water, I find the difference between the alpha channel (where the terrain depth is stored) and the depth of the current pixel on the water plane, i.e:

// ...pixel shader code to sample refraction map colour...
refractionMapColour = lerp(refractionMapColour, murkyWaterColour, saturate(depthDropoffScale*(refractionMapColour.a - psInput.ScreenPos.z/psInput.ScreenPos.w)));

This didn't seem to work and produced some strange results; for example, when the camera was just above the water plane, it would show the 'murky' colour only, even though the terrain was quite a way below the water (and hence refractionMapColour.a should have been much larger than the depth of the pixel on the water plane). So, can anyone spot an obvious problem with the way I tried above, and are there any better ways of going about such depth-based effects? Thanks for the help :)
  10. Hi everyone, I'm pretty happy with my system for making an 'infinite' terrain using quadtree level of detail. At the moment though, the terrain is bare with no vegetation. I'd like to place trees, etc. on my landscape in areas where the slope is sufficiently small, but I'm having trouble coming up with the best way to do it. Here's my best idea right now: when a new quadtree node is created, if the slope at the central point is below a certain value, the node is eligible for vegetation; choose a set number of random locations within the node and add trees. This means that a maximum number of trees exists per node and that larger, less detailed nodes will have a lower density of trees. However, there are problems - for example, in large nodes the centre point may not represent the terrain that the node encompasses. Also, it would be nice to have a system that will place the trees in the same 'random' locations every time a node in a particular location with a particular level of detail is created (a sketch of that seeding idea follows this post). Perhaps you can share your ideas for how to automatically place trees? Thanks in advance :)
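A minimal sketch of that seeding idea, assuming integer node grid coordinates and a slope lookup - all names here are illustrative:

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

static class TreePlacement
{
    //Derive a stable seed from the node's integer grid coordinates and LOD level,
    //so re-creating the same node always reproduces the same tree locations.
    static int NodeSeed(int nodeX, int nodeZ, int lodLevel)
    {
        unchecked
        {
            int h = 17;
            h = h*31 + nodeX;
            h = h*31 + nodeZ;
            h = h*31 + lodLevel;
            return h;
        }
    }

    //Scatter up to maxTrees candidate positions inside the node, checking the
    //slope at each candidate rather than only at the node's centre.
    public static List<Vector2> PlaceTrees(
        int nodeX, int nodeZ, int lodLevel,
        float nodeMinX, float nodeMinZ, float sideLength, int maxTrees,
        Func<float, float, float> getSlope, float maxSlope)
    {
        var random = new Random(NodeSeed(nodeX, nodeZ, lodLevel));
        var positions = new List<Vector2>();
        for (int i = 0; i < maxTrees; i++)
        {
            float x = nodeMinX + (float)random.NextDouble()*sideLength;
            float z = nodeMinZ + (float)random.NextDouble()*sideLength;
            if (getSlope(x, z) <= maxSlope) //reject steep candidates individually
            {
                positions.Add(new Vector2(x, z));
            }
        }
        return positions;
    }
}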
  11. Hi everyone, I'm looking at the following article for creating terrain with a binary triangle tree: https://www.gamedev.net/resources/_/technical/graphics-programming-and-theory/binary-triangle-trees-and-terrain-tessellation-r806# I understand the 'split' code in there, but I was wondering if anyone could provide some similar pseudo code for the 'merge' operation that should happen when a triangle no longer needs its two children? (A rough sketch of the kind of thing I mean follows this post.) Thanks very much!
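For illustration, a hedged sketch of a ROAM-style merge rather than the article's own code; the diamond test here is simplified (a full implementation would also verify that the base neighbour's base neighbour is this node):

//A split triangle and its base neighbour form a 'diamond'; both must have only
//leaf children before either can merge, otherwise merging creates a T-junction.
class BinTriNode
{
    public BinTriNode LeftChild, RightChild;
    public BinTriNode BaseNeighbour; //may be null on the terrain edge

    public bool IsLeaf { get { return LeftChild == null; } }

    bool ChildrenAreLeaves
    {
        get { return !IsLeaf && LeftChild.IsLeaf && RightChild.IsLeaf; }
    }

    //A node may merge only when its own children and its base neighbour's
    //children are all leaves (the whole diamond is at minimum depth).
    public bool CanMerge
    {
        get
        {
            return ChildrenAreLeaves &&
                   (BaseNeighbour == null || BaseNeighbour.ChildrenAreLeaves);
        }
    }

    public void Merge()
    {
        if (!CanMerge) return;
        if (BaseNeighbour != null)
        {
            BaseNeighbour.LeftChild = null;  //drop the neighbour's children too,
            BaseNeighbour.RightChild = null; //keeping the diamond consistent
        }
        LeftChild = null;
        RightChild = null;
    }
}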
  12. Hi everyone, I really like the idea of creating a simulation of a (virtually) unlimited terrain using something like a Perlin noise algorithm to determine the height of any (x, z) location and applying the height to a grid of vertices. Doing this requires some sort of dynamic LOD system to simplify the distant terrain while making nearby sections appear more detailed depending on camera distance. I have a rough working version right now which is built something like this:

- The terrain consists of a quadtree where each node represents a single tile. A tile contains four vertices and may contain four immediate child tiles of the same type. This allows the quadtree to be stored by creating a set of 'base' tiles and adding/removing children recursively as required. My terrain tiles are defined like this:

struct TerrainCell
{
    float sideLength;
    D3DXVECTOR3 centre;
    TerrainVertex vertices[4]; //bottom-left, top-left, bottom-right, top-right
    vector <TerrainCell> children;

    TerrainCell(D3DXVECTOR2 in_centre, float in_sideLength)
    {
        //Logic to determine locations of vertices using the centre and sideLength.
    }
};

- Each frame, I traverse the ENTIRE quadtree starting at the base cells. For each cell I add or remove child cells depending on the camera distance. If a cell needs to become more detailed, I add the four children if they do not already exist. I then process the children recursively in the same way. Eventually, I reach a point where a given cell decides it should NOT split. At this point, I add a reference to that cell into a 'render queue' so that I know to draw it on this frame.

Note: the logic used to determine whether cells should split is simple, and it is currently governed by this function:

bool ShouldCellSplit(TerrainCell *cell)
{
    return D3DXVec3Length(&(cell->centre - mainCam.pos)) < 10*cell->sideLength;
}

- Once this frame's render queue is built, I loop through all the cells in there and add them sequentially to a vertex buffer. The size of the buffer is hard-coded - at the moment I have made it large enough to contain 1000 cells (each with 4 vertices). It is created using the following parameters when the program starts:

d3ddev->CreateVertexBuffer(vertBufferSize*4*sizeof(TerrainVertex), D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY, 0, D3DPOOL_DEFAULT, &vertexBuffer, 0);

vertBufferSize is 1000 (i.e. the buffer can contain up to 1000 cells). Here is how I then add and render the cells:

unsigned numTilesProcessed = 0;
for (unsigned rqc = 0; rqc < renderQueue.size(); rqc++) //Loop through all the cells in the render queue
{
    void* pVoid;
    if (numTilesProcessed >= vertBufferSize) //The vertex buffer is full - need to render the contents and empty it
    {
        if (FAILED(d3ddev->SetVertexDeclaration(vertexDecleration))){return false;} //A declaration matching the content of a TerrainVertex
        if (FAILED(d3ddev->SetStreamSource(0, vertexBuffer, 0, sizeof(TerrainVertex)))){return false;}

        for (unsigned prt = 0; prt < vertBufferSize; prt++) //Draw the tiles one by one
        {
            if (FAILED(d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, prt*4, 2))){return false;}
        }

        if (FAILED(vertexBuffer->Lock(0, 0, (void**)&pVoid, D3DLOCK_DISCARD))){return false;}
        numTilesProcessed = 0;
    }
    else //This will occur if we have not reached the buffer's size limit yet
    {
        if (FAILED(vertexBuffer->Lock(numTilesProcessed*4*sizeof(TerrainVertex), 4*sizeof(TerrainVertex), (void**)&pVoid, D3DLOCK_NOOVERWRITE))){return false;}
    }

    memcpy(pVoid, renderQueue[rqc]->vertices, 4*sizeof(TerrainVertex));
    if (FAILED(vertexBuffer->Unlock())){return false;}

    numTilesProcessed += 1;
}

(There is also some very similar code afterwards to handle the final batch of tiles, which probably won't fill the vertex buffer to capacity.)

...and that's pretty much it! The thing is, I came up with this algorithm having had little experience of terrain rendering or quadtrees before, so I suspect that people more experienced in this area will see ways to improve it or optimise certain parts. Perhaps it's ugly and could be entirely re-written in a much cleverer way! So what do you think?

Thanks for taking a look - any comments would be much appreciated :)
  13. D3DLOCK_NOOVERWRITE Erratic Flashing

    Ah OK, thanks for the responses! Strange how it worked perfectly on my other computer! I guess a good approach might be to have a fixed-size buffer to hold, say, 100 tiles and I can loop through my tiles to add them into the buffer using D3DLOCK_NOOVERWRITE. Then when I fill up the buffer I can draw every tile in there, then lock it with D3DLOCK_DISCARD and fill it up from the bottom again. i.e. draw 100 tiles at a time, emptying the buffer for the next 100.
  14. Hi everyone, I've been working on a quadtree LOD terrain system lately and up until now, things have been going pretty smoothly. You can see a couple of screenshots showing the kind of basic result I'm getting in this post.

I recently started working on a different computer and I just copied my VC++ project directly across, opened it and re-compiled it to check it would work. That's when I ran into a very strange issue that I don't really know how to diagnose. My terrain mesh, which rendered perfectly well on the previous computer, now appears as a bunch of random flashing squares instead of a complete mesh. I've made a gif from a few frame captures which shows you the kind of result I'm getting.

I've isolated the issue to my vertex buffer locking calls, where I specify the flags D3DLOCK_DISCARD | D3DLOCK_NOOVERWRITE. If I remove D3DLOCK_NOOVERWRITE, the issue is resolved but my frame rate drops dramatically. It is for performance reasons that I used D3DLOCK_NOOVERWRITE in the first place, so I'd like to keep using it if possible!

So, my question is: can anyone think of a reason why my code, which worked perfectly fine on a different computer just yesterday, is suddenly so broken? I guess it's some sort of hardware issue (I can post specs if you think it may help). Or perhaps there's a code issue which was somehow masked on the previous computer?

I'll explain how the program works. Basically the terrain is constructed using a bunch of square tiles which recursively subdivide or un-subdivide into four more tiles depending on the camera distance each frame. When I have created the list of tiles to render for a given frame, I send them, one by one, into my vertex buffer and draw them individually using D3DPT_TRIANGLESTRIP. This means that the vertex buffer is emptied and re-filled a great deal of times each frame, holding a maximum of four vertices at any one time. Here is some code which may be useful:

TerrainVertex structure (each terrain tile has an array of four of these (i.e. TerrainVertex vertices[4];) which are filled when the tile is created):

struct TerrainVertex
{
    D3DXVECTOR4 pos;
    D3DXVECTOR2 texCoords;
    D3DXVECTOR3 normal;

    TerrainVertex() {pos = D3DXVECTOR4(0, 0, 0, 1); texCoords = D3DXVECTOR2(0, 0); normal = D3DXVECTOR3(0, 1, 0);}
    TerrainVertex(D3DXVECTOR4 p, D3DXVECTOR2 txc, D3DXVECTOR3 n) {pos = p; texCoords = txc; normal = n;}
};

Vertex declaration/buffer setup:

D3DVERTEXELEMENT9 elements[] =
{
    {0, sizeof(float)*0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
    {0, sizeof(float)*4, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
    {0, sizeof(float)*6, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL, 0},
    D3DDECL_END()
};

vertexDecleration = 0; //This is a LPDIRECT3DVERTEXDECLARATION9 declared elsewhere in the program
if (FAILED(d3ddev->CreateVertexDeclaration(elements, &vertexDecleration))){return false;}
if (FAILED(d3ddev->CreateVertexBuffer(4*sizeof(TerrainVertex), D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY, 0, D3DPOOL_DEFAULT, &vertexBuffer, 0))){return false;}

Rendering loop which should render all the tiles for a given frame, one after the other (note that renderQueue is just a list of tiles which is re-filled on each frame; each element in renderQueue has four vertices as described above):

for (unsigned rqc = 0; rqc < renderQueue.size(); rqc++)
{
    void* pVoid;
    if (FAILED(vertexBuffer->Lock(0, 0, (void**)&pVoid, D3DLOCK_DISCARD | D3DLOCK_NOOVERWRITE))){return false;}
    memcpy(pVoid, renderQueue[rqc]->vertices, sizeof(renderQueue[rqc]->vertices));
    if (FAILED(vertexBuffer->Unlock())){return false;}

    if (FAILED(d3ddev->SetVertexDeclaration(vertexDecleration))){return false;}
    if (FAILED(d3ddev->SetStreamSource(0, vertexBuffer, 0, sizeof(TerrainVertex)))){return false;}

    if (FAILED(d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2))){return false;}
}

So, you can see that the vertexBuffer is continually being re-filled with four different vertices for each tile.

Thanks for looking, and let me know if I can provide any more useful info!
  15. Dealing With Quadtree Terrain Cracks

    Thanks very much for the replies. I really like the idea of averaging the positions of the vertices - that should need very little overhead at all (a rough sketch of it follows this post). I've wondered whether my algorithm can ever give me a situation where two adjacent tiles differ by more than one LOD level, but having tried a bunch of things empirically, I think it naturally enforces this. The decision to split a cell is based on camera distance as a multiple of the cell's side length, which seems to produce cascading areas that differ by one LOD only. I'll give these ideas a go and see what happens :)
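A minimal sketch of the averaging idea, with a stand-in vertex type:

using Microsoft.Xna.Framework;

static class CrackWelding
{
    //Stand-in for the real vertex type.
    public struct EdgeVertex { public Vector3 Position; }

    //Where a fine tile borders a coarser one, snap each odd vertex along the
    //shared edge to the midpoint of its two even neighbours, so the fine edge
    //lies exactly on the coarse edge and no crack can open.
    public static void WeldEdge(EdgeVertex[] edge) //vertices ordered along the shared edge
    {
        for (int i = 1; i < edge.Length - 1; i += 2)
        {
            edge[i].Position = 0.5f*(edge[i - 1].Position + edge[i + 1].Position);
        }
    }
}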