
george7378

Members · Content count: 295 · Community Reputation: 1441 Excellent · Location: United Kingdom
  1. Hi everyone, I have a water (pixel) shader which works like this:
     - Sample a normal map twice using two sets of moving coordinates. Average these to get the normal.
     - Sample reflection and refraction maps using screen-space coordinates which are perturbed slightly according to the normal.
     - Add a 'murky' colour to the refraction colour based on depth.
     - Interpolate between reflection and refraction maps based on the fresnel term (i.e. how shallow the viewing angle is).
     - Add specular highlights using the sampled normal.
     http://imgur.com/ALYmL53.jpg
     http://imgur.com/2cGc8kg.jpg
     I'm liking the results but I feel like there should be a diffuse lighting element too, so that waves created by the normal map can be seen when you aren't looking in the direction of the Sun. This is simple enough on a solid object (I'm already doing it on the terrain in the above pictures) but I'm not sure of the most accurate way to do it for water. Should I apply a diffuse factor to both reflections and refractions? Should I do it the same way as I would for solid objects? Anyone with experience creating their own water shader, some input would be very helpful :) Thanks.
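     One option I've been considering is applying the diffuse factor to the refraction side only, before the fresnel interpolation, on the reasoning that the reflection is environment light that shouldn't be re-lit. A rough sketch in plain C++ rather than HLSL (all structs, names and the 0.3/0.7 ambient split are made up for illustration, not from my actual shader):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot3(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 shadeWater(const Vec3& reflectionColour, const Vec3& refractionColour,
                const Vec3& normal, const Vec3& lightDir, float fresnel)
{
    // Diffuse factor from the perturbed normal-map normal, clamped at zero.
    float diffuse = std::max(dot3(normal, lightDir), 0.0f);
    // Ambient floor so the water never goes completely black away from the Sun.
    float lit = 0.3f + 0.7f*diffuse;

    // Light the refraction/murk side only.
    Vec3 litRefraction = { refractionColour.x*lit,
                           refractionColour.y*lit,
                           refractionColour.z*lit };

    // Usual fresnel lerp: shallow viewing angles favour the (unlit) reflection.
    Vec3 out;
    out.x = litRefraction.x + (reflectionColour.x - litRefraction.x)*fresnel;
    out.y = litRefraction.y + (reflectionColour.y - litRefraction.y)*fresnel;
    out.z = litRefraction.z + (reflectionColour.z - litRefraction.z)*fresnel;
    return out;
}
```

     With fresnel at 0 (looking straight down) this reduces to the lit refraction colour, and with fresnel at 1 it is pure reflection, so the diffuse waves only show where you can see into the water.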
  2. I recently picked up a failed attempt from last year to create an 'infinite' procedural terrain. Really pleased with how it's going this time round: https://www.youtube.com/watch?v=K8gIssDpm04
  3. I think the issue is that the depth doesn't have enough resolution when stored in the alpha channel. The effect seems to be working when the camera is right next to the water plane, but since my far clip plane is pretty distant, it causes problems when the camera is not close to the water. Not sure of the best way to continue with this - perhaps I should calculate the 'fog factor' in the terrain shader based on actual world positions, but then I'd have to add a ray-plane intersection to calculate the distance to the water. EDIT: I was already doing clipping when drawing the refraction map so I just decided to use the distance below the clip plane and pass that through in the alpha channel (after dividing by a scale factor to get it in a reasonable range). Produces a decent result:
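     To put numbers on the resolution problem: D3D-style projected depth is nonlinear and crowds almost the whole scene up near 1.0 when the far plane is distant, which is fatal when the value is then squeezed into an 8-bit alpha channel. A small standalone sketch (function name is mine, the formula is the standard perspective depth):

```cpp
#include <cmath>

// Standard D3D perspective depth: z' in [0, 1], 0 at the near plane.
// With n = 1 and f = 10000, a point 100 units away already projects to
// ~0.99, so an 8-bit channel has only a couple of distinct values left
// for everything beyond that.
float projectedDepth(float zEye, float n, float f)
{
    return (f/(f - n)) * (1.0f - n/zEye);
}
```

     This is why storing a linear quantity instead (like the distance below the clip plane, divided by a scale factor) behaves so much better in the alpha channel.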
  4. Thanks for the reply. I think for XNA the projection gives a z-coordinate from 0 to 1, with 0 being the near plane (?): https://msdn.microsoft.com/en-us/library/bb195672.aspx
  5. Hi everyone, I have a planar water shader which I've been using for quite a while. It follows the standard format of interpolating between refraction/reflection maps to create the overall colour and using normal maps to perturb the texture coordinates and create specular reflection. It produces decent results but I'd like to add some depth-based effects such as weighting towards a 'murky' colour for pixels in the refraction map which have a larger volume of water between them and the camera. Something a bit like this: https://i.ytimg.com/vi/UkskiSza4p0/maxresdefault.jpg
     I tried the following approach to determine the water depth for each pixel of the refraction map:
     - As I'm rendering terrain under the water, the alpha channel is not going to be used (i.e. I won't have translucent terrain). So, in the terrain pixel shader, I used the alpha channel to return the screen-space depth, i.e:

     // ...pixel shader code to calculate the terrain colour here...
     return float4(finalTerrainColour.rgb, psInput.ScreenPos.z/psInput.ScreenPos.w);

     - Then in the water shader, when I sample my refraction map containing the terrain rendered below the water, I find the difference between the alpha channel (where the terrain depth is stored) and the depth of the current pixel on the water plane, i.e:

     // ...pixel shader code to sample refraction map colour...
     refractionMapColour = lerp(refractionMapColour, murkyWaterColour, saturate(depthDropoffScale*(refractionMapColour.a - psInput.ScreenPos.z/psInput.ScreenPos.w)));

     This didn't seem to work and produced some strange results, for example when the camera was just above the water plane, it would show the 'murky' colour only, even though the terrain was quite a way below the water (and hence refractionMapColour.a should have been much larger than the depth of the pixel on the water plane). So, can anyone spot an obvious problem with the way I tried above, and are there any better ways of going about such depth-based effects?
Thanks for the help :)
  6. Hi everyone, I'm pretty happy with my system for making an 'infinite' terrain using quadtree level of detail. At the moment, though, the terrain is bare with no vegetation. I'd like to place trees, etc. on my landscape in areas where the slope is sufficiently small, but I'm having trouble coming up with the best way to do it. Here's my best idea right now: when a new quadtree node is created, if the slope at the central point is below a certain value, the node is eligible for vegetation. Choose a set number of random locations within the node and add trees. This means that a maximum number of trees exists per node and that larger, less detailed nodes will have a lower density of trees. However, there are problems: for example, in large nodes the centre point may not represent the terrain that the node encompasses. Also, it would be nice to have a system that will place the trees in the same 'random' locations every time a node in a particular location with a particular level of detail is created. Perhaps you can share your ideas for how to automatically place trees? Thanks in advance :)
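     For the repeatability part, the idea I keep coming back to is hashing the node's integer grid coordinates and LOD level, so the same node always produces the same tree positions without storing anything. A sketch (the constants are arbitrary large primes and the names are made up):

```cpp
#include <cstdint>

// Deterministic per-node 'random' value: the same (x, z, lod, i) always
// gives the same hash, so a node regenerated later gets identical trees.
// i indexes the tree within the node.
uint32_t hashNode(int32_t x, int32_t z, int32_t lod, uint32_t i)
{
    uint32_t h = (uint32_t)x*73856093u ^ (uint32_t)z*19349663u
               ^ (uint32_t)lod*83492791u ^ i*2654435761u;
    // Final avalanche so neighbouring nodes don't correlate.
    h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
    return h;
}

// Map a hash to [0, 1) for scaling by the node's side length.
float hashToUnitFloat(uint32_t h) { return (h & 0xFFFFFFu)/16777216.0f; }
```

     Tree i in a node would then sit at, say, (nodeMinX + hashToUnitFloat(hashNode(x, z, lod, 2*i))*sideLength, nodeMinZ + hashToUnitFloat(hashNode(x, z, lod, 2*i + 1))*sideLength), and the slope could be sampled at each candidate position rather than only at the node centre.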
  7. Hi everyone, I'm looking at the following article for creating terrain with a binary triangle tree: https://www.gamedev.net/resources/_/technical/graphics-programming-and-theory/binary-triangle-trees-and-terrain-tessellation-r806# I understand the 'split' code in there but I was wondering if anyone could provide some similar pseudo code for the 'merge' operation that should happen when a triangle no longer needs its two children? Thanks very much!
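     For reference, here's my rough guess at what the merge might look like, based on the split working diamond-wise with the base neighbour (the structure and names are illustrative, not from the article):

```cpp
#include <cstddef>

// Minimal bintree node: a split creates two children; splits across the
// hypotenuse happen as a diamond with the base neighbour.
struct TriNode {
    TriNode* leftChild     = nullptr;
    TriNode* rightChild    = nullptr;
    TriNode* baseNeighbour = nullptr; // neighbour across the hypotenuse
};

bool isLeaf(const TriNode* n) { return !n->leftChild && !n->rightChild; }

bool canMerge(const TriNode* n)
{
    if (isLeaf(n)) return false;                  // nothing to merge
    if (!isLeaf(n->leftChild) || !isLeaf(n->rightChild))
        return false;                             // merge deeper levels first
    // The split happened as a diamond with the base neighbour, so the
    // neighbour (if any) must also have two leaf children, ready to merge,
    // or cracks appear.
    const TriNode* b = n->baseNeighbour;
    if (b && (isLeaf(b) || !isLeaf(b->leftChild) || !isLeaf(b->rightChild)))
        return false;
    return true;
}

void mergeDiamond(TriNode* n, void (*freeNode)(TriNode*))
{
    // Return both pairs of children to the pool/allocator.
    freeNode(n->leftChild);  n->leftChild  = nullptr;
    freeNode(n->rightChild); n->rightChild = nullptr;
    if (n->baseNeighbour) {
        freeNode(n->baseNeighbour->leftChild);  n->baseNeighbour->leftChild  = nullptr;
        freeNode(n->baseNeighbour->rightChild); n->baseNeighbour->rightChild = nullptr;
    }
}
```

     The key point seems to be that you can only merge bottom-up, and only when both halves of the diamond agree, which mirrors the forced splits on the way down.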
  8. Hi everyone, I really like the idea of creating a simulation of a (virtually) unlimited terrain using something like a perlin noise algorithm to determine the height of any (x, z) location and applying the height to a grid of vertices. Doing this requires some sort of dynamic LOD system to simplify the distant terrain while making nearby sections appear more detailed depending on camera distance. I have a rough working version right now which is built something like this:

     - The terrain consists of a quadtree where each node represents a single tile. A tile contains four vertices and may contain four immediate child tiles of the same type. This allows the quadtree to be stored by creating a set of 'base' tiles and adding/removing children recursively as required. My terrain tiles are defined like this:

     struct TerrainCell
     {
         float sideLength;
         D3DXVECTOR3 centre;
         TerrainVertex vertices[4]; //bottom-left, top-left, bottom-right, top-right
         vector <TerrainCell> children;

         TerrainCell(D3DXVECTOR2 in_centre, float in_sideLength)
         {
             //Logic to determine locations of vertices using the centre and sideLength.
         }
     };

     - Each frame, I traverse the ENTIRE quadtree starting at the base cells. For each cell I add or remove child cells depending on the camera distance. If a cell needs to become more detailed, I add the four children if they do not already exist. I then process the children recursively in the same way. Eventually, I reach a point where a given cell decides it should NOT split. At this point, I add a reference to that cell into a 'render queue' so that I know to draw it on this frame.

     Note: the logic used to determine whether cells should split is simple, and it is currently governed by this function:

     bool ShouldCellSplit(TerrainCell *cell)
     {
         return D3DXVec3Length(&(cell->centre - mainCam.pos)) < 10*cell->sideLength;
     }

     - Once this frame's render queue is built, I loop through all the cells in there and add them sequentially to a vertex buffer.
     The size of the buffer is hard-coded - at the moment I have made it large enough to contain 1000 cells (each with 4 vertices). It is created using the following parameters when the program starts:

     d3ddev->CreateVertexBuffer(vertBufferSize*4*sizeof(TerrainVertex), D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY, 0, D3DPOOL_DEFAULT, &vertexBuffer, 0);

     vertBufferSize is 1000 (i.e. the buffer can contain up to 1000 cells). Here is how I then add and render the cells:

     unsigned numTilesProcessed = 0;
     for (unsigned rqc = 0; rqc < renderQueue.size(); rqc++) //Loop through all the cells in the render queue
     {
         void* pVoid;
         if (numTilesProcessed >= vertBufferSize) //The vertex buffer is full - need to render the contents and empty it
         {
             if (FAILED(d3ddev->SetVertexDeclaration(vertexDecleration))){return false;} //A declaration matching the content of a TerrainVertex
             if (FAILED(d3ddev->SetStreamSource(0, vertexBuffer, 0, sizeof(TerrainVertex)))){return false;}

             for (unsigned prt = 0; prt < vertBufferSize; prt++) //Draw the tiles one by one
             {
                 if (FAILED(d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, prt*4, 2))){return false;}
             }

             if (FAILED(vertexBuffer->Lock(0, 0, (void**)&pVoid, D3DLOCK_DISCARD))){return false;}
             numTilesProcessed = 0;
         }
         else //This will occur if we have not reached the buffer's size limit yet
         {
             if (FAILED(vertexBuffer->Lock(numTilesProcessed*4*sizeof(TerrainVertex), 4*sizeof(TerrainVertex), (void**)&pVoid, D3DLOCK_NOOVERWRITE))){return false;}
         }

         memcpy(pVoid, renderQueue[rqc]->vertices, 4*sizeof(TerrainVertex));
         if (FAILED(vertexBuffer->Unlock())){return false;}

         numTilesProcessed += 1;
     }

     (There is also some more very similar code afterwards to handle the final batch of tiles which probably won't fill the vertex buffer up to its capacity).

     ...and that's pretty much it!
The thing is, I came up with this algorithm having had little experience of terrain rendering or quadtrees before, so I suspect that people more experienced in this area will see ways to improve it or optimise certain parts. Perhaps it's ugly and could be entirely re-written in a much cleverer way! So what do you think?   Thanks for taking a look - any comments would be much appreciated :)
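     In case it helps anyone follow the recursion, here's the same split logic distilled into a self-contained version with the D3D types stripped out (plain structs, same 10x side-length threshold as ShouldCellSplit above):

```cpp
#include <cmath>
#include <vector>

struct Cell {
    float cx, cz, sideLength;
    std::vector<Cell> children;
};

// Split while the camera is closer than 10 side lengths and the cell is
// still above the minimum size.
bool shouldSplit(const Cell& c, float camX, float camZ, float minSide)
{
    float dx = c.cx - camX, dz = c.cz - camZ;
    return c.sideLength > minSide &&
           std::sqrt(dx*dx + dz*dz) < 10.0f*c.sideLength;
}

// Recursive traversal: leaves land in the render queue, children are
// created or discarded on the way down, exactly as described above.
void buildRenderQueue(Cell& c, float camX, float camZ, float minSide,
                      std::vector<Cell*>& queue)
{
    if (!shouldSplit(c, camX, camZ, minSide)) {
        c.children.clear();
        queue.push_back(&c);
        return;
    }
    if (c.children.size() != 4) {
        c.children.clear();
        float h = c.sideLength/2, q = c.sideLength/4;
        c.children.push_back({c.cx - q, c.cz + q, h, {}});
        c.children.push_back({c.cx + q, c.cz + q, h, {}});
        c.children.push_back({c.cx + q, c.cz - q, h, {}});
        c.children.push_back({c.cx - q, c.cz - q, h, {}});
    }
    for (Cell& child : c.children)
        buildRenderQueue(child, camX, camZ, minSide, queue);
}
```

     With the camera sitting on a 64-unit base cell and a 16-unit minimum size this produces 16 leaf tiles; move the camera far away and the base cell renders as a single tile.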
  9. Ah OK, thanks for the responses! Strange how it worked perfectly on my other computer! I guess a good approach might be to have a fixed-size buffer to hold, say, 100 tiles and I can loop through my tiles to add them into the buffer using D3DLOCK_NOOVERWRITE. Then when I fill up the buffer I can draw every tile in there, then lock it with D3DLOCK_DISCARD and fill it up from the bottom again. i.e. draw 100 tiles at a time, emptying the buffer for the next 100.
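     The batching pattern I have in mind, stripped to its CPU-side logic (the 'draw' just records batch sizes here so the sequencing is visible; the real version would lock with D3DLOCK_NOOVERWRITE on append and D3DLOCK_DISCARD after a flush):

```cpp
#include <cstddef>
#include <vector>

struct BatchedBuffer {
    size_t capacity;                      // max tiles per batch
    size_t used = 0;                      // tiles written since last discard
    std::vector<size_t> flushedBatchSizes; // stands in for DrawPrimitive calls

    explicit BatchedBuffer(size_t cap) : capacity(cap) {}

    void addTile()
    {
        if (used == capacity) flush();    // buffer full: draw, then discard
        ++used;                           // no-overwrite append of one tile
    }

    void flush()
    {
        if (used == 0) return;
        flushedBatchSizes.push_back(used); // draw everything written so far
        used = 0;                          // next lock would use DISCARD
    }
};
```

     Feeding 250 tiles through a 100-tile buffer gives batches of 100, 100 and 50, so the final partial batch needs its own flush after the loop, same as the leftover-tiles code I mentioned.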
  10. Hi everyone,   I've been working on a quadtree LOD terrain system lately and up until now, things have been going pretty smoothly. You can see a couple of screenshots showing the kind of basic result I'm getting in this post.   I recently started working on a different computer and I just copied my VC++ project directly across, opened it and re-compiled it to check it would work. That's when I ran into a very strange issue that I don't really know how to diagnose. My terrain mesh, which rendered perfectly well on the previous computer, now appears as a bunch of random flashing squares instead of a complete mesh. I've made a gif from a few frame captures which shows you the kind of result I'm getting:     I've isolated the issue to my Vertex Buffer locking calls, where I specify the flags D3DLOCK_DISCARD | D3DLOCK_NOOVERWRITE. If I remove D3DLOCK_NOOVERWRITE, the issue is resolved but my frame rate drops dramatically. It is for performance reasons that I used D3DLOCK_NOOVERWRITE in the first place, so I'd like to keep using it if possible!   So, my question is, can anyone think of a reason why my code which worked perfectly fine on a different computer just yesterday is suddenly so broken? I guess it's some sort of hardware issue (I can post specs if you think it may help). Or perhaps there's a code issue which was somehow masked on the previous computer?   I'll explain how the program works. Basically the terrain is constructed using a bunch of square tiles which recursively subdivide or un-subdivide into four more tiles depending on the camera distance each frame. When I have created the list of tiles to render for a given frame, I send them, one by one, into my vertex buffer and I draw them individually using D3DPT_TRIANGLESTRIP. This means that the vertex buffer is emptied and re-filled a great deal of times each frame, holding a maximum of four vertices at any one time. 
     Here is some code which may be useful:

     TerrainVertex structure (each terrain tile has an array of four of these (i.e. TerrainVertex vertices[4];) which are filled when the tile is created):

     struct TerrainVertex
     {
         D3DXVECTOR4 pos;
         D3DXVECTOR2 texCoords;
         D3DXVECTOR3 normal;

         TerrainVertex() {pos = D3DXVECTOR4(0, 0, 0, 1); texCoords = D3DXVECTOR2(0, 0); normal = D3DXVECTOR3(0, 1, 0);}
         TerrainVertex(D3DXVECTOR4 p, D3DXVECTOR2 txc, D3DXVECTOR3 n) {pos = p; texCoords = txc; normal = n;}
     };

     Vertex declaration/buffer setup:

     D3DVERTEXELEMENT9 elements[] =
     {
         {0, sizeof(float)*0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
         {0, sizeof(float)*4, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
         {0, sizeof(float)*6, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL, 0},
         D3DDECL_END()
     };

     vertexDecleration = 0; //This is a LPDIRECT3DVERTEXDECLARATION9 declared elsewhere in the program
     if (FAILED(d3ddev->CreateVertexDeclaration(elements, &vertexDecleration))){return false;}
     if (FAILED(d3ddev->CreateVertexBuffer(4*sizeof(TerrainVertex), D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY, 0, D3DPOOL_DEFAULT, &vertexBuffer, 0))){return false;}

     Rendering loop which should render all the tiles for a given frame, one after the other:

     //Note that renderQueue is just a list of tiles which is re-filled on each frame. Each element in renderQueue has four vertices as described above.
     for (unsigned rqc = 0; rqc < renderQueue.size(); rqc++)
     {
         void* pVoid;
         if (FAILED(vertexBuffer->Lock(0, 0, (void**)&pVoid, D3DLOCK_DISCARD | D3DLOCK_NOOVERWRITE))){return false;}
         memcpy(pVoid, renderQueue[rqc]->vertices, sizeof(renderQueue[rqc]->vertices));
         if (FAILED(vertexBuffer->Unlock())){return false;}

         if (FAILED(d3ddev->SetVertexDeclaration(vertexDecleration))){return false;}
         if (FAILED(d3ddev->SetStreamSource(0, vertexBuffer, 0, sizeof(TerrainVertex)))){return false;}

         if (FAILED(d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2))){return false;}
     }

     So, you can see that the vertexBuffer is continually being re-filled with four different vertices for each tile.

     Thanks for looking, and let me know if I can provide any more useful info!
  11. Thanks very much for the replies. I really like the idea of averaging the positions of the vertices - that should need very little overhead at all. I've wondered about whether my algorithm can ever give me a situation were two adjacent tiles differ by more than one LOD value, but having tried a bunch of things empirically, I think it naturally enforces this. The decision to split a cell is based on camera distance as a multiple of the cell's side length which seems to produce cascading areas which differ by one LOD only. I'll give these ideas a go and see what happens :)
  12. Hi everyone, I'm working on a quadtree-based dynamic terrain program which subdivides a virtual mesh to change level of detail depending on camera distance to the terrain. My world is built with cells defined by the following structure:

     struct TerrainCell
     {
         float sideLength;
         D3DXVECTOR3 centre;
         TerrainVertex vertices[4]; //bottom-left, top-left, bottom-right, top-right
         vector <TerrainCell> children;
     };

     ...Note that every cell has a list of child cells which may be empty or populated with four children depending on camera distance to the cell.

     At the moment, my program works as follows:

     - Hold an array of 'base' TerrainCells whose sideLength is the largest allowed.
     - Each frame, build a 'render queue' by traversing the quadtree for each base cell, creating or removing child cells recursively depending on distance. This render queue is simply a vector containing pointers to all the cells which will be rendered this frame. It is built with the following recursive function:

     void ProcessCell(TerrainCell *cell)
     {
         //No children if camera is too far away or cell is minimum size
         if (!ShouldCellSplit(cell) || cell->sideLength <= sideLengthLimit)
         {
             cell->children.clear();
             renderQueue.push_back(cell);
             return;
         }

         //If the cell SHOULD have children but DOESN'T, add them
         if (cell->children.size() != 4)
         {
             cell->children.clear();
             float halfSize = cell->sideLength/2, quarterSize = cell->sideLength/4;

             cell->children.push_back(TerrainCell(D3DXVECTOR2(cell->centre.x - quarterSize, cell->centre.z + quarterSize), halfSize));
             cell->children.push_back(TerrainCell(D3DXVECTOR2(cell->centre.x + quarterSize, cell->centre.z + quarterSize), halfSize));
             cell->children.push_back(TerrainCell(D3DXVECTOR2(cell->centre.x + quarterSize, cell->centre.z - quarterSize), halfSize));
             cell->children.push_back(TerrainCell(D3DXVECTOR2(cell->centre.x - quarterSize, cell->centre.z - quarterSize), halfSize));
         }

         //Recursively process the children
         ProcessCell(&cell->children[0]);
         ProcessCell(&cell->children[1]);
         ProcessCell(&cell->children[2]);
         ProcessCell(&cell->children[3]);
     }

     - I then loop through the render queue, put the vertices for each cell into a D3DUSAGE_DYNAMIC buffer and draw them one at a time as triangle strips using DrawPrimitive().

     This creates my desired effect very well, i.e. a dynamic terrain surface which subdivides in certain places depending on camera distance:

     [attachment=32622:wf.png]

     ...however, I have the problem of cracks between adjacent cells which have different detail levels:

     [attachment=32621:sld.png]

     I'm aware that this is a problem which can be solved using a few approaches, but I'm quite stuck as to how I should implement these into my own algorithm. My favourite idea is to build up a library of where the cracks exist and fill them in with a separate rendering pass by drawing some new triangles using the vertices that define the cracks. However I'm a bit stuck on the best way to gather this information. How do I actually find out which cells are contributing to a crack and how can I find the vertices of the associated triangle?

     Any thoughts that you might have on how you would go about solving this issue would be much appreciated - I feel like I just need something to push me in the right direction and get me started :)

     Thanks!
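     One alternative to filling cracks with extra triangles is welding the T-junction instead: when a fine tile borders a coarser one, move the fine side's edge-midpoint vertex onto the straight line between the coarse edge's two corners, so both sides of the edge agree and the crack closes. A minimal sketch, assuming the fine tile can look up its coarser neighbour's edge corners (plain structs for clarity):

```cpp
struct V3 { float x, y, z; };

// The coarse neighbour renders its edge as the straight segment A-B, so
// the fine side's shared midpoint vertex must sit exactly halfway along it
// (this is the 'average the vertex positions' idea).
V3 weldMidpoint(const V3& coarseA, const V3& coarseB)
{
    return { (coarseA.x + coarseB.x)*0.5f,
             (coarseA.y + coarseB.y)*0.5f,
             (coarseA.z + coarseB.z)*0.5f };
}
```

     This only works cleanly if adjacent tiles never differ by more than one LOD level, which the distance-based split rule appears to enforce naturally.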
  13. Hi everyone, I'm currently trying to create a simple procedural terrain system that I will later be able to expand and improve. So far I've created a quadtree-based system of cells which will divide based on camera proximity. When enough recursive division has occured, each cell is put into a queue and used to generate vertices for a dynamic buffer. I'm happy with my plane-generating system, but at the moment all it does is generate a flat plane with appropriate subdivision.

     When each vertex is generated, I want to determine its height using a noise function. I'm familiar with using octaves of noise along with interpolation and 1d and 2d PRNG functions to produce perlin-style noise, but the functions I've used in the past won't quite meet my needs here. I want to have a noise function that will produce the same pseudo-random output for a given pair of x-y coordinates, but I also want to be able to set a 'seed' value that will let me generate more than one terrain output as a result.

     I'm thinking that the simplest way to do this would be to use a 1D noise function like this one I've used before:

     float hash_noise_1d(int input)
     {
         input = (input << 13)^input;
         float r = 1 - ((input*(input*input*15731 + 789221) + 1376312589) & 0x7fffffff)/float(1073741824);
         return (r + 1)/2;
     }

     ...and to find a way of collapsing the x coordinate, the y coordinate and the seed down into a single integer value for input. This way, I would always get the same result for a given set of x-y coordinates combined with a given seed. So my question is: does anyone have a clever way to combine two (integer) coordinates along with an integer seed? I'm thinking that clever use of some bitwise operators would do the trick.

     Thanks for your input :)
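     One candidate combiner I've been toying with (the multiplier is Knuth's multiplicative-hash constant; any good odd mixer would do, and the only property that really matters is determinism):

```cpp
#include <cstdint>

// Mix seed, x and y into one 32-bit value to feed a 1D hash function.
// Unsigned arithmetic avoids the signed-overflow traps of doing this
// with plain ints.
int combineCoords(int x, int y, int seed)
{
    uint32_t h = (uint32_t)seed;
    h = h*2654435761u ^ (uint32_t)x;
    h = h*2654435761u ^ (uint32_t)y;
    h ^= h >> 16;  // final avalanche so nearby coordinates don't correlate
    return (int)h;
}
```

     The multiply between the two xors keeps (x, y) distinct from (y, x), which a plain x^y combiner would not, and changing the seed reshuffles every coordinate.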
  14. Hey, thanks for the replies everyone:   - cozzie: Basically, my program is a demonstration of a simple autopilot program which is used to control a spacecraft in two dimensions. All I have to do is set the program running and the autopilot algorithm does the rest. Setting a fixed time-per-frame ensures the results are consistent and the autopilot is using a fixed timestep for each update. Hence there is no need for a good realtime framerate when I am capturing the video, I just need to make sure I can save every frame in sequence. So the answer to the second question is no - I like to render them on-screen so I can see what it's up to, but it's not totally necessary.   - Hodgman, harveypekar: I'll give those methods a go, thanks for the help :)
  15. Hi everyone,   I know this is quite an ugly way of doing video capture, but I'd like to be able to try the following in my DX9/C++ app:   - Force each frame in the program loop to represent a fixed time interval (e.g. 1/60th of a second - easy to do) - Dump each frame to an external image file in a folder (I'd use a counter which is incremented in each frame to append a sequence number to the given frame's filename) - Import each of these frames to a movie maker program in the knowledge that I am guaranteed a specific framerate   I've tried using programs such as Fraps and Camstudio to capture my screen, but the framerates are variable and often low. I'm simply making a demonstration video for my program, which doesn't even require realtime user interaction, so it doesn't matter how long it takes to render/save each frame in sequence.   Is there a way to save the frames like this? I'm using a windowed app too if that's a restriction. Perhaps you know an alternative way of saving each and every frame my program renders?   Thanks a lot!
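     The sequencing side of what I mean is simple enough to sketch: advance the simulation by a fixed 1/60 s per frame and build a zero-padded filename per frame, so the image sequence imports at a guaranteed 60 fps. (In DX9 the actual pixel grab would go through GetRenderTargetData and D3DXSaveSurfaceToFile; only the illustrative counter/filename logic is shown here, with made-up names.)

```cpp
#include <cstdio>
#include <string>

// Zero-padded filename for frame N, e.g. "frames/frame_00007.bmp", so the
// movie maker sorts the sequence correctly.
std::string frameFilename(const char* folder, unsigned frameIndex)
{
    char buf[256];
    std::snprintf(buf, sizeof(buf), "%s/frame_%05u.bmp", folder, frameIndex);
    return std::string(buf);
}

// Fixed timestep: the simulation time fed to the update is derived from
// the frame counter, not the wall clock, so slow saving doesn't matter.
double simulatedTime(unsigned frameIndex) { return frameIndex/60.0; }
```

     The render loop would then be: update(simulatedTime(frame)), render, save to frameFilename(folder, frame), ++frame, with no real-time pacing at all.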