
Community Reputation: 162 Neutral

About cheese

  1. Hey - Coming to the States

    Sorry I don't have much to contribute - I'm just interested. How are you planning the visit - what sort of visa do you have, and how long are you staying for? Anyway, best of luck with it - sorry I have little knowledge of applying for artist-type jobs, particularly in the US.
  2. Idea for Planet Rendering

    As JohnBSmall said, it is certainly possible to render potentially gigabyte-sized datasets on relatively high-spec computers, assuming you have a sufficiently large dataset (for example, Blue Marble from NASA). You can make up for the lack of surface detail by introducing procedural noise.

    The best engines I've seen that combine existing datasets with procedural noise are Lutz Justen's and Ysaneya's. Lutz uses a subdivided cube (rather than an icosahedron) for the simplicity of regular grids, combined with Hoppe's Geometry Clipmaps, generating procedural data on the fly to increase surface detail. I'm not entirely sure about Ysaneya's engine (Flavien himself can provide a better description), but from his videos I *think* he generates chunks/patches on the fly in a separate thread, as opposed to the primitive-level caching of individual vertices in a clipmap. Both authors are extremely helpful members of this community, and can provide a far better description than I can.

    If you want to render extremely large datasets without the addition of procedural noise, then you may want to consider P-BDAM; however, that's probably over the top and involves a huge amount of pre-processing.

    A final word about using an icosahedron as a basis for subdivision. The advantage is that the 20 triangular faces are more uniform in size and will show less distortion than, say, a cube's faces. You could possibly represent the entire planet using 10 rectangular heightmaps by grouping triangular faces into pairs (i.e. into parallelograms of sorts). Apologies if this is what you already intended. The problems I encountered using an icosahedron: accessing/manipulating vertices in a triangle is less trivial than in a regular grid (i.e. a cube), and you may have texture addressing problems near the edges of chunks. Fingers_ managed to overcome these problems with awesome results using an icosahedron, so perhaps it's best to talk to him. Here is one final link about how you can represent a planet using an icosahedron.
    I've got a ton more links if you want them, just shout. Hope that helps. *EDIT* Sod it, here's a bunch of random links that may help (in no particular order): Planet Rendering: Part 1 - The Basics, Planet Rendering: Part 2 - Generating The Data, A Real-Time Procedural Universe, Part Three: Matters of Scale
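    For what it's worth, the crack-free tessellation I'm describing boils down to repeatedly splitting each triangle into four and pushing the new midpoints back onto the unit sphere. A minimal sketch (my own code, not taken from either engine):

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Push a vertex onto the unit sphere.
Vec3 normalized(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

Vec3 midpoint(Vec3 a, Vec3 b) {
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

// One subdivision pass: each triangle becomes four, and every new
// vertex is re-projected onto the sphere. Shared edges produce
// identical midpoints on both sides, so no cracks appear.
void subdivide(std::vector<std::array<Vec3, 3>>& tris) {
    std::vector<std::array<Vec3, 3>> out;
    for (auto& t : tris) {
        Vec3 ab = normalized(midpoint(t[0], t[1]));
        Vec3 bc = normalized(midpoint(t[1], t[2]));
        Vec3 ca = normalized(midpoint(t[2], t[0]));
        out.push_back({ t[0], ab, ca });
        out.push_back({ t[1], bc, ab });
        out.push_back({ t[2], ca, bc });
        out.push_back({ ab, bc, ca });
    }
    tris.swap(out);
}
```

    Starting from the 20 icosahedron faces, each pass quadruples the triangle count.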
  3. Solar system exploration

    Well, unless you want to render real datasets, you'll probably want to generate your own universe procedurally. You'll definitely want to check out Ysaneya's work on his procedural universe: Ysaneya's Journal. Sean O'Neil's webpage - he's the author of the article posted above on Gamasutra, and has attempted to adapt ROAM for planets. Lutz Justen's website - another amazingly impressive planet engine; Lutz uses Geometry Clipmaps applied to a planet surface, with some amazing results. I've got a huge bunch of other links relating to planet rendering, so if you need them, please just ask. Here is a tutorial on how to generate a procedural starfield. Although it's intended for Paintshop or some other package, you should be able to program it yourself. I know this was the inspiration for Ysaneya's starfield method. Best of luck... this is something I really intend to investigate in the future when I get some more time.
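    As I understand that tutorial, the core idea is just scattering random pixels with a brightness distribution biased towards dim stars. A hypothetical minimal version (function name and parameters are my own):

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Fill an 8-bit greyscale image with a simple procedural starfield:
// scatter random pixels, skewing brightness so dim stars vastly
// outnumber bright ones.
std::vector<std::uint8_t> makeStarfield(int width, int height,
                                        int starCount, unsigned seed) {
    std::vector<std::uint8_t> image(width * height, 0);
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> px(0, width - 1);
    std::uniform_int_distribution<int> py(0, height - 1);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    for (int i = 0; i < starCount; ++i) {
        float b = u(rng);
        // Squaring biases the distribution towards dim stars.
        std::uint8_t brightness =
            static_cast<std::uint8_t>(b * b * 255.0f);
        image[py(rng) * width + px(rng)] = brightness;
    }
    return image;
}
```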
  4. My first Demo Production

    Very nice, well done! It worked perfectly on my 1.5GHz machine with a 128MB Radeon 9800.
  5. chunked lod terrain

    As far as I understand, the reference implementation of Chunked LOD uses mesh simplification - it uses ROAM to simplify regular grids of patches. I.e. initially each patch is a regular grid and contains the same number of primitives. The ROAM approach is then used to simplify the patch (i.e. merge triangles together) within some maximum error tolerance (how this is specified I do not know - read the ROAM paper, it's freely available on the internet). I remember that in this thread, the original poster mentions that he took the code to simplify the terrain straight from Thatcher Ulrich's implementation. Perhaps you could try talking to him?
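    From memory of Ulrich's paper, the per-chunk error tolerance works roughly like this: each chunk stores a geometric error delta in world units, which is projected to a screen-space error in pixels and compared against a user-chosen threshold. A sketch under that assumption (not Ulrich's actual code):

```cpp
#include <cmath>

// Project a chunk's geometric error delta (world units) to a
// screen-space error rho (pixels), for a viewer at the given
// distance. fovY is the vertical field of view in radians.
float screenSpaceError(float delta, float distance,
                       float viewportWidth, float fovY) {
    // Perspective scaling factor.
    float k = viewportWidth / (2.0f * std::tan(fovY * 0.5f));
    return delta / distance * k;
}

// Split (i.e. refine to the chunk's children) when the projected
// error exceeds the tolerated pixel error.
bool shouldSplit(float delta, float distance, float viewportWidth,
                 float fovY, float maxPixelError) {
    return screenSpaceError(delta, distance, viewportWidth, fovY)
           > maxPixelError;
}
```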
  6. Geodesic sphere (all triangles equal)

    For examples of creating icosahedrons: Procedural Planets Part 1 - Structure and Platonic Solids. The first link is a tutorial showing you how to tessellate an icosahedron to achieve a sphere without cracks (i.e. for a planet). The second link is a description of all sorts of different solids that you could experiment with. It contains definitions in terms of vertices, so it's little more than a cut and paste.
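    If you just want the vertex definitions without visiting the links: the 12 icosahedron vertices are the corners of three mutually orthogonal golden rectangles, i.e. the cyclic permutations of (0, ±1, ±phi) where phi is the golden ratio. A sketch (normalized so the vertices sit on the unit sphere):

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// The 12 vertices of a unit-sphere icosahedron: cyclic permutations
// of (0, +/-1, +/-phi), scaled by 1 / sqrt(1 + phi^2) so each
// vertex has unit length.
std::array<Vec3, 12> icosahedronVertices() {
    const float phi = (1.0f + std::sqrt(5.0f)) * 0.5f;
    const float invLen = 1.0f / std::sqrt(1.0f + phi * phi);
    const float a = 1.0f * invLen;
    const float b = phi * invLen;
    return {{
        { 0,  a,  b }, { 0,  a, -b }, { 0, -a,  b }, { 0, -a, -b },
        {  a,  b, 0 }, {  a, -b, 0 }, { -a,  b, 0 }, { -a, -b, 0 },
        {  b, 0,  a }, { -b, 0,  a }, {  b, 0, -a }, { -b, 0, -a },
    }};
}
```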
  7. Quote:
    create high-res model
    create max LOD mesh, obtain bumpmap projecting high-res model normals into this one
    create lower LOD mesh, obtain bumpmap from high-res model

    This is effectively what I do in my terrain engine. I generate a single normal map for each terrain patch, from the max LOD mesh. I then use this normal map for all lower LOD meshes as well. I found that using a single normal map for all LOD levels reduces the effect of vertex popping between LOD transitions, and that's without geomorphing. As I read in some other thread a while ago, I think this is because the human eye is more sensitive to changes in light than it is to motion (the same principle as camouflage, I suppose). The disadvantage of this approach is that the resolution of your normal map is limited by the resolution of your max LOD mesh. For my purposes this wasn't really a problem, but for particularly large terrains, you're going to have to generate more/larger normal maps, which is obviously going to take more time. I haven't implemented geomorphing, so I don't know how difficult it is. Based on my limited experience, I would suggest trying normal/bump mapping first and then seeing how noticeable the vertex popping is. If you're unsatisfied, then implement morphing as well. Hope that helps in some way.
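    To make the "single normal map from the max LOD mesh" step concrete, here's a sketch of how such a map could be derived from the max-LOD heightfield using central differences (my own illustrative code, not my engine's actual implementation):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Build one normal map from the max-LOD heightfield of a patch.
// Binding this same map for every LOD level keeps the shading
// constant while only geometry changes, which is what hides popping.
std::vector<Vec3> buildNormalMap(const std::vector<float>& height,
                                 int size, float gridSpacing) {
    std::vector<Vec3> normals(size * size);
    auto h = [&](int x, int y) {
        // Clamp at the borders of the patch.
        x = x < 0 ? 0 : (x >= size ? size - 1 : x);
        y = y < 0 ? 0 : (y >= size ? size - 1 : y);
        return height[y * size + x];
    };
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            // Central-difference slopes in each direction.
            float dx = (h(x + 1, y) - h(x - 1, y)) / (2.0f * gridSpacing);
            float dy = (h(x, y + 1) - h(x, y - 1)) / (2.0f * gridSpacing);
            Vec3 n = { -dx, -dy, 1.0f };
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            normals[y * size + x] = { n.x / len, n.y / len, n.z / len };
        }
    }
    return normals;
}
```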
  8. Hi, I'm trying to manually sample a cube map, based on a normalized 3D vector from the centre of the cube. I can't think of any way to get D3D to do this for me without using a pixel shader. Here's my code; it's based on an NVidia OpenGL tutorial:

    void CubeCoords(D3DXVECTOR3 xyz, float& u, float& v)
    {
        float rx = xyz.x, ry = xyz.y, rz = xyz.z;
        float absX = fabsf(rx), absY = fabsf(ry), absZ = fabsf(rz);

        // Select the face whose axis has the largest magnitude
        // (>= rather than > so that ties still select a face).
        int majorAxis;
        if (absX >= absY && absX >= absZ)
            majorAxis = (rx <= 0) ? D3DCUBEMAP_FACE_NEGATIVE_X : D3DCUBEMAP_FACE_POSITIVE_X;
        else if (absY >= absZ)
            majorAxis = (ry <= 0) ? D3DCUBEMAP_FACE_NEGATIVE_Y : D3DCUBEMAP_FACE_POSITIVE_Y;
        else
            majorAxis = (rz <= 0) ? D3DCUBEMAP_FACE_NEGATIVE_Z : D3DCUBEMAP_FACE_POSITIVE_Z;

        float sc, tc, ma;
        switch (majorAxis)
        {
        case D3DCUBEMAP_FACE_POSITIVE_X: sc = -rz; tc = -ry; ma = rx; break;
        case D3DCUBEMAP_FACE_NEGATIVE_X: sc =  rz; tc = -ry; ma = rx; break;
        case D3DCUBEMAP_FACE_POSITIVE_Y: sc =  rx; tc =  rz; ma = ry; break; // standard convention; was sc = -rx, tc = -rz
        case D3DCUBEMAP_FACE_NEGATIVE_Y: sc =  rx; tc = -rz; ma = ry; break;
        case D3DCUBEMAP_FACE_POSITIVE_Z: sc =  rx; tc = -ry; ma = rz; break;
        case D3DCUBEMAP_FACE_NEGATIVE_Z: sc = -rx; tc = -ry; ma = rz; break;
        }

        u = ((sc / fabsf(ma) + 1) / 2) * cubeSize;
        v = ((tc / fabsf(ma) + 1) / 2) * cubeSize;
    }

    My code seems to work for the most part; however, it appears incorrect around the seams of the cube, and for the positive and negative y faces. My theory is that perhaps the seams are visible because texture clamping is not being used, or that perhaps OpenGL cube maps are not oriented in the same way as D3D cube maps. I can post screenshots to demonstrate the problem graphically if anyone would prefer. Many thanks in advance, I am very grateful.
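    One way to test the clamping theory, independent of the face-orientation question, is to clamp the face coordinates to texel centres so that a bilinear fetch never reads across a face edge. A hypothetical helper (coordinates in [0, 1] before scaling by the face size):

```cpp
#include <algorithm>

// Clamp a face coordinate in [0, 1] so a bilinear fetch stays
// inside the face: samples are taken at texel centres, so the
// valid range is [0.5/size, 1 - 0.5/size]. Without this, a lookup
// right on a cube edge blends with whatever lies past the face
// border, which shows up as a visible seam.
float clampToTexelCentre(float coord, int faceSize) {
    float half = 0.5f / static_cast<float>(faceSize);
    return std::min(std::max(coord, half), 1.0f - half);
}
```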
  9. Thanks again for your reply, Lutz - with your help, I think I've finally found a solution. It works as follows: for the current vertex, I generate four surrounding vertices on the planet surface in object space. I do this using spherical coordinates, and the closeness of the vertices depends on the resolution of the cube map. I use these vertices to generate four small triangles, each of which shares the current vertex. I then do what you do - average the surface normals of these triangles to get the normal of the current vertex in object space. It's a pretty long-winded method, and there's plenty of room for optimisation in my implementation, but the lighting looks great, even for cube maps of low resolution. And it'll work with any subdivision method or LoD scheme. Thanks again for your continuing help, Lutz - it's really helped.
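    The averaging step above can be sketched like this (illustrative code, not my actual implementation; c is the current vertex, n/e/s/w the four surrounding surface points generated from spherical coordinates):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalized(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Average the face normals of the four small triangles sharing the
// centre vertex c. Because the four neighbours come straight from
// the surface function, this never needs pointers into
// neighbouring patches, so it works with any LOD scheme.
Vec3 vertexNormal(Vec3 c, Vec3 n, Vec3 e, Vec3 s, Vec3 w) {
    Vec3 t0 = cross(sub(e, c), sub(n, c));
    Vec3 t1 = cross(sub(n, c), sub(w, c));
    Vec3 t2 = cross(sub(w, c), sub(s, c));
    Vec3 t3 = cross(sub(s, c), sub(e, c));
    return normalized({ t0.x + t1.x + t2.x + t3.x,
                        t0.y + t1.y + t2.y + t3.y,
                        t0.z + t1.z + t2.z + t3.z });
}
```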
  10. Thank you very much for your replies. Lutz, in reply to your hints: 1) Yes, I checked the cube heightmap both by applying it to the planet and by tessellating it in a paint application. I use normalized 3D coordinates as input to the noise function. 2) I tried what you said, and it does indeed look like a perfect sphere. I am pretty sure that using the pseudocode you posted would solve my problems; however, it is infeasible. My LOD scheme is based on chunked level-of-detail using triangular patches, so I don't know how to calculate the normals of vertices on the edge of patches, because I don't store pointers between neighbouring triangles that belong to different patches. How do you go about calculating the normals of vertices on the edge of patches? Another quick question, Lutz - what size is your cubic normal map? Also, once you have calculated the vertex normals by averaging the normals of the triangles, do you interpolate between these to fill in the normal map? paul, I think you've identified the problem - it fits exactly with the symptoms I am getting. Unfortunately, I know very little about bezier/spline surfaces, though I will certainly follow your recommendation if I can find any relevant articles. Good news - I think I've found a reasonable solution to the problem, although it is a bit of a hack. I first generate a spherical heightmap, and then generate a spherical normal map in object space from this heightmap. I then create a cubic normal map by sampling the spherical normal map, which gives the normals in object space with no visible cube seams. The problem is that the normal map appears a lot more pixellated, and I'm aiming to overcome this at the moment. Thanks again for your input.
  11. I am trying to implement lighting in my planet renderer, using dot3 normal mapping, storing the normal map as a cube texture. The terrain is generated procedurally using Perlin noise, so the normal map must also be generated procedurally. I generate the normal map as follows: 1) Generate a cubic heightmap, by sampling the noise function at every pixel of the cube map. 2) Generate the normal map from the heightmap (the resulting normal map will be in tangent space). 3) Convert the normal map to object space by rotation. I have successfully implemented stage 1 - to test the heightmap, I applied the texture to the planet, and all faces of the cube matched up seamlessly. However, I am having great problems with stage 2 - converting the heightmap to a normal map. I have managed to generate each face of the normal cube map, and they look correct; however, they do not match up seamlessly when applied to the planet. Here's a screenshot of what I mean: The problem is definitely not the heightmap. I think the problem may be that I need to reverse the way the normals are encoded for certain faces of the cube. For example, at the moment I do the following for EVERY face:

    color.red   = ((normal.x + 1) / 2) * 255.0f;
    color.green = ((normal.y + 1) / 2) * 255.0f;
    color.blue  = ((normal.z + 1) / 2) * 255.0f;

    And maybe I should be doing something like the following:

    color.red   = ((-normal.x + 1) / 2) * 255.0f;
    color.green = ((-normal.y + 1) / 2) * 255.0f;
    color.blue  = ((-normal.z + 1) / 2) * 255.0f;

    Does anyone else store normals in cube maps, or generate their own cubic normal maps procedurally? Any help you may be able to give me would be eternally appreciated - it's driving me nuts. Thanks in advance.
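    For reference, the biased encoding itself is the standard one; my suspicion (an assumption on my part, not something established in the thread) is that any per-face sign flips belong in stage 3's tangent-to-object-space rotation, using each face's own basis vectors, rather than in the encoding. A sketch of the round trip:

```cpp
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Standard biased encoding: map each normal component from [-1, 1]
// into [0, 255]. Note: any per-face sign flips should happen in the
// tangent-to-object-space rotation (an assumption, see above), not
// here.
Rgb encodeNormal(float nx, float ny, float nz) {
    return { static_cast<std::uint8_t>((nx + 1.0f) * 0.5f * 255.0f),
             static_cast<std::uint8_t>((ny + 1.0f) * 0.5f * 255.0f),
             static_cast<std::uint8_t>((nz + 1.0f) * 0.5f * 255.0f) };
}

// Inverse mapping, as done in a dot3 shader: [0, 255] -> [-1, 1].
float decodeComponent(std::uint8_t c) {
    return (c / 255.0f) * 2.0f - 1.0f;
}
```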
  12. Hi, I'm trying to texture a sphere (representing a planet) using cube maps. I think I understand the concepts fully, and I have it working; however, I can't get the seams of the cube to match up properly. I'll try to demonstrate - here's a screenshot of my application; it's just a sphere set to use 'lobby.dds', taken from a DirectX 8.1 sample. I've highlighted the seam of the front cube face, to aid clarity. Assuming the sample cube map is correct (as it is taken from a Microsoft sample), the problem must surely be the texture coordinates. The sphere is centred at the origin, and I set the texture coordinates (a 3D unit vector 'uvr') equal to the position vector of the vertex (vector 'xyz'), for example: D3DXVECTOR3 uvr = xyz; Does anyone have any idea what the problem might be? Alternatively, would anyone mind posting their own code for calculating the texture coordinates of each vertex? As always, your help is extremely appreciated. P.S. I've tried to keep the initial description brief; please ask if any code extracts would help.
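    For comparison, one common variant of the assignment above normalizes the position first (a trivial sketch with my own helper types; cube map sampling only cares about the direction, so this mainly guards against non-unit radii):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// For a sphere centred at the origin, the cube map lookup direction
// is just the vertex position, normalized. For a unit sphere this
// is a no-op; for any other radius it keeps the coordinates
// well-behaved.
Vec3 cubeTexCoord(Vec3 position) {
    float len = std::sqrt(position.x * position.x +
                          position.y * position.y +
                          position.z * position.z);
    return { position.x / len, position.y / len, position.z / len };
}
```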
  13. Thanks Lutz - that's an excellent idea; it solves all of the problems I mentioned. To avoid having to convert the light into tangent space for every vertex/pixel, I was hoping to precalculate the cube normal map such that the normals are in world space. Out of interest, Lutz - do you store your normal maps in world space, or do you use shaders to transform the light vector into tangent space?
  14. I've reached the stage of texturing my planet engine, and I wish to implement normal mapping. I have read the BumpEarth samples provided in both the DirectX and NVidia SDKs, and I have a couple of questions. My intention was to use a height map for the entire planet, apply it using spherical mapping, and then generate the normal map from the height map. However, I can see two problems: The normals will not be in world space, so the planet will be lit over its entire surface, when shadow should fall on the side of the planet facing away from the light. Could I solve this by transforming the light vector into object space? Also, is it undesirable to use a single, very large texture for the entire planet? I understand that I could apply a smaller texture to each patch of terrain, but how would I calculate the normals of vertices on the border of patches, without knowing the normals of their neighbouring vertices in other patches? Thanks, as always, in advance.
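    Regarding the first question: yes, transforming the light direction into object space once per frame is a standard trick, and for a pure rotation the inverse is just the transpose. A sketch, assuming a row-major 3x3 object-to-world rotation (my own helper, not from the SDK samples):

```cpp
#include <array>

// Transform a light *direction* from world space into object space,
// so an object-space normal map can be lit without per-vertex work.
// m is the object's 3x3 rotation (row-major, object -> world); for
// a pure rotation, the inverse is the transpose, so we multiply by
// the transpose of m.
std::array<float, 3> lightToObjectSpace(const std::array<float, 9>& m,
                                        const std::array<float, 3>& l) {
    return { m[0] * l[0] + m[3] * l[1] + m[6] * l[2],
             m[1] * l[0] + m[4] * l[1] + m[7] * l[2],
             m[2] * l[0] + m[5] * l[1] + m[8] * l[2] };
}
```

    Note this only handles rotation; translation and non-uniform scale would need the full inverse.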
  15. Hmm, you may have a valid point. I don't think memcpy() copies memory back to system memory; however, it is an operation performed by the CPU, which means the cache buffer would have to be copied in chunks from VRAM to registers on the CPU, and then back to a different area of VRAM. I must admit, I hadn't considered this. A memcpy() from VRAM to VRAM should not involve system RAM, however, and so it may still be fairly quick. I *think* the main cost of a memcpy() from system RAM to VRAM is in the overhead of setting up the transfer, rather than the transfer itself; if system RAM is bypassed, this overhead should be avoided. Perhaps it's worth starting another thread to discuss this. What I do know is that in Yann's description of his cache system, he keeps the cache in VRAM - I wonder how he manages it.