Everything posted by yonisi

  1. OK, I see. Thanx for the fast answers! So basically you both suggest going with separate road rendering (i.e. separate geometry) as the best method. In general, what you say sounds very logical, because: 1) I do hold all the height data in a heightmap, as after all this is for a flight sim and full physics must work (interaction of airplanes, vehicles, structures, trees etc. with the terrain), so flattening areas shouldn't be such a big deal (I already have flattening code). 2) The road information that I'm planning to add to the terrain originates from real data, and since the terrain is also based on real data (all the terrain projection is already taken care of), the roads should be rendered where real roads exist, so I don't expect many mismatches. However, what worries me most (and why I considered this hard to achieve) is the apparent need to match road vertices with terrain vertices. Since we are talking about a virtual terrain with limited resolution (in my case, a 62.5m mesh for now), I'm worried that I will have to place road vertices wherever the terrain vertices are positioned, as well as at every place the road changes. In other words, I will have to make sure all the geometry is "100% parallel" vertically at all points, or I will see roads either hovering above the terrain or sinking into it. I think this is the greatest challenge of rendering roads as separate 3D objects. Thanx again for the answers and help!
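     For illustration only, here is a minimal, hedged C++ sketch of how road vertices could be snapped onto a heightmap-based terrain with bilinear interpolation so they neither hover nor sink. The heightmap layout, cell size and function names are assumptions for this sketch, not code from the engine discussed above.

     // Hypothetical sketch: snap road vertices onto a heightmap-based terrain.
     // Assumes a row-major float heightmap with 'cellSize' metres between samples.
     #include <vector>
     #include <cmath>
     #include <algorithm>

     struct Heightmap {
         std::vector<float> data; // size = width * height
         int width = 0, height = 0;
         float cellSize = 62.5f;  // metres per cell (matches the 62.5m mesh above)

         float At(int x, int z) const {
             x = std::clamp(x, 0, width - 1);
             z = std::clamp(z, 0, height - 1);
             return data[z * width + x];
         }

         // Bilinear height lookup at a world-space XZ position.
         float SampleHeight(float worldX, float worldZ) const {
             float gx = worldX / cellSize;
             float gz = worldZ / cellSize;
             int x0 = (int)std::floor(gx), z0 = (int)std::floor(gz);
             float fx = gx - x0, fz = gz - z0;
             float h00 = At(x0, z0),     h10 = At(x0 + 1, z0);
             float h01 = At(x0, z0 + 1), h11 = At(x0 + 1, z0 + 1);
             float hx0 = h00 + (h10 - h00) * fx;
             float hx1 = h01 + (h11 - h01) * fx;
             return hx0 + (hx1 - hx0) * fz;
         }
     };

     struct RoadVertex { float x, y, z; };

     // Conform road geometry to the terrain, with a tiny lift to avoid Z-fighting.
     void SnapRoadToTerrain(std::vector<RoadVertex>& roadVerts, const Heightmap& hm)
     {
         const float bias = 0.05f; // small vertical offset, tune per scene
         for (RoadVertex& v : roadVerts)
             v.y = hm.SampleHeight(v.x, v.z) + bias;
     }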
  2. Hi, I've been working on a terrain engine for some time now, and I would like to add some roads to my terrain. I'm not sure yet which direction to look in to get the best result. My terrain is for a flight simulator, so it covers a pretty large area, 1024x1024 KM in size. The heightmap size is 16K, so I have a 62.5m mesh resolution at the highest tessellation level (which is good enough). The main problem is that it's pretty hard for me to get down to details, as the rendering is done in large chunks (based on the Nvidia DX11 terrain tessellation sample - shown here: https://developer.nvidia.com/dx11-samples and also the PDF here: http://fliphtml5.com/lvgc/xhjd/basic) and all the texture coordinates are generated from the grid in the shaders, so I hardly have any control over "small items". Not sure if that explains my problem well enough, but it's like I can't just tell the terrain: "Draw a road segment here". The way I see it, I have more or less 4 options (unless you can suggest more):

     1. Try this sample from the Humus engine, which renders roads on terrain using the stencil buffer and box volumes: http://www.humus.name/index.php?page=3D&ID=84 - I think this is my preferred choice, but for some reason I wasn't able to get anything rendered at all on my terrain using the code from their sample (at least after translating it to my own engine; I must be doing something wrong). Do you also think this is the best method to draw roads on terrain? Pros: Seems simple and efficient, using the stencil buffer that the HW gives for free. Cons: I can't get it to work.

     2. Render 3D models of roads on the terrain. Pros: Minimal rendering, no interference with the terrain rendering itself. Cons: Hard to get correct results with tessellated terrain, potential Z-buffer issues with the terrain. I think that's the worst solution.

     3. Simple texture mapping - holding huge textures to map roads onto the entire terrain. Since I basically only need highways and real roads (sand roads, small streams and the like will be covered by 4m/pixel photoreal textures), it may not be that much, but it will still probably cost some GBs of textures to cover the entire terrain area with sufficient resolution (roads as textures should have at least 4m/pixel). Pros: Relatively easy to render, no Z-buffer issues. Cons: Large textures to hold and manage, and branching in the terrain's pixel shader to check whether we are on a road pixel (a rough sketch of that lookup follows below).

     4. Texture mapping, but with RTT. Instead of holding constant textures on disk and loading them when necessary, render the roads into textures in memory and hold only the necessary data that way. By using small tiles (for example 1x1 KM) I think it's possible to map only the tiles that contain roads; all other tiles will be set with 0-mapping, so the shader will fetch a "no roads" texture and won't map anything. Pros: Texture mapping without holding any real textures on disk. Cons: Requires RTT, probably including render-into-mipmap stages, may be a bit complicated to manage, and areas with many roads may use a lot of memory.

     I still would like to get the Humus sample to at least work in my engine, unless you think it has other issues which make it not that great for broad usage on a large rendered terrain. Thanx!
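     To make option 3 concrete, here is a minimal, hypothetical HLSL sketch of a road-mask lookup in a terrain pixel shader. The resource names, registers and the world-to-UV scale are assumptions for illustration, not code from the engine above.

     // Hypothetical sketch of option 3: blend a road texture over the terrain colour.
     Texture2D    gRoadMask    : register(t10); // coverage/alpha for roads over the whole terrain
     Texture2D    gRoadAlbedo  : register(t11); // road surface colour
     SamplerState gLinearClamp : register(s0);

     float3 ApplyRoads(float3 terrainColor, float2 worldPosXZ)
     {
         // Assumed mapping: the terrain spans gTerrainSizeMeters and the mask covers it once.
         const float gTerrainSizeMeters = 1024000.0f; // 1024 KM, as in the post
         float2 maskUV = worldPosXZ / gTerrainSizeMeters;

         float roadAlpha = gRoadMask.Sample(gLinearClamp, maskUV).r;

         // The branch mentioned in the post: skip the road fetch for non-road pixels.
         [branch]
         if (roadAlpha > 0.01f)
         {
             // Road detail texture tiled at a much higher frequency.
             float3 roadColor = gRoadAlbedo.Sample(gLinearClamp, worldPosXZ / 4.0f).rgb;
             terrainColor = lerp(terrainColor, roadColor, roadAlpha);
         }
         return terrainColor;
     }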
  3. Thanx a lot Hodgman!! I'll dive into those details to find what I need!
  4. Hi, I have a terrain engine that uses a lot of textures, including texture arrays and indexing. I currently use full-size RAW textures to index into the texture arrays. RAW textures are relatively large, as they hold the full uncompressed data. My question is as follows. Let's talk about 8-bit indexing, so 0-255 only, and let's assume I need as large a size as possible, so I'm using 16384-sized textures: a RAW file of size 16384^2 at 8-bit depth = 256MB; a DXT1 file of size 16384^2 (8 bits per channel on 3 channels before compression), without mipmapping = 128MB. The cost of course is accuracy, understood. But assuming I don't need the whole 0-255 range and can get along with say 50 indices or so, would it be fine to use DXT1 to store an index? What I mean specifically: let's say I need to be able to specify 50 different indices. Would it be OK to encode the values into a DDS DXT1 texture as 0, 5, 10, 15, 20, etc., up until 220, and in the shader sample that texture and translate the value back to an exact index? Can I do that? The motivation is of course memory size: with 128MB I can create 3 "index textures", instead of the 768MB it would take to hold the same data as accurate RAW.
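     For what it's worth, here is a tiny, hypothetical HLSL sketch of the quantised decode described above (values stored in steps of 5, rounded back to an index on sampling). The resource names are assumptions, and whether the block-compression error actually stays below half a step for a given map is something that would have to be verified on the real data.

     // Hypothetical sketch: recover a small integer index from a DXT1-compressed channel
     // where indices were stored as value = index * 5 (0, 5, 10, ... in the 0-255 range).
     Texture2D    gIndexMap   : register(t0);
     SamplerState gPointClamp : register(s0); // point sampling, so texel values are not blended

     uint DecodeIndex(float2 uv)
     {
         // Sample returns 0..1; convert back to 0..255 and snap to the nearest step of 5.
         float raw = gIndexMap.Sample(gPointClamp, uv).r * 255.0f;
         return (uint)round(raw / 5.0f);   // tolerates a compression error of up to ~2.5
     }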
  5. Hi, I have a terrain rendering demo that runs and renders OK, but when I run it in Debug mode (device debug flag on and a Visual Studio debug build), the device's Release() method crashes. I don't know why, and I couldn't find a way to track down the reason. Is there something that can tell me why the device fails to Release? I don't know what exactly to show you, so I'll just post the app's final destructor, which is trivial:

     D3DApp::~D3DApp()
     {
         SAFE_DELETE( mBackBuffer );
         ReleaseCOM( mSwapChain );
         ReleaseCOM( md3dImmediateContext );
         if ( mDebug )
         {
             mDebug->ReportLiveDeviceObjects( D3D11_RLDO_DETAIL );
         }
         ReleaseCOM( mDebug );
         ReleaseCOM( md3dDevice );   // --> Crashing here
     #if defined(DEBUG) || defined(_DEBUG)
         _CrtSetReportMode( _CRT_ERROR, _CRTDBG_MODE_DEBUG );
         _CrtDumpMemoryLeaks();
     #endif
     }

     ReleaseCOM( x ) is a simple SAFE_RELEASE kind of macro, i.e. if ( x ) { x->Release(); x = nullptr; }. SAFE_DELETE is the same but deletes an object (mBackBuffer is a wrapper TextureHandle class that wraps all my textures). I also used our SVN server to try to track down where the problem started, but it's a very slow process, as running a debug session here takes ~6-7 minutes each time, and I saw it goes a long way back down the SVN history, so it's not really practical to find where the problem started. However, I'm almost sure there is a better way to find out why D3D can't shut down properly.
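     Not code from the engine above, but one generally useful aid when chasing this kind of teardown problem: giving D3D objects debug names, so that the ReportLiveDeviceObjects output (and the debug layer's messages) identify which objects are still alive. A minimal sketch, assuming the debug layer is already enabled:

     // Sketch: name D3D11 objects so the debug layer's live-object report is readable.
     // Requires linking dxguid.lib for WKPDID_D3DDebugObjectName.
     #include <d3d11.h>
     #include <d3dcommon.h>
     #include <cstring>

     void SetDebugName(ID3D11DeviceChild* obj, const char* name)
     {
         if (obj && name)
             obj->SetPrivateData(WKPDID_D3DDebugObjectName,
                                 (UINT)std::strlen(name), name);
     }

     // Usage (hypothetical resource): SetDebugName(myHeightmapSRV, "Terrain heightmap SRV");
     // With names set, ReportLiveDeviceObjects(D3D11_RLDO_DETAIL) prints them, which makes it
     // much easier to spot the un-released reference that keeps the device alive at shutdown.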
  6. DX11 Planar Reflections issue

     OK OK, I think I understand what you mean. I'll try to improve. Thanx for all the advice and time!
  7. Hi, I have a terrain engine where the terrain and water are on different grids, and I'm trying to render planar reflections of the terrain into the water grid. I read some web pages and docs and also tried to learn from the RasterTek reflections demo and the small water bodies demo. What I do is as follows:

     1. Create a reflection view matrix - technically I ONLY flip the camera position in the Y direction (positive Y is up) and add 2 * waterLevel to it. Then I update the view matrix and save that matrix for later. The code:

     void Camera::UpdateReflectionViewMatrix( float waterLevel )
     {
         mBackupPosition = mPosition;
         mBackupLook = mLook;
         mPosition.y = -mPosition.y + 2.0f * waterLevel;
         //mLook.y = -mLook.y + 2.0f * waterLevel;
         UpdateViewMatrix();
         mReflectionView = View();
     }

     2. I render the terrain geometry to a 512x512 render target using the reflection view matrix and opposite culling (my terrain uses front culling by nature, so I use back culling for the reflection render pass). Let me say that I checked with the graphics debugger and the reflection render target looks "OK" at this stage (picture attached). I don't know whether the fact that the terrain shows up only in the top area of the texture is expected or not, but it seems OK.

     3. Render the reflection texture onto the water using projective texturing - I hope this step is OK code-wise. Basically I send the shader the WorldReflectionViewProj matrix built from the matrix created at step 1 in order to use it for the projective texture coordinates; I then convert the position in the DS (water and terrain are drawn with tessellation) to the projective tex coords using that WorldReflectionViewProj matrix, and then I sample the reflection texture after setting up the coordinates in the PS. Here is the code:

     // Send the ReflectionWorldViewProj matrix to the shader:
     XMStoreFloat4x4(&mPerFrameCB.Data.ReflectionWorldViewProj,
                     XMMatrixTranspose( ( mWorld * pCam->GetReflectedView() ) * mProj ));

     // Setting up the projective tex coords in the DS:
     Output.projTexPosition = mul(float4(worldPos.xyz, 1), g_ReflectionWorldViewProj);

     // Setting up the coords in the PS and sampling the reflection texture:
     float2 projTexCoords;
     projTexCoords.x =  input.projTexPosition.x / input.projTexPosition.w / 2.0 + 0.5;
     projTexCoords.y = -input.projTexPosition.y / input.projTexPosition.w / 2.0 + 0.5;
     projTexCoords += normal.xz * 0.025;
     float4 reflectionColor = gReflectionMap.SampleLevel(SamplerClampLinear, projTexCoords, 0);
     texColor += reflectionColor * 0.25;

     I'll add that when compiling the PS I get a warning on those divisions by input.projTexPosition.w about a possible float division by 0. I tried to add some offset or some minimum to the divisor, but that still didn't solve my issue. Here is the problem itself: at relatively flat view angles I see correct reflections (or at least so it seems), but as I pitch the camera down, I see artifacts which I have no idea where they are coming from. I'm culling the terrain in the reflection render pass when it's lower than the water height (I have heightmaps for that). Any help will be appreciated, because I don't know what is wrong or where else to look.
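     As a point of comparison only (not the code above): a common way to build the planar-reflection view is to reflect the whole camera view matrix about the water plane with DirectXMath's XMMatrixReflect, rather than flipping just the camera position. A minimal sketch, with the variable names assumed:

     // Sketch: reflection view matrix from a reflection plane (DirectXMath).
     #include <DirectXMath.h>
     using namespace DirectX;

     // 'view' is the normal camera view matrix, 'waterLevel' the plane height (Y up).
     XMMATRIX BuildReflectionView(FXMMATRIX view, float waterLevel)
     {
         // Plane y = waterLevel  ->  0*x + 1*y + 0*z - waterLevel = 0
         XMVECTOR waterPlane = XMVectorSet(0.0f, 1.0f, 0.0f, -waterLevel);
         XMMATRIX reflect    = XMMatrixReflect(waterPlane);

         // Reflect world space first, then apply the ordinary view
         // (row-vector convention, as in DirectXMath): reflectedView = reflect * view.
         return XMMatrixMultiply(reflect, view);
     }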
  8. DX11 Planar Reflections issue

     OK, it actually seems like 1) and 2) are caused by the same issue. As I stated in the post above, it's a physical issue with my terrain and water grids: because the reflection render pass depends on the water level I send to it, the reflection will apparently only match when that level really is the height of the reflecting surface (in this case the water grid). And because my water heightmap "follows the terrain" in areas where the water height is below the terrain height, I get different readings of the "current water level" at different camera positions. That effect is for sure the cause of 2), and I suspect also of 1), because when I hover with the camera above the water surface I don't get such anomalies. Issue 4) I solved by adding an offset to the w component of the projected tex position, i.e. I did:

     projTexCoords.x =  input.projTexPosition.x / (input.projTexPosition.w + 0.1) / 2.0 + 0.5;
     projTexCoords.y = -input.projTexPosition.y / (input.projTexPosition.w + 0.1) / 2.0 + 0.5;

     I added a 0.1 factor. That probably distorts the reflection a bit, but it solves the edge issue I had (it can be clearly seen in the vid at ~55 seconds, where the edges are "cut"; make sure to watch at 1080 resolution). For 3), I'm not sure I understand what you mean by "too deep" - does it mean the reflection "sticks out" too far into the water? If yes, then I can see the 0.1 factor I added helps with that too (i.e. the reflection is more "attached" to the water edges this way). Also, how can I flatten the projection matrix? Sorry if this is a stupid question, but I'm still a "starter" with graphics, so I'm not familiar with all the terms. Thanx a lot for all the help! But I wouldn't want you to try those samples I linked just to help me on this - you are a great help already and helped me with most of the issues without such painful effort.
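     Only as a hedged alternative to the constant +0.1 offset (not the code above): the divisor can be clamped instead of shifted, so distant geometry is not distorted while the division-by-zero warning still goes away.

     // Sketch: guard the perspective divide instead of biasing it by a constant.
     float2 ProjTexCoordsFromClip(float4 projTexPosition)
     {
         float w = max(projTexPosition.w, 1e-4f);  // never divide by ~0
         float2 uv;
         uv.x =  projTexPosition.x / w * 0.5f + 0.5f;
         uv.y = -projTexPosition.y / w * 0.5f + 0.5f;
         return uv;
     }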
  9. DX11 Planar Reflections issue

     OK, for issue "2" above, I think I know what the problem is, and it's actually not related to graphics but to the physics of my terrain. My water heightmap follows the terrain pretty closely, so if there is a hill on the terrain, the water will climb that hill even though the water is covered by the hill anyway. Hence I'm getting a bad reading of the "water level" that goes up and down, and that explains the bouncing. So the reason for issue "2" is known.
  10. DX11 Planar Reflections issue

     Sure! My sources were:

     RasterTek reflection tutorial: http://rastertek.com/dx11tut27.html
     RasterTek small water bodies tutorial: http://rastertek.com/tertut16.html
     This blog here: http://archive.gamedev.net/archive/reference/articles/article2138.html
     And also Habib's lake water tutorial: https://habibs.wordpress.com/lake/

     But honestly, I think I did some kind of a mix of all of them. For example, the RasterTek tutorial suggests setting the Look.y of the reflection camera to the position.y, but for me that doesn't work. Current state: if I count the problems, it seems I have 4 which I can't explain (timing references are to the vid above):

     1. At the start you can see that as I pitch the camera down, the reflection of the sky colour disappears and the natural colour of the water takes its place.
     2. After that you can see how getting closer and further causes the reflections to "bounce" in a crazy way; I have no idea why this is happening.
     3. At ~40 seconds I show how the reflection of the terrain is somewhat off, in that the textures don't match the terrain rendered above the water. I think the geometry is OK though, so it could be something with my terrain texture coords; maybe I should flip them for reflections or something...
     4. At ~55 seconds I show an edge issue that I get at flat angles when pitching the camera down. Pitching back up so that Look.y is slightly above 0, those edge issues are gone. Something else I don't understand.

     Realizing that my reflection texture is 512x512 in size, I'm trying to set the projection matrix to a 1:1 aspect ratio instead of the default ratio (full-screen Width:Height), but when I do that I get weirder behavior, like reflections being way off (of course I'm putting back the correct projection matrix before rendering to the back buffer). So I wonder if I should even touch that at all... Cheers!
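     For what it's worth, this is roughly what a square-aspect projection for a 512x512 reflection target could look like with DirectXMath; the FOV and clip planes below are placeholders, not values from the engine above, and whether a 1:1 aspect is even desirable here is exactly the open question in the post.

     // Sketch: a 1:1 aspect projection matrix for rendering into a square reflection target.
     #include <DirectXMath.h>
     using namespace DirectX;

     XMMATRIX BuildReflectionProj()
     {
         const float fovY      = XM_PIDIV4; // placeholder vertical FOV
         const float aspect    = 1.0f;      // 512x512 target -> square aspect
         const float nearPlane = 0.1f;      // placeholder clip planes
         const float farPlane  = 100000.0f;
         return XMMatrixPerspectiveFovLH(fovY, aspect, nearPlane, farPlane);
     }

     // Whatever projection is used when rendering the reflection target must also be the one
     // used to build the projective texture coordinates, otherwise the lookup will not line up.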
  11. DX11 Planar Reflections issue

     OK, the current status is much better after updating the camera's Look.y vector to always be positive and changing the sampler's address mode to BORDER with the border colour set to 0 - no more such anomalies. However, something is still somewhat off: I seem to get wrong reflections (i.e. the terrain doesn't look exactly the same above ground and in the reflection, like there is some shift somewhere).
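     For completeness, a minimal D3D11 sketch of the kind of border-clamped sampler described above; the filter choice is an assumption.

     // Sketch: a sampler that clamps to a black border, so projective lookups that fall
     // outside the reflection texture return 0 instead of smearing edge texels.
     #include <d3d11.h>

     HRESULT CreateBorderSampler(ID3D11Device* device, ID3D11SamplerState** outSampler)
     {
         D3D11_SAMPLER_DESC sd = {};
         sd.Filter   = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
         sd.AddressU = D3D11_TEXTURE_ADDRESS_BORDER;
         sd.AddressV = D3D11_TEXTURE_ADDRESS_BORDER;
         sd.AddressW = D3D11_TEXTURE_ADDRESS_BORDER;
         sd.BorderColor[0] = 0.0f; // black border = "no reflection"
         sd.BorderColor[1] = 0.0f;
         sd.BorderColor[2] = 0.0f;
         sd.BorderColor[3] = 0.0f;
         sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
         sd.MaxLOD = D3D11_FLOAT32_MAX;
         return device->CreateSamplerState(&sd, outSampler);
     }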
  12. DX11 Planar Reflections issue

     It seems like bad sampling outside the texture range, because if I change the sampler to WRAP instead of CLAMP I get a different kind of artifact (wide lines due to WRAP, instead of the vertical lines from sampling the border - I guess it's the borders). The remaining problems though: 1. Why am I sampling outside the texture at all? 2. I tried to set the border colour to 0 for the CLAMP sampler, but I still get the same artifacts.
  13. Hi all, I'm working on a terrain engine with DX11. The engine is supposed to render, every frame, a surface that represents ~180x180 KM, and the texturing should look good from high above but also close to the ground (the engine is part of a game and the camera can basically come down to 0 altitude).

     More regarding texturing: since this is a replacement for an existing engine (which uses a much lower mesh resolution on DX9), the textures already exist and there are basically ~3000 of them. On DX9 the engine issues a draw call for every texture, which results in ~1000 draw calls per frame for the terrain (not all textures are usually used per frame, and of course many are tiled more than once as the mesh is huge). With DX11 I'm storing all those ~3500 in 2 texture arrays and using very few draw calls (thanks to the tessellation technique, the texturing can be done with 1 draw call). To select which textures are tiled where, I have a blendmap, a 1024x1024 16-bit RAW file that I sample in the pixel shader, and according to the value I choose which of the 2 arrays to use and which texture to tile at that UV coordinate.

     Now, since I want to fight the huge number of textures that would require a lot of VRAM (assuming I need at least 3K textures at, say, an acceptable resolution of 1024x1024 DXT1, that's already more than 2GB I'd have to store in VRAM just for the terrain art textures), my idea for texturing was to use another set of textures in a multi-texture fashion to tile most of the terrain and leave only special areas that require a special look to be tiled with more unique textures. But now everything got complicated as I learned about Tiled Resources. The thing is that I simply don't get how EXACTLY they work. I have an idea after reading what I could on the web, and I even have the source code of the Mars rendering demo linked at the bottom of this page: https://blogs.windows.com/windowsexperience/2014/01/30/directx-11-2-tiled-resources-enables-optimized-pc-gaming-experiences/ I also saw this PPT (didn't hear the lecture though): https://channel9.msdn.com/Events/Build/2013/4-063

     So I simply want to ask:

     1. Given what I described, and assuming my engine eventually renders real areas of the world, would Tiled Resources be my best choice, even allowing me to increase the number of textures in use? Because basically the dream of the artists is to use unique satellite imagery for textures...
     2. Currently the engine is designed for D3D 11.0 only; we didn't intend to go to 11.2, which would require users to have at least Windows 8.1. Is it worth it?
     3. From a coding POV, how complicated is it to handle Tiled Resources? As I understand it, the idea is to let the app specify which textures are needed for the current frame, so that only the needed subresources (i.e. the needed textures, and only the necessary mip levels) are uploaded to VRAM. Does that mean I will need app-side (C++ for me) code that tells the rendering code which textures I need and which mip levels for each texture, or is it something "automatic"? (See the small feature-check sketch after this post.)
     4. I do have the Mars example from Microsoft which I linked above, and I'm going to inspect it more deeply (they are using a 16K^3 - 1GB texture for the rendering, but they say they use only 16MB of VRAM each frame, and I could verify with Process Explorer that the app uses only ~80MB of VRAM in total). But I don't know if it really represents the same complexity that I have, i.e. many different textures, as there it's only 1 large texture, if I understand it correctly.

     Any help would be welcome.
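     Not from the engine above, just a hedged, minimal sketch of the very first step with Tiled Resources on D3D 11.2: checking which tier the device supports via CheckFeatureSupport. The actual residency management (a tile pool plus UpdateTileMappings calls driven by app-side knowledge of what the frame needs) is the non-automatic part and is not shown here.

     // Sketch: query Tiled Resources support on a D3D 11.2-capable device.
     #include <d3d11_2.h>

     D3D11_TILED_RESOURCES_TIER QueryTiledResourcesTier(ID3D11Device* device)
     {
         D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts1 = {};
         if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                                   &opts1, sizeof(opts1))))
         {
             return opts1.TiledResourcesTier; // NOT_SUPPORTED, TIER_1 or TIER_2
         }
         return D3D11_TILED_RESOURCES_NOT_SUPPORTED;
     }

     // If a tier is supported, the large texture is created with D3D11_RESOURCE_MISC_TILED and
     // the application maps/unmaps individual tiles against a tile pool with
     // ID3D11DeviceContext2::UpdateTileMappings - i.e. the app decides what is resident.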
  14. Thanx everyone for the answers! MJP - I'm going to go with your advice then and use texture arrays. I learned DX11 coding from Frank Luna's book and he always warned about dynamic branching in shaders, but since I've already seen some in code that has been working for years without notable performance issues, I guess it won't be such a big deal. FWIW, I'm working on this engine for DX11 and PC only (eventually this engine should be integrated into an existing game engine going through an upgrade). The specific textures I'm referring to here actually won't use any mipmapping, because they hold mapping data, not art (e.g. watermaps will hold 8-bit colour values that will be used as depth maps plus some other data related to water). I also plan to use the same mechanics for my blendmaps. Currently I have 8192x8192 textures that hold texture IDs and alpha values, which are used in the pixel shader to select which texture IDs to blend and with which alpha for each; the result is effectively a multitexture operation (roughly as sketched below). Regarding texture sizes, limits and performance, I'm ALREADY using a couple of 8192x8192 blendmaps in the current version and I haven't noticed any performance issues, not even with my previous laptop that had a mobile GTX 750 card. This engine isn't for any kind of console or mobile device; it's for PC only, and most users of the current version of the game already have decent hardware, so I don't think there will be any issues with using 4096x4096 textures.
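     A hedged, minimal HLSL sketch of the kind of blendmap-driven array lookup described above (texture IDs plus an alpha packed in a map, then blended in the pixel shader). The packing convention, registers and resource names are assumptions, not the engine's actual format.

     // Sketch: pick two layers from a Texture2DArray using IDs/alpha stored in a blendmap.
     Texture2DArray gTerrainLayers : register(t0);
     Texture2D      gBlendMap      : register(t1); // assumed packing: r = id0, g = id1, b = blend
     SamplerState   gPointClamp    : register(s0); // point-sample the IDs so they don't interpolate
     SamplerState   gAnisoWrap     : register(s1);

     float3 ShadeTerrain(float2 blendUV, float2 tileUV)
     {
         float3 packed = gBlendMap.Sample(gPointClamp, blendUV).rgb;
         float  id0    = round(packed.r * 255.0f);   // array slice indices
         float  id1    = round(packed.g * 255.0f);
         float  blend  = packed.b;                   // 0..1 weight between the two layers

         float3 c0 = gTerrainLayers.Sample(gAnisoWrap, float3(tileUV, id0)).rgb;
         float3 c1 = gTerrainLayers.Sample(gAnisoWrap, float3(tileUV, id1)).rgb;
         return lerp(c0, c1, blend);
     }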
  15. Hi everyone, I have a terrain grid which is pretty large, and I'd like to store some data in textures to do mapping for me. For example, I'm going to have water-body data made up of 64 DDS textures, where each texture is 4096x4096. Now, I don't want to hold all 64 textures in memory all the time - basically I could, but VRAM will probably be crowded with other stuff and I want to save room; also, there are other sets I'll need to hold which may be even larger. So I decided that at any given moment I'd like to hold a 3x3 block of those 64 textures in VRAM, use the camera position to decide which block I'm in right now, and render those 3x3 tiles accordingly. I can think of 2 ways of doing what I need:

     1. Use a texture array - I already know how to use this and it would be pretty easy to manage, I think. But my fear is that I won't have a way to decide which array index fits each pixel WITHOUT a couple of if/else pairs for dynamic branching in the pixel shader, which AFAIK isn't such a good idea. If you think a couple of dynamic branches isn't THAT bad, then I may just do that; it would be easier for me. (A sketch of a branch-free index calculation follows below.)

     2. Use a texture atlas in memory - this solution has the advantage that I can directly translate world position in the pixel shader to texture coordinates and sample, but I'm not sure how to load 3x3 DDS textures into 1 big atlas that is 3x3 times the size of each texture. I'm especially confused about how to order the textures correctly in the atlas, as I'm not sure it's ordered the same way as loading into an array.

     If option #2 is doable, I think going with it would be easier than translating world position to array indices. Thanx for any help.
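     Only as a hedged illustration of option 1: the 3x3 slice index can be computed arithmetically from the world position, without if/else chains. The tile size, block origin and resource names below are assumptions.

     // Sketch: branch-free mapping from world XZ to a slice of a 3x3 Texture2DArray.
     Texture2DArray gWaterTiles  : register(t0);
     SamplerState   gLinearClamp : register(s0);

     // Assumed: gBlockOriginWorld is the world-space min corner of the current 3x3 block,
     // and each tile covers gTileSizeWorld metres (both fed from a constant buffer).
     cbuffer TileCB : register(b0)
     {
         float2 gBlockOriginWorld;
         float  gTileSizeWorld;
         float  _pad;
     };

     float4 SampleWaterData(float2 worldXZ)
     {
         float2 local  = (worldXZ - gBlockOriginWorld) / gTileSizeWorld; // 0..3 across the block
         float2 tile   = clamp(floor(local), 0.0f, 2.0f);                // which tile (0..2, 0..2)
         float  slice  = tile.y * 3.0f + tile.x;                         // array slice 0..8
         float2 tileUV = frac(local);                                    // UV inside the tile
         return gWaterTiles.Sample(gLinearClamp, float3(tileUV, slice));
     }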
  16. Issue resolved. My error was the order in which I took the worldPos.xz that is used to fetch the texcoords. The original order was:

     1. Bilerp worldPos
     2. Displace worldPos by FFT
     3. Assign (the already displaced) worldPos.xz to the DS output.worldPos.xz (which is used as texcoord in the PS)

     The correct order is:

     1. Bilerp worldPos
     2. Assign (BEFORE displacement) worldPos.xz to the DS output.worldPos.xz (which is used as texcoord in the PS)
     3. Displace worldPos by FFT

     Almost drove myself crazy over this.
  17. Hi, as part of my terrain project I'm trying to render ocean water. I have a nice FFT compute shader implementation which outputs a nice 512x512 heightmap (it can also output a gradient map, but I disabled it as there are issues with it). The FFT code is from the Nvidia FFT ocean sample for DX11. Now, here is the weird thing. I have 2 different methods that render the water grid, both using the same FFT heightmap SRV (the SRVs are members of a dedicated resource manager class), and both render the FFT heightmap in exactly the same way. Although the grids are different, I eventually made the FFT map tile in a way where the scales are almost 1:1. The rendering itself is pretty much straightforward (using the DX11 tessellation pipeline):

     1. In the domain shader - sample the heightmap in order to displace the vertices.
     2. In the pixel shader - finite differences to get the normals: sample the heightmap 4 times and calculate the normals as usual.

     Now here is the weird part:

     Method 1 - normals look good after the finite-difference operation. Unfortunately I can't use this method as it has some other issues.
     Method 2 - normals come out distorted in a way that I can't explain. More than that, if in the domain shader I give up the displacement on the horizontal axes (XZ) and leave only the vertical displacement on the Y axis, the normals are fine. With full displacement (XZ included) it feels like the normals aren't compensating for the XZ movement of the displacement. I tried to play with anything I could think of, but the normals look bad no matter what, and I really don't want to give up the XZ displacement, as with vertical displacement only the FFT looks kind of crippled. I also tried ddx_fine and ddy_fine, and it seems the normals look more accurate (i.e. they take the XZ movement into account), but the quality was very low, so not usable. But the fact that the native derivative functions reflected the XZ movement more accurately does give me hope that there is a better way to do it (?). So, is there a better way to calculate the normals more accurately?
     Here is the difference (screenshots attached): Method 1 normals - nice and crisp; Method 2 normals - distorted. Also attached is the Method 2 displacement in wireframe, which looks good. I'm also attaching the relevant DS and PS code that does the displacement and normals in method 2 (the method 1 code is the same, it just has some more stuff like Perlin noise blended in the distance, but the FFT-related code is exactly the same):

     DS displacement code:

     // bilerp the position
     float3 worldPos = Bilerp(terrainQuad[0].vPosition, terrainQuad[1].vPosition,
                              terrainQuad[2].vPosition, terrainQuad[3].vPosition, UV);
     float3 displacement = 0;
     displacement = SampleHeightForVS(gFFTHeightMap, Sampler16Aniso, worldPos.xz);
     displacement.z *= -1; // Flip Z back because the tex coordinates use a flipped Z, otherwise the FFT looks kind of upside down
     worldPos += displacement * FFT_DS_SCALE_FACTOR;
     return worldPos;

     PS finite diff:

     float3 CalcNormalForOceanHeightMap(float2 uv)
     {
         float2 one_texel = float2(1.0f / 512.0f, 1.0f / 512.0f);

         float2 leftTex;
         float2 rightTex;
         float2 bottomTex;
         float2 topTex;

         float leftY;
         float rightY;
         float bottomY;
         float topY;

         float normFactor = 1.0 / 512.0;

         leftTex   = uv + float2(-one_texel.x, 0.0f);
         rightTex  = uv + float2(one_texel.x, 0.0f);
         bottomTex = uv + float2(0.0f, one_texel.y);
         topTex    = uv + float2(0.0f, -one_texel.y);

         leftY   = gFFTHeightMap.SampleLevel(Sampler16Aniso, leftTex, 0 ).z * normFactor;
         rightY  = gFFTHeightMap.SampleLevel(Sampler16Aniso, rightTex, 0 ).z * normFactor;
         bottomY = gFFTHeightMap.SampleLevel(Sampler16Aniso, bottomTex, 0 ).z * normFactor;
         topY    = gFFTHeightMap.SampleLevel(Sampler16Aniso, topTex, 0 ).z * normFactor;

         float3 normal;
         normal.x = (leftY - rightY);
         normal.z = (bottomY - topY);
         normal.y = 1.0f / 64.0;

         return normalize(normal);
     }

     Any help would be welcome, thanx!
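     Not the code above, just a hedged sketch of one common way to make the normals follow the full XZ+Y displacement: rebuild the displaced positions of two neighbouring texels and take the cross product of the tangent vectors, instead of differencing heights alone. The world spacing between texels and the channel layout are assumptions here (the finite-diff code above reads the height from .z, so the channels may need swizzling for a given map).

     // Sketch: normal from the fully displaced surface (Y and XZ), via a tangent cross product.
     Texture2D    gFFTHeightMap  : register(t0); // placeholder register
     SamplerState Sampler16Aniso : register(s0);
     static const float FFT_DS_SCALE_FACTOR = 1.0f; // placeholder; use the engine's value
     static const float TEXEL_WORLD_SIZE    = 1.0f; // assumed world spacing between two texels

     float3 SampleDisplacement(float2 uv)
     {
         // NOTE: assumes the map's xyz channels correspond to world X/Y/Z after the same flip
         // used in the DS code above; swizzle if the layout differs.
         float3 d = gFFTHeightMap.SampleLevel(Sampler16Aniso, uv, 0).xyz;
         d.z *= -1;
         return d * FFT_DS_SCALE_FACTOR;
     }

     float3 CalcDisplacedNormal(float2 uv)
     {
         const float2 texel = float2(1.0f / 512.0f, 1.0f / 512.0f);

         // Displaced positions of this texel and its +U / +V neighbours.
         float3 p0 = float3(0, 0, 0)                + SampleDisplacement(uv);
         float3 px = float3(TEXEL_WORLD_SIZE, 0, 0) + SampleDisplacement(uv + float2(texel.x, 0));
         float3 pz = float3(0, 0, TEXEL_WORLD_SIZE) + SampleDisplacement(uv + float2(0, texel.y));

         // Tangents along the displaced surface; cross order chosen so the normal points up (+Y).
         float3 tangentX = px - p0;
         float3 tangentZ = pz - p0;
         return normalize(cross(tangentZ, tangentX));
     }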
  18. Thanx for the answers! The issue was resolved by using the Nvidia-based terrain and increasing the triangle count (it has that option because it's based on "quad rings" with different-sized patches, and since the whole algorithm is screen-space optimized, reducing the patch size doesn't have THAT bad an effect on performance) so that the heightmap is over-sampled; then performing quad bilinear interpolation (the same as is done in the domain shader) gives the correct altitude. Also, by using the Nvidia-based algorithm I'm not bound to the 64-sized patches, so the heightmap could stay at its original size of 11052. To my surprise, using the low-triangle-count version (of either of the 2 methods) with the padded heightmap, which should have been OK, still showed altitude differences at some points that I cannot explain. It looks like with the over-sampled version the quads were smoothed, but with 2 triangles per quad, quads sometimes had an unnecessary "break" in the middle (where the 2 triangles are divided), which caused the mismatch issue. The FP rounding actually seems to be OK, so I left it as-is. I know powers of 2 are better, but here the real-world size of the heightmap came out at that size, so I prefer using it rather than going up to 16384, doing data paging and also doubling the heightmap size in VRAM. Also, there is no mipmapping for the heightmap (it's always sampled with mip level 0), so LOD/mipmapping doesn't play a role here. Yes, that is indeed correct: bound to the 64-sized patches, I had to make the heightmap 11073x11073, because otherwise there was a misalignment - not by 1 cell, but by 64! The algorithm actually rounds down the number of patches, so 11073 was the only way to go (and the initial sync issue was due to the fact that I missed the number of patches by 1!).
  19. Hi all, I'm working on a terrain for a project I'm involved in. The terrain is pretty large and represents ~90 meter (3 arc-second) resolution for a surface that is 1024x1024 KM in size. For that purpose the heightmap texture is large: its size is 11072^2. Actually the original size was 11052^2, but I had to pad it to 11072^2, as the rendering algorithm uses 64x64 patches for the DX11 64-limit tessellation to divide the terrain exactly as necessary; 11072 / 64 gives exactly 173 patches, which is the number of patches entering the pipeline.

     Now, I have 2 methods for rendering the terrain, which yield the EXACT same result (there is a very minor texture difference, but the vertices from both methods lie exactly one on top of the other):

     1. A basic method that renders the heightmap as-is with the tessellation pipeline. I had to ditch that method due to severe FP accuracy issues with the art textures, but the geometry is still OK.
     2. A more complicated method by Nvidia, described here: https://www.yumpu.com/en/document/view/32144510/directx-11-terrain-tessellation-nvidia-developer-zone - I took the Nvidia source code and adapted it for my engine, using a constant heightmap without noise rather than the procedural noise they generate in the original code.

     So these 2 methods render EXACTLY the same geometry, which should represent the heightmap 1:1. I'm sure it should be 1:1 because in the 1st method I feed the pipeline 173 quad patches, which are then divided by the tessellator 1-64 times (I'm actually using power-of-2 tessellation factors, but that shouldn't matter, as the issue I have is relevant only to the highest-detail tessellation anyway). The heightmap data comes from a RAW texture containing 11072x11072 16-bit values (i.e. the size of the texture is ~232MB). As the heightmap contains values in feet from real-world SRTM data, I just scale it after loading from the texture to grid-local float values (i.e. I have a FEET_TO_GRID value which is used for the scaling). Since I'm loading the RAW texture as-is, I consider it 100% accurate, as it's not compressed like a DDS texture. Technically, for the loading I read the texture data into an array of 16-bit short values (because the data can be negative as well), then I feed the scaled values into a float array of the same size as the heightmap, i.e. 11072x11072, and then I create an SRV with a 32-bit float format (i.e. DXGI_FORMAT_R32_FLOAT).

     Now to the issue: the float array used to create the SRV is kept in system RAM so I can use it for simple collision detection and triangle-normal calculations. The calculations for the collision and normals are correct, as I'm using barycentric interpolation (example here: https://classes.soe.ucsc.edu/cmps160/Fall10/resources/barycentricInterpolation.pdf). Let's leave normals alone for now; I only care about the height. But I haven't managed to put objects EXACTLY on the terrain surface: objects usually hover a bit or sink in a bit, but I must have them exactly on the terrain surface, and I think it should work, because I'm holding in system RAM exactly the same heightfield data as the most detailed tessellated patch rendering. So I can't understand why my objects won't stick to the terrain. Things I thought about and tried, and also some stuff I noticed that may give more info:

     1. I've noticed that the heightmap is being "followed", i.e. I can see the object moving with the terrain surface changes, but there is some shift or scale that causes the height not to be in sync. I looked in the rendering for such a shift/scale distortion but couldn't find any, and since I have the same terrain rendered with 2 different methods, there's a small chance that both cause the same distortion; if there were a shift/scale issue I'd expect them to differ.
     2. Using a constant value for the height when loading the heightmap, the object stands perfectly on the terrain, so there is no FP accuracy issue with the scaling factor etc.
     3. Using a sine wave instead of the heightmap values showed the same as #1 - the object moves in a sine-wave way, but not in sync with the rendered surface.
     4. I thought that maybe the tessellator divides the triangles in a "different" fashion within a given patch (i.e. sometimes it's ABC/BCD and sometimes ACD/ABD), so I tried changing the collision computation to one fashion and the other, but both give about the same results, so it can't be due to a different triangle-splitting fashion.
     5. My main concern - could it be that because the heightmap texture size is irregular, the shaders introduce some distortion compared to the data? Maybe DX does some up/down scaling to a more "friendly" texture size and that creates the distortion?

     Some reference pictures (attached): both terrain methods rendered one on top of the other - you can see the vertices match 1:1 exactly; black is the 1st method and red is the Nvidia method. Also a crate sunk into the terrain: if I make it move, you can see it hovers/sinks at some points while still following the terrain shape, although mostly the movement changes aren't exactly in sync with the surface (that's the distortion I suspect).

     Please help, because I'm out of ideas. I know that in some games some distortion is allowed and expected, as the collision data doesn't hold the full-resolution rendering detail, but that's not the case here, and it also can't be accepted, because the crate in the picture represents a pretty large object in the terrain I'm working on (it's a ~9x9 meter building/structure, as its size is 0.1 and the grid units are ~90 meters in size). I thought of trying to output the vertex world positions somehow directly from the domain shader (I'm not yet sure how exactly, or if it's possible at all), or alternatively to render-to-texture once with full tessellation on all patches (so the data should be highest detail), save the texture data to an array and compare it with the data I'm using for the height calculations. If you have other ideas, please share. Thanx!
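     A tiny, self-contained illustration (my own numbers, not engine data) of why a quad-bilinear height lookup and a two-triangle barycentric lookup can disagree inside a cell - which matches the hover/sink behaviour described above and the "break in the middle of the quad" mentioned in the resolution in post 18:

     // Sketch: two ways to reconstruct the height inside one heightmap cell.
     // h00..h11 are the four corner heights of the cell, (fx, fz) in [0,1] is the position inside it.
     #include <cstdio>

     // Bilinear (what a quad-domain Bilerp in the domain shader effectively does).
     float HeightBilinear(float h00, float h10, float h01, float h11, float fx, float fz)
     {
         float hx0 = h00 + (h10 - h00) * fx;
         float hx1 = h01 + (h11 - h01) * fx;
         return hx0 + (hx1 - hx0) * fz;
     }

     // Two-triangle split along one diagonal (barycentric on the rendered triangles).
     float HeightTwoTriangles(float h00, float h10, float h01, float h11, float fx, float fz)
     {
         if (fx + fz <= 1.0f)   // triangle (h00, h10, h01)
             return h00 + (h10 - h00) * fx + (h01 - h00) * fz;
         else                   // triangle (h11, h01, h10)
             return h11 + (h01 - h11) * (1.0f - fx) + (h10 - h11) * (1.0f - fz);
     }

     int main()
     {
         // A saddle-shaped cell: the two reconstructions disagree away from the cell edges,
         // which is exactly the kind of hover/sink gap described above.
         float h00 = 0, h10 = 1, h01 = 1, h11 = 0;
         std::printf("bilinear: %.3f  two-tri: %.3f\n",
                     HeightBilinear(h00, h10, h01, h11, 0.5f, 0.5f),
                     HeightTwoTriangles(h00, h10, h01, h11, 0.5f, 0.5f));
         return 0;
     }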