yonisi

Member
  • Content Count

    19
Community Reputation

105 Neutral

About yonisi

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Art
    Programming


  1. OK, I see. Thanks for the fast answers! So basically you both suggest going with separate road rendering (i.e. separate geometry) as the best method. In general, what you say sounds very logical: 1) I do hold all the height data in a heightmap, since this is for a flight sim and full physics must work (interaction of airplanes, vehicles, structures, trees etc. with the terrain), so flattening areas shouldn't be a big deal (I already have flattening code). 2) The road data I'm planning to put on the terrain will come from real-world data, and since the terrain is also based on real data (all the terrain projection is already taken care of), the roads should be rendered where real roads exist, so I don't expect many mismatches. However, all that said, what worries me most (and why I considered this hard to achieve) is the need to match road vertices with terrain vertices. Since we're talking about a virtual terrain with limited resolution (a 62.5 m mesh in my case, for now), I'm worried that I will have to place road vertices wherever the terrain vertices are positioned, as well as at every place the road changes. In other words, I will have to make sure that all geometry is "100% parallel" vertically at all points, or I will see roads either hovering above the terrain or sinking into it. I think this is the greatest challenge of rendering roads as separate 3D objects. Thanks again for the answers and help!
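To keep separate road geometry from hovering or sinking, each road vertex can be draped onto the terrain by bilinearly interpolating the heightmap at the vertex's world position, plus a tiny lift against z-fighting. A minimal sketch under assumptions (row-major float heightmap, uniform cell spacing; all names here are hypothetical, not from the engine above):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Bilinearly interpolate a row-major heightmap at a fractional grid
// position (gx, gz), given in grid units (worldPos / cellSize).
float SampleHeightBilinear(const std::vector<float>& heights,
                           int width, int depth, float gx, float gz)
{
    // Clamp to the valid interpolation range.
    gx = std::fmax(0.0f, std::fmin(gx, float(width - 1)));
    gz = std::fmax(0.0f, std::fmin(gz, float(depth - 1)));

    int x0 = int(gx), z0 = int(gz);
    int x1 = std::min(x0 + 1, width - 1);
    int z1 = std::min(z0 + 1, depth - 1);
    float tx = gx - float(x0), tz = gz - float(z0);

    float h00 = heights[z0 * width + x0];   // corner heights of the cell
    float h10 = heights[z0 * width + x1];
    float h01 = heights[z1 * width + x0];
    float h11 = heights[z1 * width + x1];

    float near = h00 + (h10 - h00) * tx;    // blend along x, then along z
    float far  = h01 + (h11 - h01) * tx;
    return near + (far - near) * tz;
}

// Drape a road vertex: world (x, z) -> terrain height plus a small lift.
float DrapeRoadVertex(const std::vector<float>& heights, int width,
                      int depth, float cellSize, float wx, float wz,
                      float liftOffset = 0.05f)
{
    return SampleHeightBilinear(heights, width, depth,
                                wx / cellSize, wz / cellSize) + liftOffset;
}
```

Because the interpolation uses the same cell corners the terrain mesh does, a road vertex placed anywhere inside a cell lands exactly on the rendered triangle surface at the coarsest tessellation, so extra vertices at every terrain vertex are not strictly required.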
  2. Hi, I've been working on a terrain engine for some time now, and I'd like to add some roads to my terrain. I'm not sure yet which direction to take to get the best result. My terrain is for a flight simulator, so it covers a pretty large area, 1024x1024 km in size. The heightmap is 16K, so I have a 62.5 m mesh resolution at the highest tessellation level (which is good enough). The main problem, though, is that it's pretty hard for me to get down to details, since rendering is done in large chunks (based on the Nvidia DX11 terrain tessellation sample, shown here: https://developer.nvidia.com/dx11-samples and in the PDF here: http://fliphtml5.com/lvgc/xhjd/basic) and all texture coordinates are generated from the grid in the shaders, so I hardly have any control over "small items". Not sure if this explains my problem well enough, but it's like I can't just tell the terrain: "Draw a road segment here". The way I see it, I have more or less 4 options (unless you can suggest more):
1. The Humus sample that renders roads on terrain using the stencil buffer and box volumes: http://www.humus.name/index.php?page=3D&ID=84 - I think this is my preferred choice, but for some reason I wasn't able to get anything rendered at all on my terrain using the code from the sample (at least after translating it to my own engine; I must be doing something wrong). Do you also think this is the best method to draw roads on terrain? Pros: seems simple and efficient, using the stencil buffer that the hardware gives for free. Cons: I can't get it to work.
2. Render 3D models of roads on the terrain. Pros: minimal rendering, no interference with the terrain rendering itself. Cons: hard to get correct results with tessellated terrain, and potential Z-buffer issues with the terrain. I think this is the worst solution.
3. Simple texture mapping - hold huge textures that map roads onto the entire terrain. Since I basically only need highways and paved roads (dirt roads, small streams and such will be covered by 4 m/pixel photoreal textures), it may not be that much, but it would still probably cost some GBs of textures to cover the entire terrain area at sufficient resolution (roads as textures should be at least 4 m/pixel). Pros: relatively easy to render, no Z-buffer issues. Cons: large textures to hold and manage, plus branching in the terrain's pixel shader to check whether we are on a road pixel.
4. Texture mapping, but with RTT. Instead of holding fixed textures on disk and loading them when necessary, render the roads into textures in memory and hold only the necessary data that way. By using small tiles (e.g. 1x1 km) I think it's possible to map only the tiles that contain roads; all other tiles get 0-mapping, so the shader fetches a "no roads" texture and maps nothing. Pros: texture mapping without holding any real textures on disk. Cons: requires RTT, probably including render-into-mipmap stages, may be a bit complicated to manage, and areas with many roads may use a lot of memory.
I would still like to get the Humus sample to at least work in my engine, unless you think it has other issues that make it unsuitable for broad usage on a large rendered terrain. Thanks!
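For option 4, the bookkeeping boils down to mapping a world position to a tile index, so the streaming code (or shader) can decide whether that tile owns a road RTT or the shared "no roads" texture. A minimal sketch with assumed names, using 1x1 km tiles over the 1024x1024 km terrain described above:

```cpp
#include <cassert>
#include <cmath>

// Map a world position (meters) to a flattened 1x1 km tile index.
// Returns -1 when the position falls outside the terrain.
int WorldToTileIndex(float wx, float wz,
                     float tileSizeMeters = 1000.0f,
                     int tilesPerSide = 1024)
{
    int tx = int(std::floor(wx / tileSizeMeters));
    int tz = int(std::floor(wz / tileSizeMeters));
    if (tx < 0 || tz < 0 || tx >= tilesPerSide || tz >= tilesPerSide)
        return -1;                     // outside the terrain
    return tz * tilesPerSide + tx;     // row-major tile index
}
```

A sparse map from tile index to render-target texture (with a single shared fallback for indices that have no entry) would then keep memory proportional to the number of tiles that actually contain roads.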
  3. Thanks a lot, Hodgman! I'll dive into those details to find what I need!
  4. Hi, I have a terrain engine that uses a lot of textures, including texture arrays with indexing. I'm currently using full-size RAW textures as the indices into the texture arrays. RAW textures are relatively large, as they store the full uncompressed data. My question is as follows. Let's talk about 8-bit indexing, so 0-255 only, and let's assume I need as large a size as possible, so 16384 textures: a RAW file at 16384^2 with 8-bit depth = 256 MB; a DXT1 file at 16384^2 (3 channels, 0.5 bytes/pixel compressed) without mipmapping = 128 MB. The cost, of course, is accuracy, understood. But assuming I don't need the whole 0-255 range and can get along with, say, 50 indices or so, would it be fine to use DXT1 to store an index? Specifically: if I need to specify 50 different indices, would it be OK to encode the values into a DDS DXT1 texture as 0, 5, 10, 15, 20 and so on up to 220, then sample that texture in the shader and translate the value back to an exact index? Can I do that? The motivation is of course memory: with a single 128 MB DXT1 file I can store 3 "index textures" (one per channel) instead of the 768 MB that the same data would cost as accurate RAW files.
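The 5-apart spacing idea can be sanity-checked on the CPU: encoding leaves a guard band so that a few units of DXT1 compression error still round back to the original index. A minimal sketch (hypothetical names, tolerating up to ±2 of error per 8-bit sample):

```cpp
#include <cassert>
#include <cmath>

// Encode a small index into an 8-bit value with gaps of 5, so that
// DXT1 compression error of up to +/-2 still decodes to the same index.
unsigned char EncodeIndex(int index)
{
    return (unsigned char)(index * 5);   // index 0..44 -> value 0..220
}

// Decode a possibly-perturbed 8-bit sample back to the nearest index.
int DecodeIndex(unsigned char sample)
{
    return (int)std::lround(sample / 5.0);
}
```

Whether ±2 is actually enough depends on the content: DXT1 interpolates each 4x4 block from two endpoint colors, so a block containing several very different indices can produce larger errors than a smooth one. Testing on real index maps would be needed before committing.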
  5. Hi, I have a terrain rendering demo that runs and renders OK, but when I run it in debug mode (device debug flag on and a Visual Studio debug build), the device's Release() method crashes. I don't know why, and I couldn't find a way to track down the reason. Is there something that can tell me why the device fails to Release? I don't know exactly what to show you, so I'll just post the app's final destructor, which is trivial:
    D3DApp::~D3DApp()
    {
        SAFE_DELETE( mBackBuffer );
        ReleaseCOM( mSwapChain );
        ReleaseCOM( md3dImmediateContext );
        if ( mDebug )
        {
            mDebug->ReportLiveDeviceObjects( D3D11_RLDO_DETAIL );
        }
        ReleaseCOM( mDebug );
        ReleaseCOM( md3dDevice ); // --> Crashing here
    #if defined(DEBUG) || defined(_DEBUG)
        _CrtSetReportMode( _CRT_ERROR, _CRTDBG_MODE_DEBUG );
        _CrtDumpMemoryLeaks();
    #endif
    }
    ReleaseCOM( x ) is a simple SAFE_RELEASE kind of macro, i.e. if ( x ) { x->Release(); x = nullptr; }. SAFE_DELETE is the same but deletes an object (mBackBuffer is a wrapper TextureHandle class that wraps all my textures). I also used our SVN server to try to track down where the problem started, but it's a very slow process, as running a debug session here takes ~6-7 minutes each time, and I saw it goes a long way back in the SVN history, so it's not really practical to find where the problem started. However, I'm almost sure there is a better way to find out why D3D isn't able to shut down properly.
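A crash inside the final Release() is often a ref-count problem: some object was released one time too many earlier in the run, so the last Release touches freed memory. The ReleaseCOM idiom described above can be modeled and tested on the CPU with a fake ref-counted object (everything below is a hypothetical stand-in, not D3D itself):

```cpp
#include <cassert>

// Minimal stand-in for a COM-style ref-counted object.
struct FakeCom
{
    int refs = 1;
    bool destroyed = false;
    int Release()
    {
        --refs;
        if (refs == 0) destroyed = true;  // a real object would delete itself
        return refs;
    }
};

// The ReleaseCOM idiom: release once, then null the pointer so a
// second ReleaseCOM through the same variable is a harmless no-op.
template <typename T>
void ReleaseCOM(T*& p)
{
    if (p) { p->Release(); p = nullptr; }
}

// Demonstrate that double ReleaseCOM on one variable is safe.
bool DoubleReleaseIsSafe()
{
    FakeCom obj;
    FakeCom* p = &obj;
    ReleaseCOM(p);
    ReleaseCOM(p);            // no-op: p is already nullptr
    return obj.destroyed && obj.refs == 0 && p == nullptr;
}
```

Note the limit of the idiom: nulling protects only that one variable. If two pointers alias the same object and each goes through ReleaseCOM, the count still goes negative, which is exactly the kind of bug the debug layer's ReportLiveDeviceObjects output and reference-count warnings help locate.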
  6. yonisi

    Planar Reflections issue

    OK, OK, I think I understand what you mean. I'll try to improve. Thanks for all the advice and your time!
  7. yonisi

    Planar Reflections issue

    OK. It actually seems like 1) and 2) are caused by the same issue. As I stated in the post above, it's a physical issue with my terrain and water grids: since the reflection render pass depends on the water level I send to it, the reflection will apparently match only if the camera position is at the height of the reflecting surface (in this case the water grid). And because my water heightmap "follows the terrain" in areas where the water height is below the terrain height, I get different readings of the "current water level" at different camera positions. That effect is certainly the cause of 2), and I suspect also of 1), because when I hover the camera above the water surface, I don't get such anomalies. Issue 4) I solved by adding a small offset to the w component of the projected texture position, i.e.:
    projTexCoords.x = input.projTexPosition.x / (input.projTexPosition.w + 0.1) / 2.0 + 0.5;
    projTexCoords.y = -input.projTexPosition.y / (input.projTexPosition.w + 0.1) / 2.0 + 0.5;
    I added a 0.1 factor. That probably distorts the reflection a bit, but it solves the edge issue I had (clearly visible in the video at ~55 seconds, where the edges are "cut"; make sure to watch at 1080p). For 3), I'm not sure I understand what you mean by "too deep". Does it mean the reflection "sticks out" too far into the water? If so, I can see that the 0.1 factor also helps with that (i.e. the reflection is more "attached" to the water edges this way). Also, how can I flatten the projection matrix? Sorry if that's a stupid question, but I'm still a beginner with graphics, so I'm not familiar with all the terms. Thanks a lot for all the help! But I wouldn't want you to try the samples I linked just to help me with this; you've been a great help already and solved most of the issues without such painful effort.
  8. yonisi

    Planar Reflections issue

    OK, for issue 2) above, I think I know what the problem is, and it's actually not related to graphics but to the physics of my terrain. My water heightmap follows the terrain pretty closely, so if there is a hill in the terrain, the water climbs the hill with it, even though the water is covered by the hill anyway. Hence I get a bad "water level" reading that goes up and down, which explains the bouncing. So the reason for issue 2) is known.
  9. yonisi

    Planar Reflections issue

    Sure! My sources were: the RasterTek reflection tutorial: http://rastertek.com/dx11tut27.html, the RasterTek small water bodies tutorial: http://rastertek.com/tertut16.html, this article: http://archive.gamedev.net/archive/reference/articles/article2138.html, and Habib's lake water tutorial: https://habibs.wordpress.com/lake/. But honestly, I think I did some kind of mix of all of them. For example, the RasterTek tutorial suggests setting the Look.y of the reflection camera to position.y, but that doesn't work for me. Current state: if I count the problems, it seems I have 4 that I can't understand (timing references are to the video above):
1. At the start, you can see that as I pitch the camera down, the reflection of the sky color is removed and the natural color of the water takes its place.
2. After that, you can see how moving closer and further away causes the reflections to "bounce" in a crazy way; I have no idea why this is happening.
3. At ~40 seconds, I show how the reflection of the terrain is somewhat off, in that the textures don't match the terrain rendered above. I think the geometry is OK though, so it could be something with my terrain texture coordinates; maybe I should flip them for reflections or something.
4. At ~55 seconds, I show an edge issue I get at flat angles when pitching the camera down. Pitching back up so that Look.y is slightly above 0, the edge issues are gone. Something else I don't understand.
Since my reflection texture is 512x512, I tried setting the projection matrix to a 1:1 aspect ratio instead of the default (full-screen width:height), but when I do that I get even weirder behavior, with reflections way off (of course I restore the correct projection matrix before rendering to the back buffer). So I wonder whether I should touch that at all... Cheers!
  10. yonisi

    Planar Reflections issue

    OK, the current status is much better after making the camera's Look.y always positive and changing the sampler's address mode to BORDER with the border color set to 0; no more such anomalies. However, something is still somewhat off: I seem to get wrong reflections (i.e. the terrain doesn't look exactly the same above ground and in the reflection, like there is a shift somewhere).
  11. yonisi

    Planar Reflections issue

    It seems to be bad sampling outside the texture range, because if I change the sampler from CLAMP to WRAP, I get a different kind of artifact (wide lines due to WRAP instead of the vertical lines from sampling the border, I guess). The remaining problems: 1. Why am I sampling outside the texture at all? 2. I tried setting the border color to 0 for the CLAMP sampler, but I still get the same artifacts.
  12. Hi, I have a terrain engine where the terrain and water are on different grids, and I'm trying to render planar reflections of the terrain into the water grid, after reading some web pages and docs and also trying to learn from the RasterTek reflections demo and the small water bodies demo. What I do is as follows:
1. Create a reflection view matrix. Technically I ONLY flip the camera position in the Y direction (positive Y is up) and add 2 * waterLevel to it. Then I update the view matrix and save it for later. The code:
    void Camera::UpdateReflectionViewMatrix( float waterLevel )
    {
        mBackupPosition = mPosition;
        mBackupLook = mLook;
        mPosition.y = -mPosition.y + 2.0f * waterLevel;
        //mLook.y = -mLook.y + 2.0f * waterLevel;
        UpdateViewMatrix();
        mReflectionView = View();
    }
2. I render the terrain geometry to a 512x512 render target using the reflection view matrix and the opposite culling (my terrain uses front culling by nature, so I use back culling for the reflection render pass). I checked with the graphics debugger, and the reflection render target looks "OK" at this stage (picture attached). I don't know whether the fact that the terrain shows only in the top area of the texture is expected, but it seems OK.
3. Render the reflection texture into the water using projective texturing. I hope this step is OK code-wise. Basically I send the shader the ReflectionWorldViewProj matrix created in step 1 to use for the projective texture coordinates, I then convert the position in the DS (water and terrain are drawn with tessellation) to projective texture coordinates using that matrix, then sample the reflection texture after setting up the coordinates in the PS. Here is the code:
    // Send the ReflectionWorldViewProj matrix to the shader:
    XMStoreFloat4x4( &mPerFrameCB.Data.ReflectionWorldViewProj,
                     XMMatrixTranspose( ( mWorld * pCam->GetReflectedView() ) * mProj ) );
    // Set up the projective texture coordinates in the DS:
    Output.projTexPosition = mul( float4( worldPos.xyz, 1 ), g_ReflectionWorldViewProj );
    // Set up the coordinates in the PS and sample the reflection texture:
    float2 projTexCoords;
    projTexCoords.x = input.projTexPosition.x / input.projTexPosition.w / 2.0 + 0.5;
    projTexCoords.y = -input.projTexPosition.y / input.projTexPosition.w / 2.0 + 0.5;
    projTexCoords += normal.xz * 0.025;
    float4 reflectionColor = gReflectionMap.SampleLevel( SamplerClampLinear, projTexCoords, 0 );
    texColor += reflectionColor * 0.25;
    I'll add that when compiling the PS I get a warning about those divisions by input.projTexPosition.w (possible float division by 0); I tried adding an offset or a minimum to the divisor, but that didn't solve my issue. Here is the problem itself: at relatively flat view angles I see correct reflections (or at least so it seems), but as I pitch the camera down, I see artifacts that I have no idea where they come from. I cull the terrain in the reflection render pass when it's lower than the water height (I have heightmaps for that). Any help would be appreciated, because I don't know what is wrong or where else to look.
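The two pieces of math in the post above, mirroring the camera position across the water plane and turning a clip-space position into reflection-map UVs, can be checked in isolation on the CPU. A minimal sketch with plain structs (hypothetical names, no D3D dependencies):

```cpp
#include <cassert>
#include <cmath>

struct Float3 { float x, y, z; };
struct Float4 { float x, y, z, w; };
struct Float2 { float u, v; };

// Mirror a camera position across the horizontal plane y = waterLevel.
Float3 ReflectAcrossWaterPlane(Float3 p, float waterLevel)
{
    return { p.x, -p.y + 2.0f * waterLevel, p.z };
}

// Clip-space position -> projective texture coordinates in [0, 1].
// Y is negated because texture V grows downward while clip-space Y grows up.
Float2 ClipToProjTexCoords(Float4 clip)
{
    return {  clip.x / clip.w / 2.0f + 0.5f,
             -clip.y / clip.w / 2.0f + 0.5f };
}
```

Two properties worth verifying: a point exactly at the water level is its own mirror image, and the clip-space center maps to the middle of the reflection texture, (0.5, 0.5). Points behind the reflected camera have w near or below zero, which is where the division-by-zero warning (and the edge artifacts at steep pitch angles) come from; clipping against the near plane, rather than offsetting w, is the usual remedy.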
  13. Hi all, I'm working on a terrain engine with DX11. The engine is supposed to render every frame a surface that represents ~180x180 km, and the texturing should look good both from high above and close to the ground (the engine is part of a game, and the camera can basically come down to 0 altitude). More on the texturing: since this is a replacement for an existing engine (which uses a much lower mesh resolution on DX9), the textures already exist, and there are basically ~3000 of them. On DX9 the engine issues a draw call for every texture, which results in ~1000 draw calls per frame for the terrain (not all textures are usually used per frame, and of course many are tiled more than once, as the mesh is huge). With DX11 I store all those ~3500 in 2 texture arrays and use very few draw calls (thanks to the tessellation technique, the texturing can be done in 1 draw call). To select which textures get tiled where, I have a blendmap, a 1024x1024 16-bit RAW file that I sample in the pixel shader, choosing by its value which of the 2 arrays to use and which texture to tile at that UV coordinate. Now, since I want to fight the huge number of textures, which will require a lot of VRAM (assuming I need at least 3000 textures, and let's say an acceptable resolution is 1024x1024 DXT1, that's already more than 2 GB to store in VRAM just for the terrain art textures), my idea was to use another set of textures in a multi-texture fashion to tile most of the terrain, leaving only special areas that need a unique look to be tiled with more unique textures. But now everything got complicated as I learned about Tiled Resources. The thing is that I simply don't get how EXACTLY it works. I have an idea after reading what I could on the web, and I even have the source code of the Mars rendering demo linked at the bottom of this page: https://blogs.windows.com/windowsexperience/2014/01/30/directx-11-2-tiled-resources-enables-optimized-pc-gaming-experiences/ I also saw this presentation (though I didn't hear the lecture): https://channel9.msdn.com/Events/Build/2013/4-063 So I simply want to ask:
1. Given what I described, and assuming my engine eventually renders real areas of the world, would Tiled Resources be my best choice, even allowing me to increase the number of textures in use? Basically the artists' dream is to use unique satellite imagery for textures...
2. Currently the engine targets D3D 11.0 only; we didn't intend to go to 11.2, which would require users to have at least Windows 8.1. Is it worth it?
3. From a coding point of view, how complicated is it to handle Tiled Resources? As I understand it, the idea is to let the app specify which textures are needed for the current frame, so that only the needed subresources (i.e. the needed textures, and only the necessary mip levels) are uploaded to VRAM. Does that mean I will need app-side (C++ in my case) code that tells the rendering code which textures, and which mip levels of each, I need? Or is it something "automatic"?
4. I do have the Mars example from Microsoft linked above, and I'm going to inspect it more deeply (they use a 16K^2, 1 GB texture for the rendering, but say they use only 16 MB of VRAM per frame, and I could verify with Process Explorer that the app uses only ~80 MB of VRAM in total). But I don't know if it really represents the same complexity I have, i.e. many different textures, since there it's only 1 large texture, if I understand correctly.
Any help would be welcome.
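On question 3: with tiled resources the app drives residency itself; nothing is automatic. Each frame (or on camera movement) the app decides which tiles and mip levels it needs and updates the mappings accordingly. The core decision, which mip level a terrain chunk needs, is typically a function of view distance. A hedged sketch of that heuristic (the names and the one-texel-per-pixel rule are assumptions, not taken from the Mars sample):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Pick the mip level a terrain chunk needs so that one texel roughly
// covers one screen pixel: each doubling of distance drops one mip.
int MipForDistance(float distanceMeters, float fullResDistance, int mipCount)
{
    if (distanceMeters <= fullResDistance)
        return 0;                                   // close: full resolution
    int mip = int(std::floor(std::log2(distanceMeters / fullResDistance)));
    return std::min(mip, mipCount - 1);             // clamp to coarsest mip
}
```

App code would walk the visible chunks, call something like this per chunk, and request residency only for the selected mip and finer chunks near the camera; distant chunks stay mapped to coarse mips, which is how the Mars demo keeps per-frame VRAM so small.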
  14. Thanks everyone for the answers! MJP - I'm going to go with your advice then and use texture arrays. I learned DX11 coding from Frank Luna's book, and he always warned about dynamic branching in shaders, but since I've already seen some in code that has been working for years without much performance trouble, I guess it won't be a big deal. FWIW, I'm working on this engine for DX11 and PC only (eventually this engine should be integrated into an existing game engine going through an upgrade). The specific textures I'm referring to here actually won't use any mipmapping, because they hold mapping data, not art (e.g. watermaps will hold 8-bit color values used as depth maps, plus some other water-related data). I also plan to use the same mechanics for my blendmaps. Currently I have 8192x8192 textures that hold texture IDs and alpha values, which are used in the pixel shader to select which texture IDs to blend, and with which alpha for each; the result is effectively a multitexture operation. Regarding texture sizes, limits, and performance: I'm ALREADY using a couple of 8192x8192 blend maps in the current version, and I haven't noticed any performance issues, not even on my previous laptop with a mobile GTX 750 card. This engine isn't for consoles or mobile devices; it's PC-only, and most users of the current version of the game already have decent hardware, so I don't think there will be any issues with using 4096x4096 sizes.
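The blendmap scheme described above, one 16-bit texel carrying a texture ID plus a blend alpha, can be expressed as a simple pack/unpack pair. A sketch under an assumed layout (ID in the high byte, alpha in the low byte; the real engine may split the bits differently):

```cpp
#include <cassert>
#include <cstdint>

// Pack a texture ID (0-255) and a blend alpha (0-255) into one 16-bit
// blendmap texel: ID in the high byte, alpha in the low byte.
uint16_t PackBlendTexel(uint8_t textureId, uint8_t alpha)
{
    return uint16_t((textureId << 8) | alpha);
}

void UnpackBlendTexel(uint16_t texel, uint8_t& textureId, uint8_t& alpha)
{
    textureId = uint8_t(texel >> 8);
    alpha     = uint8_t(texel & 0xFF);
}

// Round-trip check helper.
bool BlendTexelRoundTrips(uint8_t id, uint8_t a)
{
    uint8_t id2 = 0, a2 = 0;
    UnpackBlendTexel(PackBlendTexel(id, a), id2, a2);
    return id2 == id && a2 == a;
}
```

The same bit arithmetic runs unchanged in HLSL when the texel is sampled as an integer format, which keeps the CPU-side baking code and the pixel-shader decoding trivially in sync.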
  15. Hi everyone, I have a terrain grid that is pretty large, and I'd like to store some data in textures to map stuff for me. For example, I'm going to have water-body data made of 64 DDS textures, where each texture is 4096x4096. Now, I don't want to hold all 64 textures in memory all the time; basically I could, but VRAM will probably be crowded with other stuff, and I want to save room. There are also some other sets I'll need to hold that may be even larger. So I decided that at any given moment I'd like to hold a 3x3 block of those 64 textures in VRAM, use the camera position to decide which block I'm in, and render those 3x3 tiles accordingly. I have 2 ways in mind of doing what I need:
1. Use a texture array. I already know how to use this, and it would be pretty easy to manage, I think. But my fear is that I won't have a way to decide which array index fits each pixel WITHOUT a couple of if/else pairs for dynamic branching in the pixel shader, which AFAIK isn't such a good idea. If you think a couple of dynamic branches isn't THAT bad, then I may just do that; it would be easier for me.
2. Use a texture atlas in memory. This solution has the advantage that I can directly translate the world position in the pixel shader to texture coordinates and sample, but I'm not sure how to load 3x3 DDS textures into 1 big atlas that is 3x3 times the size of each texture. I'm especially confused about how to order the textures correctly in the atlas, as I'm not sure it would be the same order as loading them into an array.
If option #2 is doable, I think going with it would be easier than translating world positions to array indices. Thanks for any help.
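For option 2, the world-position-to-atlas-UV mapping is just a scale and offset once the 3x3 block's world origin is known. A sketch with hypothetical names, assuming tiles are copied into the atlas in row-major order (tile (0,0) at UV origin) and each tile covers a square world extent:

```cpp
#include <cassert>

struct Uv { float u, v; };

// Map a world position into UVs of a 3x3 tile atlas.
// blockOriginX/Z: world coordinates of the atlas block's min corner.
// tileWorldSize:  world extent covered by one 4096x4096 tile.
Uv WorldToAtlasUv(float wx, float wz,
                  float blockOriginX, float blockOriginZ,
                  float tileWorldSize)
{
    const float atlasWorldSize = 3.0f * tileWorldSize;   // whole 3x3 block
    return { (wx - blockOriginX) / atlasWorldSize,
             (wz - blockOriginZ) / atlasWorldSize };
}
```

The tile ordering question resolves itself if the copy code (e.g. CopySubresourceRegion per tile) and this mapping agree on the same row-major convention: tile (tx, tz) is copied to atlas pixel offset (tx * 4096, tz * 4096). One caveat of the atlas approach is bilinear bleeding across tile seams, which the array approach avoids.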