DwarvesH

Members
  • Content count

    199
  • Joined

  • Last visited

Community Reputation

510 Good

About DwarvesH

  • Rank
    Member

  1. Yes, it was the normal calculation. I inherited the code from the CPU version, so I guess that one was bad too. I eventually settled on a Sobel operator:

        float4 PSNHeightToNormal(float4 inPos : SV_POSITION, float2 inTex : TEXCOORD0) : SV_TARGET
        {
            float ps = 1 / size;
            float3 n;
            float scale = worldScale;
            // Sobel filter over the 3x3 neighborhood of height samples
            n.x = -(h(inTex, ps, ps) - h(inTex, -ps, ps) + 2 * (h(inTex, ps, 0) - h(inTex, -ps, 0)) + h(inTex, ps, -ps) - h(inTex, -ps, -ps));
            n.y = -(h(inTex, -ps, -ps) - h(inTex, -ps, ps) + 2 * (h(inTex, 0, -ps) - h(inTex, 0, ps)) + h(inTex, ps, -ps) - h(inTex, ps, ps));
            n.z = 1 / scale;
            n = normalize(n);
            n = n * 0.5 + 0.5;
            return float4(n.x, n.z, n.y, 1);
        }

     Hopefully this one works as expected. I still need to test it a bit because I am having a bit of a brain fart since the switch from RH to LH. I no longer have an intuitive concept of "forward" and I need to continuously convert in my head from one system to the other :).

     There are still a bunch of things to decide regarding how to interpret the data for LOD transitions and whether to use mipmaps or just secondary lower-resolution textures.

     Is there a way to control how DeviceContext->GenerateMips works? What filter does it use? I couldn't find anything.

     Additionally, since for LOD I am generating every chunk separately using noise, I have reintroduced the issue of seams at chunk borders...
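     For reference, this is my understanding of what GenerateMips needs on the D3D11 side; the filter itself does not seem to be exposed anywhere, so a custom filter would mean rendering into each mip level manually. A minimal sketch only (device/context/sizes are placeholders, error checking omitted):

        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = 1024;
        desc.Height = 1024;
        desc.MipLevels = 0;                                   // 0 = allocate the full mip chain
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET; // both required
        desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;   // also required

        ID3D11Texture2D* tex = nullptr;
        ID3D11ShaderResourceView* srv = nullptr;
        device->CreateTexture2D(&desc, nullptr, &tex);
        device->CreateShaderResourceView(tex, nullptr, &srv);

        // ... render or copy the height/normal data into mip 0 ...
        context->GenerateMips(srv); // fills mips 1..N with a fixed, driver-chosen filter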
  2. The normals are normalized, in the PS too.

     And I am using gamma-correct rendering. The debug maps you see on the left are all run through a pixel shader. The height map is made more human readable by coloring terrain above water gray and below water blue, and the normal map is rendered in "fake non-sRGB mode".

     Universal use of gamma-correct rendering is fairly new; in the past people would just output their linear normals from the shaders to the screen, and I have gotten used to the look of normal maps displayed wrong like that. So I wrote a little pixel shader to fake that look on a gamma-corrected renderer.
  3. So I tried two blending methods for the highest detail LOD, based loosely on more physically sound operations, and got the two attached results.

     Man, rendering...

     I'll go with 5 from my previous reply for the stylized look and with 7 from this reply for the "realistic" look, at least until I can shed more light on the problem.
  4. Thanks for the input Krohm!

     I'm talking about the general shape of the curve nDotL follows as it goes from 1 to 0, depending on the sampling rate.

     I also believe that in theory you should use your maximum resolution/LOD0 height map to get the normal map. But I do not like the visual result I get when I build it at LOD0. Maybe I am building it wrong! The results get worse and worse as I increase the resolution. Here are the results for 4096x4096:

     [attachment=28529:nn01.png]

     Maybe it is correct, but I do not think so.

     If I go to LOD2 (4 times lower resolution), I get a bit more out of the normals:

     [attachment=28530:nn02.png]

     Going to LOD4, the second lowest in quality, I get this result:

     [attachment=28531:nn03.png]

     I decided to try some things out. Here is a normal LOD5 shot, the lowest quality, with regular normals:

     [attachment=28532:nn04.png]

     And here is a shot that uses a high resolution normal map corresponding to LOD0, only using some physically very unsound blending of normals:

     [attachment=28533:nn05.png]

     I need to try some more physically sound blending (rough sketch below).

     I have no idea yet which direction to follow. More like the first screenshot, or more like the last or next to last? Artistically I like the last ones.

     What kind of special test case do you have in mind?
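     One candidate for that "more physically sound blending" is the commonly used "whiteout" blend of a base (low-frequency LOD) normal with a detail (high-frequency) normal, with Reoriented Normal Mapping (Barré-Brisebois & Hill) as the more rigorous option. A minimal CPU-side sketch of the idea, my own and untested against this terrain, with the axis conventions left to match whatever the height-to-normal shader uses:

        #include <cmath>

        struct Float3 { float x, y, z; };

        static Float3 Normalize(Float3 v)
        {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // "Whiteout" blend: sum the horizontal components, multiply the up
        // components, renormalize. Inputs are unpacked normals in [-1, 1],
        // with z treated as "up" here.
        Float3 BlendNormalsWhiteout(Float3 base, Float3 detail)
        {
            return Normalize({ base.x + detail.x,
                               base.y + detail.y,
                               base.z * detail.z });
        }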
  5. I know I've been spamming the forums a bit, but please bear with me.

     I have this old DX9 CPU-based terrain LOD system and I'm updating it to a modern DX11 GPU-based one. Progress has been slow but very fun, since I get to replace hundreds of lines of code doing expensive CPU LOD operations with a few GPU lines, and I also get a performance boost out of it.

     I am not going to talk about terrain height across LODs and morphing, because those questions have fairly solid answers in my head.

     It is about lighting. Even with physically based rendering, the good old cosine factor, nDotL, plays a huge role since we multiply by it. So I want to get it right, but I am getting some weird results as I move across LOD/MIP levels in my normal map generated from the height map.

     Supposing that we have a sampling rate of 1, meaning for NxN vertices we have NxN texels in our height map, and a light direction of float3(0.80, -0.40, 0.0), I have a first question.

     1. Is the following output of nDotL correct and useful for physically based rendering, and should it be used as a general guideline across all LOD levels, meaning that whatever the sampling rate, your nDotL should roughly follow this curve?

     [attachment=28526:ndotl1.png]

     This looks correct for a low resolution (128x128) input texture.

     But if I double the height map and normal map resolution, and adjust the sample rate (halve it) so that the number of vertices remains the same, I get this result, which changes the nDotL curve:

     [attachment=28527:ndotl2.png]

     This is because I am creating the normal map based on sampling 4 points in the height map:

        float4 GenNM(VertexShaderOutput input) : SV_TARGET
        {
            float ps = 1 / size;
            //ps *= size / 128;
            float3 n;
            n.x = permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x - ps, input.texCoord.y, 0, 0), 0).x * 30 -
                  permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x + ps, input.texCoord.y, 0, 0), 0).x * 30;
            n.z = -(permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x, input.texCoord.y - ps, 0, 0), 0).x * 30 -
                    permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x, input.texCoord.y + ps, 0, 0), 0).x * 30);
            n.y = 2;
            n = normalize(n);
            n = n * 0.5 + 0.5;
            return float4(n, 1);
        }

     permTexture2d is the wrongly named height map texture and 30 is the world scale. When I double the height map resolution, it becomes finer and the height points are closer together, so some of the overall curve of the lower resolution height map is lost, rather than detail being added to it. Am I understanding this right?

     Doubling the height and normal map resolution again and halving the sample rate, so that we have the same number of vertices, I get this result:

     [attachment=28528:ndotl3.png]

     By the time I get to the desired resolution, the normals become quite flat and so does the lighting.

     The reason for generating a high resolution map is that this is the input for the LOD 0 terrain close to the camera. LOD 0 is rendered using a lot of vertices. As I move away from the camera, I start using fewer vertices, spaced further apart. The final LOD will have the 128x128 vertices, like in the first low resolution screenshot.

     So the next questions are:

     2. How should the normal map look at full resolution? More like the one in the first screenshot, but with more detail, or flat, like in the last screenshot?

     3. How consistent should the various MIP levels of the normal map be as I go down in resolution?
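     My current suspicion (the sketch below is my own reasoning, so it may be off): with n.y fixed at 2, the horizontal components shrink as the texel spacing shrinks, so higher resolutions are bound to come out flatter. Making the up component proportional to the world-space texel spacing should keep the slopes, and therefore nDotL, consistent across resolutions:

        #include <cmath>

        // Central-difference normal that stays consistent across height-map resolutions.
        // hL, hR, hD, hU: height samples one texel to the left/right/down/up, in [0, 1].
        // heightScale:    world-space vertical scale (the 30 in the shader above).
        // texelWorldSize: world-space distance between neighboring texels; this halves
        //                 every time the map resolution doubles.
        // outN:           unpacked normal, y up (match signs/axes to your own convention).
        void NormalFromHeights(float hL, float hR, float hD, float hU,
                               float heightScale, float texelWorldSize, float outN[3])
        {
            // The central difference measures a height change over 2 * texelWorldSize,
            // so slopeX = (hR - hL) * heightScale / (2 * texelWorldSize). The normal is
            // (-slopeX, 1, -slopeZ); multiplying through by 2 * texelWorldSize gives:
            float nx = (hL - hR) * heightScale;
            float ny = 2.0f * texelWorldSize;   // scales with texel spacing, unlike the constant n.y = 2 above
            float nz = (hD - hU) * heightScale;

            float len = std::sqrt(nx * nx + ny * ny + nz * nz);
            outN[0] = nx / len;
            outN[1] = ny / len;
            outN[2] = nz / len;
        }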
  6. Here is one thing I do not understand...

     I generate a texture map with values [0..1], where 0 is the maximum ocean-floor depth and 1 is the maximum mountain height, with 0.5 being sea level, using GPU simplex noise.

     In my vertex shader, I sample that texture:

        input.Position.y = permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord, 0, 0), 0).x * 30;
        float4 worldPosition = mul(float4(input.Position, 1), World);
        output.Position = mul(worldPosition, ViewProj);

     In the pixel shader I used to take the world position and use its y coordinate to drive the coloring, like:

        float inz = ((input.wPos.y / 30) - 0.5) * 2;

     Since at the moment I am using just low resolution meshes for the terrain, I did not like the interpolation artifacts. Plus, using the y coordinate produces very flat color gradients. So I decided to sample the same map as in the vertex shader, getting the actual height at that point:

        float inz = (permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord, 0, 0), 0).x - 0.5) * 2;

     And this is what I get when I render all values inz <= 0 as black and the rest with some colors:

     [attachment=28513:1002_06.png]

     I really can't figure out why I get that pattern. It does not look like interpolation errors. It is not a big issue, I can work around it, but I am very curious what causes it. The shape depends on the size of the input maps, so it is related to sampling somehow...
  7. Sorry, it is indeed that I misunderstood how to populate data for DXGI_FORMAT_R8G8B8A8_SNORM.   Multiple octave noise is looking as expected now:   [attachment=28454:noise4.png]   On the plus side, I know now how to do a weird procedural square snaky texture!
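     For anyone who hits the same thing: each channel of R8G8B8A8_SNORM is a signed 8-bit integer that the GPU decodes as value / 127.0 (with -128 clamped to -1.0), so floats in [-1, 1] need to be packed roughly like this (my own sketch; it assumes R goes in the lowest byte, which is how the R8G8B8A8 formats are laid out):

        #include <algorithm>
        #include <cmath>
        #include <cstdint>

        uint32_t PackSnorm8x4(float x, float y, float z, float w)
        {
            auto pack = [](float v) -> uint32_t {
                v = std::max(-1.0f, std::min(1.0f, v));               // clamp to the representable range
                int8_t i = static_cast<int8_t>(std::lround(v * 127.0f));
                return static_cast<uint8_t>(i);                       // reinterpret as the raw byte
            };
            return pack(x) | (pack(y) << 8) | (pack(z) << 16) | (pack(w) << 24);
        }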
  8. This single octave of simplex noise doesn't look right, does it?

     [attachment=28452:noise.png]

     It is a port of: https://digitalerr0r.wordpress.com/2011/05/15/xna-shader-programming-tutorial-25-perlin-noise-using-the-gpu/

     which implements: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter26.html

     A port of a port, and something was lost in translation when switching from XNA and DX9 to C++ and DX11.

     Probably my code that generates the two textures on the CPU that are sent to the shader has some issues. Probably with the NormalizedByte4 type: https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.packedvector.normalizedbyte4.aspx

     This type must be matched to something, so I'm guessing DXGI_FORMAT_R8G8B8A8_SNORM? This type should give a range of [-1..1] in floats.
  9.  > So, have you tried with D3D11_USAGE_STAGING?
      > Edit: Bind flags may need to be set to zero (no flags at all) since staging resources cannot be bound...

     Thanks for the help!

     Unfortunately, even with Usage, BindFlags and CPUAccessFlags set as such, the texture still fails to create. I even tried setting the MiscFlags to zero, as one link suggested.

     Right now I'm trying to create a second, staging texture, copy the first one to the second, and lock that one. This is far more complicated than it should be, with the maze of textures, resources and resource views in DirectX 11.

     I created the texture and the ShaderResourceView, and I'm trying to figure out how to get a Resource out of them to pass to CopyResource (rough sketch below).

     > Why do you need to create a GPU resource then? Just save your "large textures" as raw height values and read them directly from disk to CPU. That would make much more sense (and would be faster).

     For the terrain there are two stages.

     The first stage is preprocessing. I need the input height map to be GPU readable and in texture format because:
     1. Artists provide textures.
     2. The output of the GPU simplex noise shader is a texture.
     3. The input is passed on to the normal map calculator shader, the self-shadowing shader, etc.

     The output of this stage is one or more textures. These I need to split into small tiles for streaming and for the second stage, which is level loading. They are optionally cached to disk. The second stage does not work with the big textures, only the small ones.

     Performance is not an issue. The whole thing should take fractions of a second, except for the disk access of course.
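     In case it helps someone searching later, this is the shape of the staging copy I am attempting (a sketch only; Handle is the SRV from my wrapper below, the singletons are my device and immediate context, and releases/error checks are omitted):

        ID3D11Resource* res = nullptr;
        Handle->GetResource(&res);                    // Handle: the ID3D11ShaderResourceView loaded from file

        ID3D11Texture2D* gpuTex = nullptr;
        res->QueryInterface(&gpuTex);

        D3D11_TEXTURE2D_DESC desc;
        gpuTex->GetDesc(&desc);
        desc.Usage = D3D11_USAGE_STAGING;
        desc.BindFlags = 0;                           // staging resources cannot be bound
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        desc.MiscFlags = 0;

        ID3D11Texture2D* stagingTex = nullptr;
        DeviceSingleton->CreateTexture2D(&desc, nullptr, &stagingTex);

        DeviceContextSingleton->CopyResource(stagingTex, gpuTex);   // GPU -> staging

        D3D11_MAPPED_SUBRESOURCE mapped;
        DeviceContextSingleton->Map(stagingTex, 0, D3D11_MAP_READ, 0, &mapped);
        // mapped.pData / mapped.RowPitch now give CPU access to mip 0
        DeviceContextSingleton->Unmap(stagingTex, 0);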
  10. Hello everybody!

     I have a small problem: I can't figure out how to create a texture that is readable on the CPU side after loading. I have googled the issue and the usual instructions are to change the Usage and CPU access flags, but as soon as I touch the flags, D3DX11CreateShaderResourceViewFromFile returns a FAILED code.

     I need to read the texture for two things:
     1. I have finally transitioned to VTF. Thousands of lines of CPU code for building and managing terrain LOD have now moved to the GPU. This means that I need to have my terrain data in a GPU readable format: textures. I read my terrain map into a texture and I would like to split it up into small chunks of 128x128 (the size is not important). So I want to read one or multiple large textures, lock them, use the data on the CPU to get all mip levels of a chunk and create a default usage and bind flags texture for each chunk, then discard the large texture.
     2. I have several special partitioning schemes that have their data saved on disk as a texture. I need to read that texture, process it, and then discard it.

     Both steps are done once at load time, so performance is not a problem.

     I have a simple wrapper for the texture class:

        bool Create(wchar_t* path)
        {
            HRESULT result;

            // Load the texture in.
            D3DX11_IMAGE_LOAD_INFO info;
            info.Width = D3DX11_DEFAULT;
            info.Height = D3DX11_DEFAULT;
            info.Depth = D3DX11_DEFAULT;
            info.FirstMipLevel = D3DX11_DEFAULT;
            info.MipLevels = D3DX11_DEFAULT;
            info.Usage = (D3D11_USAGE) D3DX11_DEFAULT;
            info.BindFlags = D3DX11_DEFAULT;
            info.CpuAccessFlags = D3D11_CPU_ACCESS_READ;
            info.MiscFlags = D3DX11_DEFAULT;
            info.Format = DXGI_FORMAT_FROM_FILE;
            info.Filter = D3DX11_DEFAULT;
            info.MipFilter = D3DX11_DEFAULT;
            info.pSrcInfo = NULL;
            //info.CpuAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;

            result = D3DX11CreateShaderResourceViewFromFile(DeviceSingleton, path, &info, NULL, &Handle, NULL);
            if (FAILED(result))
                return false;

            ID3D11Texture2D* tex;
            Handle->GetResource((ID3D11Resource**)&tex);

            D3D11_TEXTURE2D_DESC desc;
            tex->GetDesc(&desc);
            tex->Release();

            Width = desc.Width;
            Height = desc.Height;

            return true;
        }

     I tried all combinations of Usage and CpuAccessFlags and the creation fails. It only works with D3DX11_DEFAULT for all values.

     And if I leave everything at default, my Lock method fails:

        void* Lock()
        {
            ID3D11Resource* res = nullptr;
            Handle->GetResource(&res);
            res->QueryInterface(&TexHandle);

            D3D11_MAPPED_SUBRESOURCE mappedResource;

            // Map the texture so it can be written to.
            HRESULT result = DeviceContextSingleton->Map(TexHandle, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
            if (FAILED(result))
                return nullptr;

            return mappedResource.pData;
        }

     The fields are:

        ID3D11ShaderResourceView* Handle;
        ID3D11Texture2D* TexHandle;

     Thank you for your time reading this!
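     One alternative I have not tried yet (untested sketch, assuming D3DX11 honors the Usage/BindFlags in the load info): skip the shader resource view entirely and load straight into a plain resource with staging usage, since it is views that cannot be created over staging resources:

        D3DX11_IMAGE_LOAD_INFO info;               // the constructor defaults everything to D3DX11_DEFAULT
        info.Usage = D3D11_USAGE_STAGING;
        info.BindFlags = 0;                        // staging resources cannot be bound
        info.CpuAccessFlags = D3D11_CPU_ACCESS_READ;
        info.Format = DXGI_FORMAT_FROM_FILE;

        ID3D11Resource* resource = nullptr;
        HRESULT hr = D3DX11CreateTextureFromFile(DeviceSingleton, path, &info, NULL, &resource, NULL);
        if (SUCCEEDED(hr))
        {
            ID3D11Texture2D* tex = nullptr;
            resource->QueryInterface(&tex);

            D3D11_MAPPED_SUBRESOURCE mapped;
            DeviceContextSingleton->Map(tex, 0, D3D11_MAP_READ, 0, &mapped);  // READ, not WRITE_DISCARD
            // ... copy rows out via mapped.pData / mapped.RowPitch ...
            DeviceContextSingleton->Unmap(tex, 0);
        }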
  11. Ways to render a massive amount of sprites.

     I think you are over-thinking it!

     What exactly are you trying to do? Is it a 2D sprite based game? There is absolutely no way you will have performance problems; the more old school it is, the fewer you will have. Just don't render the whole level, do some frustum culling. Have a good map/level representation, sit down and code, and you should see more than 200 FPS with dynamic 2D lighting.

     For a more scalable solution, you should divide the level into chunks, say 32x32 sprites per chunk. When you scroll, create new chunks as needed and just cull on a chunk basis (rough sketch below).

     On the other hand, if you are incorporating sprites in a full 3D game, for example for particles, the situation changes.
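     A rough sketch of what I mean by culling on a chunk basis (the numbers and types are arbitrary placeholders, not from any particular engine):

        #include <vector>

        const int kChunkSize  = 32;   // sprites per chunk side
        const int kSpriteSize = 16;   // pixels per sprite, just an example

        struct Chunk { std::vector<int> spriteIds; };   // whatever per-chunk data you batch

        // Submit only the chunks that overlap the camera rectangle.
        void DrawVisibleChunks(const std::vector<std::vector<Chunk>>& chunks,
                               int camX, int camY, int viewW, int viewH)
        {
            const int chunkPixels = kChunkSize * kSpriteSize;
            int firstX = camX / chunkPixels;
            int firstY = camY / chunkPixels;
            int lastX  = (camX + viewW) / chunkPixels;
            int lastY  = (camY + viewH) / chunkPixels;

            for (int y = firstY; y <= lastY; ++y)
                for (int x = firstX; x <= lastX; ++x)
                    if (y >= 0 && y < (int)chunks.size() && x >= 0 && x < (int)chunks[y].size())
                    {
                        // draw chunks[y][x].spriteIds with the sprite batch here
                    }
        }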
  12. Thanks!

     It seems there is no way to do what I want based on those rules.

     But I did find a solution that works: drawing the line list a second time, but this time as points.

     Another problem was that as the lines were snapping to pixels while I was moving my character/camera, they had horrible temporal aliasing, jittering up and down by one pixel.

     There is again no solution for this except line AA: https://dl.dropboxusercontent.com/u/45638513/sprite19.png

     Or alternatively a post-processing edge detection algorithm.

     Well, at least now I know how Door Kickers got those thin but soft lines: they must have used line AA too! http://inthekillhouse.com/wordpress/wp-content/uploads/2013/08/2013_7_24_17_48_59.jpg
  13. So I'm creating a top-down/2.5D game: polygons plus a top-down camera.

     I snap the polygons to a grid, but the game is not pixely.

     I would also like to add borders to walls. The way I did this is by drawing the walls a second time, but this time with lines.

     But I have found that the rasterizer does not behave exactly the same when drawing lines as it does when filling polygons. Randomly, the end points are not drawn, especially for horizontal lines.

     You can see this in the screenshots if you zoom in a bit:
     https://dl.dropboxusercontent.com/u/45638513/sprite15.png
     https://dl.dropboxusercontent.com/u/45638513/sprite16.spr.png

     The outlines look a bit rounder at the corners.

     Is there a way to get DirectX to fill the exact border of a triangle with a line? In a portable way?

     Or maybe I'm overthinking things and shouldn't really care about one pixel.
  14.  > Do you have any reference for that? Never had that problem, can use windows with a client area of prime x prime without issue...

     The backbuffer can have odd dimensions and 3D rendering works great with it. It is 2D rendering that does not work well, for the GUI and such. I am using an orthographic camera and a SpriteBatch-type class (written by myself) to render the GUI. I am also using filtering to render the GUI so that stretched controls look nice and smooth. Without filtering, odd-sized buffers work. With filtering, they don't. It is probably my fault, not a real issue with DX, since I did not pay attention to texel centers and whatnot that is needed when dealing with bilinear filtering.

     I may investigate this further, and if it is an issue with SpriteBatch, I may disable the code that forces the window to be even. But right now I need to get the port from C#/DX9 to C++ (agnostic between DX10 and DX11) done ASAP! I will be at least one week late :).

     No, I do not. I tried to do as little as possible and let DXGI do as much as possible. It probably would have been much easier to handle everything myself.
  15. Sometimes I just wish it were possible to create less laggy GUIs...