turanszkij

Member

  • Content count
    146
  • Community Reputation
    430 Neutral
  • Followers
    3
  • Rank
    Member

Social

  • Twitter
    turanszkij
  • Github
    turanszkij

Recent Profile Visitors

5249 profile views
  1. DX12 Shader compile step

    Thank you @galop1n and @MJP. You bring up good points. I have been thinking about writing a tool to automatically generate shader permutations, but I also like the ideas you mentioned, like spreading compiles over the network; I guess even just spreading them over multiple cores would improve compile times. I also like the idea that, for example, the editor could reference the shader compiler and automatically recompile shaders when they change. I am playing with the idea of an app that has a higher-level notion of a shader library (knowledge of the engine, permutations, etc.); I could even call it from VS to build before the engine, then run the built executable to kick off shader compilation as the next build step. If I want to do something like that and restructure shader compilation, this would be the best time, before it becomes more complicated.
  2. I am using just Lua and a little helper header called Luna to easily bind C++ objects so they are accessible from a Lua script. I think this should be all you need; it's really easy to set up and you introduce no additional dependencies. Though I don't understand what you mean by "work with dx12"; you surely don't want to call DX12 functions from a Lua script? You could in theory, but I wouldn't. Let Lua scripting be a part of your game logic, or call engine functions from it, but don't use it to replace performance-sensitive low-level code.
  3. DX12 Shader compile step

    I really don't want to learn how to create a VS plugin lol. And I absolutely want to keep the build as simple as just pressing F5 on a clean Visual Studio install. Well, that's not the case anymore with the new shader compilers added, but they are in a very experimental stage so far, so it's fine I guess. When it becomes stable and tested, each compiler will probably get its own project that references the shader source project and calls a custom build step.
  4. DX12 Shader compile step

    The path I started to walk is having a Python script parse the shader project and generate a batch script that calls the shader compiler for every shader included in that project with the correct build profile. For the time being I will keep the Visual Studio shader compiler as the default, and each time I need to add a shader I will add it to the shaders project; then, once I want to build them with another shader compiler, the Python script will generate the batch file and the batch will build them all. Later I might try to integrate the whole process into Visual Studio. I'll be back if I find a better solution...
  5. Hi, right now building my engine in Visual Studio involves a shader compile step to build HLSL 5.0 shaders. I have a separate project which only includes shader sources, and the compiler is the Visual Studio integrated fxc compiler. I like this method because on any PC that has Visual Studio installed, I can just download the solution from GitHub and everything builds without additional dependencies, using the latest version of the compiler. I also like it because the shaders are included in the solution explorer and are easy to browse and double-click to open (opening files can be a real pain in the ass in Visual Studio run in admin mode). It's also nice that VS displays the build output/errors in the output window.

     But now I have the HLSL 6 compiler and want to build HLSL 6 shaders as well (and as I understand it, I can also compile Vulkan-compatible shaders with it later). Any idea how to do this nicely? I want only a single project containing shader sources, like it is now, but to build them for different targets. I guess adding different build projects that reference the shader source project would be the way to go? But how would they differentiate the shader type of the sources (e.g. pixel shader, compute shader, etc.)? Right now the shader building project stores the shader type for each shader; how can other build projects reference that? Anyone with some experience in this?
  6. Lighting space

    Actually, right now I like to use light indices for every kind of shading (deferred, forward, tiled...). I can update a huge entity array once (which also contains the lights), and then every shader which needs to use any entity just needs an index offset, instead of updating a whole structure. Though from an old-school deferred shading standpoint, for example, the shader would be faster if it loaded from a small constant buffer instead of indexing into a huge entity array. I still need to make a decision on that, but at least this way all the shading paths are a bit more unified and easier to manage.
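    Roughly what I mean, as a minimal HLSL sketch (the ShaderEntity layout and the buffer/constant names here are made up for illustration, not my actual code):

        // one big array shared by all shading paths; the lights are a sub-range of it
        struct ShaderEntity
        {
            float3 position;
            float range;
            float4 color;
        };
        StructuredBuffer<ShaderEntity> EntityArray : register(t0);

        cbuffer PassConstants : register(b0)
        {
            uint LightOffset; // where the lights start in the entity array
            uint LightCount;
        };

        float3 ShadeLights(float3 P, float3 N, float3 albedo)
        {
            float3 result = 0;
            for (uint i = 0; i < LightCount; ++i)
            {
                // only an index offset is needed per pass, not a fresh constant buffer
                ShaderEntity light = EntityArray[LightOffset + i];
                float3 L = light.position - P;
                float dist = length(L);
                L /= dist;
                float atten = saturate(1 - dist / light.range);
                result += albedo * light.color.rgb * saturate(dot(N, L)) * atten;
            }
            return result;
        }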
  7. Lighting space

    I think it's no longer true though if you are using perspective projection, smooth normals, or normal mapping. For this topic, I would always go with world space instead of view space, because that way you can use the global set of light data for all passes that use different cameras. Another contender would be texture space shading, with a completely different approach; it can decouple shading from screen resolution and from the update/render loop frequency as well, see this: https://gpuopen.com/texel-shading/
  8. Voxelization cracks

    Or maybe you mean you try to visualize the voxels with ray marching and that's where you get the errors? In that case, you should refine your steps, because stepping along the ray can sometimes completely miss voxels (because of large steps or shooting at a voxel edge).
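    A simple fix is to never step further than half a voxel at a time, so a voxel lying on the ray can't be skipped over. A minimal sketch, assuming a world-space grid centered at the origin (the texture/sampler names are placeholders):

        Texture3D<float4> VoxelGrid : register(t0);
        SamplerState PointClampSampler : register(s0);

        // voxelSize = world-space size of one voxel, resolution = voxels per axis
        float4 RayMarchVoxels(float3 origin, float3 dir, float voxelSize, uint resolution)
        {
            const float stepSize = voxelSize * 0.5; // half-voxel steps can't skip a voxel
            const float gridExtent = voxelSize * resolution;

            float3 pos = origin;
            for (uint i = 0; i < resolution * 2; ++i)
            {
                float3 uvw = pos / gridExtent + 0.5; // world -> texture coords
                if (any(uvw < 0) || any(uvw > 1))
                    break; // left the grid
                float4 voxel = VoxelGrid.SampleLevel(PointClampSampler, uvw, 0);
                if (voxel.a > 0)
                    return voxel; // hit a filled voxel
                pos += dir * stepSize;
            }
            return 0;
        }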
  9. Voxelization cracks

    @matt77hias What do you mean by disappearing voxels for some camera angles? They should not depend on the camera angle; aren't they captured in world space? @arnero You can use rasterization to determine which triangles overlap which voxels and build the voxelization entirely on the GPU.
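    The usual trick is a geometry shader that swizzles each triangle so it is rasterized along its dominant axis, which maximizes the number of fragments it covers; the pixel shader then writes the voxels from the interpolated world positions. A rough sketch (the constant and semantic names are illustrative):

        cbuffer VoxelConstants : register(b0)
        {
            float GridHalfSize; // half extent of the voxel grid in world units
        };

        struct GSInput  { float4 pos : SV_POSITION; float3 posW : POSITION3D; };
        struct GSOutput { float4 pos : SV_POSITION; float3 posW : POSITION3D; };

        [maxvertexcount(3)]
        void main(triangle GSInput input[3], inout TriangleStream<GSOutput> output)
        {
            // the face normal tells us which axis the triangle faces the most
            float3 n = abs(normalize(cross(input[1].posW - input[0].posW,
                                           input[2].posW - input[0].posW)));
            [unroll]
            for (uint i = 0; i < 3; ++i)
            {
                GSOutput o;
                o.posW = input[i].posW; // kept so the pixel shader can write the voxel

                // swizzle so the dominant axis becomes the depth axis
                float3 p = input[i].posW;
                if (n.x >= n.y && n.x >= n.z)      p = p.yzx;
                else if (n.y >= n.z)               p = p.zxy;

                // flat orthographic projection over the grid (depth test disabled)
                o.pos = float4(p.xy / GridHalfSize, 0, 1);
                output.Append(o);
            }
        }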
  10. The reason is that the TDR (Timeout Detection and Recovery) feature is set to trigger after 2 seconds by default on Windows. I think you can create the graphics device with a flag that overrides the system default (in D3D11 there is D3D11_CREATE_DEVICE_DISABLE_GPU_TIMEOUT, if I remember correctly), or alternatively the timeout can be changed in the registry via the TdrDelay key (refer to the article).
  11. 3D Tube lights or projected area light

    @hyperknot Hi, fully featured area lights are probably not the best fit for mobile, as they can result in quite heavy shaders. That being said, it is actually harder to find a proper diffuse term for area lights than a specular one. For the specular, all you have to do is trace the area light with your reflection vector (which you already have, named reflectDir). If the trace succeeds, then you can use the same reflection vector to calculate a Phong specular. If it doesn't succeed, you have to find the point on the area light surface closest to the reflection ray. Your new reflection ray will then be closestPoint - in.posEye in your case; just proceed to calculate the Phong specular with that. For tube lights, you can find the closest point on the light surface by first finding the closest point to the reflection ray on the tube line segment:

        // P0 and P1 are the tube line segment endpoints
        // surface.P is the surface position (the start point of the reflection ray)
        // R is the reflection ray
        float3 L0 = P0 - surface.P;
        float3 L1 = P1 - surface.P;
        float3 Ld = L1 - L0;
        float t = dot(R, L0) * dot(R, Ld) - dot(L0, Ld);
        t /= dot(Ld, Ld) - sqr(dot(R, Ld)); // sqr is just x*x
        float3 L = L0 + saturate(t) * Ld;

    Once you have the closest point on the segment, place a sphere on that point with the radius of the tube and pick the closest point on the sphere:

        float3 centerToRay = dot(L, R) * R - L;
        float3 closestPoint = L + centerToRay * saturate(light.GetRadius() / length(centerToRay));
        L = normalize(closestPoint);

    And now you can use L as the new reflection vector as input to your Phong specular term. Good luck!
  12. NSight captured RTVs are black

    But Nsight has a profiler too. I use it quite often. There is a nice blog post on how to use it: https://devblogs.nvidia.com/the-peak-performance-analysis-method-for-optimizing-any-gpu-workload/
  13. NSight captured RTVs are black

    Never mind, I just checked the simplest geometry in the font rendering, which is AoS as well; still nothing. Nsight dev support said they are working on it, but that was like a year ago already...
  14. It doesn't work if you want to rasterize into a 3D texture in a single pass anyway. The fact that you change the axis of rasterization per triangle seems to mess it up. Though maybe a 3-pass voxelization would be worth it, so you avoid the geometry shader and can use this conservative rasterization feature; something like the sketch below.
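      With one pass per axis, the swizzle can move from the geometry shader into the vertex shader, selected by a per-pass constant. A rough sketch (the constant buffer layout and names are made up):

        // One draw per axis: the CPU sets Axis to 0, 1 or 2 before each pass.
        // The whole pass rasterizes along a single fixed axis, so hardware
        // conservative rasterization remains usable (no geometry shader needed).
        cbuffer VoxelPass : register(b0)
        {
            uint Axis;          // 0 = X, 1 = Y, 2 = Z
            float GridHalfSize; // half extent of the voxel grid in world units
        };

        struct VSOutput { float4 pos : SV_POSITION; float3 posW : POSITION3D; };

        VSOutput main(float3 posW : POSITION)
        {
            VSOutput o;
            o.posW = posW; // the pixel shader writes the voxel from this

            float3 p = posW;
            if (Axis == 0)      p = p.yzx; // look down the X axis
            else if (Axis == 1) p = p.zxy; // look down the Y axis
            // Axis == 2: already looking down Z

            o.pos = float4(p.xy / GridHalfSize, 0, 1); // simple ortho projection
            return o;
        }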
  15. NSight captured RTVs are black

    Thanks, you don't have to check my engine lol, I already checked on a bunch of PCs with the same result. So you have the same issue. The strange thing is that the data can be viewed correctly in the resource view, but only in a raw format, without being reinterpreted according to the input layout. I am sort of OK with that, but it makes things a bit more difficult. I recall it broke somewhere around the time I rewrote my vertex buffers to be deinterleaved positions, texcoords, etc. Do your vertex buffers have all the properties interleaved or not, e.g. an SoA or AoS layout?