Vilem Otte

GDNet+ Basic
  • Content count: 733
  • Joined
  • Last visited

Community Reputation

2953 Excellent

1 Follower

About Vilem Otte

  • Rank
    Crossbones+

Personal Information

  • Interests
    Art

Social

  • Twitter
    VilemOtte
  • Github
    Zgragselus
  • Steam
    Zgragselus

Recent Profile Visitors

22846 profile views
  1. DX11 having problems debugging SSAO

    While just briefly reading the code (it's quite hard to say what is going on - your SSAO calculation doesn't look correct to me, though), here are a few notes that might lead you to where the issue is:

    - Make sure you know which space you are in - world space, view space, object space, etc. Getting this wrong is one common cause of view-dependent errors.
    - Do NOT multiply by random constants that make it "look good" - make sure each constant has a reason to be there, and put that reason in a comment.
    - Compare everything - you can write out 'view space normals', 'view space position', etc. into another buffer when generating the G-Buffer, and compare them against your reconstruction. This way you can prove that your input data is correct.

    Now, for the SSAO itself:

    - Make sure you're sampling in the hemisphere ABOVE the point, in the direction of the normal. With your specified vectors you will also attempt to sample in the opposite hemisphere.
    - You will need some randomization (otherwise you will need a lot of samples to make the SSAO look like anything resembling SSAO).

    I also recommend checking out other shaders doing SSAO - e.g. on ShaderToy - https://www.shadertoy.com/view/4ltSz2 - it might help you find what is wrong on your side (I'm intentionally adding it here so you can compare the actual SSAO calculation, as yours seems incorrect to me).
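    To illustrate the hemisphere point above, here is a minimal sketch (my addition, not from the original reply) of generating a randomized sample kernel in the +Z hemisphere of tangent space; the shader would then rotate these samples onto the per-pixel normal, so none of them end up below the surface. All names are hypothetical:

        #include <cmath>
        #include <random>
        #include <vector>

        struct Float3 { float x, y, z; };

        // Random unit directions in the +Z hemisphere, scaled so that
        // samples cluster near the shaded point (a common SSAO trick).
        std::vector<Float3> GenerateSSAOKernel(size_t count)
        {
            std::mt19937 rng(1337);
            std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
            std::uniform_real_distribution<float> distZ(0.0f, 1.0f);

            std::vector<Float3> kernel;
            kernel.reserve(count);
            while (kernel.size() < count)
            {
                Float3 s = { dist(rng), dist(rng), distZ(rng) }; // z >= 0
                float len = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
                if (len < 1e-4f || len > 1.0f)
                    continue; // rejection-sample the unit hemisphere

                // Normalize, then pull samples towards the origin.
                float scale = (float)kernel.size() / (float)count;
                scale = (0.1f + 0.9f * scale * scale) / len;
                kernel.push_back({ s.x * scale, s.y * scale, s.z * scale });
            }
            return kernel;
        }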
  2. DX12 Texture min filter not working

    You've understood one thing wrong - how magnification and minification filtering work. During magnification, the number of texels you need to read per processed pixel is always 1 (in case of a point filter) or 4 (in case of a bilinear filter). During minification, the further away the object is, the more texels of the original texture fall into a single pixel... if your object fits into one pixel, an implementation that doesn't require mipmaps would have to read all texels of the texture for that single pixel (Direct3D is NOT such an implementation) - that would be far too slow. So what you need are mipmaps for the texture. In short: you require mipmaps for your minification filtering to work.
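    For reference, a minimal D3D12 sketch (my addition, not from the original reply) of a trilinear sampler; with a filter like this, minification reads from the smaller mip levels, so the texture resource must actually contain (and have uploaded) a full mip chain. The function name and parameters are hypothetical:

        #include <d3d12.h>

        // Trilinear sampler: linear filtering within and across mip levels.
        void CreateTrilinearSampler(ID3D12Device* device,
                                    D3D12_CPU_DESCRIPTOR_HANDLE handle)
        {
            D3D12_SAMPLER_DESC desc = {};
            desc.Filter = D3D12_FILTER_MIN_MAG_MIP_LINEAR;
            desc.AddressU = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
            desc.AddressV = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
            desc.AddressW = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
            desc.MaxAnisotropy = 1;
            desc.ComparisonFunc = D3D12_COMPARISON_FUNC_NEVER;
            desc.MinLOD = 0.0f;
            desc.MaxLOD = D3D12_FLOAT32_MAX; // allow the whole mip chain
            device->CreateSampler(&desc, handle);
        }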
  3. Shadows

    I was actually trying to avoid this topic as long as possible, but with my current updates to the lighting system I had to bump into it, and of course write about it.

    When designing my lighting system I kept in mind a few things I wanted to achieve with it: multiple light sources are a must have (especially when using deferred shading); all of them need to be usable throughout the stages (global illumination, reflections, etc.); and I require all lighting to be dynamic and capable of casting shadows. Other than that, multiple light types have to be supported - point, spot, directional and area light sources, all of them shadow casting.

    While the lighting system is still not complete, shadows pretty much are. Or to be precise, shadow filtering is - a user needs to be able to select which shadow filtering method a light is going to use (depending on the light's importance, range and strength). So let's take a look at a few examples of the available shadow filters:

    Standard shadow mapping & PCF shadow mapping

    Bilinear shadow mapping & PCF + Bilinear shadow mapping

    These are the standard, well-known and commonly used filters. Their advantage is that they are fast - and therefore usable for all lights. Yet they don't look particularly realistic - especially for area lights; that is where more advanced filters are required.

    Percentage closer soft shadows & Mip-map penumbrae shadow maps

    These are advanced filters for area-based shadow maps, and they allow for a variable penumbra size. Note that the images might have a slightly too low bias, resulting in small artifacts (I didn't really fine-tune anything for taking the images).

    These shadow filters should cover most scenarios I can imagine (but technically, if any other is required, it can be added).
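    As background for the PCF variants above (my addition, not part of the original entry): in D3D, basic hardware PCF is driven by a comparison sampler - SampleCmp in the shader then returns a bilinearly filtered 0..1 occlusion value instead of a raw depth. A minimal sketch, with hypothetical names:

        #include <d3d12.h>

        // Comparison sampler for hardware PCF: the shadow-map depth is
        // compared against a reference value and the pass/fail results
        // are bilinearly filtered.
        void CreatePCFSampler(ID3D12Device* device,
                              D3D12_CPU_DESCRIPTOR_HANDLE handle)
        {
            D3D12_SAMPLER_DESC desc = {};
            desc.Filter = D3D12_FILTER_COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
            desc.AddressU = D3D12_TEXTURE_ADDRESS_MODE_BORDER;
            desc.AddressV = D3D12_TEXTURE_ADDRESS_MODE_BORDER;
            desc.AddressW = D3D12_TEXTURE_ADDRESS_MODE_BORDER;
            desc.BorderColor[0] = 1.0f; // outside the map counts as lit
            desc.BorderColor[1] = 1.0f;
            desc.BorderColor[2] = 1.0f;
            desc.BorderColor[3] = 1.0f;
            desc.ComparisonFunc = D3D12_COMPARISON_FUNC_LESS_EQUAL;
            desc.MaxLOD = D3D12_FLOAT32_MAX;
            device->CreateSampler(&desc, handle);
        }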
  4. DX12 MSAA in DX12?

    You don't need additional command lists, etc. - a pipeline state is enough. You can create 2 pipeline states and switch between them in code with ID3D12GraphicsCommandList::SetPipelineState(ID3D12PipelineState* pPipelineState), in a way like:

        if (msaa)
        {
            cmdList.SetPipelineState(msaaPSO);
            ...
        }
        else
        {
            cmdList.SetPipelineState(nonMsaaPSO);
            ...
        }
  5. DX12 MSAA in DX12?

    Yes, you will need an additional pipeline state (and therefore shaders, and most likely a root signature) and additional buffers (color and depth).

    As for matching msaaTexture - I have a root signature with 1 descriptor table, which contains 1 descriptor range. This range contains just 1 descriptor of type D3D12_DESCRIPTOR_RANGE_TYPE_SRV. The SRV is created in the standard way with CreateShaderResourceView; the resource passed in is the committed resource for the color buffer.

    I might read the code later (I just briefly went through it - and it doesn't contain the additional pipeline state, buffers, etc. yet) - right now I'm just on a short lunch break at work.
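    A minimal sketch (my addition, not from the original reply) of a root signature matching that description - one table, one range, one SRV at t0; error handling omitted and names hypothetical:

        #include <d3d12.h>

        ID3D12RootSignature* CreateResolveRootSignature(ID3D12Device* device)
        {
            // One descriptor range: a single SRV bound at register t0.
            D3D12_DESCRIPTOR_RANGE range = {};
            range.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
            range.NumDescriptors = 1;
            range.BaseShaderRegister = 0;

            // One root parameter: a descriptor table holding that range.
            D3D12_ROOT_PARAMETER param = {};
            param.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
            param.DescriptorTable.NumDescriptorRanges = 1;
            param.DescriptorTable.pDescriptorRanges = &range;
            param.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

            D3D12_ROOT_SIGNATURE_DESC desc = {};
            desc.NumParameters = 1;
            desc.pParameters = &param;
            desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

            ID3DBlob* blob = nullptr;
            D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1,
                                        &blob, nullptr);
            ID3D12RootSignature* rootSignature = nullptr;
            device->CreateRootSignature(0, blob->GetBufferPointer(),
                                        blob->GetBufferSize(),
                                        __uuidof(ID3D12RootSignature),
                                        (void**)&rootSignature);
            blob->Release();
            return rootSignature;
        }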
  6. DX12 MSAA in DX12?

    I have made it, and it's actually not that hard.

    Create a pipeline state for a render target with a specific MSAA sample count and quality, like this (I've removed additional code):

        DXGI_SAMPLE_DESC samplerDesc;
        samplerDesc.Count = samplesMSAA;
        samplerDesc.Quality = qualityMSAA;

        D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = { 0 };
        ...
        desc.SampleDesc = samplerDesc;

        device->CreateGraphicsPipelineState(&desc, __uuidof(ID3D12PipelineState), (void**)&pipelineState);

    Create render target buffers that hold the rendered data (for both color & depth buffers):

        D3D12_RESOURCE_DESC desc = DescTex2D(...);
        desc.SampleDesc.Count = samplesMSAA;
        desc.SampleDesc.Quality = qualityMSAA;
        ...
        CreateTextureResource(...);
        CreateDerivedViews(...);

    Next you need a pipeline state for resolving, and I assume you want to resolve into the backbuffer of the swap chain (so you already have the color (& depth) buffers for that). This pipeline state and its texture buffers are non-MSAA, so the sample count is 1 and the sample quality is 0.

    Rendering is then straightforward - you render your scene with the MSAA pipeline state into the MSAA buffers. During resolve you attach the SRVs of those buffers and resolve in your shader, for example like this (note that SamplesMSAA is a define with the number of samples used for MSAA):

        Texture2DMS<float4, SamplesMSAA> msaaTexture : register(t0);

        cbuffer msaaDesc : register(b0)
        {
            uint2 dimensions;
        }

        struct Input
        {
            float4 position : SV_POSITION;
            float2 texCoord : TEXCOORD0;
        };

        Input VS(uint id : SV_VertexID, float3 position : POSITION, float2 texCoord : TEXCOORD0)
        {
            Input result;
            result.position = float4(position, 1.0f);
            result.texCoord = texCoord;
            return result;
        }

        float4 PS(Input input) : SV_TARGET
        {
            uint2 coord = uint2(input.texCoord.x * dimensions.x, input.texCoord.y * dimensions.y);

            float4 tex = float4(0.0f, 0.0f, 0.0f, 0.0f);
        #if SamplesMSAA == 1
            tex = msaaTexture.Load(coord, 0);
        #else
            for (uint i = 0; i < SamplesMSAA; i++)
            {
                tex += msaaTexture.Load(coord, i);
            }
            tex *= 1.0f / SamplesMSAA;
        #endif
            return tex;
        }

    I could post working code somewhere, but I have most objects in DX12 wrapped, and the whole engine attached to it - so it would probably make no sense and would be overly confusing (my most simple example does MSAA with deferred shading and post-tonemapping resolve... along with voxel cone tracing - that is a LOT of noise around).
  7. I'm not sure whether standard mipmap generation (averaging) is good/correct for signed distance fields (correct me if I'm wrong, it's 5:22 am and I've been in front of the computer for more than 20 hours... so I'm not really thinking straight). Can you show how your distance fields (incl. the higher pyramid levels - mip levels) look?
  8. DX12 DX12 and threading

    @MJP I will just add a short note about VSYNC and triple buffering. Players of competitive action games will generally disable both of them. "Missing" VSYNC (and therefore delaying the frame present for another X ms) will put you at a severe disadvantage, and triple buffering means you're basically presenting a frame that is some short time old (and yes - even though it's hard to distinguish for "us" casuals, professional players will notice the difference and the different feel). So in the end, it really matters what your target product is.
  9. Ludum Dare 40

    Since Ludum Dare 35 I've been regularly participating in every one of them, and this one wasn't an exception. My release thoughts are positive - this time I've again worked with a friend (with whom I've also worked on a past Ludum Dare), and I enjoyed it a lot. As this is not a post mortem yet, I will not go into details of what went right or wrong - meanwhile, I'll just show the results and put out a few notes...

    Yes, that's one of the screenshots from the game (without UI). I'm using this one as a "cover" screenshot - so it should also be included here. Anyways, this was another experience with Unity, and maybe one of my last Ludum Dare experiences with it. While I do like it, if I can think of a suitable game for my own engine for the theme next time, it's possible that I won't use Unity.

    Ludum Dare

    Every 4 months or so, this large game jam happens. It's a sort of competition, well... there are no prizes, and I honestly do it just for fun (and to force myself to do some "real" game development from time to time). You have 48 or 72 hours to create your game (depending on whether you go for the compo or jam category), and there are just a few basic rules (which you can read on the site - https://ldjam.com/). Then for a few weeks you play and rate other games, and the more you play, the more people will play and rate your game. While ratings aren't that important in my opinion, you get some feedback through the comments. Actually, I was wrong about no prizes - you have your game, and the feedback of other people who participate in Ludum Dare, as a prize.

    Unity...

    I've used Unity for quite a long time - and I have 2 things to complain about this time. The majority of the shaders used in Air Pressure (yes, that is the game's name) are actually custom - and I might bump into some of them in the post mortem. The combination of Unity and custom shaders is actually quite a pain, especially compared to my own engine (while it isn't as generic as Unity - actually my engine is far less complex, and maybe due to that, shader editing and the workflow are a lot more pleasant... although these are my own subjective feelings, influenced by knowing the whole internal structure of my own engine in detail). The second thing is particularly annoying and related to Visual Studio. The Unity extension for Visual Studio is broken (although I believe a recent patch released during the Ludum Dare fixed it - yet there was no time for an update during the work): each time a C# file is created, the project gets broken (IntelliSense works weirdly, Visual Studio reports errors everywhere, etc.). The only workaround was to delete the project files (solution and vcxproj) and re-open Visual Studio from Unity (which re-created the solution and vcxproj file).

    Unity!

    On the other hand, it was good for the task - we finished the game using Unity, and it was fun. Apart from the Visual Studio struggles, we didn't hit any other problem (and it crashed on us just once during the whole 72 hours of the jam - once for both of us combined). So I'm actually quite looking forward to using it for some project next time.

    Anyways, I did enjoy it a lot this time; now it's time to get back to work (not really game development related). Oh, and before I forget, here they are - the first gameplay video and a link to the game on the Ludum Dare site: https://ldjam.com/events/ludum-dare/40/air-pressure

    PS: And yes, I've actually been tweeting progress during the jam, which ended up in a feeling that I've probably surpassed the number of tweets generated by Donald Trump in the past 3 days.
  10. Having a pause from handling files and the editor is good - at least to update something in rendering and keep motivation high. So I went ahead and implemented voxel cone tracing global illumination (and reflections, of course). Anyways, image time:

    Although quite dark, secondary shadows are visible. Note, global illumination is fully integrated into the editor. Reflective box, global illumination debug buffer (in the Debug window), and color bleeding visible from the spotlight.

    Anyways, so much for the show - how is it done? In short:

    - The scene is voxelized; during this phase, lights and shadows are injected.
    - The reflection pass performs cone tracing; the cone angle is defined by material properties.
    - The GI pass performs cone tracing for global illumination.
    - The lighting pass has 1 fullscreen quad for indirect light (and reflections), and then 1 for each light (which I'd like to replace with a tile-based method).

    The resolution of the reflection and GI passes can be anything (so even sub-sampling can be done). In the images, the scene is voxelized into a 512x512x512 buffer, the reflection & GI passes are done at FullHD with 1x MSAA, and the lighting pass is done with 8x MSAA. The original G-Buffer generation is done at 8x MSAA. Everything is resolved later (actually even after the tone mapping pass). The cone-to-mip relation this is built on is sketched below.

    I have an option to switch from the voxel texture to a sparse voxel octree, yet it is still heavily un-optimized (and slower), although it has a much smaller memory footprint. When I manage to find some more time for that, I'd like to switch over to the sparse voxel octree only.

    If possible, I'd like to re-visit resource management and dynamic re-loading, which would be a bit less of a 'showcase' and more of a 'coding' topic. Other than that, virtual shadow maps and virtual textures are going to be visited and attempted by me, hopefully in the next weeks.

    Side note: if you're trying to implement VXGI or voxelization on the GPU and have some questions - I'll gladly answer them. That should be it for today, thanks for reading!
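    For the curious (my addition, not part of the original entry): the core of a voxel cone trace is stepping along the cone axis and sampling the voxel mip whose texel footprint matches the cone's diameter at the current distance. A minimal sketch of that relation, with hypothetical names:

        #include <algorithm>
        #include <cmath>

        // Diameter of a cone at 'distance' for a given full aperture angle.
        float ConeDiameter(float distance, float apertureRadians)
        {
            return 2.0f * distance * std::tan(apertureRadians * 0.5f);
        }

        // Mip level whose footprint matches the cone diameter: mip 0 texels
        // are 'voxelSize' wide, and each further mip doubles the footprint.
        float ConeTraceMipLevel(float distance, float apertureRadians,
                                float voxelSize)
        {
            float diameter = ConeDiameter(distance, apertureRadians);
            return std::max(0.0f, std::log2(diameter / voxelSize));
        }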
  11. HLSL Geometry Shader issue

    And dang! Magic happened. I updated the driver from version 17.11.1 to 17.11.2 (I believe the actual driver number is a different one than the one shown in the Radeon application) -> and magically it works! To be precise, BOTH things work - GPU-based validation no longer seems to crash, and passing the structure out of the geometry shader doesn't seem to crash anymore either.
  12. HLSL Geometry Shader issue

    @MJP I tried both - gs_5_0 (no success with this one) and also enabling the GPU validator. The GPU validator told me nothing; the crash was exactly the same. What is interesting is that enabling the GPU validator introduced a random failure of CreateCommittedResource - it fails with:

        0x887a0005 - The video card has been physically removed from the system, or a driver upgrade for the video card has occurred. The application should destroy and recreate the device. For help debugging the problem, call GetDeviceRemovedReason.

    So I went ahead and called GetDeviceRemovedReason:

        0x887a0007 - The device failed due to a badly formed command. This is a run-time issue; The application should destroy and recreate the device.

    The funny thing is, this error is only experienced at random with SetEnableGPUBasedValidation set to TRUE. I've also checked where it was crashing - it was during the load phase in CreateCommittedResource, and the parameters seem valid (and the same each run; the crash is random, though!). Weird... I haven't tried the new compiler, but it might be worth trying (I did try passing flags to D3DCompileFromFile to disable optimizations, etc. - without any success or message).

    @CortexDragon I thought the same, so I gave it a shot and changed the structure to:

        struct Geom2Frag
        {
            float4 mPosition : SV_POSITION;
            nointerpolation float4 mAABB : AABB;
            float4 mNormal : TEXCOORD1;
            float4 mTexCoord : TEXCOORD0;
            nointerpolation uint4 mAxis : AXIS;
            float4 temp : TEXCOORD2;
        };

    This way it's 96 bytes, and there shouldn't be any alignment problems. And yes, it still crashes.

    I'm trying one additional thing - I noticed that I'm currently not running the most recent AMD drivers (an update was released about 7 days back; I'm still on the older version). Let me quick-try the driver update.
  13. So, I've been playing a bit with geometry shaders recently and I've found a very interesting bug; let me show you the code example:

        struct Vert2Geom
        {
            float4 mPosition : SV_POSITION;
            float2 mTexCoord : TEXCOORD0;
            float3 mNormal : TEXCOORD1;
            float4 mPositionWS : TEXCOORD2;
        };

        struct Geom2Frag
        {
            float4 mPosition : SV_POSITION;
            nointerpolation float4 mAABB : AABB;
            float3 mNormal : TEXCOORD1;
            float2 mTexCoord : TEXCOORD0;
            nointerpolation uint mAxis : AXIS;
            float3 temp : TEXCOORD2;
        };

        ...

        [maxvertexcount(3)]
        void GS(triangle Vert2Geom input[3], inout TriangleStream<Geom2Frag> output)
        {
            ...
        }

    As soon as I have this Geom2Frag structure, there is a crash; to be precise, the only message I get is:

        D3D12: Removing Device.

    Now, if Geom2Frag's last attribute is of type float2 instead (hence the structure is 4 bytes shorter), there is no crash and everything works as it should. I tried to look at the limitations of the Shader Model 5.1 profiles - and I either overlooked one for geometry shader outputs (which is more than possible - MSDN is confusing in many ways... but a 64-byte limit seems way too low), or the shader compiler does something iffy for me. Any ideas why this might happen?
  14. That first triangle.

    Rushing towards a prototype of anything I do as soon as possible, to gain more motivation. My long-term hobby project has quite a large code base so far... yet I'm adding things progressively and visualizing everything I do - it tends to be very rewarding, and it pushes you forward when you can see something working visually. When doing projects from scratch, e.g. for game jams, I tend to build a prototype ASAP and then focus on specific gameplay aspects, adding good (audio)visuals with animations/physics to them (literally one object/effect at a time). It just makes me happy when I see a thing I prototyped hours back look and feel the way it was supposed to.
  15. Dynamic resource reloading

    Making editors is a pain. I have a list of thousands of items I'd rather do than this - yet I made myself a promise to drag at least one full-featured editor tool over the finish line. There are a few reasons for that:

    - I believe I have quite a useful engine. It has been my pet project all these years, it went through many transformations and stages - and a solid tool is something like a goal I'd like to reach with it, to make it something better than "just a framework".
    - I'm a very patient person, and I believe also a hard working one. Throughout the years my goal has been to make a game on my own engine (note, I've made games with other engines, and I've used my engine for multiple non-game projects so far -> it eventually branched into a full-featured commercial project in the past few years). I've made a few attempts, but I was mostly stopped by lacking such a tool - one that would allow me to build scenes and levels in an easy way.
    - And the most important one... I consider tools one of the hardest parts of making any larger project, so it is something like a challenge for me.

    Anyways, so much for motivation. The tool is progressing well - it can already be used to assemble a scene, and various entities (like lights or materials) can have their properties (components) modified, with a full undo/redo system of course. And so the next big part was ahead of me - asset loading and dynamic reloading. Here are the results:

    Engine editor and texture editor before my work on the texture. And then I worked on the texture:

    And after I used my highly professional programmer-art skills to modify the texture! All credits for the GameDev.net logo go to its author!

    Yes, it's working. The whole system needs a bit of cleanup - but in short, this is how it works:

    - All textures are managed by a Manager<Texture> class instance, which is defined in the Editor class.
    - There is a thread waiting for changes on the hard drive with ReadDirectoryChangesW (a sketch follows below).
    - Upon a change in the directory (or subdirectories), a DirectoryTree class instance is notified. It updates the view in the bottom left (which is just a directory-file structure for the watched directory and subdirectories), and for modified/new files it also creates or reloads records in the Manager<Texture> class instance (on the Editor level).
    - The trick is, reloading the records can only be done while they're not in use (so some clever synchronization needs to be done).

    I might write out some interesting information, or even a short article, on this. Implementing it was quite a pain, but it's finally done. Now a short cleanup - and on to the next item on my editor todo list!

    Thanks for reading & see you around!
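    As a reference for the watcher thread mentioned above, a minimal sketch (my addition, heavily simplified - blocking call, a single directory handle, and no synchronization with the resource manager):

        #include <windows.h>
        #include <string>

        // Blocks on ReadDirectoryChangesW and walks the reported entries.
        // Real code would run this on its own thread and signal the
        // resource manager instead of just collecting names.
        void WatchDirectory(const wchar_t* path)
        {
            HANDLE dir = CreateFileW(path, FILE_LIST_DIRECTORY,
                FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
            if (dir == INVALID_HANDLE_VALUE)
                return;

            alignas(DWORD) BYTE buffer[4096];
            DWORD bytes = 0;
            while (ReadDirectoryChangesW(dir, buffer, sizeof(buffer),
                TRUE, // watch subdirectories too
                FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                &bytes, NULL, NULL))
            {
                FILE_NOTIFY_INFORMATION* info = (FILE_NOTIFY_INFORMATION*)buffer;
                for (;;)
                {
                    // FileNameLength is in bytes; the name is not null-terminated.
                    std::wstring name(info->FileName,
                                      info->FileNameLength / sizeof(WCHAR));
                    // ... notify DirectoryTree / Manager<Texture> about 'name' ...

                    if (info->NextEntryOffset == 0)
                        break;
                    info = (FILE_NOTIFY_INFORMATION*)
                           ((BYTE*)info + info->NextEntryOffset);
                }
            }
            CloseHandle(dir);
        }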