Vilem Otte

GDNet+ Basic
  • Content count: 715
  • Joined
  • Last visited

Community Reputation

2941 Excellent

About Vilem Otte

  • Rank
    Crossbones+

Personal Information

  • Interests
    Art

Social

  • Twitter
    VilemOtte
  • Github
    Zgragselus
  1. DX11 Set Lighting according to Time of the day

    Or, if you want to be more precise: https://midcdmz.nrel.gov/spa/
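    For the lighting itself, once you have the solar azimuth and elevation for a given time and location (the SPA above gives you azimuth and zenith, elevation is just 90 minus zenith), turning them into a directional-light vector is only a spherical-to-Cartesian conversion. A minimal sketch (the function name and the Y-up convention are my own assumptions, not part of the SPA):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // azimuthDeg: measured clockwise from north, elevationDeg: angle above the horizon.
        // Returns a unit vector pointing from the scene towards the sun (Y is up).
        Vec3 SunDirectionFromAzEl(float azimuthDeg, float elevationDeg)
        {
            const float deg2rad = 3.14159265f / 180.0f;
            float az = azimuthDeg * deg2rad;
            float el = elevationDeg * deg2rad;

            Vec3 toSun;
            toSun.x = std::cos(el) * std::sin(az);
            toSun.y = std::sin(el);
            toSun.z = std::cos(el) * std::cos(az);
            return toSun; // negate it to get the light's travel direction
        }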
  2. Dealing with frustration

    I would say: "No project is too big, and no project is small enough". The key point is motivation, and it is different for every single developer. Some take a few days off, or go on holiday. Some take a few days off explicitly to spend time with family. Some just switch to another project. ... Some even build rockets and other crazy stuff in real life. How to keep yourself motivated? Well, for a start, switch over to making something visual and short-term (the last thing I did was playing with variance shadow maps ... seeing that in motion in your own D3D12 engine encourages you to keep working on it!):
  3. Ludum Dare 39 - Release thoughts

    Yet another Ludum Dare has come around, and this time I've participated without a real team on the development side (I had some help from my sister with the audio art, ideas, gameplay balancing and user interface). Before publishing the full post-mortem, I'd like to provide a link: https://ldjam.com/events/ludum-dare/39/im-an-nuclear-engineer The last 3 Ludum Dares have really encouraged me to start and finish a more serious game project, although I'm still thinking about it...
  4. Need a script for Main Menu buttons

    So, as I've already created some games in Unity, I'll describe a simple example of how I'd do it in Unity these days (and most likely will do it the next time I use Unity for something). Here is a simple example I put together in a few minutes: https://otte.cz/random/MainMenu.rar

    Short description: what you need is some GameObject in the scene which has a script containing public functions that act as handlers for the buttons. The class holding these must derive from MonoBehaviour, like the one in the archive:

        using System.Collections;
        using System.Collections.Generic;
        using UnityEngine;
        using UnityEngine.SceneManagement;

        public class MainMenuController : MonoBehaviour
        {
            // Function called for a button; must be public and take at most one string argument
            public void NewGameHandler()
            {
                // Load another scene named "Game"; scenes must be added in Build Settings!
                SceneManager.LoadScene("Game");
            }

            // Function called for a button; must be public and take at most one string argument
            public void QuitHandler()
            {
                // Exit the application
                Application.Quit();
            }
        }

    Now, on the buttons, add an On Click record, where you attach (drag & drop, or just click + select) the game object with the above MonoBehaviour attached. Next, select (from the event drop-down) the MonoBehaviour class name, and under it there will be a function (NewGameHandler, QuitHandler)... note that the drop-down has 2 levels, so it is easy to miss. It is all set up in the project in the archive, so feel free to use anything from it.
  5. Depth of Field Aperture Help

    Wait... you don't do anything to your secondary rays. The algorithm for DoF in ray tracing works as follows: start by casting a ray from the camera into the scene, but instead of intersecting it with anything, just determine the focal point for the given pixel (the pixel's focal point). Now, for N samples, do:
    - Select a random starting point on the aperture. For a circular aperture use a random polar angle + random radius; for a rectangular one just use some distribution over the rectangle. Just make sure the probability distribution functions for your random number generator integrate to 1.
    - Cast a ray through this point on the aperture and through the focal point, like you do normally, reflecting/refracting it around the scene.
    I can put up a smallpt-like example in a few hours from now (I still have something to finish at work).
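    In the meantime, here is a rough sketch of the idea in C++ (not the promised smallpt example - the Vec3 helpers and the TraceRadiance stub are just placeholders to keep it self-contained):

        #include <cmath>
        #include <cstdlib>

        struct Vec3 { float x, y, z; };
        Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
        Vec3 Normalize(Vec3 v) { float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); return v * (1.0f / l); }
        float Rand01() { return (float)std::rand() / (float)RAND_MAX; }

        // Placeholder for your path tracer - plug in your own intersection/shading here
        Vec3 TraceRadiance(Vec3 origin, Vec3 direction) { return { 0.5f, 0.7f, 1.0f }; }

        // camPos    - camera position
        // pixelDir  - normalized primary ray direction for this pixel
        // right, up - camera basis vectors spanning the aperture (lens) plane
        // focalDist - distance along pixelDir at which objects are in focus
        // aperture  - lens radius; 0 gives a pinhole camera (no DoF)
        Vec3 ShadePixelDoF(Vec3 camPos, Vec3 pixelDir, Vec3 right, Vec3 up,
                           float focalDist, float aperture, int samples)
        {
            // 1) Focal point for this pixel - no intersection needed
            Vec3 focalPoint = camPos + pixelDir * focalDist;

            Vec3 sum = { 0.0f, 0.0f, 0.0f };
            for (int i = 0; i < samples; i++)
            {
                // 2) Random point on a circular aperture (polar angle + radius)
                float angle = 2.0f * 3.14159265f * Rand01();
                float radius = aperture * std::sqrt(Rand01()); // sqrt keeps the disc sampling uniform
                Vec3 lensPos = camPos + right * (radius * std::cos(angle))
                                      + up * (radius * std::sin(angle));

                // 3) Ray from the aperture sample through the focal point
                Vec3 dir = Normalize(focalPoint - lensPos);
                sum = sum + TraceRadiance(lensPos, dir);
            }
            return sum * (1.0f / (float)samples);
        }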
  6. I can't point you to a book, but what you're trying to do is simulate some physics behavior. So the real question from me is: are you trying to (a) simulate the actual effect (in which case it will most likely also have an impact on gameplay), so that the effect itself behaves realistically, or (b) fake the effect and just render something that looks good/realistic? There is quite a huge difference between the two. In the first case, the actual rendering is most likely a minor problem - due to the nature of the simulation it tends to be easy to render, because you can use information that is part of the simulation itself. In the second case, it tends to be easy to make 'good looking particles' for literally anything, while it is extremely hard to make them move and animate like the realistic effect (basically to 'fake' the simulation part).
  7. Voxel Cone Tracing - Octrees

    The problem with interpolation is that I can't do it beforehand. I have a set of elements from which I'm building a tree (those are voxels, generated for the scene). While I know the position of each voxel, I have no idea about its surroundings (they can be inserted into the set in random order); all I know is their position and data (color, normal, whatever I wish to store). Adding voxels with interpolation would mean that I need to find neighbors in this set (which is actually easily done once the octree building is finished). This will be quite hard, as the tree can't easily be modified.

    EDIT: Thinking about this - yes, technically it is possible to add nodes; removing them might be more tricky, but it is also possible ... as each node actually represents a sparse octree itself. I believe this might be a decent way to handle inserting dynamic objects too ... or maybe different levels of quality (wherever necessary), making this usable even for the 'ball in a stadium' case.

    Each of the elements is inserted into the tree (the tree I generate is sparse), and doing any form of refinement will be hard, if not impossible, once the tree is generated (it uses a parallel algorithm for that):
    - Start with the root node (levels = 1)
    - Loop (for i = 0 to levels):
      - For each element - traverse into the current level and flag which nodes are to be split
      - For each node in the current level - if the node is flagged, generate 8 child nodes
      - Allocate the generated child nodes
      - Repeat the loop
    So I can't really change the tree once it is finished.

    The point is, I realized that bricks are something different from what I thought they were. The value in the centre is the most important one, and the 26 values surrounding the centre are duplicated (yes, it is "memory overkill" in this sense)! The correct version of the previous images is (and should be): [sharedmedia=gallery:images:8649]

    The mipmapping part seemed tricky at first (and my algorithm was wrong). What I should actually do is, for the child nodes, use their centre values (and the values between the centres - which is 3x3x3 values in total, not 4x4x4!) and average these into the parent node's centre. Then fill in the boundaries of the parent node's brick. This will need an additional step to perform the interpolations like I did for the leaves, to guarantee that the border values in the bricks are correct. [sharedmedia=gallery:images:8650]

    It sounds quite complicated though (I didn't have time to measure timing figures yet, as the code is still work in progress, so hopefully it won't be that big a problem to compute).

    EDIT2: Yes, I can confirm that it now works quite properly (even the AO is smooth). The last part I'd like to investigate is the traversal.

    The sampling approach is of course extremely fast if the sample count is low, but as stated - unreliable. Increasing the sample count can solve the problems (basically oversampling everything), but it tends to be quite heavy. And what's the point of using an SVO when I'm just brute-forcing through it, right? That has O(n) complexity.

    I've tried ray-AABB intersections and stepping down the tree (using a stack-based approach ... basically something similar to what one would use for BVH ray tracing). While reliable, it is extremely heavy if the nodes aren't pushed onto the stack in the correct order (which doesn't seem as straightforward as I thought it would be). This should theoretically have O(log(n)) complexity, assuming I push the children onto the stack in the order that processes them "along the ray".

    I'd also like to try a DDA-like approach (as I can determine each time at which level the empty intersected node sits, it should be straightforward to step accordingly to the next node); as stated previously - if implemented correctly, it should have O(log(n)) complexity for finding the next node along the "ray", and
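    For reference, here is a rough CPU-side sketch of what I mean by the stack-based variant (the Node layout, RayAABB and TraverseSVO below are simplified placeholders, not my actual GPU code) - the important bit is sorting the intersected children far-to-near before pushing them, so they get popped and processed "along the ray":

        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };

        struct Node
        {
            Vec3 min, max;   // node AABB
            int  children;   // index of the first of 8 consecutive children, -1 for a leaf
            bool hasData;    // leaf with a brick attached
        };

        // Standard slab test; returns the entry distance in tMin on a hit
        bool RayAABB(Vec3 o, Vec3 invDir, Vec3 bmin, Vec3 bmax, float& tMin)
        {
            float t0x = (bmin.x - o.x) * invDir.x, t1x = (bmax.x - o.x) * invDir.x;
            float t0y = (bmin.y - o.y) * invDir.y, t1y = (bmax.y - o.y) * invDir.y;
            float t0z = (bmin.z - o.z) * invDir.z, t1z = (bmax.z - o.z) * invDir.z;
            float tNear = std::max({ std::min(t0x, t1x), std::min(t0y, t1y), std::min(t0z, t1z) });
            float tFar = std::min({ std::max(t0x, t1x), std::max(t0y, t1y), std::max(t0z, t1z) });
            tMin = std::max(tNear, 0.0f);
            return tFar >= tMin;
        }

        // Returns the index of the nearest node with data along the ray, or -1
        int TraverseSVO(const Node* nodes, int root, Vec3 o, Vec3 dir)
        {
            Vec3 invDir = { 1.0f / dir.x, 1.0f / dir.y, 1.0f / dir.z };
            int stack[64];
            int top = 0;
            stack[top++] = root;

            while (top > 0)
            {
                int nodeId = stack[--top];
                const Node& n = nodes[nodeId];
                if (n.children < 0)
                {
                    // Octree children never overlap, so the first data node popped
                    // this way is also the nearest one along the ray
                    if (n.hasData) return nodeId;
                    continue;
                }

                // Collect intersected children together with their entry distances
                int childId[8]; float childT[8]; int count = 0;
                for (int i = 0; i < 8; i++)
                {
                    float t;
                    const Node& c = nodes[n.children + i];
                    if (RayAABB(o, invDir, c.min, c.max, t))
                    {
                        childId[count] = n.children + i;
                        childT[count] = t;
                        count++;
                    }
                }

                // Sort far-to-near and push, so the nearest child is popped first
                for (int i = 0; i < count; i++)
                    for (int j = i + 1; j < count; j++)
                        if (childT[j] > childT[i])
                        {
                            std::swap(childT[i], childT[j]);
                            std::swap(childId[i], childId[j]);
                        }
                for (int i = 0; i < count; i++)
                    stack[top++] = childId[i];
            }
            return -1;
        }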
  8. Posts sketches

  9. Voxel Cone Tracing - Octrees

    And no, I still don't have the filtering right. I noticed that 2 things cause problems; the first one is filtering inside the leaves. Here is a pseudo-example: [sharedmedia=gallery:images:8641]

    This is our starting point - the semi-transparent nodes are a visualization of the part of the tree that is empty (therefore those nodes are not really there, and of course we can detect that they are empty). The top-left 4 nodes are full. The bottom-left 4 nodes are also full (some with data that have alpha = 0). I will demonstrate filtering along the X-axis (as described in the paper). So, the first step is: [sharedmedia=gallery:images:8642]

    Writing data to the right neighbor (of course, as the tree is sparse, we can't really access the 'transparent' - non-existing - nodes). This will be the result: [sharedmedia=gallery:images:8643]

    Now, as the data in the right-most voxels need to be the same as in the left-most voxels, we need to copy from right back to left, like: [sharedmedia=gallery:images:8644]

    And the problem is: [sharedmedia=gallery:images:8646]

    Obviously, as we can't write into non-existent nodes (due to the sparse nature of the tree), the values won't match (even when we assume the value had alpha = 0 in the previous steps). All the data neighboring non-existent nodes will be invalid, ending up as a border. The same problem is hit when computing the interior nodes (the mip-mapped ones) - they will not match properly, resulting in non-smooth ambient occlusion like this: [sharedmedia=gallery:images:8648]

    The paper sadly doesn't address this problem at all (it describes the scenario where the tree is dense, in which case it works properly, but as soon as the tree is sparse, the problems arise). Any idea how to solve this properly? Obviously the leaf nodes can be handled by detecting non-existent nodes around the currently processed node and setting the values to match (in all 6 directions). But how to perform the 'mip-mapping' afterwards (e.g. how to compute the interior node values in a way that makes sense)?

    My apologies for the double post!
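    To make the dense case of the steps above concrete, a tiny sketch of the X-axis pass between one pair of neighboring 3x3x3 bricks (the Brick struct and the sum-then-copy transfer are simplified placeholders, not the actual compute kernels):

        struct Brick
        {
            // RGBA per voxel, indexed [x][y][z], 3x3x3 per node
            float color[3][3][3][4];
        };

        // Step 1: add the left brick's +X border into the right brick's -X border,
        // Step 2: copy the combined value back so both borders hold identical data.
        void FilterBrickPairX(Brick& left, Brick& right)
        {
            for (int y = 0; y < 3; y++)
                for (int z = 0; z < 3; z++)
                    for (int c = 0; c < 4; c++)
                    {
                        float sum = left.color[2][y][z][c] + right.color[0][y][z][c];
                        left.color[2][y][z][c] = sum;
                        right.color[0][y][z][c] = sum;
                    }
        }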
  10. Voxel Cone Tracing - Octrees

    So, work in progress. [sharedmedia=gallery:images:8640]

    I believe I've got the filtering and brick building correct as of now (I've even checked the 3D texture and it seems correct to me). Anyway, I'm using the sampling traversal for now (which is incorrect, as it doesn't give me any advantage from using an octree), but I had to switch back to it (for testing the filtering) due to having something wrong in the ray-octree one (which I'll need to update to cone-octree anyway). The whole SVO is built on the GPU as of now, and the steps are the following:

        // Build the hierarchy (similar to how it is done in the papers)
        for (i = 0; i < levels; i++)
        {
            FlagNodes();
            AllocNodes();
            InitNodes();
        }

        // Fill the base level with data (as per your description, just done in 3D)
        FillLeaves();

        // Filter the bricks (as we need neighbor information!)
        FilterLeaves();

        // Build the interior nodes (interpolate from lower levels)
        for (i = 0; i < levels; i++)
        {
            BuildInterior(levels - i - 1);
        }

    It works like a charm and is also very fast (I will do actual measurements shortly). I am over-allocating memory a bit for now, as I don't need log2(dimensions) but log2(dimensions)-1 levels of tree depth - the leaves (bricks) actually store one whole level in the 3D texture. I'm going to give figures for performance and memory once I solve the traversal part (which I'm digging into now). And I haven't even started optimizing yet! Thanks for the advice in this thread. If everything is successful, my next post will go into a blog here on GameDev, with the figures and maybe some demo/code to show.
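    As a simplified illustration of the 'interpolate from lower levels' step, this is the kind of reduction I mean for a parent brick's centre - a plain box average over the merged child-centre region (single channel and equal weights are my own simplification, not the actual kernel; the parent's border voxels still need the filtering pass afterwards):

        struct Brick { float v[3][3][3]; }; // one channel only, for brevity

        // children are the 8 child bricks of the parent node, indexed [x][y][z]
        float MipParentCentre(const Brick children[2][2][2])
        {
            // Map a coordinate g in {1,2,3} of the merged child grid to a
            // (child index, in-brick index) pair; g = 2 is the shared border.
            auto pick = [](int g, int& child, int& local)
            {
                child = (g >= 2) ? 1 : 0;
                local = g - 2 * child;
            };

            float sum = 0.0f;
            for (int gx = 1; gx <= 3; gx++)
                for (int gy = 1; gy <= 3; gy++)
                    for (int gz = 1; gz <= 3; gz++)
                    {
                        int cx, lx, cy, ly, cz, lz;
                        pick(gx, cx, lx);
                        pick(gy, cy, ly);
                        pick(gz, cz, lz);
                        sum += children[cx][cy][cz].v[lx][ly][lz];
                    }
            return sum / 27.0f; // swap in your preferred filter weights here
        }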
  11. Voxel Cone Tracing - Octrees

    No worries, I got the idea. It makes sense, but you're right, it will have quite a large memory impact (yet doing trilinear filtering in the shader seems to have an even bigger impact). The octree tends to be quite sparse - for Sponza the occupancy is about 3-4% (and it tends to be grouped), so there tend to be quite a lot of empty nodes even higher in the hierarchy. Although this might not be a general rule for any scene. Anyway, time to do some coding. Thanks!
  12. Voxel Cone Tracing - Octrees

    Thanks for the response. I've read through the Quantum Break paper but it doesn't really help much. I'm not really against increasing the memory footprint, I just don't have any idea how to store 2x2x2 data in a 3x3x3 brick (e.g. so that a brick would hold all 8 voxels of a leaf). Based on my thinking it would have to be a 4x4x4 brick (a 2x2x2 interior composed of those 8 values + the border), which doesn't sound that bad.

    EDIT: They do have this paragraph:

    This doesn't really make any sense to me, as in this sense I would need a function like:

        // This is pseudo-code, but it will do as an example.
        // I'm looping through all voxels and storing them in the tree; I can find the
        // respective brick without any problem - but I have no idea how to store the data!
        //
        // 'brick' here points to the one brick in which I'm storing the voxel currently
        // being processed, 'color' is the value I want to store, and 'location' is the
        // X, Y, Z index into the voxel array
        void StoreColorInBrick(uint brick[3][3][3], uint color, uint location[3])
        {
            // Now assuming my location is:
            // location[0] % 2 determines whether the current voxel color is to the left or right (x-axis)
            // location[1] % 2 determines whether it is to the top or bottom (y-axis)
            // location[2] % 2 determines whether it is to the front or back (z-axis)
            //
            // For simplicity, let's assume I'm doing the lower left corner of some node - where do I store color?
            // Should it be brick[0][0][0]?
            // Or added to everything that belongs to the lower left (4 nodes), e.g. to brick[0..1][0..1][0..1] ... in which
            // case how do I handle the parts that are not in the corner (there will most likely be more values written)?
            // Average them? Sum them?
        }

    I believe I can make the interpolation work afterwards (for each brick, I can find the neighboring brick; if there isn't any, that means there are zeroes ~ that node is empty). I assume the interpolation would then only be for the boundary voxels - a standard sum/2 for the 2 voxels that directly neighbor each other between 2 bricks.
  13. Voxel Cone Tracing - Octrees

    Thanks for the replies. #IYP - you're correct, yes, the point is to make the process faster than going literally 'through all cells'. I believe I've got it correct to some extent. Let me share a screenshot here: [sharedmedia=gallery:images:8634]

    As you can see, the data are not really filtered - I've followed the paper: https://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-SparseVoxelization.pdf

    Before I get to my questions and notes, let me outline the algorithm here, so that you might have some idea where I did something wrong or right. My octree building algorithm goes as follows:

        Graphics queue:

        VoxelOctree::Generate()
        {
            // Simply render the scene, storing fragments (position + color) in a UAV.
            // Use a counter buffer to count the number of voxels.
            RenderScene();
        }

        Compute queue:

        VoxelOctree::BuildSVO()
        {
            levels = log2(dimensions)
            for (i = 0; i < levels; i++)
            {
                // Flag nodes that are going to be subdivided
                OctreeNodeFlag();

                // For each node on the current level that has been flagged, allocate 8 children
                OctreeNodeAlloc();

                // Initialize the children that were generated
                OctreeNodeInit();
            }

            // At this point I'm absolutely sure that the octree structure is working properly!
            // But how to fill the data???
            // Next to the octree I have a data buffer which currently has an N*rgba8 field (where N is
            // the total number of nodes in the octree). The octree node index basically matches the color
            // in this array.
            //
            // The following functions just clear the octree colors and fill a single color per node (using
            // atomics and an approximate average allows us to fill EACH node to which a voxel maps)
            OctreeNodeClear();
            OctreeNodeFill();
        }

    My understanding of bricks (e.g. how the data should be stored originally) is to allocate a large structured buffer which would have a 3x3x3 voxel space per node in the octree (that means 27 times the size of a single color per node), which would allow me to do the filtering.

    So what did I have? I simply ran another kernel for each leaf node that would search what is in the 26 voxels surrounding the center (searching the whole tree from the top, as sketched below) and fill in those values. Which tends to be quite slow. Plus I'm not entirely sure how I should filter it (I stored it in the structured buffer and attempted to do trilinear filtering, without too much success though).

    So my real question is: what should I store in those 'bricks', and how do I store & obtain the data for them (without it taking too long)? Once stored, I still need to perform the filtering (which can be done by using a 3D texture and pointing to the voxels in the center of each 3x3x3 block ~ I can figure out the indexing, but I still have a problem with obtaining the data for them and potentially also storing them in the interior nodes of the tree). The paper doesn't really mention how to fill the data into the 'bricks', and we don't really have information about ALL the neighbors (just the 7 others that are in the same parent as the current node - but what about the rest?).
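    This is the kind of top-down lookup I mean (a simplified CPU-side sketch with a placeholder Node layout, not the actual compute kernel) - descend from the root by comparing the voxel position against the node centre until a leaf is reached:

        struct Node
        {
            float centre[3];   // centre of the node's cube
            float halfSize;    // half of the node's edge length
            int   children;    // index of the first of 8 consecutive children, -1 for a leaf
        };

        // Descends from the root and returns the index of the deepest node
        // containing 'position'; the caller then checks whether it holds data.
        int FindNode(const Node* nodes, int root, const float position[3])
        {
            int nodeId = root;
            while (nodes[nodeId].children >= 0)
            {
                const Node& n = nodes[nodeId];
                // Pick the child octant by comparing against the node centre
                int childIndex = 0;
                if (position[0] > n.centre[0]) childIndex |= 1;
                if (position[1] > n.centre[1]) childIndex |= 2;
                if (position[2] > n.centre[2]) childIndex |= 4;
                nodeId = n.children + childIndex;
            }
            return nodeId;
        }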
  14. Voxel Cone Tracing - Octrees

    Thanks for the reply! My octree actually points to a 'brick' per node, incl. the leaves (which is 3x3x3 voxels stored in a 3D texture, to allow for hardware trilinear filtering). I implemented this based on the Voxel Cone Tracing paper. The problem is that none of the papers that went into octree-based voxel cone tracing paid much attention to the traversal, which brings me to one of the following:
    - They use just sampling (which is extremely simple, yet as you said has major flaws).
    - Octree traversal isn't really a hard problem, so they don't even consider mentioning it (yet once you work with cones you basically have some maximum level you want to traverse into, which changes based on your cone angle ... which brings in some complexity).
    - They don't want to publish it (which I don't really believe - anyone who works a bit with ray tracing can put it together in a few days).
    I'll put some effort into the full traversal then, and share some results here.
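    Regarding the maximum level per cone, a back-of-the-envelope sketch of how I'd pick it (the names are placeholders): at distance t a cone with half-angle theta has a footprint diameter of 2 * t * tan(theta), and you want the voxel size at the chosen level to roughly match that footprint, so the level effectively becomes a function of distance:

        #include <algorithm>
        #include <cmath>

        // rootSize  - edge length of the octree's root node
        // maxLevel  - depth of the leaf level
        // halfAngle - cone half-angle in radians
        // distance  - distance along the cone axis
        int ConeLevel(float rootSize, int maxLevel, float halfAngle, float distance)
        {
            float diameter = 2.0f * distance * std::tan(halfAngle);
            // Voxel size at level L is rootSize / 2^L; solve for L and clamp
            float level = std::log2(rootSize / std::max(diameter, 1e-6f));
            return std::max(0, std::min(maxLevel, (int)level));
        }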
  15. And it's me again - first I'd like to say thanks for all the hints on 3D texture mip mapping, which allowed me to do some great experiments with voxel cone tracing. I've since had some time to move further and implement octree construction on the GPU, which is quite fast, along with populating the octree with the data. So far so good, and I'm even able to look up the voxel at a given position in the octree with a function now. So why am I writing here? It is simple: as I'm going to cast rays (and later cones) through the octree, I'd like to know whether it makes sense to implement a full ray-octree traversal (e.g. a stack-based traversal that would actually intersect the nodes), or whether I should just do the 'sampling method' (e.g. take N steps along the ray and accumulate the results)?
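    For clarity, this is roughly what I mean by the 'sampling method' (a rough sketch - SampleOctree is a placeholder for a filtered fetch at a given position and footprint, and the front-to-back accumulation is the standard cone-marching one):

        #include <cmath>

        struct Vec3 { float x, y, z; };
        struct Vec4 { float r, g, b, a; };

        // Placeholder for the real lookup: a filtered fetch from the octree at
        // 'position', at a mip/tree level matching 'footprint' (stub returns nothing).
        Vec4 SampleOctree(Vec3 position, float footprint) { return { 0, 0, 0, 0 }; }

        Vec4 MarchCone(Vec3 origin, Vec3 dir, float halfAngle, float maxDist, int steps)
        {
            Vec4 result = { 0, 0, 0, 0 };
            float stepSize = maxDist / (float)steps;

            for (int i = 1; i <= steps && result.a < 1.0f; i++)
            {
                float t = stepSize * (float)i;
                Vec3 p = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
                float footprint = 2.0f * t * std::tan(halfAngle);

                // Front-to-back accumulation of the sampled values
                Vec4 s = SampleOctree(p, footprint);
                float w = (1.0f - result.a) * s.a;
                result.r += w * s.r;
                result.g += w * s.g;
                result.b += w * s.b;
                result.a += w;
            }
            return result;
        }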