Vilem Otte

GDNet+ Basic
  • Content count: 718

Community Reputation

2943 Excellent

About Vilem Otte

  • Rank
    Crossbones+

Personal Information

  • Interests
    Art

Social

  • Twitter
    VilemOtte
  • Github
    Zgragselus
  1. Game Engine Editor

    So, I'm slowly progressing with my hobby project. This is the first time I'm writing something about the project, or rather about an essential part of it - I'll share a few details here about the editor development.

    Making a useful game/engine editor that is easy to control is definitely a long-run task. While the engine itself has been updated to support Direct3D 12, there was no real editor that was at least a bit generic. For my current goal with this project, I decided to start with the editor and work from there.

    So where am I at? I'm already satisfied with some of the basic tasks - the selection system, transformations, editing the scenegraph tree (re-assigning objects elsewhere through drag & drop, etc.) and the undo-redo system (although with more features being added, it will need to grow). I'm definitely not satisfied with the way I handle rotations edited through input fields. Component editing is currently work-in-progress (you can see a prototype on the screenshot) and definitely needs add/delete buttons. It is not properly connected with the undo-redo system yet, but it works.

    So what are the problems with components?

    • They're not finished - a few basic ones were thrown together for basic scenes, but I'm not satisfied with them (f.e. the lighting & shadowing system is going to get overhauled while I do this work).
    • Undo/redo tends to be tricky on them, as each action needs to be reversible (f.e. changing the light type means deleting the current Light-deriving class instance and creating a new, different Light-deriving class instance - see the sketch after this post).
    • Selecting textures/meshes/... - basically anything from a library - requires a library (which has no UI as of now!).

    Clearly the component system has advantages and disadvantages. The short-term plan is:

    • Update the lighting & shadowing system
    • Add a library (so that it makes sense!) with textures, meshes and other data - and make those drag & drop into components
    • Add a way to copy/paste the selection
    • Add a way to add/remove components on entities
    • Add save/load for the scene in the editor format

    Alright, end of my short pause - time to continue!
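    Since each action needs to be reversible, the obvious fit is a command-pattern undo stack. Below is a minimal sketch of the 'change light type' case under that assumption - all types and names here (EditorCommand, UndoStack, the simplified Light/Entity) are made up for illustration, not actual engine code:

        #include <memory>
        #include <vector>

        // --- Simplified stand-ins for engine types (illustration only) ---
        enum class LightType { Point, Spot, Directional };

        struct Light
        {
            explicit Light(LightType type) : mType(type) {}
            LightType mType;    // a real light would carry far more state
        };

        struct Entity
        {
            std::unique_ptr<Light> mLight;
        };

        // --- Command-pattern undo/redo ---
        class EditorCommand
        {
        public:
            virtual ~EditorCommand() = default;
            virtual void Execute() = 0;
            virtual void Undo() = 0;
        };

        // Changing the light type destroys one Light-deriving instance and creates
        // a different one, so the command remembers the old type to recreate it on undo
        class ChangeLightTypeCommand : public EditorCommand
        {
        public:
            ChangeLightTypeCommand(Entity& entity, LightType newType)
                : mEntity(entity), mOldType(entity.mLight->mType), mNewType(newType) {}

            void Execute() override { mEntity.mLight = std::make_unique<Light>(mNewType); }
            void Undo() override { mEntity.mLight = std::make_unique<Light>(mOldType); }

        private:
            Entity& mEntity;
            LightType mOldType;
            LightType mNewType;
        };

        // The editor keeps a stack of executed commands; undo pops and reverses the last one
        class UndoStack
        {
        public:
            void Perform(std::unique_ptr<EditorCommand> command)
            {
                command->Execute();
                mDone.push_back(std::move(command));
            }

            void Undo()
            {
                if (mDone.empty()) return;
                mDone.back()->Undo();
                mDone.pop_back();
            }

        private:
            std::vector<std::unique_ptr<EditorCommand>> mDone;
        };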
  2. Editor - Local vs Global

    Thanks for the response - I can second that it is definitely viable to get some sleep first and think about it the next day (prevents huge amounts of confusion). I've written it down on paper and it makes a lot more sense now.

    Given a world matrix W:

        1  0  0  0
        0  0 -1  0
        0  1  0  0
        0  0  0  1

    and a transformation matrix M:

        1  0  0  0
        0  1  0  5
        0  0  1  0
        0  0  0  1

    the result of W * M is:

        1  0  0  0
        0  0 -1  0
        0  1  0  5
        0  0  0  1

    Which is correct - the resulting matrix is M applied to the transformation W in local coordinates.

    Now, if I want a different basis R, I need to transform M into it and back out of it - R * M * inv(R) - which is pretty much equal to what you wrote (although I don't want to undo the W transformation). So when I want global coordinates, my basis relative to W is inv(W), therefore:

        W' = W * inv(W) * M * inv(inv(W)) = W * inv(W) * M * W = Id * M * W = M * W

    To verify this, M * W is:

        1  0  0  0
        0  0 -1  5
        0  1  0  0
        0  0  0  1

    Which is correct.

    Anyway, thanks for the response - I actually don't want to undo W, but apply another transformation to it (i.e. I want W' = W * R * T * inv(R) in terms of what you wrote).

    Note: in this description I'm using the standard math definition of multiplication - AB = A * B, AB[i, j] = sum from k=1 to m of A[i, k] * B[k, j] - the first index is the row, the second the column. Column- vs row-major terminology tends to confuse people imo (even though it's just about how the data are laid out in memory).
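    To make the conclusion concrete, a minimal sketch of both cases (the Matrix type below is a throwaway stand-in whose operator* follows the standard math convention from the note above):

        // Throwaway 4x4 matrix, row-major storage; operator* is standard
        // mathematical matrix multiplication (first index row, second column)
        struct Matrix
        {
            float m[4][4];

            Matrix operator*(const Matrix& rhs) const
            {
                Matrix r = {};
                for (int i = 0; i < 4; i++)
                    for (int j = 0; j < 4; j++)
                        for (int k = 0; k < 4; k++)
                            r.m[i][j] += m[i][k] * rhs.m[k][j];
                return r;
            }
        };

        // Local: apply M in the object's own basis
        Matrix ApplyLocal(const Matrix& W, const Matrix& M)
        {
            return W * M;
        }

        // Global: apply M in world axes; per the derivation above,
        // W * inv(W) * M * W collapses to M * W
        Matrix ApplyGlobal(const Matrix& W, const Matrix& M)
        {
            return M * W;
        }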
  3. Okay, so I officially got confused (which is most likely also due to actually working on this at around 5 am). So, my objects in the scene (I call them "entities"), stored in a scene-graph-like way, have a transformation matrix (aka world matrix - W). Now, whenever I pick some object, I want to allow translation/rotation/scale either in the default basis (the Euclidean space axes) or in the local basis (basically the Euclidean space axes transformed by the rotational part of the world matrix - let's call it L') ... i.e. not unlike 'Global' and 'Local' space in Unity. I also know the transformation matrix T with which I want to transform the object. So the only remaining question is: how do I calculate the new world matrix W' for global and local? One of them is going to be W' = T * W (which I believe should be the local one) ... and the global one should therefore be W' = (inverse(L') * T) * W. Am I approaching this properly?
  4. DX11 Set Lighting according to Time of the day

    Or, if you want to be more precise - https://midcdmz.nrel.gov/spa/ (NREL's Solar Position Algorithm).
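    For a rough feel of the idea (nowhere near SPA's accuracy), a toy sketch that just sweeps the sun from east to west over a 12-hour day - every name and constant here is made up for the example:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Toy model: sunrise at 6:00, sunset at 18:00, a simple arc overhead.
        // No latitude, date or atmosphere - use the SPA above for the real thing.
        Vec3 SunDirection(float hourOfDay)
        {
            const float pi = 3.14159265f;
            float angle = (hourOfDay - 6.0f) / 12.0f * pi;   // 0 at sunrise, pi at sunset

            // Direction from the ground toward the sun (x = east-west, y = up);
            // negate it to get the directional light's direction
            return Vec3{ std::cos(angle), std::sin(angle), 0.0f };
        }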
  5. Dealing with frustration

    I would say: "No project is too big, and no project is small enough". The key point is motivation, and it is different for every single developer. Some take a few days off, or go on holiday. Some take a few days off explicitly to stay with family. Some just switch to another project. ... Some even build rockets and other crazy stuff in real life. How do you keep yourself motivated? Well, for a start, switch over to making something visual and short-term (the last thing I did was playing with variance shadow maps ... seeing this in motion in your own D3D12 engine encourages you to keep working!):
  6. Ludum Dare 39 - Release thoughts

    Yet another Ludum Dare has come around, and this time I participated without a real team on the development side (I had some help from my sister with audio art, ideas, gameplay balancing and the user interface). Before publishing a full post mortem, I'd like to provide a link: https://ldjam.com/events/ludum-dare/39/im-an-nuclear-engineer The last three Ludum Dares have really encouraged me to start and finish some more serious game project, although I'm still thinking about it...
  7. Need a script for Main Menu buttons

    So, as I've already created some games in Unity, I will describe a simple example of how I'd do it in Unity these days (and most likely will do it the next time I use Unity for something). Here is a simple example I put together in a few minutes: https://otte.cz/random/MainMenu.rar

    Short description: what you need is some GameObject in the scene with a script containing public functions that act as handlers for the buttons. The class holding these must derive from MonoBehaviour, like the one in the archive:

        using System.Collections;
        using System.Collections.Generic;
        using UnityEngine;
        using UnityEngine.SceneManagement;

        public class MainMenuController : MonoBehaviour
        {
            // Function called by a button; must be public and take at most one string argument
            public void NewGameHandler()
            {
                // Load another scene named "Game"; scenes must be added in Build Settings!
                SceneManager.LoadScene("Game");
            }

            // Function called by a button; must be public and take at most one string argument
            public void QuitHandler()
            {
                // Exit the application
                Application.Quit();
            }
        }

    Now, on the buttons, add an On Click record, where you attach (drag & drop, or just click + select) the game object with the above MonoBehaviour attached. Next, select (from the event drop-down) the MonoBehaviour class name, and under it there will be the functions (NewGameHandler, QuitHandler)... note that the drop-down has 2 levels, so it is easy to miss. It is set up in the project in the archive, so feel free to use anything from it.
  8. Depth of Field Aperture Help

    Wait... you don't do anything to your secondary rays. The algorithm for DoF in ray tracing works as follows. Start by casting a ray from the camera into the scene - but instead of intersecting it with anything, you just determine the focal point for the given pixel (the pixel's focal point). Now, for N samples, do:

    • Select a random starting point on the aperture. For a circular aperture - random polar angle + random radius; for a rectangular one - just use some distribution over the rectangle. Just make sure the probability distribution functions for your random number generator integrate to 1.
    • Cast a ray through this point on the aperture and the focal point, reflecting/refracting it around like you normally do.

    I can put up a smallpt-like example in a few hours from now (I still have something to finish at work) - a minimal sketch of the sampling loop is below.
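    In the meantime, here is a minimal sketch of the per-pixel loop under the assumptions above (Trace stands for the usual recursive ray tracing routine and is only declared; everything else is throwaway illustration code):

        #include <cmath>
        #include <random>

        struct Vec3
        {
            float x, y, z;
            Vec3 operator+(const Vec3& b) const { return { x + b.x, y + b.y, z + b.z }; }
            Vec3 operator-(const Vec3& b) const { return { x - b.x, y - b.y, z - b.z }; }
            Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
        };

        Vec3 Normalize(const Vec3& v)
        {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // Assumed to exist: the usual recursive ray tracing routine
        Vec3 Trace(const Vec3& origin, const Vec3& direction);

        float Rand01()
        {
            static std::mt19937 gen(1337);
            static std::uniform_real_distribution<float> dist(0.0f, 1.0f);
            return dist(gen);
        }

        // Depth of field for one pixel: jitter the ray origin over a circular
        // aperture and aim every sample at the same focal point
        Vec3 TracePixelWithDoF(const Vec3& camOrigin, const Vec3& camRight, const Vec3& camUp,
                               const Vec3& primaryDir, float focalDistance,
                               float apertureRadius, int samples)
        {
            // Pixel focal point - along the unperturbed primary ray
            Vec3 focalPoint = camOrigin + primaryDir * focalDistance;

            Vec3 color = { 0.0f, 0.0f, 0.0f };
            for (int s = 0; s < samples; s++)
            {
                // Uniform point on a disk: the sqrt on the radius keeps the density
                // uniform (a plain random radius would cluster samples in the center)
                float r = apertureRadius * std::sqrt(Rand01());
                float phi = 2.0f * 3.14159265f * Rand01();

                Vec3 origin = camOrigin + camRight * (r * std::cos(phi)) + camUp * (r * std::sin(phi));
                Vec3 direction = Normalize(focalPoint - origin);

                color = color + Trace(origin, direction);
            }
            return color * (1.0f / samples);
        }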
  9. I can't speak to a book, but what you're trying to do is simulate some physics behavior. So the real question from me is: are you trying to

    • simulate the actual effect (in which case it will most likely also have an impact on gameplay), so that the actual effect behaves realistically, or
    • fake the effect and just render something that looks good/realistic?

    There is quite a huge difference between the two. In the first case, the actual rendering is most likely the minor problem (due to the nature of the simulation, it tends to be easy to render it - you can use information that is part of the actual simulation). In the second case, it tends to be easy to make 'good looking particles' for literally anything, while it is extremely hard to make them move and animate like the realistic effect (basically to 'fake' the simulation part).
  10. Voxel Cone Tracing - Octrees

    The problem with interpolation is that I can't do it beforehand. I have a set of elements from which I'm building a tree (those are voxels, generated for the scene). While I know the position of each voxel, I have no idea about its surroundings (they can be inserted into the set in random order); all I know is their position and data (color, normal, whatever I wish to store). Adding voxels with interpolation would mean that I need to find neighbors in this set (which is actually easily done once the octree building is finished). This would be quite hard, as the tree can't be easily modified.

    EDIT: Thinking about this - yes, technically it is possible to add nodes; removing them might be more tricky, but yes, also possible ... as each node actually represents a sparse octree itself. I believe this might be a decent way to handle inserting dynamic objects too ... or maybe different levels of quality (whenever necessary), making this easily usable even for the case of a 'ball in a stadium'.

    Each of the elements is inserted into the tree (the tree I generate is sparse), and doing any form of refinement will be hard, if not impossible, once the tree is generated (it uses a parallel algorithm for that):

    • Start with the root node (levels = 1)
    • Loop (for i = 0 to levels):
      • For each element - traverse into the 'levels' and flag which nodes are to be split
      • For each node in the current level - if the node is flagged, generate 8 child nodes
      • Allocate the generated child nodes
      • Repeat the loop

    So I can't really change the tree once it is finished.

    The point is, I realized that bricks are something different than what I thought they were. The value in the center is the most important one, and the 26 values surrounding the center are duplicated (yes, it is "memory overkill" in this sense)! The correct version of the previous images is (and should be): [sharedmedia=gallery:images:8649]

    The mipmapping part seemed tricky at first (my original algorithm was wrong). What I should actually do is: for the child nodes, use their center values (and the values between the centers - which is 3x3x3 values in total, not 4x4x4!) and average these into the parent node's center. Then fill the boundaries of the parent node. This needs an additional step to perform the interpolations like I did for the leaves, to guarantee that the border values in the bricks are correct. [sharedmedia=gallery:images:8650]

    It sounds quite complicated though (I didn't have time to measure timing figures yet, as the code is still work in progress, so hopefully it won't be that big of a problem to compute).

    EDIT2: Yes, I can confirm that it now works quite properly (even the AO is smooth). The last part I'd like to investigate is traversal.

    The sampling-based traversal is of course extremely fast if the sample count is low, but, as stated, unreliable. Increasing the sample count can solve the problems (basically oversampling everything), but tends to be quite heavy. And what's the point of using an SVO when I'm just brute-forcing through it, right? That has O(n) complexity.

    I've tried ray-AABB intersections and stepping down the tree (using a stack-based approach ... basically something similar to what one would use for BVH ray tracing). While reliable, it is extremely heavy if the nodes aren't pushed onto the stack in the correct order (which doesn't seem as straightforward as I thought it would be). This should theoretically have O(log(n)) complexity, assuming I push the children onto the stack in the order needed to process them "along the ray" (see the sketch below).

    I'd also like to try a DDA-like approach (as each time I can determine at which level the empty intersected node is, it should be straightforward to step accordingly to the next node); as stated previously - if implemented correctly, it should have O(log(n)) complexity for finding the next node along the "ray".
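    For the stack-based variant, the child ordering can be derived from the ray direction signs alone - a well-known trick, sketched below with simplified stand-in types (not actual engine code). Visiting the children in Morton order with the index XOR-ed by a mask of the negative ray axes gives a valid front-to-back order (the ray can never enter a later child before an earlier one), so pushing them in reverse makes the nearest child pop first:

        #include <cstdint>
        #include <vector>

        // Simplified stand-ins for illustration
        struct Vec3 { float x, y, z; };
        struct Node { Node* children[8]; };   // child i: bit0 = +x half, bit1 = +y, bit2 = +z
        struct Ray  { Vec3 origin, dir; };

        // Push the 8 children so that the nearest one (along the ray) is popped first.
        // Order (i ^ mask) for i = 0..7, where mask flips the axes on which the ray
        // travels in the negative direction, is a linear extension of the visit order.
        void PushChildrenFrontToBack(const Ray& ray, const Node& node, std::vector<Node*>& stack)
        {
            uint32_t mask = (ray.dir.x < 0.0f ? 1u : 0u)
                          | (ray.dir.y < 0.0f ? 2u : 0u)
                          | (ray.dir.z < 0.0f ? 4u : 0u);

            // Reverse order, because the stack pops the last-pushed entry first
            for (int i = 7; i >= 0; --i)
            {
                Node* child = node.children[i ^ mask];
                if (child != nullptr)   // skip empty (sparse) children
                    stack.push_back(child);
            }
        }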
  11. Posts sketches

  12. Voxel Cone Tracing - Octrees

    And no, I still don't have the filtering right. I noticed that 2 items will cause problems; the first one is filtering inside the leaves. Here is a pseudo-example: [sharedmedia=gallery:images:8641] This is our starting point - the semi-transparent nodes are a visualization of the part of the tree that is empty (therefore those nodes are not really there, and of course we can detect that they are empty). The top-left 4 nodes are full. The bottom-left 4 nodes are also full (some with data that have alpha = 0). I will demonstrate filtering along the X-axis (as described in the paper).

    So, the first step is: [sharedmedia=gallery:images:8642] writing data to the right neighbor (of course, as the tree is sparse, we can't really access the 'transparent' - non-existent - nodes). This will be the result: [sharedmedia=gallery:images:8643]

    Now, as the data in the right-most voxels need to be the same as in the left-most voxels, we need to copy from the right back to the left, like: [sharedmedia=gallery:images:8644] And the problem is: [sharedmedia=gallery:images:8646] Obviously, as we can't write into non-existent nodes (due to the sparse nature of the tree), the values won't match (even when we assume the value had alpha = 0 in the previous steps). All the data neighboring non-existent nodes will be invalid, ending up as a border. Of course, the same problem is hit when computing the interior nodes (the mip-mapped ones) - they will not match properly, resulting in non-smooth ambient occlusion like this: [sharedmedia=gallery:images:8648]

    The paper sadly doesn't address this problem at all (it describes the scenario where the tree is dense, in which case it works properly, but as soon as the tree is sparse, the problems arise). Any idea how to solve this properly? Obviously, the leaf nodes can be handled by detecting the non-existent nodes around the currently processed node and setting the values to match (in all 6 directions). But how do I perform the 'mip-mapping' after that (i.e. how do I compute the interior node values in a way that makes sense)?

    My apologies for the double post!
  13. Voxel Cone Tracing - Octrees

    So, work in progress. [sharedmedia=gallery:images:8640] I believe I've got the filtering and brick building correct as of now (I've even checked the 3D texture and it seems correct to me). Anyway, I'm using the sampling traversal as of now (which is incorrect, as it doesn't give me any advantage from using an octree), but I had to switch back to it (for testing the filtering) due to having something wrong in the ray-octree one (which I'll need to update to cone-octree anyway). The whole SVO is built on the GPU as of now, and the steps are the following:

        // Build the hierarchy (similar to how it is done in the papers)
        for (i = 0; i < levels; i++)
        {
            FlagNodes();
            AllocNodes();
            InitNodes();
        }

        // Fill the base level with data (as per your description, just done in 3D)
        FillLeaves();

        // Filter the bricks (as we need neighbor information!)
        FilterLeaves();

        // Build the interior nodes (interpolate from the lower levels)
        for (i = 0; i < levels; i++)
        {
            BuildInterior(levels - i - 1);
        }

    It works like a charm and is also very fast (I will do actual measurements shortly). I am over-allocating memory a bit now, as I don't need log2(levels) but log2(levels)-1 depth of tree - the leaves (bricks) actually store one whole level in the 3D texture. I'm going to give out performance and memory figures once I solve the traversal part (which I'm digging into now). And I haven't even started optimizing yet!

    Thanks for the advice in this thread. If everything is successful, my next post will go into my blog here on GameDev, with the figures and maybe some demo/code to show.
  14. Voxel Cone Tracing - Octrees

    No worries, I got the idea. It makes sense, but you're right, it will have quite a large memory impact (yet doing trilinear filtering in the shader seems to have a bigger one). The octree tends to be quite sparse - for Sponza, the occupancy is about 3-4% (and it tends to be grouped), so there tend to be quite a lot of empty nodes even higher in the hierarchy - although this might not be a general rule for every scene. Anyway, time to do some coding. Thanks!
  15. Voxel Cone Tracing - Octrees

    Thanks for the response. I've read through the Quantum Break paper, but it doesn't really help much. I'm not really against increasing the memory footprint, I just don't have any idea how to store 2x2x2 data in a 3x3x3 brick (i.e. a brick that would hold all 8 voxels of a leaf). Based on my thinking, it has to be a 4x4x4 brick (a 2x2x2 interior composed of those 8 values, plus the border), which doesn't sound that bad.

    EDIT: They do have this paragraph: [...] This doesn't really make any sense to me, as in this sense I need to have a function like:

        // This is pseudo-code! But it will do as an example.
        // I'm looping through all voxels and storing them in the tree; I can find the
        // respective brick without any problem - but I have no idea how to store the data!
        //
        // brick here points to the one brick in which I'm storing the voxel currently being processed
        // color is the value I want to store
        // location is the X, Y and Z index into the voxel array
        void StoreColorInBrick(uint brick[3][3][3], uint color, uint location[3])
        {
            // ... Now, assuming my location is:
            // location[0] % 2 determines whether the current voxel color is to the left or right (x-axis)
            // location[1] % 2 determines whether the current voxel color is to the top or bottom (y-axis)
            // location[2] % 2 determines whether the current voxel color is to the front or back (z-axis)
            //
            // For simplicity, let's assume I'm doing the lower-left corner of some node - where do I store color?
            // Should it be brick[0][0][0]?
            // Or added to everything that belongs to the lower left (4 nodes), i.e. to brick[0..1][0..1][0..1] - in
            // which case, how do I handle the parts that are not in the corner (there will most likely be more
            // values written)? Average them? Sum them?
        }

    I believe I can make the interpolation work afterwards (for each brick, I can find the neighboring brick; if there isn't any, it means those values are zeroes ~ that node is empty). I assume the interpolation would then only be for the boundary voxels - a standard sum/2 of the 2 voxels directly neighboring between 2 bricks (see the sketch below).
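    For illustration, a minimal sketch of that sum/2 border exchange between two neighboring bricks along the X-axis (using float values instead of packed uint colors to keep the averaging trivial; the types are stand-ins, not actual code):

        // Two neighboring 3x3x3 bricks along +X: the right-most slice of 'left' and
        // the left-most slice of 'right' represent the same boundary voxels, so both
        // are set to the average of the two (the sum/2 mentioned above).
        struct Brick
        {
            float values[3][3][3];   // indexed as [z][y][x]; real bricks would store colors
        };

        void ExchangeBorderX(Brick& left, Brick& right)
        {
            for (int z = 0; z < 3; ++z)
            {
                for (int y = 0; y < 3; ++y)
                {
                    float avg = 0.5f * (left.values[z][y][2] + right.values[z][y][0]);
                    left.values[z][y][2]  = avg;
                    right.values[z][y][0] = avg;
                }
            }
        }

        // If the neighbor brick doesn't exist (sparse tree), its values are treated
        // as zero, per the assumption above - i.e. average the border slice with 0.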