
All Activity


  1. Past hour
  2. Voxelization cracks

    A triangle-strip cube:

```
0,1,1  1,1,1  0,1,0  1,1,0  1,0,0  1,1,1  1,0,1  0,1,1  0,0,1  0,1,0  0,0,0  1,0,0  0,0,1  1,0,1
```

    This is a cube centered at [0,0,0] with a size of [1,1,1]. I multiply this with my voxel size and add it to the left/lower/near corner inside a GS:

```hlsl
const float3 offset  = OffsettedUnitCube(i) * g_voxel_size;
const float3 p_world = input[0].p_world + offset;
```

    Finally, I transform to camera and projection space. input[0].p_world is generated in the VS (executed for resolution^3 points):

```hlsl
const uint3 index = UnflattenIndex(vertex_index,
    uint3(g_voxel_grid_resolution, g_voxel_grid_resolution, g_voxel_grid_resolution));
output.p_world = VoxelIndexToWorld(index);
```

    Transforming between voxel indices and world space:

```hlsl
int3 WorldToVoxelIndex(float3 p_world) {
    // Valid range: [-R/2,R/2]x[R/2,-R/2]x[-R/2,R/2]
    const float3 voxel = (p_world - g_voxel_grid_center) * g_voxel_inv_size
                       * float3(1.0f, -1.0f, 1.0f);
    // Valid range: [0,R)x(R,0]x[0,R)
    return floor(voxel + 0.5f * g_voxel_grid_resolution);
}

float3 VoxelIndexToWorld(uint3 voxel_index) {
    // Valid range: [-R/2,R/2]x[R/2,-R/2]x[-R/2,R/2]
    const float3 voxel = voxel_index - 0.5f * g_voxel_grid_resolution;
    return voxel * g_voxel_size * float3(1.0f, -1.0f, 1.0f) + g_voxel_grid_center;
}
```

    As a side note, I am not really sure whether I should round or floor. If you floor, you will look straight at four voxels; otherwise just one voxel (for a very low voxel grid resolution). But flooring is a bit more symmetrical, I guess.
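As a sanity check for that index/world round trip, here is a standalone C++ port of the two conversion functions (a sketch only: the resolution, voxel size, and grid center are arbitrary test constants, not the actual shader values):

```cpp
#include <cassert>
#include <cmath>

// CPU port of WorldToVoxelIndex / VoxelIndexToWorld for round-trip testing.
// The globals mirror the shader constants; values here are arbitrary.
struct float3 { float x, y, z; };

static const float  g_voxel_grid_resolution = 128.0f;
static const float  g_voxel_size            = 0.25f;
static const float  g_voxel_inv_size        = 1.0f / g_voxel_size;
static const float3 g_voxel_grid_center     = { 0.0f, 0.0f, 0.0f };

float3 WorldToVoxel(float3 p) {
    // Same math as the HLSL version: scale into voxel units, flip Y so
    // index 0 is the top slice, shift by half the resolution, then floor.
    const float h  = 0.5f * g_voxel_grid_resolution;
    const float vx = (p.x - g_voxel_grid_center.x) * g_voxel_inv_size;
    const float vy = (p.y - g_voxel_grid_center.y) * g_voxel_inv_size * -1.0f;
    const float vz = (p.z - g_voxel_grid_center.z) * g_voxel_inv_size;
    return { std::floor(vx + h), std::floor(vy + h), std::floor(vz + h) };
}

float3 VoxelToWorld(float3 i) {
    // Inverse mapping: un-shift, un-flip Y, scale back to world units.
    const float h = 0.5f * g_voxel_grid_resolution;
    return { (i.x - h) * g_voxel_size + g_voxel_grid_center.x,
             (i.y - h) * g_voxel_size * -1.0f + g_voxel_grid_center.y,
             (i.z - h) * g_voxel_size + g_voxel_grid_center.z };
}
```

With flooring, VoxelToWorld followed by WorldToVoxel reproduces the original index exactly, since VoxelToWorld returns the corner of the voxel rather than its center.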
  3. I would rather say that main.cpp is the sample code, since I assume that many derivatives just include the API abstraction layers provided by ImGui. Though ImGui "itself" does indeed contain a lot of global data, resulting in a clean-looking API, since you do not pass a context around. That said, I am not getting the multi-threading remarks in this whole post. If you just ensure correct initialization, which can be done trivially for singletons under multi-threading, you can use single-threading and multi-threading all the way. Why would a singleton resource factory fail in a multi-threaded environment (and if it does, why target the singleness instead of the internal implementation)?
  4. Because it doesn't make sense, at least for all the other "components". A camera is just a data container; the same goes for a transform, a mesh, a model, etc. Those are never going to do anything in an "update" function. If you want to change them from outside, write a script that invokes their mutators; if you want to change them from inside the engine, just invoke their mutators.
  5. Oh. That's example code that comes with ImGui, showing how to use it, not the internals of ImGui itself (however, ImGui does have an internal global state pointer, which does make it very hard to integrate into multi-threaded engines...). It's almost a tradition for C/C++ API sample code to use global variables in order to keep the structure of the example app as simple as possible and focus on the actual API that it's trying to teach....
  6. This is ImGui code; the same goes for the snippet listing all the "g_" variables above.
  7. The LSP violation for me is that the sub-types change what the actual task performed by the algorithm is. Your algorithm is "for each bit of code, execute the code"... which is a non-algorithm. Regardless, sure, let's say it's a std::function or a callback or a runnable, etc., to avoid OO terminology.

    The problem with "for each bit of code, execute the code" is that you are throwing out most of the software engineering toolbox. Most bits of code do have different inputs and outputs, and if you deliberately choose to obfuscate their inputs/outputs by forcing them all to conform to the same interface, then you don't get to complain when their inputs/outputs are obfuscated... I mean, why apply this pattern just to parts of the gameplay code? Why not rewrite all of the code in the entire project so that the signature of every function has to be void(double) and all communication is done via global state?

    I don't understand what you're trying to say. If you want to support multiple windows, you definitely shouldn't use a global/singleton there... Even if you only need one window, it still probably shouldn't be a global.
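A small, purely hypothetical C++ illustration of the contrast being drawn here: the same work done through the restricted void(double) interface (which forces communication through shared state) versus an honest signature (all names below are made up for the example):

```cpp
#include <cassert>
#include <string>

// Restricted interface: the real output has to leave via a hidden channel.
static std::string g_title;   // hidden output channel (shared state)

struct Script {
    virtual void Update(double dt) = 0;
    virtual ~Script() = default;
};

struct TitleScript : Script {
    // Input dt arrives through the fixed signature; output leaks into g_title.
    void Update(double dt) override { g_title = "fps: " + std::to_string(1.0 / dt); }
};

// Honest signature: the real input (dt) and output (the title) are visible.
std::string MakeTitle(double dt) { return "fps: " + std::to_string(1.0 / dt); }
```

Both compute the same string; only the second one states its inputs and outputs in its type.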
  8. static HWND g_hWnd = 0; — as global and as singleton as can be.
  9. Voxelization cracks

    Then it should be the visualization. It will probably be better anyway, since the g_max_length seriously caps my radiance.
  10. This has nothing to do with Liskov, and definitely not with violating Liskov. Multiple languages have something like a Runnable (concretely referring to Java here), and again this has nothing to do with Liskov. Your derived class may add all the member variables it wants that are needed for the "Update" or "Run" method implementation. And if you want the concrete application of Liskov:

    void MyScript::Update(double)
    - Precondition: all systems (start enumerating) are in a valid state.
    - Input argument types: double
    - Exception specification: all possible exceptions, since I do not declare noexcept (C++ isn't as strict as Java anyway)
    - Output argument types: void
    - Postcondition: all systems (start enumerating) are in a valid state.

    void MyDerivedScript::Update(double)
    - Precondition: all systems (start enumerating) are in a valid state. (the same or weaker, thus OK)
    - Input argument types: double (the same or weaker, thus OK)
    - Exception specification: all possible exceptions, since I do not declare noexcept (the same or weaker, thus OK)
    - Output argument types: void (the same or stronger, thus OK)
    - Postcondition: all systems (start enumerating) are in a valid state. (the same or stronger, thus OK)
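The argument above can be sketched in a few lines of C++ (names hypothetical): the derived override keeps the base's contract and merely adds state, so any caller correct against the base stays correct against the derived type.

```cpp
#include <cassert>

// The override neither strengthens the precondition nor weakens the
// postcondition; it only adds a member variable used by the implementation.
struct MyBaseScript {
    virtual void Update(double delta) { /* no-op base behavior */ }
    virtual ~MyBaseScript() = default;
};

struct MyDerivedScript : MyBaseScript {
    double elapsed = 0.0;                     // extra state is fine under LSP
    void Update(double delta) override { elapsed += delta; }
};
```

Callers holding a MyBaseScript* can invoke Update without knowing which concrete type they have; that substitutability is exactly what the contract comparison above establishes.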
  11. Voxelization cracks

    Yes. Maybe some exponential mapping for Y, or just a half float. (But I'm still using float4 myself and have not tried any of this yet...) In OpenGL, culling only depends on winding order, IIRC.
  12. Today
  13. Voxelization cracks

    Culling? I am confused about the culling terminology of D3D. If the near face of a cube is parallel to the camera near/far planes and inside the camera frustum, and the cube faces have a clockwise winding order:

```
camera far plane
    far face front
    --------------
    far face back

    the void of the cube

    near face back
    --------------
    near face front
camera near plane

     /\
      |
     eye
```

    Rasterizer state:

```
CullMode                Back
FrontCounterClockwise   FALSE
```

    Is there a possibility that fragments will be generated for the near face front and the far face front (though the normal is pointing away from the camera)? Or does the culling not involve the orientation of the geometric normal at all?
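On the last question: the rasterizer classifies a triangle as front- or back-facing purely from the winding (the sign of the signed area) of its projected screen-space vertices; a geometric or shading normal is never consulted. A minimal C++ sketch of that decision, using a Y-up 2D convention (D3D's screen space is Y-down, which flips the sign, but the principle is identical):

```cpp
#include <cassert>

struct V2 { float x, y; };

// Cross-product form of the triangle's signed area.
// Positive means counter-clockwise in this Y-up convention.
float SignedArea(V2 a, V2 b, V2 c) {
    return 0.5f * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

// With CullMode = Back and FrontCounterClockwise = FALSE, clockwise
// triangles count as front faces and survive; CCW triangles are culled.
bool IsCulledD3D(V2 a, V2 b, V2 c) {
    return SignedArea(a, b, c) > 0.0f;   // CCW => back face => culled
}
```

So in the diagram above, "near face front" and "far face front" are kept or culled depending only on whether their projected vertex order ends up clockwise, not on where their normals point.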
  14. One answer is: don't write code like that. You've deliberately taken some concrete game feature (whatever it is that MyDerivedScript does) and hidden it behind a very limited interface that restricts its ability to communicate, and then asked how to solve that artificial communication restriction. Globals are the ultimate solution to any kind of restricted communication between components, so they look like the only answer once you've walled yourself off like this... but you can also just not wall yourself off like that in the first place.

    Inheritance is designed for situations where you can apply the LSP rule and write algorithms that are applicable to any object that implements the interface. In your situation, I doubt that there are any algorithms that are applicable to all your different derived types (i.e. your scripts do not just consume deltaTime as their only input and have zero outputs), so this is a violation of OO's concept of inheritance. Again, like globals, this style of code is common, but technically wrong.

    As above, the actual problem is the use of inheritance in the first place -- without the incorrect interface, you would just pass the appropriate inputs to the function and have it do the appropriate work... but if you do stick with that style, you'd have to work around the arbitrary interface restriction by passing the GPU device / texture factory / etc. into MyDerivedScript's constructor and have it keep its own copies of the pointers to those systems that it interacts with, or use globals, or a context-object/god-object, etc. Without inheritance, it can just have its own specific interface that's honest about its real inputs/outputs.

    A few seconds in, they click the [Create] button and a second Windows window is created, which they then dock back into the original window (destroying the 2nd one). There are multiple windows for parts of it. Do you mean the specifics of the code, or the use-case for the user?

    Code-wise, you make one D3D swap-chain per window, or in GL you make one GL context per window and use shared FBOs to copy data from your main context to your "secondary" contexts. Also, none of those variables should be globals.

    Use-case-wise, in an editor/tool, it's nice to be able to drag in-engine windows out of the OS window and spawn them into a new OS window (like in the GIF). You can also use it for multi-monitor support -- e.g. lots of flight-sim / racing fans have three monitors and like to run games across all of them. Some drivers let the user set up their three monitors to appear as one, and your game doesn't need to do anything, but to be nice, you can support creating a window on each monitor yourself and then render the game across all of them.
  15. Voxelization cracks

    You mean Y: 16 bits and UV: 16 bits?
  16. Let's assume a massive script (I am not saying that is the way to go, but it should be possible to write one): you want to rotate a camera transform based on the mouse state, translate the camera based on the keyboard state, change the aspect ratio of the camera based on the display resolution, and change a texture of a mesh attached to a camera (I was running out of inspiration :P). Or even simpler: you want to just output the mouse, keyboard, display state, etc.
  17. I am not referring to, nor implying, that scripting is different from normal programming. Scripting is for me the act of writing scripts, which are components that execute small programs (written by a user of an "engine", as expressive as the "engine" supports it, in a language that the "engine" supports). It does not imply a different language, it does not imply dynamic loading, etc. I literally referred to scripting components. I am concretely talking about C++ itself; in fact, about a single method of a derived class, MyDerivedScript::Update(double delta), overridden from MyBaseScript::Update(double delta), contained in a child class of MyComponent.

    Let's assume you want to change a material texture to a fixed color equal to a given inputted RGB: how are you going to create the texture? Do you want the user to pass an ID3D11Device around, or invoke a method on some TextureFactory the user first of all needs to have and second needs to find somewhere? Or are you going to wrap all of this in some methods of MyBaseScript or even MyComponent? Both cases require that all the "context" be accessible from some base class. And how are you going to query key presses?

    First of all, the GIF image you show is just one window according to the Windows terminology.

    Second, I was actually referring to this:

```cpp
// Win32 data
static HWND                      g_hWnd = 0;
static INT64                     g_Time = 0;
static INT64                     g_TicksPerSecond = 0;
static ImGuiMouseCursor          g_LastMouseCursor = ImGuiMouseCursor_Count_;

// DirectX data
static ID3D11Device*             g_pd3dDevice = NULL;
static ID3D11DeviceContext*      g_pd3dDeviceContext = NULL;
static ID3D11Buffer*             g_pVB = NULL;
static ID3D11Buffer*             g_pIB = NULL;
static ID3D10Blob*               g_pVertexShaderBlob = NULL;
static ID3D11VertexShader*       g_pVertexShader = NULL;
static ID3D11InputLayout*        g_pInputLayout = NULL;
static ID3D11Buffer*             g_pVertexConstantBuffer = NULL;
static ID3D10Blob*               g_pPixelShaderBlob = NULL;
static ID3D11PixelShader*        g_pPixelShader = NULL;
static ID3D11SamplerState*       g_pFontSampler = NULL;
static ID3D11ShaderResourceView* g_pFontTextureView = NULL;
static ID3D11RasterizerState*    g_pRasterizerState = NULL;
static ID3D11BlendState*         g_pBlendState = NULL;
static ID3D11DepthStencilState*  g_pDepthStencilState = NULL;
static int                       g_VertexBufferSize = 5000, g_IndexBufferSize = 10000;
```

    How can you support multiple, completely separate windows? There is no law carved in stone saying there should be at most one window at a time. And how are you going to use the D3D11 capabilities of your dedicated and integrated graphics cards at the same time? There is no law carved in stone saying you should only use one graphics card either.
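The alternative argued for in this thread can be sketched in a few lines (every name below is hypothetical, a stand-in for the real D3D11 objects): the "context" a script needs is handed to it on construction, so two windows or two devices are simply two contexts, with no globals involved.

```cpp
#include <cassert>
#include <string>

// Stand-in for a factory wrapping ID3D11Device::CreateTexture2D etc.
struct TextureFactory {
    std::string CreateSolidColorTexture(int r, int g, int b) {
        return "rgb(" + std::to_string(r) + "," + std::to_string(g) + ","
                      + std::to_string(b) + ")";
    }
};

// Everything a script may touch, passed explicitly instead of via globals.
struct Context { TextureFactory* textures; };

struct MyDerivedScript {
    explicit MyDerivedScript(Context ctx) : m_ctx(ctx) {}
    void Update(double /*delta*/) {
        // Uses its own context; a second window would get a second context.
        m_last = m_ctx.textures->CreateSolidColorTexture(255, 0, 0);
    }
    Context     m_ctx;
    std::string m_last;
};
```

Two instances constructed with different contexts never contend over shared state, which is exactly what the g_ list above cannot offer.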
  18. Choconoa Devlog #5

    This week we analyse our level design practices in a very practical way. We also make sure the flow is consistent and intuitive. The challenge here is to create organic levels (non-grid-based levels) and make sure that the distances are correct, etc. In the picture below, you can see how we balance the safe zone and the threat zone, how we place the rewards, how we force you to learn the basics before progressing further, etc. This GIF gives you an idea of the flow and how we make it as smooth as possible for the player. Grayboxing the levels requires removing all deco, foliage, etc. Using a simple color code is very useful: the red is the threat; the blue and yellow are used for navigation. In this step, we are blocking out all the collisions, so that when we assemble the art and details all together, the collisions will remain unchanged.
  19. In GLFW, call glfwCreateWindow twice? In ImGui, people are doing stuff like this already.

    In D3D12 you get a device, many command queues, and many deferred command lists. In D3D11 you get a device, one immediate command list (with a built-in graphics queue), and zero or many deferred command lists. In GL you get a device (with a built-in immediate command list).

    There's no difference between "normal programming" and "scripting". Coding and scripting are both just slang for programming. The entire body of software engineering doesn't get thrown out because you're programming in a different language. What language are your game scripts written in?
  20. Disabling online mode?

    Don't know about shutting down, but there was a SimCity that failed its players: https://www.geek.com/games/everything-wrong-with-simcity-1560900/ https://mic.com/articles/29213/simcity-drm-always-online-mode-results-in-disaster-for-gamers
  21. Unreal Level generation in Unreal Engine 4

    Using the BP construction script would be your best bet for generating things in the editor, but note that in your shipped (cooked) builds there are no construction scripts to run anymore; they run once during cooking and that's it. If you add C++ code in a UObject-based class constructor, make sure you only do heavy work within an if (!IsTemplate()) block, otherwise it could affect your loading times negatively.
  22. Old guy seeks help!

    If you are just getting into game development, I recommend starting with a simple 2D side-scroller first, so you can at least ignore the difficulties of developing 3D games. Unity and Unreal are very capable game engines with comprehensive feature sets, but this means the learning curve might be a bit intimidating in the beginning. Start with a simple engine for 2D games, build a small, one-level game where you can just move around, and go from there, step by step.
  23. DX12 Shader compile step

    Thank you @galop1n and @MJP. You bring up good points. I have been thinking about writing a tool to automatically generate shader permutations, but I also like the ideas you mentioned, like "spread compiles over the network" -- though I guess even just spreading the work over multiple cores would already benefit compile times. I also like the idea that, for example, the editor could reference the shader compiler and automatically recompile shaders when they change. I am playing with the idea of having an app with a higher-level notion of a shader library (knowledge of the engine, permutations, etc.); I could even call it from VS to build before the engine, and then run the built executable to compile the shaders as the next build step. I guess if I want to do something like this and restructure shader compilation, now is the best time, before it becomes more complicated.
  24. Hi all, this is my first post on this forum. First of all, I want to say that I've searched many posts on this forum about this specific topic, without success, so I am writing another one... I'm a beginner. I want to use the GPU geometry clipmaps algorithm to visualize virtually infinite terrains. I have already used vertex texture fetch with a single sampler2D successfully. I have read many papers on the topic, and all of them state that EVERY level of a geometry clipmap has its own texture. What does this mean exactly? Do I have to upload a sampler2DArray to the graphics card?

    With a single sampler2D it is conceptually simple: create a VBO and IBO on the CPU (the VBO contains only the positions on the X-Z plane, not the heights) and upload to the GPU the texture containing the elevations. In the vertex shader I sample, for every vertex, the height corresponding to its UV coordinate. But I can't imagine how to reproduce the various 2D footprints for every level of the clipmap. The only way I can imagine is the following: upload the finest texture (the entire heightmap) to the GPU, and create on the CPU, for each level of the clipmap, the 2D footprints of the entire clipmap. So on the CPU I create all clipmap levels in terms of the X-Z plane. Sampling these values in the vertex shader is then simple using vertex texture fetch.

    So, how can I sample a sampler2DArray in the vertex shader, instead of uploading a sampler2D of the entire heightmap? Sorry for my VERY bad English; I hope I have been clear.
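To make "each level has its own texture" concrete: in the usual clipmap layout each level is an RxR texture (one layer of a sampler2DArray) whose texel spacing doubles per level, so every layer covers a larger region around the viewer at coarser detail. A hedged CPU-side sketch of the lookup math (all names and parameters here are hypothetical, not from any particular implementation):

```cpp
#include <cassert>
#include <cmath>

struct UVL { float u, v; int layer; };

// Map a world-space X-Z position to the UV inside clipmap level `level`
// plus the array layer to sample. R is the per-level texture resolution,
// baseSpacing the texel spacing of level 0, (centerX, centerZ) the viewer.
UVL ClipmapUV(float x, float z, int level, int R, float baseSpacing,
              float centerX, float centerZ) {
    const float spacing = baseSpacing * std::pow(2.0f, (float)level);
    const float extent  = spacing * (float)R;   // world size covered by level
    // UV in [0,1] relative to this level's region centered on the viewer.
    return { (x - centerX) / extent + 0.5f,
             (z - centerZ) / extent + 0.5f,
             level };
}
```

In the vertex shader the same math would feed something like `texture(heightArray, vec3(u, v, float(layer)))` on a `sampler2DArray` (GLSL 1.30+), so you upload one array with a layer per level instead of one huge sampler2D.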
  25. I am using just Lua and a little helper header called Luna to easily bind C++ objects so they are accessible from a Lua script. I think this should be all you need; it is really easy to set up and you introduce no additional dependencies. Though I don't understand what you mean by "work with dx12" -- you certainly don't want to call DX12 functions from a Lua script? You could do it in theory, but I wouldn't; let Lua scripting be a part of your game logic, or call engine functions from it, but don't use it to replace performance-sensitive low-level code.
  26. Yeah, this is pretty much exactly the same code that I am using. I think the problem for me has something to do with the conversion part in Java, but I am not sure.
  27. 
```glsl
highp vec4 pack(highp float depth) {
    const highp vec4 bitSh  = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
    const highp vec4 bitMsk = vec4(0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
    // fract returns the fractional part of x. This is calculated as x - floor(x).
    highp vec4 comp = fract(depth * bitSh);
    comp -= comp.xxyz * bitMsk;
    return comp;
}

highp float getShadowFactor(highp vec2 pos) {
    highp vec4 packedZValue = texture2D(shadow_map, pos);
    const highp vec4 bitShifts = vec4(1.0 / (256.0 * 256.0 * 256.0),
                                      1.0 / (256.0 * 256.0),
                                      1.0 / 256.0, 1);
    highp float shadow = dot(packedZValue, bitShifts);
    return shadow;
}
```

    Both functions are for the fragment shader only.
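A CPU-side C++ port of that pack/unpack pair makes the round trip easy to check. One subtlety the port must reproduce: `comp.xxyz` swizzles the pre-subtraction values (GLSL evaluates the right-hand side first), so the C++ version reads from `c` before writing the result:

```cpp
#include <cassert>
#include <cmath>

struct V4 { float x, y, z, w; };

static float fract(float v) { return v - std::floor(v); }

V4 Pack(float depth) {
    // comp = fract(depth * bitSh), bitSh = (256^3, 256^2, 256, 1)
    V4 c = { fract(depth * 16777216.0f),
             fract(depth * 65536.0f),
             fract(depth * 256.0f),
             fract(depth) };
    // comp -= comp.xxyz * bitMsk: subtract what the previous (finer)
    // channel already stores, using the pre-subtraction values.
    return { c.x,
             c.y - c.x / 256.0f,
             c.z - c.y / 256.0f,
             c.w - c.z / 256.0f };
}

float Unpack(V4 p) {
    // dot(packedZValue, bitShifts), bitShifts = (1/256^3, 1/256^2, 1/256, 1)
    return p.x / 16777216.0f + p.y / 65536.0f + p.z / 256.0f + p.w;
}
```

The terms telescope, so Unpack(Pack(d)) returns d up to float rounding; the real precision question is what survives the 8-bit quantization when the vec4 is written to an RGBA8 render target.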