Posts by Valakor (GameDev.net)
  1. Valakor

    Engine UI

    Some more input if you like the HTML route:

      • Coherent Labs' various products [Paid] (aka Scaleform's more modern alternative)
          • The names keep changing, but I think the new thing is Coherent Gameface? https://coherent-labs.com/products/coherent-gameface/
      • Ultralight [Free for individuals and small indies] (aka the lighter-weight, open alternative to Scaleform and Coherent)
          • Essentially just an HTML rendering engine with Javascript binding support
          • Just got to version 1.0
          • Missing lots of features in comparison to Scaleform/Coherent
          • https://ultralig.ht/
  2. Valakor

    Compute shader return values

    HLSL is C-like but supports only a small subset of C's features, so I'm unsure if it supports aggregate initialization like that. You probably just need to fill in each element:

      corners[0] = x0; corners[1] = y0; corners[2] = z0;
      corners[3] = x0; corners[4] = y1; corners[5] = z0;
      ...

    MSDN's pages on HLSL syntax and features should tell you which forms of array initialization are supported. As for the last error: you don't mark arguments as 'out' at the call site like you do in C#, so the call should just look like:

      int corners[8 * 3];
      CornerPoints(x, y, z, corners);
  3. Valakor

    Compute shader return values

    I don't remember the syntax (or whether it's possible) for returning a statically-sized array like that in HLSL, but you can definitely use an out parameter, declared C-style with the brackets after the name:

      void CornerPoints(float x, float y, float z, out int points[8 * 3])

    Note that you will likely need to do the x/y indexing manually for multidimensional arrays, because I don't believe HLSL supports multidimensional out parameters (iirc).
  4. That is correct. An allocation returned by a memory allocator (malloc, your own custom allocator, whatever) is always a contiguous chunk of memory to the caller. If there is no contiguous chunk large enough, this is an indication of memory fragmentation. The allocator itself can manage ranges of allocations however it sees fit (in a linked-list, a pool, etc.).
  5. I think @ryt is perhaps confusing a single allocation returned from malloc() with how the system memory allocator manages groups of allocations. When you call malloc, the pointer you get back is a contiguous chunk of memory of the size you requested. This memory isn't chunked, nor does it exist at multiple locations in 'actual' memory (ignoring funniness with virtual memory). When you free this memory back to the system, the system allocator can manage the chunk however it sees fit - the entire chunk might be kept track of in a linked-list, or may become part of a pool of similarly-sized chunks, etc.
  6. You could call Draw multiple times, but since you're drawing primitive data (colored lines basically), you can probably just submit a single vkCmdDraw per large vertex buffer. (You've essentially recorded all the 'commands' you need inline in the vertex buffer, as long as you don't need any state changes between them). If you need more state changes, then yes you'll probably need to record your own mini command-buffer format while building up a vertex buffer over the frame, then replay those commands with appropriate state changes during your render pass.
  7. You can have an immediate-mode API without actually submitting immediate-mode-like API calls to Vulkan. I would use your existing API, but build up a single vertex buffer and related state over the course of the frame (or multiple based on differences in pipeline, say 1 for wireframe and another for filled tris). Then at the end of the frame you submit a single draw per buffer.
  8. Valakor

    Multithreaded Rendering

    I'm by no means a Vulkan expert, but I'm in a similar position to you in my current attempts to use newer-gen graphics APIs effectively. From my understanding, the most optimal back-end multithreaded design might be something like the following (please, anyone with more experience, correct me if I'm totally off here).

    Command pools shouldn't be re-used while existing command buffers are in flight, so starting with an approach like DOOM3-BFG's is best: a command pool per swapchain image, entirely reset in one go using vkResetCommandPool instead of resetting command buffers individually. This is the bare minimum, and if you're only recording command buffers from a single thread it's all you'll need.

    If you want to record from multiple threads, things get a bit more complicated. Command pools (and the command buffers allocated from them) are externally synchronized, so you'll want one per recording thread; this means you now have an NxM grid of command pools (N recording threads by M swapchain images). Once they're all recorded, you can submit them all to the graphics queue using vkQueueSubmit - they'll be processed in the order they appear in the input array. (I'm somewhat unclear about whether this is technically correct, but read here in the Vulkan spec and maybe more light bulbs will go off in your head than did in mine: https://vulkan.lunarg.com/doc/view/ )

    This is where I'm currently stuck in planning: ensuring the correct order of things when recording from multiple threads. The best solution I can come up with is to use a single 'main' recording thread for low-volume things (frame setup, post-processing), and only fork out to multiple threads for high-volume things (main rendering passes with lots and lots of commands). To ensure the correct ordering, I'm planning on having each thread record in linear chunks (worker 1 gets draws 0-511, worker 2 gets 512-1023, etc.) and then submitting each thread-owned command buffer in the correct array order when passed to vkQueueSubmit.

    EDIT: Something I just thought of, which is maybe obvious to people more familiar with Vulkan: my worker recording threads should be recording Secondary command buffers, and once they're done, the main recording thread can record those secondary buffers into its main Primary command buffer (vkCmdExecuteCommands) and just submit that.
  9. TL;DR: Libraries like the one you want exist, but are complicated and (likely) expensive. I would recommend going with a game engine and using its native UI libraries if possible, or finding a library with an existing integration into a game engine and using that.

    Long version: Unless you want to get down and dirty with game engine programming, you'll likely want to use an existing game engine (Unity, Unreal Engine, Ogre - there's a myriad of options depending on your requirements) in addition to an HTML UI library. A separate concern is your desire to use HTML for UI design - there are various products out there for exactly that. Some examples I know of that use familiar HTML + Javascript for UI layout and functionality:

      • Scaleform [Paid] (aka the Old Guard of this style - used by Blizzard in various games like Starcraft II, iirc)
          • I'm unsure if this even exists anymore or is still supported
          • https://www.autodesk.com/products/scaleform/features/all/gallery-view
          • Existing integrations in Unity and Unreal (I think)
      • Coherent Labs' various products [Paid] (aka Scaleform's more modern alternative)
          • The names keep changing, but I think the new thing is Coherent Gameface? https://coherent-labs.com/products/coherent-gameface/
          • Existing integrations in Unity and Unreal
      • Ultralight [Free for individuals and small indies] (aka the lighter-weight, open alternative to Scaleform and Coherent)
          • Currently in Beta
          • Not as widely supported (no existing engine integrations I know of; you need to integrate it yourself in C/C++)
          • Missing lots of features in comparison to Scaleform/Coherent
          • https://ultralig.ht/

    There are probably more, but those are the ones I know of. Maybe they'll be a good start for Googling.

    As to creating an in-game console, I don't see why you couldn't build that using HTML/Javascript (the visual aspects, anyway). You basically want to give users a basic 'scripting' API that your console understands - when they enter a script or command, you call the appropriate function in your API (the API itself could even be Javascript, but you'll have to somehow bind that API to the underlying game engine or game functionality to make things actually happen). Example:

      > base.build scout 4

    tells the base object to build 4 scouts: somehow you've hooked up a little interpreter that translates that line into actual function calls in your API - the equivalent of the user pressing the 'Build Scout' button in the base UI 4 times.

    Warning: From my personal experience (and some minimal communication with industry devs who've used Scaleform/Coherent), there are some possible issues with building game UI this way:

      • A full HTML/Javascript rendering engine is a relatively heavyweight thing, and you can easily run into performance issues with complicated UI.
      • Integrating these systems into your engine / codebase / application can be a relatively complicated task requiring knowledge of C++.
      • Expensive! (For the Big 2, anyway.)
  10. Assuming you have some decent algorithm for figuring out where to place objects on your terrain (not half in a cliff, not on a super steep slope, etc.), the first thing that comes to mind is to define a custom terrain mesh / chunk / whatever you want to call it as part of the placed object (at least for large objects like a castle). When you want to place that object, use the custom terrain that is a part of it to deform the procedurally-generated terrain; most of your terrain remains procedural, but special locations that require human eyes-on get a custom treatment. At the edges of this custom terrain mesh you could interpolate terrain heights between the custom verts and the procedurally-generated verts to get nicer blending.
  11. Horizon: Zero Dawn is a great recent example of vegetation placement - their system is broadly split into authored source data and tooling on one side, and the runtime placement system on the other.

    Source data (and authoring):

      • Painted / generated per-world-tile texture maps, e.g. Rivers, Big Trees, Biome X, etc. These can be whatever information is useful to your placement algorithm. It's common for things like 'Rivers' or 'Roads' to actually be signed distance fields, so you can run queries like '10m away from roads'. You'll generally want information derived from your terrain as well, like Slope, Curvature, etc.
      • 'Placement graphs', i.e. some way to express how you want to combine all your source data to place different assets. This could be a textual expression language, a node graph, rules defined in code, whatever - the point is that you can take all your source data and define a set of rules for how something is placed, e.g.:

        Probability(my_tree_a) = (distance(water) < 10) * keyframe(elevation, <some keyframes>) * ...

    Runtime placement:

      • When you approach a terrain tile in Horizon, they start placing nearby objects using their placement algorithm, which takes all the source data + placement graphs and evaluates them. In Horizon's case they do this on the GPU, in a massively parallel fashion, to efficiently place thousands of objects.
      • Objects are placed at pre-defined (ish) locations. You can generate these locations using blue noise, hex packing, etc. - you just want some kind of irregular-ish grid that gives natural-looking, uniform random placement.
      • At each location, evaluate all the possible placement graphs that could go there (grouped however makes sense to you). Generate a random value and randomly choose what to place at that location based on the evaluated results (it might be nothing!).
      • 'Placing' something really means recording a pair <model ID, position>. You can also generate other information (scale, rotation, tilt, whatever).

    What you end up with is basically a point cloud of various models scattered across your landscape, which you can transform and feed into your normal rendering pipeline however works best for you. The more traditional approach is to move both steps offline into tooling and package up all the models and instances to be loaded by the game, but Horizon's approach allows some pretty awesome in-engine iteration (paint into the 'Trees' texture and watch things respond in real time). Far Cry 5 is an example of the more traditional offline approach, iirc.

    Edit: Tl;dr - you need some way to come up with a point cloud of models to render. You can do that on the GPU or the CPU depending on the number of things in your environment, perf requirements, etc. If you're generating chunks, it makes sense to run your placement algorithm when generating a chunk. When drawing a point from your point cloud, you can choose between the billboard and the full model based on distance from the viewer, just like LODing a normal model.
  12. Most AAA games these days that deal with large terrains use some form of virtual texturing / clipmaps / etc. for enhanced texture detail near the viewer. A benefit of having all that information lying around is that you can sample from it to blend your objects with the ground. Done right, you can 'automatically' create pretty convincing details at the seams between, say, a rock and the sand beneath it (where the sand blends a little way up onto the rock). A general improvement to this technique is to also blend the normals between the terrain and the object in this small transition area, to remove some of the edge discontinuity.

    Taken to the extreme, this approach can add a lot of detail. By combining geometry + terrain decals + terrain clipmaps, you can create a sort of 'geometric terrain decal'. Imagine you have a decal of a couple of rocks that you render into your terrain clipmap; this creates interesting texture detail, but no geometric detail. You could instead render a model of those (textured) rocks, but when the rocks LOD out at some distance you lose the texture detail as well. If you separate the geometry from the decal, you get good-looking results at a greater variety of distances:

      • Render the rocks decal into your terrain clipmaps. You'll see this decal in the terrain for as far away as your clipmaps support, but obviously with no geometric detail added.
      • At close-ish distances, render the geometry of the rocks that matches the decal, using a technique that derives its material properties purely from the clipmaps underneath it.
      • Profit! Your nearby terrain gains both geometry + texture detail, and further-away terrain gains texture detail, all without increasing the resolution of your heightmap.

    I'm not sure how widespread this technique is (I imagine many studios have some equivalent), but I first saw it last year at GDC (GDC 2018 - "Terrain Rendering in 'Far Cry 5'"). I highly recommend that talk, and the related "Procedural World Generation of 'Far Cry 5'", if you have access.
  13. 0.5m between heightmap vertices is common for recent large open-world games. iirc the most recent Far Cry, Ghost Recon: Wildlands, and Horizon: Zero Dawn all use somewhere around (or exactly?) that number. Generally you can then use various tricks to improve the perceived terrain detail (tessellation of nearby tris, parallax occlusion mapping, etc.).