
Entries in this blog

eppo

In the game, transportation of goods is handled by little drones you can build/purchase at factories located across the map. You don't control these directly, so it'll require some pathfinding to get them around.

Because the map is too large to cover with a (dense enough) regular grid, I generate a Poisson field in which the minimum distance between points depends on how close they lie to the underlying scene geometry, so points end up more densely distributed near the terrain surface.
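
As a minimal illustration of that kind of variable-radius Poisson sampling, a dart-throwing sketch in Python (distance_to_geometry and the radius constants are placeholders, not the actual implementation):

import random

def poisson_points(bounds_min, bounds_max, distance_to_geometry,
                   r_near=2.0, r_far=20.0, falloff=50.0, attempts=20000):
    """Dart throwing with a rejection radius that grows with distance to the
    scene geometry, so samples end up denser near the terrain surface."""
    points = []
    for _ in range(attempts):
        p = tuple(random.uniform(lo, hi) for lo, hi in zip(bounds_min, bounds_max))
        d = distance_to_geometry(p)                       # hypothetical scene query
        r = r_near + (r_far - r_near) * min(d / falloff, 1.0)
        # accept only if no accepted point lies within the local minimum distance
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= r * r for q in points):
            points.append(p)
    return points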

Next, the 1D ordering of sample points gets optimized by sorting them along a Hilbert-like curve, like this:

Crv1.png

This is done for two reasons: it improves tree construction times, as nearby points are likely to result in similar graphs when passed through an A* search (while reusing the ordering of points from previous searches). Secondly, if points end up with comparable ordering in whatever lookup tables they're referred to in, those tables can easily be compressed RLE-style. This allows me to store and traverse all n² flow fields describing the fastest route from every point to every other point in the graph.
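
A rough sketch of both ideas: the standard Hilbert xy-to-index mapping used as a sort key, plus run-length encoding of a lookup row (the grid resolution and the quantisation of samples are assumptions):

def hilbert_index(order, x, y):
    """Position of integer cell (x, y) along a Hilbert curve whose side is
    2**order cells (standard xy -> distance conversion)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

def rle_encode(row):
    """Run-length encode one flow-field lookup row; with Hilbert-ordered
    points, neighbouring entries tend to repeat, so runs get long."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

# e.g. sorting 2D samples quantised to a 1024x1024 grid along the curve:
samples = [(3.2, 7.9), (512.0, 40.5), (511.0, 41.0)]
samples.sort(key=lambda p: hilbert_index(10, int(p[0]), int(p[1])))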

Nodes (1).png

Here I've set the drones to follow the second camera through the flow field:

Bye!

eppo

Hello.

Finally some dynamic indirect lighting in the renderer!


(This uses only a single ambient light source: an environment map.)

It's fairly lo-res, but it's good for things like "walking through a dark cave" or "a giant spaceship hanging overhead".

On the CPU side, I compute a diffuse occlusion term by 'ray tracing' a simplified sphere representation of the scene into a tetrahedral grid of cube maps / bit fields (the 'rays' are pre-rasterized bit masks fetched from a LUT).
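
A heavily simplified sketch of the idea: each probe keeps one bit per precomputed direction, every occluder sphere ORs in the cone of directions it covers, and the occlusion term is the fraction of set bits. The direction set and cone test here are stand-ins for the pre-rasterized LUT masks:

import math

def fibonacci_sphere(n=64):
    """A fixed, roughly even set of unit directions to attach bits to."""
    dirs, golden = [], math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(1.0 - y * y)
        dirs.append((math.cos(golden * i) * r, y, math.sin(golden * i) * r))
    return dirs

DIRECTIONS = fibonacci_sphere(64)

def sphere_mask(probe, center, radius):
    """Bit mask of the directions covered by one occluder sphere as seen
    from the probe (the cone the sphere subtends)."""
    to_c = [c - p for c, p in zip(center, probe)]
    dist = math.sqrt(sum(v * v for v in to_c))
    if dist <= radius:
        return (1 << len(DIRECTIONS)) - 1            # probe is inside the sphere
    axis = [v / dist for v in to_c]
    cos_cone = math.sqrt(1.0 - (radius / dist) ** 2)
    mask = 0
    for i, d in enumerate(DIRECTIONS):
        if sum(a * b for a, b in zip(axis, d)) >= cos_cone:
            mask |= 1 << i
    return mask

def occlusion_term(probe, spheres):
    """Fraction of directions blocked by the simplified sphere scene."""
    mask = 0
    for center, radius in spheres:
        mask |= sphere_mask(probe, center, radius)
    return bin(mask).count("1") / len(DIRECTIONS)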

During rendering, I then construct the occlusion term for a specific vertex based on its position and orientation in the grid, and use it to interpolate between an indirect term grabbed in screen space and whatever light comes directly from the ambient light source itself.


journal_1.png

eppo

Translucency maps

Hello,

A while ago I fiddled a bit with an offline translucency technique as discussed here: https://www.gamedev.net/topic/653094-baking-a-local-thickness-map/. Never officially made it part of the baking pipeline, even though it should only be a small extension - still just flip and ray test those normals.

Though instead of averaging all sampled ray distances, I split them up into small 4x4 sets at 8-bit precision, so as to break the uniform thickness up into several view-dependent values. These grids fit snugly into a uint32[4] vertex map component and can be sampled in a vertex shader based on the camera-to-vertex view tangent.
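
A sketch of that packing scheme (the byte layout and the tangent-to-cell mapping are assumptions):

def pack_thickness_grid(samples_4x4):
    """samples_4x4: sixteen thickness values in [0, 1], row-major over a 4x4
    grid of view directions; packed into four uint32s per vertex."""
    words = []
    for row in range(4):
        word = 0
        for col in range(4):
            byte = int(round(samples_4x4[row * 4 + col] * 255.0)) & 0xFF
            word |= byte << (8 * col)
        words.append(word)
    return words                       # -> uint32[4] vertex map component

def sample_thickness(words, u, v):
    """u, v in [0, 1]: the view tangent remapped to the 4x4 grid (nearest
    cell); a vertex shader would do the same unpack with bit ops."""
    col = min(int(u * 4.0), 3)
    row = min(int(v * 4.0), 3)
    byte = (words[row] >> (8 * col)) & 0xFF
    return byte / 255.0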

sss-1.png

sss-2.png

eppo

Hello again,

Updated my lightmapper to bake full indirect diffuse as opposed to only an ambient occlusion term. It now also captures a viewpoint dependent occlusion mask that lets the environment cast a silhouette onto any reflective surfaces:

SMW.png

Also added a 'bias & gain' option to the multitexturing interface that lets you shift the gradient ramp and contrast of vertex map masks. This makes them usable as e.g. damage/wear maps, despite their lo-res nature:

Mailboxes.png
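
The 'bias & gain' remap above is presumably something along the lines of the classic Schlick curves; a minimal sketch:

def bias(t, b):
    """Schlick bias: shifts the midpoint of a 0..1 ramp (b = 0.5 is identity)."""
    return t / ((1.0 / b - 2.0) * (1.0 - t) + 1.0)

def gain(t, g):
    """Schlick gain: raises or lowers contrast around 0.5 (g = 0.5 is identity)."""
    if t < 0.5:
        return 0.5 * bias(2.0 * t, g)
    return 1.0 - 0.5 * bias(2.0 - 2.0 * t, g)

# remapping a lo-res vertex-map mask value into a sharper wear mask:
wear = gain(bias(0.35, 0.7), 0.3)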


eppo

Bokeh to the Future

Hi!

Wasted an entire week trying to salvage this plan I had for a new depth-of-field shader. To up the performance, I came up with this really awkward mipmapping scheme, because for some reason I was convinced the weighted blending filter I needed wasn't separable. A few minutes reconsidering this just now and it turns out a two-pass method is perfectly feasible - so a quick rewrite later and I have this:

019_a.png

This version gets rid of the worst bleeding artifacts in both the near and far field by adjusting any texel's circle-of-confusion radius based on the depth values of neighboring (and occluding) texels.
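
For reference, a sketch of the two standard building blocks involved: the thin-lens circle of confusion, and one 1D pass of a separable scatter-as-gather blur. The actual weighting and the neighbour-based CoC adjustment in the shader aren't reproduced here:

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Thin-lens CoC diameter for a point at 'depth' (all in the same units)."""
    return abs(aperture * focal_len * (depth - focus_dist) /
               (depth * (focus_dist - focal_len)))

def dof_blur_1d(colors, cocs, horizontal=True, max_radius=8):
    """One pass of a two-pass (separable) blur: a tap contributes to a texel
    only if its own CoC radius is large enough to reach it."""
    h, w = len(colors), len(colors[0])
    out = [[(0.0, 0.0, 0.0)] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, weight_sum = [0.0, 0.0, 0.0], 0.0
            for t in range(-max_radius, max_radius + 1):
                nx, ny = (x + t, y) if horizontal else (x, y + t)
                if 0 <= nx < w and 0 <= ny < h and abs(t) <= cocs[ny][nx]:
                    acc = [a + c for a, c in zip(acc, colors[ny][nx])]
                    weight_sum += 1.0
            out[y][x] = tuple(a / max(weight_sum, 1.0) for a in acc)
    return out

# horizontal pass, then a vertical pass on the intermediate result:
# blurred = dof_blur_1d(dof_blur_1d(colors, cocs, True), cocs, False)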

019_b.png

I apply a basic dilation filter to bring out the brighter values, but I think I'm probably going to have to render some small point sprites to really have those distinct iris blade patterns pop out.


eppo

Cloud.gen

'allo


Something I've vowed several times I'd never do again: hand-paint a skybox.

The outdoor cloud-ish type specifically:

cloudmap.png

Painting clouds is all fun and fulfilling, until you need to fill out a full 360 degree view with dozens of them.

I preferred to draw a cloud scene as a spherical environment map; the stretching around the outer regions is somewhat easier to deal with than the discontinuities along the borders of the six faces of a cube map. (I always do my painting in Painter, so no clever 3D painting across seams for me.) Then there are the obvious issues with lighting (time of day etc.), which often require a complete repaint if you don't plan properly.


Anyway, the interface of the 'cloud tracer' tool as it is now:

options = [
    "mesh"         => loadmesh([ "path" => "D:/Temp/Cloud.lxo", "mask" => "Bake" ]),
    "env"          => loadimg([ "path" => "D:/Temp/Cloud_Env_2048.exr", "gamma" => 1.0 ]),
    "coverage"     => loadimg([ "path" => "D:/Temp/Cloud_Coverage.png" ]),
    "normal"       => loadimg([ "path" => "D:/Temp/Cloud_Normal.png", "channelmap" => [ 0, 2, 1 ], "levelmin" => -1.0, "levelmax" => 1.0 ]),
    "density"      => 0.025,
    "densityhaze"  => 0.00025,
    "tilelevelmin" => 0.4,
    "tilelevelmax" => 1.0
];
saveimg([ "textures" => tracer(options) ]);
The "mesh":

cloudrep.png

CREEPY.

I skipped the simulation pass I initially planned on implementing, and simply modelled and replicated a series of basic cloud shapes. These cloud parts are all made up of smaller convex elements, which makes ray-testing fast and robust (occlusion tests against translucent convex objects make up the bulk of the work). I've also been meaning to try and run a radiosity solver on a low-resolution proxy mesh and have its results mapped back onto higher detail geometry - these convex bits seemed like a good candidate.
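
A toy version of such an occlusion test, with spheres standing in for the convex elements and the result accumulated as Beer-Lambert transmittance (the 0.025 density matches the tracer option above, but the rest is an assumption):

import math

def ray_sphere_span(origin, direction, center, radius):
    """Length of the segment a unit-length ray direction travels inside a
    sphere, or 0.0 if it misses (entry clamped to the ray origin)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc <= 0.0:
        return 0.0
    sq = math.sqrt(disc)
    t0, t1 = max(-b - sq, 0.0), max(-b + sq, 0.0)
    return t1 - t0

def transmittance(origin, direction, spheres, density=0.025):
    """Beer-Lambert transmittance towards the light through all the
    translucent convex elements the ray passes through."""
    optical_depth = sum(ray_sphere_span(origin, direction, c, r) for c, r in spheres)
    return math.exp(-density * optical_depth)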

"environment":

cloudprobe.png

Illumination comes from a single light probe for now.

"coverage"/"normal":
cloudslab.png

I got to doodle some more clouds by hand: geometry is expanded using these normal mapped 'depth billboards', which then have results from the lighting pass projected onto them. They're cheaper to render than volumetric textures, as there's no ray-marching involved. Also, 2D textures are far more straightforward to produce. I might come back to volume textures for other reasons later though.

Then there's some stuff about global cloud density and fog values for space in the scene that's not occupied by cloud geometry.



LET'S RENDER AND SEE:

clouds4.png


k.

- Clouds need to be a bit more wispy here and there.
- The in-modeller point cloud replicator I used does a fairly good job at coming up with interesting cloud shapes, but I feel a custom distributor could still do better.
- The radiosity pass finishes pretty quickly, as I had hoped, but tracing the final 2048 * 2048 image takes a while running on the CPU - needs more Compute.



Apologies for all the going back-and-forth with the camera.


Lastly, a bit of blatant self-promotion: I've been looking to do some graphics/tools programming for a U.S./European studio. If anyone can think of someone who might be interested, I'd love to hear about it (doesn't have to be game related per se).
eppo
The thing with architecture is that so much about it is up to whim, and any procedural creation algorithm needs to be guided by a large number of rules and grammars before it'll know how to churn out even remotely usable pieces of real estate.

I got to the point where I could generate objects similar to these:

render024_synth.png

These were actually the most acceptable out of several hundred bakes (only three front doors!), and I'm afraid getting the tool to generate acceptable output at all times is becoming way too time-consuming. Instead, I'm simply going to hand-model and texture a series of basic solids and then pass sets of these through the CSG boolean tool:

render024_blocks.png

render024_boolean.png

There's no way to deform these yet (e.g. stretching them out without distorting them) apart from orienting and scaling them, but it serves for now. I also tried to merge the whole scene into a single solid (trees and foliage are still instanced) to trim away as much overdraw as possible (the terrain especially is texfetch-expensive).

render024.png

render024_detail.png
eppo
Time for yet another rewrite of my CSG tool.
Instead of going down the iso-surfacing road like earlier attempts, I now take a more boolean-like approach, in the sense that I no longer convert the input geometry into an intermediate distance field but keep it in its original b-rep form throughout the merging process. The first advantage is that this suffers far less from the aliasing issues that occur when you bake a mesh down into a volumetric grid or tree structure. Secondly, there's no loss of connectivity information in the base geometry or in any vertex map data (discontinuous uvs etc.) that comes with it.

the base parts:

render012.png

after running the CSG operation and texturing:

render013.png

and the accompanying uv map:

render013_uv.png

The algorithm behaves very similarly to the construction of a constrained Delaunay triangulation. It looks for intersections between features of the individual CSG parts and either splits geometry or spins edges until all overlaps are resolved, while at the same time trying to maintain the Delaunay property (Delaunay was Russian; don't think he's French) within the constraining boundaries.
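
For reference, the core predicate such a scheme keeps checking when deciding whether to flip ('spin') a shared edge is the standard in-circumcircle test:

def in_circumcircle(a, b, c, d):
    """True if d lies strictly inside the circumcircle of the counter-clockwise
    triangle (a, b, c). A non-constraint edge whose opposite vertices violate
    this gets flipped to restore the Delaunay property."""
    adx, ady = a[0] - d[0], a[1] - d[1]
    bdx, bdy = b[0] - d[0], b[1] - d[1]
    cdx, cdy = c[0] - d[0], c[1] - d[1]
    det = ((adx * adx + ady * ady) * (bdx * cdy - cdx * bdy)
         - (bdx * bdx + bdy * bdy) * (adx * cdy - cdx * ady)
         + (cdx * cdx + cdy * cdy) * (adx * bdy - bdx * ady))
    return det > 0.0

# the edge (b, c) shared by triangles (a, b, c) and (b, d, c) should be
# flipped to (a, d) when in_circumcircle(a, b, c, d) returns True.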

Obviously there are things the contouring method allows you to do that are simply not possible using strictly mesh-based booleans. You lose, for example, the ability to trace or do blending operations on noise functions and volume textures. I have to resort to basic displacement mapping or alternatively, replicate some actual geometry across the surface of the mesh and use the tool to merge all that into a single manifold:

render014.png

render015.png

Placement of the rock blocks is purely arbitrary and results in some undesired cut-throughs in curved areas, so I'll have to look for a method that's a bit more controlled without being too involved.
eppo

B-rep displacement mapping

In an earlier post I was considering how to add surface detail to meshes generated by the boolean tool. Back then I ran a union operation on a base mesh and a series of smaller objects replicated over its surface, but it's difficult to create a believable, continuous-looking surface simply by gluing multiple rigid objects together without obvious cracks and bits sticking out (especially on curved surfaces).

Displacement mapping seemed more practical, but I wanted to do it all offline (earlier in the pipeline, so the auto uv-ing and collision system etc. can pick up on it), and you can't really bring out all the detail in a map unless you adaptively refine/subdivide down to the pixel level, which is something you would do at run-time.

A way to solve this would be not to use bitmap data for displacement, but some actual geometry:

render023_relief.png

That's a 1x1 (tileable) piece of geometry that, just like an image map, is treated as if it exists in UV(W) space. As a standard boundary-represented chunk of mesh it has well-defined edges, and applying it as a displacement (in itself also a boolean operation, performed in UV space) is much more conservative when it comes to the additional vertices needed to express its union with some underlying surface.

So using this as the base mesh:

render023_base.png


After displacing the three individual pieces and then combining them:

render023.png


Unlike the earlier method, the relief mesh adapts to stretching/scaling in the base mesh's uv map and properly curves along its surface.
eppo
'allo

Finally got around to add an ambient occlusion baker to the pipeline.

I'm baking it all to vertex maps, as these give generally good results and avoid the hassle and memory demands of having to assign unique texture space to every triangle in a mesh. This meant I could no longer use the render-to-texture-only routines provided by Modo. Also, lots of instanced meshes (grass, trees, etc.) don't exist until mesh compilation time, which excluded them from any ray tests performed in the modeller. And I don't like having to bake all that GPU stuff by hand, so this was always going to be automated one day.

Occlusion testing itself is pretty straightforward, but here are some findings specific to creating per-vertex maps:

- Don't only evaluate rays at vertex positions, but sample from various points over the surface of the vertex's neighboring triangles - this avoids local undersampling of the surrounding geometry. Worst-case example: most of a vertex's connected face area may actually be un-occluded, but if the vertex itself is occluded by some small object, all rays fired from its position will be too, and the vertex will show up completely dark. Multisampling smooths out such deficiencies.

occlusionbake1.png

- Use a series of sample positions combined with a series of ray directions that spread out evenly over a triangle's area and a hemisphere, respectively - precomputed Poisson distributions work well (see the sketch at the end of this entry). It's not too relevant which point fires which ray; triangles tend to be small compared to their environment, which keeps rays from converging too much once they fan out into the scene.

occlusionbake2.png

- Once you have a bunch of per-triangle occlusion values, you'll need to weld them together to form per-vertex values. Normals are usually welded based on some angle threshold, and my first idea was to use a similar threshold specific to occlusion values. It turns out it's better to have them weld along with the normal angles: if the normals weld, the occlusion values weld too (or any weight/color map value, really). What this really means is that you use any lighting discontinuities caused by unwelded triangle normals as an excuse to keep more triangle-specific occlusion data around as well (not averaging at the vertex results in higher graphical fidelity), but only then, so as to avoid an overly faceted look. It also results in the fewest duplicated vertices needed in the vertex buffer.

render021.png
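
A sketch of the sampling setup from the second point above, using uniform triangle points and cosine-weighted hemisphere directions (whether the original uses cosine weighting or plain precomputed Poisson sets is an assumption; scene_ray_hit is a hypothetical scene query):

import math, random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    l = math.sqrt(sum(x * x for x in v))
    return tuple(x / l for x in v)

def sample_triangle(p0, p1, p2):
    """Uniform point on a triangle (sqrt warping of two uniform randoms)."""
    r1, r2 = random.random(), random.random()
    s = math.sqrt(r1)
    w0, w1, w2 = 1.0 - s, s * (1.0 - r2), s * r2
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(p0, p1, p2))

def sample_hemisphere(normal):
    """Cosine-weighted direction around the (unit) surface normal."""
    r1, r2 = random.random(), random.random()
    phi, r = 2.0 * math.pi * r1, math.sqrt(r2)
    local = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - r2))
    up = (1.0, 0.0, 0.0) if abs(normal[2]) > 0.9 else (0.0, 0.0, 1.0)
    t = normalize(cross(up, normal))
    b = cross(normal, t)
    return tuple(local[0] * t[i] + local[1] * b[i] + local[2] * normal[i]
                 for i in range(3))

def vertex_occlusion(neighbor_triangles, normal, scene_ray_hit, rays_per_point=16):
    """Average occlusion over points spread across the vertex's neighbouring
    triangles (the multisampling from the first point above)."""
    hits = total = 0
    for tri in neighbor_triangles:
        for _ in range(rays_per_point):
            origin = sample_triangle(*tri)
            if scene_ray_hit(origin, sample_hemisphere(normal)):   # hypothetical query
                hits += 1
            total += 1
    return hits / max(total, 1)
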
eppo

More of the bool-tool

Been trying to build larger scenes using the CSG method.

As base building blocks I use a series of SubD shapes as they avoid making everything look too angular. The older mesher used to make a mess of curved surfaces, but this version handles them well. These basic chunks have all their uvs, splat maps and density maps for foliage replication already applied to them:

render016_blocks.png



Instancing and merging them:

render016.png



Then after adding more detail to the blocks (a block itself can be a collection of blocks, in a recursive manner):


render017.png



There are two main reasons to run a boolean operation on these geometric soups. The first is to cull a lot of overlapping geometry that would otherwise result in a massive amount of rendering overdraw. Secondly, once connectivity between two shapes has been established, they can exchange information at the intersection points: averaged normals to improve lighting continuity, for example, or adjusting to one another's local triangle resolution, which yields better-formed triangles and in turn makes the mesh more suitable for carrying per-vertex data, e.g. a baked global illumination vertex map:


render017_tris.png
eppo
hallo,


Modo's 'shader tree':


ModoMaterial.png


It's a peculiar thing at first: instead of creating explicit materials, you add shading components (textures etc.) to a single hierarchy. Polygons are then assigned tags which correspond to masking-groups (a sort of polygon-'gateway') located in the tree. During shading a polygon walks the tree and grabs all elements its tags allow it access to. These are added to a flat list which is then evaluated bottom to top to construct a material specific to that polygon. It's great in that it allows you to create different material routes for polygons to follow, but ones that can still have traits in common.

When processing the tree, you usually find 'Material' items at the bottom, which are just collections of constants (like base diffuse and specular values). These fill in a material's initial property values. The remaining items in the tree are used to modulate these base values.

Modo has a whole bunch of different 'layers' you can use for this purpose. These layers are set to a specific 'effect', which is whatever it is you want them to modulate (again: diffuse, specular etc.). For example, the image-map layer type reads its output from a texture, while a gradient layer samples it from a curve. Any layer type can also be set to act as a mask for other layers, weighting their influence. All the various layer types share common settings like blend modes, a global opacity value, an invert toggle and levelling settings. Very Photoshop-ish. After traversing the tree, the final values are used as input for lighting.
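
A toy version of that bottom-to-top evaluation; the blend modes, mask weighting and layer fields below are simplified guesses, not Modo's actual rules:

def blend(base, layer, mode):
    if mode == "multiply":
        return base * layer
    if mode == "add":
        return base + layer
    return layer                                   # "normal": plain replace

def evaluate_effect(material_constant, layers, effect):
    """Start from the Material item's constant and let each layer that
    targets this effect modulate it, bottom to top."""
    value = material_constant
    for layer in layers:                           # ordered bottom -> top
        if layer["effect"] != effect:
            continue
        weight = layer.get("opacity", 1.0) * layer.get("mask", 1.0)
        sample = layer["sample"]()                 # image fetch, gradient, etc.
        value = value + weight * (blend(value, sample, layer["blend"]) - value)
    return value

# e.g. the diffuse amount of one polygon's flattened layer list:
layers = [
    {"effect": "diffuse", "blend": "multiply", "opacity": 0.8, "sample": lambda: 0.6},
    {"effect": "diffuse", "blend": "add", "sample": lambda: 0.1},
]
diffuse = evaluate_effect(0.7, layers, "diffuse")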

Point is: this layered approach seemed very suitable for replicating in-shader, so I wrote an importer.




It's basically a Modo-to-D3D converter. It looks for unique poly-tag -> material sets and turns them into various state objects, input layouts, vertex streams, block-compressed textures, samplers and shaders.

For anyone familiar with the application: it can also tessellate the newer Pixar-type SubD surfaces, create compositable animations based off Actor-Action sets, and bake those replicator thingies.
eppo
Felt my meshing tool needed another rewrite; getting trickier though.

The idea behind the tool is to construct a mesh using smaller simpler building blocks; kind of like a purely additive CSG modeller. I started out with a Marching Cubes implementation, but I really wanted it to be able to capture sharp edges in the base geometry. Also the triangle quality of the generated meshes needed improving. I prefer to bake data as vertex maps whenever I can, but shading based on interpolation between per-vertex values on awkwardly shaped triangles can quickly give poor results.

Scenes are built using standard non-self-intersecting meshes as basic building blocks. The advantage of this is that I can use any existing modeller to set these scenes up, while allowing for a much less linear workflow. Blocks are modelled once and can then be instanced/transformed into the scene multiple times.

The algorithm then starts out by turning the bounding box of the mesh into a set of connected tetrahedrons. From there it keeps sampling the background mesh and using that data to refine the tetrahedral grid until it resembles the original mesh. It ends up with a big blob of tetrahedrons with a single connected triangle manifold on the surface.

What's nice about tetrahedrons is that they're guaranteed to be convex, so you basically get a convex decomposition (for collision detection) for free. Though I do a quick merge pass to see if individual tets can be clustered together into larger convex chunks. A volumetric description of the mesh allows for more robust intersection testing, but I only really need to keep the outer "crust" of tetrahedrons around, so I discard the inner ones to save on memory.
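
Since the tets double as a convex decomposition for collision, the basic containment query against one of them is the usual signed-volume / same-side test; a sketch:

def signed_volume(a, b, c, d):
    """Signed volume of tetrahedron (a, b, c, d), times six."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ad = [d[i] - a[i] for i in range(3)]
    return (ab[0] * (ac[1] * ad[2] - ac[2] * ad[1])
          - ab[1] * (ac[0] * ad[2] - ac[2] * ad[0])
          + ab[2] * (ac[0] * ad[1] - ac[1] * ad[0]))

def point_in_tet(p, a, b, c, d):
    """p is inside the tetrahedron if it lies on the same side of all four
    faces as the opposite vertex (the signs of the sub-volumes agree)."""
    signs = [signed_volume(a, b, c, p),
             signed_volume(a, b, p, d),
             signed_volume(a, p, c, d),
             signed_volume(p, b, c, d)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)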

jsJLEb1.png

Any vertex weight map applied to the base mesh is projected back onto the resulting triangle mesh. I use these as masks for textures and density maps for surface particle replication (foliage etc.).

N5DPRWc.png

Two Catmull-Rom curves projected as road uv-maps. Locally the vertex resolution needs to be uniform and high enough to capture the curvature of the road. The green weight map masks a grass texture while also modulating the alpha channel of the road:

1TzQbAC.png
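
The curve evaluation itself is the standard uniform Catmull-Rom form (the control points below are made up); projecting the result into a road uv strip is the tool-specific part:

def catmull_rom(p0, p1, p2, p3, t):
    """Point on a uniform Catmull-Rom segment between p1 and p2, t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2.0 * b
               + (-a + c) * t
               + (2.0 * a - 5.0 * b + 4.0 * c - d) * t2
               + (-a + 3.0 * b - 3.0 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3))

# sampling a control polygon densely enough to capture the road's curvature:
control = [(0.0, 0.0), (2.0, 1.0), (4.0, 0.5), (6.0, 2.0), (8.0, 2.5)]
road = [catmull_rom(*control[i:i + 4], t / 8.0)
        for i in range(len(control) - 3) for t in range(9)]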

And the final scene textured and baked in Modo. Modo allows you to set up a pre-shaded environment into which you can simply drop newly generated meshes and have their materials etc. applied automatically.

render009.jpg


render010.jpg


render011.jpg