
About this blog

Any interesting changes or discoveries

Entries in this blog


Advances in LPV, Volumetric Lighting and a new Engine Architecture

Welcome once again, and thanks for clicking on my journal!
When we think of light propagation volumes, we think of awesomeness with a ton of light bleeding, right? Well I do, so I've been trying to limit the amount of light bleeding, and I find the current result acceptable without being too hacky.

The first change is the injection. Usually I would inject the lighting at the same position as the voxels into an SH lighting map, but once I do this I don't have much information about which "direction" the lighting came from when propagating it. In that case I would have three choices: light bleeding, expensive occlusion calculations, or no propagation at all (but the last one wouldn't be any fun...). So, what if, instead of injecting the lighting at the voxel position, I inject it with a small offset? This offset is proportional to the approximated normal, so the actual lighting information is injected into the empty space in front of the surface.
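To make the idea concrete, here is a tiny CPU-side sketch (hypothetical names, not the engine's actual code): the SH sample is injected half a cell along the approximated normal, so it lands in the empty cell in front of the surface instead of inside the occupied voxel.

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical helper: offset the injection point half a cell along the
// (approximated) voxel normal so the SH sample lands in the empty cell
// in front of the surface rather than inside the occupied voxel.
Vec3 injectionPosition(const Vec3& voxelCenter, const Vec3& normal, float cellSize)
{
    const float offset = 0.5f * cellSize;
    return { voxelCenter.x + normal.x * offset,
             voxelCenter.y + normal.y * offset,
             voxelCenter.z + normal.z * offset };
}
```

The half-cell factor is just one plausible choice; anything that reliably pushes the sample out of the occupied cell would do.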

The second change I made was simply discarding propagation into any occluded voxels/cells, as the new method doesn't require it. The issue with this is that if the cell size (how much space a voxel/cell occupies in whatever unit your system uses) is way too big compared to the world, the propagation will visually fail and look horrible, so a bit of performance has to be sacrificed to make it look good.

The last change is that when sampling the final indirect GI, I apply a small offset, since all the lighting information now lives in the "empty" cells. One might call this a crude approximation, but I don't find it that horrible.

So, there you have it: that's my current recipe for an LPV system without bleeding. There are still lots of things to fix, but it's a start.

In my last entry I talked about a cascaded LPV system; however, this has changed slightly. You can still configure multiple cascades, but the way it works is slightly different. In each cascade the system creates two grids: a high frequency grid and a low frequency grid (the dimensions of the grids are unchanged). The low frequency grid represents the low frequency lighting information, and the high frequency grid represents the slightly higher frequency lighting information. The two grids are treated as separate grids with different cell sizes, but when rendered, the energy proportion is taken into account.
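As a rough illustration of the render-time combination (the weighting here is my assumption; I haven't spelled out the exact energy proportion above):

```cpp
// Hypothetical blend of the two per-cascade grids: each stores lighting at a
// different cell size, and the final sample weights them so total energy
// stays at 1 (the 'lowWeight' split is illustrative, not an exact ratio).
float blendCascadeSample(float lowFreqSample, float highFreqSample, float lowWeight)
{
    const float highWeight = 1.0f - lowWeight; // conserve energy
    return lowFreqSample * lowWeight + highFreqSample * highWeight;
}
```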

So I'm fairly happy with how my LPV system has progressed, and I find the results acceptable. There's obviously still the issue with the "blocky" look (if you want acceptable performance), which I'll try to mess around with and share my results later on.

Now, let's steer slightly away from that and think about volumetric fog! Yes! That's right!

Volumetric Lighting!

So, to make the volumetric lighting feel more "part" of the scene, I integrated the indirect GI system. Currently I have a very basic volumetric lighting setup: raymarch from the camera to the world space position at the pixel and slowly accumulate the lighting (the method I use to calculate the lighting is based on the volumetric lighting in the game Lords of the Fallen). At each raymarch step I also sample the indirect GI from the propagated lighting map and multiply that in. And I'm really liking the results!

(I know the roughness / specular looks wrong, I still need to integrate the roughness / specular maps from the Sponza scene.) (And I seriously improved the quality of the gifs...)

Now! The only issue with this is... performance! All of that added together is asking your hardware to commit suicide; at least, mine did. Since I'm addicted to the game Dota 2, I was having a casual game with some friends and decided to program in the background. For some reason I was writing to and reading from an unbound UAV in my compute shader (I didn't realize this). The result was the GPU completely freezing (I could still talk to and hear my friends, whilst freaking out). I waited for the TDR duration; however, the TDR never occurred. So in the end I had to force a shutdown and restart quickly in order to rejoin the game (we won, though!). I was actually scared to start it again, even though I had bound the UAV...

Aside from that, I've also implemented some basic debugging tools for the LPV system, such as getting the lighting information from each cell position (it's extremely simple to implement, but really helps a lot):

Previously my engine had a pretty horrible architecture, because I'm horrible at architecture; I'm a horrible person. So I decided to attempt to improve the architecture of the engine. I decided to split the engine up into:
Helpers: just general things, common math stuff, etc.
Native Modules: shaders, containers, etc.
User Modules: an example would be a custom voxel filler or whatever; depends on the type
Chains: responsible for higher level actions, such as shadow mapping, voxel GI, etc.
Device: basically combines chains and works with them

Now, I'm not saying that this is ideal or even good, but I find it nice and functional. The user modules are a bit special: they are custom modules that the programmer can create, but each module has to derive from a module type. An example is the GI system, which has a special module type that allows modification of the lighting maps before the propagation. The programmer inherits from this type, overrides the pure virtual functions, and then pushes the module to a queue. I made a small module that "approximates" the indirect radiance from the "sky" (assuming that there is one) just to test around. The native C++ code is fairly straightforward, although this specific module type has a bunch of predefinitions and preprocessor macros in a shader file to ease the process. The shader code for this testing module:

#include "module_gridshfill.hlsl" // Our basic module definition

MODULE((8, 1, 8), (uint3 CellPos : MODULE_CELLID)
{
    // Testing data. This is just magic and stuff, not correct at all
    float fFactor = 0.01f;
    float3 f3Color = float3(0.658, 0.892, 1);
    float3x4 f3x4AmbientSH =
    {
        fFactor.xxxx * f3Color.x,
        fFactor.xxxx * f3Color.y,
        fFactor.xxxx * f3Color.z
    };

    // Raymarch down
    [loop]
    for (CellPos.y = g_fVoxelGridSize - 1; CellPos.y >= 0; CellPos.y--)
    {
        // Get the voxel
        VoxelData voxel = FETCH(CellPos - uint3(0, 1, 0));

        // If this voxel is occupied, break the march
        // TODO: Semi occluded voxels (10)
        if (voxel.fOcclusion > 0)
        {
            break;
        }

        // Write the new value on top of the current value
        WRITE_ADDITION(CellPos, f3x4AmbientSH);
    }
});
Some of this will change for sure, although it works fine for now. The result of the above is indirect radiance from the "sky". And it looks alright! So I'm pretty happy with the module system.

On a completely different note, I suddenly have this weird craving to work on my scripting language again... (I know, I know, just use an existing one... but where would the fun be in that!?) And I soon need to reimplement some sort of physics engine in this version of my engine. So, there's still lots of fun ahead!

Apart from some more or less small changes and additions, that's more or less it, folks! It's been a heavy week though, lots of things happening. For example, my dog found out that a full day barbecue party is extremely tiring; he didn't want to walk or anything, slept like a stone... (He loves walks.)

See you next time!




Cascaded Light Propagation Volumes, VS RC 2015, Retarded Calculators + More stuff

Well, let's begin, shall we! (This article isn't very focused; it's just small notes and such.)

For a while I've been thinking about working on cascaded light propagation volumes, so I finally did. For now I just have a 64 (detailed) grid and a 32 (less detailed) grid that are filled using the voxel caches. Although I haven't worked on the energy ratio yet (my solution is hacky), I like the result.

(Images scaled to fit; originally rendered at 1920x1080. The whitish color is because I've got some simple volumetric lighting going on, although it doesn't respond to the LPV yet.) (PS: Still lots of work, so there are issues + light bleeding.) (And there are no textures on the trees, for... reasons and stuff.)

I've also worked on my BRDF shading model, which is based on Disney's solution, and integrated it into the LPV system (although it's a simplified version, as we don't need all the detail and some computations are meaningless in this context). And I really think it made the indirect colors feel more part of the scene.

A poor quality gif showing how the light propagates through the scene:

On a completely different note, as I'm rewriting the engine, I felt like upgrading to the RC version of VS 2015 (and dear god, I recommend it to anyone). So I needed to recompile lots of libraries, such as SFML (+ most dependencies), AntTweakBar, plus small stuff. The AntTweakBar case was special, as it really only supports SFML 1.6: it contains a minified version of the SFML 1.6 events that it uses internally, but when the memory layout changed in SFML 2.3, it all broke. So I had to change some of the minified internal version of SFML to make it work. For anyone interested, here is the modified part of the minified SFML (it's hackish, mostly copy & paste from the SFML sources, so there are most likely errors and such, but for now it does the job):

namespace sf
{
    namespace Key
    {
        enum Code
        {
            Unknown = -1, ///
On top of that, the performance of my engine under VS 2015 strangely improved by a few milliseconds, which really surprised me; I'm not completely sure why. Also, in VS 2013 I had a strangely huge overhead when starting my application inside VS, which made the file IO incredibly slow; in VS 2015 this issue is gone, along with the huge wait time (20 seconds to a minute...).

I finally got to redesign my gbuffer, and while there's lots of work to be done, it all fits nicely. General structure:

2-channel: x = Depth, y = Packed(metallicness, anisotropicness)
4-channel: xy = Normal, z = Packed(subsurface, thickness), w = Packed(specular, roughness)
4-channel: xyz = Diffuse, w = Packed(clear_coat, emission)
The tangent is then reconstructed later, which is pretty cheap and works fine for my needs. Now all the user has to do is call GBuffer_Retrieve(...) from their shaders, and all the data is decompressed for them to use. The final data container looks somewhat like the following:

struct GBufferData
{
    float3 Diffuse;
    float3 PositionVS;
    float3 TangentVS;
    float3 NormalVS;
    float3 Position;
    float3 Normal;
    float3 Tangent;
    float SpecPower;
    float Roughness;
    float Metallic;
    float Emmision;
    float ClearCoat;
    float Anisotropic;
    float SubSurface;
    float Thickness;
};
Now, you might say, "But what if I don't want to use it all? Huge overhead!" Which is true, but: compilers! The cute little compiler will optimize out any computations that aren't needed, so if you don't need a certain element decompressed, it won't be (yay)! So all of that fits together nicely.
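For illustration, here is one common way two [0,1] values could share a single channel, as in the Packed(specular, roughness) slot; this scheme is an assumption on my part, not necessarily the engine's exact packing.

```cpp
#include <cmath>

// Hypothetical pack/unpack for two [0,1] values sharing one channel:
// quantize each to 8 bits and store them side by side in one float
// (the combined value fits exactly, since it stays below 2^16).
float pack2(float a, float b)
{
    float ai = std::floor(a * 255.0f + 0.5f);
    float bi = std::floor(b * 255.0f + 0.5f);
    return ai * 256.0f + bi;
}

void unpack2(float packed, float& a, float& b)
{
    float ai = std::floor(packed / 256.0f);
    a = ai / 255.0f;
    b = (packed - ai * 256.0f) / 255.0f;
}
```

The round trip loses at most one quantization step (1/255) per value, which is usually fine for material parameters.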

But at the same time, I think I've got a performance issue in the gbuffer filling stage, as it's huge compared to everything else. Perhaps it's the compression of the gbuffer; I'm not sure yet.

But, it's acceptable for now, although I think I can squeeze some cute little milliseconds out of it .

On a side note I've also been trying to work on some basic voxel cone tracing but it's far from done. And I seriously underestimated the performance issues, but it's pretty fun.

Now, due to family related issues I had to take my brother to our beach house (nothing fancy), and there I allocated some time to work on my retarded calculator! It's a small application based on a very basic neural network. I didn't have time to work on bias nodes or even an activation function; for now the output of a neuron is simply weight * data, although it actually produces acceptable results. The network is composed of 4 layers:
10 Neurons
7 Neurons
5 Neurons
1 Neuron

Again, this was just for fun; I didn't even adapt the learning rate during the back propagation, it was just to fill out a bit of time. The output from the application:

Starting trianing of neural network
Train iteration complete, error 0.327538
Train iteration complete, error 0.294999
Train iteration complete, error 0.266
Train iteration complete, error 0.240112
Train iteration complete, error 0.216965
Train iteration complete, error 0.196237
Train iteration complete, error 0.177651
Train iteration complete, error 0.160962
Train iteration complete, error 0.145959
Train iteration complete, error 0.132454
Train iteration complete, error 0.120285
......... a few milliseconds later
Training completed, error falls within treshold of 1e-06!
===============================
Final testing stage
Feeding forward the neural network
Final averaged testing error: 0.0178298
===============================
Please enter a command...
>> f var(a0)
Input:
 #0 -> 2
 #1 -> 4
 #2 -> 3
 #3 -> 1
 #4 -> 4
 #5 -> 5
 #6 -> 2
 #7 -> 3
 #8 -> 4
 #9 -> 1
Feeding forward the neural network
Layer Dump:
 #0 = 29.346
>> e var(a0) algo({sum(I)})
Evaluating error: (a0)
Error: 0.345961
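For anyone curious, the feed-forward idea with the simplistic neuron (output is just the weighted sum of inputs, no bias, no activation) can be sketched like this; hypothetical code, not the calculator's actual source:

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of a bias-free, activation-free neuron:
// its output is simply the weighted sum of its inputs.
float neuronOut(const std::vector<float>& weights, const std::vector<float>& inputs)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += weights[i] * inputs[i];
    return sum;
}

// One layer: each neuron has its own weight vector over the same inputs.
std::vector<float> layerForward(const std::vector<std::vector<float>>& layerWeights,
                                const std::vector<float>& inputs)
{
    std::vector<float> out;
    for (const auto& w : layerWeights)
        out.push_back(neuronOut(w, inputs));
    return out;
}
```

Chaining four such layers (10, 7, 5, 1 neurons) gives the whole network; since everything is linear, the result is effectively one big weighted sum, which is why it can only "learn" linear functions of the input.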
So, overall, I'm pretty happy with it all, but I haven't been able to allocate enough time (you know, life and stuff, school or whatever everybody suddenly expects of you). If anybody is reading this, could you comment on the colors of the images? Do you find them natural or cartoony? I find them a bit cartoony. Well, thanks for even reaching the bottom!





Less bullshit... More code!

The word "bullshit" is probably an exaggeration, although I honestly dislike exams as much as everyone else does (at least to my understanding). What I'm talking about is the glorious exams: not university exams, just regular high school exams. But that part is over now, and I'm more or less satisfied with the result. And since that part is over, I've got lots of time to code and stuff...

Just since the beginning of the exams until now, I've been rewriting my engine, and I've got pretty much everything implemented. Performance was the main goal. The previous version of my engine ran at ~100 fps at half the screen resolution; the newly written engine runs at ~120 fps at full HD resolution (i.e. 1920x1080). This is a big thing, because I've got a lot of post processing effects that really torture the GPU bandwidth, such as volumetric scattering, so the resolution seriously affects the frame time. On top of that, the architecture of the new system is seriously better, together with the new material system that I wrote about in my last entry (it has been upgraded a bit since then).

There are still lots of small optimizations to do, and still lots of unfinished "new" features. But one of the main things I changed is the way my voxelization system works. Every time a new mesh/object/whatever-people-call-it is added to the scene, the mesh is voxelized without any transformation applied into a "cache" buffer, and this cache buffer is added to a list of sorts. Then there's the main cache buffer that represents the final voxel structure around the camera. Each frame (through a compute shader), all voxel caches are iterated through; each cell of each cache is first transformed by the current transformation matrix of the mesh (as each cache represents a mesh without any transformations) and then fitted inside the main voxel cache (with some magic stuff that speeds this up). The awesome thing about this is that every time the camera moves, or the mesh is moved, scaled or even rotated, there's no need to revoxelize the mesh at all (less frame time, yay).
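Here's a CPU-side sketch of the per-voxel work that compute pass does, as described above (the names and the snapping logic are illustrative, not the engine's actual code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; }; // row-major, translation in m[3], m[7], m[11]

Vec3 transformPoint(const Mat4& t, const Vec3& p)
{
    return { t.m[0] * p.x + t.m[1] * p.y + t.m[2]  * p.z + t.m[3],
             t.m[4] * p.x + t.m[5] * p.y + t.m[6]  * p.z + t.m[7],
             t.m[8] * p.x + t.m[9] * p.y + t.m[10] * p.z + t.m[11] };
}

// Hypothetical CPU analogue of the compute shader: take a voxel from a cache
// (stored untransformed), apply the mesh's current world matrix, and snap it
// into a cell of the main grid around the camera. Returns false if the voxel
// falls outside the grid.
bool cacheVoxelToMainCell(const Mat4& world, const Vec3& cachedVoxel,
                          const Vec3& gridOrigin, float cellSize, int gridDim,
                          int& cx, int& cy, int& cz)
{
    Vec3 w = transformPoint(world, cachedVoxel);
    cx = (int)std::floor((w.x - gridOrigin.x) / cellSize);
    cy = (int)std::floor((w.y - gridOrigin.y) / cellSize);
    cz = (int)std::floor((w.z - gridOrigin.z) / cellSize);
    return cx >= 0 && cy >= 0 && cz >= 0 &&
           cx < gridDim && cy < gridDim && cz < gridDim;
}
```

Since only the matrix changes when a mesh moves, the cached voxels never need to be rebuilt; only this cheap transform-and-snap runs each frame.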

I chose to disable screen space reflections, though, as IMHO there were too many artifacts that were too noticeable. So in the meantime I have a secret next gen way to perform pixel perfect indirect specular reflections (I WISH).

Currently, all effects combined minus the ssr, showing off volume shadows. Nothing fancy.

Oversaturated example of the diffuse gi:

Dynamic filling of the voxel cache, oriented around the player:

So while playing around, I found this "mega nature pack" on the Unreal Engine 4 marketplace. So I purchased the package and started messing around; just programmer art. Now, all this shows is that I have some serious work to do on my shading model, and I need to invest some time into some cheap subsurface scattering... Btw, in the image below the normals are messed up, so the lighting appears weird in some places. And the volumetric scattering is disabled, since it also desaturates the image a bit (for valid reasons).

So I tried messing around with the normals and used SV_IsFrontFace to determine the direction of the normal on the leaves, and got something like this: (volumetric scattering disabled) (btw, quality is lost due to gifs! I love dem gifs) (ignore the red monkey)

The following is the shader used for the tree, which is written by the user (heavily commented):
.Shader
{
    // Include the CG data layouts
    #include "cg_layout.hlsl"

    // Define the return types of the shader.
    // This stage is really important: it allows the parser to create the
    // geometry shader for voxelization, and also, if the user has created
    // his own geometry shader, to figure out a way to voxelize the mesh
    // properly. This way the user can use ALL stages of the pipeline
    // (VS, HS, DS, GS, PS) without making voxelization impossible.
    // The only problem is, well, he has to write the stuff below
    // (even more if he uses more stages):
    #set CG_RVSHADER Vertex // Set the return type of the vertex shader
    #set vert CG_VSHADER    // [Opt] Set the name of the vertex shader instead of writing CG_VSHADER
    #set pix CG_PSHADER     // [Opt] Set the name of the pixel shader instead of writing CG_PSHADER

    // This is his stuff.
    // He can do whatever he wants!
    Texture2D T_Diffuse : register(t0);
    Texture2D T_Normal : register(t1);

    // Basic VS -> PS structure.
    // This structure inherits the "base" vertex, stuff that the engine can crunch on
    struct Vertex : CG_VERTEXBASE
    {
        // Empty
    };

    // Now include some routines that are needed at the end of all stages
    #include "cg_material.hlsl"

    // Vertex shader
    Vertex vert(CG_ILAYOUT IN)
    {
        // Zero set vertex
        Vertex o = (Vertex)0;

        // Just let the engine process it.
        // We could do it ourselves, but there's no need
        CG_VSPROCESS(o, IN);

        // Return "encoded" version
        CG_VSRETURN(o);
    }

    // Pixel shader.
    // In this case the return type is FORCED, as it's a deferred setup
    CG_GBUFFER pix(Vertex v, bool IsFrontFace : SV_IsFrontFace)
    {
        // Basic structure containing info about the surface
        Surface surf;

        // Sample color
        float4 diff = CG_TEX(T_Diffuse, v.CG_TEXCOORD);

        // Simple alpha test for vegetation.
        // We want it to clip at .a = 0.5, so add a small offset
        clip(diff.a - 0.5001);

        // Fill out the surface information
        surf.diffuse = diff;
        surf.normal = CG_NORMALMAP( // Do some simple normal mapping
            T_Normal,
            v.CG_TEXCOORD,
            v.CG_NORMAL * ((IsFrontFace) ? 1 : -1), // Flip the normal on the backside for leaves
            v.CG_TANGENT,
            v.CG_BINORMAL
        );
        surf.subsurface = 1;  // I've got a simple version of some sss, but it's not very good yet
        surf.thickness = 0.1; // For the sss
        surf.specular = 0.35;
        surf.anisotropic = 0.2;
        surf.clearcoat = 0;
        surf.metallic = 0;
        surf.roughness = 0.65;
        surf.emission = 0;

        // Return "encoded" version,
        // aka compress the data into the gbuffer!
        CG_PSRETURN(v, surf);
    }
};
In the process of all of this, I'm trying to fit in some SMAA and color correction. The moment I looked into color correction using a LUT, I facepalmed, because how the hell did I not think of that!? (Not in a negative way; it's just so simple, elegant, and pure awesome!) So, after messing around with that and spending 5 hours on a loading problem that turned out to be embarrassingly simple, it returns some kewl results (just messing around):
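For anyone who hasn't seen LUT color correction before, the core of it is tiny: the input color is used as a 3D coordinate into a small RGB lookup table. This is a minimal nearest-neighbour sketch (real implementations, including NVIDIA's sample, sample a 3D texture with trilinear filtering):

```cpp
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

// Hypothetical LUT lookup: 'lut' is a dim*dim*dim table laid out as
// [r + g*dim + b*dim*dim], and the input colour (components in [0,1])
// picks the nearest entry. An identity LUT returns the colour unchanged;
// any grading is baked into the table offline.
Color gradeNearest(const std::vector<Color>& lut, int dim, Color in)
{
    auto idx = [&](float v) {
        int i = (int)std::lround(v * (dim - 1));
        if (i < 0) i = 0;
        if (i > dim - 1) i = dim - 1;
        return i;
    };
    return lut[idx(in.r) + idx(in.g) * dim + idx(in.b) * dim * dim];
}
```

The elegant part is that an artist can grade a screenshot in any image editor, apply the same edits to a neutral LUT strip, and the shader replays the whole grade with one texture fetch.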

So that's more or less it. I'll keep improving my engine, working on stuff and more stuff. I think I'll leave the diffuse GI system where it is for a while, since it works pretty well and produces pretty results. Now I need to work on some specular GI, since I really don't have a robust system for that yet that doesn't produce ugly artifacts.

See you next time people of the awesome GDNet grounds!




New Shading Model and Material System!

PS: No screenshots this time, just me talking!

I haven't really been active lately because of the glorious exams that are nearing me, but it's still nice to know that it's close to over ( At least this round ).

So, as the title says, I've been working on a new shading model that tries to support all of the modern techniques. Two features that I'm really excited about are anisotropic surfaces and subsurface scattering with direct light sources. However, I still have to improve my implementation of the clearcoat shading, as I'm still missing some important ideas about it.

On the other hand, I decided to rewrite my material system, which is what the user writes for his own custom surface shaders (for meshes). Previously I did a ton of string parsing, but honestly it was just unnecessary and didn't give me the freedom I needed. So I went full-on BERSERK MODE with macros. It may not seem like there's much macro work, but there is! So I simply have a file full of macros, and when the user requests to load a material file, it simply pastes his code into the file (well, after a bit of parsing of the material file) and compiles it as a shader.

Example material:

Input
{
    Texture2D g_tNormalMap,
    float3 g_f3Color = (0.7, 0.7, 0.7),
    float g_fSubsurfaceIntensity = 0,
    float g_fAnisotropicIntensity = 0,
    float g_fClearcoatIntensity = 0,
    float g_fMetallicIntensity = 0,
    float g_fSpecularIntensity = 0,
    float g_fRoughness = 0,
};

Shader
{
    #set vert CG_VSHADER
    #set pix CG_PSHADER // I have a deep dark fear of "frag"

    // Basic VS -> PS Structure
    struct Vertex
    {
        // This is a must! In the future I'll allow him to create his entire own structure
        // as not much work is needed for it, but it still simplifies a lot of his work
        CG_VERTEXBASE

        // The user could pass any other variable he wanted here
    };

    Vertex vert(CG_ILAYOUT IN)
    {
        // Zero set vertex
        Vertex o = (Vertex)0;

        // Just let the engine process it, the user may do this on his own
        // but in usual cases he really doesnt want to
        CG_VSPROCESS(o, IN);

        // Return encoded version
        CG_VSRETURN(o);
    }

    CG_GBUFFER pix(Vertex v)
    {
        float3 Normal = CG_NORMALMAP(
            v.CG_NORMAL,
            CG_SAMPLE(g_tNormalMap, v.CG_TEXCOORD)
        );
        // the same can be done for parallax mapping or whatever the user desires

        // Set up the surface properties
        Surface surf;
        surf.diffuse = g_f3Color;
        surf.normal = Normal;
        surf.subsurface = g_fSubsurfaceIntensity;
        surf.specular = g_fSpecularIntensity;
        surf.roughness = g_fRoughness;
        surf.metallic = g_fMetallicIntensity;
        surf.anisotropic = g_fAnisotropicIntensity;
        surf.clearcoat = g_fClearcoatIntensity; // Doesnt work yet!

        // Return encoded version
        CG_PSRETURN(v, surf);
    }
};
And that's about it!
As always, until next time!




Screen Space Reflections ( SSR ) - CONTINUED ( Aka Improvements )

So I've been working on my screen space reflections (SSR) and have been trying to eliminate artifacts. The next step will be to somehow make it more physically based, because currently I just base the strength of the reflection linearly on the roughness (sorta).
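To show what I mean by "linearly" (illustrative values, not my exact code):

```cpp
// Hypothetical fade: reflection strength falls off linearly with roughness,
// clamped to [0,1]. Not physically based; a proper approach would filter
// the reflection by the BRDF lobe instead.
float ssrStrength(float roughness)
{
    float s = 1.0f - roughness;
    return s < 0.f ? 0.f : (s > 1.f ? 1.f : s);
}
```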

Sponza scene (yes, again) (oh, and I decreased the intensity, although it's configurable by the user, as the previous intensity was WAY too high):
PS: Notice the weird line artifact below the arches; I still have to figure out what that is, along with a few other artifacts. And I forgot to disable fog, so the colors are a bit dimmed.

Another testing scene, the direction of the sun is very low on purpose to enhance the reflections. This is the scene WITHOUT SSR:


And, as always, that's it!




Screen Space Reflections ( SSR ) - We must all accept yoshi_lol as our lord and true saviour!

So I finally got a basic implementation of screen space reflections (SSR). Aside from the fact that it's screen space and has some artifacts, it's actually OK. Now, you may wonder why the title is as follows:

"We must all accept yoshi_lol as our lord and true saviour!"

I based my implementation on the article from Casual Effects:

However, I was in trouble, as there were a few conversion problems going from GLSL to HLSL (not the syntax conversion). That's where yoshi_lol came in: he gave me his implementation, and from there I saw how he converted it to D3D HLSL. Thanks, yoshi_lol! So now we must accept him as our true lord and saviour.


Screenshots! (There are many artifacts; it's a very early implementation, so many areas look really messed up!)

And that's about it!

Until next time! :)




What drives you in whatever you like?

This isn't really an update, it's just me rambling about stuff.

Motivation is awesome, and horrifying; at least I think so. It allows you to completely focus on something for days without thinking twice, or it leaves you staring at the screen thinking, "What am I doing with my life...". So how does one find this "motivation"? Well, I frankly have no general answer, but I do know what drives me.

I love graphics, I love messing with new techniques, but what drives me is nature. For example, a picture I took this morning on my way to school:

For some this may mean nothing, but for me this means everything. Even though we may not be there yet, the day we finally get interactive graphics at this level is the day where... I'm not sure, it's just going to be a good day :).

The following is also something awesome:

So, well, that's it. If anyone reached the bottom, I'd like to ask you:

[indent=1]What drives you?

Thanks guys.




Trees! - And that's about it

This isn't really a new big update, just me talking a bit and showing some pictures.

Well, as the title suggests, I wanted to play with trees to see how my shading model handles vegetation together with GI. Aside from the fact that I've disabled all frustum culling, the performance is not too bad. However, there's still LOTS of work to do in the shading model for surfaces where light is guaranteed to pass through, so the images might look a bit weird...

There are also a few problems with my volumetric lighting. Currently I find the vector between the camera's world space position and the world space position of the pixel, but if the ray is TOO long, then what? I know there's some really nice research published by Intel that describes large scale outdoor volumetric lighting, but I'm not going to dive into that right now, as it's a lot of work.

So, as people want to see pictures, I give you pictures!

Now, for the fun of it, why not render 6000 trees!

Now, as always, until next time!




Color Grading! - Yay

Another entry!

So, something I never had a basic implementation of was color grading, so today I decided to put together a rough one. There's still lots to work on; it's based on NVIDIA's post complement sample (http://developer.download.nvidia.com/shaderlibrary/webpages/shader_library.html).

Color Grading DISABLED vs ENABLED:

And that's all, until next time! And enough about the damn white/gold/purple/brown/etc... dress!





Volumetric Lighting!

So one topic we all hear about over and over is VOLUMETRIC LIGHTING (caps intended). Why? Because it's so damn awesome. Why not? Because it can get expensive depending on the hardware. So after countless tries, I scrapped the code I'd been wanting to shoot, then resurrect, then shoot again, and just wrote what made sense. And it worked! :)

The implementation is actually really simple; in simple terms I did it like this (I haven't optimized it yet, e.g. I should do it all in light view space):

// Number of raymarches
steps = 50

// Get world space position
positionWS = GetPosition();

// Get world space position of the pixel
rayWS = GetWorldSpacePixelPos();

// Get ray between world space position and pixel world space pos
v = positionWS - rayWS;
vStep = v / steps;

color = 0,0,0
for i = 0 to steps
    rayWS += vStep;

    // Calculate view and proj space rayWS
    rayWSVS = ...
    rayWSPS = ...

    // Does this position receive light?
    occlusion = GetShadowOcclusion(..., rayWSPS);

    // Do some fancy math about energy
    energy = ... * occlusion * ...

    color += energy.xxx;

return color * gLightColor;
Results: (it's not done yet)


That's all! Until next time! :)




The beginning of particle simulation

Last Entry: https://www.gamedev.net/blog/1882/entry-2260844-got-new-particle-rendering-up-and-running-simulation-next/

So I've got the basic backbone of the simulation system up and running. The simulation happens in a compute shader, and everything just works out, which is great! To test it out, I put two point masses with low intensity a bit apart from each other, and this was the result.
The next step will be to stretch the particles based on velocity for fake motion blur, and then to allow the particles to collide with the objects around them.
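The per-particle update can be sketched like this on the CPU (my reconstruction of the idea, not the actual compute kernel):

```cpp
#include <cmath>
#include <vector>

struct Particle  { float x, y, z, vx, vy, vz; };
struct PointMass { float x, y, z, intensity; };

// Hypothetical CPU analogue of the compute shader: each particle accumulates
// inverse-square attraction toward every point mass, then integrates with a
// simple Euler step.
void simulate(std::vector<Particle>& ps, const std::vector<PointMass>& masses, float dt)
{
    for (auto& p : ps) {
        float ax = 0.f, ay = 0.f, az = 0.f;
        for (const auto& m : masses) {
            float dx = m.x - p.x, dy = m.y - p.y, dz = m.z - p.z;
            float d2 = dx * dx + dy * dy + dz * dz + 1e-4f; // avoid div by zero
            float f  = m.intensity / (d2 * std::sqrt(d2));  // 1/d^2, normalized dir
            ax += dx * f; ay += dy * f; az += dz * f;
        }
        p.vx += ax * dt;  p.vy += ay * dt;  p.vz += az * dt;
        p.x  += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt;
    }
}
```

On the GPU, each iteration of the outer loop maps naturally onto one compute thread, which is what makes this embarrassingly parallel.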


Until next time!




Beginning on terrain rendering

Terrains are awesome, so therefore I'm trying to mess around with them.

I didn't really need to change anything in my engine, because I really don't see any reason to separate a terrain from a mesh like many engines do. So, using World Machine to generate a small section of a terrain, I imported the mesh file and heightmap to generate and displace the terrain. In the process I saw some really weird SSAO errors, which I still don't understand:

The voxelization for GI:

The diffuse output which is just a really simple shader that lerps between height and normals:

But in the process I saw this nightmare when voxelizing. Not good.


The shader, if anyone is interested. Really simple.

// A simple shader file that the engine parses
shader "Simple Terrain"
{
    Properties()
    {
        info = "A simple terrain shader that lerps between 4 textures";
    }

    // Considered to be global
    input()
    {
        Texture2D tgrass;
        Texture2D trock;
        Texture2D tsnow;
        Texture2D tdarkdirt;
    }

    pass(cull = true;)
    {
        pixel()
        {
            float2 tex = input.positionWS.xz * 2.5f;
            float3 rock = trock.Sample(ss, tex);
            float3 grass = tgrass.Sample(ss, tex);
            float3 snow = tsnow.Sample(ss, tex);
            float3 dirt = tdarkdirt.Sample(ss, tex);

            float NormalLerp = saturate( lerp( 0.0f, 1.0f, 1 - dot( input.normalWS, float3( 0.0, 1.3, 0.0 ) ) ) );
            float3 fvColor = lerp(
                lerp(grass, dirt, NormalLerp),
                lerp(snow, rock, NormalLerp),
                saturate(input.positionWS.y / 20.0f - 0.5));

            output.dffXYZTrA.xyz = fvColor; // Yeah yeah its hackyish...

            // Sets the specular level to 0.2f
            SetSpecular((0.2f).xxx);
        }
    }
}
That's it, just a bit of progress!




New Editor - 2 Days Progress!

So, we all hate manually adjusting positions and such. "Ooh, it should be 0.1 to the right; oh well, better recompile... Hmm, no, actually 0.05," and so on... So I decided I wanted to make an editor, or at least start one.

Now I had a big problem: my project was built with the /MT option, mainly because the normally distributed PhysX libraries are built with /MT too. But Qt is built with /MD, and if I recompile Qt with /MT there are certain bugs, such as the designer not working (they also document this). So, what to do!? Then theflamingskunk (a user here) came to the rescue and said: "But wait! There is a PhysX /MD build, here!" I mean, come on, what are the exact chances of that happening? First time I ever see this guy in chat (I'd been off chat for a while, though), and he has the perfect solution. Awesome!

So, after building my project with /MD and reconfiguring (takes time, so many linker errors! Argh!), I finally linked successfully against Qt. However, I can't build with /MDd for some reason, not sure why... So no Qt debug mode, but that hasn't stopped me, hehe. Screenshots!
The main editor, where you place stuff and stuff. It currently supports Assets (a preconfigured mesh) and directional lights.

Now the asset editor works pretty well, in my opinion. The first tab holds the base per-mesh options, such as the material and so on. However, they can change with textures and such.

The second tab is the resource tab. It allows for integrated types, such as normal, specular and displacement maps, bla bla bla. It currently also allows for custom textures, with more to come though:

So with this it's easy to add new assets to the scene: go into the asset list, double click an asset and it spawns, then edit it (the asset list is on the far left middle tab):

Now I also made some changes to my voxelization pass to allow wandering around without going outside the bounds of the voxel grid, so I move the voxel grid along. However, I had loads of artifacts which were HORRIBLE, so actually moving the grid was not an option. Instead I render the voxelization from (0,0,0), and as culling and all of that is disabled, I simply check the position in the pixel shader and "assume" that the voxel grid is there. No artifacts, nothing. The VERY red part (not just red) is outside of the voxel grid:
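A rough CPU-side sketch of that bounds test (the real one lives in the pixel shader; names like gridCenter and cellCount are made up for illustration, not the engine's):

```cpp
#include <cmath>

struct Float3 { float x, y, z; };

// Sketch of the "assume the grid is there" trick: the scene is voxelized as if
// the grid sat at the origin, and we only test whether a world-space position
// falls inside the box the grid is assumed to cover around the viewer.
bool InsideVoxelGrid(Float3 posWS, Float3 gridCenter, float cellSize, int cellCount)
{
    // Half of the grid's extent in world units
    const float halfExtent = 0.5f * cellSize * static_cast<float>(cellCount);

    return std::fabs(posWS.x - gridCenter.x) <= halfExtent &&
           std::fabs(posWS.y - gridCenter.y) <= halfExtent &&
           std::fabs(posWS.z - gridCenter.z) <= halfExtent;
}
```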

Funny thing: I ended up calling the class that handles the asset configuration AssConfig, because I could.

That's all folks! Just a summary of the early stages of the editor, so it's still pretty rough...





Don't ever use EZ Update for BIOS, just don't...

So I was an idiot and used ASUS EZ Update to update a few things, and by mistake selected a new BIOS. The problem is that EZ Update isn't really good for BIOS updates. Right, so I didn't cancel it in time and it just got stuck at 99%. Now I was really afraid of cancelling the update, because, you know, I might brick my BIOS. After staying at 99% for like half an hour, I just restarted and crossed my fingers. Luckily it actually somehow updated the BIOS without any errors.

Just... don't use EZ Update for BIOS stuff... It's a nightmare...
That's all!




Don't let Hieroglyph 3's scene grow!

I finally got my new PC up and running; she's working smoothly and quietly with no issues yet. After reinstalling everything, I had some fun with the scene that comes with the Hieroglyph 3 framework, so I added a bunch of plants and grass. This was the result:

Just ignore the red dot and the complete black background.

And that's all, nothing more, just a bit of fun!




Minor updates

So I'm just going to show off some images of progression. Actually it's just a minor update, as not much has changed; however, my DOF is much better. For some reason SSAO is bugged, so it's turned off. Also, the Sponza material file is bugged, so I made a few changes.
- I had a sign error in the DOF testing; now it's fixed, and the DOF is based on the "Skylanders" DOF.
- The shadowing has also improved, using better filtering.
- The engine now has better support for specular maps.
- Displacement is working, although it's a work in progress.
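Since the DOF bug was literally one sign, here's a minimal sketch of the kind of signed circle-of-confusion term where a sign error bites (parameter names are hypothetical, this is not the engine's code): negative in front of the focus plane, positive behind it, so a flipped sign blurs the wrong half of the scene.

```cpp
#include <algorithm>

// Signed circle-of-confusion sketch: negative in front of the focus plane,
// positive behind it. Getting the sign wrong swaps near- and far-field blur.
// Parameter names are made up for illustration.
float SignedCoc(float depth, float focusDepth, float focusRange)
{
    return std::clamp((depth - focusDepth) / focusRange, -1.0f, 1.0f);
}
```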
I'm planning on making the GI work on a large scale, using a cascaded approach.
EDIT: Forgot to disable debug mode, I promise that the fps is higher!
GI: ( Really bright because light is shining directly on the cloth )


Until next time!




To rewrite, or not to rewrite?

When I first started doing "semi serious" graphics programming, I called it Cuboid Engine, because it sounded cool and my first accomplishment was, well, a cube! So throughout development I was basically just putting lots of things together, and my only concern was:

Does it look pretty, and realistic!?

And if it didn't, it wasn't worth it. So I eventually reached a point where I actually had a graphics engine, which wasn't really a game engine, but it was a decent graphics engine with some performance issues. After a while the code base was pretty messy and there were some naughty hacks that I wasn't proud of. I didn't really have the patience to move things around and clean up, because it would simply be too much work. So, what could I do?

Rewrite it!

I'm pretty scared of that word, because it means a lot of work. Rewriting is a lot of work, and when I created a new project called "Cuboid Engine 2" I was pretty happy, because of the 2. But then the blank screen with the god damn int main()... It's a weird feeling. My codebase is nowhere near the size of some of your guys' work, but it's all I can do.

So I began thinking back to the previous troubles I had: always writing the same Direct3D code every single time! So I needed a wrapper, that's what people call it, right? So I made my very own cute "CEDX11" class that takes care of all the naughty work. What's the advantage of this?
No need to always cuddle with DirectX (even though it's a nice SDK)
No need to repeat code
Easy to modify
Easy to upgrade! (DX12, if I ever get my hands on it!)

But I'm just a random internet guy who's rambling about my "easy" problems, because this is candy compared to some stuff!

After a while I had written another graphics engine with some game elements, and a much better engine! It can (today!) do some things that I'm, well, a bit proud of: Physically Based Rendering (BRDF), Light Propagation Volumes, Voxel Cone Tracing (almost done) and other things, with loads of post processing effects (luminance adaptation, bloom, SSAO, bla bla bla). But is it a game engine? Heck no!

It's far from a game engine! Sure, you can script in it, run the script and get movable things and skeletal animations, or skip the script and talk directly to the engine. But when I say a game engine, I'm talking about an engine that allows the user to create a game, not a benchmark. Sure, in my small engine it's easy to create a benchmark! And it's pretty flexible in my opinion. For example, I worked hard on custom coded materials that behave the way the user wants without breaking out of deferred rendering (which allows the user to work with all the shader stages available). But a game? No.

The architecture of my engine is not designed to be a game engine, at all. So actually this does seem like the first case again? However, the code base isn't messy, it's just not designed for this "new" purpose. So what do I do? Well, what can I do:

Rewrite:
+ Cleaner and better result
+ Can just use the same CEDX11 wrapper again (it's actually a pretty solid thing, does its job well)
- The effort! + time!

Move things around and clean up:
+ Less time and effort!
- The result won't be as good as rewriting

So, well, that was it. These are my current thoughts, and sometimes it's just nice to write them down, even though it's actually a pretty small thing.

Until next time!




Found some awesome models - With gifs!

So I've always had a hard time finding any "good" foliage models online. Searching a bit more I found TF3DM (it's most likely not completely legal, but it's good for testing), and found some awesome tree models.

Result? Eye candy!

Link if anyone is interested:

Until next time!




Global Illumination - Emission - With Gifs!

So after a long battle with an invisible enemy, I defeated the evil bug, with some awesome help from the forums (kudos to bact and the GD chat!):

So below there's a gif that shows a mantis (that's what they're called, right?) floating. The mantis emits a color without any light hitting it, as seen at the bottom of the grass.

This doesn't seem very difficult, but the problem was that over the past few days I was trying to figure out a bug, which turned out to be a blending issue. Anyway, the result!

I made it look like eye candy (for me at least), so the tone mapping is slightly... wrong.
The background changes a bit because of luminance adaptation; the object is seriously bright, emission is at max.

Until next time, fellas!


Yes yes, feed my children, feed! Feed! FEEED!




First try at DOF, it's something!

So, I never actually went into depth of field, but it's an important effect that I always wanted to integrate, yet was way too lazy to do it. So I manned up and did it, failing a couple of times along the way. The actual effect is still not very good, but it's getting there. And as always, the only reason you're really here is pics, so:

Until next time!




Better Light Propagation Volumes

In my last entry I described how the interpolation was poor, so I decided to "smooth" it out. I used a compute shader to "propagate" each cell's lighting over the neighbouring cells; this was done in 10 passes. Below is a comparison between the new and old results; the new ones are much better!
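For the curious, the gather can be sketched on the CPU like this. One float per cell stands in for the real SH data, and the 0.15 weight is just a number picked for the sketch, not what the engine uses:

```cpp
#include <vector>

// One smoothing pass: every cell gathers a fraction of its 6 axis-aligned
// neighbours' intensity. In the engine this runs in a compute shader over
// SH coefficients; here a single float per cell stands in for the real data.
std::vector<float> PropagatePass(const std::vector<float>& grid, int dim)
{
    std::vector<float> out = grid;
    auto idx = [dim](int x, int y, int z) { return (z * dim + y) * dim + x; };

    for (int z = 0; z < dim; ++z)
        for (int y = 0; y < dim; ++y)
            for (int x = 0; x < dim; ++x)
            {
                float gathered = 0.0f;
                if (x > 0)       gathered += grid[idx(x - 1, y, z)];
                if (x < dim - 1) gathered += grid[idx(x + 1, y, z)];
                if (y > 0)       gathered += grid[idx(x, y - 1, z)];
                if (y < dim - 1) gathered += grid[idx(x, y + 1, z)];
                if (z > 0)       gathered += grid[idx(x, y, z - 1)];
                if (z < dim - 1) gathered += grid[idx(x, y, z + 1)];
                out[idx(x, y, z)] += 0.15f * gathered; // arbitrary weight for the sketch
            }
    return out;
}
```

Running it 10 times in a row is the 10-pass smoothing described above.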

New Results:

Last Results:

Personally I find these results much better than before.

Until next time!




LPV: Really, after all that time, that's it!?

For a long while I've known there's been a problem when testing for shadow visibility with my voxels during the injection of voxel lighting. But as usual I got too lazy and left it. So I got along with the problem, I called it Steve, and I created lots of hackish solutions to mimic the "correct" shadow testing.

But after a while I got tired of Steve, so I double checked the light injection pass, and noticed that the light intensity calculation was acting "weirdly". So, what was the problem? THIS:

... = VoxelGetNormal(nDotL, voxel.maskNormal, -lDirection);

What could possibly be wrong with Steve here? Well, it turns out that Steve was a challenged kid:

... = VoxelGetNormal(nDotL, voxel.maskNormal, lDirection); // Voila
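I won't pretend to know what VoxelGetNormal does internally, but the gist of a one-character sign bug like this is easy to show with a plain N·L term (everything below is a made-up sketch, not the engine's code):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// N.L with a light direction that points from the light towards the surface:
// the normal has to be dotted against the *negated* direction. Feed it the
// wrong sign and surfaces facing the light go dark while back faces light up.
float NDotL(Vec3 normal, Vec3 lightToSurfaceDir)
{
    Vec3 toLight { -lightToSurfaceDir.x, -lightToSurfaceDir.y, -lightToSurfaceDir.z };
    return std::max(0.0f, Dot(normal, toLight));
}
```

Which sign a given helper expects is pure convention, which is exactly why a Steve can survive for so long.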
More correct output:

Currently I'm testing the shadow maps on the voxels with a PCF filter, but the interpolation is still jagged (like cubes), because, well, they're voxels. So I might look into how to interpolate smoothly; I'm just afraid of sampling multiple nearby voxels (performance).
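If the performance turns out to be acceptable, one straightforward option is plain trilinear filtering over the 8 surrounding voxels, the same blend a linear sampler on a 3D texture gives for free. A CPU sketch, assuming a dim³ grid of single floats and coordinates inside [0, dim-1]:

```cpp
#include <algorithm>
#include <vector>

// Trilinear blend of the 8 voxels around (x, y, z); this smooths the
// cube-shaped look at the cost of up to 8 fetches per sample.
float SampleTrilinear(const std::vector<float>& grid, int dim, float x, float y, float z)
{
    auto at = [&](int ix, int iy, int iz) { return grid[(iz * dim + iy) * dim + ix]; };

    const int x0 = static_cast<int>(x), y0 = static_cast<int>(y), z0 = static_cast<int>(z);
    const int x1 = std::min(x0 + 1, dim - 1);
    const int y1 = std::min(y0 + 1, dim - 1);
    const int z1 = std::min(z0 + 1, dim - 1);
    const float fx = x - x0, fy = y - y0, fz = z - z0;

    // Lerp along x, then y, then z
    const float c00 = at(x0, y0, z0) * (1 - fx) + at(x1, y0, z0) * fx;
    const float c10 = at(x0, y1, z0) * (1 - fx) + at(x1, y1, z0) * fx;
    const float c01 = at(x0, y0, z1) * (1 - fx) + at(x1, y0, z1) * fx;
    const float c11 = at(x0, y1, z1) * (1 - fx) + at(x1, y1, z1) * fx;

    const float c0 = c00 * (1 - fy) + c10 * fy;
    const float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}
```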

I'm not a pro at MS Paint yet, so the interpolation isn't really... correct.

The problem will be called Jack, for some reason.

Until next time!




Global Illumination Progress Update

Global illumination by Light Propagation Volumes is progressing nicely. Even though this is, in a weird way, an artifact (the surface is too thin for the grid to capture the two sides separately), it still produces a nice result. It looks nice anyway.

But not all is good; there are still weird artifacts:

Sometimes the shadow testing fails when injecting lighting into the grid. I guess this has to do with the resolution, but I'm still not sure; I think there's more to it. Example: the shadow testing should have succeeded, but no indirect illumination is present.

Light Bleeding:

Until next time!



