
About this blog

Any interesting changes or discoveries

Entries in this blog

 

Don't ever use EZ Update for BIOS, just don't...

So I was an idiot and used ASUS EZ Update to update a few things, and by mistake selected a new BIOS. Now, the problem is that EZ Update isn't really good for BIOS updates. Right, so I didn't cancel it in time and it just got stuck at 99%. I was really afraid of cancelling the update, because, you know, I might brick my BIOS. After about half an hour of it sitting at 99%, I just restarted and crossed my fingers. Luckily it actually somehow updated the BIOS without any errors.

Just... don't use EZ Update for BIOS stuff... It's a nightmare...
That's all!

Migi0027


 

Beginning on terrain rendering

Terrains are awesome, so I'm trying to mess around with them.

I didn't really need to change anything in my engine, because I really don't see any reason to separate a mesh from a terrain like many engines do. So, using World Machine to generate a small section of a terrain, I imported the mesh file and heightmap to generate and displace the terrain. In the process I saw some really weird SSAO errors, which I still don't understand:
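The displacement step itself is simple enough to sketch on the CPU. This is just an illustrative sketch, not the engine's code; the names and the row-major heightmap layout are assumptions:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: displace a flat grid mesh by a heightmap.
// 'heights' is assumed to be a row-major grid of normalized heights in [0, 1].
struct Vertex { float x, y, z; };

void DisplaceByHeightmap(std::vector<Vertex>& verts,
                         const std::vector<float>& heights,
                         std::size_t mapWidth, std::size_t mapHeight,
                         float terrainSize, float maxElevation)
{
    for (Vertex& v : verts)
    {
        // Map the vertex XZ position into heightmap texel space.
        std::size_t tx = static_cast<std::size_t>((v.x / terrainSize) * (mapWidth - 1));
        std::size_t tz = static_cast<std::size_t>((v.z / terrainSize) * (mapHeight - 1));

        // Displace along Y by the sampled height.
        v.y = heights[tz * mapWidth + tx] * maxElevation;
    }
}
```

In the engine this would of course happen in the vertex shader against the heightmap texture; the CPU version just shows the addressing.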



The voxelization for GI:


The diffuse output, which is just a really simple shader that lerps between textures based on height and normal:


But in the process I saw this nightmare when voxelizing. Not good.

The shader, if anyone is interested. Really simple:

    // A simple shader file that the engine parses
    shader "Simple Terrain"
    {
        Properties()
        {
            info = "A simple terrain shader that lerps between 4 textures";
        }

        // Considered to be global
        input()
        {
            Texture2D tgrass;
            Texture2D trock;
            Texture2D tsnow;
            Texture2D tdarkdirt;
        }

        pass(cull = true;)
        {
            pixel()
            {
                float2 tex = input.positionWS.xz * 2.5f;

                float3 rock  = trock.Sample(ss, tex);
                float3 grass = tgrass.Sample(ss, tex);
                float3 snow  = tsnow.Sample(ss, tex);
                float3 dirt  = tdarkdirt.Sample(ss, tex);

                float NormalLerp = saturate( lerp( 0.0f, 1.0f, 1 - dot( input.normalWS, float3( 0.0, 1.3, 0.0 ) ) ) );

                float3 fvColor = lerp(
                    lerp(grass, dirt, NormalLerp),
                    lerp(snow, rock, NormalLerp),
                    saturate(input.positionWS.y / 20.0f - 0.5));

                output.dffXYZTrA.xyz = fvColor; // Yeah yeah its hackyish...

                // Sets the specular level to 0.2f
                SetSpecular((0.2f).xxx);
            }
        }
    }
That's it, just a bit of progress!

Migi0027


 

Screen Space Reflections ( SSR ) - We must all accept yoshi_lol as our lord and true saviour!

So I finally got a basic implementation of Screen Space Reflections ( SSR ). Aside from the fact that it's screen space, and some artifacts, it's actually OK. Now, you may wonder why the title reads:

"We must all accept yoshi_lol as our lord and true saviour!"

I based my implementation on the article from Casual Effects:
http://casual-effects.blogspot.dk/2014/08/screen-space-ray-tracing.html

However, I was in trouble, as there were a few problems converting from GLSL to HLSL, beyond the syntax conversion. That's where yoshi_lol came in: he gave me his implementation, and from it I saw how he converted it to D3D HLSL. Thanks yoshi_lol! So now we must accept him as our true lord and saviour.

Anyway...

Screenshots! ( There are many artifacts, it's a very early implementation, so there are many areas that look really messed up! )




And that's about it!

Until next time! :)

Migi0027


 

Cascaded Light Propagation Volumes, VS RC 2015, Retarded Calculators + More stuff

Well, let's begin shall we! ( This article isn't very focused, it's just small notes and such )

======
For a while I've been thinking about working on cascaded light propagation volumes, so I finally did. For now I just have a 64 ( detailed ) and a 32 ( less detailed ) grid that are filled using the voxel caches. Although I haven't worked on the energy ratio yet ( my solution is hacky ), I like the result.

(Images scaled to fit, originally rendered at 1920x1080. The whitish color is because I've got some simple volumetric lighting going on, although it doesn't respond to the LPV yet.) (PS: Still lots of work to do, so there are issues + light bleeding.) (And there are no textures on the trees, for... reasons and stuff.)


I've also worked on my BRDF shading model, which is based on Disney's solution, and integrated it into the LPV system ( although it's a simplified version, as we don't need all the detail and some computations are meaningless in this context ). And I really think it makes the indirect colors feel more part of the scene.

A poor quality gif showing how the light propagates through the scene:


======
On a completely different note: as I'm rewriting the engine, I felt like upgrading to the RC version of VS 2015 ( and dear god, I recommend it to anyone ). So I needed to recompile lots of libraries, such as SFML ( + most dependencies ), AntTweakBar, plus small stuff. Now, the AntTweakBar case was special, as it really only supports SFML 1.6. It contains a minified version of the SFML 1.6 events that it then uses, but when the memory layout changed in SFML 2.3, it all fell apart (sorry). So I had to change some of the minified internal version of SFML to make it work. For anyone interested, here is the modified part of the minified SFML ( it's hackyish, mostly c&p from the SFML sources, so there are most likely errors and such, but for now it does the job ):

    namespace sf { namespace Key { enum Code { Unknown = -1, ///
On top of that, the performance of my engine in VS 2015 strangely improved by a few milliseconds, which really surprised me; I'm not completely sure what caused it. And in VS 2013 I had a strangely huge overhead when starting my application inside VS, which made file IO incredibly slow; in VS 2015 this issue is gone, along with the huge waiting time ( 20 seconds to a minute... ).

I finally got to redesign my gbuffer, and while there's lots of work to be done, it all fits nicely. The general structure:

    2 channel: x = Depth, y = Packed(metallicness, anisotropicness)
    4 channel: xy = Normal, z = Packed(subsurface, thickness), w = Packed(specular, roughness)
    4 channel: xyz = Diffuse, w = Packed(clear_coat, emission)
The tangent is then reconstructed later; it's pretty cheap and works fine for my needs. Now all the user has to do is call GBuffer_Retrieve(...) from their shaders, and all the data is decompressed for them to use. The final data container looks somewhat like the following:

    struct GBufferData
    {
        float3 Diffuse;
        float3 PositionVS;
        float3 TangentVS;
        float3 NormalVS;
        float3 Position;
        float3 Normal;
        float3 Tangent;
        float  SpecPower;
        float  Roughness;
        float  Metallic;
        float  Emission;
        float  ClearCoat;
        float  Anisotropic;
        float  SubSurface;
        float  Thickness;
    };
Now, you might say, "But what if I don't want to use it all? Huge overhead!", which is true, but: compilers! The cute little compiler will optimize out any computations that aren't needed, so if you don't need a certain element decompressed, it won't be (yay)! So all of that fits together nicely.
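The entry doesn't show the engine's actual Packed(a, b) routine, but a typical way to store two [0,1] parameters in one channel is to quantize each to 8 bits and place them side by side. A hypothetical CPU-side sketch of that idea:

```cpp
#include <cmath>

// Illustrative sketch, not the engine's actual packing: quantize two [0,1]
// parameters to 8 bits each and store them side by side in a single float
// channel (the combined value fits exactly in a float, max 65535).
float Pack(float a, float b)
{
    float ai = std::floor(a * 255.0f + 0.5f);
    float bi = std::floor(b * 255.0f + 0.5f);
    return ai * 256.0f + bi;
}

void Unpack(float packed, float& a, float& b)
{
    float ai = std::floor(packed / 256.0f);
    float bi = packed - ai * 256.0f;
    a = ai / 255.0f;
    b = bi / 255.0f;
}
```

The quantization is lossy (each parameter keeps 8 bits of precision), which is usually fine for material parameters like roughness or thickness.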

But at the same time I think I've got a performance issue in the gbuffer-filling stage, as it's huge compared to everything else. Perhaps it's the compression of the gbuffer; I'm not sure yet.

But it's acceptable for now, and I think I can still squeeze some cute little milliseconds out of it.

On a side note I've also been trying to work on some basic voxel cone tracing but it's far from done. And I seriously underestimated the performance issues, but it's pretty fun.

======
Now, due to family related issues, I had to take my brother to our beach house ( nothing fancy ), and there I allocated some time to work on my retarded calculator! It's a small application based on a very basic neural network. I didn't have time to work on bias nodes or even an activation function, so for now the output of a neuron is simply weight * data, although it actually produces acceptable results. The network is composed of 4 layers:
10 Neurons
7 Neurons
5 Neurons
1 Neuron

Again, this was just for fun; I didn't even adapt the learning rate during back propagation, it was just to fill out a bit of time. The output from the application:

    Starting training of neural network
    Train iteration complete, error 0.327538
    Train iteration complete, error 0.294999
    Train iteration complete, error 0.266
    Train iteration complete, error 0.240112
    Train iteration complete, error 0.216965
    Train iteration complete, error 0.196237
    Train iteration complete, error 0.177651
    Train iteration complete, error 0.160962
    Train iteration complete, error 0.145959
    Train iteration complete, error 0.132454
    Train iteration complete, error 0.120285
    ......... a few milliseconds later
    Training completed, error falls within threshold of 1e-06!
    ===============================
    Final testing stage
    Feeding forward the neural network
    Final averaged testing error: 0.0178298
    ===============================
    Please enter a command...
    >> f var(a0)
    Input:
      #0 -> 2
      #1 -> 4
      #2 -> 3
      #3 -> 1
      #4 -> 4
      #5 -> 5
      #6 -> 2
      #7 -> 3
      #8 -> 4
      #9 -> 1
    Feeding forward the neural network
    Layer Dump:
      #0 = 29.346
    >> e var(a0) algo({sum(I)})
    Evaluating error: (a0)
    Error: 0.345961
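With the neuron output being just weight * data, the whole feed-forward pass collapses into a few loops. A minimal sketch of such a purely linear network (hypothetical names, no bias or activation, as described above):

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of the linear network described above: every neuron just
// sums weight * input, with no bias node and no activation function.
// A Layer is a list of neurons, each holding one weight per input.
using Layer = std::vector<std::vector<float>>;

std::vector<float> FeedForward(const std::vector<Layer>& layers,
                               std::vector<float> activations)
{
    for (const Layer& layer : layers)
    {
        std::vector<float> next(layer.size(), 0.0f);
        for (std::size_t n = 0; n < layer.size(); ++n)
            for (std::size_t i = 0; i < activations.size(); ++i)
                next[n] += layer[n][i] * activations[i];
        activations = next; // output of this layer feeds the next
    }
    return activations;
}
```

Since every layer is linear, the whole network is equivalent to one matrix product, which is exactly why it can only learn linear functions like the sum above.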
======
So, overall, I'm pretty happy with it all. But I haven't been able to allocate enough time ( you know, life and stuff, school or whatever everybody suddenly expects of you ). But if anybody is reading this, could you comment on the colors of the images? Do you find them natural or cartoony? I find them a bit cartoony. Well, thanks for even reaching the bottom!

-MIGI0027


 

Screen Space Reflections ( SSR ) - CONTINUED ( Aka Improvements )

So I've been working on my screen space reflections (SSR) and have been trying to eliminate artifacts. The next step will be to somehow make it more physically based, because currently I just base the strength of the reflection linearly on the roughness ( sorta ).

Sponza scene ( yes, again ) ( oh, and I decreased the intensity, which is configurable by the user, as the previous intensity was WAY too high ):
PS: Notice the weird line artifact below the arches; I still have to figure out what that is, along with a few other artifacts. And I forgot to disable fog, so the colors are a bit dimmed down.



Another testing scene; the direction of the sun is very low on purpose to enhance the reflections. This is the scene WITHOUT SSR:

Then, WITH SSR:
And, as always, that's it!

Migi0027


 

Less bullshit... More code!

The word "bullshit" is probably an exaggeration, although I honestly dislike exams like everyone else does ( at least to my understanding ). What I'm talking about is the glorious exams: not university exams, just regular high school exams. But that part is over now, and I'm more or less satisfied with the result. And now that it's over, I've got lots of time to code and stuff...

Just since the beginning of the exams until now, I've been rewriting my engine, and I've got pretty much everything implemented. Performance was the main goal: the previous version of my engine ran at ~100 fps at half the screen resolution, while the newly written engine runs at ~120 fps at full HD resolution ( i.e. 1920 x 1080 ). This is a big thing, because I've got a lot of post processing effects that really torture the GPU bandwidth, such as volumetric scattering, so the resolution seriously affects the frame time. On top of that, the architecture of the new system is seriously better, together with the new material system that I wrote about in my last entry ( it has been upgraded a bit since then ).

There are still lots of small optimizations to do, and still lots of unfinished "new" features. But one of the main things I changed is the way my voxelization system works. Every time a new mesh/object/whatever-people-call-it is added to the scene, the mesh is voxelized without any transformation applied, into a "cache" buffer, and this cache buffer is added to a list of sorts. Then there's the main cache buffer that represents the final voxel structure around the camera. Each frame ( through a compute shader ) all voxel caches are iterated; each cell of each cache is first transformed by the current transformation matrix of the mesh ( as each cache represents a mesh without any transformations ) and then fitted inside the main voxel cache ( with some magic stuff that speeds this up ). The awesome thing about this is that whenever the camera moves, or the mesh is moved, scaled or even rotated, there's no need to revoxelize the mesh at all ( less frame time, yay ).
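The re-fitting step can be sketched on the CPU; the engine does it in a compute shader, and everything here, including the helper name, is a hypothetical illustration. After a cached cell has been transformed to world space by the mesh's matrix, it just needs to be binned into the camera-centred main grid:

```cpp
#include <cmath>

// CPU-side sketch of the re-fit idea. A cached cell position is transformed
// to world space elsewhere; this helper bins the resulting world position
// into the camera-centred main grid.
struct Float3 { float x, y, z; };

// Returns a flat cell index into the main grid, or -1 if the position
// falls outside of it (such cells are simply skipped).
int FitIntoMainGrid(Float3 worldPos, Float3 gridOrigin, float cellSize, int gridDim)
{
    int cx = static_cast<int>(std::floor((worldPos.x - gridOrigin.x) / cellSize));
    int cy = static_cast<int>(std::floor((worldPos.y - gridOrigin.y) / cellSize));
    int cz = static_cast<int>(std::floor((worldPos.z - gridOrigin.z) / cellSize));

    if (cx < 0 || cy < 0 || cz < 0 || cx >= gridDim || cy >= gridDim || cz >= gridDim)
        return -1; // outside the main voxel cache

    return (cz * gridDim + cy) * gridDim + cx;
}
```

Because only this cheap transform-and-bin runs per frame, moving, scaling or rotating a mesh never triggers a revoxelization, which is the whole point of the cache scheme.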

Although I chose to disable screen space reflections, as IMHO there were too many artifacts that were too noticeable. So in the meantime I have a secret next-gen way to perform pixel-perfect indirect specular reflections ( I WISH ).

Currently, all effects combined minus the SSR, showing off volume shadows. Nothing fancy.

Oversaturated example of the diffuse GI:

Dynamic filling of the voxel cache, oriented around the player:

So while playing around, I found this "mega nature pack" in the Unreal Engine 4 marketplace. So I purchased the package and started messing around, just programmer art. Now, all this shows is that I have some serious work to do with my shading model, and I need to invest some time into some cheap subsurface scattering... Btw, in the image below the normals are messed up, so the lighting appears weird in some spots. And the volumetric scattering is disabled, since it also desaturates the image a bit ( for valid reasons ).

So I tried messing around with the normals and used SV_IsFrontFace to determine the direction of the normal on the leaves, and got something like this: ( volumetric scattering disabled ) ( btw, quality is lost due to gifs! I love dem gifs ) ( ignore the red monkey )
The following is the shader used for the tree, written by the user: ( heavily commented )

    Shader
    {
        // Include the CG Data Layouts
        #include "cg_layout.hlsl"

        // Define the return types of the shader.
        // This stage is really important to allow the parser
        // to create the geometry shader for voxelization,
        // and also if the user has created his own geometry shader, so that it can figure
        // out a way to voxelize the mesh properly. In this way the user
        // can use ALL stages of the pipeline (VS, HS, DS, GS, PS) without
        // voxelization becoming impossible.
        // The only problem is, well, he has to write the stuff below: ( even more if he used more stages )
        #set CG_RVSHADER Vertex // Set the return type of vertex shader
        #set vert CG_VSHADER    // [Opt] Set the name of the vertex shader instead of writing CG_VSHADER
        #set pix CG_PSHADER     // [Opt] Set the name of the pixel shader instead of writing CG_PSHADER

        // This is his stuff
        // He can do whatever he wants!
        Texture2D T_Diffuse : register(t0);
        Texture2D T_Normal  : register(t1);

        // Basic VS -> PS Structure
        // This structure inherits the "base" vertex, stuff that the engine can crunch on
        struct Vertex : CG_VERTEXBASE
        {
            // Empty
        };

        // Now include some routines that are needed at the end of all stages
        #include "cg_material.hlsl"

        // Vertex shader
        Vertex vert(CG_ILAYOUT IN)
        {
            // Zero set vertex
            Vertex o = (Vertex)0;

            // Just let the engine process it
            // Although we could do it ourselves, there's no need
            CG_VSPROCESS(o, IN);

            // Return "encoded" version
            CG_VSRETURN(o);
        }

        // Pixel Shader
        // In this case the return type is FORCED! As it's a deferred setup
        CG_GBUFFER pix(Vertex v, bool IsFrontFace : SV_IsFrontFace)
        {
            // Basic structure containing info about the surface
            Surface surf;

            // Sample color
            float4 diff = CG_TEX(T_Diffuse, v.CG_TEXCOORD);

            // Simple alpha test for vegetation
            // We want it to clip at .a = 0.5 so add a small offset
            clip(diff.a - 0.5001);

            // Fill out the surface information
            surf.diffuse = diff;
            surf.normal = CG_NORMALMAP( // Do some simple normal mapping
                T_Normal,
                v.CG_TEXCOORD,
                v.CG_NORMAL * ((IsFrontFace) ? 1 : -1), // Flip the normal if backside for leaves
                v.CG_TANGENT,
                v.CG_BINORMAL );
            surf.subsurface = 1;    // I've got a simple version of some sss, but it's not very good yet.
            surf.thickness = 0.1;   // For the sss
            surf.specular = 0.35;
            surf.anisotropic = 0.2;
            surf.clearcoat = 0;
            surf.metallic = 0;
            surf.roughness = 0.65;
            surf.emission = 0;

            // Return "encoded" version
            // Aka compress the data into the gbuffer!
            CG_PSRETURN(v, surf);
        }
    };
So in the process of all of this, I'm trying to fit in some SMAA and color correction. The moment I looked into color correction using a LUT, I facepalmed, because how the hell did I not think of that!? ( Not in a negative way; it's just so simple and elegant and pure awesome! ) So, after messing around with that, and spending 5 hours on a loading problem that turned out to be way too simple, it returns some kewl results: ( just messing around )
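What makes LUT-based color correction so elegant is that the whole grading pipeline collapses into a single table lookup. A nearest-neighbour CPU sketch of the idea (purely illustrative; the real version is a volume-texture sample in a shader, where the hardware also interpolates between entries):

```cpp
#include <vector>

// Nearest-neighbour sketch of a 3D LUT lookup. 'lut' holds size^3 entries
// laid out r-fastest; the graded colour is just the table entry addressed
// by the input colour.
struct Color { float r, g, b; };

Color ApplyLUT(const std::vector<Color>& lut, int size, Color in)
{
    auto index = [size](float c) {
        int i = static_cast<int>(c * (size - 1) + 0.5f);
        return i < 0 ? 0 : (i >= size ? size - 1 : i);
    };
    return lut[(index(in.b) * size + index(in.g)) * size + index(in.r)];
}
```

An identity LUT (each entry equal to its own coordinates) leaves the image untouched; grading is then just baking any colour transform you like into the table offline.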
So that's more or less it. I'll keep improving my engine, working on stuff and more stuff. I think I'll leave the diffuse GI system where it is for a while, since it works pretty well and produces pretty results. Now I need to work on some specular GI, since I don't yet have a robust system for that which doesn't produce ugly artifacts.

See you next time people of the awesome GDNet grounds!
-Migi0027


 

New Shading Model and Material System!

PS: No screenshots this time, just me talking!

I haven't really been active lately because of the glorious exams that are nearing, but it's nice to know that they're close to over ( at least this round ).

So, as the title says, I've been working on a new shading model that tries to support all of the modern techniques. Two features that I'm really excited about are anisotropic surfaces and subsurface scattering with direct light sources. However, I still have to improve my implementation of the clearcoat shading, as I'm still missing some important ideas about it.

On the other hand, I decided to rewrite my material system, which is what the user writes for their own custom surface shaders ( for meshes ). Previously I did a ton of string parsing, but honestly it was unnecessary and didn't give me the freedom I needed. So I went full-on BERSERK MODE with macros. Now, it may not seem like there's much macro work, but there is! I simply have a file full of macros, and when the user requests to load a material file, the engine pastes the user's code into that file ( well, after a bit of parsing of the material file ) and compiles it as a shader.

Example material:

    Input
    {
        Texture2D g_tNormalMap,
        float3 g_f3Color = (0.7, 0.7, 0.7),
        float g_fSubsurfaceIntensity = 0,
        float g_fAnisotropicIntensity = 0,
        float g_fClearcoatIntensity = 0,
        float g_fMetallicIntensity = 0,
        float g_fSpecularIntensity = 0,
        float g_fRoughness = 0,
    };

    Shader
    {
        #set vert CG_VSHADER
        #set pix CG_PSHADER // I have a deep dark fear of "frag"

        // Basic VS -> PS Structure
        struct Vertex
        {
            // This is a must! In the future I'll allow him to create his entire own structure,
            // as not much work is needed for it, but it still simplifies a lot of his work
            CG_VERTEXBASE

            // The user could pass any other variable he wanted here
        };

        Vertex vert(CG_ILAYOUT IN)
        {
            // Zero set vertex
            Vertex o = (Vertex)0;

            // Just let the engine process it; the user may do this on his own,
            // but in usual cases he really doesn't want to
            CG_VSPROCESS(o, IN);

            // Return encoded version
            CG_VSRETURN(o);
        }

        CG_GBUFFER pix(Vertex v)
        {
            float3 Normal = CG_NORMALMAP( v.CG_NORMAL, CG_SAMPLE(g_tNormalMap, v.CG_TEXCOORD) );
            // The same can be done for parallax mapping or whatever the user desires

            // Set up the surface properties
            Surface surf;
            surf.diffuse = g_f3Color;
            surf.normal = Normal;
            surf.subsurface = g_fSubsurfaceIntensity;
            surf.specular = g_fSpecularIntensity;
            surf.roughness = g_fRoughness;
            surf.metallic = g_fMetallicIntensity;
            surf.anisotropic = g_fAnisotropicIntensity;
            surf.clearcoat = g_fClearcoatIntensity; // Doesn't work yet!

            // Return encoded version
            CG_PSRETURN(v, surf);
        }
    };
And that's about it!
As always, until next time!

Migi0027


 

Trees! - And that's about it

This isn't really a new big update, just me talking a bit and showing some pictures.

Well, as the title suggests, I wanted to play with trees, to see how my shading model handles vegetation together with GI. Aside from the fact that I've disabled all frustum culling, the performance is not too bad. However, there's still LOTS of work to do in the shading model for surfaces where light is guaranteed to pass through, so the images might look a bit weird...

There are also a few problems with my volumetric lighting. Currently I find the vector between the camera's world space position and the world space position at the pixel, but if the ray is TOO long, then what? I know there's some really nice research published by Intel that describes large scale outdoor volumetric lighting, but I'm not going to dive into that right now, as it's a lot of work.

So, as people want to see pictures, I give you pictures!


Now, for the fun of it, why not render 6000 trees!


Now, as always, until next time!

Migi0027


 

Volumetric Lighting!

So one topic that we all hear over and over is VOLUMETRIC LIGHTING ( caps intended ). Why? Because it's so damn awesome. Why not? Because it can get expensive depending on the hardware. So after countless tries I scrapped the code I'd been wanting to shoot, then resurrect, then shoot again, and just wrote what made sense. And it worked! :)

The implementation is actually really simple; in simple terms I did it like this ( I haven't optimized it yet, e.g. I should do it all in light view space ):

    // Number of raymarches
    steps = 50

    // Get world space position
    positionWS = GetPosition();

    // Get world space position of the pixel
    rayWS = GetWorldSpacePixelPos();

    // Get ray between world space position and pixel world space pos
    v = positionWS - rayWS;
    vStep = v / steps;

    color = 0,0,0
    for i = 0 to steps
        rayWS += vStep;

        // Calculate view and proj space rayWS
        rayWSVS = ...
        rayWSPS = ...

        // Does this position receive light?
        occlusion = GetShadowOcclusion(..., rayWSPS);

        // Do some fancy math about energy
        energy = ... * occlusion * ...

        color += energy.xxx;

    return color * gLightColor;
Results: ( it's not done yet )





That's all! Until next time! :)

Migi0027


 

Advances in LPV, Volumetric Lighting and a new Engine Architecture

Welcome once again, and thanks for clicking on my journal!
When we think of light propagation volumes, we think of awesomeness with a ton of light bleeding, right? Well, I do. So I've been trying to limit the amount of light bleeding, and I find the current result acceptable without being too hackyish.

The first change is the injection. Usually I would inject the lighting at the voxel positions into an SH lighting map, but once I do this, I don't have much information about which "direction" the lighting came from when it's time to propagate it. In that case I'd have three choices: light bleeding, expensive occlusion calculation, or not propagating ( but the last one wouldn't be any fun... ). So: what if, instead of injecting the lighting at the voxel position, I inject it with a small offset, proportional to the approximated normal? In this way the actual lighting information is injected into the empty space.
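The offset injection boils down to a few lines. A hypothetical sketch (the half-cell factor is an assumption; in practice the offset would be tuned per scene):

```cpp
// Sketch of the offset injection described above: instead of injecting at
// the voxel centre, shift the sample point along the approximated surface
// normal so the lighting lands in the adjacent empty cell.
struct Vec3 { float x, y, z; };

Vec3 InjectionPosition(Vec3 voxelCenter, Vec3 normal, float cellSize)
{
    const float offset = 0.5f * cellSize; // assumed half-cell shift
    return { voxelCenter.x + normal.x * offset,
             voxelCenter.y + normal.y * offset,
             voxelCenter.z + normal.z * offset };
}
```

Because the injected energy now sits in an empty cell on the lit side of the surface, the propagation step can simply refuse to enter occupied cells without losing the direct lighting information.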

The 2nd change I made was simply discarding propagation in any occluded voxels/cells, as the new method doesn't require it. The issue with this is that if the cell size ( how much space a voxel/cell occupies, in whatever unit your system uses ) is way too big compared to the world, the propagation will visually fail and look horrible; so to look good, a bit of performance is sacrificed.

The last change is that when sampling the final indirect GI, I apply a small offset, as all the lighting information is in the "empty" cells. One might say this is a crude approximation, but I don't find it that horrible.

So, there you have it: that's my current recipe for an LPV system without bleeding. There are still lots of things to fix, but it's a start.

In my last entry I talked about a cascaded LPV system; however, this has slightly changed. You can still configure multiple cascades, but the way it works is slightly different. In each cascade the system creates two grids: a high frequency grid and a low frequency grid ( the dimensions of the grids are still intact ). The low frequency grid represents the low frequency lighting information, and the high frequency grid represents the slightly higher frequency lighting information. The two grids are treated as separate grids with different cell sizes, but when rendered, the energy proportion is taken into account.

So I'm fairly happy with how my LPV system has progressed, and I find the results acceptable. Now, obviously there's the issue of the "blocky" look ( if you want acceptable performance ), which I'll try to mess around with; I'll share my results later on.

Now, let's steer slightly away from that and think about volumetric fog! Yes! That's right!

Volumetric Lighting!

So, to make the volumetric lighting feel more "part" of the scene, I integrated the indirect GI system into it. Currently I have a very basic volumetric lighting setup: raymarch from the camera to the world space position at the pixel and slowly accumulate the lighting ( the method I use to calculate the lighting is based on the volumetric lighting in "Lords of the Fallen" [the game] ). At each raymarch step I also sample the indirect GI from the propagated lighting map and multiply that in. And I'm really liking the results!

(I know the roughness / specular looks wrong; I still need to integrate the roughness / specular maps from the sponza scene.) (And I seriously improved the quality of the gifs...)


Now! The only issue with this is... performance! All of that added together is asking your hardware to commit suicide; at least, mine did. Since I'm an addict of the game Dota 2, I was having a casual game with some friends and decided to program in the background. Now, for some reason I was writing to and reading from an unbound UAV in my compute shader ( I didn't realize this ). The result was the GPU completely freezing ( I could still talk to and hear my friends, whilst freaking out ). I waited for the TDR duration, but the TDR never occurred. So in the end I had to force a shutdown and restart quickly in order to participate in the game ( we won, though! ). I was actually scared to start it again, even after I bound the UAV...

Aside from that, I've also implemented some basic debugging tools for the LPV system, such as reading the lighting information at each cell position ( it's extremely simple to implement, but really helps a lot ):


Previously my engine had a pretty horrible architecture, because I'm horrible at architecture; I'm a horrible person. So I decided to attempt to improve the architecture of the engine. I decided to split the engine up into:
Helpers : Just general things, common math stuff / etc...
Native Modules : Shaders, Containers, etc
User Modules : An example would be a custom voxel filler or whatever, depends on the type
Chains : Responsible for higher level actions, such as Shadow Mapping, Voxel GI, etc...
Device : Basically combining chains and working with them

Now, I'm not saying that this is ideal or even good, but I find it nice and functional. The user modules are a bit special: they are custom modules that the programmer can create, but each module has to derive from a module type. An example is the GI system: it has a special module type that allows modification of the lighting maps before the propagation. The programmer inherits from this type, overrides the pure virtual functions, and pushes the module to a queue. I made a small module that "approximates" the indirect radiance from the "sky" ( assuming there is one ) just to test around. The native C++ code is fairly straightforward, although this specific module type has a bunch of predefinitions and preprocessors in a shader file to ease the process. The shader code for this testing module:

    #include "module_gridshfill.hlsl"

    // Our basic module definition
    MODULE((8, 1, 8), (uint3 CellPos : MODULE_CELLID)
    {
        // Testing data. This is just magic and stuff, not correct at all
        float fFactor = 0.01f;
        float3 f3Color = float3(0.658, 0.892, 1);
        float3x4 f3x4AmbientSH =
        {
            fFactor.xxxx * f3Color.x,
            fFactor.xxxx * f3Color.y,
            fFactor.xxxx * f3Color.z
        };

        // Raymarch Down
        [loop]
        for (CellPos.y = g_fVoxelGridSize - 1; CellPos.y >= 0; CellPos.y--)
        {
            // Get the voxel
            VoxelData voxel = FETCH(CellPos - uint3(0, 1, 0));

            // If this voxel is occupied break the march
            // TODO: Semi occluded voxels (10)
            if (voxel.fOcclusion > 0)
            {
                break;
            }

            // Write the new value on top of the current value
            WRITE_ADDITION(CellPos, f3x4AmbientSH);
        }
    });
Some of this stuff will change for sure, although it works fine for now. The result of the above is indirect radiation from the "sky", and it looks alright! So I'm pretty happy with the module system.

On a completely different note, I suddenly have this weird craving to work on my scripting language again... ( I know, I know, just use an existing one... but where would the fun be in that!? ) And I soon need to reimplement some sort of physics engine in this version of my engine. So, there's still lots of fun!

Looking away from some more or less small changes and additions, that's more or less it, folks! It's been a heavy week, though; lots of things happening. For example, my dog found out that a full-day barbecue party is extremely tiring: he didn't want to walk or anything, and slept like a stone... ( He loves walks. )



See you next time!

Migi0027


 

To rewrite, or not to rewrite?

When I first started doing "semi serious" graphics programming, I called it Cuboid Engine, because it sounded cool and my first accomplishment was, well, a cube! So throughout the development I was basically just putting lots of things together, and my only concern was:

Does it look pretty, and realistic!?

And if it didn't, it wasn't worth it. So I eventually came to a point where I actually had a graphics engine, which wasn't really a game engine; it was a decent graphics engine, with some performance issues. After a while the code base was pretty messy and there were some naughty hacks that I wasn't proud of. I didn't really have the patience to move things around and clean up, because it would simply be too much work. So, what can I do?

Rewrite it!

I'm pretty scared of that word, because it reminds me of a lot of work. Rewriting is a lot of work, and when I created a new project called "Cuboid Engine 2" I was pretty happy, because of the 2. But then, the blank screen with the god damn int main()... Personally it's a weird feeling. My codebase is nowhere near the size of some of your guys' work, but it's all I can do.

So I began thinking back to the troubles I'd had before: always writing the same Direct3D code every single time! So I needed a wrapper ( that's what people call it, right? ). So I made my very own cute "CEDX11" class that took care of all the naughty work! What's the advantage of this?
No need to always cuddle with DirectX ( even though it's a nice SDK )
No need to repeat code
Easy to modify
Easy to upgrade! (DX12, if I ever get my hands on it!)

But I'm just a random internet guy who's rambling about my "easy" problems, because this is candy compared to some stuff!

After a while I had written another graphics engine with some game elements, and a much better engine at that! It can ( today! ) do some things that I'm, well, a bit proud of: Physically Based Rendering ( BRDF ), Light Propagation Volumes, Voxel Cone Tracing ( almost done ) and other things, with loads of post processing effects ( luminance adaptation, bloom, SSAO, bla bla bla ). But is it a game engine? Heck no!

It's far from a game engine! Sure, you can script in it, run the script and get movable things and skeletal animations, or skip the script and communicate directly with the engine. But when I say a game engine, I'm talking about an engine that allows the user to create a game, not a benchmark. Sure, in my small engine it's easy to create a benchmark! And it's pretty flexible in my opinion; for example, I worked hard on custom coded materials that behave the way the user wants them to without breaking out of deferred rendering ( which allows the user to work with all the available shader stages ). But a game? No.

The architecture of my engine is not designed to be a game engine, at all. So this actually does seem like the first case? However, the code base isn't messy, it's just not designed for this "new" purpose. So what do I do? Well, what can I do:
Rewrite
+Cleaner and better result
+Can just use the same wrapper again, CEDX11 ( it's actually a pretty solid thing, does its job well )
-The effort! + Time!
Move things around and clean up
+Less time and effort!
-The result won't be as good as rewriting

So well, this was it, these are my current thoughts, and sometimes it's just nice to write them down even though it's actually a pretty small thing.

Until next time!

Migi0027


 

Color Grading! - Yay

Another entry!

So something I never got a basic implementation of was color grading, so today I decided to get a rough implementation in. There's still lots to work on; it's based on NVIDIA's post complement sample (http://developer.download.nvidia.com/shaderlibrary/webpages/shader_library.html).

Color Grading DISABLED vs ENABLED:

.
And that's all, until next time! And enough about the damn white/gold/purple/brown/etc... dress!

-MIGI0027


 

Better Light Propagation Volumes

In my last entry I described how the interpolation was poor, so I decided to "smooth" it out. I used the compute shader to propagate each cell's contribution over the neighbouring cells. This propagation was done in 10 passes. Below is a comparison between new and old results; the new ones are much better!


New Results:






Last Results:






Personally I find these results much better than before.

Until next time!

Migi0027


 

Minor updates

So I'm just going to show off some images of progression. Actually it's just a minor update as not much has changed, however my DOF is much better. And for some reason SSAO is bugged, so it's turned off. Also the Sponza material file is bugged, so I made a few changes.
- I had a sign error in the DOF testing, now it's fixed, and the DOF is based on the "Skylanders DOF".
- The shadowing has also improved using better filtering.
- Now the engine has better support for specular maps.
- Displacement is working although it's a Work In Progress.
I'm planning on making the GI work on a large scale, using a cascaded approach.
EDIT: Forgot to disable debug mode, I promise that the fps is higher!
GI: ( Really bright because light is shining directly on the cloth )



DOF:



Until next time!

Migi0027


 

Don't let Hieroglyph 3's scene grow!

I finally got my new PC up and running; she's working smoothly and quietly with no issues yet. After reinstalling everything, I had some fun with the scene that comes with the Hieroglyph 3 framework, so I added a bunch of plants and grass. This was the result:

Just ignore the red dot and the completely black background.



And that's all, nothing more, just a bit of fun!

Migi0027


 

The beginning of particle simulation

Last Entry: https://www.gamedev.net/blog/1882/entry-2260844-got-new-particle-rendering-up-and-running-simulation-next/

So I got the basic backbone of the simulation system up and running. The simulation happens in a compute shader, and everything just works out, which is great! So to test it out I put two point masses with low intensity a bit apart from each other, and this was the result.
The next step will be to stretch the particles based on velocity for a fake motion blur, and then to let the particles collide with the objects around them.

GIF:



Until next time!

Migi0027


 

Found some awesome models - With gifs!

So I've always had a hard time finding any "good" foliage models online, so searching a bit more I found TF3DM ( it's most likely not completely legal, but it's good for testing ), and found some awesome tree models.

Result? Eye candy!



Link if anyone is interested:
http://tf3dm.com/3d-model/trees-9-53338.html
http://tf3dm.com/3d-model/trees-2-67110.html

Until next time!

Migi0027


 

New Editor - 2 Days Progress!

So we all hate manually adjusting positions and such. "Ohh, it should be 0.1 to the right, ohh well, better recompile... Hmm, no, actually 0.05" and so on... So I decided I wanted to make an editor, or at least start one.

Now I had a great problem, which was the fact that my project was built under the /MT option. This was mainly because the normally distributed PhysX libraries were built under /MT too. But Qt was built under /MD, and if I recompile Qt under /MT there are certain bugs, such as the designer not working ( they also document this ). So, what to do!? Then theflamingskunk ( user here ) came to action and said: "But wait! There is a PhysX /MD build, here!". I mean, c'mon, what are the exact chances of this happening: the first time I ever see this guy in chat ( been off chat for a while though ), and he has the perfect solution. Awesome!

So after building my project under /MD and reconfiguring ( takes time, so many linker errors! Argh! ), I finally successfully linked to Qt. However I can't build under /MDd for some reason, not sure why... So no Qt debug mode, but it hasn't stopped me, hehe. Screenshots!
Main editor, where you place stuff and stuff. Currently supports Assets ( a preconfigured mesh ) and directional lights.



Now the asset editor works pretty well in my opinion. The first tab is the base per mesh options, such as material, and so on. However they can change with textures and such.


The second tab is the resource tab; it allows for the integrated types, such as normal, specular and displacement maps, bla bla bla. But it also allows for custom textures currently, more to come though:


So with this its easy to add new assets to the scene, go into the asset list and double click it and it spawns, then edit it: ( Asset list is on the far left middle tab )



Now I also made some changes to my voxelization pass to allow wandering around without going outside the bounds of the voxel grid. So I move the voxel grid. However I had loads of artifacts which were HORRIBLE. So moving was not an option. Instead I render the voxelization from (0,0,0), and as culling and all of that is disabled, I simply check the position in the pixel shader and "assume" that the voxel grid is there. No artifacts, no nothing. The VERY red, not red, is outside of the voxel grid:




Funny thing: I ended up calling the class that handles the asset configuration AssConfig, because I could.

QAss::GetResources...
QAss::CreateAssConfig...
QAss::...
That's all folks! Just a summary of the early stages of the editor, so it's still pretty rough...

Thanks!

Migi0027


 

First Entry - What I'm doing

Hi fellow GameDev's!

This will be my first journal entry, and my last ( nah, it wont be ), so throughout this journal I'll hopefully post the most interesting developments in my engine or my game.

Just so you're warned, I have absolutely zero experience in how to write these journal entries, so it's either boring or worth reading right now. And there are no screenshots of anything since I'm not near my PC for a few weeks.

So, my real beauty is my engine, which I have put a lot of work into, although development is going rather slowly due to obstacles ( school, etc... ). I like to call her Cuboid Engine; the reason for this name is that the first accomplishment I made was the rendering of a brilliantly simple 3D cube ( and the code wasn't even mine! ). From there on I continued to add onto the sample code and encapsulated everything into classes, and things went fast from there. Although after a while, I saw that my code base was messy as hell, so after a good while I decided to re-do everything, which surprisingly took about 2 weeks ( with the obstacles... ), and the final product was much faster, nicer and cleaner. The part about the engine that I'm very satisfied with is its ability to be very general, in the sense that it doesn't serve a strict purpose, although it's mainly a game engine ( and I like to put the focus on the rendering ).

When coding, my greatest fear/problem is making classes as general as possible, allowing me to re-use them later on; sadly, sometimes this isn't as easy as it sounds.

Something that I learnt the hard way is that I like to start something and then quit very early on, but I finally think I've got something that can keep me occupied: a game of mine in development, called Cuboid Space, using my slowly developing engine. The game is block based ( no, it's not another Minecraft copy ), and the environment is located in space. The actual plot hasn't been made up yet, and the development is still very early; all you're left with is a platform and some base materials/blocks that you can place.

The game is very modding friendly, both for modders and the developers, this is due to the structure of the game: ( MS Paint FTW )



So basically Game uses the singleton DLL Interface ( yeah, singletons are bad, I know... ). Now the job of Interface is to keep track of all Blocks registered ( the register basically tells the Game that "this" block exists ), and to hold the pointers to the engine components ( rendering, physics, sound, etc... ) so that the Blocks can access them, since I do not want the Blocks to be able to access the Game. The way I made the Interface aware of the Blocks was by employing a little hackish solution:

( I don't have the sources at the moment since I'm away from the main PC for a while, so this is from memory )

struct ObjectRegister
{
    ObjectRegister(Object* (*c)(), cex::CEString &s) : _call(c), name(s) {}

    Object* (*_call)();
    cex::CEString name;
};

class _BASEAPI Interface
{
private:
    std::vector<ObjectRegister> g_vRegister;

public:
    // Material Manager
    MaterialManager *pMaterials;
    // Texture Manager
    TextureManager *pTextures;
    // Sound Engine
    cex::CESoundCore *pSound;
    // Engine
    cex::CEEngineCore *pEngine;

    // Create the interface
    void Create(cex::CEEngineCore *p, cex::CESoundCore *ps);
    // Get Object Register
    ObjectRegister* GetObjectRegister(cex::CEString &p);
    // Global Object Register Function
    bool Register(Object* (*in)(), cex::CEString &Name);
    // Get Singleton
    static Interface* Instance();
};
As you can see there, the Blocks themselves are responsible for calling Register(...). Now this is where it gets messy and tricky, so I chose to make it easy for a block to register itself like this:

The header file of "a" block:

#pragma once
#include "Blocks.h"

// Declare Auto Register Module
BLOCKS_ARMODULE(Metal);

// Metal Class
BLOCKS_START(Metal)
{
public:
    // Create Object
    void Create(Interface *pInteface);
    // Update Object
    void Update(Interface *pInteface);
    // On Signal
    void OnSignal(Interface *pInterface, int signal);
};
BLOCKS_END
The source file of "a" block:

#include "stdafx.h"
#include "CMetal.h"

// Namespace
BLOCKS_USE;

// Run Auto Register Module Constructor, identifier "Metal"
BLOCKS_ARRUN(Metal, "Metal");

void Blocks::Metal::Create(Interface *pInteface)
{
    // Set Texture Metal
    SetTexture(pInteface, "metal01_small.jpg");
    SetNormalMap(pInteface, "metal01_small_NORM.tga");
    SetMaterial(pInteface, "texture_norm.cs");
}

void Blocks::Metal::Update(Interface *pInteface)
{
}

void Blocks::Metal::OnSignal(Interface *pInterface, int signal)
{
}
So basically the auto register module is a class with a constructor which is instantiated inside the source file, now since this is a DLL, when loading this DLL file the constructors will get called and the blocks will get registered.

Is this messy? Indeed!
Singletons (Check!)
Global Instantiated Classes (Check!)

Will it change? Yes, but it works great!

And then finally the Game creates the Interface class, and then loads the Blocks DLL to register all Blocks. Then when the user selects a new block, the game simply finds the block by name, like: _pBlock = pInterface.GetObjectRegister("metal"); although the searching method will also change. So basically everything will change, eventually...

Now the physics part, the idea of blocks made the management way easier. The way I manage my blocks is with two classes, BlockGroup and Block, each single block is attached to a group, and the actual position of a block is always in local space with respect to the group position (This makes it very easy to find blocks close to other blocks, like sending a power signal to a neighbour block ). Now the group is the living thing, so when rendering each single block is transformed by the transformation of the group to create the movement. Each single time a new block is placed, it is assigned to a group, the group is found by finding the block that the user has clicked on, since each block holds a pointer to the group. Then the physics library receives an updated shape of the group, and the same happens when a block is removed.

Enough chit-chat.

So this was basically a start to my journal, hopefully I'll continue to add entries with developments. ( And screenshots )

A bit of personal information, currently I haven't had much time for further development, since my brother has to share his pc ( which happens rarely, though I assembled it ), since mine died. And by the obstacles I'm talking mainly about the school, since it's putting a whole lot of pressure on us. And currently I'm on a vacation with my family, then I'll get one week of break, then onto a school related travel, and then one week left of the summer holidays. So it's not the holiday I had hoped for, since all I asked for was some time to relax and, well, code on .

Although, you'll have to take my word for this, the engine is much more organized and cleaner than this system, as this almost fits for a coding horror!

The fact that you even reached the bottom is amazing! Thanks for your time!

Migi0027


 

A little update - Progressions

Pictures: ( because that's why people are here ).

PS. I'm no artist, so this is programmer art, and some pictures are deliberately oversaturated. The vegetation looks to be different colors because the diffuse textures are different. In the 3rd image the engine doesn't clear the depth or normal textures, so artifacts occur, but in a complete environment there should be no "voids".





Some progression I've made over the months in my engine:
HDR Luminance Adaptation ( Pretty happy about this, the only problem is when the areas get too dark, but I'll think of something )
Improved Shadow mapping - Added PCF
Better BRDF support, had a few bugs and miscalculations, now vegetation actually looks like vegetation!
Better Tone Mapping Support ( Thanks MJP, your samples were a good guideline! )
Much better bloom, downsampling by 8 instead of 2, and much cheaper than before
Got some better SSAO in, although I'll re-implement it soon
Simple GUI
Skinned Animations

Actually I spent a while on the adaptation part, because weird gradients were forming on the screen. Turns out that a 2x1 texture for the adapted luminance map isn't a good idea, so I made the mip chain one level deeper and voilà, a nice luminance map.

And I'm currently working on:
Better GUI
Static poses from skinned animations
Complete Font Rendering ( Almost done )
Light Propagation Volumes ( it's like you're a kid with candy in your hands! )

What I want to implement:
Everything!
Translucency
Subsurfacescattering
Destruction module for Physx
Apex particles, for some reason it crashes when I enable apex particles, no clue why, so I disabled it for now.

What I still miss to implement from my old engine:
Procedural Lens Flares! ( I miss them )
Tessellation
In world debugging ( Like sphere visualizations for point lights and other debugging stuff )

The GUI framework works the same way all my materials do: the artist ( in this case me ) writes a simple shader file, such as:

For a GUI material:

Some code up here that the parser doesn't care about

shader "Simple Diffuse"
{
    Properties()
    {
        Info = "Some info that the parser might care about";
    }

    // Considered to be global, any input that the shader requires.
    // On the native side the slot is fetched by pMat->GetSlot("box")
    input()
    {
        Texture2D box;
        Texture2D image;
    }

    pass() // You can have multi-pass materials
    {
        linker()
        {
            // In this stage you could send info from the vs to the ps, or from any stage to another
        }

        vertex()
        {
            // Any transform you'd like to take care of...
        }

        pixel()
        {
            // Set Outlined Shape
            clip((box.Sample(ss, input.texcoord).a
For a general mesh, more complex:

The parser still doesn't care

shader "Simple Diffuse"
{
    Properties()
    {
        Info = "It might care about this...";
    }

    // Considered to be global
    input()
    {
        Texture2D hex;
    }

    // Available pass properties: roughnessmap, normalmap, cull, cullmode, specularmap,
    // dispacementmap, and some minor ones... When enabled, the material will support them.
    pass(cull = true; normalmap = true; roughnessmap = true;)
    {
        pixel()
        {
            output.dffXYZTrA.xyz = hex.Sample(ss, input.texcoord).xyz;
        }

        // Available shader stages:
        // vertex, pixel, domain, hull, geometry, linker ( special one ).
        // The geometry pass has its own properties, such as:
        // geometry(verticesout = 4; inputtopology = POINT/TRI..., outputtopology... general geometry specific stuff)
    }
}
And, well, that's it!

Migi0027


 

What drives you in whatever you like?

This isn't really an update, it's just me rambling about stuff.

Motivation is awesome, and horrifying, at least I think so. It allows you to completely focus on something for days without thinking twice, or it leaves you staring at the screen thinking "What am I doing with my life...". So how does one find this "motivation"? Well, I frankly have no general answer, but I do know what drives me.

I love graphics, I love messing with new techniques, but what drives me is nature. E.g. A picture I took this morning going to school:

.
For some this may mean nothing, but for me this means everything. Even though we may not be there yet, the day where we finally get interactive graphics at this level is the day where... I'm not sure, It's just going to be a good day :).

The following, too, is something awesome:


So, well, that's it. If anyone reached the bottom I'd like to ask you:

[indent=1]What drives you?

Thanks guys.

Migi0027


 

Global Illumination Progress Update

The Global Illumination by Light Propagation Volumes is progressing nicely. Even though this is an artifact in a weird way, it still produces a nice result. The surface is too thin for the grid not to capture both sides. But it looks nice.



But not all is good, there's still weird artifacts:

Sometimes the shadow testing fails when injecting lighting into the grid. I guess this has to do with the resolution, but I'm still not sure; I think there's more to it. Example: the shadow testing should have succeeded, but no indirect illumination is present.



Light Bleeding:




Until next time!

Migi0027


 

Light Propagation Volumes - First Prototype!

Photos at the top! The gallery stuff!
( The photos are ONLY showing the GI, and are oversaturated to show the effect )

This is a very early prototype of my implementation of Light Propagation Volumes. I spent loads of hours on a very simple error that turned out to be ridiculous:

void CE_NAMESPACE::DX11::RenderInstanced(int vertex_count, int instance_count, int vertex_start /*= 0*/, int instance_start /*= 0*/)
{
    // The bug: vertex_start was passed where the vertex count belongs
    pDevcon->DrawInstanced(vertex_start, instance_count, vertex_start, instance_start);
    // The fix:
    pDevcon->DrawInstanced(vertex_count, instance_count, vertex_start, instance_start);
}
As I usually don't draw an instanced mesh without vertices or indices, this function hadn't been called before, but it's fixed now. It was used for the directional voxel lighting passes.

There's still A LOT of light bleeding, so I'll have to take care of that. And for some reason, when a surface normal is parallel with an axis, the voxelization fails a lot. So there's still a lot of work...

But it works! Sorta.

Migi0027


 

First try at DOF, its something!

So, I never actually went into depth of field, but it's an important effect that I always wanted to integrate, and was way too lazy to do it. So I manned up and did it, whilst failing a couple of times. The actual effect is still not very good, but it's getting there. And as always, the only reason why you're really here is pics, so:



Until next time!

Migi0027

