


Community Reputation

4631 Excellent

About Migi0027

  • Rank
    Advanced Member


  1. This is seriously awesome, thanks!
  2. Migi0027

    2D Transforms 101

  3. Welcome once again, and thanks for clicking on my journal!

When we think of light propagation volumes, we think of awesomeness with a ton of light bleeding, right? Well, I do, so I've been trying to limit the amount of light bleeding, and I find the current result acceptable without being too hacky.

The first change is the injection. Usually I would inject the lighting at the same position as the voxels into an SH lighting map, but once I do this I don't have much information on which "direction" the lighting came from when propagating it. In that case I would have three choices: light bleeding, expensive occlusion calculations, or not propagating at all (but the last one wouldn't be any fun...). So, what if instead of injecting the lighting at the voxel position, I inject it with a small offset proportional to the approximated normal? That way the actual lighting information is injected into the empty space next to the surface.

The second change was simply discarding propagation in any occluded voxels/cells, as the new method doesn't require it. The issue with this is that if the cell size (how much space a voxel/cell occupies, in whatever unit your system uses) is way too big compared to the world, the propagation will visually fail and look horrible, so looking good costs a bit of performance.

The last change: when sampling the final indirect GI, I apply a small offset, since all the lighting information now lives in the "empty" cells. One might say this is a crude approximation, but I don't find it that horrible.

So, there you have it: that's my current recipe for an LPV system without bleeding. There are still lots of things to fix, but it's a start.

In my last entry I talked about a cascaded LPV system; however, this has changed slightly. You can still configure multiple cascades, but the way it works is a little different.
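The offset-injection idea above can be sketched in a few lines. This is a hedged CPU-side illustration, not the engine's actual code; the `injectionCell` name, the grid layout, and the one-cell offset scale are assumptions:

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// Compute the grid cell a lighting sample is injected into: instead of
// using the occupied voxel's own cell, push the injection point out along
// the approximated surface normal (offsetScale is measured in cells), so
// the lighting lands in empty space next to the surface.
std::array<int, 3> injectionCell(const Vec3& worldPos, const Vec3& normal,
                                 float cellSize, float offsetScale = 1.0f)
{
    // Offset the injection position along the normal, proportional to cell size.
    Vec3 p{ worldPos.x + normal.x * cellSize * offsetScale,
            worldPos.y + normal.y * cellSize * offsetScale,
            worldPos.z + normal.z * cellSize * offsetScale };
    return { int(std::floor(p.x / cellSize)),
             int(std::floor(p.y / cellSize)),
             int(std::floor(p.z / cellSize)) };
}
```

With `offsetScale = 0` this degenerates to plain voxel-position injection; the non-zero offset is exactly what moves the injected SH into the neighbouring empty cell.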
In each cascade the system creates two grids: a high-frequency grid and a low-frequency grid (the dimensions of the grids are unchanged). The low-frequency grid represents the low-frequency lighting information, and the high-frequency grid the slightly higher-frequency lighting information. The two grids are treated as separate grids with different cell sizes, but when rendered the energy proportion between them is taken into account.

So I'm fairly happy with how my LPV system has progressed, and I find the results acceptable. Obviously there's still the issue with the "blocky" look (if you want acceptable performance), which I'll try to mess around with and share my results later on.

Now, let's steer slightly away from that and think about volumetric fog! Yes! That's right! Volumetric lighting! To make the volumetric lighting feel more "part" of the scene, I integrated the indirect GI system. Currently I have a very basic volumetric lighting setup: raymarch from the camera to the world-space position at the pixel and slowly accumulate the lighting (the method I used to calculate the lighting is based on the volumetric lighting in the game "Lords of the Fallen"). At each raymarch step I also sample the indirect GI from the propagated lighting map and multiply that in. And I'm really liking the results! (I know the roughness/specular looks wrong; I still need to integrate the roughness/specular maps from the Sponza scene.) (And I seriously improved the quality of the gifs...)

Now! The only issue with this is... performance! All of that added together is asking your hardware to commit suicide; at least, mine did. Since I'm an addict of the game Dota 2, I was having a casual game with some friends and decided to program in the background, and for some reason I was writing to and reading from an unbound UAV in my compute shader (I didn't realize this).
The result was the GPU completely freezing (I could still talk to and hear my friends, whilst freaking out). I waited for the TDR duration, but the TDR never occurred, so in the end I had to force a shutdown and restart quickly in order to participate in the game (we won though!). I was actually scared to start it again, even after I bound the UAV...

Aside from that, I've also implemented some basic debugging tools for the LPV system, such as getting the lighting information at each cell position (it's extremely simple to implement, but really helps a lot).

Previously my engine had a pretty horrible architecture, because I'm horrible at architecture; I'm a horrible person. So I decided to attempt to improve the architecture of the engine. I split the engine up into:

Helpers: just general things, common math stuff, etc.
Native Modules: shaders, containers, etc.
User Modules: for example a custom voxel filler or whatever, depends on the type.
Chains: responsible for higher-level actions, such as shadow mapping, voxel GI, etc.
Device: basically combining chains and working with them.

Now, I'm not saying this is ideal or even good, but I find it nice and functional. The user modules are a bit special: they are custom modules that the programmer can create, but each module has to derive from a module type. An example is the GI system, which has a special module type that allows modification of the lighting maps before the propagation. The programmer inherits from this type, overrides the pure virtual functions, and then pushes the module onto a queue. I made a small module that "approximates" the indirect radiance from the "sky" (assuming there is one), just to test around. The native C++ code is fairly straightforward.
Although this specific module type has a bunch of predefinitions and preprocessor macros in a shader file to ease the process, the shader code for this testing module is:

```hlsl
#include "module_gridshfill.hlsl" // Our basic module definition

MODULE((8, 1, 8), (uint3 CellPos : MODULE_CELLID)
{
    // Testing data. This is just magic and stuff, not correct at all.
    float    fFactor = 0.01f;
    float3   f3Color = float3(0.658, 0.892, 1);
    float3x4 f3x4AmbientSH =
    {
        fFactor.xxxx * f3Color.x,
        fFactor.xxxx * f3Color.y,
        fFactor.xxxx * f3Color.z
    };

    // Raymarch down
    [loop]
    for (CellPos.y = g_fVoxelGridSize - 1; CellPos.y >= 0; CellPos.y--)
    {
        // Get the voxel
        VoxelData voxel = FETCH(CellPos - uint3(0, 1, 0));

        // If this voxel is occupied, break the march
        // TODO: Semi-occluded voxels (10)
        if (voxel.fOcclusion > 0)
        {
            break;
        }

        // Write the new value on top of the current value
        WRITE_ADDITION(CellPos, f3x4AmbientSH);
    }
});
```

Some of this will change for sure, although it works fine for now. The result of the above is indirect radiation from the "sky". And it looks alright! So I'm pretty happy with the module system.

On the complete other hand, I suddenly have this weird craving to work on my scripting language again... (I know, I know, just use an existing one... but where would the fun be in that!?) And I soon need to reimplement some sort of physics engine into this version of my engine. So there's still lots of fun ahead!

Looking away from some more or less small changes and additions, that's more or less it, folks! It's been a heavy week though, lots of things happening. For example, my dog found out that a full-day barbecue party is extremely tiring; he didn't want to walk or anything, slept like a stone... (he loves walks). See you next time!
  4. *pew* *pew* *PEW* *pew* *pew* *PEW*
  5. Well, let's begin, shall we! (This article isn't very focused; it's just small notes and such.)

======

For a while I've been thinking about working on cascaded light propagation volumes, so I finally did. For now I just have a 64 (detailed) + a 32 (less detailed) grid that are filled using the voxel caches. Although I have not worked on the energy ratio yet (my solution is hacky), I like the result. (Images scaled to fit; originally rendered at 1920x1080. The whitish color is because I've got some simple volumetric lighting going on, although it doesn't respond to the LPV yet.) (PS: still lots of work, so there are issues + light bleeding.) (And there are no textures on the trees, for... reasons and stuff.)

I've also worked on my BRDF shading model, which is based on Disney's solution, and integrated it into the LPV system (although it's a simplified version, as we don't need all the detail and some computations are meaningless in this context). And I really think it made the indirect colors feel more part of the scene. A poor-quality gif showing how the light propagates through the scene: .

======

On the complete other side, as I'm rewriting the engine I felt like upgrading to the RC version of VS 2015 (and dear god, I recommend it to anyone). And so I needed to recompile lots of libraries, such as SFML (+ most dependencies), AntTweakBar, and other small stuff. The AntTweakBar case was special, as it really only supports SFML 1.6. It contains a minified version of the SFML 1.6 events that it then uses, but when the memory layout changes in SFML 2.3 it all fucks up (sorry).
So I had to change part of the minified internal version of SFML to make it work. For anyone interested, here is the modified part of the minified SFML (it's hackyish, mostly copy-and-paste from the SFML sources, so there are most likely errors and such, but for now it does the job):

```cpp
namespace sf
{
    namespace Key
    {
        enum Code
        {
            Unknown = -1,                               ///< Unhandled key
            A = 0, B, C, D, E, F, G, H, I, J, K, L, M,
            N, O, P, Q, R, S, T, U, V, W, X, Y, Z,      ///< The letter keys
            Num0, Num1, Num2, Num3, Num4,
            Num5, Num6, Num7, Num8, Num9,               ///< The number keys
            Escape,
            LControl, LShift, LAlt, LSystem,            ///< Left modifiers (LSystem = OS key)
            RControl, RShift, RAlt, RSystem,            ///< Right modifiers (RSystem = OS key)
            Menu,
            LBracket, RBracket, SemiColon, Comma, Period, Quote,
            Slash, BackSlash, Tilde, Equal, Dash,
            Space, Return, BackSpace, Tab,
            PageUp, PageDown, End, Home, Insert, Delete,
            Add, Subtract, Multiply, Divide,            ///< The + - * / keys
            Left, Right, Up, Down,                      ///< Arrow keys
            Numpad0, Numpad1, Numpad2, Numpad3, Numpad4,
            Numpad5, Numpad6, Numpad7, Numpad8, Numpad9,
            F1, F2, F3, F4, F5, F6, F7, F8,
            F9, F10, F11, F12, F13, F14, F15,
            Pause,
            KeyCount                                    ///< Keep last -- the total number of keyboard keys
        };
    }

    namespace Mouse
    {
        enum Button
        {
            Left, Right, Middle,                        ///< Left, right and middle (wheel) buttons
            XButton1, XButton2,                         ///< The two extra mouse buttons
            ButtonCount                                 ///< Keep last -- the total number of mouse buttons
        };

        enum Wheel
        {
            VerticalWheel,                              ///< The vertical mouse wheel
            HorizontalWheel                             ///< The horizontal mouse wheel
        };
    }

    namespace Sensor
    {
        enum Type
        {
            Accelerometer,      ///< Raw acceleration (m/s^2)
            Gyroscope,          ///< Raw rotation rates (degrees/s)
            Magnetometer,       ///< Ambient magnetic field (micro-teslas)
            Gravity,            ///< Direction and intensity of gravity, independent of device acceleration (m/s^2)
            UserAcceleration,   ///< Device acceleration, independent of gravity (m/s^2)
            Orientation,        ///< Absolute 3D orientation (degrees)
            Count               ///< Keep last -- the total number of sensor types
        };
    };

    namespace Joy
    {
        enum Axis
        {
            X, Y, Z, R, U, V,   ///< The joystick axes
            PovX, PovY          ///< The axes of the point-of-view hat
        };

        enum { Count = 4, ButtonCount = 32 };
    }

    typedef unsigned char Uint8;
    typedef unsigned int  Uint32;

    class Event
    {
    public:
        struct SizeEvent             { unsigned int Width, Height; };
        struct KeyEvent              { Key::Code Code; bool Alt, Control, Shift, System; };
        struct TextEvent             { Uint32 Unicode; };
        struct MouseMoveEvent        { int X, Y; };
        struct MouseButtonEvent      { Mouse::Button Button; int X, Y; };
        struct MouseWheelEvent       { int Delta, x, y; };
        struct MouseWheelScrollEvent { Mouse::Wheel wheel; float Delta; int x, y; };
        struct JoystickConnectEvent  { unsigned int joystickId; };
        struct JoystickMoveEvent     { unsigned int JoystickId; Joy::Axis Axis; float Position; };
        struct JoystickButtonEvent   { unsigned int JoystickId, Button; };
        struct TouchEvent            { unsigned int finger; int x, y; };    ///< Finger index and position, relative to the window's top-left
        struct SensorEvent           { Sensor::Type type; float x, y, z; }; ///< Sensor type and its current values

        enum EventType
        {
            Closed, Resized, LostFocus, GainedFocus, TextEntered,
            KeyPressed, KeyReleased,
            MouseWheelMoved,                    ///< (deprecated; data in event.mouseWheel)
            MouseWheelScrolled,                 ///< (data in event.mouseWheelScroll)
            MouseButtonPressed, MouseButtonReleased, MouseMoved,
            MouseEntered, MouseLeft,
            JoystickButtonPressed, JoystickButtonReleased, JoystickMoved,
            JoystickConnected, JoystickDisconnected,
            TouchBegan, TouchMoved, TouchEnded,
            SensorChanged,
            Count                               ///< Keep last -- the total number of event types
        };

        EventType Type;

        union
        {
            SizeEvent             size;
            KeyEvent              key;
            TextEvent             text;
            MouseMoveEvent        mouseMove;
            MouseButtonEvent      mouseButton;
            MouseWheelEvent       mouseWheel;   ///< (deprecated)
            MouseWheelScrollEvent mouseWheelScroll;
            JoystickMoveEvent     joystickMove;
            JoystickButtonEvent   joystickButton;
            JoystickConnectEvent  joystickConnect;
            TouchEvent            touch;
            SensorEvent           sensor;
        };
    };
} // namespace sf
```

On top of that, the performance of my engine under VS 2015 strangely increased by a few milliseconds, which really surprised me; I'm not completely sure why. And in VS 2013 I had a strangely huge overhead when starting my application inside VS, which made the file I/O incredibly slow; in VS 2015 that issue and the huge waiting time (20 seconds to a minute...) are gone.

I finally got to redesign my gbuffer, and while there's lots of work to be done, it all fits nicely. The general structure:

2-channel: x = Depth, y = Packed(metallicness, anisotropicness)
4-channel: xy = Normal, z = Packed(subsurface, thickness), w = Packed(specular, roughness)
4-channel: xyz = Diffuse, w = Packed(clear_coat, emission)

The tangent is then reconstructed later; it's pretty cheap and works fine for my needs. Now all the user has to do is call GBuffer_Retrieve(...)
from their shaders, and all the data is decompressed for them to use. The final data container looks somewhat like the following:

```cpp
struct GBufferData
{
    float3 Diffuse;
    float3 PositionVS;
    float3 TangentVS;
    float3 NormalVS;
    float3 Position;
    float3 Normal;
    float3 Tangent;
    float  SpecPower;
    float  Roughness;
    float  Metallic;
    float  Emmision;
    float  ClearCoat;
    float  Anisotropic;
    float  SubSurface;
    float  Thickness;
};
```

Now, you might say "But what if I don't want to use it all? Huge overhead!", which is true, but: compilers! The cute little compiler will optimize out any computations that aren't needed, so if you don't need a certain element decompressed, it won't be (yay)! So all of that fits together nicely. But at the same time I think I've got a performance issue in the gbuffer-filling stage, as it's huge compared to everything else. Perhaps it's the compression of the gbuffer; not sure yet. But it's acceptable for now, although I think I can squeeze some cute little milliseconds out of it.

On a side note, I've also been trying to work on some basic voxel cone tracing, but it's far from done. And I seriously underestimated the performance issues, but it's pretty fun.

======

Now, due to family-related issues I had to take my brother to our beach house (nothing fancy), and there I allocated some time to work on my retarded calculator! It's a small application based on a very basic neural network. I didn't have time to work on my bias nodes or even my activation function; for now the output of a neuron is simply weight * data, although it actually produces acceptable results. The network is composed of 4 layers: 10 neurons, 7 neurons, 5 neurons, 1 neuron. Again, this was just for fun; I didn't even adapt the learning rate during the back propagation, it was just to fill out a bit of time.
The output from the application:

```
Starting trianing of neural network
Train iteration complete, error 0.327538
Train iteration complete, error 0.294999
Train iteration complete, error 0.266
Train iteration complete, error 0.240112
Train iteration complete, error 0.216965
Train iteration complete, error 0.196237
Train iteration complete, error 0.177651
Train iteration complete, error 0.160962
Train iteration complete, error 0.145959
Train iteration complete, error 0.132454
Train iteration complete, error 0.120285
......... a few milliseconds later
Training completed, error falls within treshold of 1e-06!
===============================
Final testing stage
Feeding forward the neural network
Final averaged testing error: 0.0178298
===============================
Please enter a command...
>> f var(a0)
Input: #0 -> 2 #1 -> 4 #2 -> 3 #3 -> 1 #4 -> 4 #5 -> 5 #6 -> 2 #7 -> 3 #8 -> 4 #9 -> 1
Feeding forward the neural network
Layer Dump: #0 = 29.346
>> e var(a0) algo({sum(I)})
Evaluating error: (a0)
Error: 0.345961
```

======

So, overall, I'm pretty happy with it all. But I haven't been able to allocate enough time (you know, life and stuff, school, or whatever everybody suddenly expects of you). If anybody is reading this, can you comment on the colors of the images? Do you find them natural or cartoony? I find them a bit cartoony. Well, thanks for even reaching the bottom!

-MIGI0027
  6. yes... Yes! You've reached my profile! But what you didn't know is that it doesn't tell you anything about me!
  7. Migi0027

    Less bullshit... More code!

    The word "bullshit" is probably an exaggeration, although I honestly dislike them like everyone else does (at least to my understanding). What I'm talking about is the glorious exams; not university exams, just regular high-school exams. But that part is over now, and I'm more or less satisfied with the result.

Now, since that part is over, I've got lots of time to code and stuff... Just since the beginning of the exams until now, I've been rewriting my engine, and I've got pretty much everything implemented. Performance was the main goal: the previous version of my engine ran at ~100 fps at half the screen resolution; the newly written engine runs at ~120 fps at full HD resolution (i.e. 1920x1080). This is a big thing because I've got a lot of post-processing effects that really torture the GPU bandwidth, such as volumetric scattering, so the resolution seriously affects the frame time. And together with that, the architecture of the new system is seriously better, along with the new material system that I wrote about in my last entry (it has been upgraded a bit since then). There are still lots of small optimizations to do, and still lots of unfinished "new" features.

But one of the main things I changed is the way my voxelization system works. Every time a new mesh/object/whatever-people-call-it is added to the scene, the mesh is voxelized without any transformation applied, into a "cache" buffer, and this cache buffer is added to a list of sorts. Then there's the main cache buffer that represents the final voxel structure around the camera. Each frame (through a compute shader), all voxel caches are iterated through; each cell of each cache is first transformed by the current transformation matrix of the mesh (as each cache represents a mesh without any transformations) and then fitted inside the main voxel cache (with some magic stuff that speeds this up).
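A rough CPU-side sketch of that re-fitting step (the engine does this in a compute shader; the names, the translation-only transform, and the bounds check are assumptions made for illustration):

```cpp
#include <cmath>

struct Cell { float x, y, z; };

// Map a local-space cache cell into a main-grid index, given the mesh's
// current world translation and the main grid's cell size. Returns false
// if the cell falls outside the camera-centred main grid. A real engine
// would apply the full transformation matrix, not just a translation.
bool fitIntoMainGrid(const Cell& local, const Cell& meshTranslation,
                     float cellSize, int gridSize, int out[3])
{
    float wx = local.x + meshTranslation.x;   // apply (translation-only) transform
    float wy = local.y + meshTranslation.y;
    float wz = local.z + meshTranslation.z;
    out[0] = int(std::floor(wx / cellSize));
    out[1] = int(std::floor(wy / cellSize));
    out[2] = int(std::floor(wz / cellSize));
    // Reject cells that fall outside the main grid.
    for (int i = 0; i < 3; ++i)
        if (out[i] < 0 || out[i] >= gridSize) return false;
    return true;
}
```

Because the cache is stored untransformed, moving or rotating the mesh only changes the transform fed to this step, never the cached voxelization itself.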
The awesome thing about this is that every time the camera or the mesh is moved, scaled, or even rotated, there's no need to revoxelize the mesh at all (less frame time, yay!). Although I chose to disable screen-space reflections, as IMHO there were too many artifacts that were too noticeable. So in the meantime I have a secret next-gen way to perform pixel-perfect indirect specular reflections (I WISH).

Currently, all effects combined minus the SSR, showing off volume shadows. Nothing fancy. Oversaturated example of the diffuse GI. Dynamic filling of the voxel cache, oriented around the player.

So while playing around, I found this "mega nature pack" in the Unreal Engine 4 marketplace. So I purchased the package and started messing around; just programmer art. Now, all this shows is that I have some serious work to do on my shading model, and I need to invest some time into some cheap subsurface scattering... Btw, in the image below the normals are messed up, so the lighting appears weird in some places. And the volumetric scattering is disabled, since it also desaturates the image a bit (for valid reasons).

So I tried messing around with the normals and used SV_IsFrontFace to determine the direction of the normal on the leaves, and got something like this: (volumetric scattering disabled) (btw, quality is lost due to gifs! I love dem gifs) (ignore the red monkey).

The following is the shader used for the tree, which is written by the user (heavily commented):

```hlsl
Shader
{
    // Include the CG Data Layouts
    #include "cg_layout.hlsl"

    // Define the return types of the shader.
    // This stage is really important to allow the parser to create the
    // geometry shader for voxelization -- and also, if the user has created
    // his own geometry shader, so that it can figure out a way to voxelize
    // the mesh properly. In this way the user can use ALL stages of the
    // pipeline (VS, HS, DS, GS, PS) without voxelization being impossible.
    // The only problem is, well, he has to write the stuff below
    // (even more if he uses more stages):
    #set CG_RVSHADER Vertex // Set the return type of the vertex shader
    #set vert CG_VSHADER    // [Opt] Name of the vertex shader instead of writing CG_VSHADER
    #set pix CG_PSHADER     // [Opt] Name of the pixel shader instead of writing CG_PSHADER

    // This is his stuff -- he can do whatever he wants!
    Texture2D T_Diffuse : register(t0);
    Texture2D T_Normal  : register(t1);

    // Basic VS -> PS structure.
    // This structure inherits the "base" vertex: stuff that the engine can crunch on.
    struct Vertex : CG_VERTEXBASE
    {
        // Empty
    };

    // Now include some routines that are needed at the end of all stages
    #include "cg_material.hlsl"

    // Vertex shader
    Vertex vert(CG_ILAYOUT IN)
    {
        // Zero-set vertex
        Vertex o = (Vertex)0;

        // Just let the engine process it.
        // We could do it ourselves, but there's no need.
        CG_VSPROCESS(o, IN);

        // Return "encoded" version
        CG_VSRETURN(o);
    }

    // Pixel shader.
    // In this case the return type is FORCED, as it's a deferred setup.
    CG_GBUFFER pix(Vertex v, bool IsFrontFace : SV_IsFrontFace)
    {
        // Basic structure containing info about the surface
        Surface surf;

        // Sample color
        float4 diff = CG_TEX(T_Diffuse, v.CG_TEXCOORD);

        // Simple alpha test for vegetation.
        // We want it to clip at .a = 0.5, so add a small offset.
        clip(diff.a - 0.5001);

        // Fill out the surface information
        surf.diffuse = diff;
        surf.normal  = CG_NORMALMAP( // Do some simple normal mapping
            T_Normal,
            v.CG_TEXCOORD,
            v.CG_NORMAL * ((IsFrontFace) ? 1 : -1), // Flip the normal on the backside for leaves
            v.CG_TANGENT,
            v.CG_BINORMAL
        );
        surf.subsurface  = 1;    // I've got a simple version of some sss, but it's not very good yet
        surf.thickness   = 0.1;  // For the sss
        surf.specular    = 0.35;
        surf.anisotropic = 0.2;
        surf.clearcoat   = 0;
        surf.metallic    = 0;
        surf.roughness   = 0.65;
        surf.emission    = 0;

        // Return "encoded" version,
        // aka compress the data into the gbuffer!
        CG_PSRETURN(v, surf);
    }
};
```
So in the progress of all of this, I'm trying to fit in some SMAA and color correction. And the moment I looked into color correction using a LUT, I facepalmed, because how the hell did I not think of that!? (Not in a negative way; it's just so simple and elegant and pure awesome!) So, messing around with that, and spending 5 hours on a loading problem which turned out to be too simple, it returns some kewl results (just messing around).

So that's more or less it. I'll keep improving my engine, working on stuff and more stuff. I think I'll leave the diffuse GI system where it is for a while, since it works pretty well and produces pretty results; now I need to work on some specular GI, since I really don't have a robust system for that yet that doesn't produce ugly artifacts.

See you next time, people of the awesome GDNet grounds!

-Migi0027
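The LUT-based colour correction mentioned in the entry above boils down to a table lookup: the grade is baked into a table, and correcting a pixel is just sampling it. A minimal 1D per-channel sketch; real pipelines sample a 3D LUT texture in the pixel shader, so treat this only as an illustration of the principle:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Look up a value in [0,1] in a 1D colour-correction table, with linear
// interpolation between the two nearest entries. An identity LUT (evenly
// spaced 0..1 values) leaves the input unchanged; any grade is just a
// different table.
float lut1D(const std::vector<float>& lut, float value01)
{
    float f = value01 * (lut.size() - 1);          // scale into table space
    int   i = int(std::floor(f));
    int   j = std::min(i + 1, int(lut.size()) - 1);
    float t = f - i;
    return lut[i] * (1.0f - t) + lut[j] * t;       // linear interpolation
}
```

The elegance the entry is pointing at: however complicated the grade (curves, tints, contrast), applying it at runtime costs exactly one lookup per pixel.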
  8. You've reached the end of the internet, fellow surfer? What now, back to life or what?