Vilem Otte

GDNet+ Basic
  • Content count

    787
  • Joined

  • Last visited

  • Days Won

    2

Vilem Otte last won the day on March 23

Vilem Otte had the most liked content!

Community Reputation

3025 Excellent

4 Followers

About Vilem Otte

  • Rank
    Crossbones+

Personal Information

Social

  • Twitter
    VilemOtte
  • Github
    Zgragselus
  • Steam
    Zgragselus

Recent Profile Visitors

24844 profile views
  1. Vilem Otte

    Back-projection soft shadows

I've actually been experimenting with various shadowing techniques every evening over the past week - apart from the ones I've already implemented. So far PCSS and PCMLSM (a PCSS variant where I use mipmaps to get smoother shadows via trilinear filtering) look really good for small lights, but once you increase the light size they collapse (mainly because the penumbra search becomes wrong). Additionally, PCSS suffers from noise, see the following:

[Image: PCSS - small area light]
[Image: PCSS - large area light]

In PCMLSM I got rid of the PCSS noise, and it looks really good for small-size area lights:

[Image: PCMLSM - small area light]
[Image: PCMLSM - large area light]

I intentionally didn't do any attenuation at all. For comparison, an ISM-like approach with lots of small lights instead:

[Image: ISM-like approach]

This looks by far superior to the previous ones, yet the fillrate cost is huge (notice - you can see the actual shadow maps in the virtual shadow map to the right). This is quite close to the result I'd like to see (for large area lights).

Cone-traced shadows are interesting but most likely a no-go, simply because of the resolution of the voxelized scene - I just quickly played with the cone size for the ambient occlusion and the cone direction (pointing it towards the light), and you can clearly see that while it does generate shadows, they're not well-defined and suffer from light leaking (due to, e.g., empty interior voxels of the sphere). Using SDFs instead could be good, but they would still suffer at lower resolutions, not to mention problems with animated models. I'd personally like to try it - yet it might be quite challenging.

Back-projection seems quite promising for such scenarios (yet the only working example, which is from NVIDIA, doesn't really handle this scenario well unless you invest a large number of steps in it - which again ends up being slow) - which might actually render it useless in the end.

I'm now integrating the prototype into the actual editor so I can share some nice screenshots and finish up the article (which is still in progress), yet I still have some problems with different light types (point vs. spot), and generally with the projection itself, which seems incorrect to me.
  2. Vilem Otte

    Back-projection soft shadows

I still don't think my back-projection shadow code is correct (it sort of works, but needs improvement before it's article-worthy... which is what is currently slowing down the shadows article I'm writing). That's why I'm interested in a real implementation - there are just a few papers, and they don't really go into detail. Soft shadows from small area lights are solved: I calculate the penumbra size and then use mipmaps to produce smoothly blurred shadows (unlike PCSS, which suffers from noise). The problem is large area lights (hence back-projection); I could even use a voxel representation of the scene and do cone tracing. Currently, for each light I let the user pick from various shadow techniques (basic shadow map, PCF, PCSS, mipmap-based penumbra shadows, etc.). Adding back-projection as an option for large-scale area lights would be a huge advantage for me - the problem seems to be performance (at least my current implementation is a no-go).
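For the curious, here is a minimal sketch of the penumbra-to-mipmap idea mentioned above - not my production code, all names and constants are illustrative. It assumes a hypothetical shadow-factor texture with a generated mip chain, and lets a fractional LOD plus a trilinear sampler blend between the two nearest mips:

float PenumbraMipShadow(Texture2D shadowFactorMips, SamplerState trilinearSampler,
                        float2 uv, float receiverDepth, float avgBlockerDepth,
                        float lightSize, float baseResolution)
{
    // PCSS-style penumbra estimate: grows with light size and with the
    // gap between receiver and average blocker depth.
    float penumbra = lightSize * (receiverDepth - avgBlockerDepth) / max(avgBlockerDepth, 1e-4f);

    // Width of the penumbra expressed in texels of the mip-0 texture.
    float penumbraTexels = penumbra * baseResolution;

    // Each mip level halves resolution, so log2 maps texel width to LOD.
    float lod = log2(max(penumbraTexels, 1.0f));

    // Fractional LOD with a trilinear sampler blends two mips smoothly,
    // avoiding the noise of a stochastic PCSS kernel.
    return shadowFactorMips.SampleLevel(trilinearSampler, uv, lod).x;
}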
3. Originally I wasn't even sure whether to post the question here - but why not. I do know the technique and how it works (which I'd like to elaborate on in a blog post - and use as a reference for comparison), but so far I haven't seen any working example of this technique, nor have I seen it used in a game (I know there is one NVIDIA demo actually showing it - or a variant of it). So I'm quite curious - has anybody here ever seen a real example of back-projection soft shadows?
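For readers unfamiliar with the technique: the core idea (as described in the literature) is to treat each shadow-map texel as a small occluder and project it back from the receiver point onto the area light's plane, accumulating the occluded fraction of the light. A heavily simplified sketch of that accumulation follows - it deliberately ignores texel-overlap handling, which is one of the technique's real difficulties, and every name in it is illustrative:

float BackProjectedVisibility(Texture2D<float> shadowMap, SamplerState pointSampler,
                              float2 uv, float receiverDepth,
                              float lightSize, float texelSize, int kernelRadius)
{
    float occludedArea = 0.0f;

    for (int y = -kernelRadius; y <= kernelRadius; y++)
    {
        for (int x = -kernelRadius; x <= kernelRadius; x++)
        {
            float2 tuv = uv + float2(x, y) * texelSize;
            float z = shadowMap.SampleLevel(pointSampler, tuv, 0.0f);

            if (z < receiverDepth) // texel is a potential occluder
            {
                // Back-project the texel footprint onto the light plane:
                // similar triangles scale it by the depth ratio.
                float scale = (receiverDepth - z) / max(z, 1e-4f);
                float projected = texelSize * scale; // projected edge length
                occludedArea += projected * projected;
            }
        }
    }

    // Normalize by the light's area. Overlapping texel contributions are
    // NOT handled here, which over-darkens - the papers spend most of
    // their effort on exactly that problem.
    return saturate(1.0f - occludedArea / (lightSize * lightSize));
}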
  5. Vilem Otte

    Effect: Black hole background

For one of the upcoming projects (which will follow in some of the following posts), and as it had to fit the lore of the game, a black hole was necessary. Before going forward, let me add an image of the result I want to achieve:

[Image: Artist's conception of a black hole - NASA/JPL-Caltech]

While the image is similar to the final effect I wanted to achieve, I made some changes to make the effect more colorful and bright - but the original idea came from this image. The effect is separated into 3 major parts - Disk, Streaks and Post Processing. Of course there is also the core of the black hole (which is just a small, black sphere).

Disk

The disk around a black hole is just matter that rotates around the core. The average density increases near the event horizon and decreases further from it. Near the core the density can be so high that the matter may reach temperatures close to a star's - therefore there can be high emissive energy, and therefore light. Also, due to time dilation (light having a hard time escaping near the event horizon), the emissivity drops again very close to the event horizon. Anything beyond the event horizon is invisible from the outside, because gravity there is so strong that not even photons can escape. At least that is my understanding of the topic from a physics point of view. This explains what can be seen in the image and what is going on graphically, that is:

The disk rotates around the core
The density of the disk decreases further from the core
The emissive light decreases further from the core, and therefore some (outer) parts of the disk will be lit by the inner part... although the innermost part around the core has to be somewhat darker

Which can be solved with simple texturing and some basic lighting of the result. Using a whirl-like texture as a basis proved to be a good start for me. I started off by creating a whirl-like texture that defines density in various parts of the disk, which resulted in this:

[Image: density texture]

Generating a normal map for lighting from this is quite straightforward (and easy in Substance Designer, for example) - and after a short time, I also had a normal map:

[Image: normal map]

Putting just these together with basic diffuse lighting (standard N.L) from the center (slightly above the plane) gives some basic results:

[Image: basic lighting]

The next step is defining emissivity. This is done simply with a 1D gradient texture whose coordinate is the distance from the center. The gradient I came up with is:

[Image: 1D gradient]

Notice the left part, which is near the event horizon - it gives a similar feeling to the reference image, as we're not jumping straight to a bright value. Applying the emissive value (both as a multiplier for the color and as emission) gives this look:

[Image: emissive disk]

Which looks good enough already - I played a bit with the values (mainly contrast and other multiplication factors, e.g. for the alpha channel/transparency), and ended up with this result:

[Image: final disk]

The resulting pixel shader is as simple as:

fixed4 frag (v2f i) : SV_Target
{
    // Calculate texture coordinate for gradient
    float2 centric = i.uv * 2.0f - 1.0f;
    float dist = min(sqrt(centric.x * centric.x + centric.y * centric.y), 1.0f);

    // Lookup gradient
    float3 gradient = tex2D(_GradientTex, float2(dist, 0.0f)).xyz;

    // Light direction (hack - simulates light approx. in the middle, slightly pushed up)
    float3 lightDir = normalize(float3(centric.x, -centric.y, -0.5f));

    // Use normals from normal map
    float3 normals = normalize(tex2D(_NormalsTex, i.uv).xyz * 2.0f - 1.0f);

    // Simple N.L is enough for lighting
    float bump = max(dot(-lightDir, normals), 0.0f);

    // Alpha texture
    float alpha = tex2D(_AlphaTex, i.uv).x;

    // Mix colors (note: contrast increase required for both - lighting and alpha)
    return fixed4((gradient * bump * bump * bump + gradient) * 0.75f, min(alpha * alpha * 6.0f, 1.0f));
}

Streaks

There are 2 streaks, pointing upwards and downwards from the core. My intention was to make them bright compared to the core, and blue-ish - to keep the background more colorful in the end. Each streak is composed of 2 objects: a very bright white sphere (which takes advantage of the post-processing effects to feel bright), and geometry for the streak itself (instead of using particles). The geometry is quite simple - it looks a bit like a rotated and cut hyperbola; notice the UV map on the left (it is important for understanding the next part):

[Image: streak geometry and UV map]

This geometry is there 4 times for each direction of the streak, rotated around the origin by 90, 180 and 270 degrees. The idea for the streaks was simple - take simple cut-surface geometry and roll a texture over it. Multiplying by the desired color and by distance from the beginning of the streak adds a color effect that nicely fades into the background. To create a particle-like texture that varies in intensity I used Substance Designer again and came up with:

[Image: streak alpha texture]

By simply applying this texture as alpha and scrolling the X texture coordinate, the streak is animated, like:

[Image: animated streak]

Multiplying by the wanted color gives:

[Image: colored streak]

And multiplying by a factor based on distance from the origin of the streak results in:

[Image: faded streak]

Which is quite acceptable for me. For the sake of completeness, here is the full pixel shader:

fixed4 frag (v2f i) : SV_Target
{
    // Texture coordinates, offset based on external value (animates streaks)
    float2 uv = i.uv.xy + float2(_Offset, 0.0f);

    // Alpha texture for streaks
    fixed alpha = tex2D(_AlphaTex, uv);

    // Distance-from-origin factor (calculated from texture coordinates of streaks)
    float factor = pow(1.0f - i.uv.x, 4.0f);

    // Multiplication factor (to 'overbright' the effect - so that it 'blooms properly' when applying post-process)
    float exposure = 6.0f;

    // Apply resulting color
    return fixed4(exposure * 51.0 / 255.0, exposure * 110.0 / 255.0, exposure * 150.0 / 255.0, alpha * factor);
}

Putting the effects together ends up in:

[Image: combined effect]

Post Processing

By using a simple bloom effect we can achieve the final effect shown in the video - it improves this kind of effect a lot. I've also added a lens-dirt texture to the bloom. We need to be careful with the actual core, as it needs to stay black (I intentionally let it stay black even through the bloom). You can do this either by using a floating-point render target before the bloom and writing some low value instead of black (careful with tone mapping though - you might even want to go for negative numbers), or by rendering the core after the bloom pass. The resulting effect looks like:

[Image: final result]

And as promised - a video showing the effect:

[Video: final effect]
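One part the post mentions but doesn't show is the disk's rotation around the core. A minimal sketch of how that could be driven - rotating the UVs around the disk center before sampling - follows; _RotationSpeed is a hypothetical material property, and _Time follows the usual Unity shader convention. Illustrative only, not the actual shader from the effect:

float2 RotateDiskUV(float2 uv, float time, float speed)
{
    // Move UVs so the disk center is the origin
    float2 centric = uv * 2.0f - 1.0f;

    // 2D rotation by an angle that grows with time
    float angle = time * speed;
    float s = sin(angle);
    float c = cos(angle);
    float2 rotated = float2(centric.x * c - centric.y * s,
                            centric.x * s + centric.y * c);

    // Back to [0,1] texture space
    return rotated * 0.5f + 0.5f;
}

// Usage inside the disk fragment shader, before any texture lookups:
//   float2 uv = RotateDiskUV(i.uv, _Time.y, _RotationSpeed);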
6. As I have quite a bit more to share, I've decided to divide this post into 2 parts. The first is a brief explanation of physical lights, while the second focuses on plausible shadows (which are my first step towards area lighting).

Physical Lights

These are a nice-to-have feature. Instead of specifying intensity and color with arbitrary values, one specifies luminous power (in lumens), the temperature of the light source, and an arbitrary color value.

[Fig. 1 - From left to right: tungsten bulbs at 500 W and 1000 W; a simulated point light with the temperature of the sun and an intensity of 10k lumens; a simulated point light with the temperature of an overcast sky and an intensity of 20k lumens. Tone mapping was enabled.]

Using lumens to describe the intensity of point/spot lights, and temperature to describe their colors, allows simulating various lights based on their actual parameters. To allow lights with additional colors (red, green, etc.), an additional color parameter is introduced.

/// <summary>Convert temperature of black body into RGB color</summary>
/// <param name="temperature">Temperature of black body (in Kelvin)</param>
Engine::float4 TemperatureToColor(float temperature)
{
    float tmp = temperature / 100.0f;
    Engine::float4 result;

    if (tmp <= 66.0f)
    {
        result.x = 255.0f;

        result.y = tmp;
        result.y = 99.4708025861f * log(result.y) - 161.1195681661f;

        if (tmp <= 19.0f)
        {
            result.z = 0.0f;
        }
        else
        {
            result.z = tmp - 10.0f;
            result.z = 138.5177312231f * log(result.z) - 305.0447927307f;
        }
    }
    else
    {
        result.x = tmp - 60.0f;
        result.x = 329.698727446f * pow(result.x, -0.1332047592f);

        result.y = tmp - 60.0f;
        result.y = 288.1221695283f * pow(result.y, -0.0755148492f);

        result.z = 255.0f;
    }

    return result / 255.0f;
}

Fig. 2 - Snippet for calculating color from temperature.

I intentionally left out one thing - attenuation - which is very important to make all of this work properly. The main reason is that I'd like to talk about it when I finish my work on area lights.

Plausible Shadows

Speaking of area lights, the main challenge with area lights in realtime rendering is definitely shadows. To this day I haven't seen any game doing area-light shadows properly. Most either don't cast shadows at all, or use a cheap shadow map, possibly with standard NxN filtering (some precompute a light map, which often suffers from poor resolution - and doesn't allow shadow casting from dynamic objects). Before finishing my implementation of area lights I have to attempt to implement a solid shadowing technique for area lights that works properly with dynamic objects.

First of all, I had to switch from a shadow map per light to a solution with one huge texture into which ALL shadow-casting lights render their shadow maps. This is a necessary requirement for me to easily be able to cast shadows from all lights during, e.g., GI computation. On the other hand, this makes filtering a bit more tricky, especially for more specific filters. There are still some minor problems with my approach, including:

Proper clamping
Removing seams for point-light shadows

Those haven't stopped me from trying to implement nice-looking PCSS (Percentage-Closer Soft Shadows) and comparing it against standard PCF (Percentage-Closer Filtering) in terms of quality.

[Fig. 3 - Left PCSS, right PCF. While PCSS does indeed look like shadows from an area light source, it is far from perfect, mainly due to noise and the still somewhat limited light size.]

For convenience I'm adding the source code for my PCSS. I've taken some of the values from Unity's implementation of PCSS, which seems to have picked them quite well. Reference: https://github.com/TheMasonX/UnityPCSS

inline float PCSS_Noise(float3 location)
{
    float3 skew = location + 0.2127f + location.x * location.y * location.z * 0.3713f;
    float3 rnd = 4.789f * sin(489.123f * (skew));
    return frac(rnd.x * rnd.y * rnd.z * (1.0 + skew.x));
}

inline float2 PCSS_Rotate(float2 pos, float2 rotation)
{
    return float2(pos.x * rotation.x - pos.y * rotation.y, pos.y * rotation.x + pos.x * rotation.y);
}

inline float2 PCSS_BlockerDistance(Texture2D<float2> tex, SamplerState state, float3 projCoord, float searchUV, float2 rotation)
{
    int blockers = 0;
    float avgBlockerDistance = 0.0f;

    for (int i = 0; i < (int)PCSS_SampleCount; i++)
    {
        float2 offset = PCSS_Samples[i] * searchUV;
        offset = PCSS_Rotate(offset, rotation);

        float z = tex.SampleLevel(state, projCoord.xy + offset, 0.0f).x;
        if (z < projCoord.z)
        {
            blockers++;
            avgBlockerDistance += z;
        }
    }

    avgBlockerDistance /= blockers;

    return float2(avgBlockerDistance, (float)blockers);
}

inline float PCSS_PCFFilter(Texture2D<float2> tex, SamplerState state, float3 projCoord, float filterRadiusUV, float penumbra, float2 rotation, float2 grad)
{
    float sum = 0.0f;

    for (int i = 0; i < (int)PCSS_SampleCount; i++)
    {
        float2 offset = PCSS_Samples[i] * filterRadiusUV;
        offset = PCSS_Rotate(offset, rotation);
        sum += tex.SampleLevel(state, projCoord.xy + offset, 0.0f).x < projCoord.z ? 0.0f : 1.0f;
    }

    sum /= (float)PCSS_SampleCount;
    return sum;
}

inline float ShadowMapPCSS(Texture2D<float2> tex, SamplerState state, float3 projCoord, float resolution, float pixelSize, float lightSize)
{
    float2 uv = projCoord.xy;
    float depth = projCoord.z;
    float zAwareDepth = depth;

    float rotationAngle = Random(projCoord.xy) * 3.1415926;
    float2 rotation = float2(cos(rotationAngle), sin(rotationAngle));

    float searchSize = lightSize * saturate(zAwareDepth - .02) / zAwareDepth;
    float2 blockerInfo = PCSS_BlockerDistance(tex, state, projCoord, searchSize, rotation);

    if (blockerInfo.y < 1.0)
    {
        return 1.0f;
    }
    else
    {
        float penumbra = max(zAwareDepth - blockerInfo.x, 0.0);
        float filterRadiusUV = penumbra * lightSize;
        float2 grad = frac(projCoord.xy * resolution + 0.5f);
        float shadow = PCSS_PCFFilter(tex, state, projCoord, filterRadiusUV, penumbra, rotation, grad);
        return shadow;
    }
}

Fig. 4 - PCSS source code.

To get smoother and less noisy shadows I've tried to think of a way of using a mip-mapped texture atlas. While it has additional problems (possibly solvable), it is possible to achieve very smooth, nice-looking, noise-free penumbra shadows.

[Fig. 5 - Soft penumbra shadows from my attempt.]

This is probably all for today from me. If possible I'd like to dig a bit more into shadows and area lights next time, but who knows - I might get attracted by something completely different. Thanks for reading!
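As a follow-up note on the physical-lights half of this post: the conversion it alludes to - from luminous power in lumens to the intensity used in the shading equation - is standard photometry (intensity in candela is luminous power divided by the solid angle the light emits into). A minimal sketch follows; function names are illustrative, not from the engine above:

// Sketch: convert luminous power (lumens) to luminous intensity (candela)
// for use in a punctual-light shading equation.
static const float PI = 3.14159265f;

float PointLightIntensity(float luminousPowerLm)
{
    // A point light emits over the full sphere: 4*pi steradians.
    return luminousPowerLm / (4.0f * PI);
}

float SpotLightIntensity(float luminousPowerLm, float cosOuterHalfAngle)
{
    // A spot light emits into a cone; its solid angle is 2*pi*(1 - cos(theta)).
    return luminousPowerLm / (2.0f * PI * (1.0f - cosOuterHalfAngle));
}

// The shaded contribution is then intensity / distance^2 (attenuation),
// tinted by the blackbody color from TemperatureToColor().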
  7. Vilem Otte

    Structure Changes

So I've decided to be a bit more productive on the writing side here on GameDev, and to change the form of my contributions to something more consistent and possibly less boring. Why am I doing that? In short: to keep my motivation. In long: unless I properly motivate myself and write down what I want to do and when it has to be finished, I tend to get distracted by literally everything. I've found that writing everything down suits me quite well in my work - so I'm attempting to apply a similar pattern to my hobby projects. For a start, I'd like to aim to post 3 times a week on average, focused on 3 different things:

Game Engine/Tools Development - This one will mainly show some progress or a specific feature and its debugging. For me, this is one of the biggest tools I've made completely alone.

Game Development - Basically something like a weekly report from the development of whichever game I'm working on at the time. I'm going to release some of them, and possibly talk about stats if anything gets played by at least someone.

Crazy Stuff - Each week I'll try some short concept, or calculate something. These might even be non-gamedev related, yet I consider them worth sharing somewhere (incl. code snippets)... or even just math, who knows.

As this was a very boring post without any image or photo, let me add at least a photo of my new family member, who constantly attempts to attack my hands on the keyboard:

[Photo: new family member]
8. Just FYI: such a thing can't exist. Let us have 3 invertible matrices A, B and C. For all of these the following applies:

All matrices (A, B and C) are row-equivalent and column-equivalent to each other - i.e. all are NxN
All matrices have a non-zero determinant
All matrices have full rank
All columns of each matrix are linearly independent and form a basis of N-dimensional space
There is no eigenvalue equal to 0

Now, the following applies:

det(A * B * C) = det(A) * det(B) * det(C) != 0

And therefore the following also applies:

(A * B * C)^-1 = C^-1 * B^-1 * A^-1

Therefore, to answer your note - if the individual transforms are invertible, their composition always has to be invertible.
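For completeness, the same two identities in compact form, together with the one-line verification of the inverse (standard linear algebra):

$$\det(ABC) = \det(A)\,\det(B)\,\det(C) \neq 0$$

$$(ABC)\,(C^{-1}B^{-1}A^{-1}) = AB\,(CC^{-1})\,B^{-1}A^{-1} = A\,(BB^{-1})\,A^{-1} = AA^{-1} = I$$

so $(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$ exists whenever $A$, $B$ and $C$ are individually invertible.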
9. About 5 lights (4 point lights, 1 spotlight) lighting the scene, all with dynamic shadows (PCF & PCSS filtered), with deferred shading (and GI & realtime reflections):

[Image: editor screenshot]

Now, I do know there is a lot of room to optimize this further - but the important part is at the bottom right -> the atlas for shadow maps. Every single shadow map used is stored there - in this case 25 textures in total, all dynamically updated each frame. Resolution is specified on a per-light basis (although I do have some logic to determine resolution - based on the number of lights (& required shadow maps) in the scene, and an importance value based on distance and light type... it's not used in this example). So using a shadow map atlas is definitely an option that gives you good enough results.

So how do I know whether a light has a shadow map bound (or 6 of them)? Simply by storing 1 integer (or 6 for point lights) pointing at the shadow ID used. Each shadow atlas record is then just a matrix, offset and size, like:

struct __declspec(align(16)) ShadowAtlasRecord
{
    Engine::mat4 mMatrix;
    float mOffset[2];
    float mSize[2];
};

And I have an array of these records passed into the shader.
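To illustrate how such a record might be consumed on the GPU side, here is a minimal sketch assuming the records are uploaded as a structured buffer mirroring the C++ struct, with shadow-space UVs remapped into the light's sub-rectangle of the atlas. Names and the exact remapping are illustrative, not the actual shader:

struct ShadowAtlasRecord
{
    float4x4 mMatrix;  // world -> shadow projection for this light/face
    float2 mOffset;    // sub-rectangle origin in atlas UV space
    float2 mSize;      // sub-rectangle size in atlas UV space
};

StructuredBuffer<ShadowAtlasRecord> ShadowRecords;
Texture2D<float> ShadowAtlas;
SamplerState PointClampSampler;

float SampleShadowAtlas(int shadowID, float3 worldPos)
{
    ShadowAtlasRecord rec = ShadowRecords[shadowID];

    // Project the world position into the light's clip space
    float4 proj = mul(rec.mMatrix, float4(worldPos, 1.0f));
    proj.xyz /= proj.w;

    // Clip space [-1,1] -> UV [0,1], then remap into the atlas sub-rectangle
    float2 uv = proj.xy * float2(0.5f, -0.5f) + 0.5f;
    uv = rec.mOffset + uv * rec.mSize;

    // Basic depth comparison (filtering and bias omitted for brevity)
    float stored = ShadowAtlas.SampleLevel(PointClampSampler, uv, 0.0f);
    return (proj.z <= stored) ? 1.0f : 0.0f;
}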
10. Hello - before going forward, and as you're going to create a tool, I'd recommend you decide on the following:

What is your target architecture? Is the packing tool going to be used on standard PCs (x64 + AVX or SSE?)? Is it going to be GPU-only (compute shaders or CUDA?)? Are you targeting the mobile market?

What performance do you expect? Do you require atlas creation per-frame? Or is it going to be pre-processed?

Additional features? Generating mipmaps? Do you want to store them in the atlas? Have the whole atlas mip-mapped? Different algorithms should be used for stored normal maps, alpha maps and diffuse maps!

Licensing. Every 3rd-party tool you bring in introduces some problems - apart from often adding some unpleasant syntax (which can be encapsulated nicely), it is licensing that hurts the most.

Why am I asking these questions? Simply because the actual purpose of the tool should be what decides the required performance and feature set. If you require building a texture atlas within a single frame, then you are most likely not going to do any S3TC compression directly (there might be a way though - in case everything you want to store is already S3TC compressed). There is also the possibility of doing S3TC compression on the GPU, which might be fast enough for this purpose.

So... if you require top performance, you will have to write quite a lot of code yourself and optimize a lot. If you require just pre-processing, using SOIL, DevIL or even OpenCV might be good. Although be sure to read their licenses! And note that reading or writing images with these will most likely take at least hundreds of milliseconds or even seconds (which caused quite a lot of trouble for me).

As for compression algorithms - I believe https://github.com/castano/nvidia-texture-tools/wiki contains algorithms for S3TC compression, which could be a good start. Note that AMD also has https://github.com/GPUOpen-Tools/Compressonator - which can be called as a library to compress data blocks with S3TC or other block compressions. You could just call it from your application, therefore saving quite a lot of time over either:

Copy-pasting S3TC/ETC/... compression and decompression algorithms
Writing your own S3TC/ETC/...

If you decide on one of the above (copy-pasting or writing), I heavily suggest at least looking at how those algorithms work (you will most likely need that for Compressonator too, as, if I'm not mistaken, you need to compress per-block through the library, not the whole image at once).
  11. Vilem Otte

    Ludum Dare 41

It wouldn't be me if I weren't actively participating in Ludum Dare. This time around I went for a first-person shooter with a sort of mystery/physics twist attached to it. Before going further in the article - if you wish, you can play it. Take a look here - https://ldjam.com/events/ludum-dare/41/logicatory And if you're too lazy to play, you can watch here:

[Video: Logicatory]

Yup, that's the name. This time I went with Unity again, as I'm successfully avoiding using my own game engine. For asset creation I used Gimp, Blender, Substance Painter, Substance Designer... and that's it! This time around I went solo; two of my friends who were thinking about joining changed their plans at the last moment - one went just for the compo and the other had a major change of plans for the weekend. I was weighing teaming up with somebody more or less random on Ludum Dare, but in the end I decided to go solo - to check whether I can still do it as a one-man team. I went for the jam, as usual, because I really enjoy having that 1 additional day for polishing art and finishing up the game a bit more.

What went right? The game concept, prototype development and some part of the art. Strictly speaking, I had an idea for the game concept really early, upon waking up, so I could start working on it right away. The prototype was finished in a matter of a few hours, which was awesome - I could run around the map, shoot, bunny-hop - and this really boosted my morale. I believe I did well with the art; of course most of the people here who are actual artists can do a lot better - I just simply like making art, even though I'm terrible at it. I wouldn't dare include my art in a commercial game project, but for Ludum Dare it is perfect - programmer's art. Level design: even though there's just one simple level, I think I did my job well keeping the player busy most of the time (for those few minutes of gameplay). While it is short, I believe that's a lot better than a long and repetitive game, as this way everyone will finish your game - which is always a huge advantage on Ludum Dare.

What went wrong? Simply said: Monday. I had to work for most of that day, and it took out a large part of my time. When I finished, quite late in the evening, I was seriously thinking about giving up. At that moment I told myself a typical phrase: "GIT GUD"... and finished it. Cut the level down by half. Added last-minute assets. Added music at the last minute, built it and released it for Ludum Dare. Thinking about it, this actually should be part of what went right, because if I hadn't pushed myself, I would never have finished it. So what actually went wrong was light-map baking. I wanted to try light maps in Unity and attempted it; in the end they took four hours to compute at the highest quality. I still managed to finish before the submission hour.

Conclusion: I enjoyed Ludum Dare this time, a bit less than usual due to work on Monday, but such is life. I've learned something new and tried something I'd wanted to try for quite some time - so for me this was a success. I'm very curious to check out and play entries from other participants, and I'm looking forward to reading their opinions on Logicatory too! Next time, I'll be back with something more interesting!
  12. Vilem Otte

    Stuff

13. Images time! Yes - although the quality impact will be huge (I intentionally left a reflective sphere in view). The samples use 4x MSAA for deferred shading and 1x MSAA for cone tracing.

[Fig. 1 - 64^3 volume]
[Fig. 2 - 128^3 volume]
[Fig. 3 - 256^3 volume]
[Fig. 4 - 512^3 volume]

Note how much the Voxelization phase took. I intentionally re-voxelized the whole scene (no progressive static/dynamic optimization is turned on). The computation is done on a Radeon RX 480 (with a Ryzen 7 1700 and 16 GB of DDR4 @ 3 GHz, if I'm not mistaken).

The next image shows, for comparison, cone tracing with 8 random cones that have an angle close to 0 (i.e. they're technically rays):

[Fig. 5 - Cones with small angle]

This is pretty unacceptable for realtime rendering, and I assume most GPU path tracers could beat these times for similar-quality GI. As you noted, the time grows a lot. The only way to trace these rays efficiently is to use a sparse voxel octree (as an octree can be used as an acceleration structure for actual ray casting - yet the traversal even for rays is quite complex, and I haven't figured out any optimal way to perform cone tracing in an octree, aside from sampling and stepping based on the maximum reached octree level).

Here are some results, at the highest resolution (512^3), no MSAA:

[Fig. 6 - 1 cone]
[Fig. 7 - 5 cones]
[Fig. 8 - 9 cones]

Note, you're now interested in GlobalIllumination in the profiler, which shows how much time was spent in the actual cone tracing. The cone angles were adjusted to cover the hemisphere. Which brings me to...

The next comparison won't make much sense in terms of the lighting result - I intentionally used 9 cones (to keep the same count) but changed the angle, so I could demonstrate how the angle impacts performance. Again, look at GlobalIllumination in the profiler:

[Fig. 9 - 9 cones, angle factor 0.3]
[Fig. 10 - 9 cones, angle factor 0.6]
[Fig. 11 - 9 cones, angle factor 0.9]

Now here is something important - the angle factor for me is the ratio between the radius and height of the cone (i.e. the tangent of half the apex angle). You can clearly see that the higher the angle factor, the higher the performance (as fewer steps of the cone tracing loop are performed).

So, this will be a bit complex. I don't render at half resolution, I always render at full - but I do have MSAA support in my deferred pipeline, so let me show you an example:

[Fig. 12 - 1x MSAA cone tracing, 4x MSAA rendering]
[Fig. 13 - 4x MSAA cone tracing, 4x MSAA rendering]

You will probably need to take those images and zoom in on them (and subtract them in GIMP or another editor of your choice). There is some small difference on a few edges (e.g. where the object in the middle intersects the floor). There are multiple ways to handle this: a simple filter that looks at X neighboring pixels, finds the most compatible depth/normal, and selects the corresponding GI sample is a good way to go; bilateral upsampling; etc.

Notes:

I tried to simulate a real-world scenario (with brute-force recomputed voxels); it uses 1 point light and 1 spotlight, both with PCF-filtered shadow maps (in the first 4 samples I used PCSS for the spotlight). Shadows are computed dynamically (I have a virtual shadow maps implementation).
All the surfaces are PBR-based materials; the reflective surface uses the same material as all the others visible.
Some objects have alpha masks; transparency is handled using sample-alpha-to-coverage.
There is some post-processing (filmic tone mapping and bloom).
Rendering is done with a deferred renderer (in some cases with 4x MSAA enabled; the buffer resolve is done AFTER the tone mapping step, at the end of the pipeline).
The renderer is custom, Direct3D 12 based; the whole engine runs on Windows 10.
Hardware, as mentioned before: Ryzen 7 1700, 16 GB of DDR4 RAM and a Radeon RX 480 - if you want the full exact specs, I can provide them.
There is some additional overhead due to this being an editor and not a runtime (the ImGui!), which is why I intentionally showed the profiler - it measures the time the GPU spent on each specific part of the application.

I'm quite sure I forgot some details!

EDIT: And yes, the 1st thing I forgot is probably one of the most important. The actual reflection (specular contribution) is calculated in a completely separate pass, named Reflection. It always uses just a single additional cone per pixel (and yes, all objects calculate it!). This shows the reflection buffer:

[Image: reflection buffer]
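To make the step-count argument above concrete, here is a minimal sketch of a classic voxel cone-marching loop in the spirit of typical voxel cone tracing implementations (not the code from this engine; all names are illustrative). The cone diameter grows linearly with distance as diameter = 2 * angleFactor * t when angleFactor is the radius/height ratio, and the step size tracks the diameter - so a larger angle factor means fewer iterations:

float4 ConeTrace(Texture3D<float4> voxels, SamplerState trilinearSampler,
                 float3 origin, float3 dir, float angleFactor,
                 float voxelSize, float maxDistance)
{
    // Accumulated premultiplied color + opacity (assumes premultiplied-alpha voxels)
    float4 accum = float4(0.0f, 0.0f, 0.0f, 0.0f);
    float t = voxelSize; // start one voxel away to avoid self-sampling

    while (t < maxDistance && accum.w < 1.0f)
    {
        // Cone diameter grows linearly with distance along the axis
        float diameter = max(2.0f * angleFactor * t, voxelSize);

        // Pick the mip level whose footprint matches the cone diameter
        float lod = log2(diameter / voxelSize);

        float3 pos = origin + dir * t; // assumed already in [0,1] volume space
        float4 sample = voxels.SampleLevel(trilinearSampler, pos, lod);

        // Front-to-back compositing
        accum += (1.0f - accum.w) * sample;

        // Step proportionally to the diameter: wider cone -> fewer steps
        t += diameter * 0.5f;
    }

    return accum;
}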
14. I see - so Photoshop just replaces the alpha layer with 255. Are your normal maps normalized right after you replace the alpha with 255, or do they require normalization?
15. First, I apologize - especially to the moderators - for a post with large images (yet it's necessary to explain what is going on).

Hearing that you're using an image editor, I know where the devil is. I'm actually using GIMP, and there is one problem -> the alpha channel contains height. This is what the image looks like in GIMP (I'm using the Ceiling texture as an example):

[Image: normal map in GIMP]

Decomposing this into channels gives you these 4 images (Red, Green, Blue and Alpha):

[Image: decomposed channels]

You need to decompose to RGBA, and then compose just from RGB (i.e. set alpha to 255) to obtain a normal map. Look at 2 examples: in the first one I set alpha to 255 and re-composed the image; in the second one I just removed the alpha:

[Image: re-composed vs. alpha-removed]

I assume you recognize the second one. Now to explain what is going on - you need to look at how the software removes the alpha channel from the image. I will quote here directly from the GIMP source code (I had to dig there a bit). The callback to remove alpha is this:

void
layers_alpha_remove_cmd_callback (GtkAction *action,
                                  gpointer   data)
{
  GimpImage *image;
  GimpLayer *layer;
  return_if_no_layer (image, layer, data);

  if (gimp_drawable_has_alpha (GIMP_DRAWABLE (layer)))
    {
      gimp_layer_remove_alpha (layer, action_data_get_context (data));
      gimp_image_flush (image);
    }
}

So what you're interested in is the gimp_layer_remove_alpha procedure, which is:

void
gimp_layer_remove_alpha (GimpLayer   *layer,
                         GimpContext *context)
{
  GeglBuffer *new_buffer;
  GimpRGB     background;

  g_return_if_fail (GIMP_IS_LAYER (layer));
  g_return_if_fail (GIMP_IS_CONTEXT (context));

  if (! gimp_drawable_has_alpha (GIMP_DRAWABLE (layer)))
    return;

  new_buffer = gegl_buffer_new (GEGL_RECTANGLE (0, 0,
                                                gimp_item_get_width (GIMP_ITEM (layer)),
                                                gimp_item_get_height (GIMP_ITEM (layer))),
                                gimp_drawable_get_format_without_alpha (GIMP_DRAWABLE (layer)));

  gimp_context_get_background (context, &background);
  gimp_pickable_srgb_to_image_color (GIMP_PICKABLE (layer),
                                     &background, &background);

  gimp_gegl_apply_flatten (gimp_drawable_get_buffer (GIMP_DRAWABLE (layer)),
                           NULL, NULL,
                           new_buffer, &background,
                           gimp_layer_get_real_composite_space (layer));

  gimp_drawable_set_buffer (GIMP_DRAWABLE (layer),
                            gimp_item_is_attached (GIMP_ITEM (layer)),
                            C_("undo-type", "Remove Alpha Channel"),
                            new_buffer);
  g_object_unref (new_buffer);
}

No need to go any further into the code base. As you see, the image background is obtained in RGB format from RGBA via:

gimp_context_get_background
gimp_pickable_srgb_to_image_color

If you dig into these functions a bit (and you would need to also dig a bit into GEGL), you would find out that the operation actually done when removing alpha is:

R_out = R_in * A_in
G_out = G_in * A_in
B_out = B_in * A_in

Such an image is then set as the output instead of the previous image (the rest of the functions). Now, I can't tell for Photoshop (I've worked with GIMP quite a lot so far) - but I'd assume it does a similar, if not the same, transformation. So you're out of luck using it for the conversion. What you actually need as an operation is:

R = R_in; G = G_in; B = B_in;
l = sqrt(R * R + G * G + B * B);
R_out = R / l;
G_out = G / l;
B_out = B / l;

Something as simple as this. It can be done, e.g., as a Python plug-in for GIMP.