
Vilem Otte

GDNet+
  • Content count: 807
  • Days won: 3

Vilem Otte last won the day on August 23

Vilem Otte had the most liked content!

Community Reputation: 3054 Excellent

5 Followers

About Vilem Otte
  • Rank: Crossbones+

Personal Information

Social
  • Twitter: VilemOtte
  • Github: Zgragselus
  • Steam: Zgragselus

Recent Profile Visitors: 26929 profile views
  1. I really wanted to try it out, although it was not possible for me. The loading probably gets stuck (or it may just be too big to download over the network while I'm travelling). I've been trying in Chrome, so that may be another problem. While the project looks interesting, the site lacks any information - without being able to run the demo, one can't get any idea of how it looks or what it plays like (even though, as it is in Unity, I might be interested in making at least a bit of VFX, which is kind of a hobby of mine).
  2. I can point you to one working example in my code - https://github.com/Zgragselus/SoftwareRenderer - this one uses the Sutherland-Hodgman algorithm, yet it isn't done the same way the GPU does it. I actually did correct it at one point - but that was in source I can't share right now; it was for high-performance software rendering in the browser. I could dig it out and explain a bit, but it won't be earlier than in 2 weeks (as I'm still on holiday). What you want to do is perform homogeneous clipping in 4D space (to avoid the nasty problems with perspective division, which I didn't get around to in the code I'm linking here). Googling for 'homogeneous clipping' will give you the results you want to read. A few links that could help: http://medialab.di.unipi.it/web/IUM/Waterloo/node51.html - explains it in brief, and https://www.microsoft.com/en-us/research/publication/clipping-using-homogeneous-coordinates/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F73937%2Fp245-blinn.pdf - explains it in depth.
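To make the suggestion concrete, here is a minimal Python sketch of Sutherland-Hodgman clipping done in homogeneous (4D) clip space, i.e. before the perspective divide, so w <= 0 cases need no special handling. All names are my own; the plane encoding assumes a D3D-style 0 <= z <= w depth range.

```python
# Sketch of Sutherland-Hodgman clipping in homogeneous (4D) clip space.
# Vertices are (x, y, z, w) tuples after the projection matrix but BEFORE
# the perspective divide, so interpolation stays linear. A plane is a
# 4-tuple p with the inside condition dot(p, v) >= 0; e.g. (1, 0, 0, 1)
# encodes x >= -w.

def clip_polygon(verts, plane):
    """Clip a convex polygon against one clip-space half-space."""
    out = []
    n = len(verts)
    for i in range(n):
        a, b = verts[i], verts[(i + 1) % n]
        da = sum(p * c for p, c in zip(plane, a))
        db = sum(p * c for p, c in zip(plane, b))
        if da >= 0:
            out.append(a)
        if (da >= 0) != (db >= 0):          # edge crosses the plane
            t = da / (da - db)              # linear in homogeneous space
            out.append(tuple(ca + t * (cb - ca) for ca, cb in zip(a, b)))
    return out

# Six frustum planes in homogeneous form: -w <= x <= w, -w <= y <= w,
# 0 <= z <= w (D3D-style depth range).
PLANES = [(1, 0, 0, 1), (-1, 0, 0, 1),
          (0, 1, 0, 1), (0, -1, 0, 1),
          (0, 0, 1, 0), (0, 0, -1, 1)]

def clip_triangle(tri):
    poly = list(tri)
    for plane in PLANES:
        poly = clip_polygon(poly, plane)
        if not poly:
            break
    return poly
```

Since both the plane test and the interpolation stay in 4D, the clipped vertices can be perspective-divided afterwards without the division problems mentioned above.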
  3. Welcome to the first part of multiple effect articles about soft shadows. In recent days I've been working on area light support in my own game engine, which is critical for one of the game concepts I'd like to eventually do (if time allows me to do so). For each area light, it is crucial to have proper soft shadows with a proper penumbra. For motivation, let's have the following screenshot with 3 area lights of various sizes:

Fig. 01 - PCSS variant that allows for perfectly smooth, large-area light shadows

Let's start the article by comparing the following 2 screenshots - one with shadows and one without:

Fig. 02 - Scene from the default viewpoint, lit by a light without any shadows (left) and with shadows (right)

This is the scene we're going to work with, and for the sake of simplicity, all of the comparison screenshots will be from this exact same viewpoint with 2 different scene configurations.

Let's start with the definition of how shadows are created, given a scene and a light which we're viewing. Shadow umbra will be present at each position where there is no direct visibility between the given position and any existing point on the light. Shadow penumbra will be present at each position where there is visibility of some point on the light, yet not all of them. No shadow is everywhere there is full direct visibility between each point on the light and the position.
Most games tend to simplify: instead of defining a light as an area or volume, it gets defined as an infinitely small point. This gives us a few advantages: for a single point, it is possible to define visibility in a binary way - either in shadow or not in shadow; and from a single point, a projection of the scene can easily be constructed in such a way that the definition of shadow becomes trivial (either the position is occluded by other objects in the scene from the light's point of view, or it isn't). From here, one can follow into the idea of shadow mapping - which is the basic technique for all the others used here.

Standard Shadow Mapping

Trivial, yet it should be mentioned here.

inline float ShadowMap(Texture2D<float2> shadowMap, SamplerState shadowSamplerState, float3 coord)
{
    return shadowMap.SampleLevel(shadowSamplerState, coord.xy, 0.0f).x < coord.z ? 0.0f : 1.0f;
}

Fig. 03 - Code snippet for standard shadow mapping, where the depth map (storing 'distance' from the light's point of view) is compared against the calculated 'distance' between the point we're computing right now and the given light position. The word 'distance' may either mean actual distance, or more likely just the value on the z-axis in the light's view basis.

This is well known to everyone here, giving us the basic results we all know, like:

Fig. 04 - Standard Shadow Mapping

This can be simply explained with the following image:

Fig. 05 - Each rendered pixel calculates whether its 'depth' from the light's point (represented as the yellow dot) is greater than what is written in the 'depth' map from the light's point; white lines represent the computation for each pixel.

Percentage-Closer Filtering (PCF)

To make the shadow more visually appealing, adding a soft edge is a must. This is done by simply performing NxN tests with offsets. For the sake of improved visual quality I've used shadow mapping with a bilinear filter (which requires resolving 4 samples), along with 5x5 PCF filtering:

Fig. 06 - Percentage-closer filtering (PCF) results in nice soft-edged shadows; sadly, the shadow is uniformly soft everywhere

Clearly, neither of the above techniques does any penumbra/umbra calculation, and therefore they're not really useful for area lights. For the sake of completeness, I'm adding basic PCF source code (for the sake of optimization, feel free to improve it for your uses):

inline float ShadowMapPCF(Texture2D<float2> tex, SamplerState state, float3 projCoord, float resolution, float pixelSize, int filterSize)
{
    float shadow = 0.0f;
    float2 grad = frac(projCoord.xy * resolution + 0.5f);

    for (int i = -filterSize; i <= filterSize; i++)
    {
        for (int j = -filterSize; j <= filterSize; j++)
        {
            float4 tmp = tex.Gather(state, projCoord.xy + float2(i, j) * float2(pixelSize, pixelSize));
            tmp.x = tmp.x < projCoord.z ? 0.0f : 1.0f;
            tmp.y = tmp.y < projCoord.z ? 0.0f : 1.0f;
            tmp.z = tmp.z < projCoord.z ? 0.0f : 1.0f;
            tmp.w = tmp.w < projCoord.z ? 0.0f : 1.0f;
            shadow += lerp(lerp(tmp.w, tmp.z, grad.x), lerp(tmp.x, tmp.y, grad.x), grad.y);
        }
    }

    return shadow / (float)((2 * filterSize + 1) * (2 * filterSize + 1));
}

Fig. 07 - PCF filtering source code

Representing this with an image:

Fig. 08 - Image representing PCF; specifically, the pixel with the straight line and a star at its end also calculates shadow in neighboring pixels (i.e. performing additional samples). The resulting shadow is then a weighted sum of the results of all the samples for a given pixel.

While the idea is quite basic, it is clear that using larger kernels would end up in slow computation. There are ways to perform separable filtering of shadow maps using a different approach to resolve where the shadow is (Variance Shadow Mapping, for example). They do introduce additional problems though.

Percentage-Closer Soft Shadows

To understand the problem with both previous techniques, let's replace the point light with an area light in our sketch image.

Fig. 09 - Using an area light introduces penumbra and umbra.
The size of the penumbra depends on multiple factors - the distance between receiver and light, the distance between blocker and light, and the light size (shape). To calculate plausible shadows like in the schematic image, we need to calculate the distance between receiver and blocker, and the distance between receiver and light. PCSS is a 2-pass algorithm that calculates the average blocker distance as the first step - using this value to calculate the penumbra size, and then performing some kind of filtering (often PCF, or jittered PCF, for example). In short, the PCSS computation will look similar to this:

float ShadowMapPCSS(...)
{
    float2 blockers = PCSS_BlockerDistance(...);

    // If there isn't any blocker at all, the pixel is fully lit
    if (blockers.y < 1.0f)
    {
        return 1.0f;
    }
    else
    {
        float penumbraSize = EstimatePenumbraSize(blockers.x, ...);
        float shadow = ShadowPCF(..., penumbraSize);
        return shadow;
    }
}

Fig. 10 - Pseudo-code of PCSS shadow mapping

The first problem is determining a correct average blocker calculation - and as we want to limit the search size for the average blocker, we simply pass in an additional parameter that determines the search size. The actual average blocker is calculated by searching the shadow map for depth values smaller than the receiver's.
In my case I used the following estimation of blocker distance:

// Input parameters are:
// tex - input shadow depth map
// state - sampler state for the shadow depth map
// projCoord - holds projection UV coordinates, and depth for the receiver (~further compared against the shadow depth map)
// searchUV - input size for the blocker search
// rotationTrig - input parameter for random rotation of kernel samples
inline float2 PCSS_BlockerDistance(Texture2D<float2> tex, SamplerState state, float3 projCoord, float searchUV, float2 rotationTrig)
{
    // Perform N samples with pre-defined offsets and a random rotation, scaled by the input search size
    int blockers = 0;
    float avgBlocker = 0.0f;
    for (int i = 0; i < (int)PCSS_SampleCount; i++)
    {
        // Calculate sample offset (technically anything can be used here - a standard NxN kernel, random samples with scale, etc.)
        float2 offset = PCSS_Samples[i] * searchUV;
        offset = PCSS_Rotate(offset, rotationTrig);

        // Compare the given sample depth with the receiver depth; if it puts the receiver into shadow, this sample is a blocker
        float z = tex.SampleLevel(state, projCoord.xy + offset, 0.0f).x;
        if (z < projCoord.z)
        {
            blockers++;
            avgBlocker += z;
        }
    }

    // Calculate average blocker depth (guard against division by zero when there are no blockers)
    avgBlocker = blockers > 0 ? avgBlocker / blockers : 0.0f;

    // To solve cases where there are no blockers - we output 2 values - average blocker depth and no. of blockers
    return float2(avgBlocker, (float)blockers);
}

Fig. 11 - Average blocker estimation for PCSS shadow mapping

For the penumbra size calculation we first assume that blocker and receiver are planar and parallel. The actual penumbra size is then based on similar triangles, determined as:

penumbraSize = lightSize * (receiverDepth - averageBlockerDepth) / averageBlockerDepth

This size is then used as the input kernel size for a PCF (or similar) filter. In my case I again used rotated kernel samples.

Note: Depending on the sample positioning, one can achieve different area light shapes.
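The similar-triangles estimate above is easy to sanity-check numerically; here is a minimal Python sketch (the function name is my own, depths are in the light's view):

```python
# Similar-triangles penumbra estimate from the article:
# penumbraSize = lightSize * (receiverDepth - avgBlockerDepth) / avgBlockerDepth
# Depths are measured from the light; a receiver behind the blocker
# yields a positive penumbra size.

def penumbra_size(light_size, receiver_depth, avg_blocker_depth):
    return light_size * (receiver_depth - avg_blocker_depth) / avg_blocker_depth

# A blocker halfway between light and receiver yields a penumbra as wide
# as the light itself, e.g. penumbra_size(0.5, 2.0, 1.0) gives 0.5,
# while a receiver touching the blocker gets a hard shadow (size 0).
```

This also makes the behavior for large lights intuitive: the estimate scales linearly with the light size, which is exactly why large area lights force large filter kernels.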
The result gives quite correct shadows, with the downside of requiring a lot of processing power for noise-free shadows (a lot of samples) and large kernel sizes (which also require a large blocker search size). Generally this is a very good technique for small to mid-sized area lights, yet large area lights will cause problems.

Fig. 12 - PCSS shadow mapping in practice

As the article is already quite large, and there are 2 other techniques in my current game engine build (the first of them is a variant of PCSS that utilizes mip maps and allows for a slightly larger light size without impacting performance that much, and the second is a sort of back-projection technique), I will leave those two for another article which may eventually come out. Anyway, allow me to at least show a short video of the first technique in action:

Note: This article was originally published as a blog entry right here at GameDev.net, and has been reproduced here as a featured article with the kind permission of the author. You might also be interested in our recently featured article on Contact-hardening Soft Shadows Made Fast.
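As an aside, the article mentions Variance Shadow Mapping as a separably-filterable alternative to PCF. A minimal Python sketch of its core Chebyshev test follows; the function name and the min_variance clamp value are my own illustrative choices, not from the article.

```python
# Sketch of the Variance Shadow Mapping test alluded to above.
# Instead of a binary depth compare, the shadow map stores (E[z], E[z^2]),
# which can be prefiltered separably; Chebyshev's inequality then gives
# an upper bound on the fraction of the filtered region that is lit.

def vsm_shadow(mean_z, mean_z2, receiver_z, min_variance=1e-4):
    if receiver_z <= mean_z:                 # receiver in front of the occluders
        return 1.0
    variance = max(mean_z2 - mean_z * mean_z, min_variance)
    d = receiver_z - mean_z
    return variance / (variance + d * d)     # upper bound on P(z >= receiver_z)
```

The min_variance clamp and the well-known light-bleeding artifact of this bound are examples of the "additional problems" the article refers to.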
  4. Vilem Otte

    Which terrain technique is suitable?

    While megatexturing works well (even though a software implementation is still a must due to the limitations of the hardware implementation), it can introduce numerous other complications - like determining which part needs to be loaded at which resolution. While this is straightforward when you have just a single camera, multiple cameras (like in Half-Life 2, for example - basically having another camera looking at the world and rendering into a texture) or reflections (which are a bit of a nightmare for megatexturing) complicate this further. You can always blend multiple terrain geometries with different splatmaps - allowing you to have an infinite number of textures to mix. While this introduces overdraw, terrain was done like this in the past to overcome limits on the number of splatmaps.
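The splatmap approach mentioned above boils down to a per-texel weighted blend of layer textures. A minimal Python sketch, with purely illustrative names and values:

```python
# Minimal sketch of splatmap-based terrain texturing: each texel of the
# splatmap holds one weight per layer; the final color is the normalized
# weighted sum of the layer colors at that point.

def blend_splat(layer_colors, weights):
    """layer_colors: list of (r, g, b) tuples; weights: one weight per layer."""
    total = sum(weights)
    if total == 0:
        return (0.0, 0.0, 0.0)
    return tuple(
        sum(w * c[ch] for w, c in zip(weights, layer_colors)) / total
        for ch in range(3)
    )

grass, rock = (0.2, 0.6, 0.1), (0.5, 0.5, 0.5)
# 75% grass, 25% rock at this texel:
blended = blend_splat([grass, rock], [0.75, 0.25])
```

Blending several such terrain passes (each with its own splatmap) is what lifts the layer-count limit, at the cost of the overdraw mentioned above.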
  5. Vilem Otte

    RPG game c++ visual studio making process

    It really depends on how you set out your goals. Make a tight plan, try to stay on schedule with it, and do it feature by feature. No matter whether you decide to go 3D or 2D, no matter how complex the game mechanics are going to be, etc., you WILL need a plan that is finished (and ideally not delayed without any good reason). Some time ago I was working on an RPG (pretty much as a solo developer!), and it went quite well - we put together a solid tech demo for a 3D RPG (it took us a few months), with reflective water, shadows, grass, collisions (so you could walk around), nice models... I'd say quite good for that time. You could walk around that world. And that was pretty much it - there was no solid development plan or anything. And that is the point where it failed - without any plan for the future there was nothing we could hold on to; eventually the team dissolved and the project died. One important note: that project - although never finished - taught me a LOT of things. Not just how to do certain mechanics, but also how to manage a small team (and how not to). Also, at the time I was in college - not even at university (so I didn't have any real-world work experience) - we were just a few guys that wanted to make a game, and we made it into a tech demo at the time. My advice is, even if the project looks large and you aim mainly at learning something - it may still be worth it, even if you don't finish it.
  6. I like the trademark there. The funny part is that a hundred people will give you a hundred opinions. I was always trying to use the right tools for the right platform (for me, D3D12 is the clear winner for Desktop/Windows, but as @Hodgman said on page 1 of this topic - you can use the D3D12 API, and Vulkan on other platforms). Not to mention that porting D3D12 to Vulkan isn't that big a problem - especially when you already have a layer between D3D12 and your engine, which you should. My JPY50 (because I'm in Japan on holiday now): we've been using OpenGL in non-game software for more than 5 years (and I have more than a decade of experience with OpenGL), and drivers DO BEHAVE completely differently under the same code (this especially goes for Intel GPUs on laptops). This is currently a quite large problem - as each of us uses slightly different hardware - some of us have the application crashing on startup (due to the OpenGL driver running out of memory - yet machines with less memory and the same Intel embedded GPU run perfectly fine). Sometimes even shaders behave slightly differently (last time, Intel GPUs refused to branch properly on a condition passed through a uniform buffer - while AMD and NVidia branched just fine), which happened 2 weeks back and is still hanging among the bugs. Although to be fair, I have to note that the software is very complex and huge. And so far OpenGL performs very well most of the time, and for each stable 'release' we managed to work out the issues on various hardware. Why am I pointing this out? Simply to state that portability across hardware with OpenGL is not perfect, and sometimes you need to handle it by hand. So far with D3D12 I have had better experiences when running on various GPUs/drivers. Yet even here, issues DID happen.
  7. Thanks for sharing. Yet as with every single post-process AA technique, it does blur the image in the end - this is especially visible in the screenshot in the paper where the shadows are blurred (Fig. 19, top-right image). No matter how good a given post-process AA is, whenever I implement the technique on a scene that isn't very similar to their reference images, the results are unsatisfying, especially in motion. Of course this is my personal opinion, and it may differ for others.
  8. Vilem Otte

    Effect: Area light shadows (Pt. 1 - PCSS)

    Whoops! Fixed. Thanks for reading and feedback, I always appreciate it. I started working on the next one, but I'm not sure if I manage to finish it before I go on holiday.
  10. Vilem Otte

    Old School VS Ray Tracing accelerated by GPU

    This is actually a good question. I have had a GPU ray tracing library for quite a while, and while DXR brings in something new (and its features are quite welcome for me) - it has quite huge flaws (usability being the first; it's still not officially out - and I noticed it crashes a lot, not to mention that the MiniEngine sample doesn't even start on most GPUs that are able to run the basic examples, etc.). So, what is going to change? I don't think there will be too much. Full path tracing that could eliminate most of the effects (like shadows or reflections, GI hacks, etc.) is still quite far off (while the MiniEngine sample gives you tens of MRays/s or even over a hundred, that is still quite far from doing path tracing even at FullHD at 60 fps, with enough samples to eliminate noise - even for bright scenes). It could help a bit with reflections (on arbitrary surfaces - which makes rays divergent, which ends up being slower though), yet again the performance is not even nearly close to something one expects from reflections - which in most cases are quite a subtle effect - except for large bodies like water - having them pixel perfect is nice, but the performance hit may be too high to bear as of now. With shadows... nope. Hard shadows are bad, even soft shadows are bad - we need plausible shadows, and there is, again, too much noise. I know there is post-filtering to remove noise, but it never looked good enough in motion - it always introduced some kind of acne, artifacts or noise. So it won't change much now; some games will allow features like perfect reflections (which you will turn on on 2080s... and the reflection buffer will be rendered at half resolution... and therefore for some scenarios be even worse than a cubemap). In a decade, maybe, when we're able to do full path tracing at 60 fps... but there is a problem: the standard is shifting to 4K - which means 4 times the number of paths you need to calculate.
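The "quite far off" claim is easy to check with back-of-envelope arithmetic. The sample and bounce counts below are my own illustrative assumptions (a modest 16 spp with ~4 rays per sample), not numbers from the post:

```python
# Back-of-envelope check: even at ~100 MRays/s, path tracing 1080p at
# 60 fps with modest sampling needs roughly two orders of magnitude more
# ray throughput. spp and rays_per_sample are assumed values.

def rays_per_second(width, height, fps, samples_per_pixel, rays_per_sample):
    return width * height * fps * samples_per_pixel * rays_per_sample

needed = rays_per_second(1920, 1080, 60, 16, 4)   # ~8 billion rays/s
budget = 100e6                                    # ~100 MRays/s
shortfall = needed / budget                       # roughly 80x short
```

And as the post notes, moving to 4K multiplies `needed` by another factor of 4.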
  11. Vilem Otte

    Aspiring Engine Developer -Help-

    I'd also second Game Engine Architecture. I believe any edition should be viable (incl. the third one) - I read the first and second personally and both of them were quite good. Two other books I'd recommend, although neither is directly related to game engines, are http://gameprogrammingpatterns.com/ (it's a book, although available online too) and https://www.pbrt.org/ - the first one will definitely be helpful in general, and the second one is a must in case you're going for graphics. While oriented toward offline ray tracing, it gives quite an insight into the math behind it (and acceleration structures). I wouldn't discard online materials; I literally learned everything from those - and only met the named books later on. I didn't really have much choice, as those books didn't really exist at the time I was learning. Online materials have the huge advantage of easy accessibility and weight compared to books... the disadvantage is that some of them are quite bad (although much more rarely, even a book can be quite bad).
  12. @lonewolff This gets more problematic with growing companies; the bigger the company, the harder it tends to be to reach them in case of any error. You will always be dependent on some other software/library at some point - the point is picking the good ones and leaving the bad ones. Sometimes you may need to change libraries. Generally, two years to fix a bug (or never fixing it at all) can happen with non-paid support (or open source code in general). I literally never forced myself to fix bugs in (older) source I've put in the public domain, simply because I had lost interest in it some time before that... unless I got back to the project some time later, and did some bug fixing and refactoring before continuing on. Although as mentioned - it was a company, so I assume supported software. Which is strange indeed.
  13. Vilem Otte

    Effect: Black Hole Background

    @DerekB It is technically up to you to think of something with an 'event horizon'. Physically speaking, with that amount of debris you most likely wouldn't see anything (it would just be too small). Using a perfect sphere with a ray tracer could be a way around it (as it's quite easy to do nice antialiasing on it). @yueyang liu I could build you one with Unity almost instantly. If you really wish for a ThreeJS one, and I'm able to find some spare time, it may be possible.
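For the perfect-sphere suggestion, the analytic intersection is all that is needed; here is a minimal Python sketch (names are mine, and the direction is assumed normalized):

```python
# Analytic ray-sphere intersection, as suggested above for rendering the
# event horizon as a perfect sphere. Returns the nearest positive hit
# distance, or None on a miss. An analytic hit makes edge antialiasing
# easy, e.g. by averaging several rays per pixel.
import math

def ray_sphere(origin, direction, center, radius):
    ox, oy, oz = (o - c for o, c in zip(origin, center))
    dx, dy, dz = direction                    # assumed normalized
    b = ox * dx + oy * dy + oz * dz
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                           # ray misses the sphere
    t = -b - math.sqrt(disc)
    return t if t > 0 else None
```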
  14. This all depends on what kind of effect you are going to achieve. There is a vast difference between dynamic water (flooding, etc.) and static water (ponds, lakes, ocean, rivers, puddles, etc.). I assume you want the latter - focusing on rivers, ocean, etc. You will find out that (if possible) you will need a separate solution for each case (at least in my engine they are separated), each using slightly different shaders and maps - which ends up in an acceptable result. There is no solution that would 'rule them all'. Generally each water surface will have refraction & reflection. For larger water surfaces you will also need to calculate color (a greenish/blueish effect that depends on the depth of the water); some will require large waves (ocean - I believe tessellated FFT-based ocean is still state of the art), smaller ones just bump maps. Rivers will also require flow maps and particles. Fluid dynamics can make this more alive. For reflection you want to use cubemaps (puddles, rivers) and planar reflections (ponds, lakes, ocean)... or ray traced/voxel based if you can afford those (cheap screen space ray tracing works too). For refraction - just render your scene without all the water; that should be enough - use the camera matrix to project this onto the water surfaces when rendering them, and with the help of normal/dudv/flow maps, do a distortion. Mixing between refraction and reflection should be based on fresnel. If you have any particular question - feel free to ask specifically (i.e. 'How to do reflection?' or 'How to animate rivers?'). I'll try to answer in detail.
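The fresnel-based mix mentioned above is commonly approximated with Schlick's formula; a minimal Python sketch (function names are mine, and f0 = 0.02 is a typical base reflectance for water, an assumption rather than a value from the post):

```python
# Sketch of the fresnel-based reflection/refraction mix: at grazing
# angles (cos_theta -> 0) the surface becomes fully reflective, head-on
# (cos_theta -> 1) it shows mostly the refracted scene.

def schlick_fresnel(cos_theta, f0=0.02):
    """Schlick's approximation; f0 ~ 0.02 is typical for water."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def water_color(reflection, refraction, cos_theta):
    """Blend per-channel reflection and refraction colors by fresnel."""
    f = schlick_fresnel(cos_theta)
    return tuple(f * rl + (1.0 - f) * rf
                 for rl, rf in zip(reflection, refraction))
```

In a shader this would run per pixel, with cos_theta taken between the view vector and the (bump-perturbed) surface normal.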
  15. Vilem Otte

    Hint: Version Control

    Sooner or later I wanted to do at least a bit of posting about the version control systems I've worked with, and which one of those I find the best for me. In past years I've worked with various version control systems, with various interfaces. I personally have a few years of experience with both SVN and Git, both of which I used (and still regularly use) for various projects. These are the most famous ones, and personally I like and prefer Git a bit more for having a lot of advantages over SVN (and yeah, Git has better memes - that might be important too!). For quite a long time I stored some of my personal or conceptual projects just on a drive and copied them back and forth (not just old ones - including a few very recent ones!). Knowing the advantages of version control (and actually having some interesting projects archived on my own GitHub account), I said to myself that it's definitely time to change this. Now, as these projects are internal and I don't like paying GitHub for a service that can run on the server(s) I'm already paying for, I finally installed Git - yet I was hunting for a visual interface to be able to see what goes on, add issues (which often also serve as a to-do list for me), etc. Out of the user interfaces one can host oneself, I've used GitLab (which can do a lot more than I require - including CI, CD, etc.), which is perfectly suitable for the job, and it even exists as a package in Ubuntu! There is one huge downside - it is extremely HEAVY (at least for the feature set I require). I did set it up, but I didn't like how many resources it used, despite not actually doing anything and serving just a single user at that point. GitLab is huge; it has all the features one can wish for. But after hours burnt on setting it up to make it work properly... I asked myself, isn't there something better that could be lighter, a better fit for my simple use cases, and easy to set up?
At this point I searched and found Gogs, which is very simple to use (compared to GitLab, which often requires quite some time to set up - especially on servers running Apache), small, fast, etc. I couldn't find any real disadvantage to it. I did the 5-minute setup and off I went... I was amazed; this hasn't happened to me with software for, well... at least a few months. How easy this was to set up in the end - and how fast (anything opens within 100 ms). This brings me to: on one side, I admire complex and large software like GitLab. It can do a TON of things, and it is very good at all of them. You want an automatic build and deployment each time you merge into master? You can set it up. You want strict permission settings? You can set those. And so on... On the other side, well, I'm a fan of simple software that can be installed fast, used quickly without additional hours of setup, and that is fast. I switched from KDE to XFCE on my Linux boxes for exactly this reason. Sometimes less is more...