SkavenPlanet

Member

  • Content Count: 18
  • Joined
  • Last visited

Community Reputation

350 Neutral

About SkavenPlanet

  • Rank: Member

Personal Information

  • Interests: Art, Audio, Design, Production, Programming

  1. SkavenPlanet

    Procedural GPU-Generated Normal Map Artifacts

    After replicating my normal map shader on the CPU, I decided to convert basically everything to double precision and then carefully re-replace the doubles with singles while avoiding losing precision in the resulting normal map. Surprisingly, I wasn't too far off from getting everything right when I rewrote the procedural noise function and the normal generation code; only a few more things needed to use doubles. I found that both the final result of the procedural noise function and the input to it need to be in double precision when they are sent to the normal map generation shader. That isn't ideal because, as I stated above, it requires two 259x259 floating point textures (not 256, because the shader needs some extra texels): one to store the high parts of the input position components and the height (highX, highY, highZ, highHeight -> r, g, b, a) and one to store the low parts (lowX, lowY, lowZ, lowHeight -> r, g, b, a). Fortunately, both textures get deleted from video memory after the normal map is generated; however, the number of texture lookups in the normal map generation shader is doubled. The extra lookups and the emulated double math will unfortunately slow the normal map shader down, but that's not too bad a price to pay for artifact-free normal maps. Still, I wonder how Ysaneya got his system to work without a lot of the additional double emulation mine requires. I still have to apply these fixes to the shaders, so hopefully, once I do, these annoying artifacts will finally go away.

    PS: Really wish every GPU could use doubles...

    EDIT: Apparently I can go even higher than 18 depth levels (although I probably won't ever have to).
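    A minimal sketch of the high/low split described above, assuming the hi/lo parts are built CPU-side before being packed into the two RGBA float textures (helper and variable names are made up, not the actual project code):

    static class DoublePacking
    {
        // Split a double into a high/low pair of singles so it can live in two
        // single-precision texture channels and be recombined (hi + lo) with
        // emulated-double math in the shader.
        public static void Split(double value, out float hi, out float lo)
        {
            hi = (float)value;           // high part: the value rounded to single precision
            lo = (float)(value - hi);    // low part: the rounding error that was dropped
        }
    }

    // e.g. Split(position.x, out highX, out lowX): highX goes into texture A's r channel,
    // lowX into texture B's r channel, and likewise for y, z and the height in the alpha.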
  2. SkavenPlanet

    Procedural GPU-Generated Normal Map Artifacts

    Well, I decided to rewrite the procedural noise function (properly this time) to use emulated double precision up to the point where it no longer matters (i.e. until the values tied to the input positions get floored and converted to integers), and I noticed no improvement; the artifacts are still very much present. Then I also rewrote the normal map generation code to use emulated double precision for most of the process (this required using two 256x256 floating point textures, which I don't think is suitable for an actual implementation), but after a few tests I figured that this wasn't improving anything anyway. Still, help?
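    For reference, this is roughly what one step of the emulated "double-float" arithmetic looks like, each value being a (hi, lo) pair of singles. This is the well-known two-sum formulation as a sketch, not the exact code used here:

    // Double-float addition (Dekker/Knuth-style two-sum) on (hi, lo) pairs of singles.
    // A GLSL version uses the same sequence of float operations, and only keeps its
    // extra precision if the compiler is not allowed to re-associate the math.
    struct DoubleFloat
    {
        public float Hi, Lo;
        public DoubleFloat(float hi, float lo) { Hi = hi; Lo = lo; }

        public static DoubleFloat Add(DoubleFloat a, DoubleFloat b)
        {
            float t1 = a.Hi + b.Hi;
            float e  = t1 - a.Hi;
            float t2 = ((b.Hi - e) + (a.Hi - (t1 - e))) + a.Lo + b.Lo;
            float hi = t1 + t2;
            float lo = t2 - (hi - t1);
            return new DoubleFloat(hi, lo);
        }
    }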
  3. This is sort of a continuation of this thread: http://www.gamedev.net/topic/660629-procedural-planet-gpu-normal-map-artifacts/ which details the problem of running out of precision when generating normal maps on the GPU for procedural planets, ending up with noisy artifacts at high subdivision levels (>15).

     In the original thread linked above, Ysaneya stated that the solution he used in the I-Novae engine was double precision GPU emulation (storing the high and low parts of the "emulated" double as separate floats). I attempted the same approach and implemented double precision emulation for generating the inputs to my noise functions, but I haven't seen any noticeable change in my results. The problem is that once the positions have to be plugged into the noise functions, they have to go from emulated doubles back to single floats, so the extra precision gets lost anyway, which is probably why my results don't look noticeably different. Ysaneya did mention that this is the extent to which emulated double precision is used in I-Novae, and that most of the noise generation is done with non-emulated arithmetic, but I'm really unsure how that fixes the issue because, as I said, the extra precision seems to be lost before it ever reaches the noise function. After checking the rest of the maths involved in generating the normal maps, I'm not quite sure where the precision issue really is, and trying to pin it down has left me going in circles. Help?
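     To put a number on the precision problem: at planetary-scale coordinates a single-precision float only resolves about half a metre, which the per-texel spacing quickly approaches at the subdivision levels where the artifacts show up. A quick check (C#; the radius value is just an example):

     using System;

     // Spacing between adjacent single-precision values at a planet-scale coordinate.
     // 6,370,000 m lies between 2^22 and 2^23, so the ULP there is 2^-1 = 0.5 m.
     float r = 6_370_000f;                   // example planet radius in metres
     float ulp = MathF.BitIncrement(r) - r;  // = 0.5f
     Console.WriteLine(ulp);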
  4. SkavenPlanet

    Star System N-Body Simulation Over Long Time Periods

      I would use an on-rails system, except that with the way the star systems are generated, objects in the initial state inevitably collide (Theia and Earth, for example), moons may get sent onto escape trajectories and be recaptured elsewhere, and so on, and there isn't really a way to account for that with an on-rails system. In addition, there is no general closed-form solution to the three-body problem, let alone to the rare but possible n > 3 body configurations. Of course, I could restrict the procedural generator so that certain configurations never appear, but that would be less faithful to the way star systems actually form (the opposite of what I'm going for) than an n-body sim, and it would result in less varied, unique, and interesting solar systems (a major problem with most games that use proc-gen).
  5. SkavenPlanet

    Star System N-Body Simulation Over Long Time Periods

      "Good-enough" is sufficient, using RK4 integration along with the Barnes-Hut method (for adaptive time steps and less complexity), together allowing me to increase time steps might do the trick. The system tends to become more stable over time and is initially rather chaotic with objects colliding frequently which is where more accuracy would be necessary. Anyway, I will see how that goes and report back later.
  6. SkavenPlanet

    Star System N-Body Simulation Over Long Time Periods

      Well, I assumed those methods were more useful when N is really large, as it is in most simulations. If I use a tree method, the subtrees that represent smaller systems (and thus need smaller time steps) would still have to be simulated for a million years with hour- to day-long steps, which is where the bottleneck is. If I understand correctly, N could be 5 (in the case of a small leaf), but we'd still have to use a small time step and would still have the same problem; it would be an improvement, but not a complete solution.

      EDIT: That being said, I'm still definitely going to implement some sort of octree system, considering I could get the complexity down to O(n log n) as opposed to O(n^2), but I'm not sure that entirely solves the problem.
  7. I've been working on a rather complex game that requires procedural generation of stars, planets, and macroscopic organisms, along with realistically simulating these systems over very long to extremely short periods of time. To simulate the evolution of star systems (just one at the moment), I implemented an n-body physics simulator that also handles collisions (the objects simply combine masses, and the velocity vector is recalculated); it uses a time scale of seconds when the game is running in realtime. Here N maxes out at 200 (including stars, planets, planetoids, and moons), so it's not very expensive to run the simulation on the CPU.

     However, my game has a feature where the user can "time-warp" anywhere from days to millions of years into the future, so the n-body sim has to use larger time steps. Scaling up to days or even a year isn't too much of a problem; one million years is a different story. Most n-body simulations that run over millions of years are of galaxies or star clusters and use time steps on the order of thousands of years, which is reasonable given the massive distances between stars. For a system as small (in comparison) as a star system, I speculate that the largest time step that still retains enough accuracy would be a few days, maybe up to a week. Considering the number of days in one million years (365 million-ish), simply scaling up the simulation isn't a viable option, so I'm wondering whether there is any way of simulating 200 bodies over the course of a million years without sacrificing too much accuracy. Another big constraint is that the simulation can't take much longer than a minute, for gameplay purposes.
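     To make the cost concrete, a bare-bones direct-summation step (kick-drift-kick leapfrog) looks roughly like the sketch below; all names are illustrative, not the project's actual code. With N = 200, each step costs roughly 200 * 199 ~ 4e4 pair evaluations, and a million years of one-day steps is ~3.65e8 steps, i.e. on the order of 10^13 pair evaluations, which is why simply cranking up the step count doesn't work:

     using System;
     using System.Numerics;

     struct Body { public Vector3 Pos, Vel; public float Mass; }

     static class NBodyStep
     {
         const float G = 6.674e-11f;

         // One kick-drift-kick (leapfrog) step with a direct O(N^2) gravity pass.
         public static void Step(Body[] b, float dt)
         {
             Vector3[] acc = Accelerations(b);
             for (int i = 0; i < b.Length; i++) b[i].Vel += acc[i] * (0.5f * dt);   // half kick
             for (int i = 0; i < b.Length; i++) b[i].Pos += b[i].Vel * dt;          // drift
             acc = Accelerations(b);
             for (int i = 0; i < b.Length; i++) b[i].Vel += acc[i] * (0.5f * dt);   // half kick
         }

         static Vector3[] Accelerations(Body[] b)
         {
             var acc = new Vector3[b.Length];
             for (int i = 0; i < b.Length; i++)
                 for (int j = 0; j < b.Length; j++)
                 {
                     if (i == j) continue;
                     Vector3 d = b[j].Pos - b[i].Pos;
                     float r2 = d.LengthSquared() + 1e-6f;                          // softening to avoid blow-ups
                     acc[i] += d * (G * b[j].Mass / (r2 * MathF.Sqrt(r2)));
                 }
             return acc;
         }
     }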
  8. SkavenPlanet

    Ocean Wave 'Fake' SSS

      Thanks for that resource! I finally took a stab at this, and I feel like I'm pretty close, but I'm not quite sure I have it down completely; the effect in my shader seems far less noticeable.
  9. I recently read this article about AC3's ocean rendering and was very impressed with how effectively subsurface scattering can be faked with a simple color and transparency ramp, so I implemented color and alpha ramping based on a terrain height map in my fast-Fourier-transform-based ocean renderer, which works pretty nicely and is relatively simple. However, I'm a little unsure how to implement the SSS on backlit waves described in the article. It states that a simple test based on sun position, camera position, and slope is used to determine the color ramping, but it doesn't go into much more detail than that. Here's the specific text I was referencing, and a few screenshots of the desired effect (from AC3 and Black Flag). I'm not sure exactly how the interpolationValue or "tintAmount" is determined from those three variables; I assume it would essentially be like normal-based lighting, except in the direction facing away from the sun, but I'm not exactly sure how the camera position is supposed to be involved. Has anyone done this or something similar before?
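     One plausible way to combine those three inputs, purely as a guess at the general idea rather than the formula the article actually uses: treat it as backlighting, so the tint grows when the camera looks toward the sun through a wave face that tilts away from the light.

     using System;
     using System.Numerics;

     static class OceanSss
     {
         // Hypothetical tint-amount estimate for backlit waves, built only from the
         // light direction, the view direction and the wave normal (slope).
         // viewDir  = normalized direction from the camera to the shaded point
         // lightDir = normalized direction the sunlight travels (sun -> scene)
         public static float TintAmount(Vector3 viewDir, Vector3 lightDir, Vector3 waveNormal, float power)
         {
             // How directly the camera is looking toward the sun (backlighting).
             float towardSun = MathF.Max(0f, Vector3.Dot(viewDir, -lightDir));
             // How much the surface tilts away from the light (the back face of a wave).
             float slope = MathF.Max(0f, 1f - MathF.Max(0f, Vector3.Dot(waveNormal, -lightDir)));
             return MathF.Pow(towardSun, power) * slope;    // 0..1, used to lerp the colour/alpha ramp
         }
     }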
  10. Nevermind... think I got it figured out.
  11. The game's world isn't a flat terrain; it consists of multiple planets at a realistic scale. If you look at the first image, the camera is viewing the entire planet at once, and the far clipping plane is much further out than the planet's radius. For my game's purposes, players need to be able to see a Jupiter-sized world from millions of kilometers away, hence the 1e+11 far clipping plane. This works for the planetary terrain renderer without the scattering effect, just not yet with it. Just to clarify, the "skydome" is the actual atmosphere of the planet, so yes, it does more or less have the same radius as the planet.
  12. I've been working on an implementation of Eric Bruneton's atmospheric scattering effect, for which the source code is provided here, and I have run into a major problem with reconstructing 3D positions from a depth buffer, which is needed for calculating the inscattered and reflected light. The problem is that my world, and therefore its atmosphere, is 6370 km in radius, and the depth buffer I am using doesn't provide enough precision to reconstruct any useful positions (all of its values correspond to the far clipping plane, which is 100 million km out). My response was to use a logarithmic depth buffer, which provides far better precision for this type of large scene, and it appears to work until I attempt to reconstruct positions from it; then it falls apart for some reason and produces almost the same results as the standard depth buffer.

      There are some screenshots below demonstrating this: the left half of each image shows the normal depth buffer and its results, the right half shows the log depth buffer and its results; the top half shows the atmospheric effects rendered, and the bottom half shows the depth buffers themselves.

      This image shows how the log depth buffer appears to work yet still gives the same results.

      This image highlights the overall problem of the positions not being reconstructed properly: all of the terrain is treated as being the same distance from the camera as the sky, while it may actually be much, much closer, so the terrain appears transparent.

      The same image as the previous one, but without any atmospheric effects or gbuffers, showing some pure white, "snowy" terrain.

      I've checked and rechecked pretty much all of the code that produces the atmospheric effects multiple times; the only problem appears to be the bit explained above. My only guess is that for some reason the log depth buffer isn't being linearized. Here's the vertex shader code that produces the log depth buffer (GLSL):

      void main()
      {
          vec4 pos;
          pos = (gl_ModelViewProjectionMatrix * gl_Vertex);
          pos.z = ((log2(max(1e-06, (1.0 + pos.w))) * (2.0 / log2((_Farplane + 1.0)))) - 1.0);
          pos.z *= pos.w;
          gl_Position = pos;
      }
      //_Farplane is the camera farplane which is 1e+11

      And this is how the positions are later reconstructed in the atmosphere fragment shader:

      float depth = tex2D(_DepthBuffer, input.uv);
      vec3 worldPos = g_cameraPos + input.cameraToNear + depth * input.nearToFar;
      //input.nearToFar is a vec4 that = farFrustrum - nearFrustrum
      //input.cameraToNear is a vec4 that = nearFrustrum - g_cameraPos

      To my knowledge this should work, and I can't figure out why it doesn't. If anyone else has accomplished what I'm trying to do, can you explain how your setup worked? Otherwise, does anyone see what's not being accounted for in my code?
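      In case it helps pin down the linearization step: inverting the vertex shader formula above gives the view-space distance back as w = (_Farplane + 1)^d - 1, where d is the stored depth value (assuming the buffer holds ndcZ * 0.5 + 0.5). A CPU-side sketch of that inverse, and of the 0..1 ray fraction the reconstruction above actually expects (names hypothetical, C# only for illustration):

      using System;

      static class LogDepth
      {
          // Inverse of: ndcZ = log2(1 + w) * 2 / log2(far + 1) - 1, with the depth
          // buffer assumed to hold ndcZ * 0.5 + 0.5. Returns the clip-space w (view depth).
          public static float EyeDepth(float storedDepth, float far)
          {
              return MathF.Pow(far + 1f, storedDepth) - 1f;
          }

          // Fraction along the near->far ray, i.e. the value the
          // "cameraToNear + depth * nearToFar" reconstruction expects instead of the raw sample.
          public static float Linear01(float storedDepth, float near, float far)
          {
              float w = EyeDepth(storedDepth, far);
              return (w - near) / (far - near);
          }
      }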
  13. So... it took a really long time to get everything to work with 8-bit-per-channel ARGB textures, but this is my result:

      Finally, time for texturing...

      PS: Thanks to swiftcoder and y2kiah for the help!
  14. So, in order to generate object space normal maps, I tried storing the terrain positions (after the height is applied) in the heightmap as the r, g, b components (the height value is the alpha). From there I find the edges between four adjacent pixels/positions, take the cross products of the edges in pairs, then add the results and normalize to get the final normal. Here is the code (C#) to better explain:

      Vector3 p0 = PixelToPos(heightmap.GetPixel(x-1,y), posArr); //ignore the posArr thing
      Vector3 p1 = PixelToPos(heightmap.GetPixel(x+1,y), posArr);
      Vector3 p2 = PixelToPos(heightmap.GetPixel(x,y-1), posArr);
      Vector3 p3 = PixelToPos(heightmap.GetPixel(x,y+1), posArr);
      Vector3 p4 = PixelToPos(heightmap.GetPixel(x,y), posArr);

      Vector3 e0 = p4 - p0;
      Vector3 e1 = p4 - p1;
      Vector3 e2 = p4 - p2;
      Vector3 e3 = p4 - p3;

      Vector3 n0 = Vector3.Cross(e0, e2);
      Vector3 n1 = Vector3.Cross(e1, e3);
      Vector3 n = (n0 + n1).normalized;

      normalmap.SetPixel(x, y, new Color(n.x, n.y, n.z, 1.0f));

      This produces a normal map that is mostly black with specks of color in it - it doesn't work. Is there another way to generate object space normal maps, or are there just some errors I overlooked in my code?

      EDIT: Um... help? I tried picking apart other parts of my code to check for errors, and to me it all seems as if it should work. By the way, my code was "inspired" by the "formula" for normal map calculation found in this paper (my code is only somewhat different because an exact implementation of the paper's formula didn't work): http://www.student.itn.liu.se/%7Echral647/tnm084/tnm084-2011-real-time_procedural_planets-chral647.pdf

      EDIT 2: Alright, so I found a couple of errors in my code and resolved them; the code above has been updated to reflect that. I also found and fixed some problems with my PixelToPos function - in short, it was returning positions that were way off (not being able to use ARGB Float32 textures is a real nightmare for storing positions), so it easily could have been the main problem, but unfortunately it wasn't. I've fixed pretty much all of the problems I recognized, and this is my result:

      EDIT 3: I'm not trying to spam or anything, but I fixed a few more problems and ran a few more tests... anyhow, I found a weird texture tiling issue with the normal maps that was causing them to be applied in a strange way. I sort of fixed it, and now I have this (which at least looks like... something):

      The tiling issue causes the texture to tile differently depending on the quad-tree depth of the terrain patches. The normal map calculations are still off a bit because the PixelToPos code doesn't return exact positions, but at least I've figured out what the problems are. Also, the black lines around every patch were expected, as I don't have any special cases for the edge pixels in the heightmap/normalmap.
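      For those edge pixels, one simple option is to clamp the lookups at the borders instead of special-casing them; a sketch, assuming a Unity-style Texture2D API as the GetPixel/SetPixel calls suggest:

      using UnityEngine;

      static class HeightmapSampling
      {
          // Clamp heightmap reads so x-1 / x+1 / y-1 / y+1 never sample outside the
          // texture, which removes the one-pixel black frame around each patch
          // (at the cost of slightly flattened normals along the patch seams).
          public static Color GetPixelClamped(Texture2D tex, int x, int y)
          {
              x = Mathf.Clamp(x, 0, tex.width - 1);
              y = Mathf.Clamp(y, 0, tex.height - 1);
              return tex.GetPixel(x, y);
          }
      }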
  15. Would there be a way to calculate tangents for the terrain patches using the height map data, or even the normal map? I've found some approaches for terrain, but they all relied on "up" being the positive-y direction. For reasons, I can't use object space normal maps, which is a bit of a headache since I have to calculate mesh tangents for tangent space normal mapping.
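      One standard trick that doesn't assume +Y is up: build each tangent from the vertex normal and a fixed reference axis (for a planet, the spin axis is a natural choice), with a fallback near the poles where the two become parallel. This only matches tangent-space normal maps whose U direction roughly follows that tangent; otherwise the tangents have to be derived from the UVs per triangle. A sketch with Unity-style types (names hypothetical):

      using UnityEngine;

      static class TangentUtil
      {
          // Tangent from a normal and a reference axis (e.g. the planet's spin axis).
          public static Vector4 FromNormal(Vector3 normal, Vector3 referenceAxis)
          {
              Vector3 t = Vector3.Cross(referenceAxis, normal);
              if (t.sqrMagnitude < 1e-8f)                    // normal ~ parallel to the axis (poles)
                  t = Vector3.Cross(Vector3.right, normal);  // pick any other axis as a fallback
              t.Normalize();
              return new Vector4(t.x, t.y, t.z, -1f);        // w is handedness; may need +1 for your UVs
          }
      }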