Ryokeen

Member
  • Content Count

    44
Community Reputation

852 Good

About Ryokeen

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Design
    DevOps
    Education
    Programming


  1. As a reminder: https://www.gamedev.net/forums/topic/595417-why-did-they-decide-to-point-z-up-in-quake/ And since Counter-Strike/Half-Life is based on Quake, it's the same there. As for how to achieve that: a bunch of world-matrix rotations, just look at the QuakeWorld source. A minimal sketch of the basis change is below.
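    Purely as an illustration (my own sketch, not taken from the Quake source): going from a Z-up convention to a Y-up one is a single -90 degree rotation about the X axis.

        // Rotate a Quake-style Z-up position into a Y-up convention.
        // Note: GLSL mat3 constructors are column-major.
        const mat3 zUpToYUp = mat3(
            1.0, 0.0,  0.0,   // column 0
            0.0, 0.0, -1.0,   // column 1
            0.0, 1.0,  0.0);  // column 2

        vec3 yUpPos = zUpToYUp * zUpPos; // (x, y, z) -> (x, z, -y)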
  2. Ryokeen

    Character Fragmentation

    If that is any help to you, there is a publication from the Left 4 Dead team: https://steamcdn-a.akamaihd.net/apps/valve/2010/gdc2010_vlachos_l4d2wounds.pdf
  3. Yeah, it's the same one I use, so either one of the matrices is incorrect, or it's because I use 1.0 for z in computeClipSpaceCoord. Are you sure that you invert the current view-projection including rotation and translation, and do the same for the last frame's matrices? A sketch of what I mean is below.
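    A minimal GLSL sketch of that reconstruction, assuming computeClipSpaceCoord builds a clip-space position from the screen UV (worldRayEnd and invViewProj are placeholder names of mine):

        vec4 computeClipSpaceCoord(vec2 uv) {
            return vec4(uv * 2.0 - 1.0, 1.0, 1.0); // z = 1.0 puts the point on the far plane
        }

        // invViewProj must be the inverse of the FULL view-projection,
        // i.e. camera rotation AND translation included
        vec3 worldRayEnd(vec2 uv, mat4 invViewProj) {
            vec4 world = invViewProj * computeClipSpaceCoord(uv);
            return world.xyz / world.w; // perspective divide
        }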
  4. Yep, odd frames could write to odd pixel numbers while using the full image from the last frame to fill in the even pixels, and for even frames the other way around. And I don't quite know if it's correct to use the reprojection you posted (I basically use the same one), but it looked fine, so you should give it a try. Actually, I use a 4x4 dither matrix https://en.wikipedia.org/wiki/Ordered_dithering as a threshold for which pixel should be written, along with a matching offset, and keep a frame counter internally. So for the first frame I would compute the upper-left pixel; then, to create the full image, I update only that pixel in each 4x4 block while reusing the previous fullscreen image for the other pixels. In case I don't have an old fullscreen image, I just use the newly computed one with some linear filtering. A sketch of the pixel selection is below.
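    A minimal sketch of that pixel selection, assuming the internal frame counter is passed in as a uniform (bayer4, updateThisPixel and uFrame are names of mine):

        // 4x4 Bayer matrix from the ordered-dithering article, flattened row by row
        const int bayer4[16] = int[16]( 0,  8,  2, 10,
                                       12,  4, 14,  6,
                                        3, 11,  1,  9,
                                       15,  7, 13,  5);

        bool updateThisPixel(ivec2 pixel, int uFrame) {
            int idx = (pixel.y & 3) * 4 + (pixel.x & 3);
            // exactly one pixel per 4x4 block matches each frame,
            // so the full image is rebuilt over 16 frames
            return bayer4[idx] == (uFrame & 15);
        }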
  5. When I did that, I just assumed clouds are at max depth. If they are moving rather slowly, it still gives a sharp image. Another idea could be to store the distance/position of the first in-cloud sample, so you get the front-most sample position of the clouds (see the sketch below). Otherwise, yeah, that's a problem with volume rendering, as there is no hard surface. Another word towards optimization: I guess you read the articles from the Horizon Zero Dawn team, so early-out is one way to gain performance; skipping empty space and lowering the quality fully inside the volume or at a distance are others. The empty-space skipping is also quite useful at low coverages, as you greatly reduce the amount of samples taken. But the major thing, which also requires reprojection, is that you don't do it at fullscreen. I do it at 1/16th resolution, essentially creating a full image over 16 frames, then apply a small 1-pixel blur to hide the sampling noise. Sure, if the cam is rotating or moving fast you get some more blurry clouds, but that's not so visible and looks more like a very soft motion blur. And even at that low update resolution it can take up to 1.8 ms on a GTX 970.
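    A minimal sketch of reprojecting with such a stored first-hit distance (uPrevViewProj and firstHitDist are placeholder names of mine):

        vec2 reprojectCloud(vec3 camPos, vec3 rayDir, float firstHitDist,
                            mat4 uPrevViewProj) {
            vec3 worldPos = camPos + rayDir * firstHitDist; // front-most cloud sample
            vec4 prevClip = uPrevViewProj * vec4(worldPos, 1.0);
            return prevClip.xy / prevClip.w * 0.5 + 0.5;    // previous-frame UV
        }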
  6. Yep, just be aware to normalize the directions before taking the dot product:

        float LdotE = dot(vLightDirection, vRay); // both normalized; vRay goes from the eye towards the clouds, vLightDirection towards the light
        float MainPhase = GetMainPhase(LdotE);    // combination of HG phase functions

    I could post my code as well, but it's a mess atm and written in CgFX... yeah, shame on me, I had no time to port it over to GLSL yet.
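    For reference, a single Henyey-Greenstein lobe looks like this; GetMainPhase would then blend a few of them with different g values (the g values and blend weight below are guesses of mine, not my production numbers):

        float HGPhase(float cosTheta, float g) {
            float g2 = g * g;
            return (1.0 - g2) / (4.0 * 3.14159265 * pow(1.0 + g2 - 2.0 * g * cosTheta, 1.5));
        }

        // one possible GetMainPhase: blend a forward and a backward lobe
        float GetMainPhase(float LdotE) {
            return mix(HGPhase(LdotE, -0.2), HGPhase(LdotE, 0.7), 0.5);
        }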
  7. The sampling itself looks a bit odd, like it's stretched towards the screen center. And yes, the view direction is samplingPoint - EyePosition, but as you move along a ray that direction stays the same, so there's no need to recalculate it. As for the dark edges, it looks a bit like the cloud gets brighter the thicker it is. What I do for composition:

        vec3 color = CloudColor.rgb + CloudColor.a * background.rgb;

    During the raymarch (I assume you go front to back), start with extinction = 1.0 and inScatter (the cloud color) = 0.0, then for each step:

        float sigT = cloudDensityAtSamplePoint * someValue; // someValue is just a scaling to account for the step length, overall density...
        float curExt = exp(-sigT);
        vec3 curInScatter = ...; // compute lighting for that sample point
        inScatter = inScatter + sigT * curInScatter * extinction; // accumulate overall in-scattered light
        extinction *= curExt;                                     // accumulate overall extinction

    After the loop:

        vec4 finalCloudColor = vec4(inScatter * LightColor, extinction);
  8. For the volume texture lookup, just use world space / whatever space your sampling ray is in. For the cloud-density texture, the 2D one, just use the sampling point's x/y coordinate with some shift and scaling (a sketch is below). As my world is centered on 0,0,0, I center that 2D texture on that point as well and scale it so it covers the visible area. The HG phase function uses the view direction and the light direction. I had to play a bit with them, so I ended up with 3 HG phase functions: one for the forward part, one for backwards and one for ambient, with different g values. Not sure about the dark borders you mention, but if they are at the cloud edges, where the clouds are only partly visible, that might be a blending error; try using premultiplied alpha for that. Also, there is a Frostbite paper about improving the scattering equations: https://media.contentapi.ea.com/content/dam/eacom/frostbite/files/s2016_pbs_frostbite_sky_clouds.pdf
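    A minimal sketch of that 2D lookup, assuming a world centered on 0,0,0 (sampleWeather, uWeatherTex and uCoverageExtent are names of mine):

        vec4 sampleWeather(vec3 samplePos, sampler2D uWeatherTex, float uCoverageExtent) {
            // shift/scale world x/y into [0,1]^2 so the texture covers the visible area
            vec2 uv = samplePos.xy / uCoverageExtent * 0.5 + 0.5;
            return texture(uWeatherTex, uv);
        }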
  9. A sphere shape would give you better results, as it produces a "natural" horizon. It should be pretty easy to do if you're doing the clouds as a fullscreen effect and writing them out to an FBO. Inside the shader, just trace from the eye to a sphere centered below your terrain, so you only get a small upper part; the intersection is your trace start (see the sketch below). Just be aware that the height inside the cloud is then no longer along the global up axis but the height above the traced sphere, in the direction towards the sphere center. For performance optimization, early-out in the shader if a ray is way below the horizon. Another thing I suggest is sphere/shell-like sampling, meaning that from the trace start point on you sample in shells, not in height slices. That (for me at least) reduced sampling errors and still looks good when you're inside the cloud layer. It also makes exponential sampling distances easier.
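    A minimal ray/sphere intersection sketch for finding that trace start (all names are placeholders of mine; it assumes the eye sits inside the sphere, so the ray always exits through it and the far root is the one you want):

        float traceStart(vec3 eye, vec3 rayDir, vec3 sphereCenter, float radius) {
            vec3 oc = eye - sphereCenter;
            float b = dot(oc, rayDir);               // rayDir must be normalized
            float c = dot(oc, oc) - radius * radius;
            float disc = b * b - c;
            if (disc < 0.0) return -1.0;             // ray misses the sphere
            return -b + sqrt(disc);                  // far intersection = trace start
        }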
  10. Ryokeen

    AVX2 support in Visual Studio

    As a side note, I would not recommend relying on the automatic vectorization of compilers. I did a bunch of tests with various ones (gcc, g++, MSVC 2015, MSVC 2017 with the default and the Clang toolchains), and yes it does work, buuut it breaks quite easily. So if you want that for speed, you should use intrinsics. Also, at least MSVC generated several code paths with cpuid checks, but only for the auto-vectorized code.
  11. Any chance of some new papers? I would like to read about the improvements you made; sadly I can't be there.
  12. Ryokeen

    UV map blocky voxel models

    What comes to my mind is the following: http://vcg.isti.cnr.it/volume-encoded-uv-maps/ I've not really looked into it myself, but from a quick glimpse it might be something.
  13. @ChenA I'm not sure how they even got the shape working; I get at best some very diffuse, unshapeable cloud areas with their Remap approach (the Remap I mean is sketched below). That's why I'm still sticking to my own way. For the top view, I have to change the shader a bit for that to work, since I optimized the cloud layer to always be above the camera. That makes more sense in Earth Special Forces (players there don't get that high). I will do that when I have time on the weekend.
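    For context, the Remap from those cloud talks looks roughly like this, as I understand it (the usage comment is my guess at how it carves detail out of the base shape):

        float Remap(float value, float oldMin, float oldMax, float newMin, float newMax) {
            return newMin + (value - oldMin) / (oldMax - oldMin) * (newMax - newMin);
        }

        // e.g. density = Remap(baseShape, detailNoise, 1.0, 0.0, 1.0);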
  14. With the height from the weather data I compute the fade_in/fade_out values very similar to how they do it in the article. Then somewhere I have:

        lCoverage = 1.0 - cloudType.r * fade_out * fade_in;

    where cloudType is the weather data at the current sample position. Try the same thing you did for the cloud tops at a smaller height scale for the bottom; that should help a lot. Also, I scale the final cloud value by fade_in to make it a bit smoother (a sketch of the fades is below).
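    A minimal sketch of such height fades; h is the normalized height inside the cloud layer (0 = bottom, 1 = top) and the smoothstep ranges are guesses of mine, not the article's values:

        float fade_in  = smoothstep(0.0, 0.15, h); // smaller height scale at the bottom
        float fade_out = 1.0 - smoothstep(0.8, 1.0, h);
        float lCoverage = 1.0 - cloudType.r * fade_out * fade_in;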
  15. Yep, it is expensive, around 1.4-2.1 ms on a GTX 970 at 1/16th of Full HD resolution. Optimizations so far: early-out on high opacity, no expensive calculation when coverage == 0, no full-quality light samples at opacity > 60%, and a carefully tweaked scale (see the sketch below). I thought about a signed distance field as well, but that eliminates the nice property of realtime coverage/shape changes.
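    A minimal sketch of those early-outs inside the raymarch loop (the thresholds and names are mine; opacity here is 1.0 - extinction):

        for (int i = 0; i < MAX_STEPS; ++i) {
            float coverage = weatherAt(pos);         // cheap 2D weather lookup first
            if (coverage > 0.0) {
                // full density + lighting work only where there are clouds;
                // skip the expensive light samples once 1.0 - extinction > 0.6
            }
            if (1.0 - extinction > 0.95) break;      // early-out on high opacity
            pos += rayDir * stepSize;
        }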