Ryokeen

Member

  1. That is one of the problems: since the whole map/level is one FBX file/mesh, I can't really cull whole objects. The instanced ones are fine, but the rest of the map is basically a single object. At the moment it's in the range of roughly 1 million triangles, and since I render several passes (shadow map, G-buffer, environment cubemaps and some others), speeding up the rendering seemed like a good idea; the game is currently quite GPU-bound, so I wanted to give it a go. Sure, one could say I can cull by material/submesh, but they are scattered all over the place, so computing bounding geometry is a pain and won't gain me much. The reason I was looking into a quadtree is that the maps are quite flat, players can run and fly, and I have one big mesh without separate objects I could cull. And it's just the map itself; things like vegetation, ocean and dynamic objects already use some sort of culling scheme. A BVH I'll need to look into, I haven't done much with partitioning algorithms.
     Maybe I'm wrong, but my main concern with not splitting triangles is this: the map can have quite big triangles, and sometimes quite a lot of them. So if I collect triangles into a bounding volume, the volume might get quite big and collect even more triangles. Then, if one node is visible but its neighbour is not, and a big triangle runs through both of them, I need to render it if either node is visible, but I don't want to render the triangle twice if both are visible. So where do I put it? In the parent node? That works, but it means I draw a lot more nodes.
  2. That is out of the question; every triangle should be in at most one node. If it were in two child nodes of the same parent, it would get drawn multiple times (and since I don't know at tree-creation time whether a triangle ends up transparent, that is an issue). About loose octrees: yeah, that is something I could try out. I guess I'll have to measure which is faster: drawing more nodes because triangles cover multiple child nodes, or splitting them and getting more, smaller triangles but fewer nodes and fewer draw calls.
  3. Well, just for the fun of it, I started to implement a simple quadtree in a little project of mine, just to maybe speed up rendering and have at least some sort of culling for maps, as they are made as basically one huge mesh. But then there is the situation that triangles will cross the borders of the child nodes when a node has to be split. Pushing them into the parent node would be easier, but that means there is the potential to draw more nodes than necessary because of, well, crossing triangles. If they get split instead, so that I only have to render leaf nodes, I end up with quite a few more triangles to render. For now the quadtree will only be used for rendering, so each triangle in the huge mesh should only be present once (to avoid possible overdraw/double rendering of triangles). Now, what I'm asking, or where I need some input: is it better to push triangles that lie in more than one child node into the parent node, or to split them at the boundary, resulting in more triangles, but ones that live only in leaf nodes? regards Ryokeen
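The push-to-parent strategy discussed above can be sketched in a few lines: classify each triangle vertex into a child quadrant and keep the triangle in the parent when the vertices disagree. A minimal C++ sketch (the names `classifyTriangle`/`quadrant` are made up for illustration, not from any actual codebase):

```cpp
#include <cassert>

// A 2D point and triangle; a quadtree over a flat map only needs x/z.
struct Vec2 { float x, y; };
struct Tri  { Vec2 a, b, c; };

// Quadrant index of a point relative to a node centre: 0..3, one bit per axis.
static int quadrant(const Vec2& p, const Vec2& centre) {
    return (p.x >= centre.x ? 1 : 0) | (p.y >= centre.y ? 2 : 0);
}

// Returns the child index (0..3) the triangle falls into, or -1 if it
// crosses a child boundary and therefore has to stay in the parent node.
int classifyTriangle(const Tri& t, const Vec2& centre) {
    int qa = quadrant(t.a, centre);
    int qb = quadrant(t.b, centre);
    int qc = quadrant(t.c, centre);
    return (qa == qb && qb == qc) ? qa : -1;
}
```

With this, the "how many big triangles end up in parent nodes" question becomes measurable: build the tree once with push-to-parent and count the -1 cases per level.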
  4. As a reminder: https://www.gamedev.net/forums/topic/595417-why-did-they-decide-to-point-z-up-in-quake/ And as Counter-Strike/Half-Life is based on Quake, it's the same there. As for how to achieve that: a bunch of world-matrix rotations; just look at the QuakeWorld source.
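The axis change itself boils down to a component swap, equivalent to one of those world-matrix rotations. A small sketch assuming a Y-up target convention (sign choices vary with the engine's handedness, so treat this as one possible mapping):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Convert a Quake-style Z-up coordinate into a Y-up one.
// Equivalent to rotating -90 degrees around the X axis.
Vec3 zUpToYUp(const Vec3& v) {
    return { v.x, v.z, -v.y };
}
```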
  5. Ryokeen

    Character Fragmentation

    If that is of any help to you, there is a publication from the Left 4 Dead team: https://steamcdn-a.akamaihd.net/apps/valve/2010/gdc2010_vlachos_l4d2wounds.pdf
  6. Yeah, that's the same one I use, so either one of the matrices is incorrect, or it's because I use 1.0 for z in computeClipSpaceCoord. Are you sure you invert the current view-projection including rotation and translation? Same for last frame's matrices.
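For reference, the reprojection being discussed can be sketched on the CPU. This assumes row-major 4x4 matrices and the z = 1.0 far-plane convention mentioned above; the names (`reprojectUV` etc.) are made up for illustration and not from the actual shader:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<float, 16>; // row-major
struct Vec4 { float x, y, z, w; };

Mat4 identity() {
    Mat4 m = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    return m;
}

Vec4 mul(const Mat4& m, const Vec4& v) {
    return { m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
             m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
             m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
             m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w };
}

// Reproject a current-frame screen UV (0..1) into last frame's UV.
// Both matrices must include rotation AND translation, as noted above.
Vec4 reprojectUV(float u, float v,
                 const Mat4& invCurrViewProj, const Mat4& prevViewProj) {
    // clip-space position on the far plane (z = 1), as in computeClipSpaceCoord
    Vec4 clip  = { u * 2.0f - 1.0f, v * 2.0f - 1.0f, 1.0f, 1.0f };
    Vec4 world = mul(invCurrViewProj, clip);
    Vec4 prev  = mul(prevViewProj, world);
    // perspective divide and back to UV
    return { (prev.x / prev.w) * 0.5f + 0.5f,
             (prev.y / prev.w) * 0.5f + 0.5f,
             prev.z / prev.w, 1.0f };
}
```

A quick sanity check: with identity matrices (no camera motion) the UV must come back unchanged; if it does not, one of the two matrices is the culprit.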
  7. Yep, odd frames could write to odd pixel numbers while using the full image from the last frame to fill in the even pixels, and the other way around for even frames. I don't know for sure whether the reprojection you posted is correct (I use basically the same one), but it looked fine, so you should give it a try. I actually use a 4x4 dither matrix (https://en.wikipedia.org/wiki/Ordered_dithering) as a threshold for which pixel should be written, along with a matching offset and an internal frame counter. So for the first frame I compute the upper-left pixel, then, to create the full image, update only that pixel in each 4x4 block while reusing the previous full-screen image for the other pixels. In case I don't have an old full-screen image yet, I just use the newly computed one with some linear filtering.
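The frame-counter-to-pixel mapping described above can be sketched with the standard 4x4 Bayer matrix: frame N updates the one pixel in each 4x4 block whose matrix entry equals N mod 16. A minimal sketch (`pixelOffsetForFrame` is a made-up name for illustration):

```cpp
#include <cassert>

// Standard 4x4 Bayer / ordered-dither matrix (values 0..15).
static const int bayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5}
};

// For frame counter N, find which pixel offset inside each 4x4 block gets
// freshly rendered this frame; all other pixels are reused/reprojected
// from the previous full-resolution image.
void pixelOffsetForFrame(int frame, int& offsetX, int& offsetY) {
    frame &= 15;
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            if (bayer4[y][x] == frame) { offsetX = x; offsetY = y; return; }
}
```

Because the Bayer entries are spread out, consecutive frames touch pixels far apart in the block, which keeps the partially-updated image looking like dither noise rather than a scanline pattern.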
  8. When I did that, I just assumed the clouds are at max depth. If they move rather slowly, that still gives a sharp image. Another idea: save off the distance/position of the first in-cloud sample, so you get the front-most sample position of the clouds. Otherwise, yeah, that's a problem with volume rendering, as there is no hard surface. Another word on optimization: I guess you've read the articles from the Horizon: Zero Dawn team, so early-out is one way to gain performance; skipping empty space and lowering the quality when fully inside the volume or at a distance are others. The empty-space skipping is also quite useful at low coverage, as it greatly reduces the number of samples taken. But the major thing, which also requires reprojection, is that you don't do it at full resolution. I do it at 1/16th resolution, essentially creating a full image over 16 frames, then apply a small 1-pixel blur to hide the sampling noise. Sure, if the camera is rotating or moving fast you get somewhat blurrier clouds, but that's not very visible and looks more like a very soft motion blur. And even at that low update resolution it can take up to 1.8 ms on a GTX 970.
  9. Yep, just be aware to normalize the directions before taking the dot product:

     float LdotE = dot(vLightDirection, vRay); // both normalized; vRay points from the eye towards the clouds, vLightDirection towards the light
     float MainPhase = GetMainPhase(LdotE);    // combination of HG phase functions

     I could post my code as well, but it's a mess at the moment and written in CgFX... yeah, shame on me, I've had no time to port it over to GLSL yet.
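GetMainPhase itself isn't shown in the snippet. A hedged C++ sketch of what a Henyey-Greenstein combination could look like; the weights and g values below are made-up placeholders, not the values actually used:

```cpp
#include <cassert>
#include <cmath>

// Henyey-Greenstein phase function. cosTheta is the LdotE dot product from
// the snippet above; g in (-1,1) controls forward (g>0) vs backward (g<0)
// scattering. Integrates to 1 over the sphere.
float hgPhase(float cosTheta, float g) {
    const float pi = 3.14159265f;
    float g2 = g * g;
    return (1.0f - g2) /
           (4.0f * pi * std::pow(1.0f + g2 - 2.0f * g * cosTheta, 1.5f));
}

// A combination of HG lobes like the post describes: a strong forward lobe
// plus a weaker backward one (placeholder weights/g values).
float mainPhase(float cosTheta) {
    return 0.7f * hgPhase(cosTheta,  0.8f) +
           0.3f * hgPhase(cosTheta, -0.2f);
}
```

With g = 0 the function degenerates to the isotropic 1/(4*pi), which is a handy sanity check when tuning the lobes.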
  10. The sampling itself looks a bit odd, like it's stretched towards the screen centre. And yes, the view direction is samplePoint - eyePosition, but as you move along a ray that direction stays the same, so there's no need to recalculate it. As for the dark edges, it looks a bit like the cloud gets brighter the thicker it is. What I do for composition:

      vec3 color = CloudColor.rgb + CloudColor.a * background.rgb;

      For the raymarch I assume you go front to back, so start with extinction = 1.0 and inScatter (the cloud colour) = 0.0, then for each step:

      float sigT   = cloudDensityAtSamplePoint * someValue; // someValue is just a scaling to account for step length, overall density, ...
      float curExt = exp(-sigT);
      vec3 curInScatter = ...; // compute lighting for that sample point
      inScatter  = inScatter + sigT * curInScatter * extinction; // new overall in-scatter amount
      extinction *= curExt;                                      // new overall extinction

      vec4 finalCloudColor = vec4(inScatter * LightColor, extinction);
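That accumulation loop can be exercised on the CPU. A minimal C++ sketch of the same front-to-back scheme, with the per-sample lighting simplified to a constant so it is testable (an assumption; in the shader it is computed per sample point):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct CloudResult { float inScatter; float extinction; };

// Front-to-back accumulation over a list of per-sample densities,
// following the shader loop above.
CloudResult marchFrontToBack(const std::vector<float>& densities,
                             float densityScale, float sampleLighting) {
    float extinction = 1.0f;  // overall transmittance so far
    float inScatter  = 0.0f;  // overall in-scattered light so far
    for (float d : densities) {
        float sigT   = d * densityScale;  // optical depth of this step
        float curExt = std::exp(-sigT);   // transmittance of this step
        inScatter  += sigT * sampleLighting * extinction;
        extinction *= curExt;
    }
    return { inScatter, extinction };
}
```

Note how new light is weighted by the extinction accumulated so far: samples behind thick cloud contribute less, which is exactly what front-to-back compositing buys you (and what enables the early-out once extinction is near zero).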
  11. For the volume texture lookup, just use world space, or whatever space your sampling ray is in. For the cloud-density texture, the 2D one, just use the sampling x/y coordinates with some shift and scaling. As my world is centred at (0,0,0), I centre that 2D texture on the same point and scale it so it covers the visible area. The HG phase function uses the view direction and the light direction. I had to play with them a bit, so I ended up with 3 HG phase functions: one for the forward part, one for the backward part and one for ambient, with different g values. Not sure about the dark borders you mention, but if they are at the cloud edges, where the clouds are only partly visible, that might be a blending error; try using premultiplied alpha there. Also, there is a Frostbite paper about improving the scattering equations: https://media.contentapi.ea.com/content/dam/eacom/frostbite/files/s2016_pbs_frostbite_sky_clouds.pdf
  12. A sphere shape would give you better results, as it produces a "natural" horizon. It should be pretty easy to do if you're rendering the clouds as a full-screen effect and writing them out to an FBO. Inside the shader, just trace from the eye against a sphere centred below your terrain, so that you only see a small upper part of it; the intersection is your trace start. Just be aware that the height inside the cloud is then no longer along the global up axis, but the height above the traced sphere, measured in the direction towards the sphere centre. For performance, early-out in the shader if a ray points way below the horizon. Another thing I suggest is sphere/shell-like sampling: from the trace start point on, sample in shells, not in height slices. That (for me at least) reduced sampling errors and still looks good when you're inside the cloud layer. It also makes exponential sampling distances easier.
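The trace-start computation above is a standard ray-sphere intersection. A minimal sketch, assuming the camera sits inside the sphere (it does in this setup, since the sphere's top lies above the camera at cloud-bottom height), so the far root is the one we want:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)  { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Distance along a ray (origin ro, normalized direction rd) to a sphere of
// radius r centred at c; returns -1 on a miss. The cloud trace starts at
// this hit point.
float raySphere(Vec3 ro, Vec3 rd, Vec3 c, float r) {
    Vec3  oc = sub(ro, c);
    float b  = dot3(oc, rd);
    float h  = b * b - (dot3(oc, oc) - r * r);
    if (h < 0.0f) return -1.0f;        // ray misses the sphere entirely
    float t = -b + std::sqrt(h);       // far root: valid when inside the sphere
    return t < 0.0f ? -1.0f : t;       // sphere entirely behind the ray
}
```

Example: with the camera at the origin, a sphere centred 10 units below it and radius 12, a straight-up ray starts its trace 2 units above the camera, which is the cloud-bottom height in this toy setup.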
  13. Ryokeen

    AVX2 support in Visual Studio

    As a side note, I would not recommend relying on the compilers' automatic vectorization. I did a bunch of tests with various ones (gcc, g++, MSVC 2015, MSVC 2017 with both the default and the clang toolchain), and yes, it does work, but it breaks quite easily. So if you want it for speed, you should use intrinsics. Also, at least MSVC generated several code paths with cpuid checks, but only for the auto-vectorized code.
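For illustration, an explicit-intrinsics loop looks like this. It is sketched here with SSE2 (baseline on x86-64, so it compiles without extra flags); the AVX2 version of the same loop would use `__m256`, `_mm256_loadu_ps` / `_mm256_add_ps` from `<immintrin.h>` and step by 8 floats instead of 4:

```cpp
#include <cassert>
#include <emmintrin.h>  // SSE2 intrinsics

// Explicit-SIMD add of two float arrays; count must be a multiple of 4 here
// (a real implementation would handle the scalar tail).
void addArrays(const float* a, const float* b, float* out, int count) {
    for (int i = 0; i < count; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}
```

Unlike auto-vectorized code, this cannot silently fall back to scalar when the compiler's heuristics change, which is exactly the fragility the post is warning about.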
  14. Any chance of some new papers? I would like to read about the improvements you made; sadly I can't be there.
  15. Ryokeen

    UV map blocky voxel models

    What comes to my mind is the following: http://vcg.isti.cnr.it/volume-encoded-uv-maps/ I haven't really looked into it myself, but from a quick glimpse it might be something useful.