About FreneticPonE

Personal Information

  • Role
    Creative Director

  1. FreneticPonE

    Back-projection soft shadows

    If you're going for that big of an area light source then you're basically doing global illumination, so I'd skip direct shadowing and try out voxel cone tracing or signed distance field tracing. Actually, SDF tracing is probably what you're looking for: unlike cone tracing, you can check the sign to see if you're inside a mesh, so your empty-voxel problem is solved. Epic did an experiment with using this to shadow VPLs. It worked out pretty well, until they abruptly cancelled those experiments.
  2. FreneticPonE

    Distance Fields

    Yeah, in order to save a lot of computation Epic currently generates distance fields offline. There's a runtime-generated global distance field as well, though at lower resolution. I don't know how many mip levels they use, though it has to be bounded. Nor am I sure how they loop through overlapping pre-computed volumes: which one do you test first? Anyway, here's a great talk on using all this distance field stuff for both geometry and lighting, done in realtime in a shipping game:
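To make the overlap question concrete: if per-object fields are combined with min(), the order you test them in doesn't matter. A minimal sketch of that idea (not Epic's actual scheme; sphere_sdf and global_sdf are made-up names for illustration):

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside."""
    return math.dist(p, center) - radius

def global_sdf(p, objects):
    """Global field as the min over all per-object fields.
    min() is commutative, so overlapping volumes can be tested
    in any order and still give the same answer."""
    return min(sdf(p) for sdf in objects)

objects = [
    lambda p: sphere_sdf(p, (0.0, 0.0, 0.0), 1.0),
    lambda p: sphere_sdf(p, (1.5, 0.0, 0.0), 1.0),  # overlaps the first
]

# A point inside the overlap region is inside the union (negative distance).
print(global_sdf((0.75, 0.0, 0.0), objects))
```

In a real engine the min would only run over the handful of volumes whose bounds contain the sample point, but the commutativity is what makes the looping order a non-issue.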
  3. FreneticPonE

    Back-projection soft shadows

    Unfortunately, area shadows are basically an unsolved problem right now. Signed distance fields can be fast, but get expensive with heavy animation or fine detail. Cone tracing would probably be leaky and unstable for shadows. Right now there's just not a great answer.

    But a good answer might be moment shadow mapping. It's the fastest/best way I know of to get a large filter radius in renderable time. There are hundreds of pages to read on the stuff: the original paper, anti-aliased moment shadow mapping, some other papers about substantially reducing light leak, etc. But the point is that soft shadow mapping with a large filter is doable on the relative cheap with it.

    Edit: as far as I recall, temporally supersampled shadows have been tried before, for anti-aliasing. But you'd have to redo an entire temporal reprojection pipeline for each light to avoid obvious ghosting and so on, just like visibility sampling in temporal AA. Probably not worth it, either in production time or in milliseconds.
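The reason a large filter radius comes cheap here is that the stored moment vector filters linearly, so the moment texture can be blurred or mipped like an ordinary color texture instead of taking many comparison taps per pixel. A tiny sketch of just that linearity property (illustrative only, not the full moment shadow mapping reconstruction):

```python
import numpy as np

def moments(z):
    """Per-texel moment vector (z, z^2, z^3, z^4) of the stored depth."""
    return np.array([z, z**2, z**3, z**4])

# Two shadow-map texels with different occluder depths.
texels = [0.3, 0.7]

# Blurring the moment texture (here: averaging the two texels)...
filtered = np.mean([moments(z) for z in texels], axis=0)

# ...gives exactly the moments of the mixed depth distribution, i.e. what
# a 50/50 blend of the two texels' depth distributions represents. That is
# why moment maps can be pre-blurred or mipped up front.
mixture = 0.5 * moments(0.3) + 0.5 * moments(0.7)
assert np.allclose(filtered, mixture)
print(filtered)
```

PCF has no such property: the depth comparison is nonlinear, so every widening of the kernel costs more taps at shading time.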
  4. FreneticPonE

    Back-projection soft shadows

    To me it's always seemed a bit silly, slow, and prone to light leak. You could build some sort of acceleration structure out of the shadow map and in effect cone trace or raytrace through it like an SDF. But it's still going to leak light, and it's also going to be quite slow. Frankly, I'm just hoping whatever Ubisoft did for Far Cry 5 shows up at SIGGRAPH or something soon. They've got variable-penumbra sun shadows that seem very, very fast, have a light shape (though I don't know why they chose a hexagonal one), and look good. Fast enough to run on an Xbox One, if I'm remembering right.
  5. There's even dithering in offline stuff. It ends up being a fairly standard thing to do.
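A minimal ordered-dithering sketch with a 4x4 Bayer matrix, one common way that standard trick is done (the matrix and normalization are the textbook ones; everything else is illustrative):

```python
import numpy as np

# Classic 4x4 Bayer matrix, normalized to thresholds in [0, 1).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def dither(image):
    """Ordered (Bayer) dithering: threshold each pixel against a tiled
    pattern, so a flat grey turns into a structured on/off pattern
    whose average preserves the original intensity."""
    h, w = image.shape
    thresholds = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (image > thresholds).astype(float)

# A flat 50% grey dithers to a pattern with half the pixels on.
grey = np.full((4, 4), 0.5)
out = dither(grey)
print(out.mean())
```

The same thresholding idea shows up in realtime work for hiding banding in gradients, and, as noted above, in offline renderers too.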
  6. FreneticPonE

    Have you ever had your game idea stolen?

    Yes, but I don't care. Me and another guy came up with modern survival games as a mod for Oblivion way back when. It rolled on up from there; great for them. We didn't do much of anything beyond that first half-assed mod, and I can't even remember what we did with the mod itself. It's the work beyond that first idea that also counted here: the execution.

    Just look at Day Z King of the Hill, the first modern battle royale multiplayer out there. The devs didn't put in the work, didn't put in the thousand other ideas that really build on the first to make it into something great. So now PUBG and Fortnite are the winners, because they did do that.

    Don't get me wrong, the initial idea is important. A lot of people, especially people that "do the work" for a living, vastly underestimate how important that founding idea is. You can put all the hard work you want into an idea, but if it's stupid at its base it's going nowhere. But you also have to put in the work to get something out. You need both, so even if someone "steals" your idea and succeeds, it's not like they didn't do any work themselves.

    So trust me when I say all your plans are going to look silly at some point. You're going to get partway into building the game and realize this thing isn't working, so you have to rearrange this and change that, and eventually it's a mess and you wonder how anyone ever ships a game at all. That's the work part. Just worry about getting over that part before worrying about how great your initial idea is.
  7. FreneticPonE

    DirectX12 adds a Ray Tracing API

    Welp, here's about all you need to know: "Single ray per pixel, 5 ms @ 1080p on a very high end graphics card, single sample termination, ambient occlusion with geometry sampling only." Which of course is really noisy, so then you get to add denoising overhead on top of that. Oh, and it's all static too. So, yeah, performance is definitely not realtime today, and probably not tomorrow or next gen either. I really don't understand why DirectX needs its own raytracing API in the first place.
  8. Precision (t-junction) issues might be at fault. Is there a debug view showing the triangles, so you can see if the cracks line up with their edges? Basically, the internal mesh rendering on these specific GPUs could be applying whatever precision it thinks best to your triangles, altering them slightly through quantization error and letting tiny gaps bleed through.
  9. The Nvidia paper is... unreliable. Cone tracing is potentially fast; the problem is that light leak makes it hard to implement reliably. By cone tracing's nature, the farther you trace the more light leak you get, but the shorter the cone you trace, the less light you get. Overall it was an idea that seemed like the future two-plus years ago but has since fallen out of favor due to those weaknesses.

    There are a lot of other GI techniques to consider depending on your requirements. E.g., is the environment static, highly deformable, or runtime generated? Does light need to move fast, or can it move slowly (e.g. a slow time of day)?

    That being said, signed distance field tracing plus some version of lightcuts/many lights looks like it could, potentially, do what cone tracing once promised in realtime. Here's a nice presentation on signed distance fields, which are essentially the sparse voxel octree from cone tracing except you "sphere trace" instead of tracing a cone, the benefit being no light leak. Lightcuts/VPLs/"many lights" would be the other half of the equation. Here's a nice presentation from Square Enix, wherein the biggest cost in their test scene is their choice of "adaptive imperfect shadow maps", a really hacky and slow way to do what SDF tracing can do easier and faster.
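For reference, sphere tracing is just marching a ray by the distance-field value at each step. A minimal sketch against a single analytic sphere SDF (names and constants are illustrative):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, t_max=100.0):
    """March along the ray, stepping by the field value each time.
    Returns the hit distance t, or None on a miss. Each step is the
    distance to the nearest surface, so the ray can never tunnel
    through geometry -- which is why this doesn't leak light."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t
        t += d
        if t > t_max:
            break
    return None

t = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
print(t)  # front of the sphere, ~4 units down the ray
```

For soft shadows the same loop is extended to track the closest miss distance along the ray (the min of d/t), which gives the penumbra estimate the presentations describe.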
  10. You don't need to do virtual texturing with one master texture for the whole world. You'll need to do blending again, but you can use it almost exactly like traditional texturing, without worrying about texel density or disc space at all. The latest Trials game does this (at least, that's what their GDC presentation indicated).
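The core of any virtual texturing scheme, with or without a master texture, is the indirection lookup from virtual pages to resident cache pages. A sketch of that step (the page counts and table layout here are assumptions for illustration, not the Trials implementation):

```python
def virtual_to_physical(u, v, page_table, pages=16, cache_pages=4):
    """Map virtual UVs to physical-cache UVs through an indirection table.
    page_table maps a virtual page coordinate to a resident cache page;
    a missing entry would enqueue a streaming request in a real system."""
    pu, pv = int(u * pages), int(v * pages)   # which virtual page
    fu, fv = u * pages - pu, v * pages - pv   # position inside that page
    cu, cv = page_table[(pu, pv)]             # indirection lookup
    return (cu + fu) / cache_pages, (cv + fv) / cache_pages

# Hypothetical table: virtual page (2, 3) is resident in cache slot (0, 1).
table = {(2, 3): (0, 1)}
print(virtual_to_physical(0.15625, 0.21875, table))
```

On the GPU the table is itself a small indirection texture and the division by page counts is folded into a scale/bias, but the mapping is the same.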
  11. FreneticPonE

    Blending local-global envmaps

    Remedy also had voxelized pointers indicating which probes are relevant where. Heck, you could go a step further (or does Remedy do this already?) and store an SH probe with channels pointing towards the relevant probes to blend. It'd be great for windows and the like, where blending in the relevant outdoor probes would help a lot. You could even make the entire system realtime, or near to it: Infinite Warfare used deferred probe rendering for realtime GI, and Shadow Warrior 2 had procedurally generated levels lit at creation time. I seriously hope those are the right links; I'm on slow public wifi at the moment. Regardless, a nice trick is to use SH probes with, say, ambient occlusion info or static lighting info to correct cubemap lighting. This way you can use cubemaps for both specular and diffuse, and then at least somewhat correct them later.
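One way the "blend the relevant probes" step could look, sketched with inverse-distance weights over hypothetical probes (Remedy's actual weighting isn't public at this level of detail; the RGB values here stand in for full SH coefficient sets, which would blend per-coefficient the same way):

```python
import numpy as np

# Hypothetical probes: a position plus an RGB ambient value.
probes = [
    {"pos": np.array([0.0, 0.0, 0.0]), "rgb": np.array([1.0, 0.0, 0.0])},
    {"pos": np.array([4.0, 0.0, 0.0]), "rgb": np.array([0.0, 0.0, 1.0])},
]

def blend_probes(p, probes, eps=1e-6):
    """Inverse-distance weighted blend of the probes a voxel points at.
    Weights are normalized so the result stays energy-conserving."""
    w = np.array([1.0 / (np.linalg.norm(p - pr["pos"]) + eps) for pr in probes])
    w /= w.sum()
    return sum(wi * pr["rgb"] for wi, pr in zip(w, probes))

# Halfway between the two probes: equal weights, so a 50/50 mix.
print(blend_probes(np.array([2.0, 0.0, 0.0]), probes))
```

The voxel grid's job is just to shrink the probe list fed into this blend to the few that are actually visible from that cell, which is what stops indoor probes bleeding through walls.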
  12. FreneticPonE

    Spherical Harmonics and Lightmaps

    Oof, I remember that second one. At that point more traditional path tracing is just as fast or faster, doesn't have any missing-data problems, and would probably use less memory, since there'd be no multiple copies of the same data.
  13. FreneticPonE

    Spherical Harmonics and Lightmaps

    Cubemaps only offer low-frequency spatial data, ultra low frequency no matter how much angular frequency they offer. Invariably, the farther you get from the sample point, or if the shading point is just behind a pole or something, the less correct the data will be, no matter how high the resolution. Lightmaps are ultra-high-frequency spatial data; even if their angular data is low frequency, they can still be more correct than a cubemap, no matter how many tricks you pull. And SSAO only works with on-screen data, and only for darkening things. Most modern SH/SG lightmaps are used to somewhat correct or supplement cubemaps.
  14. FreneticPonE

    Spherical Harmonics and Lightmaps

    Cubemaps are only sampled from one spatial point, maybe two or so if you're blending across. An H-basis lightmap, say, samples light at every texel. You just contribute whatever specular response you can from your spherical harmonics to compensate for the fact that the cubemap is almost certainly going to be incorrect to some degree. For rough surfaces the entire specular response can come from the lightmap, and thus (except for dynamic stuff) be entirely correct position-wise. Doing all this helps keep your diffuse color correlated with your specular response, which become uncorrelated the more incorrect your cubemaps are.

    BTW, if you're curious, I'd consider "state of the art" to be Remedy's sparse SH grid used in Quantum Break. The idea is to voxelize your level into a sparse voxel grid, then place SH (or SG, or whatever) probes at each relevant grid point. The overall spatial resolution is less than a lightmap's, but it's much easier to change the lighting in realtime, and the same exact lighting terms are used for static and dynamic objects. It might not seem intuitive, but having a uniform lighting response across all objects gives a nicer look than the kind of disjointed look you get from high-detail lightmaps sitting right next to dynamic objects with less detailed indirect lighting.
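For reference, the diffuse lookup from such a probe might look like this sketch: plain band-0/band-1 SH evaluated with the standard Ramamoorthi/Hanrahan cosine-lobe weights (one common convention; Remedy's exact basis and sign/ordering choices may differ):

```python
import numpy as np

def sh_basis(n):
    """Band-0 and band-1 real SH basis evaluated at unit normal n.
    Ordering here is (Y00, Y1-1, Y10, Y11); conventions vary."""
    x, y, z = n
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

# Clamped-cosine convolution weights for irradiance: (pi, 2pi/3 x3).
A = np.array([3.141593, 2.094395, 2.094395, 2.094395])

def irradiance(sh_coeffs, n):
    """Diffuse irradiance from 4 SH coefficients (one channel shown)."""
    return (A * sh_basis(n)) @ sh_coeffs

# A purely constant (band-0 only) environment lights every normal equally.
L = np.array([1.0, 0.0, 0.0, 0.0])
up = irradiance(L, (0.0, 0.0, 1.0))
side = irradiance(L, (1.0, 0.0, 0.0))
assert abs(up - side) < 1e-9
print(up)
```

Both a lightmap texel and a grid probe store exactly these coefficients; the two approaches differ only in where the coefficients live and how densely they're placed.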
  15. Not randomly; there are patterns and such. Here's a nice tutorial instead of explaining it here. As for variance shadow maps, they're very fast at low resolutions but scale badly: 4x the resolution = 4x the cost. They also leak light, and fixing that sends you down an infinite hole of ever-longer papers. I'd definitely stick with PCF first and try playing around with offset biases to clean up artifacts.
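A minimal PCF sketch with a depth bias, using a plain grid kernel (real implementations usually swap in the rotated or Poisson-disk offset patterns the tutorials cover; all names here are illustrative):

```python
import numpy as np

def pcf(shadow_map, uv, receiver_depth, kernel=1, bias=0.005):
    """Percentage-closer filtering: average a grid of depth comparisons
    around the sample point. `bias` offsets the comparison to hide
    shadow acne; clamping keeps taps inside the map."""
    h, w = shadow_map.shape
    x, y = int(uv[0] * w), int(uv[1] * h)
    hits = []
    for dy in range(-kernel, kernel + 1):
        for dx in range(-kernel, kernel + 1):
            sx = min(max(x + dx, 0), w - 1)
            sy = min(max(y + dy, 0), h - 1)
            lit = receiver_depth - bias <= shadow_map[sy, sx]
            hits.append(1.0 if lit else 0.0)
    return sum(hits) / len(hits)

# A map whose left half holds a near occluder (depth 0.2): a receiver at
# depth 0.5 near the boundary gets a softened, fractional visibility.
sm = np.ones((8, 8))
sm[:, :4] = 0.2
print(pcf(sm, (0.5, 0.5), 0.5))
```

The comparisons are nonlinear, so unlike variance or moment maps this cannot be prefiltered; the cost grows with the kernel, which is the trade-off discussed above.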