Frantic PonE


  1. Frantic PonE

    Is my parallax mapping ok or not?

     TBH, if you made the second screenshot brighter it'd help.
  2. By next year even, though I'm personally fascinated by how a patent applied for by AMD proposes to do so. Modifying the texturing units to allow an inner raytracing loop to be accelerated could mean it ends up far more programmable than Nvidia's "bounding box only!" solution. One could, potentially, trace through distance fields, mipped texture SVOs, or whatever structure one wanted. And being in what could be both consoles would mean this would become the new standard, and RTX itself could end up a deprecated experiment. Of course that's a big assumption, which is the trouble with trying to guess anything before solid information is out.
  3. Yeah, this is fine and dandy as an idea. UE4's signed distance fields (essentially sparse voxel octrees) have a dual global/local model, and change as minimally as possible in much the manner you propose. The idea is there's a high-level "global" distance field that only contains a few distance mips, to keep updates to the global model cheap. Then each model's individual voxel tree is stored in a separate 3D texture, which offers much higher resolution but doesn't change (no skinned updating, etc.) and can be instanced. It works fairly well; signed distance fields are so fast to trace through that you don't even need specialized hardware to do it (heck, The Last of Us did it with capsules all the way back on the PS3).

     That being said, this "updating the acceleration structure" thing is a major area of unsexy research. Everyone wants to "cast rays faster" (which is what you'll see most raytracing research on), but whether it's a bounding volume hierarchy or some voxel structure or other, updating it can quickly become the performance bottleneck. It's part of the reason bounding volume hierarchies are being used for realtime raytracing instead; updating distance fields gets progressively more expensive the more detail you have.

     Fundamentally, though, you're on the right track. Ray intersection tests of any kind don't actually need transforms if the object being intersected doesn't change relative to itself, regardless of what sort of structure you're tracing through. You can throw static objects around all you want and only have to update the "middle" portion of whatever structure you're using (where the object is relative to other objects). If you can then come up with a fast way of revoxelizing smaller-scale transforms, e.g. a tree blowing in the wind or a character animating, then you're relatively good to go. A sketch of the ray-transform trick follows below.
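     A minimal C++ sketch of that trick, with hypothetical type and function names (traceLocal stands in for whatever object-space tracer you already have): move the ray into the object's local space, trace the untouched local structure, then bring the hit back to world space.

```cpp
// Sketch only: a rigid instance transform moves the ray, not the acceleration
// structure, so a static object's local BVH/SDF/SVO is never rebuilt.
struct Vec3 { float x, y, z; };

// 3x4 affine transform (rotation + translation), row-major.
struct Mat34 {
    float m[3][4];
    Vec3 point(const Vec3& p) const {   // transforms a position (uses translation)
        return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
                 m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
                 m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
    }
    Vec3 vector(const Vec3& v) const {  // transforms a direction (no translation)
        return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z,
                 m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z,
                 m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z };
    }
};

struct Ray { Vec3 origin, dir; };
struct Hit { bool valid; float t; Vec3 position, normal; };

struct LocalAccel { /* static BVH / SDF / SVO data, shared by all instances */ };

// Stand-in for your existing object-space tracer.
Hit traceLocal(const LocalAccel&, const Ray&) { return {false, 0.f, {}, {}}; }

struct Instance {
    const LocalAccel* accel;  // never rebuilt when the object moves
    Mat34 localToWorld;       // the only thing that changes per frame
    Mat34 worldToLocal;       // cached inverse of the above
};

Hit traceInstance(const Ray& worldRay, const Instance& inst) {
    // Transform the ray into object space instead of transforming the object.
    Ray local { inst.worldToLocal.point(worldRay.origin),
                inst.worldToLocal.vector(worldRay.dir) };
    Hit hit = traceLocal(*inst.accel, local);
    if (hit.valid) {
        hit.position = inst.localToWorld.point(hit.position);
        hit.normal   = inst.localToWorld.vector(hit.normal);  // rigid: fine for normals
    }
    return hit;
}
```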
  4. Frantic PonE

    HDR10 programming for Windows

     Neither blocks it for OpenGL apps anymore. AMD hasn't for a long, long time, and Nvidia finally cracked like a week or two ago and delivered open driver support for it as well.
  5. SSR is extremely fast, and there are plenty of hacks to get it to sort of look good for water. Most of them work by choosing the previous valid result/nearest valid result/temporal reprojection, and then fading the effect on/off as the player tilts the camera towards/away from parallel to the water (a sketch of that fade is below). Here's a relevant blog post going into detail; most SSR water today looks a lot like this: http://remi-genin.fr/blog/screen-space-plane-indexed-reflection-in-ghost-recon-wildlands/ Regardless, just about every game I can think of this gen uses SSR for water if they use anything, at least at times (sometimes it'll be just a cubemap). It doesn't work for a lot of scenarios obviously, but it's just so bloody fast it's hard not to use.
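     A tiny sketch of what that tilt fade can look like, assuming a flat water plane; the names and thresholds here are illustrative, not from any particular engine:

```cpp
#include <algorithm>

// Fade SSR out as the view direction approaches parallel to the water plane,
// where screen-space rays have almost no depth information to march against.
// Blend toward a cubemap fallback with (1 - fade). All values are assumptions.
float ssrWaterFade(const float viewDir[3],      // normalized, camera -> pixel
                   const float waterNormal[3],  // normalized, usually (0, 1, 0)
                   float fadeStart = 0.35f,     // cosine where fading begins
                   float fadeEnd   = 0.10f)     // cosine where SSR is fully off
{
    // 1 when looking straight down at the water, 0 when grazing it.
    float cosAngle = -(viewDir[0] * waterNormal[0] +
                       viewDir[1] * waterNormal[1] +
                       viewDir[2] * waterNormal[2]);
    // Remap [fadeEnd, fadeStart] -> [0, 1] and clamp.
    return std::clamp((cosAngle - fadeEnd) / (fadeStart - fadeEnd), 0.0f, 1.0f);
}
```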
  6. Frantic PonE

    Questions about surfel

     Yeah, the way it uses raytracing is a bit silly and arbitrary, like they had to fit it in somehow because the research is sponsored by Nvidia. But thinking about it, a flat surfel list/G-buffer list could just be built by "dilating" the scene texels: choose super low-res mip maps so sample points are far likelier to overlap. That's the lazy, hacky way to do it; I'm sure there's some much more clever neighborhood-sorting thing to do (a rough sketch of the lazy version is below).
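     For what it's worth, here's a sketch of the lazy version in C++; quantizing surfel positions to a coarse world-space grid is the same idea as sampling a very low-res mip, just done spatially. Everything here is a hypothetical illustration:

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Surfel { float pos[3]; float normal[3]; /* albedo, irradiance, ... */ };

// Collapse surfels whose positions fall in the same coarse grid cell, so
// overlapping sample points are shaded once instead of once per texel.
std::vector<Surfel> deduplicate(const std::vector<Surfel>& surfels, float cellSize)
{
    auto cellKey = [cellSize](const Surfel& s) {
        auto q = [cellSize](float v) {  // quantize one coordinate to 21 bits
            return static_cast<int64_t>(std::floor(v / cellSize)) & 0x1FFFFF;
        };
        return (q(s.pos[0]) << 42) | (q(s.pos[1]) << 21) | q(s.pos[2]);
    };
    std::unordered_map<int64_t, Surfel> cells;
    for (const Surfel& s : surfels)
        cells.emplace(cellKey(s), s);  // first surfel in a cell wins
    std::vector<Surfel> out;
    out.reserve(cells.size());
    for (const auto& [key, s] : cells) out.push_back(s);
    return out;
}
```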
  7. Frantic PonE

    Questions about surfel

     Surfel de-duplication is definitely a good idea; shading becomes nigh half the cost here. There's another paper that builds a bit on that first one, though I forget if it mentions exactly how it does de-duplication or if that's just glossed over: https://morgan3d.github.io/articles/2019-04-01-ddgi/ Either way it also has a neat hack for reducing light leak, which is otherwise a big concern when using probes for GI. And since I'm on a roll, it's better to store the resulting probes using ambient dice: https://www.ppsloan.org/publications/AmbientDice.pdf which provide better results for memory/performance than other bases, and can be used for specular approximation without costly raytracing: https://torust.me/2019/06/25/ambient-dice-specular.html
  8. This one's kind of the definitive GI probe talk for modern-day rendering. They separate diffuse GI probes, whose density is based on scene complexity, from cubemap probes, which is also standard practice.
  9. Frantic PonE

    What GI is usable right now?

     Err, the answer is, inevitably, roll your own complex thing. The three questions I can think of are: How fast is the camera moving, flying like a jet or just a car? How much indoor/outdoor is there? Is there only a handful of interiors that are generally close to outdoors, or are there ultra-complex interiors digging down into caves? And how much do you need in the way of reflections (please don't say it's a lot, those are hard)?

     Now onto solutions. This is a very good overview of a low-cost, static, diffuse-only, multi-bounce solution that has some content restrictions: it's only for big, relatively mid-complexity environments, and it can't handle dramatically fast light changes. There are also some entirely unnecessary steps, like raytracing every probe every frame just to handle doors/windows and in the hopes that it can handle dynamic scenes (artists will easily break this): https://morgan3d.github.io/articles/2019-04-01-ddgi/ You can handle doors/windows in other ways: keep two versions of the lightprobes near doors/windows, one for open and one for closed, or raytrace only the probes near doors/windows against geo primitives, and only when those doors/windows open or close.

     The above is a somewhat improved and better-documented version of this, included in the interest of completeness and to show you absolutely don't need that raytracing part: https://t.co/7fii07TJzl and video: https://www.gdcvault.com/play/1023273/Global-Illumination-in-Tom-Clancy

     Relatedly, the Call of Duty guys did something similar for a level in Infinite Warfare, but with cubemaps! While this consumes more memory, and possibly more compute depending on how often you update the lighting, you get specular lighting out of it. For the life of me I can't find the paper, but the idea was relatively simple: place cubemap lightprobes as normal, store each cubemap's visibility in textures as a G-buffer, relight those G-buffers with just diffuse Lambertian lighting in a round-robin fashion (do N cubemaps a frame; see the sketch below), then take that output as a lit cubemap and use their clever filtering to prefilter it: https://research.activision.com/publications/archives/fast-filtering-of-reflection-probes Obviously, again, only slow relighting, and the bigger/more complex your environment, the costlier and worse it gets. But it's easily combinable with the above papers, including the flat surfel list/G-buffer strategy from the previous two posts. The Call of Duty one had a static grid of lightprobes with essentially ambient occlusion baked in, to help correct for lighter/darker parts of the scene the sparser cubemaps didn't reach, but you could use the lightgrid ideas from the above paper to replace them with smaller diffuse lightprobes that do the same thing but look better.

     The basic thing to remember about all of these, including fine realtime-scale details, is this seminal paper, i.e. that which all shall copy this generation cause damn it's neat: https://users.aalto.fi/~silvena4/Publications/SIGGRAPH_2015_Remedy_Notes.pdf It isn't scalable to open worlds and has to be precomputed (though it is somewhat dynamic). Still, all the above and more owe a lot to it, so it's a great read.

     Ok, that was exhausting. The only other idea is the future! Well, a fast version of the future. Distance field occlusion is awesome for computing visibility, and can be changed... sooomewhat in realtime (like the above, not very fast, but fast enough for slow changes): http://advances.realtimerendering.com/s2015/DynamicOcclusionWithSignedDistanceFields.pdf One can use this with... whatever else there is for actual lighting. Re-calculate a skybox environment light every once in a while and use this for occlusion. Do virtual point lights à la: http://www.jp.square-enix.com/tech/library/pdf/Virtual Spherical Gaussian Lights for Real-time Glossy Indirect Illumination (PG2015).pdf for a single light bounce, and maybe combine that with a grid of lightprobes that changes somewhat (relight a bit based on time of day or whatever), like the above, for secondary bounces? The heightfield lighting in Epic's paper was never really more than an experiment; heightfield conetracing etc. could easily produce better distant GI, including distant reflections, with some more work, e.g. just apply the principles of screenspace ray/conetracing to the heightfield (same principle): https://www.tobias-franke.eu/publications/hermanns14ssct/hermanns14ssct_poster.pdf
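     A sketch of that round-robin relighting loop, with stand-in functions for the actual GPU passes (relightDiffuse and prefilter here are assumptions, not a real API):

```cpp
#include <cstddef>
#include <vector>

struct ProbeGBuffer { /* albedo/normal/depth cubemaps, captured offline */ };
struct Probe {
    ProbeGBuffer gbuffer;
    /* lit, prefiltered cubemap lives here */
};

void relightDiffuse(Probe&) { /* Lambert-only relight of the stored G-buffer */ }
void prefilter(Probe&)      { /* e.g. Activision-style fast prefiltering    */ }

// Call once per frame: only N probes pay the relight cost, so lighting
// changes propagate slowly but the per-frame cost stays fixed and small.
void updateProbes(std::vector<Probe>& probes, std::size_t probesPerFrame,
                  std::size_t& cursor)
{
    if (probes.empty()) return;
    for (std::size_t i = 0; i < probesPerFrame; ++i) {
        Probe& p = probes[cursor];
        relightDiffuse(p);
        prefilter(p);
        cursor = (cursor + 1) % probes.size();
    }
}
```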
  10. EA put out a nifty, and fairly complete, overview of their sky/atmospheric scattering, etc.: https://www.ea.com/frostbite/news/physically-based-sky-atmosphere-and-cloud-rendering And the PDF link: https://media.contentapi.ea.com/content/dam/eacom/frostbite/files/s2016-pbs-frostbite-sky-clouds-new.pdf Here's their froxel fog for near rendering too, as a bonus: https://www.ea.com/frostbite/news/physically-based-unified-volumetric-rendering-in-frostbite
  11. Frantic PonE

    Confused on GPU voxelization

    You don't raster the whole scene, just the voxel.
  12. Frantic PonE

    RayTracing + Displacement/Tesselation

     Thus I'm thinking a good solution for shadowing might be constraining tessellation amplitude and using screenspace shadowing/heightfield self-shadowing (sketched below). Precision issues can already be encountered with other raytracing effects, and can be overcome with screenspace gap filling already, so this just seems like one more to add to the list. Besides, tessellation already has texture-stretching problems.
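     A sketch of the heightfield self-shadow march I mean, in C++ with a stand-in height sampler (the names, bias, and step count are all illustrative):

```cpp
// March from the shaded texel toward the light in heightmap space; the point
// is shadowed if the heightfield ever rises above the ray. Hard shadows only.
float sampleHeight(float u, float v) { return 0.0f; }  // replace with a real
                                                       // displacement-map fetch in [0, 1]

float heightfieldShadow(float u, float v,        // texel being shaded
                        const float lightUV[2],  // light dir projected into UV space
                        float lightH,            // light dir height gain over that UV run
                        int steps = 32)
{
    float h  = sampleHeight(u, v) + 1e-3f;       // small bias against shadow acne
    float du = lightUV[0] / steps;
    float dv = lightUV[1] / steps;
    float dh = lightH / steps;
    for (int i = 1; i <= steps; ++i) {
        float rayH = h + dh * i;
        if (rayH >= 1.0f) break;                 // ray exited the top of the volume
        if (sampleHeight(u + du * i, v + dv * i) > rayH)
            return 0.0f;                         // occluded
    }
    return 1.0f;                                 // lit
}
```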
  13. Frantic PonE

    Efficient octree for dynamic scene

     BVHs are generally easier. Sparse octrees are really neat and can speed up tracing a lot; they're the same as a signed distance field, so you can estimate soft shadows with one sample, cone trace, etc. But they have scaling/updating problems that aren't solved yet. On the other hand there's been a ton of work on realtime BVHs lately, cause raytracing. Here's a very nice BVH updating paper: http://box2d.org/files/GDC2019/ErinCatto_DynamicBVH_Full.pdf (the baseline refit idea is sketched below). And as a bonus, a more efficient shape than a box, the axis-aligned bounding octahedron: https://github.com/bryanmcnett/aabo
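     For reference, the cheapest form of "updating" a BVH is a refit: keep the topology and recompute the boxes bottom-up after things move (Catto's paper covers the smarter incremental schemes). A minimal sketch, assuming nodes are stored parents-before-children:

```cpp
#include <algorithm>
#include <vector>

struct AABB {
    float min[3], max[3];
    void grow(const AABB& b) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], b.min[i]);
            max[i] = std::max(max[i], b.max[i]);
        }
    }
};

struct Node {
    AABB box;
    int left = -1, right = -1;  // interior-node children; -1 on leaves
    int object = -1;            // leaf payload; -1 on interior nodes
};

// Refit in a single reverse pass: since children are stored after their
// parent, walking backwards visits every child before its parent.
void refit(std::vector<Node>& nodes, const std::vector<AABB>& objectBounds)
{
    for (int i = static_cast<int>(nodes.size()) - 1; i >= 0; --i) {
        Node& n = nodes[i];
        if (n.object >= 0) {
            n.box = objectBounds[n.object];  // leaf: take the moved object's box
        } else {
            n.box = nodes[n.left].box;       // interior: union of child boxes
            n.box.grow(nodes[n.right].box);
        }
    }
}
```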
  14. Frantic PonE

     SSR roughness

     The Call of Duty guys convolve the entire screen for GGX to use in both SSR and GGX translucency refraction, a nice double win there I'd say, if that's your question? Also, your rays should just be cast along a 2D heightfield for SSR: you have Z-depth to make your heightfield, which you just assume has a thickness of... some arbitrary number you choose for the sake of occlusion, maybe based upon the apparent pixel width of the object or whatever (a sketch of that march is below). If that's your question?
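     A sketch of that heightfield march, with a stand-in depth sampler; the thickness test is the arbitrary-number part (all names and parameters are illustrative):

```cpp
// Step a ray across the screen and compare its depth against the Z-buffer,
// treating each depth sample as a surface with an assumed thickness so rays
// don't falsely hit behind thin objects.
float sampleDepth(float u, float v) { return 1.0f; }  // replace with a real
                                                      // linear-depth Z-buffer fetch

// Returns true and writes hitUV if the ray strikes the depth heightfield.
bool ssrMarch(const float startUV[2], const float stepUV[2],  // UV start + per-step delta
              float startDepth, float stepDepth,              // depth start + per-step delta
              float thickness,                                // assumed surface thickness
              int steps, float hitUV[2])
{
    for (int i = 1; i <= steps; ++i) {
        float u = startUV[0] + stepUV[0] * i;
        float v = startUV[1] + stepUV[1] * i;
        if (u < 0.f || u > 1.f || v < 0.f || v > 1.f)
            return false;                       // ray left the screen
        float rayDepth   = startDepth + stepDepth * i;
        float sceneDepth = sampleDepth(u, v);
        // Hit: the ray passed behind the surface, but not deeper than the
        // thickness we granted it.
        if (rayDepth > sceneDepth && rayDepth < sceneDepth + thickness) {
            hitUV[0] = u; hitUV[1] = v;
            return true;
        }
    }
    return false;
}
```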
  15. Already liked, but here's a bump for sheer coolness, and so it doesn't just sink down without other interested people seeing it.