

Member Since 11 Apr 2005
Offline Last Active Aug 23 2016 02:31 AM

Posts I've Made

In Topic: Good tools for indoor-map editing / architecture

11 June 2016 - 07:55 AM

Again thanks for the replies, guys. Stubborn as I am, I'll try to wrestle through it myself to begin with, but I'll certainly keep this course in the back of my mind.


And good that you mention Sketchup. I completely forgot about that one; making a bunch of rooms and a window was indeed only minutes of work. Though I can't remember whether adding slightly more detail, like stairs, skirtings, rounded ceilings or beams, was possible. I guess it does that as well.

In Topic: Good tools for indoor-map editing / architecture

08 June 2016 - 08:12 AM

Sounds like Blender deserves a second chance then. And yeah, as for "takes 3/4 years to learn": I'm afraid I can't get around that, but it was worth a try; maybe some unconventional auto-magic software exists for these exact purposes.


Thanks for diving into detail on all those points!

In Topic: Doing SubSurfaceScattering / backlight

14 March 2016 - 02:19 PM

Thanks again guys!


Nice & clear presentation you posted there, Styves! Maybe not perfect, but using a good old Lookup Texture / BRDF instead sounds a whole lot easier! The only little problem is that when using a Deferred pipeline like I do, you also have to store which BRDF to use, assuming that besides skin we would have a couple more profiles. You could put all possible BRDF LUT textures into one big texture, and use one gBuffer parameter as an index/offset.
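That index/offset idea could be sketched like so; a minimal Python sketch assuming the per-profile LUTs are simply stacked vertically in one atlas texture (the function name and layout are my own, purely illustrative — the index itself would come from a gBuffer channel):

```python
def atlas_uv(brdf_index, u, v, num_brdfs):
    """Map a per-pixel BRDF index plus local LUT coordinates (u, v in [0,1])
    to coordinates in one big atlas, where slice i occupies the vertical
    band [i/num_brdfs, (i+1)/num_brdfs)."""
    return (u, (brdf_index + v) / num_brdfs)
```

So with, say, 4 profiles, a single small gBuffer value is enough to pick the band: `atlas_uv(2, 0.5, 0.0, 4)` lands at the start of the third slice.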


For larger, not-so-curvy surfaces, Pre-Integrated Shading won't do a whole lot, nor does it give us backlight. But for that I'd like to give the Frostbite approach a try. That probably means each vertex will get two extra attributes: "Curvature" (for fake SSS) and "Thickness" (for backlight). The only thing that still won't work is backlight coming from a lightmap or probe... Oh well, let's just make & check this out first!
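For the backlight half, something in the spirit of the cheap transmittance trick from the DICE/Frostbite talks (distort the light vector by the normal, raise the view-alignment to a power) could look like this; a sketch only, with made-up parameter defaults, assuming all vectors are normalized and point away from the surface:

```python
def back_light(light_dir, view_dir, normal, thickness,
               distortion=0.2, power=4.0, scale=1.0):
    """Cheap transmittance term: light leaking through from behind.
    thickness in [0,1]: 0 = paper thin, 1 = blocks everything."""
    # Bend the (surface-to-light) vector by the surface normal.
    lt = [l + n * distortion for l, n in zip(light_dir, normal)]
    # A viewer looking into the bent light direction sees the most leakage.
    d = max(-sum(v * w for v, w in zip(view_dir, lt)), 0.0)
    return (d ** power) * scale * (1.0 - thickness)
```

A fully thick pixel transmits nothing, while a thin curtain lit straight from behind, viewed head-on, gets a clearly non-zero term.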



Well, I'm informed now, thanks again!

In Topic: Doing SubSurfaceScattering / backlight

13 March 2016 - 06:21 AM

The Head demo is very impressive, although I must say the ultra quality of the mesh and textures themselves is doing at least half the job. Even with all the tricks disabled, it still looks good. And that's my main beef with these techniques: big efforts for little results. Apart from cutscenes, how many times do you really get face-to-face with a (non-gasmask-wearing) character that close, without dropping dead within the next 100 milliseconds, and get a chance to count the pimples?


At this point I have a blurred version of the screen available. But the question is whether I really need it (in case simpler tricks can do the job as well), and if so, how to mix between max-blurred and non-blurred pixels, given a normal, eye vector, et cetera. I bet the papers explain it, but I always have a hard time understanding them.



Personally I'm a bit more interested in environment usage. Curtains, plants, rubber hoses, chunks of ice, that kind of stuff. The Frostbite paper you showed (thanks!) seems better suited for that, at a low cost. Yet there are still a few problems:


* Precomputed thickness map is per object & doesn't work (well) for animated / morphing objects

Since I also want to use it for environment meshes, each mesh may need a unique map, as each of them has a unique shape. A bit expensive. I could bake the thickness into the vertices though...
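That vertex-baking idea could look like this at asset-build time; a brute-force sketch (a real baker would use a BVH and many sample rays), interpreting thickness as the distance a ray travels along the inverted vertex normal to the far side of the mesh:

```python
def ray_triangle(orig, direction, v0, v1, v2):
    """Moller-Trumbore intersection; returns hit distance t, or None."""
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    eps = 1e-8
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle
    inv = 1.0 / det
    tv = sub(orig, v0)
    u = dot(tv, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(tv, e1)
    v = dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None        # skip self-hits at the origin

def bake_thickness(verts, normals, tris, max_dist=1.0):
    """Per-vertex thickness: shoot one ray along the inverted vertex normal,
    take the nearest hit distance, and normalize it into [0, 1]."""
    out = []
    for pos, n in zip(verts, normals):
        d = [-x for x in n]
        best = max_dist
        for a, b, c in tris:
            t = ray_triangle(pos, d, verts[a], verts[b], verts[c])
            if t is not None and t < best:
                best = t
        out.append(best / max_dist)
    return out
```

For a 1-unit-thick slab with max_dist = 2.0, every vertex bakes to 0.5 — which can then go straight into a spare vertex attribute.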



Speaking of which, is the (float) thickness value an average over the pixel-normal hemisphere? Or just the distance a ray travels straight down to the other side of the mesh, using the inverted normal as its direction?



* Outer walls don't have a backside

Say I have a room made of snow/ice. Or wax, whatever. Geometry-wise there is nothing behind the walls, thus no backlight, obviously. But I still want to give them that "SSS look" somehow... Here you have to use internal scattering again, which brings me back to the original question: how? I could look into blurring, and/or do it the Unreal way, whatever that exactly is.



* Lightmaps / Ambient

Since ambient light is so important in my case, having backlight for realtime lights only would be a shame. Of course, with some artist smartness you can adapt the lights in your scene when there are translucent objects (which doesn't happen that often), but having the ability to somehow involve the backlight coming from the lightmap would be nice. Maybe it can be done like this:


- In the lighting pass (where you render spheres/cones to splat light on the screen - when using Deferred Rendering)...

- Render backsides of translucent stuff, as if they were light volumes as well

- Incoming color = lightmap pixel, incoming direction = inverted normal OR the global incoming direction baked into your lightmap, if you have it

- Perform the rest of the shader math, as described in the slideshow
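The per-pixel math of those steps might boil down to something like this; a Python sketch of the shading step only, where the function name and the view-dependent wrap term are my own invention (not from the slides), and the incoming direction is assumed to be just the inverted normal:

```python
def lightmap_backlight(lightmap_rgb, normal, view_dir, thickness, power=1.0):
    """Treat the lightmap texel 'behind' a translucent pixel as light
    arriving along the inverted normal, attenuated by baked thickness and
    a view alignment term. Vectors normalized, pointing off the surface."""
    wrap = max(sum(v * n for v, n in zip(view_dir, normal)), 0.0) ** power
    transmit = wrap * (1.0 - thickness)
    return tuple(c * transmit for c in lightmap_rgb)
```

E.g. a half-thick pixel viewed head-on would leak half of its lightmap color through.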




Excuse me for the vague questions. I always write this on the fly, not really knowing where the real issue is ;)

In Topic: ComputeShader Performance / Crashes

28 January 2016 - 12:55 PM

Thanks for taking time to wrestle through my code pieces Joe!


>> you forget to do a memory barrier on shared memory as well.

All right. Would adding "memoryBarrierShared()" in addition to "barrier()" do the job (to ensure the index array is done filling before starting the second half)?
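The pattern in question (fill a shared array in phase one, synchronize, read it in phase two) is the same as a CPU thread barrier; here is a Python analogy of what `barrier()` plus `memoryBarrierShared()` guarantee inside one workgroup, with `threading.Barrier` playing both roles at once:

```python
import threading

SIZE = 8
shared = [0] * SIZE              # plays the role of the shared[] index array
results = [0] * SIZE
sync = threading.Barrier(SIZE)   # ~ barrier() + memoryBarrierShared()

def invocation(i):
    shared[i] = i * i            # first half: every "thread" fills its slot
    sync.wait()                  # nobody proceeds until all slots are written
    results[i] = shared[(i + 1) % SIZE]   # second half: read a neighbour

threads = [threading.Thread(target=invocation, args=(i,)) for i in range(SIZE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

On the GPU the two calls are separate because execution synchronization (barrier) and shared-memory visibility (memoryBarrierShared) are distinct guarantees; this CPU analogy bundles them into one wait.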


Btw, besides crashes, is it possible that bad/lacking usage of the barrier, as suggested, can cause such a huge slowdown? Like I said, on my computer all seems fine; another one produces correct results as well, but runs very slowly.



>> because OpenCL was two times faster on Nvidia and slightly faster on AMD 1-2 years ago

Now that concerns me. Especially because I used OpenCL before, removed it completely from the engine, and swapped it for OpenGL compute shaders (easier integration, more consistency)... Doh!


Is it safe to assume that modern/future cards will overcome these performance issues? Otherwise I can turn my Deferred Rendering approach back to an "old" additive style. Does anyone have experience with whether Tiled Deferred Rendering is that much of a win? And then I'm talking about indoor scenes which have relatively many lights, but certainly not hundreds or thousands.
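For reference, the core of Tiled Deferred is just binning lights into screen tiles so each tile only shades against its own short list, instead of one full-screen pass per light; a toy CPU sketch (lights as screen-space circles instead of projected spheres, names made up):

```python
def bin_lights(lights, screen_w, screen_h, tile=16):
    """Assign each light (x, y, radius in pixels) to every tile its
    bounding box overlaps; returns one index list per tile, row-major."""
    tiles_x = (screen_w + tile - 1) // tile
    tiles_y = (screen_h + tile - 1) // tile
    bins = [[] for _ in range(tiles_x * tiles_y)]
    for idx, (lx, ly, r) in enumerate(lights):
        x0 = max(int((lx - r) // tile), 0)
        x1 = min(int((lx + r) // tile), tiles_x - 1)
        y0 = max(int((ly - r) // tile), 0)
        y1 = min(int((ly + r) // tile), tiles_y - 1)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[ty * tiles_x + tx].append(idx)
    return bins
```

The win grows with light count, since a tile touched by only 2 of 50 lights does 2 shading loops instead of 50 blend passes — which is why it may matter less for "relatively many but not hundreds" of lights.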


The crappy part is that I'm adapting code to support older cards now, even though I'm far away from a release, so maybe I shouldn't put too much energy into that and should bet on future hardware instead.



>> Unroll

I suppose that can't happen if the size isn't hardcoded (counts.x comes from an outside (CPU) variable)?



Well, let's try the shared-memory barrier, a different workgroup size, and avoiding unrolling, and see if these video cards start smiling... But I'm afraid not, hehe.