
Member Since 04 Apr 2007
Offline Last Active Oct 22 2016 12:10 PM

#4951906 Bloom before tonemapping

Posted by on 22 June 2012 - 09:02 PM

They tell you right in the image title: you use the 'screen' blend mode instead of direct addition. Much, much simpler ;)
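In case it helps, the math behind 'screen' blending is tiny. A quick Python sketch (function name is mine; values assumed normalized to [0, 1]):

```python
def screen_blend(base, blend):
    """Screen blend: invert both inputs, multiply, invert again.
    Unlike straight addition, the result can never exceed 1.0."""
    return 1.0 - (1.0 - base) * (1.0 - blend)

# Per-channel example: a bright bloom value over a mid-gray scene pixel.
scene = [0.5, 0.5, 0.5]
bloom = [0.8, 0.4, 0.1]
result = [screen_blend(s, b) for s, b in zip(scene, bloom)]
```

Because the result is always in range, you can screen-blend the bloom buffer in before tonemapping without blowing out to white the way addition does.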

#4940940 Materials Blending a'la UDK

Posted by on 17 May 2012 - 08:54 AM

Are you completely sure of that? An artist told me that he could blend separate materials in UDK and asked me if I could do the same in Unity. Of course, "material" is quite a high-level concept which differs in UDK and Unity, so I was wondering how this could actually be done technically in UDK.

No, for realz. There may be other approaches, but the specific materials used in the video you linked to were all designed explicitly to support vertex color-based blending. Unreal outright lacks the concept of multipass rendering for materials.

EDIT: In fairness, I think lighting is done multipass, but this is not something you work with as an artist. The material compiler generates a lightmap-lit shader for the 'ambient' pass and additive shaders for point lights.

#4940939 Branching & picking lighting technique in a Deferred Renderer

Posted by on 17 May 2012 - 08:50 AM

Well, luckily it should be somewhat coherent indeed. A typical example would be a room that uses default shading on all its contents, except for the walls, which use Oren-Nayar, and a sofa that uses a velvet lighting model. In other words, pixels using technique X are clustered together.

Yet I'm still a bit afraid of having such a big branch. Or doesn't it matter much whether a shader has to check for 2 or 20 cases?

It's still going to add about 19+ spurious ALU ops that may or may not be scheduled concurrently with useful work, depending on the target GPU architecture and a handful of other things. In the non-coherent branch case, you're very likely going to be shading all 20+ BRDF models and then doing a predicated move to pick the 'right' result-- *any* sort of boundary is going to be disproportionately expensive to render. I guess what I'm trying to say here is that your question gets asked a lot and the answer hasn't really changed much :(

If you want flexible BRDFs, you have a few options. You can just use standard, expressive BRDFs like Oren-Nayar/Minnaert or Kelemen/Szirmay-Kalos for everything and store some additional material parameters in your G-buffers; this is in general a workable base for most scenes. More esoteric surfaces could be handled via forward shading (and you may be doing this anyway for things like hair, being that they're partially transparent and all) and compositing into the final render.
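For reference, the widely used simplified Oren-Nayar diffuse term looks roughly like this in Python (a sketch of one common formulation; the function name and parameterization are mine):

```python
import math

def oren_nayar_diffuse(n_dot_l, n_dot_v, cos_phi_diff, sigma):
    """Simplified Oren-Nayar diffuse term. sigma is the surface roughness
    (standard deviation of facet slopes, in radians); sigma == 0 reduces
    to plain Lambert. Inputs are cosines of the light/view angles against
    the normal, plus the cosine of the azimuthal angle difference."""
    sigma2 = sigma * sigma
    a = 1.0 - 0.5 * sigma2 / (sigma2 + 0.33)
    b = 0.45 * sigma2 / (sigma2 + 0.09)
    theta_i = math.acos(max(-1.0, min(1.0, n_dot_l)))
    theta_r = math.acos(max(-1.0, min(1.0, n_dot_v)))
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return max(0.0, n_dot_l) * (
        a + b * max(0.0, cos_phi_diff) * math.sin(alpha) * math.tan(beta))
```

In a deferred setup you'd only need to squeeze one extra parameter (sigma) into the G-buffer to get this on top of Lambert.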

You could also aim for the more general BRDF solutions like Lafortune or Ashikhmin-Shirley and encode their parameters too. This should be sufficient to represent pretty much any material you can think of.

Lastly, you can also give tiled forward rendering a go. If you're starting off from a deferred renderer this may not be that hard to switch over to, though you'll need to do some work on the CPU side (namely light binning and culling) if you're just using a D3D9 feature set. It should still be viable, however.
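The CPU-side light binning step could be sketched like this (purely illustrative Python; the names and the screen-space bounding-circle test are my assumptions):

```python
def bin_lights(lights, screen_w, screen_h, tile=16):
    """Assign point lights to screen tiles by testing each light's
    projected bounding circle against the tile grid. `lights` holds
    (screen_x, screen_y, screen_radius) tuples, all in pixels. Each
    tile's bin ends up holding the indices of lights that may touch it."""
    tiles_x = (screen_w + tile - 1) // tile
    tiles_y = (screen_h + tile - 1) // tile
    bins = [[] for _ in range(tiles_x * tiles_y)]
    for idx, (x, y, r) in enumerate(lights):
        x0 = max(0, int((x - r) // tile))
        x1 = min(tiles_x - 1, int((x + r) // tile))
        y0 = max(0, int((y - r) // tile))
        y1 = min(tiles_y - 1, int((y + r) // tile))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[ty * tiles_x + tx].append(idx)
    return bins
```

You'd then upload the per-tile light lists as shader constants (or a texture, on D3D9) and have each forward-shaded pixel loop over only its own tile's lights.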

#4935257 Why use Uber Shaders? Why not one giant shader with some ifs.

Posted by on 26 April 2012 - 05:59 PM

It's right smack in GPU Gems, actually. The article doesn't go too much into implementation details, but I wager there's some sort of runtime inlining or additional precompilation going on.

EDIT: If they don't mention it in the new language manuals, I may stand corrected here. Wonder if it's been removed/deprecated somehow.

EDIT 2: Based on the API descriptions provided, it's probably the first approach, inlining/AST substitution.

#4935071 Cascaded shadow maps question

Posted by on 26 April 2012 - 07:57 AM

I mentioned this in one of the other cascaded shadow mapping threads, but do give adaptive shadow maps through rectilinear texture warping a gander. It only uses one map/'cascade', so many of the cascaded shadow mapping edge cases simply don't apply. You can definitely get superior quality, and possibly speed on top of it :)

#4934068 Advice for enhancing the lighting

Posted by on 23 April 2012 - 06:17 AM

Look into distance fog/atmospheric scattering. If you have a depth map lying around, you can do it as a postprocessing effect; otherwise you can just throw it into the end of the vertex/pixel shader.
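A minimal postprocess version might look like this in Python (exponential distance fog; the names and default density are made up):

```python
import math

def apply_fog(scene_rgb, fog_rgb, depth, density=0.02):
    """Exponential distance fog: blend the shaded pixel toward the fog
    color as view-space depth grows. factor == 1.0 means no fog at all;
    it decays toward 0.0 (pure fog color) with distance."""
    factor = math.exp(-density * depth)
    return [f + (s - f) * factor for s, f in zip(scene_rgb, fog_rgb)]
```

Run that over the frame using the depth buffer as `depth` and a sky-ish `fog_rgb`, and distant geometry fades out convincingly for very little cost.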

#4933184 Shadows for Outdoors

Posted by on 20 April 2012 - 07:15 AM

What is the minimum hardware capability required to do this? He mentions a non-tessellated version in the paper, but I didn't have time to read the whole thing to determine if there is something else required above DirectX 9 capabilities.

Also how much of a problem would you guess blur kernels would be on the shadow map?

L. Spiro

Vertex texture fetch is a requirement; I don't think R2VB is an adequate substitute. Other than that, everything's bog-standard Direct3D9 stuff. You basically goof around with the on-screen position of vertices based on an importance map, shrinking or enlarging their size based on 2D interest values in shadow map space.

Blur kernels are super-easy, you just scale the texture coordinate offset by the above-mentioned interest value and sample from the shadow map again.
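Roughly like this, sketched in Python (names are mine; `sample_shadow` stands in for a shadow map lookup returning a 0..1 occlusion value):

```python
def pcf_blur(sample_shadow, u, v, importance, base_radius=1.0 / 1024.0):
    """Average a small cross of shadow-map taps whose offsets are scaled
    by the per-pixel importance value, so the kernel width tracks the
    warped map's local resolution. base_radius is one texel of a
    1024-wide map, purely as an illustrative default."""
    radius = base_radius * importance
    offsets = [(0.0, 0.0), (radius, 0.0), (-radius, 0.0),
               (0.0, radius), (0.0, -radius)]
    return sum(sample_shadow(u + du, v + dv)
               for du, dv in offsets) / len(offsets)
```

In the real shader this is just a handful of extra texture fetches with the offset multiplied by the interest value, exactly as described above.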

#4932667 Shadows for Outdoors

Posted by on 18 April 2012 - 08:30 PM

As an alternative to cascaded shadow maps, consider using adaptive shadow maps by way of rectilinear texture warping instead.

#4929030 HLSL Dynamic Linking

Posted by on 07 April 2012 - 07:43 AM

You might be interested in this HLSL SM3 approach: http://code4k.blogsp...osures-and.html

Not even. It literally works the same way that dynamic linkage does (at least from an API standpoint) except it doesn't require any form of hardware or driver support. If the D3D implementation worked like this it'd be *awesome,* but as it stands you need to target SM5 for it to even work which is basically a deal-killer.

#4928758 Palette Based Rendering?

Posted by on 06 April 2012 - 06:37 AM

I wasn't so much thinking about a performance boost, I was just wondering if the theory is sound.

I tend to use a lot of vertex coloring, and it seemed to make sense in the event that I wanted to swap out certain colors of something while rendering the same model, instead of copying the entire model over or changing the color of everything.

I've never seen it done before, so I assumed there was a reason not to do it.

Oh, OK, that makes more sense. Look into vertex data streams/input layouts: you can instruct the input assembler to pull the vertex color from a chunk of memory separate from the rest of your data. You'd just bind a different vertex buffer to your 'vertex color data' slot and leave the rest of it untouched.
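CPU-side, the idea looks something like this (a Python sketch of what the input assembler effectively does for you; names are illustrative):

```python
def assemble_vertices(positions, colors):
    """Mimic the input assembler reading position from stream 0 and
    color from stream 1: swapping `colors` for another buffer recolors
    the mesh without touching the position data at all."""
    return [(p, c) for p, c in zip(positions, colors)]

# One shared position buffer, two interchangeable color buffers.
positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
red = [(1, 0, 0)] * 3
blue = [(0, 0, 1)] * 3
mesh_a = assemble_vertices(positions, red)
mesh_b = assemble_vertices(positions, blue)  # same geometry, new palette
```

The GPU version is the same trick: two `SetStreamSource` calls (or two buffers in your input layout), and only the color stream changes between draws.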

#4928757 HLSL Dynamic Linking

Posted by on 06 April 2012 - 06:30 AM

I'd wager the HLSL compiler would reduce it basically to example #1. It's primarily a maintenance/code brevity thing, not performance.

#4928262 Quick question about the language

Posted by on 04 April 2012 - 11:59 AM

Yup. You just don't use new; your example would look like
Entity@[] array;

for( uint32 i = 1; i < 10; ++i )
	 array.insertLast( Entity() );

EDIT: Ninja'd. I would back the 'new' keyword idea, though.

EDIT2 : GDNet source tag, y u so bad

#4928205 Is this a Good site to learn DirectX

Posted by on 04 April 2012 - 08:27 AM

The syntax of C++ is very, very similar to C#; I'd say you have bigger problems if that's an issue. On the brighter side, SlimDX was architected to follow the design of unmanaged D3D very closely. While there are some subtle changes and more idiomatic code, if you can understand how device state is manipulated in either API, you'll at least have an idea of where to start with the other.

Most of the more interesting bits, like shader code, work fundamentally the same way in either since it's a separate language altogether. The theory and mathematics behind common techniques are also going to be the same.

#4928114 DirectX SDK PRT tools, light probes

Posted by on 03 April 2012 - 11:58 PM

Unreal may not be the best example for a newcomer as there's a *lot* of cheating/cleverness going on. Based on my understanding, things work like so:

Static lighting for static objects, both direct and indirect, is all baked into an SH lightmap. I think older versions of the engine actually used the HL2 basis and didn't capture specular, though with Lightmass that's obsolete.

Static object shadows from static light sources on dynamic objects are handled using their proprietary distance field shadows technique. I *think* this involves a sort of 'shadow edge detect' filter followed by some blending to create nice smooth shadows, but I don't know how it works for certain. Sorry! :(

Static lighting for dynamic objects is done mostly through LightEnvironments, which in non-Tim Sweeney-speak translates out to diffuse environment probes. Lightmass will generate these and the actual runtime will probably look very much like the IrradianceVolumes sample in the DXSDK. There's some interesting cleverness here with the modulated shadow system-- the game engine will extract the dominant light from the interpolated SH coefficients and then use this as a modulative shadow projection direction. While not perfect, this is actually a really clever way to handle that problem.
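That dominant-light extraction can be sketched like so (a Python sketch of one common convention, along the lines of Sloan's 'Stupid SH Tricks'; sign and ordering conventions vary between engines, so treat this as an assumption):

```python
import math

def dominant_light_direction(l1_m1, l1_0, l1_p1):
    """Estimate a dominant light direction from the order-1 (linear) SH
    coefficients of a scalar signal such as luminance. Under the usual
    real-SH ordering, the linear band coefficients are proportional to
    the direction's y, z, and x components respectively."""
    x, y, z = l1_p1, l1_m1, l1_0
    length = math.sqrt(x * x + y * y + z * z)
    if length < 1e-8:
        return (0.0, 0.0, 1.0)  # fully ambient probe: pick any fallback axis
    return (x / length, y / length, z / length)
```

Once you have that direction, projecting the modulated shadow along it gives you a single plausible shadow even though the probe encodes light from everywhere.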

Dynamic lighting on static objects and dynamic lighting on dynamic objects is just your average forward renderer, though I believe that LightEnvironments will futz around with the contributions from unshadowed lights and try to bake them into the SH coefficients for shading.

#4928111 Revival of Forward Rendering?

Posted by on 03 April 2012 - 11:38 PM

I'm only posting out of curiosity, but isn't the title of the topic a bit bold? I had no idea forward rendering was even dead. I understand deferred shading can be of great advantage, but even in some of the best deferred systems forward rendering can still be of great use. Hybrids aside, forward rendering mixed with a pre-z pass (read about, and to a tiny degree tested) is a viable alternative and offers many great advantages, ranging from memory and bandwidth savings to diversity of shaders.

Deferred shading is just really, really in vogue right now and it ends up being one of those hobbyist engine bullet points. Plenty of modern games (Skyrim, MW3, Mass Effect 3 and basically any UE3 game for that matter) use forward shading, some to very great effect.