
InvalidPointer

Member Since 04 Apr 2007
Offline Last Active Sep 18 2014 04:39 AM

#4934068 Advice for enhancing the lighting

Posted by InvalidPointer on 23 April 2012 - 06:17 AM

Look into distance fog/atmospheric scattering. If you have a depth map lying around, you can do it as a post-processing effect; otherwise you can just throw it into the end of the vertex/pixel shader.
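
For the post-process route, something like this is a minimal sketch, assuming you have a linear view-space depth texture handy (all resource and constant names here are made up):

Texture2D sceneColorTex;
Texture2D linearDepthTex;  // assumes linear view-space depth is available
SamplerState samp;
float  fogDensity;         // tweak to taste
float3 fogColor;

float4 main(float2 uv : TEXCOORD0) : SV_Target
{
    float3 scene = sceneColorTex.Sample(samp, uv).rgb;
    float  depth = linearDepthTex.Sample(samp, uv).r;
    // Classic exponential distance fog; swap in a proper scattering model if you like.
    float fog = 1.0f - exp(-fogDensity * depth);
    return float4(lerp(scene, fogColor, fog), 1.0f);
}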


#4933184 Shadows for Outdoors

Posted by InvalidPointer on 20 April 2012 - 07:15 AM

Quote from L. Spiro:

What is the minimum hardware support required to do this? He mentions a non-tessellated version in the paper, but I didn't have time to read the whole thing to determine whether anything beyond DirectX 9 capabilities is required.

Also, how much of a problem would you guess blur kernels would be on the shadow map?

Vertex texture fetch is a requirement; I don't think R2VB is an adequate substitute. Other than that, everything's bog-standard Direct3D9 stuff. You basically goof around with the on-screen position of vertices based on an importance map, shrinking or enlarging their size based on 2D interest values in shadow map space.

Blur kernels are super-easy: you just scale the texture coordinate offset by the above-mentioned interest value and sample from the shadow map again.
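
As a sketch of that idea (written in D3D10-style HLSL for brevity, even though the technique targets D3D9-class hardware; all names here are illustrative):

Texture2D<float> shadowMap;
SamplerComparisonState shadowSamp;
float2 baseTexelOffset; // roughly one shadow-map texel

float SampleShadowBlurred(float3 shadowPos, float importance)
{
    // Scale the tap offsets by the interest value, as described above.
    float2 step = baseTexelOffset * importance;
    float sum = 0.0f;
    [unroll]
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x)
            sum += shadowMap.SampleCmpLevelZero(shadowSamp,
                shadowPos.xy + float2(x, y) * step, shadowPos.z);
    return sum / 9.0f; // simple 3x3 PCF-style kernel
}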


#4932667 Shadows for Outdoors

Posted by InvalidPointer on 18 April 2012 - 08:30 PM

As an alternative to cascaded shadow maps, consider using adaptive shadow maps by way of rectilinear texture warping instead.


#4929030 HLSL Dynamic Linking

Posted by InvalidPointer on 07 April 2012 - 07:43 AM

Quote:

You might be interested in this HLSL SM3 approach: http://code4k.blogsp...osures-and.html


Not even. It literally works the same way that dynamic linkage does (at least from an API standpoint), except it doesn't require any form of hardware or driver support. If the D3D implementation worked like this it'd be *awesome,* but as it stands you need to target SM5 for it to even work, which is basically a deal-killer.
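
For reference, the SM5 interface/class mechanism being compared against looks roughly like this (a hedged sketch; the class and method names are mine):

interface ILightModel
{
    float3 Shade(float3 n, float3 l);
};

class Lambert : ILightModel
{
    float3 Shade(float3 n, float3 l)
    {
        return saturate(dot(n, l)).xxx;
    }
};

class HalfLambert : ILightModel
{
    float3 Shade(float3 n, float3 l)
    {
        float d = dot(n, l) * 0.5f + 0.5f;
        return (d * d).xxx;
    }
};

// The concrete class is chosen at bind time on the CPU through
// ID3D11ClassLinkage, not inside the shader itself.
ILightModel g_lightModel;

float4 main(float3 n : NORMAL, float3 l : TEXCOORD0) : SV_Target
{
    return float4(g_lightModel.Shade(normalize(n), normalize(l)), 1.0f);
}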


#4928758 Palette Based Rendering?

Posted by InvalidPointer on 06 April 2012 - 06:37 AM

Quote:

I wasn't so much thinking about a performance boost; I was just wondering if the theory is sound.

I tend to use a lot of vertex coloring, and it seemed to make sense in the event that I wanted to swap out certain colors of something while rendering the same model, instead of copying the entire model over or changing the color of everything.

I've never seen it done before, so I assumed there was a reason not to do it.


Oh, OK, that makes more sense. Look into vertex data streams/input layouts; you can instruct the input assembler to pull the vertex color from a separate chunk of memory from the rest of your data. You'd just bind a different vertex buffer to your 'vertex color data' slot and leave the rest of it untouched.
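
A tiny sketch of what that looks like from the shader's point of view (names are hypothetical; the interesting part is that the HLSL never changes, only the input layout and stream bindings on the CPU side do):

// Slot 0: static geometry buffer. Slot 1: swappable color buffer.
// The slot assignment lives in the input layout, not in this code.
struct VSIn
{
    float3 position : POSITION;
    float2 uv       : TEXCOORD0;
    float4 color    : COLOR0;
};

struct VSOut
{
    float4 pos   : SV_Position;
    float2 uv    : TEXCOORD0;
    float4 color : COLOR0;
};

float4x4 worldViewProj;

VSOut main(VSIn input)
{
    VSOut o;
    o.pos   = mul(worldViewProj, float4(input.position, 1.0f));
    o.uv    = input.uv;
    o.color = input.color; // comes from whichever buffer is bound to the color slot
    return o;
}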


#4928757 HLSL Dynamic Linking

Posted by InvalidPointer on 06 April 2012 - 06:30 AM

I'd wager the HLSL compiler would reduce it basically to example #1. It's primarily a maintenance/code brevity thing, not performance.


#4928262 Quick question about the language

Posted by InvalidPointer on 04 April 2012 - 11:59 AM

Yup. You just don't use 'new'; your example would look like:
Entity@[] array;

for( uint32 i = 1; i < 10; ++i )
{
	 array.insertLast( Entity() );
}

EDIT: Ninja'd. I would back the 'new' keyword idea, though.

EDIT2: GDNet source tag, y u so bad


#4928205 Is this a Good site to learn DirectX

Posted by InvalidPointer on 04 April 2012 - 08:27 AM

The syntax of C++ is very, very similar to C#'s; I'd say you have bigger problems if that's an issue. On the brighter side, SlimDX was architected to follow the design of unmanaged D3D very closely. While there are some subtle changes and more idiomatic code, if you can understand how device state is manipulated in either API, you'll at least have an idea of where to start with the other.

Most of the more interesting bits, like shader code, work fundamentally the same way in either since it's a separate language altogether. The theory and mathematics behind common techniques are also going to be the same.


#4928114 DirectX SDK PRT tools, light probes

Posted by InvalidPointer on 03 April 2012 - 11:58 PM

Unreal may not be the best example for a newcomer as there's a *lot* of cheating/cleverness going on. Based on my understanding, things work like so:

Static lighting for static objects, both direct and indirect, is all baked into an SH lightmap. I think older versions of the engine actually used the HL2 basis and didn't capture specular, though with Lightmass that's obsolete.

Static object shadows from static light sources on dynamic objects are handled using their proprietary distance field shadows technique. I *think* this involves a sort of 'shadow edge detect' filter and then using some blending to create nice smooth shadows, but I don't know how it works for certain. Sorry! :(

Static lighting for dynamic objects is done mostly through LightEnvironments, which in non-Tim Sweeney-speak translates out to diffuse environment probes. Lightmass will generate these and the actual runtime will probably look very much like the IrradianceVolumes sample in the DXSDK. There's some interesting cleverness here with the modulated shadow system-- the game engine will extract the dominant light from the interpolated SH coefficients and then use this as a modulative shadow projection direction. While not perfect, this is actually a really clever way to handle that problem.

Dynamic lighting on static objects and dynamic lighting on dynamic objects are handled by your average forward renderer, though I believe that LightEnvironments will futz around with the contributions from unshadowed lights and try to bake them into the SH coefficients for shading.
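
As a rough illustration of the dominant-light trick (a sketch only; the coefficient ordering and signs depend on your SH convention, and all names are mine):

// Each argument holds one color channel's three linear-band (order-1) SH
// coefficients, pre-arranged as an (x, y, z) direction.
float3 DominantLightDirection(float3 shLinearR, float3 shLinearG, float3 shLinearB)
{
    // Weight the channels by approximate luminance, then treat the linear
    // band as a direction pointing toward the brightest incoming light.
    float3 dir = shLinearR * 0.30f + shLinearG * 0.59f + shLinearB * 0.11f;
    return normalize(dir);
}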


#4928114 Revival of Forward Rendering?

Posted by InvalidPointer on 03 April 2012 - 11:38 PM

Quote:

I'm only posting out of curiosity, but isn't the title of the topic a bit bold? I had no idea forward rendering was even dead. I understand deferred shading can be a great advantage, but even in some of the best deferred systems forward rendering can still be of great use. Hybrids aside, forward rendering mixed with a pre-z-pass (read about, and to a tiny degree tested) is a viable alternative and offers many great advantages, ranging from memory and bandwidth savings to diversity of shaders.


Deferred shading is just really, really in vogue right now and it ends up being one of those hobbyist engine bullet points. Plenty of modern games (Skyrim, MW3, Mass Effect 3 and basically any UE3 game for that matter) use forward shading, some to very great effect.


#4927018 Graphics, where are they

Posted by InvalidPointer on 31 March 2012 - 02:57 PM

Quote:

I have a question about graphics like in RuneScape, just using RuneScape as an example.
1.) How does RuneScape build the map? Is it all code, or a map editor?
2.) Are the graphics set in sprites? If so, where do you find them?


The runtime/'game' bits are all code, i.e. any of the stuff that tries to draw or pull data from the map. It has to get executed somehow! That said, Jagex also has a dedicated team of environment artists who use a level-editing tool to arrange and manipulate parts of the world to their artistic whim. This, too, uses code in the sense that it's a computer program that takes user input.

I don't think Runescape uses sprites anywhere outside of particle systems, actually. Old versions (pre-2004) did a Doom-style 2.5D deal for characters and game items, but I think everything's all models now.


#4926521 Texture sheets vs individual textures?

Posted by InvalidPointer on 29 March 2012 - 08:01 PM

As an architectural suggestion, don't design the renderer in such a way that it cares. As far as it's concerned, it just takes black-box textures, meshes, materials, etc. and feeds them to the graphics hardware.

*On top of that* you can implement an atlasing system-- a part of the character logic would be an excellent place to put this.

This way, you get the best of both worlds :)
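
The atlasing layer on top can be as simple as a per-draw scale/offset that the character logic feeds the renderer; a minimal sketch (constant names invented):

Texture2D atlasTex;
SamplerState samp;
float4 atlasScaleOffset; // xy = scale, zw = offset into the sheet, set per draw

float4 main(float2 uv : TEXCOORD0) : SV_Target
{
    // Remap the mesh's 0-1 UVs into its rectangle of the shared sheet.
    float2 atlasUV = uv * atlasScaleOffset.xy + atlasScaleOffset.zw;
    return atlasTex.Sample(samp, atlasUV);
}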


#4925977 Path Tracing BSDF

Posted by InvalidPointer on 28 March 2012 - 07:17 AM

Watertight meshes are pretty par for the course; I don't think you'd earn any ire by placing that restriction.

Also, instead of mucking about with actually bouncing some rays around, I would *highly* suggest using some of Jensen's top-notch work on dipole transmittance. It's really good stuff :)
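
If memory serves, the core of the dipole model is just a closed-form function of distance; a sketch under the usual definitions (sigma_tr = sqrt(3 * sigma_a * sigma_t'), z_r = 1/sigma_t', z_v = z_r * (1 + 4A/3); the parameter names are mine):

static const float PI = 3.14159265f;

// Diffuse reflectance R_d(r) from Jensen et al.'s dipole approximation.
float DipoleRd(float r, float alphaPrime, float sigmaTr, float zr, float zv)
{
    float dr = sqrt(r * r + zr * zr); // distance to the real source
    float dv = sqrt(r * r + zv * zv); // distance to the virtual source
    float termReal = zr * (1.0f + sigmaTr * dr) * exp(-sigmaTr * dr) / (dr * dr * dr);
    float termVirt = zv * (1.0f + sigmaTr * dv) * exp(-sigmaTr * dv) / (dv * dv * dv);
    return alphaPrime / (4.0f * PI) * (termReal + termVirt);
}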


#4925665 UDK/UnrealEngine material implementation

Posted by InvalidPointer on 27 March 2012 - 08:05 AM

I'm not sure where you're getting your information from, as swizzles/masking are dirt-cheap to the point of being free. You can also just dink around with some 2D matrix multiplication for your texture coordinates, so the final cost for something like what you describe there is maybe 2-3 ALU operations per texture coordinate matrix multiply and the standard texture read per, well, texture read. For three channels that's like 6-9 ALU and 3 reads, which is hardly expensive in this day and age.

Also, you assume Unreal is dinking around with the actual texture data when it actually isn't.

In terms of mechanics, the material compiler pretty literally concatenates HLSL strings then feeds that into fxc/cgc to get some runnable bytecode. There's a little bit of extra script stuff for parameter exposure, but you seem to be really overthinking this. It isn't magic ;)
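
Concretely, the kind of shader such a material boils down to is on the order of this (a hedged sketch, not actual Unreal output; all names invented):

Texture2D maskTex;
Texture2D texA;
Texture2D texB;
Texture2D texC;
SamplerState samp;
float2x2 uvRotScale; // the 2-3 ALU-op matrix multiply mentioned above
float2   uvOffset;

float4 main(float2 uv : TEXCOORD0) : SV_Target
{
    float2 uvT  = mul(uvRotScale, uv) + uvOffset; // transformed coordinate
    float3 mask = maskTex.Sample(samp, uv).rgb;   // swizzles/masks are free
    float3 a = texA.Sample(samp, uvT).rgb;
    float3 b = texB.Sample(samp, uvT).rgb;
    float3 c = texC.Sample(samp, uvT).rgb;
    return float4(a * mask.r + b * mask.g + c * mask.b, 1.0f);
}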


#4924812 Behavior of energy conserving BRDF

Posted by InvalidPointer on 23 March 2012 - 08:56 PM

Yes, definitely. You can set the exposure value manually if you want-- many games do just this! (CoD:BlOps and possibly MW3 in particular)
EDIT: I'm pretty sure Source/Half-Life 2 lets you goof with exposure as well, though they do have a wonky fake-exposure trick due to their clever-as-hell in-shader tonemapping operation.

Also, HDRBlendable is (I think) the FP10 format. It should render slightly faster since you write half as much (assuming you're fill-bound, which isn't all that uncommon in high-overdraw scenarios with cheap shaders-- much like a light prepass renderer) as a traditional FP16. You can also experiment with fixed-point encoding schemes if you're willing to give up multipass rendering, though I don't think that's actually an option for you.
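
A bare-bones manual-exposure pass looks something like this (sketch only; the exp2 exposure convention and Reinhard curve are illustrative, not what any particular game ships):

Texture2D hdrScene;
SamplerState samp;
float manualExposureEV; // hand-set exposure, as described above

float4 main(float2 uv : TEXCOORD0) : SV_Target
{
    float3 hdr    = hdrScene.Sample(samp, uv).rgb * exp2(manualExposureEV);
    float3 mapped = hdr / (1.0f + hdr); // simple Reinhard tonemap
    return float4(mapped, 1.0f);
}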



