

wngabh11

Member Since 19 Aug 2012
Offline Last Active Mar 06 2013 08:19 PM

Posts I've Made

In Topic: Difference between two ways of assigning values to variables in HLSL constant...

12 February 2013 - 03:20 AM

Thanks for the help.

I tried mixing the two ways together. In each frame, I do things in the following order (a sketch of the Map/Unmap part is at the end of this post):

 

IASetInputLayout
IASetPrimitiveTopology
ID3DX11EffectTechnique->GetPassByIndex(0)->Apply( 0, DeviceContext )
Map the constant buffer
assign values
Unmap the constant buffer
VSSetConstantBuffers
PSSetConstantBuffers
IASetVertexBuffers
IASetIndexBuffer
DrawIndexed

 

But if I render in the order above, the values are not passed correctly, so the output is wrong.

Is the order wrong, or can the two ways not be mixed at all?
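For reference, this is roughly what I mean by the Map/Unmap part of the sequence above. It is only a sketch with hypothetical names (PerFrame, perFrameCB, context), assuming a dynamic constant buffer bound to register b0 whose layout matches the cbuffer in the shader:

#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>

// Hypothetical per-frame constants; the layout must match the cbuffer in the shader.
struct PerFrame
{
    DirectX::XMFLOAT4X4 worldViewProj;
};

void UpdateAndBindPerFrame(ID3D11DeviceContext* context,
                           ID3D11Buffer* perFrameCB,
                           const PerFrame& perFrameData)
{
    // perFrameCB must have been created with D3D11_USAGE_DYNAMIC and
    // D3D11_CPU_ACCESS_WRITE for Map with WRITE_DISCARD to work.
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(perFrameCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, &perFrameData, sizeof(PerFrame));
        context->Unmap(perFrameCB, 0);
    }

    // Bind the buffer to the slot the shader expects (register b0 here).
    context->VSSetConstantBuffers(0, 1, &perFrameCB);
    context->PSSetConstantBuffers(0, 1, &perFrameCB);
}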


In Topic: real and complex fog representation

03 February 2013 - 05:21 AM

Thanks for your help. I understand what you mean now.

 

Do you know the Tyndall effect? For example, in a forest there are 'god rays' when sunlight passes through the leaves of the trees. We can fake this effect with techniques such as billboards, but we can also implement it with analytical methods.

 

Take an indoor scene as an example. Because of the effect I mentioned above, if there is a lamp or some other light source, there can be glow, shadows, and god rays (depending on the type of light). If I use the approach introduced in this paper, www.cs.berkeley.edu/~ravir/papers/singlescat/scattering.pdf, I just evaluate the equation from the paper instead of using post-processing methods to generate these effects. I implemented this algorithm before, and I think it shows the effect much better than post-processing does. Another article, 'A Hybrid Method for Interactive Shadows in Homogeneous Media' in ShaderX7, impressed me a lot; the author uses a method that I think is closer to a physical simulation, and the results are excellent.

 

For outdoor scenes I would also like to implement these effects, such as glow, based on the same kinds of equations. When computing the color of each pixel, I can use a more complex model (a BRDF) instead of just the dot product of two vectors (N · L), and I can add ambient occlusion on top of the shadow map (a shadow map is needed in any case). However, for the glow effect, the only approach I know is post-processing the render target. Besides, I think the method you described cannot give the fog the color caused by scattering: the translucency shows up, but the fog color stays monochromatic. So I don't know how to combine the ambient contribution, which gives the fog its color, with the thickness of the fog. The shortcoming of the linked paper is that it only handles point lights rather than sunlight.
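Just to make concrete what I mean by 'the dot product of two vectors' versus a full BRDF, here is the plain Lambert diffuse factor written out (a standalone sketch with made-up types, not shader code from my project):

#include <algorithm>

struct Vec3 { float x, y, z; };

// Dot product of two 3D vectors.
static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Plain Lambert diffuse: the surface receives light in proportion to
// max(N . L, 0). A full BRDF replaces this single factor with a more
// complex function of the normal, light and view directions.
static float lambertDiffuse(const Vec3& normal, const Vec3& toLight)
{
    return std::max(dot(normal, toLight), 0.0f);
}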


In Topic: HDR Confusion

14 October 2012 - 01:36 AM

Jeff.

Your problem is vague to me, because there are many factors that could cause it. But since you said that changing the exposure changes the color, I guess there must be some wrong parameters. The white color also indicates that your color value is above 1.0. So I suggest you check the values of the variables while the program runs and debug them; it is not complicated.
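For illustration only (this is one common exponential exposure operator, not necessarily the one in your code): if either the HDR input or the exposure parameter is far too large, the output saturates toward 1.0 and the image comes out white.

#include <cmath>

// Simple exponential exposure operator for a single color channel.
// With a reasonable exposure, HDR values above 1.0 still map below 1.0;
// with a far too large exposure or input, the result approaches 1.0
// and the pixel appears white.
float applyExposure(float hdrValue, float exposure)
{
    return 1.0f - std::exp(-hdrValue * exposure);
}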

In Topic: HDR Confusion

13 October 2012 - 06:18 PM

I am not very sure about the per-pixel format of the texture (the variable 'final'). If it is a 16-bit float format, the value may go above 1.0, and then the result will be white even though you saturate it (saturate just clamps it to 1.0). You can trace or debug with PIX, a tool for debugging HLSL, or use Nsight if you have two video cards. While tracing, watch the values of the variables 'final' and 'finalCol'.

In Topic: TXAA details

20 August 2012 - 02:26 AM

AFAIK, Timothy Lottes hasn't published much info on TXAA yet -- he'll probably do so soon, seeing that the test-case (The Secret World) just came out, and it's in the new nVidia drivers. It's actually still a WIP R&D project.

If that is the case, then no doubt the details cannot be found at the moment. Well, I will just wait for them.

If you're interested in this stuff I would suggest digging up some reading material on image processing and filtering. There's also some info about the filtering modes in RenderMan on Pixar's website.

Among anti-aliasing approaches, post-processing has always been an important one. Recently I have just been curious why this technique can only be used on the latest hardware; in other words, what kinds of modules were added to it, and whether it can be implemented on relatively older hardware and SDK versions.
