MegaPixel


Topics I've Started

FFT Water Video shows strange behaviour

15 January 2014 - 08:18 AM

Hi all,

 

I've implemented a radix-2 FFT on the GPU, and the videos below show the results I've got so far (for the LOD algorithm I'm using geometry clipmaps with toroidal updates enabled).

 

http://www.youtube.com/watch?v=vLByMPrxYLQ

 

http://www.youtube.com/watch?v=R2-pC1seKLY

 

Sometimes, instead of fading off smoothly and gently, the waves look like they cut off with a discontinuity. If you look at the first video at 0:18, I show the heightmap update along with the related spectrum. The heightmap looks OK to me (it has choppiness applied as well, as you can tell from the coloring).

 

Note: choppiness is enabled as well (I tried disabling it to see if that was the problem, but it wasn't).

 

Can someone shed some light on this or offer some ideas?

 

Also, what would be a good rule of thumb for trying different settings for the water?

 

Reminder:

 

I've got the grid size in meters (M×N) and the grid resolution, which is the actual heightmap resolution.

 

wind speed V

wind direction W

 

PS: if I increase V, the speed of the simulation stays the same. I guess V just generates bigger waves?
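
For reference, here is a minimal sketch of the Phillips spectrum as I understand it from Tessendorf (the names are illustrative, not my actual code). If I read it right, V only enters through L = V²/g, which sets the largest wave size, while the wave speed comes from the deep-water dispersion relation ω(k) = sqrt(g·|k|), which is independent of V; that would explain bigger waves at the same speed.

#include <cmath>

// Tessendorf's Phillips spectrum, minimal C++ sketch (illustrative names).
// windX/windY are assumed to form a normalized wind direction W.
float phillips(float kx, float ky, float windX, float windY, float V, float A)
{
    const float g = 9.81f;                 // gravity
    float k2 = kx * kx + ky * ky;          // |k|^2
    if (k2 < 1e-12f) return 0.0f;          // skip the DC term
    float L  = V * V / g;                  // largest wave size, driven by wind speed V
    float kw = (kx * windX + ky * windY) / std::sqrt(k2); // alignment with wind dir W
    return A * std::exp(-1.0f / (k2 * L * L)) / (k2 * k2) * (kw * kw);
}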

 

Thanks in advance


Switching between Camera Types (TrackBall -> First Person -> etc.)

17 January 2013 - 06:58 AM

Hi to everyone,

I'm trying to build a generic camera class (just one class) that, with a few simple operators, can be used to create any type of high-level camera.
I'd like to do it this way because I think switching between, for example, a first-person and a trackball-like camera will then be easier.
So far I've successfully implemented the usual first-person camera by defining a few simple operators on the camera class and building the orientation with quaternions.

Operators/Methods:

moveXCameraRelative
moveYCameraRelative

rotateXCameraRelative
rotateYCameraRelative

The thing is that I can't figure out how to switch between (say) a first-person and a trackball camera without breaking everything. What I mean is: fly around a bit in first person, then from that exact point of view switch to the trackball, use it, and then go back to first person transparently (like in a DCC tool).
What I thought is that I should accumulate orientations, but I suspect my current method is not quite right: instead of accumulating orientation deltas, I accumulate the angle and compute the orientation from the accumulated angle, rather than defining an offset quaternion. Some implementations I've seen do something like:

Quaternion q(axis,delta) //the offset quaternion (relative rotation quaternion)

I do something like:

angle += delta;

Quaternion q(axis,angle); //the absolute rotation quaternion

Should I use the first solution and accumulate quaternions, instead of accumulating the absolute angle, to make the behaviour I described above possible?
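
For concreteness, the first solution would look something like this (a sketch, assuming the Quaternion type has the axis/angle constructor shown above, an operator* for composition, and a normalize method):

Quaternion dq(axis, delta);     // offset quaternion for this frame's delta
orientation = dq * orientation; // accumulate the relative rotation
orientation.normalize();        // renormalize to avoid drift over time

The accumulated orientation would then carry over unchanged when the camera mode switches, which is what a transparent hand-off would rely on.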

Thanks in advance for your help


Multiply RGB by luminance or work in CIE? Which is more correct?

24 September 2012 - 03:40 AM

Hi guys,

I was studying through tonemapping operators and exposure control.

I've read several threads on GameDev about those topics, and I've also read MJP's blog article on tonemapping and John Hable's posts on the filmic S-curve.

Just one thing is not clear to me: I understand that exposure control and tonemapping are two different things, but I wonder why people still multiply the RGB color by the adapted/tonemapped luminance L straight away.
Wouldn't it be more correct to go into the CIE Yxy color space, adapt Y to get Y', and then, given Y'xy, go back to RGB to get the new tonemapped color?
Also, automatic exposure is a different thing from tonemapping (which, in Reinhard, brings the luminance values into the [0,1] range). Yet in his paper Reinhard computes the geometric mean to get the average luminance as part of the tonemapping process, even though that step is not the tonemapping itself. Does that mean the relative luminance can then be used with any tonemapping curve (not just Reinhard's) to get automatic exposure control? And shouldn't the relative luminance Lr calculation (the one before Lr / (1 + Lr), which gives the actual tonemapped Lt) use the average luminance Lavg interpolated across frames?

Lr = (L(x,y) / Lavg)*a, where Lavg is the interpolated average scene luminance across frames and a is the key.

So it seems to me that calculating the average scene luminance Lavg, and from there the relative luminance Lr, can be shared across the different tonemapping curves and can happen right before whichever curve is applied (not just Reinhard's). Otherwise I can't see a general way to implement automatic exposure.
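
To make that concrete, the shared step I have in mind is Reinhard's log-average (geometric mean) luminance, something like the following CPU-side sketch (names are illustrative):

#include <cmath>

// Reinhard's log-average luminance: exp of the mean of log(delta + L).
float logAverageLuminance(const float* L, int n)
{
    const float delta = 1e-4f;  // small bias so black pixels don't hit log(0)
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += std::log(delta + L[i]);
    return (float)std::exp(sum / n);
}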


Let's see if I understand, then:

1) Get the scene in HDR
2) Go from RGB to CIE Yxy
3) Calculate the average scene luminance Yavg using Reinhard's method (the sum of log Y divided by N, then exponentiated)
4) Calculate the relative luminance Yr = (Y(x,y) / Yavg)*a (a is the key)
5) Calculate the adapted luminance by taking the average scene luminance of the previous frame Lavg(i-1) and that of the current frame Lavg(i) and interpolating between them (using an exponential falloff, for example)
6) Use the new adapted luminance in 4) to calculate the relative adapted luminance of the pixel (x,y)

So basically, Lavg is always the average luminance interpolated across frames?

7) Use Yr in the tonemapping curve to get Yt (i.e. the tonemapped luminance):

if Reinhard:

Yt = Yr / (1+Yr)

if the filmic curve (Uncharted 2 variant):

Yt = ((Yr*(A*Yr+C*B)+D*E)/(Yr*(A*Yr+B)+D*F))-E/F

and more generally, for any tonemapping function f(x):

f(Yr) = Yt

8) Transform from CIE Yt xy back to RGB to get the tonemapped RGB color given its tonemapped luminance Yt, which is calculated from Yr, which is in turn calculated from the average scene luminance Lavg interpolated across frames (automatic exposure control).
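
To pin down steps 2), 5) and 8), here is a minimal CPU-side sketch of what I have in mind (sRGB/Rec.709 primaries assumed; in practice this would live in the tonemapping shader, and all names are illustrative):

#include <cmath>

// Step 2): linear RGB -> CIE Yxy (Rec.709 / D65 matrix).
void rgbToYxy(float r, float g, float b, float& Y, float& x, float& y)
{
    float X = 0.4124f * r + 0.3576f * g + 0.1805f * b;
    Y       = 0.2126f * r + 0.7152f * g + 0.0722f * b;
    float Z = 0.0193f * r + 0.1192f * g + 0.9505f * b;
    float s = X + Y + Z;
    x = (s > 0.0f) ? X / s : 0.0f;  // chromaticity coordinates
    y = (s > 0.0f) ? Y / s : 0.0f;
}

// Step 5): exponential adaptation of Lavg across frames (tau is an assumed
// adaptation-rate constant, dt the frame time).
float adaptLavg(float LavgPrev, float LavgCur, float dt, float tau)
{
    return LavgPrev + (LavgCur - LavgPrev) * (1.0f - std::exp(-dt * tau));
}

// Step 8): CIE Yxy -> linear RGB, with Y replaced by the tonemapped Yt.
void yxyToRgb(float Yt, float x, float y, float& r, float& g, float& b)
{
    float X = (y > 0.0f) ? x * (Yt / y) : 0.0f;
    float Z = (y > 0.0f) ? (1.0f - x - y) * (Yt / y) : 0.0f;
    r =  3.2406f * X - 1.5372f * Yt - 0.4986f * Z;  // inverse Rec.709 matrix
    g = -0.9689f * X + 1.8758f * Yt + 0.0415f * Z;
    b =  0.0557f * X - 0.2040f * Yt + 1.0570f * Z;
}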

Phew, I made it.

Now, is this the correct way of calculating auto-exposure before applying any tonemapping curve to Yr?

And why do some people still apply tonemapping to RGB values directly, knowing that it's wrong? (John Hable said it's not correct to apply the Reinhard curve to each RGB channel directly, but at the same time his examples all do exactly that; maybe it's half correct?!) Maybe CIE is more correct, but because we can't alpha blend in that space, we can live with the less correct solution of applying tonemapping to RGB right away. But we still have to interpolate Lavg across frames to get the auto-exposure control.

A quite important note: since I send all my lights in one batch, I need support for alpha blending, so I'm thinking of using CIE Yxy only during post-fx and tonemapping; being on PC, I won't have any problem with floating-point blending support (as instead happens on other platforms ;) ).
I guess the Luv variant is only convenient if we don't have fp blend support (which hasn't been the case on PC for a long time). So the idea of accumulating lights in Luv is justified only if the underlying platform lacks support for fp render-target blending, leaving us constrained to an 8888 unsigned light accumulation buffer, right?

Thanks in advance for any reply

XMMatrixPerspectiveFovLH: does z in clip space go from 0 to 1 or from -1 to 1?

12 September 2012 - 03:37 AM

Hi all,

Very simple question: I need to know whether I have to rescale the z range to [0,1] or not, and I couldn't find any explanation in the XNAMath docs.

If it follows the old DX convention, z should be in [0,1] in clip space (the OpenGL convention was different: there, z was in [-1,1] in clip space).

The function that I'm using to project my scene is XMMatrixPerspectiveFovLH
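
If XMMatrixPerspectiveFovLH builds the classic D3D left-handed perspective matrix (which is my assumption, to be confirmed), the z-related terms would be:

// Assumed z terms of the classic D3D LH perspective matrix (1-based indices):
float fRange = FarZ / (FarZ - NearZ);
// M(3,3) = fRange;           M(3,4) = 1.0f;  // clip-space w = view-space z
// M(4,3) = -fRange * NearZ;
// After the perspective divide: z' = fRange - fRange * NearZ / z
//   z = NearZ  ->  z' = 0
//   z = FarZ   ->  z' = 1

which would put z in [0,1], but I'd like confirmation.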

Thanks in advance ;)

Cube Map Rendering Only Depth! No Color Writes - BUG

04 September 2012 - 02:59 AM

Hi all,

I'm trying to render to a cubemap using the geometry shader (GS). I want to render just depth, therefore I need to set a null render target and a depth stencil view over a Texture2DArray resource, indexing the slices with SV_RenderTargetArrayIndex in the GS.
The problem is that whenever I launch PIX to verify the rendering results, the application crashes under PIX (it doesn't crash if I launch it without PIX).
I tried disabling the shadow generation code and it still crashed; then I disabled the code that creates the depth stencil view and the shader resource view for the cubemap, and magically PIX stopped crashing.
So I think the problem might be in how I create those resources. I started digging on the internet to make sure I was doing everything correctly, but I couldn't find any contradiction with my code, so everything seems correct. It's also true that I couldn't find any reference or code sample showing the use of a depth stencil view for cubemap rendering with the render target view set to null and color writes off.

BTW, here is my code to create the DSV:
// Create a texture array to hold the cube map data
FdkGfxTexture2DDesc texDesc;
ZeroMemory(&texDesc, sizeof(FdkGfxTexture2DDesc));
texDesc.width              = 1024;
texDesc.height             = 1024;
texDesc.mipLevels          = 1;
texDesc.arraySize          = 6;
texDesc.format             = FDK_FORMAT_R32_TYPELESS;
texDesc.sampleDesc.count   = 1;
texDesc.sampleDesc.quality = 0;
texDesc.usage              = FDK_USAGE_DEFAULT;
texDesc.bindFlags          = FDK_BIND_DEPTH_STENCIL | FDK_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags     = 0;
texDesc.miscFlags          = FDK_RESOURCE_MISC_TEXTURECUBE;
FdkGfxTexture2DId texId    = fdkGfxCreateTexture2D(device, &texDesc, NULL);

// Create the depth stencil view desc for cube depth rendering
FdkGfxDepthStencilResourceViewDesc descDSV;
ZeroMemory(&descDSV, sizeof(descDSV));
descDSV.format                         = FDK_FORMAT_D32_FLOAT;
descDSV.viewDimension                  = FDK_DSV_DIMENSION_TEXTURE2DARRAY;
descDSV.texture2DArray.firstArraySlice = 0;
descDSV.texture2DArray.arraySize       = 6;
descDSV.texture2DArray.mipSlice        = 0;
mDeferredData->mPointLightShadowMapBufferId = fdkGfxCreateDepthStencilTarget2D(device, &descDSV, texId);

// Create the shader resource view desc for the cube depth texture
FdkGfxShaderResourceViewDesc srvDesc;
ZeroMemory(&srvDesc, sizeof(srvDesc));
srvDesc.format                      = FDK_FORMAT_R32_FLOAT;
srvDesc.viewDimension               = FDK_SRV_DIMENSION_TEXTURECUBE;
srvDesc.textureCube.mipLevels       = 1;
srvDesc.textureCube.mostDetailedMip = 0;
mDeferredData->mPointLightShadowMapShaderResource = fdkGfxCreateShaderResource(device, &srvDesc, texId);
fdkGfxDestroyTexture2D(texId);

fdk is just my API wrapper; you can replace it with the corresponding D3D11_* equivalents ;)

When it comes time to render, I expect to do something like:

OMSetRenderTargets(0,0,cubeMapDSV); //rtViewCount == 0, rtView == NULL

which means no render target view, just depth. I'd also expect rendering to be faster when writing depth only with no color writes.
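
In raw D3D11 terms, the bind I have in mind would be (a sketch; 'context' stands for the immediate device context and cubeMapDSV for the DSV created above):

context->OMSetRenderTargets(0, NULL, cubeMapDSV);  // no color targets, depth only
context->ClearDepthStencilView(cubeMapDSV, D3D11_CLEAR_DEPTH, 1.0f, 0);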

Could someone shed some light on this?

Thanks in advance
