MegaPixel

Member Since 28 May 2010
-----

#5022532 Switching between Camera Types (TrackBall -> First Person -> etc.)

Posted by MegaPixel on 17 January 2013 - 09:04 AM

There are several problems mentioned in the OP. My advice is:

 

1. You should stay away from using Euler angles if possible.

 

2. You should apply delta rotations and translations, but store the current placement as either a matrix or a pair of position vector and orientation quaternion.

 

3. You should not integrate too much responsibility into a single class. E.g. the camera class stores the camera's placement, grants read access to it, and offers some simple manipulators (in the extreme case just a setter) for that placement, but it should not provide higher-level control like a concept of a 1st-person or 3rd-person camera. Instead, provide a basic CameraControl class and derive FPCameraControl and others from it.

 

4. When switching the active CameraControl, the next active control may need to alter the camera's placement to get it into a prescribed state. If you want to avoid sudden changes, make a soft transition: interpolate over a short time (e.g. a second or so) between the current placement (i.e. the one stored in the camera object, usually as left by the previous control) and the newly required placement, before actually granting control to the following CameraControl object. Note that this can be integrated into the schema nicely: define a TransitionalCameraControl class that is parametrized with the CameraControl that should become active. Let the former do the interpolation (it asks the camera for the current placement and the given CameraControl for the required placement), and let it replace itself with the given CameraControl when done.
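A minimal C++ sketch of that schema (everything here is illustrative: Placement, interpolate(), the Camera accessors and the control interface are assumed names, not from any particular engine; Vec3f/Quatf are whatever your math library provides):

#include <algorithm> // std::min

struct Placement { Vec3f position; Quatf orientation; };

// lerp the position, slerp the orientation; implementation left out here
Placement interpolate(const Placement& a, const Placement& b, float t);

class Camera;

class CameraControl {
public:
    virtual ~CameraControl() {}
    // The placement this control wants the camera to have right now.
    virtual Placement desiredPlacement(const Camera& cam) const = 0;
    virtual void update(Camera& cam, float dt) = 0;
};

class Camera {
public:
    const Placement& placement() const            { return mPlacement; }
    void setPlacement(const Placement& p)         { mPlacement = p; }
    void setActiveControl(CameraControl* control) { mControl = control; }
    void update(float dt)                         { if (mControl) mControl->update(*this, dt); }
private:
    Placement mPlacement;
    CameraControl* mControl = nullptr;
};

// Blends from the camera's current placement to the next control's desired
// placement over 'duration' seconds, then hands control over and retires.
class TransitionalCameraControl : public CameraControl {
public:
    TransitionalCameraControl(CameraControl* next, float duration)
        : mNext(next), mDuration(duration) {}

    Placement desiredPlacement(const Camera& cam) const override {
        float t = std::min(mElapsed / mDuration, 1.f);
        return interpolate(mStart, mNext->desiredPlacement(cam), t);
    }

    void update(Camera& cam, float dt) override {
        if (!mStarted) { mStart = cam.placement(); mStarted = true; } // capture the starting placement once
        mElapsed += dt;
        cam.setPlacement(desiredPlacement(cam));
        if (mElapsed >= mDuration)
            cam.setActiveControl(mNext); // replace itself when the blend is done
    }

private:
    CameraControl* mNext;
    Placement mStart;
    float mDuration;
    float mElapsed = 0.f;
    bool mStarted = false;
};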

 

So does that mean there is no way to switch from one camera control to another without interpolating between them?

 

I thought it was possible to just accumulate the transformations in the right order (based on the current camera control type), staying consistent with one camera control or another, without sudden changes showing up.

 

Plus, I'm currently calculating my orientation like this, and it works stably with no gimbal lock or numerical instabilities, but I do not understand why I should work with delta rotations instead of absolute angles (it works anyway).

 

Here is a code snippet:

 

//create the orientation from the accumulated absolute angles:
//pitch about X, yaw about Y, roll about Z (roll stays 0 in my case)
QuatfFromAxisAngle(Vec3f(1.f,0.f,0.f),mPitch,&mRotX);
QuatfFromAxisAngle(Vec3f(0.f,1.f,0.f),mYaw,&mRotY);
QuatfFromAxisAngle(Vec3f(0.f,0.f,1.f),mRoll,&mRotZ);
QuatfMult(mRotX,mRotY,&mRotXY);        //combine pitch and yaw
QuatfMult(mRotZ,mRotXY,&mOrientation); //then combine with roll

//normalize the quaternion to avoid drift
QuatfNormalize(mOrientation,&mOrientation);

//now extract the rotation part of the view matrix from the quaternion
Mat44fInitFromQuaternion(mOrientation,&mViewMatrix.mat44f);

//translation part of the view matrix: -dot(camera position, view basis axis) per row
mViewMatrix.mat44f.v03 = -Vec3fDot(cameraPos,mRight);
mViewMatrix.mat44f.v13 = -Vec3fDot(cameraPos,mUp);
mViewMatrix.mat44f.v23 = -Vec3fDot(cameraPos,mForward);

It works smoothly and perfectly (I keep Roll == 0 the whole time).

Pitch and Yaw are just accumulated absolute angles:

Pitch += dPitch; same for yaw
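For reference, the delta-rotation variant being suggested would look roughly like this with the same math functions (a sketch only; assuming the quaternion type is called Quatf and that dPitch/dYaw are the per-frame input deltas):

//sketch: accumulate the orientation itself instead of absolute angles
Quatf dRotX, dRotY, dRot, newOrientation;
QuatfFromAxisAngle(Vec3f(1.f,0.f,0.f), dPitch, &dRotX);
QuatfFromAxisAngle(Vec3f(0.f,1.f,0.f), dYaw,   &dRotY);
QuatfMult(dRotX, dRotY, &dRot);

//apply the delta to the current orientation; the multiplication order decides
//whether the rotation behaves camera-relative or world-relative
QuatfMult(dRot, mOrientation, &newOrientation);

//renormalize so the error from many small multiplications doesn't accumulate
QuatfNormalize(newOrientation, &mOrientation);

With only yaw/pitch and roll fixed at 0, the absolute-angle version behaves the same; the delta form mainly matters once you want camera-relative rotations or a free roll axis, i.e. arbitrary orientations.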

 

What I'm wondering is: is it possible to make just one class that exposes only very bare-bones operators, so that different camera behaviours can be implemented without having to write a class for every camera type?

 

I saw some implementations exposing something like:

 

rotateXCameraRelative

 

or rotateXWorldRelative, and so on,

 

which leads me to think they are just basic operators, with no reference to first person, third person or trackball ... the idea being that one specific combination of them implements, for example, a first-person behaviour, while a different combination gives a trackball.
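In other words, something like this (an interface sketch only, all names made up):

//one camera class, only primitive operators; each "camera type" is just a
//particular combination of calls
class Camera {
public:
    void rotateCameraRelative(const Vec3f& axis, float angle); //axis in camera-local space
    void rotateWorldRelative(const Vec3f& axis, float angle);  //axis in world space
    void rotateAroundPoint(const Vec3f& pivot, const Vec3f& axis, float angle); //orbit a pivot
    void translateCameraRelative(const Vec3f& delta);
    void translateWorldRelative(const Vec3f& delta);
    Vec3f right() const;   //current camera-space X axis, expressed in world space
};

//first-person: yaw about the world up axis, pitch about the camera's local X
void fpsLook(Camera& cam, float dYaw, float dPitch)
{
    cam.rotateWorldRelative(Vec3f(0.f,1.f,0.f), dYaw);
    cam.rotateCameraRelative(Vec3f(1.f,0.f,0.f), dPitch);
}

//trackball/orbit: same primitives, but rotating around the target point
void trackballLook(Camera& cam, const Vec3f& target, float dYaw, float dPitch)
{
    cam.rotateAroundPoint(target, Vec3f(0.f,1.f,0.f), dYaw);
    cam.rotateAroundPoint(target, cam.right(), dPitch);
}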

 





#4983206 A few questions about GPU based geo clip maps by Arul Asirvatham and Hugues H...

Posted by MegaPixel on 24 September 2012 - 06:57 AM

I've been looking into creating an implementation of the GPU-based geo clipmaps described in this paper http://research.micr...oppe/gpugcm.pdf by Arul Asirvatham and Hugues Hoppe. Now, I understand the general idea of the algorithm and how it uses vertex textures to achieve its performance, etc. But there is one thing I am not grasping in the section where they describe how the geometry is rendered, more precisely this part describing the vertex layout:

[Image: figure from the paper showing the ring layout - the 12 m×m blocks, the m×3 fix-up regions (green), the interior L-shaped trim (blue) and the outer degenerate triangles (orange)]
The two paragraphs under this image read like this:

However, the union of the 12 blocks does not completely cover the ring. We fill the
small remaining gaps using a few additional 2D footprints, as explained next. Note that
in practice, these additional regions have a very small area compared to the regular m×m
blocks, as revealed in Figure 2-6. First, there is a gap of (n − 1) − ((m − 1) × 4) = 2
quads at the middle of each ring side. We patch these gaps using four m×3 fix-up regions
(shown in green in Figures 2-5 and 2-6). We encode these regions using one vertex
and index buffer, and we reuse these buffers across all levels. Second, there is a gap of one
quad on two sides of the interior ring perimeter, to accommodate the off-center finer
level. This L-shaped strip (shown in blue) can lie at any of four possible locations (top-left,
top-right, bottom-left, bottom-right), depending on the relative position of the fine
level inside the coarse level. We define four vertex and one index buffer for this interior
trim, and we reuse these across all levels.

Also, we render a string of degenerate triangles (shown in orange) on the outer perimeter.
These zero-area triangles are necessary to avoid mesh T-junctions.
Finally, for the finest
level, we fill the ring interior with four additional blocks and one more L-shaped region.
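(Just to make the numbers concrete: with the example grid size $n = 15$ and block size $m = (n+1)/4 = 4$, the gap per ring side is
\[
(n-1) - 4\,(m-1) = 14 - 12 = 2 \ \text{quads},
\]
i.e. the 2-quad-wide strip that each $m \times 3$ fix-up region, being 3 vertices = 2 quads across, patches.)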


I understand how each "ring" is broken into 12 square sections of m x m vertices, and how the holes are then patched using an m x 3 block. And since the size is uneven (the example above uses 15 x 15 vertices), we also need to fill top or bottom and left or right with an additional (2m + 1) x 2 strip. But what I don't understand is the first sentence of the second paragraph, more precisely what is marked as orange in the image and called "Outer Degenerate Triangles".
  • What are these used for?
  • How are they drawn? Is each long section of an orange line a stretched-out quad? Does it use the same vertex resolution as the grid itself? Are they drawn using the same shader as the main geometry?
  • Are they drawn with zero area? And if so, why?


Basically I'm looking for an explanation of the outer degenerate triangles: why they are there, how they are drawn, and what their vertex layout is.


They are degenerate triangles used to stitch together the inner ring boundaries with the outer ring ones. Another, more elegant solution would be to just eliminate the T-junctions by adding proper tessellation.
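If it helps, building their index data is trivial. A rough sketch (assuming you already have the indices of one outer edge's vertices in order; the exact grid layout is up to you):

#include <vector>
#include <cstdint>

//indices of the zero-area stitching triangles along one outer edge.
//edgeIndices lists that edge's vertices in order; the finer ring has a vertex
//at every position, the coarser neighbour only at every other one, so the
//in-between vertices are the potential T-junctions.
std::vector<uint32_t> buildEdgeDegenerates(const std::vector<uint32_t>& edgeIndices)
{
    std::vector<uint32_t> indices;
    for (size_t i = 0; i + 2 < edgeIndices.size(); i += 2)
    {
        //the three vertices are colinear, so the triangle has zero area and is
        //never shaded; it just guarantees the seam is rasterized consistently
        //on both sides, so no single-pixel cracks can appear
        indices.push_back(edgeIndices[i]);
        indices.push_back(edgeIndices[i + 1]);
        indices.push_back(edgeIndices[i + 2]);
    }
    return indices;
}

As far as I understand, they use the same vertices and the same shader as the rest of the grid; with exactly colinear vertices they produce no fragments at all, and they only ever cover pixels that the T-junction would otherwise drop.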


#4983161 Multiply RGB by luminance or working in CIE ? Which is more correct ?

Posted by MegaPixel on 24 September 2012 - 03:40 AM

Hi guys,

I was studying through tonemapping operators and exposure control.

I've read several threads on gamedev about those topics and I've also read the MJP article on tonemapping on his blog and John Hable on filmic S curve.

Just one thing is not clear to me: I understand that exposure control and tonemapping are two different things, but I wonder why some people still multiply the RGB color by the adapted/tonemapped luminance L straight away.
Wouldn't it be more correct to go into the CIE Yxy color space, adapt Y to get Y', and then, given Y'xy, go back to RGB to get the new tonemapped color?
Also, given that automatic exposure is a different thing from tonemapping (which, in Reinhard, brings the luminance values into the [0,1] range): why does the Reinhard paper compute the geometric mean to get the average luminance as part of the tonemapping process? It is just one part of the process, but it is still not the tonemapping step itself. Does that mean the relative luminance can then be used with any tonemapping curve (not just Reinhard's) to get automatic exposure control? And therefore, shouldn't the relative luminance Lr calculation (the one before Lr / (1 + Lr), which is the actual tonemapped Lt) use the average luminance Lavg interpolated across frames?

Lr = (L(x,y) / Lavg)*a, where Lavg is the interpolated average scene luminance across frames and a is the key.

So it seems to me that the process of calculating the average scene luminance Lavg, and from there the relative luminance Lr, can be shared across the different tonemapping curves and can happen right before whatever curve is applied (not just Reinhard). Otherwise I can't see a general way to calculate automatic exposure.


Let me see if I understand then:

1) Get scene in HDR
2) Go from RGB to CIE Yxy
3) Calculate the average scene luminance Yavg using Reinhard's log-average (exp of the mean of log Y over all pixels)
4) calculate relative luminance Yr = (Y(x,y) / Yavg)*a (a is the key)
5) Calculate the adapted luminance by considering the average scene luminance of the previous frame Yavg(i-1) and that of the current frame Yavg(i), and interpolate between them using, for example, an exponential falloff...
6) Use this adapted average luminance in 4) to calculate the relative adapted luminance of the pixel (x,y)?

So basically Yavg is always the adapted average luminance, interpolated across frames?

7) use the Yr in the tonemapping curve to get Yt (i.e. tonemapped luminance):

if Reinhard:

Yt = Yr / (1+Yr)

if filmic curve (uncharted 2 variant):

Yt = ((Yr*(A*Yr+C*B)+D*E)/(Yr*(A*Yr+B)+D*F))-E/F

and more generally if whatever tonemapping function f(x):

f(Yr) = Yt

8) Transform from CIE Yt xy back to RGB to get the tonemapped RGB color, given its tonemapped luminance Yt, which is calculated from Yr, which is in turn calculated from the interpolated average scene luminance Yavg across frames (automatic exposure control).

Phew, I made it.

Now, is this the correct way of calculating auto-exposure before applying any tonemapping curve to Yr at all?
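To make the question concrete, here is roughly what I mean as a CPU-side sketch of steps 2) to 8) (assuming linear Rec.709/sRGB primaries; the log-average of step 3) is assumed to be computed in a separate reduction pass, and all names are made up):

#include <cmath>

struct RGB { float r, g, b; };

//step 2: linear RGB (Rec.709 primaries assumed) -> CIE Yxy
void rgbToYxy(const RGB& c, float& Y, float& x, float& y)
{
    float X  = 0.4124f*c.r + 0.3576f*c.g + 0.1805f*c.b;
    float Yc = 0.2126f*c.r + 0.7152f*c.g + 0.0722f*c.b;
    float Z  = 0.0193f*c.r + 0.1192f*c.g + 0.9505f*c.b;
    float sum = X + Yc + Z + 1e-6f;
    Y = Yc;  x = X / sum;  y = Yc / sum;
}

//step 8: CIE Yxy -> linear RGB, with the (tonemapped) luminance put back in
RGB yxyToRgb(float Y, float x, float y)
{
    float X = Y * x / (y + 1e-6f);
    float Z = Y * (1.f - x - y) / (y + 1e-6f);
    RGB c;
    c.r =  3.2406f*X - 1.5372f*Y - 0.4986f*Z;
    c.g = -0.9689f*X + 1.8758f*Y + 0.0415f*Z;
    c.b =  0.0557f*X - 0.2040f*Y + 1.0570f*Z;
    return c;
}

//step 5: move the adapted average towards this frame's log-average Yavg
//with an exponential falloff (rate is a tweakable adaptation speed)
float adaptLuminance(float Yadapted, float YavgThisFrame, float dt, float rate)
{
    return Yadapted + (YavgThisFrame - Yadapted) * (1.f - std::exp(-dt * rate));
}

//steps 4 + 7 + 8: relative luminance from the adapted average and the key,
//then any tonemapping curve applied to luminance only (Reinhard here)
RGB tonemapPixel(const RGB& hdr, float Yadapted, float key)
{
    float Y, x, y;
    rgbToYxy(hdr, Y, x, y);
    float Yr = key * Y / Yadapted;   //relative (exposed) luminance
    float Yt = Yr / (1.f + Yr);      //Reinhard; swap in any other f(Yr) here
    return yxyToRgb(Yt, x, y);       //chromaticity x,y left untouched
}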

And why do some people still apply tonemapping to the RGB values straight away, knowing that it is wrong? (John Hable said it is not correct to apply the Reinhard curve to each RGB channel directly, but at the same time his examples all do that, so maybe it is half correct?!) Maybe CIE is more correct, but because we can't alpha blend in that space we can live with a less correct solution and apply tonemapping to RGB right away. But we still have to interpolate Yavg across frames to have automatic exposure control.

A quite important note: since I send all my lights in one batch I need support for alpha blending, so I'm thinking of using CIE Yxy only during post-fx and tonemapping; being on PC I won't have any problem with floating-point blending support (unlike on some other platforms ;) ).
I guess the Luv variant is only convenient if we don't have fp blend support (which hasn't been the case on PC for a long time). So the idea of accumulating lights in Luv is justified only if the underlying platform doesn't support fp render-target blending and we are therefore constrained to an 8888 unsigned light accumulation buffer, right?

Thanks in advance for any reply


#4978383 Separable gaussian blur too dark ?

Posted by MegaPixel on 09 September 2012 - 02:27 PM

So to get a good-looking bloom, do I have to blur my lighting image several times, or is there something else I have to change?
Because as it is now it's just slightly blurred, and you can't even tell the difference when it's added to the rest of the scene (albedo, ....).


You blur multiple times until you are satisfied! Something like 3-4 times will give you good results. You do that by ping-ponging and repeatedly blurring the same blurred image. You can also use more or fewer taps, or vary the standard deviation, and see what happens. Fx are not really rocket science; the rule is always tweak, tweak, tweak until it looks good :)
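In pseudo-C++ the loop is just something like this (RenderTarget and blurPass() stand in for whatever your engine provides):

#include <utility> // std::swap

struct RenderTarget;                                                   //your engine's render-target handle
void blurPass(RenderTarget& src, RenderTarget& dst, bool horizontal);  //one 1D separable gaussian pass

//after each full H+V iteration the blurred result is back in 'image'
void bloomBlur(RenderTarget& image, RenderTarget& scratch, int iterations)
{
    RenderTarget* read  = &image;
    RenderTarget* write = &scratch;
    for (int i = 0; i < iterations; ++i)   //3-4 iterations is a good starting point
    {
        blurPass(*read, *write, true);     //horizontal
        std::swap(read, write);
        blurPass(*read, *write, false);    //vertical
        std::swap(read, write);
        //each iteration re-blurs the previous result, widening the effective
        //kernel; tweak iterations, tap count and sigma until it looks right
    }
}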


#4976377 Cube Map Rendering only depth ! No color writes. - BUG

Posted by MegaPixel on 04 September 2012 - 05:20 AM

From "Programming Vertex, Geometry and Pixel shaders":

There are some things to keep in mind though: As of Direct3D 10, you can
only set one depth stencil surface to the device at any given time. This
means that you need to store the depth values in the color data of the
render targets. Fortunately, D3D10 pixel shaders let us handle
arbitrary values (including colors) as depth via views to typeless
resources, so this isn’t a problem.

I'm actually using DX10.1, so I think that might be the reason.

If that is the problem: isn't it slower to generate the shadow map with color writes on? Maybe in DX11 I can bind more than one depth stencil view ...

TBH, on the internet I've found some people outputting linear depth in world space or view space to get more precision out of it, and in that case a color buffer is compulsory if you don't want the hyperbolic z/w falloff.
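For the plain depth-only case, this is roughly all the binding it takes in D3D10 (a sketch; pShadowDSV is assumed to be the depth-stencil view of the cube-map face being rendered):

#include <d3d10.h>

//a depth-only shadow pass binds no render targets at all, just the DSV;
//depth is still tested and written, and colour writes are skipped entirely
void beginDepthOnlyPass(ID3D10Device* device, ID3D10DepthStencilView* pShadowDSV)
{
    device->OMSetRenderTargets(0, nullptr, pShadowDSV);

    //draw the occluders here; a pixel shader is only needed if you do alpha
    //testing, or if you actually want to output something like linear
    //view-space depth - in that case you do need a colour target
    //(e.g. R32_FLOAT), which is the set-up those people writing depth to
    //colour are describing
}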

Any thoughts on this ?


