

Member Since 06 Aug 2009
Offline Last Active Jul 25 2014 07:25 AM

Topics I've Started

Level data model, and (sfml) engine view [MVC question]

14 October 2012 - 06:18 AM

Hello, I have a question that is half philosophical, half practical.
I need a code representation of level data. Say I'm using C++: I have a "Level" struct storing layers, where a layer is a grid of objects.
Each object represents some kind of displayable sprite with a position, orientation, and sprite ID (e.g. look-up-able by filename through an external, global, central sprite/image manager).
The issue is that storing the sprite ID alone would be enough, because the display engine (say SFML) already keeps a copy of all the other properties (position, orientation, etc.).
So the question is: should the "LevelObject" structure store position, orientation, size, and so on in its own fields, and transfer them to SFML when necessary through an "UpdateViewFromModel" function?
Or should we assume that this duplicate storage will cause confusion and incoherence at some point, because the View stores a copy of the Model, and instead consider the view as being the view and the model at the same time?


cosine lobe in spherical harmonics

27 September 2012 - 07:41 AM

Hi there, I'm having trouble understanding the contents of the various resources and papers around the place that discuss how to express a cosine lobe in the SH basis.

In their LPV implementations, both Andreas Kirsch (cf. http://blog.blackhc....annotations.pdf) and Benjamin Thaut (http://3d.benjamin-thaut.de/?p=16) use similar definitions, either premultiplied with the c0 and c1 coefficients or not, but basically the same: (0.25, 0.5) along z (in zonal harmonics), after multiplying by c0 and c1.

You can find the formula for rotating this along an axis n in Andreas Kirsch's annotations; he concludes that it results in the same thing, but spread over all SH coefficients of the first two bands, simply multiplied by the normal vector: (0.25, -0.5*ny, 0.5*nz, -0.5*nx).

However, if you read Ramamoorthi, he computes the coefficients another way: instead of evaluating the actual integral to project the cos(theta) function onto the Cartesian basis functions, he starts from the base definitions of the Ylm, extracts some horrible formula full of square roots and factorials, and decides the coefficients are (Pi, 2*Pi/3, Pi/4) for 3 bands of zonal harmonics (so simply 0 on the non-zonal terms).
This is relayed by Sebastien Lagarde on his blog, though I don't get the feeling that he truly understands those coefficients, since he simply cites his sources (Peter-Pike Sloan, Robin Green and ShaderX2) for them.

And speaking of Sloan and Green, we also find a disturbing lack of consistency between papers, notably in the Cartesian definitions of the SH basis: one claims the zonal term of band 2 is k*(3z² - 1), while the other (Green) claims k*(2z² - x² - y²)!! WTF?

And there is another inconsistency in the constant of the l=2, m=2 coefficient, where Green claims it is 1/2*sqrt(15/Pi) while the others claim it is 1/4*sqrt(15/Pi), which is quite different.

When I calculate the projection of the cosine lobe onto the l=2, m=0 basis function I get sqrt(5*Pi)/8, a coefficient I have found nowhere in the literature, though I have quadruple-checked my math.
Also, nobody clearly explains how to rotate the band-2 cosine lobe. Ramamoorthi omits it completely, as if it were trivial, while Green suggests it is extremely complicated and "reaches the limit of current research".
In Sloan, you find the idea that zonal rotations are simpler: "only O(N) compared to O(N²)", he says.
And if you look at the Halo 3 engine SIGGRAPH 2008 slides, the SH rotation shader they have seems quite complicated, though that one is surely for arbitrary SH... :/

I simply want to project a goddamn environment map into SH and convolve it with a cosine lobe.
There is sample code from Nvidia for that, but it targets paraboloid cube maps.
The thing is, I don't really understand how to do it properly, and trying something blindly is the best way to end up with something that seems to work but is actually wrong in a way that is difficult to see.
(Go prove that an irradiance map is biased, or incorrectly scaled, etc...)

thanks for any help on that.

Edit: I actually found a paper that corroborates my result, but it is clearly different from Lagarde's and Ramamoorthi's result for the coefficients of the same function (Ramamoorthi calls it Âl).
The only missing piece now is the rotated clamped cosine lobe in 3-band SH.

Edit 2: I found some promising material for that last part: http://www.iro.umont.../SHRot_mine.pdf
However, it will take me ages to understand that paper :'( if I can at all.
For the moment, I'm just going to calculate the projections of cosine lobes along 6 directions and interpolate between them linearly. At least I can grasp that.

Edit 3: I have calculated the coefficients for the cosine lobe along x: [S0 = sqrt(Pi)/2, S3 = -sqrt(Pi/3), S7 = sqrt(15*Pi)/8], 0 elsewhere.
The plot is attached to this post. We can clearly see that it is tilted and does not resemble the cosine lobe along z, which would invalidate the famous "rotation invariance" property of SH that everybody seems to praise. Or rather, if I understand it correctly, people praise the erroneous assumption that rotation will not make the projection vary, which is false. The property merely says that rotating the coefficients gives the same projection as reprojecting the rotated function; it says nothing about keeping the same shape. Sloan and Green both mention that a function will not "wobble" when rotated. I think this is false, and my plot tends to prove it.

very compact reflective shadow maps format

31 August 2012 - 05:00 AM

Hello gamedev,

There is something I've been wanting to do for quite some time. I have an RSM that is stored in 3 textures: one R32F and two RGBA8.
  • R32F : depth in meters (from the camera that rendered the RSM)
  • RGBA8 : normals, shifted to color space with the classic " * 0.5 + 0.5 "
  • RGBA8 : albedo
So we have 3 simultaneous render targets, and two wasted alpha components.

As an optimisation, because 3 RTs can be very heavy for some cards, I thought about compacting ALL of that into ONE RGBA16F texture:
  • R : depth, part 1
  • G : depth, part 2 (+ sign bit for the normal)
  • B : normal
  • A : color
It must be compatible with DX9, so no integer targets and no bit fiddling in shaders.

For the depth, I thought a simple range split should do the trick: we pick some multiple of a distance that the "MSB" channel will store, and the "LSB" channel stores the rest.
R = floor(truedepth / 100) * 100;
G = truedepth - R;

For the normal, we could store x in the first 8 most significant bits using the same trick, and y in the 8 least significant bits.
z can then be reconstructed from the sign stored with the depth; knowing we are on a unit sphere, there is just a square root to evaluate.
(And when reading the depth, we just remember to always take abs(tex2D(depth).r).)

For the color, it would be a 16-bit color stored in HLS, parked into 6/5/5 bits with the same "floor" and "modulo" trick.


Knowing we have 16-bit IEEE 754 half floats per channel here, and checking Wikipedia: precision is at least integer-level up to 1024, the step grows to 32 between 32768 and 65504, and it is at most 0.00098 between 0 and 1.

The issue is: what slice size should we use for the depth divisor?
Would the depth be better stored logarithmically? But in that case it would still need slicing, since we need to store the depth across two components anyway; I suppose the slicing would then have to be logarithmic too?

About the normals, I feel there is a danger in storing them like this, because some directions will have more precision than others.

The color is not really an issue; RSMs don't need a precise albedo.

what do you think ?

thanks in advance !

[solved][DX10] Modify sampler states in effects from CPU code ?

14 February 2011 - 12:01 PM

Hi everybody, I'd like to call on your knowledge of Direct3D 10.

I've got this problem: while porting an engine that worked with D3D9, we had some code that used D3DXConstantTable and GetSamplerIndex to read back the slots into which sampler states could be plugged for a given sampler (a sampler being a global variable in the effect).

In Direct3D 10 this has become, to my understanding, simply impossible: you just don't change sampler states from CPU code when working with effects. Or can I?

I tried using PSSetSamplers and PSSetShaderResources, but if the order in which I bind them does not correspond to the order of declaration in the effect, it totally messes up the resources bound to my texture global variables (it assigns the wrong resources to the wrong variables)!
This is logical, by the way, because the effect runtime IS already calling these two functions inside the EffectPass->Apply call. (That can be deactivated through masks, which I should do, but it's OK for now since I call my two routines manually after the Apply, thus overriding what Apply did. And since it doesn't work anyway, I'm keeping the default hardcoded sampler states written in the .fx file for the moment, so I don't need to mask those out yet.)

And the runtime knows very well how to order the sampler-state buffer uploaded by the PSSetSamplers call, because it has the reflection information for the sampler global variables I declared in the .fx.

So I was going in this direction: to get the information we had in DX9 with constant tables, I wanted to use the shader reflection layer.
But that requires an ID3D10Blob pointing to the compiled effect "binary" in memory, which is only available when compiling the shaders by hand, e.g. with CreateShaderFromFile.

I tried looking in the pass descriptions to get an ID3D10ShaderVariable interface, but I can't access the blob from there.

So I was thinking I could iterate over the descriptions of my variables and, each time I see a sampler, increase a counter that I store in a [sampler, slot] map. But I don't like it, because I don't know whether the vertex and pixel shaders should get the same slots.
I know for a fact that ATI cards in OpenGL do NOT want the same texture units for the same samplers when they are used in both the VS and the PS, and this makes me afraid.

Now, this is where I call for help... wait for it:

help !
thanks guys :)

[SDL] Carnage-engine, C++

16 March 2010 - 10:38 AM

Alright, so I announced it as a possible game library in the dedicated thread; I think it is also proper to announce it correctly in its own thread.

This engine is yet another 2D library (yet... cf. free-game-dev-libs). Well, I developed it, so I thought that since it exists, it had better be used to help whoever has a use for it.

So here it is: a simple SDL wrapper in C++ (2000 lines of engine, 2000 lines of generic C++/math tools). It features:

edit: v2 is out now, check project on sourceforge: http://sourceforge.net/projects/carnage-engine
  • wrapped SDL initialization and window creation
  • sprite management: static, animated, or trick-rotated (all rotations baked into the bitmap)
  • collision management: register static object shapes in your level, then efficiently query (binary tree) for intersections with basic shapes later
  • sound management, over SDL_mixer
  • generic tools: FPS counter, Mac/Linux/Windows hi-res timer, freelist, circular list, 2D vectors, etc.

An idea of the programming style:
#include <cstdio>
#include "carnage-engine/spritemanager.hpp"

namespace ce = carnage_engine;  // let's not bother with gigantic prefixes

int main(int, char**)
{
    // creates a manager; will create a window if there is no window yet
    ce::SpriteManager spm;
    int h = spm.LoadSprite("hello.bmp");
    // helper functions compute x = (screenRes - spriteWidth) / 2, etc.
    spm.Move(h, ce::GetScreenCenterX(spm.GetW(h)), ce::GetScreenCenterY(spm.GetH(h)));
    spm.Draw(h);   // blit sprite h
    spm.Update();  // flip surfaces
    getchar();     // wait for keyboard input in the console, then quit
    return 0;
}

With the hope that it helps: knowing that the only dependencies are SDL (and SDL_mixer if you want sound), it's nice for those who want to keep things simple.