Posted by lunkhound on 09 October 2014 - 09:10 PM
Try this:
alSourcei( mSource, AL_SOURCE_RELATIVE, AL_TRUE );
alSourcef( mSource, AL_ROLLOFF_FACTOR, 0.0 );
Posted by lunkhound on 26 September 2014 - 11:24 AM
I loved L. Spiro's post as a whole, but I have something to correct:
Sorting 2 smaller queues is faster than 1 big one.
This is a half truth.
Sorting can take:
- Best Case: O(N)
- Avg. Case: O( N log( N ) )
- Worst Case: O( N^2 )
1. In the best case, N/2 + N/2 = N; so in theory it doesn't matter whether it's split or not. But there is the advantage that the two containers can be sorted in separate threads, so it's a win.
2. In the average case, 2 * (N/2 log(N/2)) > N log(N); having one large container should be faster than sorting two smaller ones (though it remains to be seen whether threading can negate the effect up to a certain N).
3. In the worst case, 2 * (N/2)^2 < N^2, which means it's much better to sort two smaller containers than one large one.
In the end you'll have to profile as it is not a golden rule.
Spiro's suggestion of using temporal coherence assumes that most of the time you can get O(N) sorting using insertion sort; thus most likely having two smaller containers should be better (if you perform threading).
While I love your posts in general, this "correction" doesn't seem right to me. Specifically #2. With O( N log N ) run time -- worse than linear -- divide and conquer is beneficial when possible.
Plugging actual numbers into your inequality in 2 (N=1024, using base 2 log):
left side expansion: 2*(1024/2) log (1024/2) = 1024 * 9
right side expansion: 1024 log 1024 = 1024 * 10
The left side is LESS, contrary to your inequality.
Sorting 2 half-sized arrays is sometimes faster but never slower than a single full-size sort.
Perhaps you were thinking of searching with O( log N ) run time -- better than linear. In that case, doing divide and conquer IS harmful.
Posted by lunkhound on 06 July 2014 - 05:31 PM
Additive blending, that is:
destColor = srcColor + destColor
can only lighten the color of what is underneath (because srcColor can't be negative). It's fine for making white smoke, but not black smoke.
For black smoke, you could try subtractive blending (not sure if D3D9 supports this mode or not):
destColor = destColor - srcColor
Alternatively you could use normal alpha blending:
destColor = srcColor * srcAlpha + destColor * (1 - srcAlpha)
with a mostly dark-colored texture whose alpha channel is opaque in the middle and fades to a transparent circular border. You'll probably want some noise added to the texture to make it look smoky.
Note that with alpha blending, the order the quads are drawn in matters (unlike with additive or subtractive blending), so you'll want to draw them in back-to-front order; otherwise it won't look right.
Posted by lunkhound on 02 February 2014 - 11:09 AM
That doesn't sound right. I don't see anything wrong with the code there, so there must be something wrong with the viewProjection matrix. Make sure view and projection are concatenated in the correct order: for column-major matrices, it should be projection * view, not the other way around.
Posted by lunkhound on 01 February 2014 - 10:22 PM
The corners of a 2x2x2 box at the origin are 8 vertices where each component is either a 1 or a -1. Vec3(1,1,1), Vec3(-1,1,1), .. and so on. This corresponds to the view volume in clip-space. The view-projection matrix maps from world space to clip space (it is the concatenation of the view matrix and the projection matrix). So if you invert the view-projection matrix, it will map from clip space to world space. If you apply the inverse-view-projection matrix to each of the 8 corners, you should end up with the 8 corners of the view volume in world-space. The inverse-view-projection matrix will be a 4x4 matrix, so each of your vertices will need to be "promoted" to a Vec4 with a w-component of 1, and the result will be in homogeneous coordinates so you will need to divide by w after the transform.
Posted by lunkhound on 31 January 2014 - 06:21 PM
You also need to normalize the first cross product there (perpVec1).
If the 50x50x50 box encloses your scene, then yes that should work fine as a temporary hack.
The extents would be:
Vec3 extents( maxX - minX, maxY - minY, maxZ - minZ );
The half extents would just be 0.5 * extents:
Vec3 he = extents * 0.5;
Then you want something like ortho( -he.x, he.x, -he.y, he.y, -he.z, he.z );
Otherwise you'll have unwanted translation in the projection matrix. That translation already exists in the view matrix, so you'd be applying it twice.
Posted by lunkhound on 31 January 2014 - 04:02 PM
Well, the rotation matrix has some problems for starters. The rows of the rotation matrix all need to be unit magnitude (some of yours aren't), and they need to be perpendicular to each other (also not true for your matrix). So what you've got there is a matrix that scales, skews, and rotates.
Another problem that jumps out is that you aren't calculating the corners of the main camera's frustum. Instead you've got the corners of a 50 x 50 x 50 box centered on the origin. Your main camera has a perspective projection matrix, so the shape of it isn't a box, it is more of a truncated pyramid.
If you take your main camera's view-projection matrix, invert it (note, NOT a simple transpose), and then run the corners of a 2 x 2 x 2 box centered on the origin through it, that should give the corners of your camera's frustum.
Your orthographic matrix is also wrong. You should calculate the half-extents of the box from your minX, maxX, etc., and express the ortho in terms of the half-extents.
Posted by lunkhound on 29 January 2014 - 01:02 AM
What you have there doesn't look right at all. It isn't that easy.
I would do it something like this:
1. Find a vector orthogonal to the light direction.
2. Use the cross-product to find another vector orthogonal to the other two. Now you have 3 orthogonal unit vectors.
3. Form a rotation matrix from the 3 vectors. The light direction should be the z-axis, and the other two are x and y. Make sure the handedness is correct. The x-axis cross y-axis should equal the z-axis, not the negative z-axis. This rotation matrix transforms directions from world space into the light space.
4. Generate a list of the 8 corner vertices of the main camera's frustum. You'll need the main camera's position, z-near, z-far, horizontal field of view, and vertical field of view.
5. Loop over the 8 corner vertices, and apply the rotation matrix to each in turn and record the minimum and maximum on the x, y, and z axes. This gives the extents of the box I mentioned in an earlier post.
6. Find the coordinates of the center of the box in light space i.e. ((xmin + xmax)*0.5, (ymin + ymax)*0.5, (zmin + zmax)*0.5)
Now it is straightforward to assemble the shadowmap view matrix. The 3x3 rotation matrix goes into the rotation part of the 4x4 matrix. For the translation part of the matrix, you want the box-center coordinates NEGATED. And the last row of the matrix is just (0 0 0 1).
Posted by lunkhound on 21 January 2014 - 01:06 PM
For the view matrix, I chose to put the origin of the view transform at the center of the box because it simplifies constructing the projection matrix. You can put the view transform at one end of the box or the other if you prefer, but then you'll need to introduce a translation component into the projection matrix to compensate. It's just easier to put it at the center.
And yes, the -Z axis.
The projection matrix would just be:
1/he.x 0 0 0
0 1/he.y 0 0
0 0 1/he.z 0
0 0 0 1
(where "he" is the half-extents of your box).
Also you could use glOrtho( -he.x, he.x, -he.y, he.y, -he.z, he.z ) to define the projection matrix if you like. (I may have flipped the signs on z there)
It is probably safer to use glOrtho, as I'm not completely sure about the range of clipping coordinates in z.
Posted by lunkhound on 20 January 2014 - 09:31 PM
With a directional light, some things are simpler than a spotlight, like setting up the transforms. A directional light uses a very simple orthographic projection, as opposed to a perspective projection.
However, most things are more complex, because a directional light in theory affects everything out to infinity, while in practice your shadow texture is only a finite size and you don't want to stretch it over too large an area or the resulting shadows will look very pixelated. Shadowmapping with a directional light is all about optimizing shadow-texture usage to make the projection of shadow-map texels onscreen as small as possible.
In addition to the light direction, you'll need to define a frustum for your directional shadow camera. An orthographic frustum is just a box oriented so that the z-axis is aligned with the projection direction. The box should include everything that casts or receives a shadow, but at the same time you want the box to be as small as possible because your shadow texture will be stretched over the x and y extents of the box. Minimizing the z-extents of the box isn't quite as critical, but still important, as this range is mapped into your shadow texture bit depth.
You'll want to calculate the position and size of this box based on your main camera's frustum. If your main camera can see very far, say 1000 meters, you won't be able to cover the whole area seen by your main camera with just one shadow texture. A single shadow texture will only adequately cover a few tens of meters from the camera (depending on shadow texture resolution). If you want to cover more area than that, you may want to look into cascaded shadow mapping and similar techniques.
Once your shadow camera frustum is defined, the matrices are quite simple to set up.
1. The view matrix is just a transform with only position and rotation, positioned at the center of the box and rotated to align with it.
2. The projection matrix is just a pure scale matrix that maps the extents of the box into clip space (so each axis is mapped into a -1 to 1 range).
3. Yes you still need to render the shadow texture. With cascaded shadow mapping, you'll need to render MULTIPLE shadow textures.
Posted by lunkhound on 11 November 2013 - 07:09 PM
Another way:
Compute the bounding box of all of the skinning bones (that aren't scaled to zero). Inflate the resulting box by a precomputed amount (scaled according to the maximum scale applied to any skinning bone).
Advantage: Cheaper in CPU cost to compute (compared to the bounding box per bone method); also no per-bone extra data
Disadvantage: Less optimal fit.
Posted by lunkhound on 03 November 2013 - 12:18 PM
This paper on Dual Depth Peeling also presents an alternative approach to order-independent transparency they refer to as "Weighted Average" (hidden away on page 8). It sounds almost too good to be true: just one pass over the geometry, then a full-screen blending pass. The results are an approximation, but from the pictures in the paper they are indistinguishable from doing it the correct way.
It uses MRT with a 16-bit 4-channel target to accumulate RGBA values, and a second target to count the layers of transparency at each pixel. The fullscreen pass then divides the accumulated RGBA by the layer count and blends the result into the main framebuffer. Super simple.
I haven't tried it myself, but it sounds great for particles. Anyone try it?
Posted by lunkhound on 26 October 2013 - 06:30 PM
If you've got a light source such as the sun, casting shadows of off-screen geometry across the scene, then it seems like you'd want to cull/draw the main camera and the shadow camera separately.
Posted by lunkhound on 14 June 2013 - 04:08 PM
The blender export plugin here:
https://bitbucket.org/MindCalamity/blender2ogre
Might be a decent starting point. It is up to date with recent versions of Blender (2.66), and it outputs an easy-to-read XML format.
It is designed for the Ogre3d graphics engine. The bone animations get written into a ".skeleton.xml" file.