There is a Unity tutorial on making a 2D roguelike.
lunkhound
Member Since 25 Feb 2013; Offline; Last Active Today, 04:51 PM
Community Stats
 Group Members
 Active Posts 35
 Profile Views 1,400
 Submitted Links 0
 Member Title Member
#5227569 Best engine for developing a 2D roguelike
Posted by lunkhound on 06 May 2015  01:22 PM
#5196841 How to scale UI without losing image quality?
Posted by lunkhound on 07 December 2014  02:44 PM
I was looking around and I found an image format called SVG. From what I understand, it scales really well at any resolution. What's your opinion about it, and why isn't it replacing PNG as a file format?
Well PNG is for compressed raster/bitmapped images (i.e. a fixed resolution). I don't know much about SVG but from a cursory look, it seems to be a vector graphics format, where shapes are described with curves and such. PNG and SVG are really very different. You wouldn't want to save your raster images as SVG, as much would be lost in the translation (I'm not even sure if SVG can do raster images). Likewise you wouldn't want to save vector based art in PNG format as that would lose all the scalability.
There is software that can convert from a raster format like PNG to SVG, but I wouldn't expect the results to be very good for detailed images like what you are trying to scale.
#5196830 How to scale UI without losing image quality?
Posted by lunkhound on 07 December 2014  01:59 PM
This is really just an extension of what you are doing already.
Take all of the details that don't scale nicely (anything with sharp edges like the little cracks and nicks, the gem inlays, etc) and make each one into a separate texture (with a transparency channel). Remove all the details from the image so that what is left can be scaled using your existing method.
Then, after scaling, apply all of the detail textures with alpha blending, taking care to place each detail appropriately on the scaled image.
#5196667 Ogre for graphics, bullet for phisics and what else?
Posted by lunkhound on 06 December 2014  12:42 PM
For audio, I'm using OpenALsoft. It isn't as full-featured as FMOD, but it is open source, and I like having the code to everything I'm using.
I'm also using http://nothings.org/stb_vorbis/, a public domain Ogg Vorbis decoder.
Other libraries I've found useful include:
 nvidia-texture-tools - for creating compressed textures
 Intel Threading Building Blocks - task scheduler for multi-core processors
However, I suggest taking a look at UE4: for $19 (or $19/mo for continuous updates to the code) you get a complete high-end game engine with tools, with source code to everything (except PhysX, the physics engine). It's going to be miles better than anything you'll be able to cobble together with libraries like Ogre.
#5186105 Background music with OpenAL question
Posted by lunkhound on 09 October 2014  09:10 PM
Try this:
alSourcei( mSource, AL_SOURCE_RELATIVE, AL_TRUE );  // position the source relative to the listener
alSourcef( mSource, AL_ROLLOFF_FACTOR, 0.0f );      // disable distance attenuation entirely
#5183159 Temporal coherence and render queue sorting
Posted by lunkhound on 26 September 2014  11:24 AM
I loved L. Spiro's whole post, but I have something to correct:
Sorting 2 smaller queues is faster than 1 big one.
This is a half truth.
Sorting can take:
 Best Case: O(N)
 Avg. Case: O( N log( N ) )
 Worst Case: O( N^2 )
1. In the best case, N/2 + N/2 = N, so in theory it doesn't matter whether it's split or not. But there is the advantage that the two containers can be sorted in separate threads. So it's a win.
2. In the average case, 2 * (N/2 log(N/2)) > N log(N); having one large container should be faster than sorting two smaller ones (though it remains to be seen whether threading can negate the effect up to a certain N)
3. In the worst case, 2 * (N/2)^2 < N^2; which means it's much better to sort two smaller containers than a large one.
In the end you'll have to profile as it is not a golden rule.
Spiro's suggestion of using temporal coherence assumes that most of the time you can get O(N) sorting using insertion sort; thus most likely having two smaller containers should be better (if you perform threading).
While I love your posts in general, this "correction" doesn't seem right to me. Specifically #2. With O( N log N ) run time (worse than linear), divide and conquer is beneficial when possible.
Plugging actual numbers into your inequality in 2 (N=1024, using base-2 log):
left side expansion: 2*(1024/2) log (1024/2) = 1024 * 9
right side expansion: 1024 log 1024 = 1024 * 10
The left side is LESS, contrary to your inequality.
Sorting 2 half-sized arrays is sometimes faster but never slower than a single full-size sort.
Perhaps you were thinking of searching, with O( log N ) run time (better than linear). In that case, doing divide and conquer IS harmful.
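The arithmetic above can be spot-checked in code. A minimal sketch, assuming the usual comparison-sort cost models (the function names are made up for illustration; these count comparisons, not wall time):

```cpp
#include <cassert>
#include <cmath>

// Average-case cost of one comparison sort over n elements: n * log2(n).
double avgSortCost(double n) { return n * std::log2(n); }

// Cost of sorting two half-sized containers instead: 2 * (n/2) * log2(n/2).
double splitSortCost(double n) { return 2.0 * (n / 2.0) * std::log2(n / 2.0); }
```

For N = 1024 this reproduces the 1024*9 versus 1024*10 figures in the post: splitting the sort is never the slower option under this model.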
#5165146 Black smoke alpha settings
Posted by lunkhound on 06 July 2014  05:31 PM
Additive blending, that is:
destColor = srcColor + destColor
can only lighten the color of what is underneath (because srcColor can't be negative). It's fine for making white smoke, but not black smoke.
For black smoke, you could try subtractive blending (not sure if D3D9 supports this mode or not):
destColor = destColor - srcColor
Alternatively you could use normal alpha blending:
destColor = srcColor * srcAlpha + destColor * (1 - srcAlpha)
with a mostly dark-colored texture whose alpha channel is opaque in the middle and fades to a transparent circular border. You'll probably want some noise added to the texture to make it look smoky.
Note that with alpha blending, the order the quads get drawn in matters (unlike with additive or subtractive blending), so you'll want to draw them in back-to-front order, otherwise it won't look right.
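The three blend equations can be sketched for a single color channel; the helper names here are hypothetical, and values are assumed clamped to [0, 1] the way the hardware would:

```cpp
#include <cassert>
#include <algorithm>

// Additive: dest = src + dest (clamped). Can only lighten.
float blendAdditive(float src, float dst)    { return std::min(src + dst, 1.0f); }

// Subtractive: dest = dest - src (clamped). Can only darken.
float blendSubtractive(float src, float dst) { return std::max(dst - src, 0.0f); }

// Normal alpha blending: dest = src * a + dest * (1 - a).
float blendAlpha(float src, float srcAlpha, float dst)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

Note that additive and subtractive blending are order-independent (addition commutes), which is exactly why alpha blending is the one that forces back-to-front drawing.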
#5128193 Directional light, shadow mapping and matrices..
Posted by lunkhound on 02 February 2014  11:09 AM
That doesn't sound right. I don't see anything wrong with the code there, so there must be something wrong with the viewProjection matrix. Make sure the view and projection are concatenated in the correct order: for column-major matrices, it should be projection * view, not the other way around.
#5128095 Directional light, shadow mapping and matrices..
Posted by lunkhound on 01 February 2014  10:22 PM
The corners of a 2x2x2 box at the origin are 8 vertices where each component is either a 1 or a -1: Vec3(-1,-1,-1), Vec3(1,-1,-1), and so on. This corresponds to the view volume in clip space. The view-projection matrix maps from world space to clip space (it is the concatenation of the view matrix and the projection matrix). So if you invert the view-projection matrix, it will map from clip space to world space. If you apply the inverse view-projection matrix to each of the 8 corners, you should end up with the 8 corners of the view volume in world space. The inverse view-projection matrix will be a 4x4 matrix, so each of your vertices will need to be "promoted" to a Vec4 with a w-component of 1, and the result will be in homogeneous coordinates, so you will need to divide by w after the transform.
#5127864 Directional light, shadow mapping and matrices..
Posted by lunkhound on 31 January 2014  06:21 PM
You also need to normalize the first cross product there (perpVec1).
If the 50x50x50 box encloses your scene, then yes that should work fine as a temporary hack.
The extents would be:
Vec3 extents( maxX - minX, maxY - minY, maxZ - minZ );
The half-extents would just be 0.5 * extents:
Vec3 he = extents * 0.5;
Then you want something like ortho( -he.x, he.x, -he.y, he.y, -he.z, he.z );
Otherwise you'll have unwanted translation in the projection matrix: translation that already exists in the view matrix, so you'd be applying it twice.
#5127836 Directional light, shadow mapping and matrices..
Posted by lunkhound on 31 January 2014  04:02 PM
Well, the rotation matrix has some problems for starters. The rows of the rotation matrix all need to be unit magnitude (some of yours aren't), and they need to be perpendicular to each other (also not true for your matrix). So what you've got there is a matrix that scales, skews, and rotates.
Another problem that jumps out is that you aren't calculating the corners of the main camera's frustum. Instead you've got the corners of a 50 x 50 x 50 box centered on the origin. Your main camera has a perspective projection matrix, so the shape of it isn't a box, it is more of a truncated pyramid.
If you take your main camera's view-projection matrix, invert it (note: NOT a simple transpose), and then run the corners of a 2 x 2 x 2 box centered on the origin through it, that should give the corners of your camera's frustum.
Your orthographic matrix is also wrong. You should calculate the half-extents of the box from your minX, maxX, etc., and express the ortho in terms of the half-extents.
#5127140 Directional light, shadow mapping and matrices..
Posted by lunkhound on 29 January 2014  01:02 AM
What you have there doesn't look right at all. It isn't that easy.
I would do it something like this:
1. Find a vector orthogonal to the light direction.
2. Use the cross product to find another vector orthogonal to the other two. Now you have 3 orthogonal unit vectors.
3. Form a rotation matrix from the 3 vectors. The light direction should be the z-axis, and the other two are x and y. Make sure the handedness is correct: the x-axis cross the y-axis should equal the z-axis, not the negative z-axis. This rotation matrix transforms directions from world space into light space.
4. Generate a list of the 8 corner vertices of the main camera's frustum. You'll need the main camera's position, znear, zfar, horizontal field of view, and vertical field of view.
5. Loop over the 8 corner vertices, and apply the rotation matrix to each in turn and record the minimum and maximum on the x, y, and z axes. This gives the extents of the box I mentioned in an earlier post.
6. Find the coordinates of the center of the box in light space i.e. ((xmin + xmax)*0.5, (ymin + ymax)*0.5, (zmin + zmax)*0.5)
Now it is straightforward to assemble the shadow-map view matrix. The 3x3 rotation matrix goes into the rotation part of the 4x4 matrix. For the translation part of the matrix, you want the box-center coordinates NEGATED. And the last row of the matrix is just (0 0 0 1).
#5125430 Directional light, shadow mapping and matrices..
Posted by lunkhound on 21 January 2014  01:06 PM
The view matrix, I chose to put the origin of the view transform in the center of the box because it simplifies constructing the projection matrix. You can put the view transform at one end of the box or the other if you prefer, but then you'll need to introduce a translation component into the projection matrix to compensate. It's just easier to put it at the center.
And yes, the Z axis.
The projection matrix would just be:
1/he.x   0        0        0
0        1/he.y   0        0
0        0        1/he.z   0
0        0        0        1
(where "he" is the halfextents of your box).
Also you could use glOrtho( -he.x, he.x, -he.y, he.y, -he.z, he.z ) to define the projection matrix if you like. (I may have flipped the signs on z there.)
It is probably safer to use glOrtho, as I'm not completely sure about the range of clipping coordinates in z.
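That diagonal matrix, written as code, is a pure scale by the reciprocal half-extents, mapping the box into the clip cube. Row-major storage and the names here are assumptions:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Fill a row-major 4x4 orthographic projection for a box centered at the
// origin with half-extents he: each axis is divided by its half-extent.
void orthoFromHalfExtents(Vec3 he, float out[16])
{
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = 1.0f / he.x;   // x -> x / he.x
    out[5]  = 1.0f / he.y;   // y -> y / he.y
    out[10] = 1.0f / he.z;   // z -> z / he.z
    out[15] = 1.0f;          // w unchanged
}
```

A point sitting on the face of the box (e.g. x = he.x) lands on the face of the clip cube (x = 1), which is the whole job of this matrix.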
#5125217 Directional light, shadow mapping and matrices..
Posted by lunkhound on 20 January 2014  09:31 PM
With a directional light, some things are simpler than with a spotlight, like setting up the transforms: a directional light uses a very simple orthographic projection, as opposed to a perspective projection.
However, most things are more complex, because a directional light in theory affects everything out to infinity, but in practice your shadow texture is only a finite size and you don't want to stretch it over too large an area or the resulting shadows will look very pixelated. Shadow mapping with a directional light is all about optimizing shadow-texture usage, to make the on-screen projection of shadow-map texels as small as possible.
In addition to the light direction, you'll need to define a frustum for your directional shadow camera. An orthographic frustum is just a box oriented so that the z-axis is aligned with the projection direction. The box should include everything that casts or receives a shadow, but at the same time you want the box to be as small as possible, because your shadow texture will be stretched over the x and y extents of the box. Minimizing the z-extents of the box isn't quite as critical, but still important, as this range is mapped into your shadow texture's bit depth.
You'll want to calculate the position and size of this box based on your main camera's frustum. If your main camera can see very far, like say 1000 meters, you won't be able to cover the whole area seen by your main camera with just one shadow texture. A single shadow texture will only adequately cover a few tens of meters from the camera (depending on shadow texture resolution). If you want to cover more area then that, you may want to look into cascaded shadow mapping and similar techniques.
Once your shadow camera frustum is defined, the matrices are quite simple to set up.
1. The view matrix is just a transform with only position and rotation, centered in the center of the box and rotated to align with the box.
2. The projection matrix is just a pure scale matrix that maps the extents of the box into clip space (so each axis is mapped into a -1 to 1 range).
3. Yes you still need to render the shadow texture. With cascaded shadow mapping, you'll need to render MULTIPLE shadow textures.
#5108580 How to compute the bounding volume for an animated (skinned) mesh ?
Posted by lunkhound on 11 November 2013  07:09 PM
Another way:
Compute the bounding box of all of the skinning bones (that aren't scaled to zero). Inflate the resulting box by a precomputed amount (scaled according to the maximum scale applied to any skinning bone).
Advantage: Cheaper in CPU cost to compute (compared to the bounding-box-per-bone method); also no per-bone extra data
Disadvantage: Less optimal fit.
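A minimal sketch of this approach, with illustrative names (bone positions standing in for the skinning bones, and "inflate" as the precomputed padding):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 mn, mx; };

// Bound the skinned mesh by the box of its bone positions, inflated by a
// precomputed amount so the skinned vertices stay inside.
Aabb boneBounds(const std::vector<Vec3>& bonePositions, float inflate)
{
    Aabb box{ bonePositions[0], bonePositions[0] };
    for (const Vec3& p : bonePositions)
    {
        box.mn = Vec3{ std::min(box.mn.x, p.x), std::min(box.mn.y, p.y), std::min(box.mn.z, p.z) };
        box.mx = Vec3{ std::max(box.mx.x, p.x), std::max(box.mx.y, p.y), std::max(box.mx.z, p.z) };
    }
    // Pad the box; scale this padding by the largest bone scale if bones scale.
    box.mn = Vec3{ box.mn.x - inflate, box.mn.y - inflate, box.mn.z - inflate };
    box.mx = Vec3{ box.mx.x + inflate, box.mx.y + inflate, box.mx.z + inflate };
    return box;
}
```

This is one pass over the bones with no per-bone precomputed boxes, which is the cheapness the post is pointing at; the cost is the looser fit.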