Community Reputation

104 Neutral

About daedalic

  1. I solved this issue by using PVRTC compression, since the PNGs were being fully decompressed in memory on the hardware.
  2. I'm developing an iOS game targeting the iPad 3/4 that features a high-resolution map display. I am using Ogre 3D as the graphical foundation. I have 100 1024x1024 textures stored as .png files, which take up 58 MB of disk space in total. I would like to display anywhere from 2 to 40 of these textures at once depending on zoom level, and I would also like to keep RAM usage below 500 MB if possible. I have set up some code to load each of these textures as a unique material assigned to a Rectangle2D. I then attached 16 of these textures to a scene node to make them visible. This resulted in around 400 MB of memory usage at varying zooms. Attaching all of the textures to that node resulted in a crash due to lack of RAM, even when 96 of them were off screen and culled. I am generating 5 mipmaps by default; if these stayed in .png form, that would only roughly double the 58 MB, so obviously there is some decompression going on here. (If the generated mipmaps were completely decompressed, this would account for the memory usage.) I can lower the resolution by a factor of 2 and make this work easily enough, but I'm trying to avoid that. What tools and strategies are available to conserve RAM in this case? Thanks for any help!
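For scale, a rough sketch of the arithmetic (assuming the PNGs decompress to RGBA8 at 32 bits per pixel and that PVRTC uses 4 bits per pixel; textureBytes is a hypothetical helper, and the 4/3 factor approximates a full mipmap chain):

```cpp
#include <cassert>
#include <cstddef>

// Approximate in-memory size of one square texture with a full mipmap
// chain; the chain adds roughly one third on top of the base level.
std::size_t textureBytes (std::size_t side, double bitsPerPixel)
{
    double base = side * side * bitsPerPixel / 8.0;
    return static_cast<std::size_t> (base * 4.0 / 3.0);
}
```

At RGBA8 each 1024x1024 texture comes to roughly 5.6 MB, so all 100 would need on the order of 530 MB, which lines up with the crash described above; at 4 bpp PVRTC the same texture drops to roughly 0.7 MB, or about 67 MB for the full set.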
  3. That's probably a good option. Thanks for the input. Something I should have mentioned previously is that the points used will form a thin contiguous surface wrapped around 3D terrain and structures.
  4. This data structure will have frequent insertions and lookups. In addition, it needs to be able to return a null pointer if it is indexed by a 3D point that has not had an insertion associated with it. Up to millions of pointers will be inserted into this structure. At first glance some form of associative array seems appropriate, but the 3D component complicates things, so I'm not sure whether I need to use spatial partitioning with this or not. If it's any help, the ranges for the x and y components are integers from 1 to 100,000+. The z component will range from 1 to 1,000+. The points that index this structure will be near each other; a distance of more than 1000 between any points used here is unlikely. Memory usage and speed are both important in this case. If there isn't a clear winner for a data structure to use, feel free to name several candidates. Thanks for any help!
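One candidate matching these constraints is a hash map keyed by the three coordinates packed into a single 64-bit integer (24 bits each for x and y, and 16 bits for z, comfortably cover the stated ranges). A minimal sketch, with PointMap and PackPoint as hypothetical names:

```cpp
#include <cstdint>
#include <unordered_map>

// Pack (x, y, z) into one 64-bit key: x in bits 40..63, y in bits
// 16..39, z in bits 0..15. Assumes x, y < 2^24 and z < 2^16.
inline std::uint64_t PackPoint (std::uint32_t x, std::uint32_t y, std::uint32_t z)
{
    return (static_cast<std::uint64_t> (x) << 40) |
           (static_cast<std::uint64_t> (y) << 16) |
           z;
}

template<typename T>
class PointMap
{
public:
    void Insert (std::uint32_t x, std::uint32_t y, std::uint32_t z, T* p)
    {
        map_[PackPoint (x, y, z)] = p;
    }

    // Returns nullptr when no pointer was inserted at (x, y, z).
    T* Lookup (std::uint32_t x, std::uint32_t y, std::uint32_t z) const
    {
        auto it = map_.find (PackPoint (x, y, z));
        return it == map_.end () ? nullptr : it->second;
    }

private:
    std::unordered_map<std::uint64_t, T*> map_;
};
```

Average O(1) insert and lookup, and memory grows only with the number of points actually inserted, which matters when the coordinate space is huge but the occupied region is small.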
  5. This particular A* implementation I am starting on will be operating on a large 3D grid, where every node represents a cube in that grid. What is the best data structure for storing references to the nodes that have already been traversed in this case? This data structure would ideally work well for both very small and very large search spaces. The idea I came up with is a dynamically growing octree that would begin with just the one start node and grow over time, adding additional layers of parent nodes as the search space increases. Performance is critical, so I need to know if there's a better way to do this. Thanks for any help!
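For comparison against the growing-octree idea, a common alternative for the visited set is a hash set keyed by packed cell coordinates: memory grows only with the number of cells actually touched, and insert/lookup are O(1) on average. A minimal sketch (CellKey and ClosedSet are hypothetical names; 21 bits per axis is an assumption about maximum grid size):

```cpp
#include <cstdint>
#include <unordered_set>

// Pack a grid coordinate into a single 64-bit key; 21 bits per axis
// covers grids up to ~2 million cells on a side.
inline std::uint64_t CellKey (std::uint32_t x, std::uint32_t y, std::uint32_t z)
{
    return (static_cast<std::uint64_t> (x) << 42) |
           (static_cast<std::uint64_t> (y) << 21) |
           z;
}

class ClosedSet
{
public:
    // Returns true if the cell was newly closed, false if already visited.
    bool Close (std::uint32_t x, std::uint32_t y, std::uint32_t z)
    {
        return closed_.insert (CellKey (x, y, z)).second;
    }

    bool Contains (std::uint32_t x, std::uint32_t y, std::uint32_t z) const
    {
        return closed_.count (CellKey (x, y, z)) != 0;
    }

private:
    std::unordered_set<std::uint64_t> closed_;
};
```

Unlike the octree, this gives no spatial queries, but A*'s closed set only ever asks "have I seen this exact cell?", which is precisely what a hash set answers fastest.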
  6. I am making a tuple class using variadic templates and am stumped on how to correctly implement a get-element function. I've tried a few approaches, only to have them fail miserably for various reasons. Here is my code for the tuple:

[code]
// undefined tuple
template<typename... T> class tuple;

// empty tuple
template< >
class tuple< >
{
};

template<typename U, typename... T>
class tuple<U, T...>
{
public:
    tuple () {};
    tuple (U& f, T&... r);

    U first_;
    tuple<T...> rest_;
};

template<typename U, typename... T>
tuple<U, T...>::tuple (U& f, T&... r)
{
    first_ = f;
    rest_ = tuple<T...> (r...);
}
[/code]

I am not interested in using the STL tuple; I am just interested in learning how to use variadic templates. Thanks for any help!
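One classic approach for the element accessor is a recursive helper struct that peels one level off the tuple per step, with a specialization at index 0. A minimal sketch along the lines of the tuple above (the constructor here is reworked to take const references and use a member initializer list so that temporaries can be passed):

```cpp
template<typename... T> class tuple;

template<> class tuple<> { };

template<typename U, typename... T>
class tuple<U, T...>
{
public:
    tuple () { }
    tuple (const U& f, const T&... r) : first_ (f), rest_ (r...) { }

    U first_;
    tuple<T...> rest_;
};

// Primary template: recurse into rest_, decrementing the index.
template<unsigned I, typename U, typename... T>
struct element
{
    typedef typename element<I - 1, T...>::type type;

    static type& get (tuple<U, T...>& t)
    {
        return element<I - 1, T...>::get (t.rest_);
    }
};

// Base case: index 0 names the head element.
template<typename U, typename... T>
struct element<0, U, T...>
{
    typedef U type;

    static type& get (tuple<U, T...>& t)
    {
        return t.first_;
    }
};

template<unsigned I, typename U, typename... T>
typename element<I, U, T...>::type& get (tuple<U, T...>& t)
{
    return element<I, U, T...>::get (t);
}
```

The key idea is that the index must be resolved at compile time, because each element has a different type; the element helper maps the index to both the type and the access path.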
  7. Another update: I reordered the cross product between the up and zAxis vectors. As a result, I now start facing the correct direction, with up in the correct direction as well. However, yaw, pitch, and roll are now inverted, so if I try to look left, I look right instead. My forward, backward, left, and right movements relative to the camera are correct unless I angle the camera down; past a certain threshold, those movements reverse. This is my revised look-at function:

[code]
void MatrixLookAt (Matrix& dest, Vector3D& eye, Vector3D& at, Vector3D& up)
{
    Vector3D xAxis;
    Vector3D yAxis;
    Vector3D zAxis;

    Vector3DSubtract (eye, at, zAxis);
    zAxis.Normalize ();
    Vector3DCross (up, zAxis, xAxis);
    xAxis.Normalize ();
    Vector3DCross (zAxis, xAxis, yAxis);
}
[/code]

Any help would be much appreciated. I've been stuck on this problem for far too long now. Edit: Negating the yaw and pitch values gives correct camera rotations; however, camera movement still has problems when you angle the camera away from the z axis.
  8. I was using the 90-degrees-around-x idea because the conversion from the right-handed system that I got my matrix algorithms from to my right-handed system (where z is up) is a 90 degree clockwise rotation around the x axis. However, I am not currently using that. The up vector I am passing in is [0, 0, 1]. Update: I'm still having issues, just slightly different ones. I have rewritten my yaw, pitch, and roll function based on how yaw, pitch, and roll relate to my coordinate system, instead of rotating the matrix from the other system. As a result, up is now down, and yaw, pitch, and roll work fine if and only if you remain in that upside-down orientation. In addition, movement to the right results in movement to the left and vice versa, regardless of which up/down orientation you place yourself in. Also, you now start facing the correct forward direction (positive y). Here is the yaw, pitch, and roll function that I am using. There are some pretty obvious optimizations to it that I'll implement once things are working properly.

[code]
void MatrixRotationYawPitchRoll (Matrix& dest, float32 yaw, float32 pitch, float32 roll)
{
    Matrix yawMatrix;
    Matrix pitchMatrix;
    Matrix rollMatrix;

    MatrixRotationZ (yawMatrix, yaw);
    MatrixRotationX (pitchMatrix, pitch);
    MatrixRotationY (rollMatrix, roll);

    dest.Identity ();
    dest *= rollMatrix;
    dest *= pitchMatrix;
    dest *= yawMatrix;
}
[/code]
  9. I am designing matrix functions for a right-handed coordinate system where x is right, y is forward, and z is up. Currently, I am using matrix code taken from a right-handed system where x is right, y is up, and z is backward. I have not modified this code except for yaw, pitch, and roll, which seems to be working fine via a 90 degree clockwise x axis rotation applied to it. Although the game terrain appears to look normal, the camera starts facing down instead of forward, and movement of the camera position is very odd. I am thinking that some modifications to the perspective and look-at matrices are needed. I have tried a few modifications, such as the same 90 degree rotation given to yaw, pitch, and roll, but none of them have helped much and they usually make things worse. Here are the functions for perspective and look-at matrices that I am currently using:

[code]
void MatrixPerspectiveFov (Matrix& dest, float32 fov, float32 aspect, float32 nearClip, float32 farClip)
{
    float32 height = Cot (fov / 2.0f);

    dest.Init (height / aspect, 0.0f, 0.0f, 0.0f,
               0.0f, height, 0.0f, 0.0f,
               0.0f, 0.0f, farClip / (nearClip - farClip), -1.0f,
               0.0f, 0.0f, nearClip * farClip / (nearClip - farClip), 0.0f);
}

void MatrixLookAt (Matrix& dest, Vector3D& eye, Vector3D& at, Vector3D& up)
{
    Vector3D xAxis;
    Vector3D yAxis;
    Vector3D zAxis;

    Vector3DSubtract (eye, at, zAxis);
    zAxis.Normalize ();
    Vector3DCross (zAxis, up, xAxis);
    xAxis.Normalize ();
    Vector3DCross (zAxis, xAxis, yAxis);

    dest.Init (xAxis.x, yAxis.x, zAxis.x, 0.0f,
               xAxis.y, yAxis.y, zAxis.y, 0.0f,
               xAxis.z, yAxis.z, zAxis.z, 0.0f,
               -xAxis.Dot (eye), -yAxis.Dot (eye), -zAxis.Dot (eye), 1.0f);
}
[/code]

Does anyone have any ideas on how these functions can be corrected? For starters, it seems that height might belong in a z position instead of a y position in the perspective matrix, and that, given the different direction of up, z and y could be swapped in some way in the look-at matrix. However, simple swaps don't seem to help much, so I must be missing something. Thanks for any help!
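For a right-handed system where x is right, y is forward, and z is up, one way to sanity-check the look-at construction is to build just the basis vectors and verify them against a known pose. A minimal sketch using a stand-alone Vec3 rather than the engine's Vector3D (LookAtBasis is a hypothetical name):

```cpp
#include <cmath>

// Tiny self-contained vector type for the sketch.
struct Vec3
{
    float x, y, z;
};

Vec3 Sub (Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 Cross (Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                       a.z * b.x - a.x * b.z,
                                       a.x * b.y - a.y * b.x }; }
float Dot (Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 Normalize (Vec3 a)
{
    float len = std::sqrt (Dot (a, a));
    return { a.x / len, a.y / len, a.z / len };
}

// Build a right-handed view basis. zAxis points from the target back
// toward the eye, so the camera looks down -zAxis; xAxis = up x zAxis
// keeps the handedness right for a z-up world.
void LookAtBasis (Vec3 eye, Vec3 at, Vec3 up,
                  Vec3& xAxis, Vec3& yAxis, Vec3& zAxis)
{
    zAxis = Normalize (Sub (eye, at));
    xAxis = Normalize (Cross (up, zAxis));
    yAxis = Cross (zAxis, xAxis);
}
```

With the eye at the origin, the target one unit along +y, and up [0, 0, 1], this yields xAxis (1, 0, 0), yAxis (0, 0, 1), and zAxis (0, -1, 0): the camera looks toward +y with world up mapped to camera up, which is the behavior the posts above are after.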
  10. I am currently using 3ds Max and Photoshop for all of my art development, and I am wondering what the most efficient steps are for UV mapping and texturing models. There seem to be two opposite methods: one is doing the UV mapping first and then painting a texture over it; the other is starting with the texture first. Which is the best method, and why? In addition, are there times when one method is preferred over the other, such as with many repeating elements (wooden boards, fur, scales, etc.)? Also, when it comes to reducing the detail of a mesh, are there any ways to reuse or automatically alter the same UV map without having to remake it for every detail level? (3ds Max allows you to preserve UV boundaries in ProOptimizer, but it doesn't seem to perform anywhere near well enough for my needs.) Thanks for any help!
  11. I made my own matrix look-at method that should function just like D3DXMatrixLookAtLH. Yet when I use it in place of that DirectX method, I end up seeing nothing but black in the game viewport. Is there an obvious flaw in this code? I am doing this for a left-handed, row-major matrix. In the method, the matrix dest is intended to be initialized as a look-at transform matrix. Thanks for any help!

[code]
INLINE void MatrixLookAtLH (Matrix& dest, Vector3D& eye, Vector3D& at, Vector3D up)
{
    Vector3D tmp;
    Vector3D xAxis;
    Vector3D yAxis;
    Vector3D zAxis;

    tmp = at - eye;
    tmp.Normalize ();
    zAxis = tmp;

    tmp = up;
    tmp.Cross (zAxis, tmp);
    tmp.Normalize ();
    xAxis = tmp;

    tmp = zAxis;
    tmp.Cross (xAxis, tmp);
    yAxis = tmp;

    dest.Init (xAxis.x, yAxis.x, zAxis.x, 0,
               xAxis.y, yAxis.y, zAxis.y, 0,
               xAxis.z, yAxis.z, zAxis.z, 0,
               -xAxis.Dot (eye), -yAxis.Dot (eye), -zAxis.Dot (eye), 1);
}
[/code]
  12. The problem was that I had assumed DirectX used column vectors instead of row vectors. So that takes care of that problem. Thanks!
  13. Here's an example of a transform matrix we are using:

 0  0 -1  0
 1  0  0  0
 0 -1  0  0
 0  0  0  1

When we multiply this matrix by the unit vector (0, 0, 1) using D3DXVec3TransformCoord, we get (0, -1, 0) as a result. However, when we use our own multiplication method, we get (-1, 0, 0). Here's our multiplication method. It ignores the 4th dimension, since it shouldn't be needed for our purposes; as you can see, the transform matrix matches the identity matrix in the 4th row and column.

[code]
INLINE Vector3D Matrix::operator* (const Vector3D vector)
{
    return Vector3D (m11 * vector.x + m12 * vector.y + m13 * vector.z,
                     m21 * vector.x + m22 * vector.y + m23 * vector.z,
                     m31 * vector.x + m32 * vector.y + m33 * vector.z);
}
[/code]
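The mismatch above is exactly what the two multiplication conventions predict: D3DX treats vectors as row vectors (result = v * M), while the posted operator* treats them as column vectors (result = M * v). A minimal sketch reproducing both results with the 3x3 part of the example matrix (RowTransform and ColumnTransform are hypothetical names):

```cpp
struct V3 { float x, y, z; };

// 3x3 part of the example transform, stored row by row.
static const float M[3][3] = { { 0.0f,  0.0f, -1.0f },
                               { 1.0f,  0.0f,  0.0f },
                               { 0.0f, -1.0f,  0.0f } };

// Row-vector convention (what D3DX uses): result = v * M.
V3 RowTransform (V3 v)
{
    return { v.x * M[0][0] + v.y * M[1][0] + v.z * M[2][0],
             v.x * M[0][1] + v.y * M[1][1] + v.z * M[2][1],
             v.x * M[0][2] + v.y * M[1][2] + v.z * M[2][2] };
}

// Column-vector convention (what the posted operator* computes): result = M * v.
V3 ColumnTransform (V3 v)
{
    return { M[0][0] * v.x + M[0][1] * v.y + M[0][2] * v.z,
             M[1][0] * v.x + M[1][1] * v.y + M[1][2] * v.z,
             M[2][0] * v.x + M[2][1] * v.y + M[2][2] * v.z };
}
```

For v = (0, 0, 1), RowTransform picks out the third row of M and gives (0, -1, 0), matching D3DXVec3TransformCoord, while ColumnTransform picks out the third column and gives (-1, 0, 0), matching the posted method.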
  14. We are trying to replace this DirectX method with our own. We tried just multiplying a matrix by a vector and got some very different results. What is this method doing that's different from a simple multiplication?
  15. By doing multiple Perlin noise calculations and a lot of tweaking, it seems that all my terrain generation issues can be solved, including feature placement: a Perlin noise gradient can determine the likelihood of a feature being placed in a given area.
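The gradient-as-likelihood idea can be sketched as thresholding a noise sample per candidate location. A minimal sketch, with NoiseSample as a hypothetical hash-based stand-in for a real Perlin sample (a real implementation would interpolate gradient vectors instead of hashing):

```cpp
#include <cstdint>

// Hypothetical stand-in for a Perlin noise sample in [0, 1);
// deterministic per (x, y), which is all this sketch needs.
float NoiseSample (std::int32_t x, std::int32_t y)
{
    std::uint32_t h = static_cast<std::uint32_t> (x) * 374761393u
                    + static_cast<std::uint32_t> (y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFFFF) / 16777216.0f;
}

// Place a feature when the noise value at (x, y) clears a threshold;
// raising the threshold makes the feature rarer.
bool PlaceFeature (std::int32_t x, std::int32_t y, float threshold)
{
    return NoiseSample (x, y) > threshold;
}
```

Because the noise is smooth in a real Perlin implementation, nearby cells get similar likelihoods, so features naturally cluster into regions instead of scattering uniformly.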