Deception666

1. Help understanding projection matrix settings....

A 2D orthographic matrix, as in your second case, will be built as such (glOrtho):

| a 0 0 Tx |
| 0 b 0 Ty |
| 0 0 c Tz |
| 0 0 0  1 |

a  = 2 / (right - left)
b  = 2 / (top - bottom)
c  = -2 / (far - near)
Tx = -(right + left) / (right - left)
Ty = -(top + bottom) / (top - bottom)
Tz = -(far + near) / (far - near)

The 'far' and 'near' values are 1 and -1 when using gluOrtho2D.

Here is what is produced in the first case:

| Sx  0  0 x |   | 1 0 0 0 |   | 1 0 0 x |   | Sx  0  0 0 |
|  0 Sy  0 y | = | 0 1 0 0 | * | 0 1 0 y | * |  0 Sy  0 0 |
|  0  0 Sz z |   | 0 0 1 0 |   | 0 0 1 z |   |  0  0 Sz 0 |
|  0  0  0 1 |   | 0 0 0 1 |   | 0 0 0 1 |   |  0  0  0 1 |

x = -1   y = -1   z = -1
Sx = 2   Sy = 2   Sz = 1

// resulting matrix
| 2 0 0 -1 |
| 0 2 0 -1 |
| 0 0 1 -1 |
| 0 0 0  1 |

Let's go back to the glOrtho case (left = 0, right = 1, bottom = 0, top = 1, near = -1, far = 1):

a  = 2 / (1 - 0) = 2
b  = 2 / (1 - 0) = 2
c  = -2 / (1 - (-1)) = -1
Tx = -(1 + 0) / (1 - 0) = -1
Ty = -(1 + 0) / (1 - 0) = -1
Tz = -(1 + (-1)) / (1 - (-1)) = 0

| 2 0  0 -1 |
| 0 2  0 -1 |
| 0 0 -1  0 |
| 0 0  0  1 |

Now let's compare eye-space to clip-space coordinates.

// gluOrtho2D
| 2x - 1 |   | 2 0  0 -1 |   | x |
| 2y - 1 | = | 0 2  0 -1 | * | y |
|   -z   |   | 0 0 -1  0 |   | z |
|    1   |   | 0 0  0  1 |   | 1 |

// translate / scale
| 2x - 1 |   | 2 0 0 -1 |   | x |
| 2y - 1 | = | 0 2 0 -1 | * | y |
|  z - 1 |   | 0 0 1 -1 |   | z |
|    1   |   | 0 0 0  1 |   | 1 |

As you can see, the matrices are not quite the same. Assuming the MODELVIEW matrix is an identity matrix, gluOrtho2D lets you provide z input in [-1, 1] before it is clipped, while the translate-and-scale case only allows z input in [0, 2]. I might have fudged the math a bit, but the point is that the matrices are almost identical, except in the z row.
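The two constructions can be checked on the CPU. Here is a minimal sketch (Mat4, ortho, and translateScale are illustrative names of my own, not GL calls), using row-major storage so m[row][col] matches the matrices written above:

```cpp
#include <array>
#include <cassert>

// Row-major 4x4 matrix, so m[row][col] matches the layouts written above.
using Mat4 = std::array<std::array<double, 4>, 4>;

// What glOrtho builds, per the formulas above.
Mat4 ortho(double l, double r, double b, double t, double n, double f) {
    Mat4 m{};
    m[0][0] = 2.0 / (r - l);
    m[1][1] = 2.0 / (t - b);
    m[2][2] = -2.0 / (f - n);
    m[0][3] = -(r + l) / (r - l);
    m[1][3] = -(t + b) / (t - b);
    m[2][3] = -(f + n) / (f - n);
    m[3][3] = 1.0;
    return m;
}

// The translate * scale product from the first case.
Mat4 translateScale(double x, double y, double z,
                    double sx, double sy, double sz) {
    Mat4 m{};
    m[0][0] = sx; m[0][3] = x;
    m[1][1] = sy; m[1][3] = y;
    m[2][2] = sz; m[2][3] = z;
    m[3][3] = 1.0;
    return m;
}
```

Evaluating ortho(0, 1, 0, 1, -1, 1) and translateScale(-1, -1, -1, 2, 2, 1) reproduces the two matrices above: identical x and y rows, differing z rows.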
2. C++ Callback Reference

Post your Keys.cpp and Keys.h. More visual clues are always better than chitchat. Have you tried setting a breakpoint, or printing text to a console, to verify that the code is actually being executed?
3. BGRA As Pixel Format

I am basing my information on what NVIDIA's technical briefs state.

Quote: For 8-bit textures, NVIDIA graphics cards are built to match the Microsoft GDI pixel layout, so make sure the pixel format in system memory is BGRA. Why are these formats important? Because if the texture in system memory is laid out in RGBA, the driver has to swizzle the incoming pixels to BGRA, which slows down the transfer rate. For example, in the case of glTexImage2D(), the format argument specifies how to interpret the data that is laid out in memory (such as GL_BGRA, GL_RGBA, or GL_RED); the internalformat argument specifies how the graphics card internally stores the pixel data in terms of bits (GL_RGB16, GL_RGBA8, and GL_R3_G3_B2, to name a few). To make matters more confusing, OpenGL allows you to specify GL_RGBA as an internal format, but this is taken to mean GL_RGBA8. It is always best to explicitly specify the number of bits in the internal format. Refer to Table 1 to see the performance impact of using non-optimal texture formats. Note, this is not the case with 16-bit and 32-bit floating point formats.

http://http.download.nvidia.com/developer/Papers/2005/Fast_Texture_Transfers/Fast_Texture_Transfers.pdf

The paper might be a bit old, but I am assuming the technology has not changed that much. I understand that the third parameter indicates which texture format you would like represented internally. NVIDIA or ATI could decide one day that GL_RGBA8 maps to something like GL_ABGR8; we ultimately do not know. I am not a complete idiot, and it sounds like you did not really read the question I was posing. I understand that I would need the data to match the internal format of the graphics card: the external data would need to be saved as BGRA to match the internal format stated by NVIDIA. Not a big deal.
What I would like to know is this: if the graphics card needs to swap a texture out to system memory, is that transfer to system memory as optimal as it gets, and vice versa? I would assume so; as I think about it now, it would not make much sense for the swap to be anything other than optimal. Thanks
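To make the swizzle concrete, this is roughly the per-pixel work the driver has to do when it receives RGBA data but wants BGRA. This is a sketch of my own, not actual driver code, and rgbaToBgra is a made-up name:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Swap the R and B channels of a tightly packed 8-bit RGBA buffer in
// place, yielding BGRA. Supplying BGRA up front avoids this extra pass.
void rgbaToBgra(std::vector<std::uint8_t>& pixels) {
    for (std::size_t i = 0; i + 3 < pixels.size(); i += 4)
        std::swap(pixels[i], pixels[i + 2]); // R <-> B; G and A stay put
}
```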

5. OpenGL BGRA As Pixel Format

Because I've been tasked with trying to increase the performance of our application at work, I've been doing some reading and found that OpenGL on Windows would prefer a texture upload to look something like the following:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, pData);

I understand that the card's driver would have to swizzle the incoming data if the format were specified as GL_RGBA. This is where GL_BGRA gives you maximum throughput. Our application uses so much texture memory that not all of it is going to fit in VRAM. If GL_RGBA were used instead, does the driver only penalize your throughput once, when uploading the data to the card (since a swizzle is required), or does it penalize you again if the texture has to be swapped out to system memory to make room for other data in VRAM? I would assume the driver continues to store the texture in the internal format while it is in system memory, but I just wanted to make sure.
6. Passing an array of pointers to a function

int * slider = new int[11];
SomeFunction(slider);
.
.
.
BOOL SomeType::SomeFunction(int * slider)
{
   slider[1] = 5;
}

Change the reference (&) to a pointer (*) in your function declaration and definition. (Note: since the array holds plain ints, the assignment is slider[1] = 5, not slider[1].sx = 5.)

Alternative: if the size of the array does not change, you can also declare it this way on the stack.

int slider[11];
SomeFunction(slider);
.
.
.
BOOL SomeType::SomeFunction(int slider[11])
{
   slider[1] = 1;
}
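A complete, compilable sketch of the pointer-parameter version (fillSliders and the values it writes are illustrative, not from the original thread):

```cpp
#include <cassert>

// Fill `count` ints through a pointer parameter; the caller owns the
// storage, which can be a heap array or a stack array.
void fillSliders(int* slider, int count) {
    for (int i = 0; i < count; ++i)
        slider[i] = i * 5;
}
```

A stack array such as int slider[11] decays to an int* at the call site, which is why both declarations in the post accept the same argument.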
7. Farplane parameter seemingly limited

Before calling gluPerspective, you should set the current matrix mode to GL_PROJECTION (the projection matrix), then switch back to GL_MODELVIEW after the call.
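For reference, the matrix the gluPerspective man page describes can be sketched on the CPU (perspective is my own helper name; fovy is in degrees, as with gluPerspective). Note that the far plane only appears in the two z-row entries, so a large far plane mainly compresses depth precision rather than hitting some hard limit:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// The matrix described in the gluPerspective man page; fovy in degrees.
Mat4 perspective(double fovyDeg, double aspect, double zNear, double zFar) {
    const double kPi = 3.14159265358979323846;
    const double f = 1.0 / std::tan(fovyDeg * kPi / 360.0); // cot(fovy / 2)
    Mat4 m{};
    m[0][0] = f / aspect;
    m[1][1] = f;
    m[2][2] = (zFar + zNear) / (zNear - zFar);       // -> -1 as zFar grows
    m[2][3] = (2.0 * zFar * zNear) / (zNear - zFar); // -> -2 * zNear
    m[3][2] = -1.0;
    return m;
}
```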
8. help with environment cube mapping

To define the six faces, you need to do something along these lines.

// bind the cubemap texture
glBindTexture(GL_TEXTURE_CUBE_MAP, m_nCubeMapTexture);

// generate no mipmaps
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_GENERATE_MIPMAP, GL_FALSE);

// send the temp image to the card, one face at a time (CM_MAX_TYPES is 6)
for (int i = 0; i < CM_MAX_TYPES; i++)
{
   // send the image to the graphics card
   glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8,
                m_nDynCubemapSize, m_nDynCubemapSize, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}

// setup the cube map texture parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
9. GLUT Problems

Instead of building GLUT yourself, you should take the already-built binaries and just link the sample application against them. The warnings you are getting come from either the 2005 or 2008 version of MS Visual Studio. They indicate that the functions in question may at some point no longer be a part of the API. You should be OK to ignore those warnings, since I doubt that MS will be removing them from the standard C library. I am sure others can comment on this better than I have. Hope this helps.
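If you would rather silence those deprecation warnings explicitly than ignore them, MSVC provides a preprocessor switch for it. A minimal sketch (copyName is a made-up example function that would otherwise trigger warning C4996):

```cpp
// Define this before including any CRT header (or add it to the
// project's preprocessor settings) to silence MSVC's C4996
// "deprecated" warnings for standard C functions such as strcpy.
#define _CRT_SECURE_NO_WARNINGS

#include <cassert>
#include <cstring>

// A call that triggers C4996 on MSVC without the define above; it is
// still perfectly valid standard C/C++.
void copyName(char* dst, const char* src) {
    std::strcpy(dst, src);
}
```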
10. texture problem

The only other thing I could think of is if you have lighting turned on and no source of light. The code that you have looks fine.
11. texture problem

Since your buffer is only 16 bytes long, your 2D image is actually only 2x2 (four RGBA texels at 4 bytes each). So you will want to change the image width and height from 4x4 to 2x2, and change your luminance formats to RGBA formats; luminance, I believe, gives you a grayscale image. Since you set the texture to repeat, you can provide texture coordinates greater than 1 to get the checkerboard look you are wanting. Hope this helps.
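As a sketch, a 16-byte buffer interpreted as a 2x2 RGBA image looks like this (kChecker is my own name, and the exact colors are just an example):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// 16 bytes = 4 texels at 4 bytes (RGBA) each = a 2x2 image.
// Row 0: white, black; row 1: black, white.
constexpr std::array<std::uint8_t, 16> kChecker = {
    255, 255, 255, 255,   0,   0,   0, 255,
      0,   0,   0, 255, 255, 255, 255, 255,
};
```

Uploaded as a 2x2 GL_RGBA texture and sampled with texture coordinates running from 0 to N under GL_REPEAT, this tiles into a 2Nx2N checkerboard.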
12. Program crashes with STD::map find()

Post the rest of your code so we can look at more of it... Please also point out which line it is crashing on. Thanks.
13. Program crashes with STD::map find()

Are you sure you want to insert an unassigned pointer into your _collisionMap? From your code:

CollisionQueue* firstQueue; // this is not pointing to a valid object. are you handling this?

this->_collisionMap->insert( std::make_pair( first, firstQueue ) );
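A sketch of the fix: point firstQueue at a real object before inserting it. The CollisionQueue body here is an assumed stand-in, since the real definition was not posted:

```cpp
#include <cassert>
#include <map>
#include <utility>

// Assumed stand-in for the poster's CollisionQueue type.
struct CollisionQueue { int pending = 0; };

// Allocate a valid queue, then insert it; returns true if the key was new.
bool insertQueue(std::map<int, CollisionQueue*>& collisionMap, int key) {
    CollisionQueue* firstQueue = new CollisionQueue(); // now a valid object
    return collisionMap.insert(std::make_pair(key, firstQueue)).second;
}
```

Without the new, firstQueue holds an indeterminate value, and anything that later dereferences the stored pointer, such as a find() followed by a member access, can crash.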
14. My last vector question.

What you have is a vector that contains pointers to integers. The vector class knows nothing about how to release the memory you are giving it; it only stores the pointers. You will have to be responsible for releasing that memory back to the system. For integers, you would be better off making your vector std::vector< int > and not std::vector< int * >.

The vector class allocates internal housekeeping memory to store and reference the objects placed in it. The amount of memory used, the type of internal variables used, and the efficiency of the vector are implementation dependent, so when you call .clear( ... ) or .erase( ... ), you are more than likely just manipulating the internal representation of the begin and end pointers. No memory is released, since the vector, which I assume is on the stack, is still in scope. If you insert another object into the vector after the clear, that internal memory is available for it to reuse. Once the vector is destroyed, the internal memory it created should be released back to the system.
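A sketch of the cleanup a std::vector< int * > forces on you (releaseAll is an illustrative name of my own); with std::vector< int > none of this is needed:

```cpp
#include <cassert>
#include <vector>

// delete each pointed-to int, then clear the vector. clear() alone only
// drops the pointers; the ints themselves would leak.
void releaseAll(std::vector<int*>& v) {
    for (int* p : v)
        delete p;
    v.clear();
}
```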
15. Multimap Dilemma

In your example, the first element of the pair the multimap iterator points to is the key, which in your case represents your int index. The second element is the actual object stored at that location.
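A small sketch of walking a multimap (valuesFor is my own helper name): it->first is the key and it->second is the stored object, exactly as described above.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Collect every value stored under one key of a multimap.
std::vector<std::string> valuesFor(const std::multimap<int, std::string>& m,
                                   int key) {
    std::vector<std::string> out;
    const auto range = m.equal_range(key);
    for (auto it = range.first; it != range.second; ++it)
        out.push_back(it->second); // it->first is the key (the int index)
    return out;
}
```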