# dimebolt

1. ## Computing tangents

Quote:Original post by cignox1 Thank you all, now I have many different implementations of the algorithm and some more general descriptions of it. I will try to figure it out from them. Thank you again! EDIT: I've just read the article "Bump Mapping Using CG" and perhaps I'm able to understand things with this article. I have one question though: in the algorithm the normal is computed with a cross product between tangent and bitangent. But in the general case the normal is already there; that is, the mesh already has normals (perhaps averaged among triangles sharing vertices). How should I behave in this case? Should I rotate my resulting tangent to be perpendicular to the normal? How do I perform this? Thank you.

Usually, generating normals works like this:

1. Calculate normals for the triangles.
2. Calculate normals per vertex for smooth shading (as you said, often by taking the average of the normals of all triangles meeting in that vertex).

I believe that when also using s and t tangents, the process is similar:

1. Calculate the normal and the s and t tangents for the triangles (the s and t tangents simply being the actual directions of the texture coordinates in the plane of the triangle).
2. Calculate the normal and the s and t tangents per vertex for smooth shading (this time getting 3 averaged vectors instead of 1).

If, for example, the normals come from a model file, they already went through steps 1 and 2. You can then either do the calculation only for the tangents (which may or may not look weird, depending on how much your algorithm for obtaining the smooth vectors differs from the one used when creating the model) or throw away the normals and do the entire thing yourself.

Tom
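The two-step procedure above can be sketched in plain C++. This is a minimal illustration, not code from the thread; the `Vec3` type and the unweighted sum-then-normalize averaging are assumptions (some tools weight by triangle area or angle):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Step 1: one normal per triangle (cross product of two edges).
// Step 2: average per vertex, here by summing the face normals of all
// triangles meeting in a vertex and renormalizing.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& verts,
                                const std::vector<unsigned>& indices) {
    std::vector<Vec3> normals(verts.size(), {0, 0, 0});
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 n = cross(sub(verts[b], verts[a]), sub(verts[c], verts[a]));
        for (unsigned idx : {a, b, c}) {
            normals[idx].x += n.x; normals[idx].y += n.y; normals[idx].z += n.z;
        }
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```

The same accumulate-then-normalize loop works for the s and t tangents; you simply keep three accumulators per vertex instead of one.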
2. ## Computing tangents

Since source code for this problem in OpenGL is easy to find (for example, check the Irrlicht sources for inspiration), I'll explain the mechanics. In general, a tangent is only required to be perpendicular to the normal, so an arbitrary vector perpendicular to the normal could be chosen. The binormal is then simply cross(normal, tangent). However, since you wish to use these for bump mapping, there are additional requirements on the tangent and binormal. They will be used to determine the orientation of the texture, in order to properly interpret the normal values stored in it. This means that the tangent and binormal should be aligned with the directions of the texture coordinates. Therefore, in this context, the tangent and binormal are usually referred to as the s tangent and t tangent, after the s and t texture directions they correspond to. Hence, to calculate the tangent and binormal for the 3 vertices of a bump-mapped triangle, we need the positions of the 3 vertices, their respective normals and their texture coordinates. For a more thorough explanation and the important formulas I suggest this webpage: Bump Mapping Using CG. The part describing the theory is not specific to CG and can be applied to any shader type. Tom
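As a sketch of those mechanics, the face tangent (s direction) and binormal (t direction) of a single triangle can be derived from its edge vectors and texture-coordinate deltas. This is an illustration of the standard construction, not code from the thread; the `Vec3`/`Vec2` types are assumptions, and degenerate texture coordinates (zero uv area) are not handled:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Solve for the vectors T and B such that edge1 = du1*T + dv1*B and
// edge2 = du2*T + dv2*B, i.e. the directions in which u and v grow
// across the surface of the triangle.
void faceTangents(Vec3 p0, Vec3 p1, Vec3 p2,
                  Vec2 t0, Vec2 t1, Vec2 t2,
                  Vec3& tangent, Vec3& binormal) {
    Vec3 e1 = {p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    Vec3 e2 = {p2.x - p0.x, p2.y - p0.y, p2.z - p0.z};
    float du1 = t1.u - t0.u, dv1 = t1.v - t0.v;
    float du2 = t2.u - t0.u, dv2 = t2.v - t0.v;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);  // assumes non-degenerate uvs
    tangent  = {r * (dv2 * e1.x - dv1 * e2.x),
                r * (dv2 * e1.y - dv1 * e2.y),
                r * (dv2 * e1.z - dv1 * e2.z)};
    binormal = {r * (du1 * e2.x - du2 * e1.x),
                r * (du1 * e2.y - du2 * e1.y),
                r * (du1 * e2.z - du2 * e1.z)};
}
```

These per-face vectors are then averaged per vertex exactly like normals are.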
3. ## Moving objects in opengl

Quote:Original post by Ezbez Quote:Original post by Endar Edit: Also, if you want to rotate something and also translate it, you must rotate first and then translate. Otherwise, instead of rotating about the object's center point, it will be rotating around another point and you'll go crazy attempting to figure it out. So, just remember: rotate, then translate. I'm rather new to OpenGL, but I don't believe that this is correct. Isn't the reverse true?

Yes, he got the order mixed up. One way to remember this stuff is that the last transformation before drawing an object always applies 'first'. Therefore, to rotate an object around its own axis, this rotation should be the last transform before drawing it. Tom
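The "last transform applies first" rule can be illustrated without OpenGL at all. The sketch below (illustrative code, not from the thread) mimics the two call orders with plain 2D math, using a 90° rotation and a translation by (5, 0):

```cpp
#include <cassert>

struct Pt { double x, y; };

Pt rotate90(Pt p)                        { return {-p.y, p.x}; }     // 90° CCW about the origin
Pt translate(Pt p, double tx, double ty) { return {p.x + tx, p.y + ty}; }

// glTranslate(); glRotate(); draw();  multiplies the vertex by T * R, so
// the rotation (the call closest to the draw) is applied to the vertex
// first: the object spins in place, then moves.
Pt translateThenRotateCalls(Pt p) { return translate(rotate90(p), 5, 0); }

// glRotate(); glTranslate(); draw();  applies the translation first, so
// the object ends up orbiting the origin instead of spinning in place.
Pt rotateThenTranslateCalls(Pt p) { return rotate90(translate(p, 5, 0)); }
```

Feeding the vertex (1, 0) through both orders gives (5, 1) for the first (spin, then move) and (0, 6) for the second (move, then orbit), which is exactly the difference the quoted posters were debating.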
4. ## Moving objects in opengl

OpenGL uses transformation matrices that the hardware (or software, depending on where and which OpenGL implementation you use) will apply to your vertices. Plenty of information on transformations in OpenGL is available in the online version of the Red Book. For the actual matrix/vector math involved, you may (or may not) need to brush up on your linear and vector algebra. Tom
5. ## one vs many Scene(Object) Graphs

First of all, I'm having a bit of trouble understanding how all this would work for the user (i.e. the person specifying the scene to be rendered). After creating scene objects, does he have to sort them into each separate component himself? Or can he create a scene graph, which is subsequently sorted into a render list, a collision structure, etc.?

Quote:Original post by tank104 1. One problem is that part of the SceneManager structure is partially stored in the ITransform implementation as it keeps track of children objects. I see this as bad design, but cannot see an easy way around it?

If you want parent/child information separated from the ITransform class and defined within the scene manager, you can use the Decorator pattern: define some tree classes (group/leaf) local to SceneManager that contain ITransforms (a has-a relation rather than the usual is-a, i.e. inheritance). Use these classes to define the hierarchy for their contained ITransforms. I'm not sure if this is a good idea though. A transformation is an operation that should be applied to its context (its children). If you separate the data from the children, a single ITransform becomes rather meaningless. This means that if an object other than the scene manager needs to apply a transform, it will have to go through the scene manager first (which could also be explained as a good thing, so I really don't know).

Quote:Original post by tank104 2. Is there going to be a worrying amount of overhead keeping so many lists/trees?

As a base rule, don't do performance optimization until you run into a performance problem. That said, there is a simple observation that applies to scene management overhead: the overhead of managing a single scene object only becomes noticeable when its cost approaches the cost of actually rendering that object. This implies that as your management overhead grows, your system will perform worse on applications with many simple (i.e. low rendering cost) objects.
Quote:Original post by tank104 3. Is texture swapping the most expensive change, followed by shader swapping? It appears so, and if that is the case am I best to order all my IRenderable objects first by material and then by shader, or is there a better order?

This question is addressed in the OpenGL forum FAQ. Although it's described in the context of OpenGL, it applies to Direct3D as well, as they're using the same hardware.

Quote:Original post by tank104 4. Other designs seem to keep the cameras/render objects separate from the SceneNodes (i.e. as properties and not by inheriting); is there an obvious reason for this?

I believe it's mainly a matter of personal taste. I prefer to put the camera in the scene graph; that way I can simply attach it to the head of an avatar and it will move automatically when the avatar moves. It's a lot less bookkeeping for the user. Tom
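The "order by one state, then by the next" idea from question 3 boils down to a comparator over render items. A minimal sketch (the `RenderItem` struct and integer ids are hypothetical, and texture-before-shader is simply the cost order assumed in the question, not a claim about any particular hardware):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical render item: the ids stand in for whatever handles the
// engine uses for textures and shaders.
struct RenderItem { int textureId; int shaderId; };

// Sort by the state assumed to be most expensive to change first
// (texture here), then by the next one (shader), so that items sharing
// state end up adjacent and the number of switches is minimized.
void sortForBatching(std::vector<RenderItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const RenderItem& a, const RenderItem& b) {
                  if (a.textureId != b.textureId) return a.textureId < b.textureId;
                  return a.shaderId < b.shaderId;
              });
}
```

If profiling shows a different cost order on your target hardware, you only need to swap the comparison keys, not restructure the scene.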

7. ## Getting current transformation matrix

Quote:Original post by someone2 Hi, how can I get the current transformation matrix (the modelview matrix)? Also, if I have a transformation matrix, is there a function that calculates where a certain vertex would go under this matrix (i.e. it will do the matrix multiplication instead of me doing it myself)? Thank you very much.

Like haegarr said, the sentence "where a certain vertex would go under this matrix" doesn't specify in which coordinate system you want this point. If it is indeed view space (i.e. the position of the point w.r.t. the camera), no such function exists, but multiplying a vector by the matrix is pretty straightforward. I can post code for doing this with the output of the glGetFloatv function if you desire. If instead you want to know the point in screen space (i.e. the position of the point on the window), then there is a function you can use: gluProject. It looks a bit tricky at first, but all required parameters can be obtained through the glGetFloatv function. Finally, there is another commonly used coordinate system: world space (i.e. the position of the point in the world). World coordinates cannot be obtained through the modelview matrix, as they require multiplication by the model matrix only. If this is what you want, you'll need to keep track of the model matrix yourself. Tom
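For the view-space case, a sketch of "multiplying a vector by the matrix" using the output of glGetFloatv might look like this. Illustrative code, not from the thread; the one OpenGL-specific fact it relies on is that glGetFloatv(GL_MODELVIEW_MATRIX, m) returns the 16 floats in column-major order, so element (row, col) lives at m[col * 4 + row]:

```cpp
#include <cassert>

// Multiply a homogeneous point by a 4x4 matrix stored column-major,
// the layout glGetFloatv(GL_MODELVIEW_MATRIX, m) uses.
void transformPoint(const float m[16], const float in[4], float out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = m[0 * 4 + row] * in[0] + m[1 * 4 + row] * in[1] +
                   m[2 * 4 + row] * in[2] + m[3 * 4 + row] * in[3];
    }
}
```

With the modelview matrix as `m`, this takes a point from object space to view space; the w component of a point should be 1 (use 0 for directions so the translation column is ignored).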

9. ## OpenGL compiling opengl /w codeblocks on linux

Quote:Original post by supagu im a linux newb, but wanna compile my opengl app on linux. Problem is my codeblocks project cant find gl/gl.h. How am I supposed to set up the includes/libs for my project to find this? I searched my drive and it came up with gl/gl.h in some nvidia folder, which is not good! I would have expected it in some non-driver-specific location; one tut said it should be in usr/include/gl, which doesn't exist :-/ Do I need to install some package for this or something?

Linux distributions don't necessarily come with OpenGL libraries by default (unlike Windows). Possible sources are Mesa, for a non-accelerated OpenGL implementation, or the driver from the vendor of your graphics card, for hardware-accelerated OpenGL. Did you install the nvidia drivers yourself? Perhaps someone chose to install them into that nvidia directory rather than the usual dirs. In that case it should work fine all the same. If you want them to be in usr/include and usr/lib, then you should probably just download the current driver from www.nvidia.com. Getting the latest driver is probably a good idea anyway. Tom
10. ## Scenegraphs again

Quote:Original post by jeroenb I was wondering how to render this scenegraph with regard to the lights. The program must be aware of these lights before rendering the terrain. Should I keep a list of pointers to these lights in the renderer? Could someone please explain this?

One possible solution to this problem is to traverse the tree to create a sorted list of objects before rendering. This first traversal visits all nodes, calculates model matrices, and adds objects (only the visible ones, if combined with culling) to a sorted list. The sorting criteria can be anything, but obviously lights should end up in front of the objects they affect. Since this combines really well with state sorting (making sure that objects with the same shader, texture, etc. are grouped together), it's a nice solution. The renderer is also reasonably easy to implement at this point, as it simply receives a list of nicely ordered objects with the proper matrices already calculated. Tom
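A minimal sketch of such a first traversal follows. The `Node` type and the lights-before-everything sort criterion are illustrative assumptions; a real pass would also accumulate model matrices and perform culling along the way:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical scene node; real nodes would carry transforms and state.
enum class Kind { Light, Mesh };
struct Node {
    Kind kind;
    std::string name;
    std::vector<Node> children;
};

// Pass 1a: walk the tree depth-first and flatten it into a list.
void collect(const Node& n, std::vector<const Node*>& out) {
    out.push_back(&n);
    for (const Node& c : n.children) collect(c, out);
}

// Pass 1b: sort so that lights precede the objects they affect (here
// simply: all lights first). stable_sort keeps the traversal order
// within each group, which plays nicely with further state sorting.
std::vector<const Node*> buildRenderList(const Node& root) {
    std::vector<const Node*> list;
    collect(root, list);
    std::stable_sort(list.begin(), list.end(),
                     [](const Node* a, const Node* b) {
                         return a->kind == Kind::Light && b->kind != Kind::Light;
                     });
    return list;
}
```

The renderer then just walks the returned list in order: it sees every light before any mesh, without the scene manager keeping a separate list of light pointers.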
11. ## Jim Adams "Programming RPGs with DirectX"

Quote:Original post by Sev Hello folks, I picked up the above-mentioned book (2nd edition) and found it to be pretty well written and easy to understand. Unfortunately, I'm having a few problems with compiling the source code. Someone suggested it might not work properly with the version of DirectX I'm running, and that I should try posting here. I've got DirectX SDK 9.0c, with the July 2005 update. I'm mainly getting a lot of errors where it doesn't seem to recognize keywords (IDirectXFileData comes up a lot) or that functions don't match their declarations. Has anyone here worked out the kinks themselves, or are you able to give me some insight on what I should try next? Thanks in advance!

Please post the specific errors you're getting. Without that, it's hard for us to tell what's wrong. Also specify which compiler/IDE you're using; that way, we can walk you through the solution in case you have linker errors. Tom
12. ## lights flickering

Quote:Original post by indigox3 I'm setting up directional lighting in OGL with something like:

```cpp
float pos[4] = { 1, 0, 0, 0 };
glLightfv( GL_LIGHT0, GL_POSITION, pos );
glEnable( GL_LIGHT0 );
```

Which works fine, until I start moving my camera around. At certain camera positions my meshes look like they are being lit from (-1,0,0)! It could be that my modelview matrix is messed up, but if that were the case, wouldn't my geometry come out messed up too? The geometry gets rendered fine; it's just the lighting that flips sometimes. I'm totally stumped by this.

The checklist in this FAQ is a good place to start if you have lighting issues. It certainly doesn't cover all possible problems, but at least it covers the most common lighting mistakes. From your description, it sounds like the light position is not properly placed (point four in the FAQ), although it is impossible to tell without the code surrounding your glLightfv(...). Your code should look something like this:

```cpp
void MainLoop()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();               // clean matrix
    gluLookAt(...);                 // create the view matrix
    glLightfv(.., GL_POSITION, ..); // define the light in world coordinates
    for (i = 0; i < all_objects; i++)
    {
        transform(object[i]);       // set up the model matrix
        draw(object[i]);            // draw the object
    }
}
```

Note that if you place the glLightfv() before the gluLookAt() (or whatever you use to set up the view matrix), the light position is defined in camera space, which could explain the behaviour you experience. Tom
13. ## how do you pass arrays to a function?

Since you already use std::vector for your strings, why not use them for the shorts as well? That saves you all the trouble of messing with C-style arrays. Tom
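For illustration, a small sketch of what that looks like in practice (the function names here are made up for the example; the point is simply that a `std::vector` carries its own size and memory management):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Passing a std::vector by (const) reference avoids both the copy and
// the separate pointer + length parameters that C-style arrays need.
short sumValues(const std::vector<short>& values) {
    return static_cast<short>(
        std::accumulate(values.begin(), values.end(), 0));
}

void appendValue(std::vector<short>& values, short v) {
    values.push_back(v);  // the vector grows itself; no manual resizing
}
```

Take by const reference when the function only reads, and by non-const reference when it needs to modify the caller's data.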
14. ## OpenGL Having problems with gluUnproject

Quote:Original post by prototypev Quote:Original post by zedzeek why havent u done what i said? I am doing that, I have debugging text to show the x,y,z in window co-ords as well as the x,y,z in object co-ords (after gluUnProject). But your suggestion isn't really helping because, like I mentioned, the values I'm getting are wrong :)

In defense of zedzeek: he wants you to print the mouse x, y and the associated z (depth). You never told us that you printed those values or that they are wrong. The only values you gave us are the return values of the function. Since the consensus is that the function looks fine, your first assumption should be that at least one of your input values (mouse x, y, projection, modelview and viewport) is incorrect. Therefore, make absolutely sure that the modelview matrix you use is identical to the modelview matrix of your object. This can be done by calling glGetFloatv(GL_MODELVIEW_MATRIX, matrix) both in the render function and in the GetOGLPos function, and comparing the contents of the two 16-float arrays. If they differ, this might be caused by the fact that you don't load the view matrix before the 3 object transforms (it might still be on the stack, but there is no way for us to tell). A less redundant and therefore safer option would be to store the array you get with glGetFloatv while rendering the object and pass that array directly to the gluUnProject function. Similar tricks should be applied to the viewport and projection if those ever change in your application. If your mouse x, y values are wrong, check the parameters of the function. If the depth is wrong, you should check for errors with glGetError. In fact, you should probably call glGetError at the end of the function anyhow, just to make sure nothing went wrong. Tom
15. ## Yet another C pointer/array syntax question

Quote:Original post by Ned_K I'm trying to get at the subtle distinction that SEEMS apparent to me, since I can have two slightly different expressions (pbuffer = buffer and *pbuffer = buffer) which have identical left-hand sides of the assignment operation and different right-hand sides and yet provide exactly the same result.

Assuming you have your left and right mixed up (if you really meant to say that they have different right-hand sides, I really have no idea what you mean and you can ignore me), you should look at it differently: they don't have different left-hand sides, because you incorrectly associate the * with the variable name. Consider the * part of the type, not part of the variable. Writing it like this will perhaps make more sense to you:

```cpp
char* pbuffer = buffer; // assigns the pointer (pointing to element 0 of
                        // buffer) to pbuffer (which happens to be a char*)
```

or

```cpp
char* pbuffer;
pbuffer = buffer;       // assigns the pointer (pointing to element 0 of
                        // buffer) to pbuffer (which was defined to be a
                        // char* on a different line)
```

Your code means exactly the same thing and will compile to the same executable, but this notation makes it clearer that the * is part of the type. To add to the confusion, the * can also be used as an operator to dereference the pointer and access the pointee directly. In other words, *pbuffer also has meaning by itself, for example:

```cpp
char* pbuffer;
pbuffer = buffer;
*pbuffer = 'a'; // the * operator dereferences the pointer, which means
                // that *pbuffer is a synonym for buffer[0] and has type
                // char instead of char*. The result of this code is that
                // buffer[0] will contain 'a' after the assignment.
```

Tom