About muhkuh

  1. This is an embedded system without swap space. Furthermore, the OpenGL implementation (not the API) decides what to do with textures, and it decides to place a copy of them in system RAM (which is generally a good decision, but not in my case). The question is whether there is a way to prevent this.
  2. Hi, I'm developing for a system that has 256 MB of video memory and 256 MB of system RAM. Is there a way to prevent the OpenGL implementation from keeping copies of all textures in system memory? I need the system RAM for other things. The hardware is an NVIDIA FX 5200. I tried using PBOs to upload mip level 0 and let glTexImage2D read from VRAM, but this didn't decrease memory consumption in any way. My next try would be FBOs, but maybe there is some other way to prevent this redundancy? Thanks for answering. Markus
  3. Hello, I'm starting a new project using OpenGL/GLSL that is meant to be fairly platform independent. I really need an effect format for GLSL, and now Cg has been dug out of its grave and supports GLSL as a target profile. Rumor had it that the Cg compiler didn't produce optimal code for hardware other than NVIDIA's. Is this still true when using GLSL as the target, given that the produced GLSL code is optimized by the OpenGL ICD afterwards? Are there any alternative effect formats I could use? I read about ColladaFX, but is it really suited to being used directly in an engine, or just for exchanging FX data in the content pipeline? Thank you for your opinions. Markus
  4. Hi, I will describe my situation first. We do some semi-embedded development. There is an x86 embedded PC system using OpenGL under Xorg to play some animations on one display. Because we develop quite close to what the hardware can display at an acceptable frame rate, I would like to let the artist use exactly the target system, so he can see performance problems early in the development cycle. As I don't want to hack a network interface into the engine, the output OpenGL window and the editor the artist uses have to run on the same machine somehow. My first idea was to use a 3D-rendered GUI like CEGUI and implement an in-game editor. The problem is that the editor itself takes up huge parts of the screen, so it might get more complicated for the artist. My second idea was to open two OpenGL windows on two displays, if the hardware allows it. This might be necessary anyway in the future. The problem is that neither SDL nor GLUT supports multiple displays, and I really don't want to code a lot of platform-dependent stuff. The third option that crossed my mind was to mix remote and local displays somehow: the application runs on the target system on an Xorg server, the output window is opened on a local display of the target, but the editor appears on some remote machine, using wxWidgets or something like that as the GUI library. Would this approach work? (I'm not that familiar with X11.) Thanks for your answers. Suggestions for solving the problem another way are welcome too. Markus
  5. Hello, I'm writing a parser for .x files. There seem to be some different "interpretations" of the MeshMaterialList template. The DirectX 9 SDK states:

     template MeshMaterialList {
         <F6F23F42-7686-11CF-8F52-0040333594A3>
         DWORD nMaterials;
         DWORD nFaceIndexes;
         array DWORD faceIndexes[nFaceIndexes];
         [Material <3D82AB4D-62DA-11CF-AB39-0020AF71E433>]
     }

     (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dire...) PolyTrans exports a cube (12 faces) with 1 material like this:

     MeshMaterialList {
         1;1;0;;
         {Material__default}
     }

     There are two things that I don't get: 1. Where in the spec is it mentioned that when there is only one material, it can be assigned to the whole cube like this? 2. Why is a double semicolon needed? The spec for text-encoded .x files has an example that doesn't have one: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dire...

     "If an array is defined in the following manner:

     Template mytemp {
         array DWORD myvar[3];
     }

     Then an instance of this looks like the following:

     mytemp aTemp {
         1, 2, 3;
     }

     In the array example, there is no need for the data items to be separated by semicolons because they are delineated by commas. The semicolon at the end marks the end of the array."

     Double semicolons occur when the array elements are themselves defined by templates, which is not the case for the faceIndexes in the MeshMaterialList template. Did I misinterpret the spec? I contacted the author of PolyTrans, and he states that he wrote the .x exporter with feedback from the DirectX development team and that it is correct the way it is. In contrast, the .x files that come with the DirectX SDK don't have this double semicolon and "one material" optimization. Thanks a lot. Markus Henschel, 3D Programmer, Bally Wulff Automaten GmbH, Germany
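For a parser that has to accept both the SDK samples and the PolyTrans output quoted above, one pragmatic option is to treat ';' and ',' uniformly as token separators and to ignore empty tokens, so "1;1;0;;" and "1;1;0;" read identically. A sketch (read_ints is a hypothetical helper, not part of any SDK):

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>

/* Tolerant reader for the numeric part of a text-encoded template body.
 * Both ';' and ',' count as separators and empty tokens are skipped,
 * so double semicolons cause no parse error.  Returns the number of
 * integers read (at most max). */
static int read_ints(const char *s, long *out, int max)
{
    int n = 0;
    while (*s && n < max) {
        while (*s == ';' || *s == ',' || isspace((unsigned char)*s))
            s++;                      /* skip separators and empty tokens */
        if (!*s || !(isdigit((unsigned char)*s) || *s == '-'))
            break;
        char *end;
        out[n++] = strtol(s, &end, 10);
        s = end;
    }
    return n;
}
```

With the PolyTrans body, read_ints yields 1, 1, 0: nMaterials = 1, nFaceIndexes = 1, and one face index. If nFaceIndexes is smaller than the mesh's face count, a reader could apply index 0 to all faces, which would cover the "one material" shorthand.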
  6. Hello, I read the FAQ about state changes, but there is one thing I'm not really sure about. Say I render multiple objects using the same shaders, textures and geometry, but with different shader constants. Does it really make a difference whether I update one shader constant or several between draws? For instance:

     Method a: 1. render object A, 2. update several constant registers of the vertex and pixel shader, 3. render object A again.
     Method b: 1. render object A, 2. update fewer constant registers of the vertex and pixel shader, 3. render object A again.

     Does method a really differ significantly in speed from method b? I thought that what hurts performance is that different parts of the pipeline get stalled, but in this case the same parts would be stalled. Thanks for giving me some insight :-) Markus
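Independent of the driver-side answer, redundant constant updates can be filtered on the CPU with a shadow copy, which makes the "several vs. fewer registers" question matter less in practice. A speculative sketch (all names invented, not any real API):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_CONSTANTS 32

/* Shadow copy of 4-float shader constants.  set_constant() reports
 * whether the driver actually needs to be told about the value, so
 * unchanged registers are never re-sent between draw calls. */
static float shadow[MAX_CONSTANTS][4];
static bool  valid[MAX_CONSTANTS];

static bool set_constant(int reg, const float v[4])
{
    if (valid[reg] && memcmp(shadow[reg], v, sizeof shadow[reg]) == 0)
        return false;                 /* identical value: skip the API call */
    memcpy(shadow[reg], v, sizeof shadow[reg]);
    valid[reg] = true;
    return true;                      /* caller issues the real update here */
}
```

The same idea generalizes to any piece of render state: compare against the last value sent and only forward real changes.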
  7. Hi, I read the Game Programming Gems 4 article "Hardware Skinning with Quaternions", but it is a bit short in its explanation of the technique. Unfortunately, I can't find any information about it on Google except that there is an article about it in GPG4. It would be great if someone here could give me a hint what to search for. Since there is no online version, I'll try to summarize what the article describes (as far as I understood it): instead of blending final vertex positions, it interpolates the transforms that produce a skinned vertex. The vertex is bound to one bone, but (somehow) the rotations of several bones are spherically interpolated as quaternions in the vertex shader. Any help appreciated. Thanks, Markus
  8. One last thing: so usually the local bone transformation matrix includes the rotation of the current bone relative to the parent bone, and the translation away from the parent bone's joint. This actually means the "length" of the parent bone is the translation component of the current bone's local matrix, doesn't it? So the root bone usually has no translation in its local matrix at all? Thank you for this "talk to me like I'm a 3 year old" tutorial :-) Regards, Markus
  9. Unfortunately I can't find the tutorial(s) at the moment. I read several, and what I'm actually trying to do is get a basic understanding of what is going on in general and how to do a basic implementation. So let's take this as a reference: http://www.gamedev.net/reference/articles/article2221.asp There are: 1.) the local bone matrix (positions a bone's joint relative to its parent bone): a translation (moving the joint away from the parent bone's joint, probably constant across keyframes) and a rotation (likely to change between keyframes); 2.) the skin offset matrix (brings the vertices from bind space to bone space): a translation and a rotation. Is this correct?
  10. Quote: Original post by haegarr — For all three cases the cited formula is ok! For (1) and (3) the normals before and after the scaling do not differ at all. And for (2) here is the proof: with s > 0 denoting the scale factor,

      normalize( k0*(M0*n*s) + k1*(M1*n*s) + k2*(M2*n*s) + k3*(M3*n*s) )
      = normalize( s*k0*(M0*n) + s*k1*(M1*n) + s*k2*(M2*n) + s*k3*(M3*n) )
      = s * normalize( k0*(M0*n) + k1*(M1*n) + k2*(M2*n) + k3*(M3*n) ) / sqrt(s*s)
      = n'

      since the scale matrix (for _uniform_ scaling) is S(s) := s * I, and so, for any concatenated transformation and s != 0, T * S(s) = T * s * I = s * T * I = s * T.

      Thanks a lot for describing it in such detail. I understand that uniform scaling applied to the model as a whole doesn't affect normals. What I initially meant was uniform scaling coming from the individual bone and skin offset matrices, so the s in the formula above would not be the same for all bone influences. I read in several tutorials that the bone and skin offset matrices generally do not need scaling. Why not? It seems there is something I don't understand correctly about the bone and skin offset transformations. Maybe you or someone else can give me a hint:

      1.) Skin offset matrix: I imagine this matrix brings the vertices from the bind pose into local bone space, like the opposite of taking a bone and moving it to the right place in the bind pose. This involves moving and rotating the bone (or the vertices) so that the position of the joint and its direction are correct. Until now I thought scaling would occur here too, to get the bone from unit length to its desired length. This doesn't seem to be right. Why?

      2.) Bone matrix (the one transforming a bone relative to its parent bone): I thought this mostly consisted of rotations relative to the parent bone. But if there is no scaling in the skin offset matrix, probably this is wrong too? Is the translation component in the bone matrix what gets the information about the length of a bone into the skeletal animation system? And if no scaling is needed 99.9% of the time for the bone and skin offset matrices, what are the 0.1%? Making individual body parts of the model grow or shrink? Thanks a lot! Markus
  11. Quote: Original post by haegarr — Setting the homogeneous co-ordinate w to zero means nothing more than ignoring the translational part of the transformation. Scaling is still active, since it is located in the linear sub-matrix of the transformation, apart from the translational part. If you mean simply _direction_ vectors by the term "normal vector" (not the case, I assume), then all is ok. However, scaling an _orthogonal vector_ (just to use a more specific term) like the model is not correct in every case. It is correct if and only if the scaling factors are identical for each principal axis. If, on the other hand, you scale in only one direction (e.g. a height field), then scaling the normals the same way would yield vectors that are no longer orthogonal. Example: the vector a := [-1, 1] is orthogonal to the vector b := [1, 1], since

      a . b = -1 * 1 + 1 * 1 = 0

      Scaling only in the y direction by s:

      a' . b' = -1 * 1 + s * s != 0

      Scaling in both directions by s:

      a' . b' = -s * s + s * s = 0

      You can see from this 2D example that scaling the normal "orthogonally" to the model would do the trick:

      a' . b' = -s * 1 + 1 * s = 0

      Maybe my question wasn't written clearly. By "normal vector" I mean the surface normal at a vertex. I do understand the math; my question is more about how to deal with this when doing skeletal animation with skinning. Most tutorials transform vertices like this (example for 4 bones influencing the vertex; v: vertex, v': transformed vertex, k0..k3: weighting factors for the influencing bones, M0..M3: transformation matrices for the influencing bones):

      v' = k0*(M0*v) + k1*(M1*v) + k2*(M2*v) + k3*(M3*v)

      They do exactly the same transformation for normals with w = 0, but renormalize n' after the calculation is done (n: normal, n': transformed normal):

      n' = normalize( k0*(M0*n) + k1*(M1*n) + k2*(M2*n) + k3*(M3*n) )

      This should definitely work when no scaling is taking place. Would the result still be correct with uniform scaling? And shouldn't the renormalization be done for all the individual M*n terms instead of for n'?

      n' = k0*normalize(M0*n) + k1*normalize(M1*n) + k2*normalize(M2*n) + k3*normalize(M3*n)
  12. Hello, I'm doing skeletal animation with smooth skinning. I've got a number of transformation matrices and weighting factors per vertex that I use to transform vertices from bind space to object space. I'm a bit unsure what to do with normals. I read in some tutorials that they transform the normals using the same matrix as for the vertices, but with the normal's w set to zero. That takes care of the translational part, but what about scaling? I think bone transformations can include scaling. How is scaling handled when transforming normals with smooth skinning? Thanks Markus
  13. Hello, I changed my previous question to this. Given: OpenGL standard lighting; one enabled point light with ambient = diffuse and no specular; one object; no global ambient lighting.

      Method 1: render the object as usual.
      Method 2: render in two passes. 1st pass: light's diffuse set to zero. 2nd pass: enable blending (GL_FUNC_ADD, GL_ONE, GL_ONE), restore the light's diffuse color, set the light's ambient color to zero.

      Shouldn't the output be the same? In method 1, diffuse and ambient are added per vertex; method 2 does the addition in the frame buffer. Why is the output of method 2 much brighter than method 1? Thanks a lot. Markus [Edited by - muhkuh on December 2, 2005 6:00:32 AM]
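The post doesn't show the cause, but one candidate worth checking: under GL_ONE/GL_ONE additive blending, any lighting term that neither pass zeroes out gets summed once per pass. Note that OpenGL's global ambient (GL_LIGHT_MODEL_AMBIENT) defaults to (0.2, 0.2, 0.2, 1.0) unless explicitly cleared, so "no global ambient lighting" set in code may still leave the default enabled. A minimal arithmetic sketch, one color channel, hypothetical helper names:

```c
#include <assert.h>

/* single pass: c = globalAmb + lightAmb + lightDiff */
static float single_pass(float globalAmb, float lightAmb, float lightDiff)
{
    return globalAmb + lightAmb + lightDiff;
}

/* two passes under additive blending:
 * pass1 = globalAmb + lightAmb   (light's diffuse zeroed)
 * pass2 = globalAmb + lightDiff  (light's ambient zeroed)
 * framebuffer ends up holding pass1 + pass2, so the global ambient
 * term is counted twice and the image comes out brighter. */
static float two_pass(float globalAmb, float lightAmb, float lightDiff)
{
    float pass1 = globalAmb + lightAmb;
    float pass2 = globalAmb + lightDiff;
    return pass1 + pass2;
}
```

With globalAmb actually zero the two methods agree; with any leftover term (default global ambient, material emission) they diverge by exactly that term.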
  14. Hi there, I'm using this on texture stage 3:

      glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
      glTexEnvf(GL_TEXTURE_ENV, (GLenum)GL_COMBINE_RGB_EXT, GL_MODULATE);
      glTexEnvf(GL_TEXTURE_ENV, (GLenum)GL_SOURCE0_RGB_EXT, GL_PRIMARY_COLOR);
      glTexEnvf(GL_TEXTURE_ENV, (GLenum)GL_OPERAND0_RGB_EXT, GL_SRC_COLOR);
      glTexEnvf(GL_TEXTURE_ENV, (GLenum)GL_SOURCE1_RGB_EXT, GL_PREVIOUS);
      glTexEnvf(GL_TEXTURE_ENV, (GLenum)GL_OPERAND1_RGB_EXT, GL_SRC_COLOR);

      This actually means the value of the texture set for this stage isn't needed, but when I do not bind a texture to this stage, the stage is disabled completely and the computation isn't executed. Do I really have to set a (dummy) texture for this stage? Thanks Markus
  15. C++ compiler error: Thanks a lot. So the problem is not gcc but my colleague's strange coding style. I like it a lot more this way.
