
LordSputnik

Member
  • Content count

    97
  • Joined

  • Last visited

Community Reputation

132 Neutral

About LordSputnik

  • Rank
    Member
  1. Embedded Help!

    One used spaces, one used tabs:
    [source lang="cpp"]
	This is a tabbed line.
	And another.
This line has two spaces.
And this line.
    [/source]
    The spaces appear to get removed from the code when it's pasted; the tabs don't.
  2. Very sorry, I did mean world space - it was late and I was sleepy! The only thing I can think of is the cml::transform_point function I'm using - I've assumed that it multiplies the input point by the matrix in the same way gluUnProject does, but that might not be right. I'll try making a 4x1 matrix containing the point's values plus a 1 at the bottom, and doing a straightforward matrix multiplication - something like the sketch below.
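     This is roughly what I have in mind (untested, plain arrays rather than cml types, so treat it as a sketch):

     [code]
     // Sketch: treat the NDC point as a homogeneous column vector (x, y, z, 1),
     // multiply by the 4x4 inverse matrix (row-major here: inv[row][col]),
     // then divide through by the resulting w.
     void UnprojectPoint(const float inv[4][4], const float ndc[3], float out[3])
     {
         float v[4] = { ndc[0], ndc[1], ndc[2], 1.0f };
         float r[4] = { 0.0f, 0.0f, 0.0f, 0.0f };

         for (int row = 0; row < 4; ++row)
             for (int col = 0; col < 4; ++col)
                 r[row] += inv[row][col] * v[col];

         // The divide by w is the step I've been leaving out.
         out[0] = r[0] / r[3];
         out[1] = r[1] / r[3];
         out[2] = r[2] / r[3];
     }
     [/code]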
  3. I did write a wonderfully detailed post about my problem, then accidentally closed the tab. So this'll be a little briefer, but hopefully still detailed enough to get some help!

     Anyway, I'm trying to convert mouse co-ordinates to world space. At the moment, I'm passing in two normalized device co-ordinates, (x,y,-1.0f) and (x,y,1.0f), then transforming them by the inverse of (proj_matrix*view_matrix). I'm expecting to get two points - one on the near clipping plane and one on the far clipping plane - but I'm not. The near plane is at 30 and the far plane is at 5000, but I'm getting z values of 0.7 and 7 respectively. I'm not doing any multiplication by a w value to get to clip space - could that be the problem? If so, how should I get the w value to multiply all the elements by?

     Here are the bits of my code that are relevant:

     [code]
     Ray newray(0.2f,-0.2f,-1.0f,0.2f,-0.2f,1.0f);
     newray.SetMatrices(cam_->GetProjectionMatrix(),cam_->GetViewMatrix());
     newray.Calculate();
     [/code]

     [code]
     class Ray
     {
         cml::matrix44f_c inv_mat_;
         vector3f start_, end_;
         vector3f transformed_start_, transformed_end_;

       public:
         Ray(float sx, float sy, float sz, float dx, float dy, float dz);

         void SetRayEnds(float sx, float sy, float sz, float dx, float dy, float dz);
         void SetMatrices(const cml::matrix44f_c & proj, const cml::matrix44f_c & view);
         void Calculate();

         vector3f GetYIntersection(float y);
     };
     [/code]

     [code]
     Ray::Ray(float sx, float sy, float sz, float dx, float dy, float dz)
       : inv_mat_(cml::identity_4x4()), start_(sx,sy,sz), end_(dx,dy,dz)
     {
     }

     void Ray::SetRayEnds(float sx, float sy, float sz, float dx, float dy, float dz)
     {
         start_.set(sx,sy,sz);
         end_.set(dx,dy,dz);
     }

     void Ray::SetMatrices(const cml::matrix44f_c & proj, const cml::matrix44f_c & view)
     {
         inv_mat_ = cml::inverse(proj*view);
     }

     void Ray::Calculate()
     {
         transformed_start_ = cml::transform_point(inv_mat_, start_);
         transformed_end_ = cml::transform_point(inv_mat_, end_);
     }
     [/code]

     To all the matrix and graphics wizards: what am I doing wrong? Is this the way that you would approach the problem? Thanks for your help! Ben

     EDIT: World space, not eye space.
  4. As far as I know there's no way of mixing int and float data in a single attribute array. You can certainly use two different arrays, though. You can't, however, just use 0 and 1 as your attribute handles - you need to use glGetAttribLocation with the name of your attribute variable in the shader to be able to upload the data. For example:

     In the vertex shader (GLSL):

     [code]
     attribute vec4 position;
     attribute ivec4 some_other_attrib;

     //do some stuff in main
     [/code]

     In C/C++:

     [code]
     GLint pos_id = glGetAttribLocation(shader_program_id, "position");
     GLint other_id = glGetAttribLocation(shader_program_id, "some_other_attrib");

     glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
     glEnableVertexAttribArray(pos_id);
     glVertexAttribPointer(pos_id, VERTEX_STRIDE, GL_FLOAT, GL_FALSE, 0, 0);

     glBindBuffer(GL_ARRAY_BUFFER, intVBO);
     glEnableVertexAttribArray(other_id);
     glVertexAttribPointer(other_id, VERTEX_STRIDE, GL_INT, GL_FALSE, 0, 0);
     [/code]

     If you can, try not to use the fixed function pipeline any more. Shaders have been supported on all modern cards for ages, and allow you to do a lot more than the FFP. If a card doesn't support vertex and fragment shaders, well, that's the end user's fault for not upgrading their machine in 8 years.

     Note: I've used similar code to this in a project recently, but I only really learnt it for that, so if something's wrong, someone correct me! ;)
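     One thing I'm not sure about: with glVertexAttribPointer, GL_INT data gets converted to float for the shader. If you're on OpenGL 3.0 or later and want the shader to actually see integers, I believe glVertexAttribIPointer is the call to use - an untested sketch:

     [code]
     // Sketch (assumes GL 3.0+): the 'I' variant keeps the attribute as integers
     // instead of converting them to float on the way into the shader.
     glBindBuffer(GL_ARRAY_BUFFER, intVBO);
     glEnableVertexAttribArray(other_id);
     glVertexAttribIPointer(other_id, 4, GL_INT, 0, 0);
     [/code]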
  5. Calculating Tangent and Bitangent

    Anyone have any ideas? :S *bump*
  6. I need some feedback and advice! We have a deferred rendering system which allows for animated objects, and we're hoping to add tangent-space normal mapping to it. The renderer currently sends the following per-pixel data to the post renderer:

     Diffuse Color
     World Space Normals
     Specular Intensity and Exponent
     World Space Position

     The way I see it, there are two methods of doing this:

     1. Calculate the tangent and bitangent per frame of each mesh's animation, using the new object-space positions of the vertices and their UV co-ordinates. This seems very much the traditional method of doing things. The normal can be calculated as the cross product of the tangent and bitangent.

     2. Render the UV co-ordinates of each vertex to a render target. Then use the world-space position texture to calculate the tangent per fragment each frame (in screen space). Use a cross product of the global normal texture and the tangent to calculate the bitangent.

     From either method, use the tangent and bitangent to apply the tangent-space normal map.

     I'm thinking that the second method, although it requires generating a UV texture, would still be faster because the calculations are done on the GPU, as opposed to the CPU for the first method. But is it worth the extra texture for the speed boost I'd gain? Thoughts appreciated! Sput
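     P.S. This is roughly what I mean by the per-triangle calculation in the first method (an untested sketch with plain structs rather than our engine types):

     [code]
     struct Vec2 { float x, y; };
     struct Vec3 { float x, y, z; };

     // Sketch: tangent and bitangent for one triangle from its positions and UVs.
     // p0..p2 are object-space positions, t0..t2 the matching UV co-ordinates.
     // (Assumes the triangle's UVs aren't degenerate.)
     void TriangleTangentBitangent(const Vec3 & p0, const Vec3 & p1, const Vec3 & p2,
                                   const Vec2 & t0, const Vec2 & t1, const Vec2 & t2,
                                   Vec3 & tangent, Vec3 & bitangent)
     {
         // Position deltas along two edges of the triangle.
         Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
         Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };

         // UV deltas along the same edges.
         float du1 = t1.x - t0.x, dv1 = t1.y - t0.y;
         float du2 = t2.x - t0.x, dv2 = t2.y - t0.y;

         float r = 1.0f / (du1 * dv2 - du2 * dv1);

         tangent.x   = r * (dv2 * e1.x - dv1 * e2.x);
         tangent.y   = r * (dv2 * e1.y - dv1 * e2.y);
         tangent.z   = r * (dv2 * e1.z - dv1 * e2.z);

         bitangent.x = r * (du1 * e2.x - du2 * e1.x);
         bitangent.y = r * (du1 * e2.y - du2 * e1.y);
         bitangent.z = r * (du1 * e2.z - du2 * e1.z);
     }
     [/code]

     Per-vertex results would then be accumulated over the triangles that share each vertex and orthonormalised against the vertex normal, but that's the core of it.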
  7. Problem with getting pixel pos from depth

    This may help you: topic_id=579762 Sput
  8. The speed of writing to and reading from a texture depends on its size. The rate at which the GPU can write pixels is called the fill rate, and since that rate is fixed, the more pixels in the texture, the longer it takes to fill. In your textures, there are:

     200 x 150: 30,000 pixels
     800 x 600: 480,000 pixels - 16x bigger than 200 x 150
     1280 x 1024: 1,310,720 pixels - roughly 44x bigger than 200 x 150

     So you can see from this that operations will be much slower on the largest texture, because it has around 44 times more pixels for the graphics card to fill. If you don't need the precision, keep the buffer smaller than the viewport - maybe half the size in each dimension? Experiment with different values to find out what you need in your application.

     You could also try simplifying your shader code so that it takes less time per fragment. If you want some help doing this, post it up and I'll take a look. And you may have unnecessary steps in your rendering code, so post that up too :P Sput
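     P.S. Something like this is what I mean by a half-size buffer (an untested sketch - it assumes you already have an FBO bound, and viewport_w/viewport_h are just placeholders for your window size):

     [code]
     // Sketch: a color attachment at half the window resolution.
     GLuint halfTex;
     glGenTextures(1, &halfTex);
     glBindTexture(GL_TEXTURE_2D, halfTex);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, viewport_w / 2, viewport_h / 2, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, NULL);
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, halfTex, 0);

     // Remember to match the viewport to the attachment when rendering into it.
     glViewport(0, 0, viewport_w / 2, viewport_h / 2);
     [/code]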
  9. Hey, As far as I know, there's no way to access the data stored in a RenderBuffer. If you want to obtain a texture, simply bind a texture instead of the renderbuffer, like so:

     [code]
     //Generating the texture.
     glGenTextures(1, &texture);
     glBindTexture(GL_TEXTURE_2D, texture);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, 1024, 1024, 0,
                  GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

     //Set up the FBO here, then...
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, texture, 0);
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_TEXTURE_2D, texture, 0);
     [/code]

     Then you can read from the texture as if it were any other. Hope this helps, Sput
  10. GLSL Getting 3D Position from Depth

    Thanks very much for your reply! I've changed my normal texture to use floats and that works fine, so I'll be working on the position later today! Just wondering, have you thought about storing only two floats for a normal and computing the third whenever you need it, since the squares of the components sum to one for a unit normal? It means you can use the blue component of the float texture for storing something else. Or would that be more trouble than it's worth? :S Thanks, Sput

    EDIT: Ah, never mind, I just realized this method can only be used in camera space, where all the Z normals are positive. In world space there's no way of knowing the sign of the Z normal. [Edited by - LordSputnik on August 22, 2010 11:43:38 AM]
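    For completeness, the reconstruction I meant is just the sketch below - and as per the edit, it only recovers the magnitude of z; the sign has to be known some other way.

    [code]
    #include <cmath>

    // Sketch: rebuild the third component of a unit normal from the other two.
    float ReconstructNormalZ(float nx, float ny)
    {
        float zz = 1.0f - nx * nx - ny * ny;
        return std::sqrt(zz > 0.0f ? zz : 0.0f); // clamp guards against rounding error
    }
    [/code]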
  11. GLSL Getting 3D Position from Depth

    Thanks for your response! So you're storing that in a floating-point texture and passing the texture into the deferred renderer? I did think about that, but then I thought it might use too much memory on the GPU. Also, I've never used floating-point textures with much success before - have you got any sample code I could see showing how to set them up in GL, or a good tutorial? For example, how does the GPU know whether to clamp the value to [0.0,1.0] or not? Is it able to detect that a floating-point texture is the color attachment? Sput [Edited by - LordSputnik on August 17, 2010 5:49:25 PM]
  12. yasp: basic camera operation

    What happens currently when you drag the mouse? It may be that you need to undo the rotation after you've rendered the object, but I can't say for sure until I know what happens with the current code. If this is the problem, there are two ways of solving it:

    Method 1 - Reverse the rotation:
    [code]
    gl.glRotatef(rot.x,1.0f,0.0f,0.0f);
    gl.glRotatef(rot.y,0.0f,1.0f,0.0f);
    //Draw object
    gl.glRotatef(-rot.y,0.0f,1.0f,0.0f);
    gl.glRotatef(-rot.x,1.0f,0.0f,0.0f);
    [/code]

    Method 2 - Push and pop the modelview matrix stack:
    [code]
    gl.glMatrixMode(GL.GL_MODELVIEW);
    gl.glPushMatrix();
    gl.glRotatef(rot.x,1.0f,0.0f,0.0f);
    gl.glRotatef(rot.y,0.0f,1.0f,0.0f);
    //Draw object
    gl.glPopMatrix();
    [/code]

    I normally use the second method. You may also want to try gluLookAt, if that's available in Java. It would be helpful if you could post up the code from when you set up your camera to the end of your rendering function. :) Sput

    P.S. Don't worry too much about quaternions; matrices are far more useful to learn about. I mainly use quaternions only for storage or where I have to apply a rotation quickly.
  13. Hey everyone! Over the past week, I've been writing a shader which takes the depth value from my depth buffer and two texture co-ordinates, and attempts to use them to reconstruct the 3D world-space position for use in deferred lighting. However, something is all wrong with the co-ordinates it generates. I'm pretty sure it's something to do with the inverse combined camera and projection matrix I'm using, but I'm not completely sure. This is the relevant GLSL shader code:

      [code]
      uniform mat4 ModelProject;

      vec3 DepthToPos(vec2 texcoord)
      {
          vec2 screen = texcoord;
          float depth = texture2D(tex4, screen).x;

          screen.x = (screen.x * 2.0) - 1.0;
          screen.y = (screen.y * 2.0) - 1.0;
          depth = (depth * 2.0) - 1.0;

          vec4 world = inverse(ModelProject) * vec4(screen, depth, 1.0);
          return world.xyz / world.w;
      }

      vec3 fragpos = DepthToPos(gl_TexCoord[0].st);

      vec4 final;
      final.x = fragpos.x;
      final.y = fragpos.y;
      final.z = -fragpos.z;
      final.w = 0.0;
      final /= 32; //Scale the output down so that values are in the range [-1.0,1.0].

      gl_FragColor = final;
      [/code]

      I have my engine rendering a cube at the moment. The cube is centered around (0.0,0.0,-30.0). Here's the output of my shader. Taking the center of the cube as an example, you can see that the RGB is:

      R: 82, G: 49, B: 133

      According to my shader, these values correspond to:

      X: (82/255) * 32 = 10.3
      Y: (49/255) * 32 = 6.2
      Z: (133/255) * 32 = 16.7

      And I know for a fact it's a 2x2 cube, meaning that the x and y values should be no bigger than sqrt(1+1), right?

      I'm passing the matrix in like so:

      [code]
      mat44 CombinedMatrix = ProjectionMatrix * CameraMatrix;
      loc = glGetUniformLocation(ShaderID, "ModelProject");
      glUniformMatrix4fv(loc, 1, false, CombinedMatrix.data());
      [/code]

      So, what's up with it? :S Thanks for all your help, Sput
  14. Polygon count in modern games?

    Quote: Original post by InvalidPointer
    Quote: Original post by Waaayoff
    I realize that you can't give me a precise number but a range on average would be nice. I would like to know the ranges for these kinds of games:
    FPS - such as Call of Duty 6
    RTS - such as Age of Empires
    As for MMORPGs, how many polygons if the game focuses on graphics, such as Age of Conan, and if it focuses on gameplay, such as Darkfall, which promises that it can handle battles with 200+ players. Thanks :)

    I'd like to point out that Age of Empires never used polygons at all. It's all blitting/sprites.

    I think he's referring to Age of Empires 3, since CoD 6 is a recent game. If that's the case, the game does have polygons, and there's an option to switch between low-poly and high-poly models.

    As for polygon count, I'd expect that both CoD 6 and AoE 3 have less than a million polygons on screen at any time. In CoD, you have the player model, about 4 ally models, and up to about 15 enemies, each with ~20K polys. With them alone, that's 400K polys. Then you have buildings and scenery, which probably aren't as detailed as the players - so they'll probably take up around 200K polys. Even with another 100K for weaponry and vehicle models, that's still only 700K, and on top of that there's scene management, which reduces the visible poly count.

    For Age of Empires III, battles usually take place with up to 200 units visible on the screen. We'll say each of these units has about 500 polys, since they're very small when viewed by the player. That's 100K on units. Then you have the terrain, which is quite detailed in AoE 3, but still likely under 10K visible polys. Buildings and other static meshes are likely another 10K at any one time. So that's only 120K by my estimate.

    As for the two MMOs, I've never played them, so I can't really say :)
  15. OpenGL Simple problem...Blank Screen [Solved]

    Change your array specification to:

    [code]
    glVertexPointer(2, GL_INT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    [/code]

    And your initialization of vertices to:

    [code]
    GLint vertices[] = {0, 0,
                        image->getWidth(), 0,
                        image->getWidth(), image->getHeight(),
                        0, image->getHeight()};
    [/code]

    Otherwise the array type and GL_INT won't match (GLint, not GLuint, is the counterpart of GL_INT).

    As for the transparency, is your blend function still glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);? Are there any other quads being drawn, and can you post a screenshot?