About Dynx

  1. Are there any recent, lightweight C++ skeletal character animation libraries without extra features (i.e. not a full game engine)? We are using Cal3D, but it doesn't seem to be maintained as actively as we need. I'm looking for something that is graphics-API independent, or at least one for OpenGL. Thanks
  2. Yes that was it, thanks!
  3. Hi all, I fine-tuned my 2D platformer physics, and when I added slow motion I realized it is broken: the physics still depends on framerate. When I scale down the elapsed time, every force is scaled down as well. The jump force shrinks, so in slow motion the character jumps to a smaller height, and the gravity force shrinks too, so the character travels further through the air without falling. I'm posting my update function in hopes that someone can help me out (I separated the vertical (jump, gravity) and walking (arbitrary walking direction on a platform; platforms can be at any angle) vectors):

```cpp
characterUpdate:(float)dt
{
    // Compute walking velocity
    walkingAcceleration = direction of platform * walking acceleration constant * dt;
    initialWalkingVelocity = walkingVelocity;
    if( isWalking )
    {
        if( !isJumping )
            walkingVelocity = walkingVelocity + walkingAcceleration;
        else
            walkingVelocity = walkingVelocity + Vector( walking acceleration constant * dt, 0 );
    }

    // Compute jump/fall velocity
    if( !isOnPlatform )
    {
        initialVerticalVelocity = verticalVelocity;
        verticalVelocity = verticalVelocity + verticalAcceleration * dt;
    }

    // Add walking velocity
    position = position + ( walkingVelocity + initialWalkingVelocity ) * 0.5 * dt;

    // Add jump/fall velocity if not on a platform
    if( !isOnPlatform )
        position = position + ( verticalVelocity + initialVerticalVelocity ) * 0.5 * dt;

    verticalAcceleration.y = Gravity * dt;
}
```
  4. Thanks for all the replies. Working with UIImageViews solved the problem. As you guys said, placing an image into a UIView makes the GPU draw it once and cache the result. I have my frames back again!
  5. Hi all, Without any testing whatsoever, I developed my entire framework around Core Graphics on iPhone. I set up a timer for 60 Hz, and every frame I reset the view and draw the following images: 320x480 (4x - background). Here is my draw function, called every time the timer fires:

```objectivec
- (void)drawRect:(CGRect)rect
{
    if( !context )
        context = UIGraphicsGetCurrentContext();

    // Draw background
    CGContextDrawImage(context, CGRectMake(0, 0, 320, 480), background1Image);
    CGContextDrawImage(context, CGRectMake(0, 0, 320, 480), background2Image);
    CGContextDrawImage(context, CGRectMake(0, 0, 320, 480), background3Image);
    CGContextDrawImage(context, CGRectMake(0, 0, 320, 480), background4Image);
}
```

I removed all interaction code, everything, and the game runs at around 20 Hz. If I draw colored rectangles instead of images from PNGs, it runs at around 55 Hz. Is this expected (as in, you can't make a 2D game with textures using Core Graphics on iPhone), or am I doing something terribly wrong?
  6. So the units are 64 bits, as I wanted them to be. What is a 16-bit integer? Is it a short? Do GPUs have 16-bit registers? Also, if I do this, what is the size of b going to be? Normally I would assume 32 bits, but then, who knows? :)

```glsl
uint alpha = texture3D( 3dtexture, vec3( 0.1, 0.1, 0.1 ) ).a;
uint b = 1;
```
  7. I need a 64-bit data cube (packed) that I need to transfer to the GPU to do bitwise operations on. Would this be correct texture creation?

```cpp
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D( GL_TEXTURE_3D, 0, GL_RGBA16UI_EXT, BW, BH, BD, 0, GL_RGBA, GL_UNSIGNED_INT, data );
```

What I don't quite understand is what happens if I do this in GLSL?

```glsl
uint alpha = texture3D( 3dtexture, vec3( 0.1, 0.1, 0.1 ) ).a;
```

What is the size of 'alpha'? Is it a 16-bit unsigned int? That doesn't make sense. Does OpenGL convert the data used while creating the texture to a 32-bit uint? If so, how can I get the last 10 bits? Thanks!
  8. I can guarantee that my vectors are orthogonal. Also, I am not complaining about a shear, which I take to mean there are no orthonormalization issues. My problem is that I am starting off with a direction and a side vector, not an up and a direction like you would with gluLookAt or similar functions, and the side vector is always a vector on the XZ plane. I compute the up vector from side and dir, and reorient the side vector so that it is orthogonal to both. I don't see anything wrong with this. Here is a simpler explanation of what happens (it may help to mention that I am on iPhone: the side vector is obtained from the compass heading, and the direction is the gravity vector). Say the object is at (0,0,0) and we are looking from above in the negative Y direction. There is a certain heading value (thus a certain side vector) for which, when I tilt the iPhone up (changing the gravity vector), the object moves away from the bottom of the screen, which is expected. Now if I rotate the device to the opposite of that side direction (-180), tilting the device up makes the object move away from the upper edge of the screen, which is not expected. Take a perpendicular rotation of the device (-90), and when I tilt it up, it moves away from the corners, which is extremely unexpected. I can't tell what's going on here. Hope this all makes sense to someone.
  9. Hi all, I am working to create my own view matrix, and what I have is a camera direction (dir) and a side vector on the XZ plane (Y being the OpenGL up):

up = side x dir / ||side x dir||
side = dir x up / ||dir x up||

matrix = (  sx,  sy,  sz,
            ux,  uy,  uz,
           -dx, -dy, -dz )

This matrix ends up being inconsistent. There is a certain side direction for which the camera rotates correctly as the direction changes. But otherwise, for instance if the side direction is the negative of that one, the camera rotates opposite to the direction vector, and in between it's just skewed. How can I make it consistent?
  10. Hi all, I am rendering a volume (a cube with all corners assigned different colors - a 2x2x2 3D texture), and I noticed that with my current raycaster, banding artifacts occur as rays deviate from the view direction. EDIT: Actually it is related to the number of samples accumulated within the ray. So I guess I can make sure that each ray contains the same number of samples by scaling the exit point of the ray from the volume - but would that be a correct representation of the volume (in colors)? Does anyone have an insight into this?
  11. Nope, I am not doing displacement mapping or raytracing. I'm implementing the 2009 I3D paper GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering. It uses raycasting to render volumes with data sizes of 30 gigs in real time on the GPU - pretty cool. And if my theory is correct, the 2nd option from my first post (rays from the near plane) should work just fine for a camera inside the volume.
  12. Then it is exactly what I think it is, with extra features :) I am implementing a volume renderer, and I need to accumulate colors along the path of a ray. The ray may not always be sampled exactly at points where colors are defined in a 3D texture, therefore I need trilinear interpolation on the cube formed by the closest values in the texture. Now, tell me if that's not what I'm doing when I sample a 2x2x2 GL_LINEAR texture at (0.5, 0.5, 0.5) in the pixel shader - that's all I need :)
  13. Ok, got it working. The reason linear_mipmap_linear was giving me a black box is... well, I forgot to generate mipmaps :) And actually GL_LINEAR does trilinear interpolation for 3D textures, so there's no need for l_m_l.
  14. Ok, they are both correct, but the first option will not give me correct results inside the volume because no pixels will be drawn there. The question now changes to: how do you get the world coordinate of a pixel on the view plane? And while typing this I found the solution :) Render a quad at the location of the view plane and use those pixels instead of rendering a bounding box. Which again makes me think: how do you keep the advantage that a bbox brings (only casting from the pixels that are on the surface of the bbox) while still being able to see inside the volume? Any suggestions?
  15. I mean fetching pixel colors from 3D textures (sampler3D in GLSL). What do you mean by palette textures? And I think the blackness on the edges was because of pixels blending with the color defined outside of the texture area.