Raptisoft

Members
  • Content count: 58
  • Joined

  • Last visited

Community Reputation

171 Neutral

About Raptisoft

  • Rank: Member
  1. Just wanted to weigh in with what the problem is/was... Did you know Xcode will take your PNG and modify it? As part of the modification process, it premultiplies the alpha. That's why this was happening. I don't see any real way around it except to include pnglib in my project, or stop using PNGs.

     I've been programming for 20 years and this is the most infuriating thing I've ever seen. The idea that they would MODIFY the data you're including in the bundle... and worse, they don't give you a way to turn it off. There are notes on the web on how to turn it off, but none of them actually work.

     Infuriating.
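     For reference, a minimal sketch of one workaround, assuming the assets really are being premultiplied by Xcode's PNG processing: leave the files alone and switch the blend function to the premultiplied-alpha form, so the already-multiplied RGB composites correctly. This assumes a current desktop GL context and is not the poster's chosen fix.

         #include <OpenGL/gl.h>   // macOS; <GL/gl.h> elsewhere

         // Blend state for textures whose RGB has already been multiplied by
         // alpha (e.g. PNGs run through Xcode's asset pipeline).  Straight-alpha
         // textures would use GL_SRC_ALPHA as the source factor instead.
         void SetPremultipliedAlphaBlend()
         {
             glEnable(GL_BLEND);
             glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
         }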
  2. Hi Erik,

     I just tried the change from modulate to replace -- I got the dark rim on the unfiltered text as well doing that.

     BUT, just flailing randomly, I tried this: glBlendFunc(GL_SRC_ALPHA, GL_ONE); ... and it seemed to work, at least for white stuff (haven't tried colored yet).

     What are the implications of that? That's not a correct alpha blend, is it?... maybe that's giving me additive, I don't remember.
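     For reference, the standard blend equations behind those settings (a sketch, assuming a current GL context; each call replaces the previous one -- they are alternatives, not a sequence):

         glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // straight alpha:      dst = src*a + dst*(1-a)
         glBlendFunc(GL_ONE,       GL_ONE_MINUS_SRC_ALPHA); // premultiplied alpha: dst = src   + dst*(1-a)
         glBlendFunc(GL_SRC_ALPHA, GL_ONE);                 // additive:            dst = src*a + dst

     GL_SRC_ALPHA/GL_ONE is indeed additive: it never darkens the destination, which is why a dark fringe around bright glyphs disappears, but overlapping bright sprites will wash out toward white.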
  3. Hola,

     Basically I have rectangles on the texture atlas -- so I'd say, for this image, draw with this texture rectangle.

     The texture rectangle is modified on the OpenGL version -- so if I had rect=(0,0 - 10,10) on OpenGL it'll actually be rect=(.5,.5 - 9.5,9.5). I have also tried rect=(.5,.5 - 10,10) and rect=(.5,.5 - 10.5,10.5), with those second two making it worse. (Naturally all coordinates are converted to 0.0 - 1.0 by the width/height of the texture itself.)

     So essentially it looks like:

     Image rect is (0,0 - 10,10)
     Fix it for OpenGL -- now rect is (.5,.5 - 9.5,9.5)
     Divide rect by width/height of texture to convert to 0-1

     I also have tried kludging the actual draw positions onscreen, by adding .5 to them as well, with no luck. Again, I've tried lots of combinations, everything from adding .5 to all four corners of the rect, to shrinking the rect by .5, and even expanding the rect by .5 as I randomly started lashing about for solutions.

     * * *

     For loading textures, at this time they're a simple PNG that I ensured had a white (invisible) background in Photoshop. For completeness, I tried drawing with no alpha and got white squares (if the background had been dark I'd have gotten the imprint of the letters over the background color). Extra note: the utility I wrote to put the images onto the texture atlas automatically bleeds the pixels' RGB outward, so even without the extra step of going into Photoshop to guarantee the white background, it SHOULD work. But I did the Photoshop thing for completeness.

     I tried premultiplied alpha, but it didn't seem to help anything (it's easy to premultiply the alpha in my load pipeline) -- still got the rim.

     Extra wrinkle: this is particularly frustrating because it worked at one point (I had this same fight a couple of months ago, but it was easy to fix by simply adding the .5). It stopped working after I accepted an update to OS X Lion.
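     For reference, a minimal sketch (hypothetical names, following the conventions above) of the usual pixel-rect to UV conversion with a half-texel inset, which keeps bilinear filtering from sampling the neighboring atlas pixels at the rectangle's edges:

         // Convert an atlas rectangle given in pixels into normalized UVs,
         // inset by half a texel on every side.
         struct UVRect { float u0, v0, u1, v1; };

         UVRect AtlasRectToUV(float theLeft, float theTop, float theRight, float theBottom,
                              float theAtlasWidth, float theAtlasHeight)
         {
             UVRect aResult;
             aResult.u0 = (theLeft   + 0.5f) / theAtlasWidth;
             aResult.v0 = (theTop    + 0.5f) / theAtlasHeight;
             aResult.u1 = (theRight  - 0.5f) / theAtlasWidth;
             aResult.v1 = (theBottom - 0.5f) / theAtlasHeight;
             return aResult;
         }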
  4. Hi all,

     I have some drawing code that is expected to work on "all systems"... it's working okay on DirectX/XNA, but in OpenGL, I get graphical glitches.

     Here's a screenshot: http://i.imgur.com/qSm4mgR.jpg

     (Top text/image in each picture is filtering on, bottom text/image in each picture is filtering off.)

     Some explanation (please read before you answer that I need to offset my pixels/texels by .5 in OpenGL):

     1. Everything you see there, including the font, are just images combined onto a texture atlas.
     2. The texture atlas DOES have a white, zero-alpha background -- I have no idea where the darker color is blending in from.
     3. There IS space around all images on the texture atlas -- they are not grabbing pixels from another, adjacent image.
     4. I have tried many versions of "add .5 to pixels, add .5 to texels" to try to get the OpenGL version working. I can make it look worse, but I can't make it look better.

     My default render settings are these:

         glEnable(GL_BLEND);
         glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
         glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
         glDisable(GL_LIGHTING);
         glShadeModel(GL_SMOOTH);
         glDisable(GL_DEPTH_TEST);
         glDisable(GL_CULL_FACE);
         glEnableClientState(GL_VERTEX_ARRAY);
         glEnableClientState(GL_COLOR_ARRAY);
         glEnableClientState(GL_TEXTURE_COORD_ARRAY);
         glAlphaFunc(GL_GREATER, .01f);

     I am suspecting the GL_ONE_MINUS_SRC_ALPHA, because I read that that's supposed to be for premultiplied alpha. I do not have premultiplied alpha in the images -- but what's the fix? I tried every blend mode that didn't sound crazy, and I get either white boxes for everything, or the images you see here.

     Please help!

     Thanks!
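     One way to narrow this down (a debugging sketch, not from the thread) is to confirm whether the dark rim is baked into the uploaded texture rather than introduced by the blend state: read the texture back and inspect the RGB of zero-alpha texels next to the glyphs. This assumes desktop OpenGL (glGetTexImage does not exist in OpenGL ES) and a currently bound texture of known, hypothetical size:

         #include <OpenGL/gl.h>   // macOS; <GL/gl.h> elsewhere
         #include <vector>
         #include <cstdio>

         // Count transparent texels whose RGB is not white in the bound 2D texture.
         void CountDarkTransparentTexels(int theWidth, int theHeight)
         {
             std::vector<unsigned char> aPixels(theWidth * theHeight * 4);
             glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, aPixels.data());

             int aCount = 0;
             for (int i = 0; i < theWidth * theHeight; i++)
             {
                 const unsigned char* p = &aPixels[i * 4];
                 if (p[3] == 0 && (p[0] != 255 || p[1] != 255 || p[2] != 255))
                     aCount++;
             }
             printf("Transparent texels with non-white RGB: %d\n", aCount);
         }

     If the count is nonzero, the problem is in the load/upload path (as it later turned out to be -- see the premultiplication note at the top of this list) rather than in the render state.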
  5. Hi all,

     I have some code that has to be implemented in both DirectX and OpenGL ES 1.0 (meaning no shaders).

     What I'm doing is setting the texture matrix. In DirectX via:

         gDevice->SetTransform(D3DTS_TEXTURE0, ((D3DMATRIX*)theMatrixPtr));

     And in OpenGL via:

         glMatrixMode(GL_TEXTURE);
         glLoadIdentity();
         glLoadMatrixf((float*)theMatrixPtr);

     I'm using this code right now only to TRANSFORM the texture matrix -- basically translating small fractions to make the texture scroll.

     In DirectX I found that I have to put the translation numbers in a 3x3 matrix rather than a 4x4 matrix, i.e. the x/y transform coordinates have to go in matrix m20, m21 rather than what I use for all other transforms (m30, m31).

     Is this also true in OpenGL? Or will OpenGL accept the texture coordinate transforms in m30, m31?

     Thanks!
     --John
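     For reference, a minimal sketch of the OpenGL side, assuming the usual fixed-function conventions: OpenGL's texture matrix is a full 4x4 applied to (s, t, r, q) with r=0 and q=1 by default, so the translation goes in the same slots as any other translation (the m30/m31 position of the flat array, column-major elements 12 and 13); the 3x3/m20 layout is a DirectX texture-transform-flag convention.

         #include <OpenGL/gl.h>   // macOS desktop GL; the GL ES 1.x header differs by platform

         // Scroll the current texture by (theScrollU, theScrollV) via the
         // fixed-function texture matrix.
         void SetTextureScroll(float theScrollU, float theScrollV)
         {
             const float aMatrix[16] =
             {
                 1, 0, 0, 0,
                 0, 1, 0, 0,
                 0, 0, 1, 0,
                 theScrollU, theScrollV, 0, 1,
             };
             glMatrixMode(GL_TEXTURE);
             glLoadMatrixf(aMatrix);
             glMatrixMode(GL_MODELVIEW);   // most drawing code expects modelview to be current
         }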
  6. OpenGL Same model, different UV's

    Fan-freaking-tastic. Thanks a million, Kalle!
  7. OpenGL Same model, different UV's

    In that case, the number of texture swaps might become prohibitive as well -- even if I write up a batching system. The low-end machine we're targeting is the iPad 2.

    This is probably the solution I'm going to go with, but I did want to exhaust my efforts first. It does seem like a tremendous oversight not to have included some method of transforming UVs in the pipeline.
  8. Hi all,

     I realize this might be an impossible task, but then again, it might be a possible one!

     First, I am locked to OpenGL ES 1.0 for technical reasons at my company. I realize doing it with a shader would be simple, but unless I want to announce that I'm going to be rewriting the whole core graphical section of the company framework (for which I will get fired), I have to do it some other way.

     So here's the issue:

     We have a model, all UV'd.

     We have some skins of the model baked onto a texture atlas.

     At load time, the model gets its UVs adjusted to fit the texture on the atlas so it draws well. Like, we literally have a function that takes UVs in the range of 0-1 and adjusts them to the texture's position on the atlas.

     Here's my problem:

     I want a way to, on the fly, move those UVs over to another place on the atlas. Because the skins that are baked onto the atlas are all the same size, it would be sufficient to simply transform the UVs -- they don't need to be resized or recomputed.

     But, because UVs are baked to the XYZ position (a feature designed by a madman), I would literally be stuck either making a whole new model with the new UV coordinates, or changing the UVs on the fly.

     Because this routine needs to be used to populate a screen that could have a lot of versions of the same object with different skins, both of those options are prohibitive -- I'd need about a hundred models at worst, or I'd need to manually change the UVs a hundred times.

     Since the target is mobile, no good!

     So: does there exist some sort of call you can make to translate the texture coordinates before drawing a primitive? Sort of the way you'd set the world matrix to move your model, except for texture coordinates? Answer accepted for either DirectX or OpenGL.

     Thanks!
     John
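     For reference, the fixed-function texture matrix survives in OpenGL ES 1.0, so one hedged sketch (hypothetical offsets, not necessarily the approach settled on in this thread) is to translate the baked UVs to another skin's slot per draw call without touching the vertex data:

         #include <OpenGLES/ES1/gl.h>   // iOS; header name varies by platform

         // Draw a model with its UVs shifted by (theOffsetU, theOffsetV), the
         // distance in normalized texture coordinates between two same-sized
         // skins on the atlas.
         void DrawWithSkinOffset(float theOffsetU, float theOffsetV)
         {
             glMatrixMode(GL_TEXTURE);
             glPushMatrix();
             glTranslatef(theOffsetU, theOffsetV, 0.0f);
             glMatrixMode(GL_MODELVIEW);

             // ...issue the model's draw calls here (glDrawElements, etc.)...

             glMatrixMode(GL_TEXTURE);
             glPopMatrix();
             glMatrixMode(GL_MODELVIEW);
         }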
  9. Hi all,

     I've found a lot of algos online that tell how to cast a ray directly into the screen from the mouse position. My problem is, I am dealing with code that has a legacy camera setup, and the setup is a little bit unique. As a result, I'm able to get the DIRECTION of the picking ray accurately, but I'm not able to get a good origin point.

     Here is the code I'm using to generate the picking ray (mPMatrix and mVMatrix are stored copies of the projection and view matrix; the world matrix is identity, I haven't even gotten that far yet):

         void Game::MouseToRay(float theX, float theY)
         {
             theY = gG.HeightF() - theY;

             Matrix aProjection = mPMatrix;

             Vector aV;
             aV.mX = (((2.0f*theX)/mDrawViewport.mWidth)-1) / aProjection.mData._11;
             aV.mY = -(((2.0f*theY)/mDrawViewport.mHeight)-1) / aProjection.mData._22;
             aV.mZ = 1.0f;

             Matrix aView = mVMatrix;
             aView.Invert();

             Vector aRayOrigin;
             Vector aRayDir;
             aRayDir.mX = aV.mX*aView.mData._11 + aV.mY*aView.mData._21 + aV.mZ*aView.mData._31;
             aRayDir.mY = aV.mX*aView.mData._12 + aV.mY*aView.mData._22 + aV.mZ*aView.mData._32;
             aRayDir.mZ = aV.mX*aView.mData._13 + aV.mY*aView.mData._23 + aV.mZ*aView.mData._33;

             aRayOrigin.mX = aView.mData._41;
             aRayOrigin.mY = aView.mData._42;
             aRayOrigin.mZ = aView.mData._43;

             mProjected.mPos[0] = aRayOrigin;
             mProjected.mPos[1] = aRayOrigin + (aRayDir*100);
         }

     This code produces a 100% accurate direction, but it produces an origin that gets less and less accurate the further the mouse is from pointing right at 0,0,0. Aspect ratio also seems to be an issue, in that the inaccuracy seems to be different along the y axis. I am not sure what is missing.

     Now, as I said, the camera code I'm using is unorthodox, but I don't have the option to not use it. I already had a lot of problems with the camera code and setting fog values (I had to manually add a tweak), so I expect the secret is in there.

     Here is what the camera setup code for the scene looks like:

         void Set3DCamera(float theCameraX, float theCameraY, float theCameraZ,
                          float theLookatX, float theLookatY, float theLookatZ,
                          float theUpVectorX, float theUpVectorY, float theUpVectorZ,
                          float theFOV)
         {
             float aWorldMatrix[4][4];
             float aViewMatrix[4][4];

             IDENTITYMATRIX(aWorldMatrix);
             SCALEMATRIX(aWorldMatrix, 1, 1, -1);

             theCameraZ = -theCameraZ;
             theLookatZ = -theLookatZ;
             theUpVectorZ = -theUpVectorZ;

             // We translate the world the opposite direction of the camera (relatively speaking, of course)
             TRANSLATEMATRIX(aWorldMatrix, -theCameraX, -theCameraY, -theCameraZ);

             //
             // Look-at matrix vectors
             //
             float aLookatVectorX = theLookatX - theCameraX;
             float aLookatVectorY = theLookatY - theCameraY;
             float aLookatVectorZ = theLookatZ - theCameraZ;

             // Side vector (UP cross LOOKAT)
             float aSideVectorX = theUpVectorY * aLookatVectorZ - theUpVectorZ * aLookatVectorY;
             float aSideVectorY = theUpVectorZ * aLookatVectorX - theUpVectorX * aLookatVectorZ;
             float aSideVectorZ = theUpVectorX * aLookatVectorY - theUpVectorY * aLookatVectorX;

             // Correct the UP vector (LOOKAT cross SIDE)
             theUpVectorX = aLookatVectorY * aSideVectorZ - aLookatVectorZ * aSideVectorY;
             theUpVectorY = aLookatVectorZ * aSideVectorX - aLookatVectorX * aSideVectorZ;
             theUpVectorZ = aLookatVectorX * aSideVectorY - aLookatVectorY * aSideVectorX;

             // Normalize the lookat vector
             float len = (float)sqrt(aLookatVectorX*aLookatVectorX + aLookatVectorY*aLookatVectorY + aLookatVectorZ*aLookatVectorZ);
             aLookatVectorX /= len; aLookatVectorY /= len; aLookatVectorZ /= len;

             // Normalize side vector
             len = (float)sqrt(aSideVectorX*aSideVectorX + aSideVectorY*aSideVectorY + aSideVectorZ*aSideVectorZ);
             aSideVectorX /= len; aSideVectorY /= len; aSideVectorZ /= len;

             // Normalize the up vector
             len = (float)sqrt(theUpVectorX*theUpVectorX + theUpVectorY*theUpVectorY + theUpVectorZ*theUpVectorZ);
             theUpVectorX /= len; theUpVectorY /= len; theUpVectorZ /= len;

             //
             // The view matrix (look-at)
             //
             aViewMatrix[0][0] = -aSideVectorX;  aViewMatrix[1][0] = -aSideVectorY;  aViewMatrix[2][0] = -aSideVectorZ;  aViewMatrix[3][0] = 0;
             aViewMatrix[0][1] = -theUpVectorX;  aViewMatrix[1][1] = -theUpVectorY;  aViewMatrix[2][1] = -theUpVectorZ;  aViewMatrix[3][1] = 0;
             aViewMatrix[0][2] = aLookatVectorX; aViewMatrix[1][2] = aLookatVectorY; aViewMatrix[2][2] = aLookatVectorZ; aViewMatrix[3][2] = 0;
             aViewMatrix[0][3] = 0;              aViewMatrix[1][3] = 0;              aViewMatrix[2][3] = 0;              aViewMatrix[3][3] = 1;

             // Combine the world and view (GL doesn't support View matrices)
             MULTIPLYMATRIX(aWorldMatrix, aViewMatrix);
             SetMatrix(1, &aWorldMatrix);

             //
             // Perspective projection matrix (as per Blinn)
             //
             float aAspect = (float)gPageWidth/(float)gPageHeight;
             float aNear = gZNear; // This was 1.0... is it hurting anything? I made it 0.5f so things could be closer to the camera without
             float aFar = GetZDepth();

             float aWidth = COS(theFOV / 2.0f);
             float aHeight = COS(theFOV / 2.0f);

             if (aAspect > 1.0) aWidth /= aAspect;
             else aHeight *= aAspect;
             //aHeight/=.75f;

             float s = SIN(theFOV / 2.0f);
             float d = 1.0f - aNear/aFar;

             float aMatrix[4][4];
             aMatrix[0][0] = aWidth; aMatrix[1][0] = 0;       aMatrix[2][0] = 0;   aMatrix[3][0] = 0;
             aMatrix[0][1] = 0;      aMatrix[1][1] = aHeight; aMatrix[2][1] = 0;   aMatrix[3][1] = 0;
             aMatrix[0][2] = 0;      aMatrix[1][2] = 0;       aMatrix[2][2] = s/d; aMatrix[3][2] = -(s * aNear / d);
             aMatrix[0][3] = 0;      aMatrix[1][3] = 0;       aMatrix[2][3] = s;   aMatrix[3][3] = 0;

             gFogTweak = s;

             SetMatrix(2, &aMatrix);
         }

     See that line gFogTweak near the end? I had to add that line to make fog work correctly -- it was the exact same situation I'm dealing with here, where there were lots of answers on the web, but none of them worked correctly. I believe that something about how that projection matrix is set up is what's throwing off my picking ray.

     I'm only middling-strong on math and I've hit a wall. At this point all I am able to do is try random things in the hope that one of them will work, but none of them do. If anyone can see the problem, or what I need to add into that MouseToRay function to make this all line up, you will have my infinite gratitude, and a "special thanks" line in the credits of the game I'm working on!
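     For reference, a hedged sketch of an alternative formulation that sidesteps reading the origin out of the inverted view matrix: unproject the mouse position on the near plane and on the far plane through the inverse of the combined (world * view * projection) matrix, and use the segment between the two results as the pick ray. The engine's Matrix::Invert (used in MouseToRay above) can supply the inverse; the helper below only does the row-vector multiply and the divide by w, using the same row-major _41.._43 layout as the code above, and assuming the projection maps z into [0, 1] as the matrix above does.

         // theInv is the inverse of world*view*projection, row-major, row-vector convention.
         struct Float3 { float x, y, z; };

         Float3 Unproject(const float theInv[4][4], float theNdcX, float theNdcY, float theNdcZ)
         {
             float x = theNdcX*theInv[0][0] + theNdcY*theInv[1][0] + theNdcZ*theInv[2][0] + theInv[3][0];
             float y = theNdcX*theInv[0][1] + theNdcY*theInv[1][1] + theNdcZ*theInv[2][1] + theInv[3][1];
             float z = theNdcX*theInv[0][2] + theNdcY*theInv[1][2] + theNdcZ*theInv[2][2] + theInv[3][2];
             float w = theNdcX*theInv[0][3] + theNdcY*theInv[1][3] + theNdcZ*theInv[2][3] + theInv[3][3];
             Float3 aResult = { x/w, y/w, z/w };
             return aResult;
         }

         // Usage sketch, with (aNdcX, aNdcY) the mouse mapped to [-1, 1] as in MouseToRay:
         //   Float3 aNear = Unproject(aInverse, aNdcX, aNdcY, 0.0f);   // point on the near plane
         //   Float3 aFar  = Unproject(aInverse, aNdcX, aNdcY, 1.0f);   // point on the far plane
         //   ray origin = aNear, ray direction = normalize(aFar - aNear)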
  10. Hey uglybdavis, that's a good start to what I need!

      My question would be how to involve the second plane.

      For instance, I have plane1, which is the plane of the existing triangle, and the 3 points of the triangle...

      ...Then I have plane2, which is the plane I want the triangle rotated into (remember, I want to retain the shape of the triangle).

      How would you go about converting those points from plane 1 to plane 2? Pseudocode would be most welcome!
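      For reference, a minimal sketch (hypothetical helper types, not from the thread) of one way to carry points from plane 1 into plane 2 while keeping distances intact: describe each plane by an origin and two orthonormal in-plane axes (u, v) -- for a triangle, u can be a normalized edge and v the normalized cross product of the plane normal with u -- take each point's (a, b) coordinates against plane 1's axes, and rebuild the point from plane 2's axes.

          struct Vec3 { float x, y, z; };

          static Vec3  Sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
          static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
          static Vec3  MulAdd(Vec3 p, Vec3 d, float s) { Vec3 r = { p.x + d.x*s, p.y + d.y*s, p.z + d.z*s }; return r; }

          // Both (theU1, theV1) and (theU2, theV2) must be orthonormal axes lying
          // in their respective planes; distances are then preserved exactly.
          Vec3 MovePointBetweenPlanes(Vec3 thePoint,
                                      Vec3 theOrigin1, Vec3 theU1, Vec3 theV1,   // plane 1 basis
                                      Vec3 theOrigin2, Vec3 theU2, Vec3 theV2)   // plane 2 basis
          {
              Vec3 aLocal = Sub(thePoint, theOrigin1);
              float a = Dot(aLocal, theU1);   // 2D coordinates within plane 1
              float b = Dot(aLocal, theV1);
              return MulAdd(MulAdd(theOrigin2, theU2, a), theV2, b);   // same coordinates in plane 2
          }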
  11. Hi all,

      For a visual effect, I want to flatten the XYZ coordinates of an arbitrary triangle into a 2D plane, where I can then convert those to UV coordinates while maintaining a specific size/aspect ratio for the triangle.

      So for instance, I have a triangle in an arbitrary plane, but I want to convert it so that it rests in the plane z=0 while maintaining the dimensions of the polygon.

      Is there a quick and easy way to do this? I can think of all kinds of weird methods, like picking one point on the triangle and then using the distance between the points to extrapolate the same triangle with z=0, but if there's a smarter way to do it, I'd like to know!

      Thanks!
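      For reference, a minimal sketch (reusing the Vec3/Sub/Dot helpers from the sketch above) of flattening a triangle into z=0 without distorting it: take vertex 0 as the local origin, the normalized edge to vertex 1 as the local X axis, and the in-plane perpendicular (normal cross X) as the local Y axis; each vertex's flattened (x, y) is its dot product with those axes, and z is implicitly 0.

          #include <cmath>

          void FlattenTriangle(const Vec3 theTri[3], float theOutXY[3][2])
          {
              Vec3 e1 = Sub(theTri[1], theTri[0]);
              Vec3 e2 = Sub(theTri[2], theTri[0]);

              // Plane normal = e1 x e2
              Vec3 n = { e1.y*e2.z - e1.z*e2.y, e1.z*e2.x - e1.x*e2.z, e1.x*e2.y - e1.y*e2.x };

              // Orthonormal in-plane axes
              float e1Len = std::sqrt(Dot(e1, e1));
              Vec3 u = { e1.x/e1Len, e1.y/e1Len, e1.z/e1Len };
              Vec3 v = { n.y*u.z - n.z*u.y, n.z*u.x - n.x*u.z, n.x*u.y - n.y*u.x };
              float vLen = std::sqrt(Dot(v, v));
              v.x /= vLen; v.y /= vLen; v.z /= vLen;

              for (int i = 0; i < 3; i++)
              {
                  Vec3 d = Sub(theTri[i], theTri[0]);
                  theOutXY[i][0] = Dot(d, u);   // flattened x (edge lengths are preserved)
                  theOutXY[i][1] = Dot(d, v);   // flattened y
              }
          }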
  12. Thanks! So it was just something I didn't know. I think this is the first time I ever even tried to initialize one twice (I tend to treat references as pointers that can never be null).

      However, just to be clear -- I WAS trying to overwrite the contents... after I was done with them. I was just re-using a variable. I expected the contents of the Quad& reference to be filled in the same way that a Quad*'s contents would have been filled -- procedurally as the program went along.
  13. Hi all,

      I ran into a funny problem today... using Visual Studio 2010, I did this:

          Quad& aQuad = aSprite1.GetTextureQuad();
          printf("Quad Center: %f,%f", aQuad.Center().mX, aQuad.Center().mY);

          //
          // Re-use the reference...
          //
          aQuad = aSprite2.GetTextureQuad();
          printf("Quad Center (2): %f,%f", aQuad.Center().mX, aQuad.Center().mY);

      ...and I get the same value for both printfs -- the value that came from aSprite2. It's like it completely discarded the referenced data from aSprite1.

      If I change it to this:

          Quad& aQuad = aSprite1.GetTextureQuad();
          printf("Quad Center: %f,%f", aQuad.Center().mX, aQuad.Center().mY);

          //
          // Use a whole new reference!
          //
          Quad& aQuad2 = aSprite2.GetTextureQuad();
          printf("Quad Center (2): %f,%f", aQuad2.Center().mX, aQuad2.Center().mY);

      ...then it works as expected, printing out two different center points for two different quads.

      So my question is, am I running into a compiler bug here, or is this a known shortcoming of references that I've just never heard of until this point?

      Thanks!
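      For reference, a standalone sketch (made-up values) of the underlying rule: a reference cannot be re-seated after initialization, so assigning through it copies into the object it already refers to; a pointer is the usual choice when re-seating is needed.

          #include <cstdio>

          int main()
          {
              int aFirst = 10;
              int aSecond = 20;

              int& aRef = aFirst;   // aRef is bound to aFirst for its whole lifetime
              aRef = aSecond;       // does NOT rebind: copies 20 into aFirst
              printf("aFirst=%d (assignment wrote through the reference)\n", aFirst);   // 20

              int* aPtr = &aFirst;  // a pointer CAN be re-seated
              aPtr = &aSecond;
              *aPtr = 99;           // modifies aSecond, leaves aFirst alone
              printf("aFirst=%d aSecond=%d\n", aFirst, aSecond);                        // 20 99
              return 0;
          }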
  14. Hi all,

      I have a projection matrix that I'm building manually...

      Can anyone tell me how I can turn this into an orthographic projection matrix? I keep trying examples online, but always get non-orthographic, sometimes hilarious, results.

      (Note: for reasons of a tyrannical co-programmer, I can't simply use the Direct3DX functions to do this.)

      Thanks!

          float aAspect = (float)gPageWidth/(float)gPageHeight;
          float aNear = gZNear;
          float aFar = GetZDepth();

          float aWidth = COS(theFOV / 2.0f);
          float aHeight = COS(theFOV / 2.0f);

          if (aAspect > 1.0) aWidth /= aAspect;
          else aHeight *= aAspect;

          float s = SIN(theFOV / 2.0f);
          float d = 1.0f - aNear/aFar;

          float aMatrix[4][4];
          aMatrix[0][0] = aWidth; aMatrix[1][0] = 0;       aMatrix[2][0] = 0;   aMatrix[3][0] = 0;
          aMatrix[0][1] = 0;      aMatrix[1][1] = aHeight; aMatrix[2][1] = 0;   aMatrix[3][1] = 0;
          aMatrix[0][2] = 0;      aMatrix[1][2] = 0;       aMatrix[2][2] = s/d; aMatrix[3][2] = -(s * aNear / d);
          aMatrix[0][3] = 0;      aMatrix[1][3] = 0;       aMatrix[2][3] = s;   aMatrix[3][3] = 0;
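      For reference, a hedged sketch of a manually built orthographic projection in the same row-vector layout as the perspective matrix above, mapping z into [0, 1] (D3D-style) between aNear and aFar. theWidth and theHeight are hypothetical parameters -- the visible extents in world units -- since the FOV-based aWidth/aHeight terms above have no meaning for an orthographic view. The key differences from the perspective version: [2][3] becomes 0 and [3][3] becomes 1, so w stays constant instead of carrying the view-space depth.

          void BuildOrthoMatrix(float aMatrix[4][4], float theWidth, float theHeight,
                                float aNear, float aFar)
          {
              aMatrix[0][0] = 2.0f/theWidth; aMatrix[1][0] = 0;              aMatrix[2][0] = 0;                 aMatrix[3][0] = 0;
              aMatrix[0][1] = 0;             aMatrix[1][1] = 2.0f/theHeight; aMatrix[2][1] = 0;                 aMatrix[3][1] = 0;
              aMatrix[0][2] = 0;             aMatrix[1][2] = 0;              aMatrix[2][2] = 1.0f/(aFar-aNear); aMatrix[3][2] = -aNear/(aFar-aNear);
              aMatrix[0][3] = 0;             aMatrix[1][3] = 0;              aMatrix[2][3] = 0;                 aMatrix[3][3] = 1;
          }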
  15. OpenGL Uniform fog across scene

    Thanks for the responses, guys. I was combining view into projection because on the DirectX end, I'd set my camera in the view matrix, then move meshes around using the world matrix. I was trying to find a way to do it in OpenGL that wouldn't force me to basically set the camera every time I place a mesh.

    So what's the right way to do this...? Take my "world" matrix, multiply it by "view," and then set that as ModelView? I'll probably keep my camera matrix separate, and then just multiply it into the "actual" modelview matrix every time I move a model. Does that sound like a viable solution?
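    For reference, a minimal sketch of that arrangement, assuming the view and world matrices are stored as 16-float column-major arrays and a GL context is current: keep one camera (view) matrix around, and combine it with each mesh's world matrix right before drawing, so the camera itself is only ever set from the stored copy.

        #include <OpenGL/gl.h>   // macOS; <GL/gl.h> elsewhere

        // ModelView = View * World in OpenGL's column-vector convention:
        // vertices are transformed by the world matrix first, then by the camera.
        void SetModelView(const float theViewMatrix[16], const float theWorldMatrix[16])
        {
            glMatrixMode(GL_MODELVIEW);
            glLoadMatrixf(theViewMatrix);    // the camera, kept separate
            glMultMatrixf(theWorldMatrix);   // this mesh's placement in the world
            // ...draw the mesh here...
        }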