GamerSg

Members
  • Content count

    984
  • Joined

  • Last visited

Community Reputation

378 Neutral

About GamerSg

  • Rank
    Advanced Member
  1. Vertex Array speed

    You could use a couple of methods:
    i) Multi-texturing, which allows you to bind more than one texture at once (usually used to multi-texture the same polygon).
    ii) A texture atlas (better suited here, but it's a newer technique and might not be as well supported on older HW).
    I've not used either method before, so I can't go into detail, but the rough idea of the atlas method is sketched below.
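    A minimal sketch of the atlas idea, assuming a square atlas split into a regular grid of equally sized tiles (the grid size and tile index are made up for illustration):

        // Remap a per-vertex UV in [0,1] into the sub-rectangle of one atlas tile.
        // Assumes a square atlas split into tilesPerRow x tilesPerRow equal tiles.
        struct UV { float u, v; };

        UV atlasUV(UV in, int tileIndex, int tilesPerRow)
        {
            float tileSize = 1.0f / tilesPerRow;                   // size of one tile in UV space
            float offsetU  = (tileIndex % tilesPerRow) * tileSize; // column of the tile
            float offsetV  = (tileIndex / tilesPerRow) * tileSize; // row of the tile
            UV out;
            out.u = offsetU + in.u * tileSize; // scale into the tile, then offset
            out.v = offsetV + in.v * tileSize;
            return out;
        }

    The payoff is that many polygons sharing one atlas texture can then go into a single vertex array draw call instead of one call per texture bind.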
  2. DirectX or OpenGL

    The iPhone explicitly does not allow Java on it. The only "legal" languages on iPhone are C/C++/Objective-C, of which >70% of the iOS API is exposed in Objective-C with the rest in C. Though I would not call the iPhone an easy platform to program for, it is very, very well documented with plenty of examples, so it is quite doable once you get past the Objective-C syntax, at least enough to use the API. And for graphics it uses OpenGL ES, which is almost an exact copy of OpenGL 3 with some features removed. I've found that 90% of my OpenGL code can be copy-pasted to work on iPhone with some very minor changes (remove the EXT postfix, etc.), as in the example below.
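    For illustration, the kind of mechanical rename I mean, assuming desktop code that used the EXT framebuffer extension:

        GLuint fbo = 0;

        // Desktop GL via the EXT framebuffer extension:
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

        // The same calls on OpenGL ES 2.0 -- FBOs are core there, so the EXT postfix goes:
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);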
  3. But I want to use the core profile to make sure I'm not using any deprecated features. Anyway, after wasting an entire day on it, I found the problem, and it's not SFML that is causing it. The problem is a bug in GLEW for core profiles: GLEW uses glGetString(GL_EXTENSIONS) to retrieve extensions, which is deprecated since 3.0, so GLEW fails to get any extensions in a core GL3 context. As to why it worked with 3.1 and 3.0: the GL specs state that wglCreateContextAttribsARB will return a compatibility profile by default for 3.0 and 3.1. Since SFML does not specify a core/compatibility profile, it always got a compatibility profile pre-3.2, which allowed glGetString(GL_EXTENSIONS) to work in GLEW. The GLEW bug tracker has this issue, but no one is willing to fix it in the codebase yet. I've made a simple fix for it in glew.c, assuming the minimal context will be 3.0 or higher, so that I can bypass the old GLEW extension code entirely. Hopefully this helps others with this problem.
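    For anyone else patching this, a minimal sketch of the core-profile way to enumerate extensions via glGetStringi (available since GL 3.0), which is what the deprecated glGetString(GL_EXTENSIONS) path has to be replaced with (just the idea, not my exact glew.c change):

        // Core-profile extension enumeration: query the count, then each name.
        GLint numExtensions = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
        for (GLint i = 0; i < numExtensions; ++i)
        {
            const char* name = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            // e.g. compare name against "GL_ARB_framebuffer_object" here
        }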
  4. I'm pretty baffled by what is going on: when I request a 3.2 context, glGenFramebuffers is a null pointer. If I request a 3.1 or 3.0 context, there is no problem and everything runs fine. As far as I'm aware, glGenFramebuffers was not deprecated in 3.2. I'm using SFML v2 from the SVN snapshot and GLEW to load my extensions.

        sf::ContextSettings Settings;
        Settings.MajorVersion = 3;
        Settings.MinorVersion = 2; // OpenGL 3.2
        canvas.Create(sf::VideoMode(WIDTH, HEIGHT, 32), "High Poly Viewer", sf::Style::Close, Settings);

        GLenum err = glewInit(); // Initialise GLEW to handle extensions
        if (GLEW_OK != err)
        {
            std::cout << "Error, could not init GLEW\n";
        }

        glGenFramebuffers(1, &FBOid); // Fails here, glGenFramebuffers is a null pointer

    Just for more info, this is the SFML source code which creates the actual context:

        // Create the OpenGL context -- first try an OpenGL 3.0 context if it is requested
        while (!myContext && (mySettings.MajorVersion >= 3))
        {
            PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
                reinterpret_cast<PFNWGLCREATECONTEXTATTRIBSARBPROC>(wglGetProcAddress("wglCreateContextAttribsARB"));
            if (wglCreateContextAttribsARB)
            {
                int attributes[] =
                {
                    WGL_CONTEXT_MAJOR_VERSION_ARB, mySettings.MajorVersion,
                    WGL_CONTEXT_MINOR_VERSION_ARB, mySettings.MinorVersion,
                    0, 0
                };
                myContext = wglCreateContextAttribsARB(myDeviceContext, sharedContext, attributes);
            }

            // If we couldn't create an OpenGL 3 context, adjust the settings
            if (!myContext)
            {
                if (mySettings.MinorVersion > 0)
                {
                    // If the minor version is not 0, we decrease it and try again
                    mySettings.MinorVersion--;
                }
                else
                {
                    // If the minor version is 0, we decrease the major version and stop with 3.x contexts
                    mySettings.MajorVersion = 2;
                }
            }
        }

    In all cases, GLEW returns OK during init. Also, in the 3.2 case, GL_VERSION returns 3.20. Hardware is an nvidia G210 using the latest drivers.
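    (Follow-up: as it turned out -- see the post above -- the attribute list never sets WGL_CONTEXT_PROFILE_MASK_ARB, and for 3.2 the default is the core profile, which is what breaks GLEW's extension loading. A minimal sketch of explicitly requesting a compatibility profile instead, assuming WGL_ARB_create_context_profile is supported:)

        int attributes[] =
        {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 2,
            WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
            0, 0
        };
        // Same wglCreateContextAttribsARB call as above, now with an explicit profile:
        myContext = wglCreateContextAttribsARB(myDeviceContext, sharedContext, attributes);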
  5. Well, I just chanced upon this video and I think it mirrors the reaction of the majority of OpenGL users. A really good laugh. http://www.youtube.com/watch?v=sddv3d-w5p4
  6. I'm trying to do a first-person camera, and math isn't exactly my strongest point. I realise I have to apply 2 rotations to the camera node, which stores the transformation as a single 4x4 matrix that I pass to OpenGL directly. Currently I do the following:

        myCam->rotateAroundWorldAxis( Syp::AngleAxis(0,1,0,-diff) ); // Rotate around Y-axis
        myCam->rotateAroundLocalAxis( Syp::AngleAxis(1,0,0,diffY) ); // Rotate around X-axis

    diff and diffY refer to the relative changes in mouse position from the previous frame. To prevent complications, I'm positioning the camera at the origin, but it still doesn't look right. If I comment out either of the lines above and rotate around a single axis only, it works correctly. I have a feeling my rotateAroundWorldAxis is probably wrong; here is the code:

        void Node::rotateAroundWorldAxis(AngleAxis a)
        {
            float rot[16];
            float ans[16];
            float inv[16];
            float axis[3];
            float axis2[3];

            axis[0] = a.x;
            axis[1] = a.y;
            axis[2] = a.z;

            Math::InverseMatrix(mat, inv);              // find inverse of it
            Math::MultVectorByMatrix(inv, axis, axis2); // bring the world axis into local space
            a.x = axis2[0];
            a.y = axis2[1];
            a.z = axis2[2];

            Math::getMatrix(a, rot);      // get our new rotation matrix from the AngleAxis
            Math::multMat(rot, mat, ans); // mat is the node's transformation matrix
            setMatrix(ans);
        }
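    A common way to sidestep the problem entirely -- a minimal sketch, not the Syp API above: instead of accumulating rotations into the matrix, store yaw and pitch angles and rebuild the camera matrix from scratch each frame, so the two rotations are always composed in a fixed order:

        #include <cmath>

        // Column-major 4x4 helpers, written out just for this sketch.
        static void rotationY(float a, float m[16]) // rotation about the Y axis
        {
            float c = std::cos(a), s = std::sin(a);
            float r[16] = { c,0,-s,0,  0,1,0,0,  s,0,c,0,  0,0,0,1 };
            for (int i = 0; i < 16; ++i) m[i] = r[i];
        }

        static void rotationX(float a, float m[16]) // rotation about the X axis
        {
            float c = std::cos(a), s = std::sin(a);
            float r[16] = { 1,0,0,0,  0,c,s,0,  0,-s,c,0,  0,0,0,1 };
            for (int i = 0; i < 16; ++i) m[i] = r[i];
        }

        static void multMat(const float* a, const float* b, float* out) // out = a*b
        {
            for (int col = 0; col < 4; ++col)
                for (int row = 0; row < 4; ++row)
                {
                    float sum = 0.0f;
                    for (int k = 0; k < 4; ++k)
                        sum += a[k*4 + row] * b[col*4 + k];
                    out[col*4 + row] = sum;
                }
        }

        static float yaw = 0.0f, pitch = 0.0f; // accumulated mouse angles, in radians

        // Rebuild the camera matrix every frame as cam = Ry(yaw) * Rx(pitch).
        void updateCamera(float diff, float diffY, float cam[16])
        {
            yaw   += -diff; // mouse X turns the camera
            pitch +=  diffY; // mouse Y tilts it
            float ry[16], rx[16];
            rotationY(yaw, ry);
            rotationX(pitch, rx);
            multMat(ry, rx, cam);
        }

    With the fixed multiplication order, yaw always ends up around the world Y axis and pitch around the camera's local X axis, which is exactly what the two rotate calls above are trying to achieve, and the rotations can never drift into interacting in the wrong order.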
  7. OpenGL OpenGL 3.0 canceled?

    Quote: Original post by stimarco
    I think it's clear that pragmatic issues forced the ARB's hand here. I'm guessing that simply dumping 2.x and starting afresh was unfeasible, as it would require companies like ATI, NVidia and Intel to maintain both 2.x and 3.x drivers. As OpenGL is a niche API, this makes little sense: ATI, NVidia and Intel make their money from consumer graphics products, which means DirectX gets the bulk of the attention. Making OpenGL even harder to support over the near term was never going to go down well with the IHVs.

    Just out of curiosity, when an IHV releases new hardware, do they not have to implement drivers for DX 7/8/9/10? Otherwise older games coded in DX7 will not work on newer hardware.
  8. [MDX] Performance bottleneck

    Well, I'm not a DX guy, but 200k should not be a problem for your card at all, especially since they are static meshes, so I would probably look into your drawing code. It's likely that you are not drawing in the most optimal way (e.g. issuing many small draw calls instead of a few large batches).
  9. Instead of loading mipmaps from disk, you might want to try hardware mipmap generation. I do not know exactly how much faster it will be, but any time saved on disk loading is likely to outweigh the cost of mipmap generation on the hardware, which is very fast where supported.
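    A minimal sketch of the two usual ways to do it (tex, w, h and pixels are placeholders), assuming the relevant support is present -- GL_GENERATE_MIPMAP from GL 1.4, or glGenerateMipmap from GL 3.0 / the framebuffer-object extension:

        // Option 1: GL 1.4 style -- the driver regenerates the mipmap chain
        // automatically whenever the base level is uploaded.
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        // Option 2: GL 3.0 style -- upload level 0, then generate the chain explicitly.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glGenerateMipmap(GL_TEXTURE_2D);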
  10. distant object renders over near object

    Are you by any chance clearing the depth buffer between drawing the 2 objects? I can't think of any other reason besides your depth testing not being enabled. The usual setup is sketched below.
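    A minimal sketch of the usual depth-buffer setup -- enable the test once, clear depth once per frame, never between objects:

        // Once, at startup (the context must have been created with a depth buffer):
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);

        // Once per frame, before drawing anything:
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // ...then draw the near and distant objects in any order...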
  11. OpenGL OpenGL 2.0 games

    Quote: Original post by MARS_999
    idTech 5 may use GL3.0, John said; he hasn't made up his mind. I suppose he is waiting like the rest of us sorry saps. I am actually happy with GL2.1 with the Nvidia extensions that make up DX10 features in GL. But not everyone has an 8800, and if they don't, get one! ;)

    I thought idTech5 development was done, and as it stands there is no sign of GL3 yet. I've just stopped waiting and will probably not bother with GL3.0 until both ATI and nVidia have drivers performing on par with current ones. That is, if it is ever released.
  12. Same here, 'y' gives me a longer bar. If it means red is the FBO, then there is quite a large difference in performance on my machine. I'm on an ATI X1600.

    phantom: While I understand that the fps figures he gave don't mean much, the point I think he is trying to make is this: if the state changes of an FBO take up more time than a framebuffer copy to texture, why even use it in the first place, regardless of how complicated the scene is? The FBO method is just going to take more time than a framebuffer copy anyway, regardless of scene complexity.

    [Edited by - GamerSg on April 17, 2008 3:15:01 PM]
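    For context, a minimal sketch of the two render-to-texture paths being compared (drawScene, tex, fbo, w, h are placeholders):

        // Path A: copy-to-texture -- render to the normal framebuffer,
        // then copy the result into a texture.
        drawScene();
        glBindTexture(GL_TEXTURE_2D, tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);

        // Path B: FBO -- bind the FBO (the state change under discussion),
        // render straight into the attached texture, then bind back.
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        drawScene();
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);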
  13. OpenGL GL_EXT_convolution

    Quote: Original post by Vilem Otte
    GLU - you can try to use it, but I'd recommend finding something newer (GLU is a pretty ancient library). GLU isn't available directly from the OpenGL website, because I think it's not made by them (but I'm not sure, so if somebody who knows more about it comes along - please let me know ;-)).

    Ancient doesn't mean bad. GLU functions do what they need to do and they do it well. Most of what they do is more or less a solved problem, and there is no need to update the implementation. Stay away from glaux though; I've heard it is buggy and deprecated.
  14. OpenGL Animation in OpenGL

    There are a few ways, but ultimately you are drawing a new model every frame.

    1) Per-frame models. Your model format has a different model for every frame. You can do this for low-poly models/characters with few/short animations, suitable for RTS units. This is the fastest method.

    2) Keyframe-based. You store models for particular keyframes in a start and end pose. At runtime, you interpolate vertices between the 2 poses depending on time. This uses much less memory and is pretty fast (a sketch of the interpolation is below).

    3) Skeletal animation. Uses a bone system with 1 copy of the mesh. Bone animation is stored in keyframes; at runtime, bone positions are calculated every frame and the mesh is deformed according to the bone transformations. Smooth skinning can have more than 1 bone affecting a vertex. Lowest memory consumption but highest CPU usage. With vertex shaders, the deformation can be offloaded to the GPU. Implementation is more difficult than the previous 2 methods.
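    A minimal sketch of the keyframe interpolation in method 2 (the struct and names are made up for illustration):

        struct Vec3 { float x, y, z; };

        // Linearly blend every vertex between two stored keyframe poses.
        // t is 0 at the start pose and 1 at the end pose.
        void interpolatePose(const Vec3* startPose, const Vec3* endPose,
                             Vec3* out, int vertexCount, float t)
        {
            for (int i = 0; i < vertexCount; ++i)
            {
                out[i].x = startPose[i].x + (endPose[i].x - startPose[i].x) * t;
                out[i].y = startPose[i].y + (endPose[i].y - startPose[i].y) * t;
                out[i].z = startPose[i].z + (endPose[i].z - startPose[i].z) * t;
            }
        }

    The blended vertices in out are what you actually upload/draw each frame.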
  15. Quote: Original post by ViperG
      Or you could just compile the exe so you don't need the manifest or the redist.

      I don't think that is possible in VS2005 onwards. But if it is, I'll be glad to know how.