
About Metalcore

  1. A Problem in using GLSL

    "I want to know how can I know if an object has texture or not in fragment program." You're asking the wrong question. The correct thinking would be "which fragment program should I use to render either object". The answer is that you need two GLSL programs, one which handles the textured rendering, one which handles the untextured. You know in your scene which is needed for which object, switch between them. Does the untextured object need GLSL at all? If not, just switch it off with glUseProgram(0) for that object. Another solution is to set a white 1x1 texture for the untextured case and add the approproate color modulation instruction in the shader to render the untextured object. That would also affect the textured one, so if you don't want that, you'd need to set the color to white. Starts to get complicated, you see? This goes partly into the do-it-all-shader direction which you control with uniforms to do one or the other, but that's normally the slower solution. Try to make your shaders as small as possible, that prohibits fill rate limited cases.
  2. wglChoosePixelFormat()

    Quote: Original post by MARS_999: "OMG is this going to be fixed with Vista? I thought Vista was going to have GL1.4 at least as the minimum?"

    Nope. If it did, your apps wouldn't be backwards compatible anymore, so why bother.
    I read on OpenGL.org that Radeons sometimes fall back to SW rendering with lines and shaders, mostly when any primitive smoothing is enabled. Here's one link: http://www.opengl.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=11;t=000956 If you don't send some fancy gl_Vertex.w from the app, an even simpler shader would just use gl_Position = ftransform();
  4. offset incoming index value?

    No, OpenGL doesn't have an index offset like DX. When the vertex enters the vertex pipeline, you have already referenced it by the index you provided; it's too late to add a vertex array offset there, and geometry shaders come after that. You could change the array pointers to point to a different start location instead. Not pretty. If you don't use indexing but glDrawArrays, you can do such offsetting with its "first" argument, likely using up more vertex array space (though not in the linear 0,1,2,3,4 case you gave). See the sketch below.
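    A short sketch of both workarounds; the counts, basePointer, and the Vertex struct are hypothetical illustration names:

        /* Option 1: offset via glDrawArrays' "first" parameter.
           Renders 300 vertices (100 triangles) starting at array element 60. */
        glDrawArrays(GL_TRIANGLES, 60, 300);

        /* Option 2 (the "not pretty" one): rebase the array pointer itself
           so that index 0 lands on the desired first vertex. */
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
                        (const char*)basePointer + 60 * sizeof(Vertex));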
    If you have two contexts running on the same or compatible pixelformats, you normally call wglShareLists (or an equivalent function under other OSes) right after creating the contexts to unify the display list IDs (and all other OpenGL objects except occlusion queries), so that you can simply use the display list built in one context in the other. There's no need for a "copy"; it's the same list. If the pixelformats are not compatible (e.g. render_to_window and render_to_bitmap pixelformats often run on different OpenGL implementations), you cannot share any OpenGL context specific data among them via OpenGL. There you would need to create the display list in both contexts individually.
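    A minimal sketch of sharing right after context creation, assuming both DCs (hdc1, hdc2) use the same pixelformat:

        /* Create both contexts, then share objects before any lists are built. */
        HGLRC rc1 = wglCreateContext(hdc1);
        HGLRC rc2 = wglCreateContext(hdc2);
        if (!wglShareLists(rc1, rc2)) {
            /* Incompatible pixelformats/implementations: sharing failed,
               build the display lists in each context individually. */
        }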
  6. OpenGL 'Stripes' in heightmap terrain

    gl.glClearDepth(10000000); // is clamped to 1.0
    Manual: "Values specified by glClearDepth are clamped to the range [0,1]."
    glu.gluPerspective(angleOfScreen, displayRatio, 0.01, zoom);
    What is the value of zoom? Same as the ClearDepth value? You get depth precision problems with such big far values; make the far plane as small as possible (see the sketch below). You should also avoid the double variants of OpenGL entry points. Most, if not all, OpenGL implementations work in float precision internally.

    Use GL_TRIANGLES if you only render triangles. It's the more general primitive and probably the most used one (read: most optimized!). GL_POLYGON handles one polygon per primitive call, but GL_TRIANGLES (mind the plural) is an "independent primitive", which means you can group plenty of triangles into one call. GL_POLYGON also behaves specially when used with flat shading! It uses the color of the _first_ vertex, because the number of vertices you can send is unbounded and it would be really stupid to wait for the last vertex before rendering can start. The other nine OpenGL primitives use the last vertex of each sub-primitive to determine the flat-shaded color.
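    A minimal sketch of the depth-range advice in C (the same calls the JOGL code above maps to); the field-of-view and near/far values are example numbers, not taken from the original post:

        glClearDepth(1.0);   /* the clear value is clamped to [0,1] anyway */
        /* Keep near as large and far as small as the scene allows;
           depth precision depends on the far/near ratio. */
        gluPerspective(60.0, displayRatio, 0.1, 1000.0);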
    You can't just change the context to get multisample buffers. Multisampling is part of the pixelformat, and you can only set a pixelformat once during the lifetime of a window. Because choosing a multisample pixelformat requires a WGL extension, and extension availability can only be queried with a valid OpenGL context, you must create a dummy window, set a pixelformat, get the wgl function pointers, destroy the dummy window, create a new window, and choose a pixelformat with the new wgl functions you got. Awkward, but that's how Windows is.
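    A condensed sketch of that dance, assuming <wglext.h> for the typedef and tokens; createBasicWindow() and realDC are hypothetical placeholders, and all error handling is omitted:

        /* 1) Dummy window with a classic pixelformat, just to get a context. */
        HWND dummy = createBasicWindow();              /* hypothetical helper */
        HDC dummyDC = GetDC(dummy);
        PIXELFORMATDESCRIPTOR pfd;
        ZeroMemory(&pfd, sizeof(pfd));
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        SetPixelFormat(dummyDC, ChoosePixelFormat(dummyDC, &pfd), &pfd);
        HGLRC dummyRC = wglCreateContext(dummyDC);
        wglMakeCurrent(dummyDC, dummyRC);

        /* 2) With a context current, grab the extension entry point. */
        PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
            (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");

        /* 3) Tear down the dummy, create the real window. */
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(dummyRC);
        ReleaseDC(dummy, dummyDC);
        DestroyWindow(dummy);

        /* 4) Choose a multisampled format on the real window's DC. */
        int attribs[] = { WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
                          WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
                          WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
                          WGL_COLOR_BITS_ARB,     32,
                          WGL_DEPTH_BITS_ARB,     24,
                          WGL_SAMPLE_BUFFERS_ARB, 1,
                          WGL_SAMPLES_ARB,        4,
                          0 };
        int format; UINT numFormats;
        wglChoosePixelFormatARB(realDC, attribs, NULL, 1, &format, &numFormats);
        SetPixelFormat(realDC, format, &pfd);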
  9. Need help for GL_ARB_TEXTURE_FLOAT

    AFAIK FBOs currently don't define render targets for alpha only, and some implementations don't support formats with fewer than three components, both of which make ALPHA32F an invalid format. Read the FBO spec http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt chapter 4.4.4 Framebuffer Completeness:

    Quote: The following base internal formats from table 3.15 are "color-renderable": RGB, RGBA, FLOAT_R_NV, FLOAT_RG_NV, FLOAT_RGB_NV, and FLOAT_RGBA_NV. The sized internal formats from table 3.16 that have a color-renderable base internal format are also color-renderable. No other formats, including compressed internal formats, are color-renderable.

    That means only NVIDIA offers a one-component renderable format, but it's the red channel, not alpha. (Not sure which extension might have extended Table 3.16 in the specs.) If ARB_texture_float is supported, GL_RGBA32F_ARB should work. [Edited by - Metalcore on November 4, 2006 4:37:14 PM]
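    A minimal sketch of attaching a GL_RGBA32F_ARB texture to an FBO with the EXT entry points the spec describes; width and height are assumed to be defined by the app:

        GLuint fbo, tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height,
                     0, GL_RGBA, GL_FLOAT, NULL);
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, tex, 0);
        if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
            GL_FRAMEBUFFER_COMPLETE_EXT) {
            /* format not color-renderable on this implementation; fall back */
        }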
  10. Strange Offset problems using VBOs

    Check what sizeof(vertex) really is, and what the offset of texture coordinate array 0 inside the class is. I'm a hardcore C coder and wouldn't use the size of a class to identify my vertex data and their buffer offsets. I would use a struct containing only the vertex data, and pointer differences to the first member or offsetof() to determine the VBO start offsets.
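    A sketch of that struct-plus-offsetof approach; the field layout is an assumption for illustration, and <stddef.h> is needed for offsetof:

        typedef struct {
            float position[3];
            float normal[3];
            float texcoord0[2];
        } Vertex;

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
                        (const GLvoid*)offsetof(Vertex, position));
        glNormalPointer(GL_FLOAT, sizeof(Vertex),
                        (const GLvoid*)offsetof(Vertex, normal));
        glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex),
                          (const GLvoid*)offsetof(Vertex, texcoord0));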
    Things I would try to debug:
    - Is the depth buffer cleared?
    - Is depth test disabled? (Then you wouldn't need the depth mask call.)
    - If the modulation doesn't work correctly, what are the exact texture contents, especially the alpha?
    - Is this point size related?
    - The point size threshold changes the alpha value. Are the correct points the ones inside the threshold or outside? Draw non-random data, a straight line of points at different distances from back to front. Change the threshold. Set it to 0.0.
    - Could it be wrong mipmaps?
    - Is lighting disabled?
    - Does Vector3d return a pointer to a float[3] array?
    What's your graphics board and driver version? Have you tried newer drivers?
  12. OpenGL Odd glGetError issue

    Ok, then the threading may have been a red herring and it's really just the number of glGetError calls. But that shouldn't really affect the rendering performance much, because glGet* is not allowed inside immediate mode glBegin-glEnd, and all other rendering entry points are more or less batching geometry. Display lists would not compile the glGet commands; they are executed immediately and not on glCallLists. Test whether this is really driver related by selecting a pixelformat of the Microsoft GDI Generic OpenGL implementation (if you use advanced features, stub them out) and see if the performance is still slow. If so, I would say there is something fishy with the Python way of checking errors. Maybe there is an inherent difference in scheduling between Linux and Windows when using exceptions or some such (sorry, I don't use Python). If that still doesn't help, get a profiler and check which module burns the performance. I wouldn't use glGetError in a release build except for detecting out-of-memory issues after glEndList and for FBO bindings. Checking it after every OpenGL call where that's allowed only makes sense during debugging, and such a build is not suited for performance measurements.
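    One common pattern for that debug-only checking, as a hedged C sketch (the macro name is an illustration, not from the original post):

        /* glGetError only in debug builds; compiles away in release. */
        #include <stdio.h>
        #ifdef _DEBUG
        #define CHECK_GL_ERROR()                                        \
            do {                                                        \
                GLenum err = glGetError();                              \
                if (err != GL_NO_ERROR)                                 \
                    fprintf(stderr, "GL error 0x%04X at %s:%d\n",       \
                            err, __FILE__, __LINE__);                   \
            } while (0)
        #else
        #define CHECK_GL_ERROR() ((void)0)
        #endif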
    "Between vertex and fragment shader" describes it best. It happens after the projection matrix and before the viewport scaling, in the perspective division. In the fragment program you then have window positions. Read the OpenGL 2.0 spec, chapter 2.11 Coordinate Transformations.
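    Summarizing that spec chapter, with viewport (x, y, w, h) and depth range (n, f), the chain written out is:

        clip coords:   (x_c, y_c, z_c, w_c) = Projection * Modelview * vertex
        NDC:           (x_d, y_d, z_d)      = (x_c/w_c, y_c/w_c, z_c/w_c)   <- perspective division
        window coords: x_w = (w/2) * x_d + (x + w/2)
                       y_w = (h/2) * y_d + (y + h/2)
                       z_w = ((f - n)/2) * z_d + (n + f)/2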
  14. OpenGL Odd glGetError issue

    Quote: "Has anyone seen anything like this before? Could it be an issue with PyOpenGL aggressively checking error codes all the time? It seems like a possible cause, although it seems like glGetError calls shouldn't cause any sort of noticeable speed hit."

    Not true: if the driver runs OpenGL commands in a different thread, all glGet commands require a command pipeline drain and are expensive. Are you on a dual-core or multi-CPU system? Try switching off the Threaded Optimization (same place where you found the error reporting issue). If that setting is not there, read the driver release notes (I must admit, I read manuals! ;-)) on this page http://www.nvidia.com/object/winxp_2k_93.71.html and search for the sentence "There may be intermittent application compatibility issues with dual core CPUs."
    If you program it yourself with OpenGL, you should save the money and see how far you get with the GF7800. Correctly programmed, it should handle lots of triangles just fine. Use vertex buffer objects and you have the most control over data management and performance.
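    The core of that advice as a hedged sketch (GL 1.5 entry points; Vertex, verts, and vertexCount are hypothetical app-side names):

        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        /* Upload once, with a usage hint matching how the data is managed. */
        glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
                     verts, GL_STATIC_DRAW);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid*)0);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);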