# Manja

1. ## Common edge of a triangle fan

Hi, I am rendering a polygon as a triangle fan. The common edge between adjacent triangles gets rendered twice, which produces artefacts when blending is enabled. How can I overcome this without using any fragment operations such as logic ops, depth test, stencil test, etc.?
2. ## Clipping

That is great.
3. ## Clipping

I could not understand what will happen if the two points have different w values. Can you please explain how to derive the NDC for the following two clip coordinates: P1 = (20, 24, 0, 31) and P2 = (15, 31, 0, 24)?
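For reference, the perspective divide treats each point with its own w: NDC = (x/w, y/w, z/w). A small sketch working through the two points above (note the second one lands outside the canonical view volume, so the edge between them would normally be clipped before the divide):

```python
# Perspective division: NDC = (x/w, y/w, z/w) for a clip-space point (x, y, z, w).
def clip_to_ndc(x, y, z, w):
    return (x / w, y / w, z / w)

p1 = clip_to_ndc(20, 24, 0, 31)   # approx. (0.645, 0.774, 0.0) -- inside [-1, 1]
p2 = clip_to_ndc(15, 31, 0, 24)   # approx. (0.625, 1.292, 0.0) -- y/w > 1, outside
print(p1)
print(p2)
```

Because each point divides by its own w, equal clip-space x values can map to different NDC x values, which is exactly why the divide happens per vertex.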
4. ## Clipping

Hi, when we clip coordinates to the clipping volume defined by -w <= x <= w, -w <= y <= w, -w <= z <= w, do we have to linearly interpolate w as well, the way we interpolate x, y and z? Where can I get more information about this concept and the relation between 'w' and clipping? Thanks
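Yes: clipping happens in homogeneous clip space, and all four components, including w, are interpolated with the same parameter t, so the intersection point still lies exactly on the clip plane. A minimal sketch (hypothetical helper name) for one plane, x = +w:

```python
# Clip a segment against the plane x = +w in clip space.
# All four components, including w, are interpolated with the same t,
# so the intersection point satisfies x == w exactly.
def clip_x_plus(p1, p2):
    d1 = p1[3] - p1[0]          # signed distance measure w - x (>= 0 means inside)
    d2 = p2[3] - p2[0]
    t = d1 / (d1 - d2)          # parameter where the edge crosses x = w
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

inside  = (0.0, 0.0, 0.0, 1.0)
outside = (4.0, 0.0, 0.0, 2.0)  # x > w, so this endpoint is outside
p = clip_x_plus(inside, outside)
print(p)                         # x component equals the interpolated w
```

The other five planes work the same way with the corresponding distance measure (w + x, w - y, and so on).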
5. ## Rendering a triangle fan

I am trying to render a set of triangles sharing a common edge, similar to a triangle fan. Since these common edges are rendered twice, in blended mode they come out in a different color. Is there an algorithm that avoids double rendering of the common edges while rendering a triangle fan?
6. ## Culling

Why is culling done using window coordinates rather than the eye coordinates themselves?
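One way to see why: face culling is defined on the sign of the polygon's area in window coordinates, because that sign is the winding as actually seen on screen after projection, which a test in eye space could get wrong. A sketch of the signed-area (shoelace) test:

```python
# Back-face culling uses the signed area of the projected polygon in
# window coordinates: its sign gives the on-screen winding directly.
def signed_area(pts):
    # Shoelace formula over 2D window-space vertices.
    a = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        a += x1 * y2 - x2 * y1
    return a / 2.0

ccw = [(0, 0), (10, 0), (0, 10)]
print(signed_area(ccw))   # positive: counter-clockwise, front-facing under glFrontFace(GL_CCW)
```

Reversing the vertex order flips the sign, which is exactly what GL_CULL_FACE keys off.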
7. ## Texture Mapping - combine functions

How is the texture environment mode GL_COMBINE used? Is it for combining color values obtained from two texture units? Where can I get sample code for this?
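For context: GL_COMBINE lets you pick a combiner function per texture unit (GL_MODULATE, GL_ADD, GL_INTERPOLATE, ...) and its operands (e.g. GL_TEXTURE for the current unit's texel, GL_PREVIOUS for the result of the unit before it), via glTexEnvi calls such as `glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE)`. A rough Python emulation of two of the combiner functions, to illustrate the arithmetic only (not real GL code):

```python
# Rough emulation of two GL_COMBINE functions on normalized RGB colors.
def combine(mode, src0, src1):
    if mode == "GL_MODULATE":                 # src0 * src1, componentwise
        return tuple(a * b for a, b in zip(src0, src1))
    if mode == "GL_ADD":                      # src0 + src1, clamped to [0, 1]
        return tuple(min(1.0, a + b) for a, b in zip(src0, src1))
    raise ValueError(mode)

prev = (1.0, 0.5, 0.25)   # result of the previous unit (GL_PREVIOUS)
tex  = (0.5, 0.5, 1.0)    # texel fetched on this unit (GL_TEXTURE)
print(combine("GL_MODULATE", prev, tex))   # (0.5, 0.25, 0.25)
```

With two texture units, unit 1's GL_PREVIOUS operand is unit 0's output, which is how the two textures get combined.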
8. ## Perspective Division

I ran the following piece of code using the OpenGL ES library from Khronos. I got an inverted right-angled triangle with its peak clipped. But when I computed manually I got eye coordinates (24.99, 0.0, 24.99, -5.5), (0.0, 0.0, 24.99, -5.5), (24.99, 0.0, 0.0, 1.0) and clip coordinates (after multiplying with the projection matrix and before clipping to the view volume) of (26.66, 0.0, -19.98, -252.40), (0.0, 0.0, -19.98, -252.40), (26.66, 0.0, -1.0, 0.0). Should this triangle lie outside the viewing volume and be rejected entirely?

```c
GLfloat triangles[] = {
    0.5, 0.5, 0.0,
    0.0, 0.5, 0.0,
    0.5, 0.0, 0.0,
};

void init(int width, int height)
{
    GLfloat ModelView[] = {
        49.98f,  0.0f,   0.0f,  0.0f,
         0.0f,   0.0f, -49.98f, 0.0f,
         0.0f,  49.97f,  0.0f,  0.0f,
         0.0f, -13.0f,   0.0f,  1.0f
    };
    GLfloat Projection[] = {
        1.067f, 0.0f,    0.0f,  0.0f,
        0.0f,   1.067f,  0.0f,  0.0f,
        0.0f,   0.0f,   -1.02f, -1.0f,
        0.0f,   0.0f,  -10.1f,  0.0f
    };

    glViewport(0, 0, 400, 400);
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(Projection);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(ModelView);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, triangles);
}

void Draw(void)
{
    glColor4f(1.0, 0.0, 0.0, 0.0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glFlush();
}
```
9. ## Perspective Division

When glFrustum is used for projection and the z coordinate in eye space happens to be zero, do we get a zero 'w' after projection?
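Yes: the last row of the glFrustum matrix is (0, 0, -1, 0), so w_clip = -z_eye, and a vertex on the eye plane (z_eye = 0) gets w_clip = 0. Such points are removed by near-plane clipping before any division happens. A quick check of that last row, using the matrix layout from the glFrustum man page:

```python
# The glFrustum projection matrix has last row (0, 0, -1, 0), so
# w_clip = -z_eye; z_eye == 0 therefore gives w_clip == 0.
def frustum(l, r, b, t, n, f):
    # Row-major 4x4, as documented for glFrustum.
    return [
        [2*n/(r-l), 0.0,        (r+l)/(r-l),   0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),   0.0],
        [0.0,       0.0,       -(f+n)/(f-n),  -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,           0.0],
    ]

def mul(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

P = frustum(-1, 1, -1, 1, 1, 100)
clip = mul(P, [3.0, 4.0, 0.0, 1.0])   # eye-space point with z = 0
print(clip[3])                         # w_clip == -z_eye == 0.0
```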
10. ## Perspective Division

I understand that perspective division is used to get Normalised Device Coordinates from the clip coordinates. As per the OpenGL specs, the transformation pipeline is:

[Object Coordinates] x [ModelView Matrix] => Eye Coords
[Eye Coords] x [Projection Matrix] => Clip Coords

The perspective division is then done on these clip coords as follows:

[ClipCoordX/ClipCoordW, ClipCoordY/ClipCoordW, ClipCoordZ/ClipCoordW]

My doubt is: should we do the above perspective division when ClipCoordW is zero?
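The pipeline above can be sketched as follows (hypothetical helper names; in a conforming implementation the divide never sees w = 0, because clipping against the view volume removes such points first):

```python
# Sketch of the vertex pipeline: object -> eye -> clip -> NDC.
# The perspective divide is only defined for w != 0; clipping runs
# before it and removes the w == 0 case.
def transform(m, v):
    # 4x4 row-major matrix times column vector.
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def ndc(modelview, projection, obj):
    eye = transform(modelview, obj)
    clip = transform(projection, eye)
    if clip[3] == 0.0:
        return None                      # would have been clipped away
    return [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(ndc(identity, identity, [2.0, 4.0, 1.0, 2.0]))   # [1.0, 2.0, 0.5]
```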
11. ## Perspective Division

In a coordinate transformation pipeline from world coordinates to window coordinates, can 'w' become zero after the projection transformation? In such a case, should we skip the perspective division, or does anything else need to be done?
12. ## Texture Mapping in OpenGL ES

Thank you. That is very helpful.
13. ## Texture Mapping in OpenGL ES

Thank you. Does it mean that the rest of the textures loaded by the application (using glTexImage2D) will reside on the client side, and only the two most recently bound textures will be in the GL server? Then there would be quite a bit of data transfer between the GL client and GL server each time a new texture is bound. Please clarify whether my understanding is correct. Also, can you please explain the difference between a texture map and a texture tile? Thanks Manja
14. ## OpenGL Texture Mapping in OpenGL ES

Hello, I have the following two questions:

1. I was debugging an OpenGL ES application. I checked with glGetIntegerv(GL_MAX_TEXTURE_UNITS, &Num) and found that the maximum number of texture units the GL supports is 2, but the application loads 10 textures at start-up and binds a different texture depending on the scenario. How is this possible?

2. My understanding is that texture coordinates range from 0 to 1, but the application sets texture coordinates from -10 to 10. How is this supported?

Can someone help me understand these concepts? Thanks manja
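On the second question: coordinates outside [0, 1] are handled by the texture wrap mode set with glTexParameter. With GL_REPEAT only the fractional part of the coordinate is used, so a range of -10 to 10 simply tiles the texture 20 times across the primitive. A simplified sketch of two wrap modes (the real GL_CLAMP_TO_EDGE clamps slightly inside the border texels, which is omitted here):

```python
import math

# With wrap mode GL_REPEAT, only the fractional part of a texture
# coordinate is used; GL_CLAMP_TO_EDGE pins it to the [0, 1] range
# (simplified -- real GL clamps to the centers of the edge texels).
def wrap_repeat(s):
    return s - math.floor(s)

def wrap_clamp_to_edge(s):
    return min(1.0, max(0.0, s))

print(wrap_repeat(2.25))         # 0.25
print(wrap_repeat(-0.25))        # 0.75
print(wrap_clamp_to_edge(10.0))  # 1.0
```

The first question is the distinction between texture *units* (how many textures can be applied in a single draw, here 2) and texture *objects* (how many textures can exist at once, limited only by memory); glBindTexture just selects which object the active unit samples from.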