About ecco_the_dolphin

  1. wglMakeCurrent fails

    Quote: Original post by Erik Rufelt
    Does it fail only on the primary thread, only on the texture rendering thread, or on both?

    It happens only on the primary thread, i.e. on the thread that renders to the ActiveX controls. I found the following words in the Remarks section of wglMakeCurrent: "GDI transformation and clipping in hdc are not supported by the rendering context." Could these be related to the ERROR_TRANSFORM_NOT_SUPPORTED code? If so, then your guess could be true. Thanks for your reply; maybe I should ask Microsoft about this code... However, I get my HDC only once, during window creation; I set the pixel format and don't release this HDC until the window is destroyed. Is it true that if the application gets ANOTHER HDC for my window and changes its settings, those changes will affect my own HDC?

    Quote: Original post by Erik Rufelt
    Can you solve it for example by a while(failed) { Sleep(10); wglMakeCurrent(...); }?

    I will try that, but if your guess is true, it will not help, because if the application changes my HDC settings behind the scenes and then changes them back, that happens in the application message loop. During Sleep the application does not respond to window messages. [Edited by - ecco_the_dolphin on December 5, 2009 9:36:56 AM]
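    Erik's retry suggestion can be sketched generically. This is a minimal sketch, not the original code: the callable stands in for wglMakeCurrent so the pattern can be shown without a GL context, and RetryMakeCurrent and max_tries are illustrative names, not any real API.

```cpp
#include <functional>

// Retry a make-current style call a bounded number of times.
// The callable abstracts the real wglMakeCurrent(hdc, hglrc) call.
bool RetryMakeCurrent(const std::function<bool()>& try_make_current,
                      int max_tries) {
    for (int i = 0; i < max_tries; ++i) {
        if (try_make_current())
            return true;
        // In the real code a Sleep(10) would go between attempts - but, as
        // noted above, sleeping on the UI thread also blocks the message loop,
        // so a bounded retry without sleeping may be the safer variant there.
    }
    return false;
}
```

    Bounding the retries matters: if the failure is persistent rather than transient, an unbounded while(failed) loop on the UI thread would hang the application.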
  2. wglMakeCurrent fails

    And before doing the SendMessage to exchange texture names, the second thread calls glFinish() and wglMakeCurrent(0,0), and also detaches the texture from the FBO. Moreover, when I draw a frame on the primary thread, I call glBindTexture(GL_TEXTURE_2D,0) at the end of the frame to make sure that the texture is not current for the primary thread.
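    The two-texture handoff described above can be sketched as a minimal double-buffered exchange. This is an illustrative sketch, not the original code: texture names are modeled as plain unsigned integers, and SendMessage's synchronous delivery is approximated by a mutex-guarded swap (TexturePair and its members are assumed names).

```cpp
#include <mutex>
#include <utility>

// Hypothetical stand-in for the scheme described above: the UI thread draws
// front_tex while the worker renders into back_tex; once the worker has
// called glFinish()/wglMakeCurrent(0,0) and detached its texture from the
// FBO, the roles of the two names are exchanged atomically.
struct TexturePair {
    unsigned front_tex = 1;  // texture name currently drawn by the UI thread
    unsigned back_tex  = 2;  // texture name the worker renders into
    std::mutex m;

    // Models the synchronous hand-off done via SendMessage in the post.
    void Swap() {
        std::lock_guard<std::mutex> lock(m);
        std::swap(front_tex, back_tex);
    }
};
```

    The key property this models is that neither thread ever touches a texture name while the other side still considers it current, which is exactly what the glFinish()/detach sequence above is for.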
  3. wglMakeCurrent fails

    Quote: Original post by Erik Rufelt
    Do you always use the same HDC to set the contexts? I don't see what that error message means.. it says The requested transformation operation is not supported... but that doesn't really make much sense. Perhaps setting a matrix that isn't invertible or something? (If that is the type of transformation it's referring to at all). Or perhaps that error just happens to be the last one set because of a previous failure.

    Everything is synced properly, but I don't use the same HDC to set the contexts; however, I use HDCs with equal pixel formats. Moreover, in the Debug build I have a glGetError() call after almost every OpenGL function, so everything seems fine. I don't understand how wglMakeCurrent can fail and then succeed on the next frame. I mean, the application can run for a week, then for one frame wglMakeCurrent fails, and then the application runs as usual. I use the first context ONLY on the primary thread and the second context ONLY on the second thread (except during the initialization procedure). I don't render to an active texture; I synchronize texture names via a SendMessage call.
  4. wglMakeCurrent fails

    No, the application actually is multithreaded (it is, in fact, a multithreaded ActiveX component). I use two OpenGL contexts: the first is used for drawing into windows, and the second is used by the second thread for rendering to a texture (using an FBO; by the way, I do not render to an active texture - I actually have 2 textures, one used by the first context and another by the second), which is then used by the first context. [Edited by - ecco_the_dolphin on December 4, 2009 5:42:26 PM]
  5. wglMakeCurrent fails

    Yes, I call it every frame. I do this because there are several windows in my app, and I use one OpenGL context for all of them, so I need wglMakeCurrent(...) for every window I want to draw in.
  6. wglMakeCurrent fails

    The problem is that sometimes (very, very rarely) wglMakeCurrent(...) fails. GetLastError returns 2004 (ERROR_TRANSFORM_NOT_SUPPORTED - I don't know what this means). If wglMakeCurrent fails during the frame render pass, I just do nothing. The funny thing is that during the next frame wglMakeCurrent succeeds and the application runs fine. Could someone explain to me what the hell is going on? By the way, I use NVIDIA drivers 190.62; the operating system is Windows XP.
  7. Quote: No. Doing so is undefined behavior, technically. If you have a pointer to a block of allocated memory, you are only allowed to do pointer arithmetic within the range of the block of allocated memory (plus 1 past the end). There was a nice discussion of pointer arithmetic and the undefined-ness of making pointers point outside of this range.

    Thanks a lot! I didn't know that. According to ISO/IEC 14882:2003 (the C++ standard), section 5.7:

    Quote: ... 5. When an expression that has integral type is added to or subtracted from a pointer, the result has the type of the pointer operand. If the pointer operand points to an element of an array object, and the array is large enough, the result points to an element offset from the original element such that the difference of the subscripts of the resulting and original array elements equals the integral expression. In other words, if the expression P points to the i-th element of an array object, the expressions (P)+N (equivalently, N+(P)) and (P)-N (where N has the value n) point to, respectively, the i+n-th and i-n-th elements of the array object, provided they exist. Moreover, if the expression P points to the last element of an array object, the expression (P)+1 points one past the last element of the array object, and if the expression Q points one past the last element of an array object, the expression (Q)-1 points to the last element of the array object. If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined. ... [Edited by - ecco_the_dolphin on June 9, 2009 8:15:45 PM]
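    The rule in that quote can be made concrete with a small sketch. PointerRangeDemo is an illustrative name; the commented-out line marks the case the standard forbids.

```cpp
#include <cassert>

// Demonstrates the range the quoted 5.7/5 wording allows: for an array of n
// elements, the pointer values p + 0 .. p + n may all be formed (p + n may
// only be compared or subtracted, never dereferenced). Forming p - 1, by
// contrast, is already undefined behavior even if it is never dereferenced.
void PointerRangeDemo() {
    int a[4] = {10, 20, 30, 40};
    int* p = a;                        // points to a[0]
    int* one_past_end = p + 4;         // legal to form per the quoted wording
    assert(one_past_end - p == 4);     // subtraction within the same array
    assert(*(one_past_end - 1) == 40); // (Q)-1 points to the last element
    // int* bad = p - 1;               // UB: outside the range [a, a + 4]
}
```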
  8. Thanks for the reply. You mean that an implementation can cache the vertex coordinates (or texture coordinates) which I specify in glVertexPointer (or glTexCoordPointer)? Then if I write something like this:

    struct fvect2d
    {
        float x;
        float y;
    };

    void DrawSomething ()
    {
        fvect2d vertex_data[3];
        vertex_data[0].x = 0;   vertex_data[0].y = 0;
        vertex_data[1].x = 1.f; vertex_data[1].y = 1.f;
        vertex_data[2].x = 1.f; vertex_data[2].y = 0;
        glVertexPointer(2,GL_FLOAT,0,&vertex_data[0].x);
        glEnableClientState(GL_VERTEX_ARRAY);
        glDrawArrays(GL_TRIANGLES,0,3);
        //I modify vertex_data while the vertex array is enabled
        vertex_data[1].x = -1.f;
        vertex_data[1].y = -1.f;
        glDrawArrays(GL_TRIANGLES,0,3);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

    then some implementation could display only the first triangle? And to draw 2 triangles I must Disable/Enable the array every time I modify the vertex data?
  9. Bitmapped Font artifacts

    What outline effects are you talking about? :) You said that you use BITMAPPED fonts. If I'm right, and you use blending when drawing a bitmap at a raster position that is not an exact pixel coordinate, then you get blurring artifacts. They are not really artifacts - that is just how blending works :). But if you don't want this blurring, the only way to avoid it is to specify an exact pixel coordinate.
  10. Bitmapped Font artifacts

    What kind of artifacts do you have? Do you use any blending functions? If so, try disabling blending.
  11. I have a very big array of vertex data (around 5 MB). I want to draw textured lines, but I don't want to store texture coordinates; I can calculate them at runtime. I do the following:

    void __fastcall CGraphicCore::_SurfaceDrawTextureLine
        //Pointer to primitive
        (const Primitive_In * ptr_prim
        //Some additional info
        ,CDrawRule * rule)
    {
        _ASSERTE(ptr_prim->PointsCount()>1);
        size_t first_point_index = ptr_prim->GetFirstPointIndex();
        size_t points_number = ptr_prim->PointsCount();
        //rule->first_point_ptr is a pointer to the first point in the vertex array
        const fvec2_s * cur_pnt_ptr = rule->first_point_ptr + first_point_index;
        const fvec2_s * border_pnt_ptr = cur_pnt_ptr + points_number;
        double * tex_mem;
        //this function gets memory from a buffer (actually a template function);
        //it calls realloc(buffer_data_,sizeof(double)*elements_number)
        rule->additional_buffer_prt->GetRawMemory(points_number,&tex_mem);
        double * ptr_tex_storage = tex_mem;
        double tex_crd = 0;
        fvec2_s prev_pnt;
        fvec2_s cur_pnt;
        prev_pnt = *cur_pnt_ptr;
        cur_pnt = *cur_pnt_ptr;
        while (cur_pnt_ptr!=border_pnt_ptr)
        {
            cur_pnt = *cur_pnt_ptr;
            tex_crd = //do some calculations
            *ptr_tex_storage = tex_crd;
            ++ptr_tex_storage;
            ++cur_pnt_ptr;
            prev_pnt = cur_pnt;
        }
        //HACK: possible solution: modify the vertex pointer and reset it to the first element in the Deinit function.
        glTexCoordPointer(1,GL_DOUBLE,0,tex_mem - first_point_index);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        //render primitive
        glDrawArrays(GL_LINE_STRIP,first_point_index,points_number);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        NDebugTool::DetectGLError();
        return;
    }

    The question is: can I call glTexCoordPointer(1,GL_DOUBLE,0,tex_mem - first_point_index) with (tex_mem - first_point_index)? I mean, is it legal to pass an invalid pointer to glTexCoordPointer if ("illegal pointer" + offset) == a valid pointer? This code works for me, but maybe it will not work with some OpenGL implementations...
[Edited by - ecco_the_dolphin on June 8, 2009 2:08:20 PM]
  12. Hello! I have a strange problem with the latest ATI drivers on the Radeon 9550/X1100 Series (tested with Catalyst 8.7, 8.6 and 8.5). I think it is a driver problem, because with very old drivers everything works fine (I tested with Catalyst 6.5, release date: xx.xx.2006). The problem: first I render a group of triangles with glPolygonMode(GL_FRONT_AND_BACK,GL_FILL) using GL_VERTEX_ARRAY, then I render the same triangles with glPolygonMode(GL_FRONT_AND_BACK,GL_LINE) using GL_VERTEX_ARRAY and GL_EDGE_FLAG_ARRAY, and here something strange happens... Some of the triangles that must be filled with color won't draw at all. Here are some screenshots: this is the first frame of the app... The last triangle must be filled with teal; however, as you can see, this is not true - it is filled red. This is one of the other frames. Another strange thing... (details on the screenshot). Here is the source code of the application (it is just a demo example). I used glut 3.7.6. I don't think this problem is related to glut; I can rewrite this app using the win32 api only, if needed.
    Source code:

    #include <Windows.h>
    #include <gl/GL.h>
    #include <gl/glut.h>

    //first group of triangles
    const GLfloat triangle_vertexes[] =
        {1,100.0f,     70.0f,90.0f,   1.0f,1.0f
        ,30.0f,160.0f, 80.0f,70.0f,   60.0f,15.0f
        ,40.0f,60.0f,  120.0f,100.0f, 130.0f,110.0f
        ,40.0f,50.0f,  120.0f,90.0f,  130.0f,20.0f};
    //second group of triangles (1 triangle)
    const GLfloat triangle_vertexes2[] =
        {120.0f,12.0f, 40.0f,80.0f, 10.0f,110.0f};
    //edge flags of the first group
    const GLboolean edge_flags[] =
        {true ,false,false
        ,true ,true ,false
        ,false,false,false
        ,true ,true ,true};
    //edge flags of the second group
    const GLboolean edge_flags2[] = {true,true,true};

    GLfloat angle = 0;
    int size_x;
    int size_y;
    bool increment = true;

    void Reshape (int w,int h)
    {
        glViewport(0,0,w,h);
        size_x = w;
        size_y = h;
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0,w/8,0,h/8,-1,1);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }

    void Display (void)
    {
        glClearColor(1.0f,1.0f,1.0f,1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        //Change modelview matrix
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(angle,0,0,-1.0f);
        //Draw first group of triangles
        glPolygonMode(GL_FRONT_AND_BACK,GL_FILL);
        glVertexPointer(2,GL_FLOAT,0,triangle_vertexes);
        glEnableClientState(GL_VERTEX_ARRAY);
        glColor3f(1.0f,0,0);
        //2 floats per vertex, so the vertex count is sizeof/(2*sizeof(GLfloat))
        glDrawArrays(GL_TRIANGLES,0,sizeof(triangle_vertexes)/(2*sizeof(GLfloat)));
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(1.0, 1.0);
        //Draw borders with edge flags
        glPolygonMode(GL_FRONT_AND_BACK,GL_LINE);
        glEnable(GL_BLEND);
        glEdgeFlagPointer(0,edge_flags);
        glEnableClientState(GL_EDGE_FLAG_ARRAY);
        glColor3f(0,1,0);
        glDrawArrays(GL_TRIANGLES,0,sizeof(triangle_vertexes)/(2*sizeof(GLfloat)));
        glDisable(GL_BLEND);
        glDisableClientState(GL_EDGE_FLAG_ARRAY);
        //Change modelview matrix
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(angle,0,0,1.0f);
        //Draw second group of triangles (1 triangle actually :))
        glPolygonMode(GL_FRONT_AND_BACK,GL_FILL);
        glVertexPointer(2,GL_FLOAT,0,triangle_vertexes2);
        glColor3f(0,0.5f,0.5f);
        glDrawArrays(GL_TRIANGLES,0,sizeof(triangle_vertexes2)/(2*sizeof(GLfloat)));
        //Draw its border with edge flags
        glPolygonMode(GL_FRONT_AND_BACK,GL_LINE);
        glEnable(GL_BLEND);
        glColor3f(0,1,0);
        glEdgeFlagPointer(0,edge_flags2);
        glEnableClientState(GL_EDGE_FLAG_ARRAY);
        glDrawArrays(GL_TRIANGLES,0,sizeof(triangle_vertexes2)/(2*sizeof(GLfloat)));
        glDisableClientState(GL_EDGE_FLAG_ARRAY);
        glDisable(GL_BLEND);
        glDisableClientState(GL_VERTEX_ARRAY);
        //Change angle
        if (increment)
        {
            angle += 5.0f;
            if (angle > 25.0) increment = false;
        }
        else
        {
            angle -= 5.0f;
            if (angle < -25.0) increment = true;
        }
        glutSwapBuffers();
    }

    void Init (void)
    {
        //we want antialiased triangle borders
        glEnable(GL_LINE_SMOOTH);
        glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
        //set flat shading
        glShadeModel(GL_FLAT);
    }

    void TimerFunc (int some)
    {
        glutPostRedisplay();
        glutTimerFunc(500,TimerFunc,0);
    }

    int main (int argc,char **argv)
    {
        glutInit(&argc,argv);
        glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
        glutInitWindowSize(1024,1024);
        glutInitWindowPosition(100,100);
        glutCreateWindow("Problem_demo");
        glutDisplayFunc(Display);
        glutReshapeFunc(Reshape);
        Init();
        glutTimerFunc(1000,TimerFunc,0);
        glutMainLoop();
        return 0;
    }

    The question is: is it a driver problem, or am I doing something wrong? By the way, I tried this example on some NVIDIA cards (GeForce 9400 GT M and GeForce 8400 M GS) and everything works fine. I also tried the Microsoft generic implementation, and it works fine too.

    P.S. Please excuse my awful English.