
Member Since 02 Mar 2002
Offline Last Active Feb 06 2013 09:01 AM

Topics I've Started

Division by zero and AMD

03 February 2013 - 06:45 AM

We have a project written in VB6. It uses GL 1.1. It runs fine on multiple Windows versions and different cards.

But with the Catalyst 13.1 driver, and also the previous one, a "division by zero" error is thrown.


It happens on HD 5450, Win 7 32 bit and also on another HD series card.

It happens on simple functions like glScalef and glScissor.

The values being submitted to those functions are fine, so I don't understand what the problem is. Something must be happening behind the scenes in the AMD drivers.

Are there any benchmarks that demonstrate core profile vs compatibility profile?

30 November 2011 - 07:42 AM

Are there any benchmarks that demonstrate core profile vs compatibility profile?
By compatibility, I don't mean to actually use any old GL functions. Your code would still be strictly GL 3.3 core except that you would be creating a "compatibility context".

Apparently, performance is worse with a core profile on NVIDIA, according to a slide by Mark Kilgard
(http://www.slideshare.net/Mark_Kilgard/gtc-2010-opengl, page 97).

According to him, a core profile gives at best equal, and possibly worse, performance, because there are more things to check at every GL function call.

That is a pretty bad design decision.

The forums have problems

06 June 2011 - 07:25 AM

I don't know if this is the right place to post.

These forums have had problems for a long while. Ever since I began using them, there used to be ASP errors.
After the update, there was a high CPU usage problem for 5 seconds every time I clicked on a link.
Now, there are timeout problems. Do the servers have problems?

Even NeHe is down (nehe.gamedev.net)

I'm just reporting it because perhaps only I and a few other people are affected.

freeGLUT, crash at glGenVertexArrays

22 March 2011 - 11:24 AM

I'm trying to convert some code to freeGLUT.
I'm trying to create a GL 3.3 context (yes, my system supports GL 3.3).
The problem is that it crashes on the first GL call, glGenVertexArrays.

I am using GLEW and GLenum err = glewInit() succeeds.
Any ideas?

// Triangle_opengl_3_1
// A cross platform version of
// http://www.opengl.org/wiki/Tutorial:_OpenGL_3.1_The_First_Triangle_%28C%2B%2B/Win%29
// with some code from http://www.lighthouse3d.com/opengl/glsl/index.php?oglexample1
// and from the book OpenGL Shading Language 3rd Edition, p215-216
// Daniel Livingstone, October 2010

#include <GL/glew.h>
#include <GL/freeglut.h>
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

// Globals
// Real programs don't use globals :-D
// Data would normally be read from files
GLfloat vertices[] = {	-1.0f,0.0f,0.0f,
						0.0f,1.0f,0.0f,
						0.0f,0.0f,0.0f };
GLfloat colours[] = {	1.0f, 0.0f, 0.0f,
						0.0f, 1.0f, 0.0f,
						0.0f, 0.0f, 1.0f };
GLfloat vertices2[] = {	0.0f,0.0f,0.0f,
						1.0f,0.0f,0.0f,
						0.0f,1.0f,0.0f };

// two vertex array objects, one for each object drawn
unsigned int vertexArrayObjID[2];
// three vertex buffer objects in this example
unsigned int vertexBufferObjID[3];

// loadFile - loads the text file fname and returns it as a char*
// allocates memory - so need to delete after use
// size of file returned in fSize
char* loadFile(char *fname, GLint &fSize)
{
	ifstream::pos_type size;
	char * memblock;
	string text;

	// file read based on example in cplusplus.com tutorial
	ifstream file (fname, ios::in|ios::binary|ios::ate);
	if (file.is_open())
	{
		size = file.tellg();
		fSize = (GLuint) size;
		memblock = new char [size];
		file.seekg (0, ios::beg);
		file.read (memblock, size);
		file.close();
		cout << "file " << fname << " loaded" << endl;
	}
	else
	{
		cout << "Unable to open file " << fname << endl;
		exit(1);
	}
	return memblock;
}

// printShaderInfoLog
// From OpenGL Shading Language 3rd Edition, p215-216
// Display (hopefully) useful error messages if shader fails to compile
void printShaderInfoLog(GLint shader)
{
	int infoLogLen = 0;
	int charsWritten = 0;
	GLchar *infoLog;

	glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen);

	// should additionally check for OpenGL errors here

	if (infoLogLen > 0)
	{
		infoLog = new GLchar[infoLogLen];
		// error check for fail to allocate memory omitted
		glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog);
		cout << "InfoLog:" << endl << infoLog << endl;
		delete [] infoLog;
	}

	// should additionally check for OpenGL errors here
}

void init(void)
{
	// Would load objects from file here - but using globals in this example	

	// Allocate Vertex Array Objects
	glGenVertexArrays(2, &vertexArrayObjID[0]);
	// Setup first Vertex Array Object
	glBindVertexArray(vertexArrayObjID[0]);
	glGenBuffers(2, vertexBufferObjID);

	// VBO for vertex data
	glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjID[0]);
	glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vertices, GL_STATIC_DRAW);
	glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	// VBO for colour data
	glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjID[1]);
	glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), colours, GL_STATIC_DRAW);
	glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);

	// Setup second Vertex Array Object
	glBindVertexArray(vertexArrayObjID[1]);
	glGenBuffers(1, &vertexBufferObjID[2]);

	// VBO for vertex data
	glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjID[2]);
	glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vertices2, GL_STATIC_DRAW);
	glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindVertexArray(0);
}


void initShaders(void)
{
	GLuint p, f, v;

	char *vs,*fs;

	v = glCreateShader(GL_VERTEX_SHADER);
	f = glCreateShader(GL_FRAGMENT_SHADER);	

	// load shaders & get length of each
	GLint vlen;
	GLint flen;
	vs = loadFile("minimal.vert",vlen);
	fs = loadFile("minimal.frag",flen);
	const char * vv = vs;
	const char * ff = fs;

	glShaderSource(v, 1, &vv, &vlen);
	glShaderSource(f, 1, &ff, &flen);

	GLint compiled;

	glCompileShader(v);
	glGetShaderiv(v, GL_COMPILE_STATUS, &compiled);
	if (!compiled)
	{
		cout << "Vertex shader not compiled." << endl;
		printShaderInfoLog(v);
	}

	glCompileShader(f);
	glGetShaderiv(f, GL_COMPILE_STATUS, &compiled);
	if (!compiled)
	{
		cout << "Fragment shader not compiled." << endl;
		printShaderInfoLog(f);
	}

	p = glCreateProgram();

	glBindAttribLocation(p,0, "in_Position");
	glBindAttribLocation(p,1, "in_Color");

	glAttachShader(p,v);
	glAttachShader(p,f);

	glLinkProgram(p);
	glUseProgram(p);

	delete [] vs; // dont forget to free allocated memory
	delete [] fs; // we allocated this in the loadFile function...
}

void display(void)
{
	// clear the screen
	glClear(GL_COLOR_BUFFER_BIT);

	glBindVertexArray(vertexArrayObjID[0]);	// First VAO
	glDrawArrays(GL_TRIANGLES, 0, 3);	// draw first object

	glBindVertexArray(vertexArrayObjID[1]);		// select second VAO
	glVertexAttrib3f((GLuint)1, 1.0, 0.0, 0.0); // set constant color attribute
	glDrawArrays(GL_TRIANGLES, 0, 3);	// draw second object

	glBindVertexArray(0);
	glutSwapBuffers();
}


void reshape(int w, int h)
{
	glViewport(0, 0, (GLsizei) w, (GLsizei) h);
}

int main (int argc, char* argv[])
{
	glutInit(&argc, argv);
	glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
	glutInitContextVersion(3, 3);
	glutCreateWindow("Triangle Test");

	GLenum err = glewInit();
	if (GLEW_OK != err)
	{
		/* Problem: glewInit failed, something is seriously wrong. */
		cout << "glewInit failed, aborting." << endl;
		exit (1);
	}
	cout << "Status: Using GLEW " << glewGetString(GLEW_VERSION) << endl;
	cout << "OpenGL version " << glGetString(GL_VERSION) << " supported" << endl;

	init();
	initShaders();
	glutDisplayFunc(display);
	glutReshapeFunc(reshape);
	glutMainLoop();

	return 0;
}

PS : I am on Win32. I am compiling with VC++ 2008 Express
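A guess at the cause (not verified on my machine): with a 3.3 context, GLEW's default extension query can fail, leaving core entry points such as glGenVertexArrays as null function pointers, and the first call through one then crashes even though glewInit itself reported success. The commonly suggested workaround is to set glewExperimental before glewInit, and to check the pointer before using it. This fragment (assumes GLEW; it needs a live GL context, so it is a sketch rather than a runnable program) would replace the glewInit block in main:

```cpp
// Possible workaround (assumes GLEW; not verified here): ask GLEW to
// load entry points even when it cannot enumerate extensions the old way.
glewExperimental = GL_TRUE;          // must be set before glewInit()
GLenum err = glewInit();
if (GLEW_OK != err)
{
	cout << "glewInit failed: " << glewGetErrorString(err) << endl;
	exit(1);
}
glGetError();                        // glewInit can leave a stale GL error behind

// Guard against a null entry point instead of crashing inside the call
if (glGenVertexArrays == NULL)
{
	cout << "glGenVertexArrays not available in this context" << endl;
	exit(1);
}
```

If the pointer still comes back null, the context being created is probably not actually 3.3; printing glGetString(GL_VERSION) right after glutCreateWindow would confirm that.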

Are these forums slow?

24 February 2011 - 08:49 AM

I know that it has been a while since gamedev made the switch to the new forum. I've just been too lazy to say anything about this problem.
I am using Firefox 3.6

and whenever I click on a forum topic, Firefox jams up for 3-7 seconds. I can see that it pins one of the CPU cores at 100% for about 3 of those seconds.
I'm quite surprised that a web page can eat so much CPU time.

Does this happen to you guys?
What's a better browser for these forums, other than IE?
I tried IE7 and it seems to be even slower than Firefox.

PS : I am on Windows Vista 32 bit.

PPS : I tested with a Win XP machine which had IE6 on it, and it was horrendously slow: 2 to 3 times slower than Firefox.
I imagine quite a few people would have to buy a new machine to use these forums.