

Member Since 14 Jul 2009
Offline Last Active Sep 27 2015 08:23 AM

Topics I've Started

Displaying an image in OpenGL

09 November 2013 - 08:18 PM

Hi all!


I am trying to render an image that I generate after reading a .asc file. I perform all the necessary transformations and then display it on the screen. The output is a PPM image. Initially I used the Windows API BitBlt function to render the image, and it shows up perfectly on the screen. Then I tried using OpenGL to render the same image, and it doesn't work.


If I use glDrawPixels, I get a black screen, and if I use texture mapping, I get a partially white box.


Here are the images and the relevant bits of code. I have worked with Targa images before, but this is the first time I am creating a PPM image. I do not think that is the issue, though, since I am only trying to render the buffer, which is of type char*.

This image is produced using BitBlt.

Attached file: screenShot1.jpg (33.48 KB)

This one using OpenGL.

Attached file: screenShot2.jpg (42.99 KB)

And here is the code of my files: Main.cpp and COpenGLRenderer.cpp:

DrawFrameBuffer (using BitBlt):

void DrawFrameBuffer()
{
	HBITMAP m_bitmap;
	HDC memDC = CreateCompatibleDC(hDC);

	// describe the current image
	char buffer[sizeof(BITMAPINFO)];
	BITMAPINFO* binfo = (BITMAPINFO*)buffer;
	binfo->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

	// create the bitmap
	BITMAPINFOHEADER* bih = &binfo->bmiHeader;
	bih->biBitCount = 3 * 8; // 3 channels
	bih->biWidth  = pApp->GetFrameBufferWidth();
	bih->biHeight = pApp->GetFrameBufferHeight();
	bih->biPlanes = 1;
	bih->biCompression = BI_RGB;
	bih->biSizeImage = 0; // may be 0 for BI_RGB bitmaps

	m_bitmap = CreateDIBSection(hDC, binfo, DIB_RGB_COLORS, 0, 0, 0);

	binfo->bmiHeader.biBitCount = 0;
	GetDIBits(memDC, m_bitmap, 0, 0, 0, binfo, DIB_RGB_COLORS);
	binfo->bmiHeader.biBitCount = 24;
	binfo->bmiHeader.biHeight = -abs(binfo->bmiHeader.biHeight); // negative height = top-down image
	SetDIBits(memDC, m_bitmap, 0, pApp->GetFrameBufferHeight(), pApp->GetFrameBuffer(), binfo, DIB_RGB_COLORS); // 5th argument is the framebuffer

	SetStretchBltMode(hDC, COLORONCOLOR);
	RECT client;
	GetClientRect(hwnd, &client);
	// ... (the actual blit to hDC follows; elided in the post)
}

DrawFrameBuffer (OpenGL) functions:

void COpenGLRenderer::setFrameBuffer(const char* buffer)
{
	m_pFrameBuffer = (char*)buffer;

	glGenTextures(1, &m_pTextureID);
	glBindTexture(GL_TEXTURE_2D, m_pTextureID);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_nWindowWidth, m_nWindowHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);
	//gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, m_nWindowWidth, m_nWindowHeight, GL_RGB, GL_UNSIGNED_BYTE, buffer);
}

void COpenGLRenderer::drawFrameBuffer(const char* buffer)
{
	glBindTexture(GL_TEXTURE_2D, m_pTextureID);
	glTranslatef(0.0f, 0.0f, -10.0f); // z translation is the last value
	glBegin(GL_QUADS);
		glTexCoord2f(0, 0); glVertex2f(0, 0);
		glTexCoord2f(1, 0); glVertex2f(m_nWindowWidth, 0);
		glTexCoord2f(1, 1); glVertex2f(m_nWindowWidth, m_nWindowHeight);
		glTexCoord2f(0, 1); glVertex2f(0, m_nWindowHeight);
	glEnd();
	glBindTexture(GL_TEXTURE_2D, 0);

	glRasterPos2i(0, 0);

	if (buffer != NULL)
		glDrawPixels(m_nWindowWidth, m_nWindowHeight, GL_RGB, GL_BYTE, buffer);
}

I have included the glDrawPixels technique that did not work (the mipmap variant is commented out). Can anyone tell me what might be going on? I am basically trying to render image data stored in a char* buffer using different methods. One method works, so I know for sure that the buffer does not contain invalid data.

Line segments having a common endpoint

05 January 2013 - 06:44 PM

Hey guys,


I have a question regarding two line segments whose origins and lengths are given as (P0, L0) and (P1, L1) respectively. I need to determine when they can end at the same point. The line segments can lie anywhere in 3D space.


One approach I could think of: call the common endpoint T and the origins A and B. Then A, B and T must form a triangle with |AT| = L0 and |BT| = L1. But since the orientations of the segments are not known, there are many possibilities. Say we choose a particular orientation for segment AT, e.g. (i, j, k) in the first octant. From T we can then move anywhere in space, but only by a distance L1, to find B.


This is where I'm not sure how to move forward.

Setting up Nvidia PhysX SDK 3.2

02 October 2012 - 12:49 AM

Hi people,

I get a runtime DEBUG assertion error when trying to set up a basic application using the latest NVIDIA PhysX 3.2 SDK. I was using the documentation and the tutorial at this blog as my only guides: http://mmmovania.blo...th-physx-3.html (he does mention a small change required for the 3.2 SDK).

This is where I'm getting the assertion:
gPhysxFoundation = PxCreateFoundation(PX_PHYSICS_VERSION, gDefaultAllocatorCallback, gDefaultErrorCallback);

The declarations are:
PxFoundation* gPhysxFoundation = NULL;
static PxDefaultErrorCallback gDefaultErrorCallback;
static PxDefaultAllocator gDefaultAllocatorCallback;

And the assertion is: Expression: (reinterpret_cast<size_t>(ptr) & 15) == 0, in the file \include\extensions\pxdefaultallocator.h.
According to the documentation, PhysX implements a default allocator, and on Windows it calls _aligned_malloc(size, 16), which is what seems to be happening here. Yet the returned pointer is not 16-byte aligned.

Does anyone know how to solve this issue? I'm only starting to learn PhysX and cannot move forward without solving it.

Basic Ray tracing

20 April 2012 - 07:05 PM

Hey people, I am trying to implement a basic raytracer application. I'm not using any graphics library, just the fundamental concepts.
I'm having a problem displaying my image. It's a simple Utah teapot.

This is how I am implementing my ray tracing. I'm not considering shadows or secondary rays yet, until I get the teapot rendered.

1. Construct the ray through the center of each pixel. Given: camera position, lookat, up vector and camera FOV, all in world space.
- I use the ray equation r = origin + t*direction, where origin is my camera position.
- I find the three camera axes u, v, w: w = lookat - camPosition; u = crossProduct(w, camera->up); v = crossProduct(u, w); then normalize u, v and w.
- I choose the distance to the image plane as d = -5, and the image plane extents as left = -4, right = 4, top = 3, bottom = -3.
- From these I calculate the pixel positions: x = left + (right - left) * (i + 0.5) / screenX and y = bottom + (top - bottom) * (j + 0.5) / screenY, where (i, j) indexes the pixel.
- The ray direction is then pixelScreenPosition - origin = x*u + y*v - d*w, which I normalize.
This completes this function.

2. Before performing the intersection, I do the transformations. I use triangle primitives; all the vertices and vertex normals are in model space, so I convert them to world space.
3. Then I intersect the ray with the triangles and return the closest point.
4. Then I perform the shading calculations.

I am not sure about the ray construction step. I can include the code; I hope the theory is right.

Rendering more than one moving object in a maze

20 April 2012 - 01:59 PM

Hey people,

I have a maze and some agents moving through it using A* search. I'm using GLUT to do the rendering. After completing its path, an agent returns to its original position and the cycle continues. The problem is in the render loop:
if the number of agents is 2, at first the 2nd agent gets rendered, then it vanishes so that only one agent is shown. That agent completes the path, and when it restarts from its original position, agent 2 shows up and then both of them move. Even when I increase the number of agents beyond 2, initially all of them show up, but then no more than 2 agents are shown at a time. Here's my code:

This is the main render loop:

static void render(void)
{
	// gluLookAt(-20.0f, 5.0f, 0.0f, -20.0 + cos(270.0), 5.0f, 0.0 + sin(270.0f), 0.0, 1.0, 0.0);
	// glTranslatef(-27, -10, -85); // setting the appropriate camera look-at (do it in a separate function)

	// ... (scene drawing elided)

	glutSwapBuffers(); // swaps the front and back buffers
}

The render scene function:

void renderScene(float x, float y, float z)
{
	glRotatef(-90.0, 1, 0, 0);
	// ... (elided)
}

My update function, driven by glutTimerFunc:

void updateScreen(int value)
{
	// ... (elided)
}

And this is the move agent function:

void moveAgent()
{
	for (int i = 0; i < numRows; i++)
	{
		// reset the current position of the agent and then move it
		int a = pAgents[i].Position.x;
		int b = pAgents[i].Position.y;

		if (pAgents[i].positionIndex >= pAgents[i].PathInMaze.size())
			pAgents[i].positionIndex = 0;

		pAgents[i].Position.x = pAgents[i].PathInMaze.at(pAgents[i].positionIndex).getCellX();
		pAgents[i].Position.y = pAgents[i].PathInMaze.at(pAgents[i].positionIndex).getCellY();

		a = pAgents[i].Position.x;
		b = pAgents[i].Position.y;
	}
}

Here is the renderAgent function:

void renderAgent()
{
	for (int i = 0; i < numRows; i++)
	{
		float x1, z1;

		x1 = -3.75f + ((float)pAgents[i].Position.x * 2.50f);
		z1 = -9.0f + 1.25f + 2.50f + ((float)pAgents[i].Position.y * 2.50f);

		glRotatef(-90.0, 1, 0, 0);
		// glScalef(0.5, 0.5, 0.5);

		glBegin(GL_QUADS);
		// Top
		glVertex3f((x1 + 0.25f), 0, (z1 - 1.25f));
		glVertex3f((x1 - 0.25f), 0, (z1 - 1.25f));
		glVertex3f((x1 - 0.25f), 0, (z1 + 1.25f));
		glVertex3f((x1 + 0.25f), 0, (z1 + 1.25f));
		// Bottom
		glVertex3f((x1 + 0.25), -0, (z1 + 1.25)); // Top Right Of The Quad (Bottom)
		glVertex3f((x1 - 0.25), -0, (z1 + 1.25)); // Top Left Of The Quad (Bottom)
		glVertex3f((x1 - 0.25), -0, (z1 - 1.25)); // Bottom Left Of The Quad (Bottom)
		glVertex3f((x1 + 0.25), -0, (z1 - 1.25)); // Bottom Right Of The Quad (Bottom)
		// Front
		glVertex3f((x1 + 0.25), 0, (z1 + 1.25)); // Top Right Of The Quad (Front)
		glVertex3f((x1 - 0.25), 0, (z1 + 1.25)); // Top Left Of The Quad (Front)
		glVertex3f((x1 - 0.25), -0, (z1 + 1.25)); // Bottom Left Of The Quad (Front)
		glVertex3f((x1 + 0.25), -0, (z1 + 1.25)); // Bottom Right Of The Quad (Front)
		// Back
		glVertex3f((x1 - 0.25), 0, (z1 - 1.25)); // Top Right Of The Quad (Back)
		glVertex3f((x1 + 0.25), 0, (z1 - 1.25)); // Top Left Of The Quad (Back)
		glVertex3f((x1 + 0.25), -0, (z1 - 1.25)); // Bottom Left Of The Quad (Back)
		glVertex3f((x1 - 0.25), -0, (z1 - 1.25)); // Bottom Right Of The Quad (Back)
		// Left of cube
		glVertex3f((x1 - 0.25), 0, (z1 + 1.25)); // Top Right Of The Quad (Left)
		glVertex3f((x1 - 0.25), 0, (z1 - 1.25)); // Top Left Of The Quad (Left)
		glVertex3f((x1 - 0.25), -0, (z1 - 1.25)); // Bottom Left Of The Quad (Left)
		glVertex3f((x1 - 0.25), -0, (z1 + 1.25)); // Bottom Right Of The Quad (Left)
		// Right of cube
		glVertex3f((x1 + 0.25), 0, (z1 - 1.25)); // Top Right Of The Quad (Right)
		glVertex3f((x1 + 0.25), 0, (z1 + 1.25)); // Top Left Of The Quad (Right)
		glVertex3f((x1 + 0.25), -0, (z1 + 1.25)); // Bottom Left Of The Quad (Right)
		glVertex3f((x1 + 0.25), -0, (z1 - 1.25)); // Bottom Right Of The Quad (Right)
		glEnd();
	}
}


So I'm not sure why only 2 of them get rendered at a time. If anyone can point out what is going wrong, I would appreciate it.