
OpenGL glOrtho 2d help


Hello there, I am trying to develop a webcam application and would really like it to be 2D rather than 3D, with the webcam feed always filling the screen. I have spent hours trying to get glOrtho to work for me, but my webcam feed is always drawn offscreen or not at all. I have spent hours messing around with disabling lighting/blending etc. and experimenting with glLoadIdentity calls, but for the life of me I cannot figure out what's wrong.

The key thing is that I have a define ' #define USE_ORTHO ' which is used to switch between 2D and 3D, as I don't feel like losing everything in this process and might want to go back to 3D at some point.

I have set up a 'SetupOrtho' function, which is maybe the cause:

void SetupOrtho(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.f, 2.f, 0.f, 2.f, 0.f, 100.f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

and my render function is here:
void Render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glLoadIdentity(); //reset matrix

    glColor3f(0,1,0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);

#ifdef USE_ORTHO
    glBegin(GL_QUADS);
    glTexCoord2s(1,1); glVertex3f(0.1, 0.1, 0.9);
    glTexCoord2s(1,0); glVertex3f(0.1, 0.9, 0.9);
    glTexCoord2s(0,0); glVertex3f(0.9, 0.9, 0.9);
    glTexCoord2s(0,1); glVertex3f(0.9, 0.1, 0.9);
    glEnd();
#else
    glBegin(GL_QUADS);
    glTexCoord2s(1,1); glVertex3f(-8, -6, -10);
    glTexCoord2s(1,0); glVertex3f(-8, 6, -10);
    glTexCoord2s(0,0); glVertex3f(8, 6, -10);
    glTexCoord2s(0,1); glVertex3f(8, -6, -10);
    glEnd();
#endif

    //glDeleteTextures(GL_TEXTURE_2D, &cameraImageTextureID);
    //glDisable(GL_TEXTURE_2D);

    glFlush();
    SwapBuffers(g_HDC); //bring the back buffer to the foreground
}

but maybe the problem is buried in the rest of the code, with the lighting etc.

I have uploaded the entire project here: http://www.putfile2.com/f/1049/ntkdtp

for those of you who probably find it easier to work in MSVC rather than digging through this text. Hopefully you won't have issues, as everything including the libs is there. Done in MSVC++ 2008 Express, 32-bit, Windows 7. People have had problems in the past figuring out how to download, hence I have got my bro to put in a MASSIVE green download button. That should help.


Here is the entire code dump. Sorry it's kind of long, as everything is in one file, DOH! I was hoping it wouldn't come to this and feel like a smeghead asking for help.

#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>
#include <iostream>
#include <sstream>
#include <vector>

#include <windows.h>
#include <mmsystem.h>

#include "videoInput.h"
#include <gl/gl.h>
#include <gl/glu.h>

#include <tchar.h>


#define USE_ORTHO


//////Defines
//#define BITMAP_ID 0x4D42


//Global variables

HDC g_HDC; // global device context
bool fullScreen = false; // true = fullscreen;false = windowed
bool keyPressed[256]; // holds true for keys that are pressed
bool leftMouseButton = false; // is the left mouse button pressed
bool rigthMouseButton = false; // is the right mouse button pressed.

//unsigned char * pixel_buffer_1;
//unsigned char * pixel_buffer_2;


#pragma region GLOBAL VARIABLES

int size = 921600;
int device1 = 0;

GLuint cameraImageTextureID;
int frameWidth = 640;
int frameHeight = 480;

// Create a videoInput object
videoInput VI;

unsigned char * frame = new unsigned char[size];




//-----var about control fps
static float lasttime = 0.0f; // store the last time
static float lasttime2 = 0.0f;
static float currenttime; // store the current time

#ifndef USE_ORTHO
//light variables structure and names copied (values slightly edited) from 'OpenGL Game Programming' by Kevin Hawkins and Dave Astle, 2001
float ambientLight[] = { 1.0f, 1.0f, 1.0f, 1.0f }; // ambient light
float diffuseLight[] = { 1.0f, 1.0f, 1.0f, 1.0f}; // diffuse light
float lightPosition[] = { 0.0f, 0.0f, 1000.0f, 1.0f}; // the light position

float matAmbient[] = {0.0f, 0.0f, 0.0f, 1.0f};
float matDiff[] = {1.0f, 1.0f, 1.0f, 1.0f};
//end of light variables and copied structure material
#endif

#pragma endregion

void SetupOrtho(int w, int h)
{
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.f, 2.f, 0.f, 2.f, 0.f, 100.f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}

void Render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glLoadIdentity();//reset matrix

glColor3f(0,1,0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);

#ifdef USE_ORTHO
glBegin(GL_QUADS);
glTexCoord2s(1,1); glVertex3f(0.1, 0.1, 0.9);
glTexCoord2s(1,0); glVertex3f(0.1, 0.9, 0.9);
glTexCoord2s(0,0); glVertex3f(0.9, 0.9, 0.9);
glTexCoord2s(0,1); glVertex3f(0.9, 0.1, 0.9);
glEnd();
#else
glBegin(GL_QUADS);
glTexCoord2s(1,1); glVertex3f(-8, -6, -10);
glTexCoord2s(1,0); glVertex3f(-8, 6, -10);
glTexCoord2s(0,0); glVertex3f(8, 6, -10);
glTexCoord2s(0,1); glVertex3f(8, -6, -10);
glEnd();
#endif

//glDeleteTextures(GL_TEXTURE_2D, &cameraImageTextureID);
//glDisable(GL_TEXTURE_2D);

glFlush();
SwapBuffers(g_HDC); //bring the back buffer to the foreground
}


void initGL()
{
glClearColor(0.3f,0.3f,1.0f,1.0f); //clear to blue
glFrontFace(GL_CW); //set it so that polygons are drawn clockwise (does have an effect for some reason).
glEnable(GL_TEXTURE_2D);


#ifndef USE_ORTHO
glEnable(GL_LIGHTING); //enable lighting
glEnable(GL_DEPTH_TEST); //hidden surface removal


glShadeModel(GL_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_CULL_FACE); //do not draw inside of polygons for performance
glMaterialfv(GL_FRONT, GL_AMBIENT, matAmbient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, matDiff);

//LIGHTING
glLightfv(GL_LIGHT0, GL_AMBIENT, ambientLight); //set up the ambient element
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight); //set up the diffuse element
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition); //place the light
glEnable(GL_LIGHT0);

glEnable(GL_COLOR_MATERIAL); //-----------------PUT THIS BACK MAYBE
glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
//END OF LIGHTING

glMatrixMode(GL_PROJECTION);
gluPerspective(60,1.0f,0.1f,1000.0f); //set the perspective for 60 degree FOV witha a 1:1 aspect ratio and a close clipping pane of 0.1 and far of 1000 units.
glMatrixMode(GL_MODELVIEW);

//gluLookAt(0, 100, 100, 0,0,0,0,0,1); //important that glulookat is in the modelview matrix
#else
glDisable(GL_CULL_FACE);
glDisable(GL_LIGHTING); //disable lighting
glDisable(GL_DEPTH_TEST); //disable hidden surface removal
//glDisable(GL_BLEND);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
SetupOrtho(frameWidth, frameHeight);
#endif


}

void Initialize()
{
initGL();

// Get first camera frame here


//Prints out a list of available devices and returns num of devices found
int numDevices = VI.listDevices();


// If you want to capture at a different frame rate (default is 30) specify it here, you are not guaranteed to get this fps though.
VI.setIdealFramerate(0, 60);

// Setup the first device - there are a number of options:
// this could be any deviceID that shows up in listDevices
// VI.setupDevice(device1); // Setup the first device with the default settings
// VI.setupDevice(device1, VI_COMPOSITE); // or setup device with specific connection type
VI.setupDevice(device1, frameWidth, frameHeight); // or setup device with specified video size
// VI.setupDevice(device1, 320, 240, VI_COMPOSITE); // or setup device with video size and connection type
// VI.setFormat(device1, VI_NTSC_M); // if your card doesn't remember what format it should be
// call this with the appropriate format listed above
// NOTE: must be called after setupDevice!

// Optionally setup a second (or third, fourth ...) device - same options as above
// VI.setupDevice(device2);

// As the requested width and height cannot always be accommodated
// Make sure to check the size once the device is setup

frameWidth = VI.getWidth(device1);
frameHeight = VI.getHeight(device1);
size = VI.getSize(device1);


// pixel_buffer_1 = unsigned char[size];
// pixel_buffer_2 = unsigned char[size];

// To get the data from the device first check if the data is new
if (VI.isFrameNew(device1))
{
// VI.getPixels(device1, pixel_buffer_1, false, false); //fills pixels as a BGR (for openCV) unsigned char array - no flipping
// VI.getPixels(device1, pixel_buffer_2, true, true); //fills pixels as a RGB (for openGL) unsigned char array - flipping!
frame = VI.getPixels(device1, true, true); //fills pixels as an RGB (for openGL) unsigned char array - flipping!
}

// Same applies to device2 etc

// To get a settings dialog for the device
VI.showSettingsWindow(device1);

glGenTextures(1,&cameraImageTextureID);
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);


glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frameWidth,frameHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, frame);
// gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, frameWidth, frameHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, frame);


// Shut down devices properly
// VI.stopDevice(device1);
// VI.stopDevice(device2);
}

void Reshape(int w, int h)
{
double ww = w;
double hh = h;

#ifndef USE_ORTHO
glViewport(0, 0, w, h); /* Establish viewing area to cover entire window. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity (); //replace the current matrix with the Identity Matrix
gluPerspective(60, ww/hh,0.1f,1000.0f);
glMatrixMode(GL_MODELVIEW);
#else
SetupOrtho(w, h);
#endif


}

void GameLoop()
{
// Find out how much time has passed
// static double time = timeGetTime();
// double time2 = timeGetTime();
// double timeDiff = (time2 - time) / 1000;
// time = time2;

// Sort out user input
if(keyPressed['W']){}
if(keyPressed['S']){}
if(keyPressed['A']){}
if(keyPressed['D']){}
if(keyPressed['B']){}
if(keyPressed['R']){}


// Capture frame from camera here

if (VI.isFrameNew(device1))
{
// VI.getPixels(device1, pixel_buffer_1, false, false); // fills pixels as a BGR (for openCV) unsigned char array - no flipping
// VI.getPixels(device1, pixel_buffer_2, true, true); // fills pixels as a RGB (for openGL) unsigned char array - flipping!
VI.getPixels(device1, frame, false, true);
}

// End of capture frame from camera


/*

// Clear the tracker buffer
for(unsigned int y=0; y<frameHeight; y++)
{
for(unsigned int x=0; x<frameWidth; x++)
{
trackBuffer[x][y] = 0;
}
}


// Process the image
for(unsigned int i=0; i<frameHeight; i++)
{
for(unsigned int j=0; j<frameWidth; j++)
{


}
}

// Process the tracker buffer
for(unsigned int y=0; y<frameHeight; y++)
{
for(unsigned int x=0; x<frameWidth; x++)
{
trackBuffer[x][y] = 0;
}
}

// Render the tracker buffer
for(unsigned int y=0; y<frameHeight; y++)
{
for(unsigned int x=0; x<frameWidth; x++)
{
if(trackBuffer[x][y] == 1)
{

}
}
}

*/

// End of processing the image


// Turn image into texture
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);

glGenTextures(1,&cameraImageTextureID);
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, frameWidth, frameHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, frame);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth,frameHeight,GL_BGR_EXT,GL_UNSIGNED_BYTE,frame);

Render();
}


void SetupPixelFormat(HDC hDC)
{
int nPixelFormat; //our pixel format index

static PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR), 1, PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, PFD_TYPE_RGBA, 32, 0,0,0,0,0,0,0,0,0,0,0,0,0,16,0,0,PFD_MAIN_PLANE, 0,0,0,0};
nPixelFormat = ChoosePixelFormat(hDC, &pfd); //chose matching pixel format
SetPixelFormat(hDC, nPixelFormat, &pfd); //set the pixel format to DC
}


LRESULT CALLBACK WndProc(HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
static HGLRC hRC; // rendering context
static HDC hDC; // device context
int width, height; // window width and height
int oldMouseX, oldMouseY; // old mouse coordinates

double zDeltaDifference = 0;
static short zDelta;


switch(message)
{
case WM_CREATE:
hDC = GetDC(hwnd); // get current windows device context
g_HDC = hDC;
SetupPixelFormat(hDC); // call our pixel format setup function

//create rendering context and make it current
hRC = wglCreateContext(hDC);
wglMakeCurrent(hDC, hRC);

return 0;
break;

case WM_CLOSE: // window is closing
// Deselect rendering context and delete it
wglMakeCurrent(hDC, NULL);
wglDeleteContext(hRC);

// Send WM_QUIT to message queue
PostQuitMessage(0);

return 0;
break;

case WM_SIZE:
height = HIWORD(lParam); //retrieve width and height
width = LOWORD(lParam);

if (height==0)
{
height=1;
}

// Reset the viewport to new dimensions
glViewport(0,0, width, height);

#ifndef USE_ORTHO
// Set projection matrix current matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();

// Calculate aspect ratio of window
gluPerspective(60.0f, (GLfloat)width/(GLfloat)height, 1.0f, 5000.0f);

glMatrixMode(GL_MODELVIEW); //set the modelview matrix
glLoadIdentity(); //reset the modelview matrix
#else
SetupOrtho(width, height);
#endif
return 0;
break;

case WM_KEYDOWN: //is a key pressed?
keyPressed[wParam] = true;
// camera.MoveZ(1);
return 0;
break;

case WM_KEYUP:
keyPressed[wParam] = false;
return 0;
break;

case WM_LBUTTONDOWN:
leftMouseButton = true;
return 0;
break;

case WM_RBUTTONDOWN:
rigthMouseButton = true;
return 0;
break;

case WM_LBUTTONUP:
leftMouseButton = false;
return 0;
break;

case WM_RBUTTONUP:
rigthMouseButton = false;
return 0;
break;

case WM_MOUSEMOVE:

// Save old mouse coordinates
// oldMouseX = mouseX;
// oldMouseY = mouseY;

// Get mouse coordinates from Windows
// mouseX = LOWORD(lParam);
// mouseY = HIWORD(lParam);



// These lines limit the camera's range
// if (mouseY < 200)
{
// mouseY = 200;
}
// if (mouseY > 450)
{
// mouseY = 450;
}
if (leftMouseButton)
{
// camera.zoom += (mouseY - oldMouseY) * 0.50f;
// camera.pitch += (mouseY - oldMouseY) * 0.1f;
// camera.angle += (oldMouseX - mouseX) * 0.30f;
}

/*
if (mouseX - oldMouseX != 0)
{
mouseX - oldMouseX > 0 ? angle += 3.0f : angle -= 3.0f ;
}
*/

return 0;
break;

case WM_MOUSEWHEEL:
// zDelta = GET_WHEEL_DELTA_WPARAM(wParam);
// camera.zoom = camera.zoom - (camera.zoom/8 * zDelta/120); //camera movement is proportional, so if your further away zooming is faster.
return 0;
break;

default:
break;
}

return (DefWindowProc(hwnd, message, wParam, lParam));
}


int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd)
{
WNDCLASSEX windowClass; // windows class
HWND hwnd; // window handle
MSG msg; // message
bool done; // flag saying when our app is complete
DWORD dwExStyle; // window extended style
DWORD dwStyle; // window style
RECT windowRect;

// Screen/display attributes
int width = 800;
int height = 600;
int bits = 32;

windowRect.left =(long)0; // set left value to 0
windowRect.right =(long)width; // set right value to requested width
windowRect.top =(long)0; // set the top value to 0
windowRect.bottom =(long)height; // set bottom value to requested height

// Fill out the windows class structure
windowClass.cbSize = sizeof(WNDCLASSEX);
windowClass.style = CS_HREDRAW | CS_VREDRAW;
windowClass.lpfnWndProc = WndProc;
windowClass.cbClsExtra = 0;
windowClass.cbWndExtra = 0;
windowClass.hInstance = hInstance;
windowClass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
windowClass.hCursor = LoadCursor(NULL, IDC_ARROW);
windowClass.hbrBackground = NULL;
windowClass.lpszMenuName = NULL;
windowClass.lpszClassName = L"MyClass";
windowClass.hIconSm = LoadIcon(NULL, IDI_WINLOGO);

// Register the windows class
if (!RegisterClassEx(&windowClass))
return 0;

if (fullScreen) //full screen
{
DEVMODE dmScreenSettings; //device mode
memset(&dmScreenSettings, 0, sizeof(dmScreenSettings));
dmScreenSettings.dmSize = sizeof(dmScreenSettings);
dmScreenSettings.dmPelsWidth = width; //screen width
dmScreenSettings.dmPelsHeight = height; //screen height
dmScreenSettings.dmBitsPerPel = bits;
dmScreenSettings.dmFields=DM_BITSPERPEL | DM_PELSWIDTH | DM_PELSHEIGHT;

if(ChangeDisplaySettings(&dmScreenSettings, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL)
{
// Setting display mode failed, switch to windowed
MessageBox(NULL, L"Display mode failed", NULL, MB_OK);
fullScreen=FALSE;
}
}

if (fullScreen) //are we still in full screen mode
{
dwExStyle=WS_EX_APPWINDOW; //window extended style
dwStyle=WS_POPUP; //window style
ShowCursor(FALSE); //hide mouse pointer
}
else
{
dwExStyle=WS_EX_APPWINDOW | WS_EX_WINDOWEDGE; //window exteded style
dwStyle=WS_OVERLAPPEDWINDOW; //window style
}

AdjustWindowRectEx(&windowRect, dwStyle, FALSE, dwExStyle);

// Class registered so now create our window
hwnd = CreateWindowEx(NULL, L"MyClass", L"Stuart Page's Physics Game", dwStyle | WS_CLIPCHILDREN | WS_CLIPSIBLINGS, 0,0, windowRect.right - windowRect.left, windowRect.bottom - windowRect.top, NULL, NULL, hInstance, NULL);

// Check if window creation failed (hwnd would equal NULL)
if(!hwnd)
return 0;

ShowWindow(hwnd, SW_SHOW); //display the window
UpdateWindow(hwnd); //update the window

done = false; //initialize the loop condition variable

// Main message loop
Initialize();

while (!done)
{
if(PeekMessage(&msg, hwnd, NULL, NULL, PM_REMOVE)!=0) //there is a new message in the queue
{
if(msg.message == WM_QUIT) //did we receive a WM_QUIT message?
{
done = true;
}
else
{
//GAME LOOP HERE
GameLoop();
TranslateMessage(&msg);
DispatchMessage(&msg);
}

}
else //there are no new messages, so don't handle any
{
GameLoop();
}
}

if (fullScreen)
{
ChangeDisplaySettings(NULL, 0); //if so switch back to the desktop
ShowCursor(TRUE);
}
return msg.wParam;

}


glOrtho(0.f, 2.f, 0.f, 2.f, 0.f, 100.f);

glTexCoord2s(1,1); glVertex3f(0.1, 0.1, 0.9);
glTexCoord2s(1,0); glVertex3f(0.1, 0.9, 0.9);
glTexCoord2s(0,0); glVertex3f(0.9, 0.9, 0.9);
glTexCoord2s(0,1); glVertex3f(0.9, 0.1, 0.9);

Can you try changing these z coordinates to -0.9 instead of +0.9? When calling glOrtho, the final two values are the zNear and zFar distances. However, the default camera in OpenGL looks down the negative z axis, so with this glOrtho call you'll be viewing coordinates in the box from (0,0,0) to (2,2,-100). That puts your quad right behind the camera.

Alternatively, you could leave the z coordinate at +0.9, and change the zFar plane to -100, which should have the same effect.

It's amazing how a fresh/different pair of eyes can spot the problem immediately. Thanks! That worked.
Stu

By the way, can someone look at this bit of code in there:


// Turn image into texture
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);

glGenTextures(1,&cameraImageTextureID);
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, frameWidth, frameHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, frame);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth,frameHeight,GL_BGR_EXT,GL_UNSIGNED_BYTE,frame);

It looks like I'm creating a texture from the webcam input every frame.
Is it possible, and faster, to just update the texture from the last frame rather than go through the whole process again?

Would this be better/the same, or could it be improved upon?:

static bool generatedTexture = false;

if (!generatedTexture)
{
// Turn image into texture
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);

glGenTextures(1,&cameraImageTextureID);
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, frameWidth, frameHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, frame);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth,frameHeight,GL_BGR_EXT,GL_UNSIGNED_BYTE,frame);
generatedTexture = true;
}
else
{
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, frameWidth, frameHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, frame);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth,frameHeight,GL_BGR_EXT,GL_UNSIGNED_BYTE,frame);
}

Stu

Yeah, I would say your second set of code is much preferable, though it has a few issues. I would clean it up like so:



//during constructor:
cameraImageTextureID = 0;
//

//during render loop:
//if (!generatedTexture) //don't need another variable, we can use the ID to tell if it's initialized or not
if(!cameraImageTextureID){

//this is bad, and probably generates an error. You can't bind the ID before the texture ID has been generated
//glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);

glGenTextures(1,&cameraImageTextureID);
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

//You need to decide if you want to use mipmaps or not. You're generating mipmaps
//every time you load textures, but you're disabling using them here. If you want
//mipmaps, enable them in the minification filter. If you don't, then don't waste
//processing time by generating them each frame.

//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

//when you first create the texture, just call glTexImage2D with '0' for the data;
//this initializes the texture with the parameters you set but does not fill it with data.
//then call glTexSubImage2D when you want to overwrite the pixels each frame
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frameWidth, frameHeight, 0, GL_BGR_EXT, GL_UNSIGNED_BYTE, 0);

//gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, frameWidth, frameHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, frame);
//glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth,frameHeight,GL_BGR_EXT,GL_UNSIGNED_BYTE,frame);
//generatedTexture = true; //dont need this
} else {
glBindTexture(GL_TEXTURE_2D, cameraImageTextureID);
}


//If you want mipmaps, you should call it AFTER you load the texture data, not before
//gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, frameWidth, frameHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, frame);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth,frameHeight,GL_BGR_EXT,GL_UNSIGNED_BYTE,frame);
//gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, frameWidth, frameHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, frame);
//I think this is preferable to the glu routine as it utilizes
//the GPU to generate mipmaps, so it should be faster. You can
//probably use either one.
glGenerateMipmap(GL_TEXTURE_2D);

//good practice to check for errors when generating new code
int err = glGetError();
if(err){
throw (something);
}


The fastest way would be to render to a texture, take a look at framebuffer objects, draw buffers, and texture rectangle extensions, avoid the copy altogether and just render to a texture then display it.


The fastest way would be to render to a texture, take a look at framebuffer objects, draw buffers, and texture rectangle extensions, avoid the copy altogether and just render to a texture then display it.


I think he's drawing webcam data, which is probably presented to him as an array of pixels. If you're actually rendering something then it makes more sense to draw it into a framebuffer, but I think his approach is fine for presenting data from a source outside the program.

Thanks for the help - especially that long post from karwost with the comments explaining why each thing was bad and what should be done. I had always thought you needed to use mipmaps if you were going to apply a texture to something that wasn't the same size as the image you were passing in? TBH I only care about speed and not looks, as I am doing this for object recognition/tracking and not for a Skype-video-call type thing. Once I have messed around with the example stuff you have given me I will get back to you.
Thanks again, you guys are awesome!

Don't want to sound like a little girl but...

OMG, thank you, thank you, thank you. I put in your code and got rid of all the mipmaps. The thing now runs SUPER fast at 960 x 720 resolution. It's running even faster than it was at 640 x 480 before. I tried not using mipmaps before, but I must have done something wrong, as it would not render - it does now. This is so much better than what I thought was possible. Now I can just focus on the tracking part of the project. Thanks!
