d h k

OpenGL Lighting (screens inside)...

Hello everybody. This thread actually started in the Maths forum because I was sure my lighting problem in OpenGL came from a wrong normal calculation, but that turned out not to be the case, so I'm continuing it here.

I load objects as .3ds models into my application. I already checked the winding (CW or CCW) of the triangles, and it is consistent. I calculate the normals from my model data (which is also correct, since I use the same data to actually draw my models) and then I add lighting (ambient, diffuse and specular for now).

My light is initially at position (0, 0, 0). When it is there, everything looks fine: the correct faces are lit or unlit according to their position/orientation. This is how it looks when the light is at the origin and everything is fine (the light sits at the origin of the particle system).

Now when I move my light in any direction, the faces start to flicker and sometimes the wrong faces are lit; pretty much all of the lighting breaks. This is how it looks when the lighting is messed up. And when I move the light even further away from the origin, the whole diffuse color (which is the light component that messes up, by the way) disappears, and everything is lit only by the ambient color. This is how it looks then.

I don't know what could be causing this problem. Here are the relevant code parts. Part of my initialization code...
	// enable texture mapping
	glEnable ( GL_TEXTURE_2D );

	// enable smooth shading
	glShadeModel ( GL_SMOOTH );

	// enable lighting
	glEnable ( GL_LIGHTING );

	// enable normalizing
	glEnable ( GL_NORMALIZE );

	glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
	glEnable(GL_COLOR_MATERIAL);

	// set background color
	glClearColor ( 0.6f, 0.6f, 0.6f, 1.0f );

	// depth buffer setup
	glClearDepth ( 1.0f );

	// initialize a light
	float ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
	float diffuse[] = { 1.0f, 0.0f, 0.0f, 1.0f };
	float specular[] = { 12.0f, 12.0f, 12.0f, 12.0f };
	lgt.init ( ambient, diffuse, specular );
	lgt.toggle ( true );
}


My complete object::draw ( ) function...
void object::draw ( )
// draws the object normally to the screen
{
	// reset the matrix
	glLoadIdentity ( );

	// position the camera
	cam.update ( );

	// update acceleration values
	update_position ( );

	// move and rotate to place the object
	glTranslatef ( position.x, position.y, position.z );
	glRotatef ( rotation.x, 1.0f, 0.0f, 0.0f );
	glRotatef ( rotation.y, 0.0f, 1.0f, 0.0f );
	glRotatef ( rotation.z, 0.0f, 0.0f, 1.0f );

	// set up render modes
	glColorMask ( 1, 1, 1, 1 );
	glColor4f ( 1.0f, 1.0f, 1.0f, 1.0f );
	glDisable ( GL_BLEND );
	glDisable ( GL_CLIP_PLANE0 );
	glEnable ( GL_DEPTH_TEST );
	glDisable ( GL_STENCIL_TEST );

	if ( sphere_mapped == true )
	// if the object is sphere mapped
	{
		// enable sphere mapping
		glEnable ( GL_TEXTURE_GEN_S );
		glEnable ( GL_TEXTURE_GEN_T );
	}
	else
	// if the object is texture mapped
	{
		// disable sphere mapping
		glDisable ( GL_TEXTURE_GEN_S );
		glDisable ( GL_TEXTURE_GEN_T );
	}

	// activate texture
	surface_texture.activate ( );

	// push the matrix
	glPushMatrix ( );

	// scale
	glScalef ( size, size, size );

	for ( int i = 0; i < actor.num_polygons; i++ )
	// loop through each polygon
	{
		float vertex1[3], vertex2[3], vertex3[3];

		// prepare vertex data
		vertex1[0] = actor.vertex[actor.polygon[i].a ].x;
		vertex1[1] = actor.vertex[actor.polygon[i].a ].y;
		vertex1[2] = actor.vertex[actor.polygon[i].a ].z;
		vertex2[0] = actor.vertex[actor.polygon[i].b ].x;
		vertex2[1] = actor.vertex[actor.polygon[i].b ].y;
		vertex2[2] = actor.vertex[actor.polygon[i].b ].z;
		vertex3[0] = actor.vertex[actor.polygon[i].c ].x;
		vertex3[1] = actor.vertex[actor.polygon[i].c ].y;
		vertex3[2] = actor.vertex[actor.polygon[i].c ].z;
		
		// get the face normal
		get_face_normal ( actor.normal, vertex1, vertex2, vertex3 );

		// multiply it by -1
		actor.normal[0] *= -1;
		actor.normal[1] *= -1;
		actor.normal[2] *= -1;

		cgGLSetParameter3f ( vertexPosition, vertex1[0], vertex1[1], vertex1[2] );
		cgGLSetParameter3f ( vertexNormal, actor.normal[0], actor.normal[1], actor.normal[2] );

		// begin drawing polygon
		glBegin ( GL_TRIANGLES );
		
		// activate face normal
		glNormal3f ( actor.normal[0], actor.normal[1], actor.normal[2] );		

		// draw first vertex
		glTexCoord2f ( actor.mapcoord[actor.polygon[i].a].u, actor.mapcoord[actor.polygon[i].a].v );
		glVertex3f ( actor.vertex[actor.polygon[i].a ].x, actor.vertex[actor.polygon[i].a ].y, actor.vertex[actor.polygon[i].a ].z );
		
		// draw second vertex
		glTexCoord2f ( actor.mapcoord[actor.polygon[i].b].u, actor.mapcoord[actor.polygon[i].b].v );
		glVertex3f ( actor.vertex[actor.polygon[i].b].x, actor.vertex[actor.polygon[i].b].y, actor.vertex[actor.polygon[i].b].z );

		// draw third vertex
		glTexCoord2f ( actor.mapcoord[actor.polygon[i].c].u, actor.mapcoord[actor.polygon[i].c].v );
		glVertex3f ( actor.vertex[actor.polygon[i].c].x, actor.vertex[actor.polygon[i].c].y, actor.vertex[actor.polygon[i].c].z );

		// done drawing polygon
		glEnd ( );
    }

	// pop the matrix
	glPopMatrix ( );
		
	// disable sphere mapping
	glDisable ( GL_TEXTURE_GEN_S );
	glDisable ( GL_TEXTURE_GEN_T );
}


My light class definitions ( that I used in the initialization code above )...
void light::init ( float ambient[], float diffuse[], float specular[] )
// initializes a light
{
	glLightfv ( GL_LIGHT1, GL_AMBIENT, ambient );
	glLightfv ( GL_LIGHT1, GL_DIFFUSE, diffuse );
	glLightfv ( GL_LIGHT1, GL_SPECULAR, specular );
	glLightModeli ( GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE );
	glEnable ( GL_LIGHT1 );
}

void light::toggle ( void )
// toggles a light on or off
{
	if ( mode == true )
	// if light is activated
	{
		glDisable ( GL_LIGHT1 );
		mode = false;
	}
	else
	// if light is deactivated
	{
		glEnable ( GL_LIGHT1 );
		mode = true;
	}
}

void light::toggle ( bool new_mode )
// switch a light to a specific state
{
	mode = new_mode;

	if ( mode == true )
	// if light is activated
		glEnable ( GL_LIGHT1 );
	else
	// if light is deactivated
		glDisable ( GL_LIGHT1 );
}


void light::set_position ( float light_position[] )
// updates the position of the light
{
	// reset the matrix
	glLoadIdentity ( );

	// position the camera
	cam.update ( );

	// move the light
	glLightfv ( GL_LIGHT1, GL_POSITION, light_position );
}



My normal calculations ( should be correct though )...
void cross_product ( float *c,float a[3], float b[3] )
// finds the cross product of two vectors
{  
	c[0] = a[1] * b[2] - b[1] * a[2];
	c[1] = a[2] * b[0] - b[2] * a[0];
	c[2] = a[0] * b[1] - b[0] * a[1];
}

void normalize ( float * vect )
// scales a vector to a length of 1
{
	float length;
	int a;

	length = ( float ) sqrt ( pow ( vect[0], 2 ) + pow ( vect[1], 2 ) + pow ( vect[2], 2 ) );

	for ( a = 0; a < 3; ++a )
	// divides vector by its length to normalize
	{
		vect[a] /= length;
	}
}

void get_face_normal ( float *norm, float pointa[3], float pointb[3], float pointc[3] )
// gets the normal of a face
{
	float vect[2][3];
	int a,b;
	float point[3][3];

	for ( a = 0; a < 3; ++a )
	// copies points into point[][]
	{
		point[0][a]=pointa[a];
		point[1][a]=pointb[a]; 
		point[2][a]=pointc[a];
	}

	for ( a = 0; a < 2; ++a )
	// calculates vectors from point[0] to point[2]
	{
		for ( b = 0; b < 3; ++b )
		// and from point[0] to point[1]
		{
			vect[a][b] = point[2-a][b] - point[0][b];      
		}
	}

	// calculates the vector at 90° to the 2 vectors
	cross_product ( norm, vect[0], vect[1] );

	// makes the vector length 1
	normalize ( norm );
}


Can anybody help me? What is wrong? [Edited by - d h k on October 8, 2005 9:01:30 AM]

Sorry to bump this thread, but it was just about to slip off the first page. And considering there were ~60 views and not a single reply, I suppose I didn't express my problem correctly.

Is anything unclear, or do you just not have any idea what could be wrong? I know I posted four sections of code, but they're not that long (maybe 100 lines in total).

Again, I'm sorry if I seem annoying with this problem, but I really need help with it. It's one of the last technical steps for this application.

First off, I suggest you draw your normals to make sure they're correct. They probably are, but it pays to make sure you're debugging something that's actually broken. :) Simply drawing lines for every vertex between vertexPos and vertexPos+normal*scale (just tweak 'scale' until the lines are of sensible length) will let you instantly tell whether they're correct or not.
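If it helps, here is a minimal sketch of that debug pass (immediate-mode GL; vertex1 and actor.normal are the names borrowed from your own draw loop, and the 10.0f scale is just a guess to tweak):

	// debug: visualise the face normal as a line starting at the first vertex
	glDisable ( GL_LIGHTING );      // don't light the debug lines themselves
	glDisable ( GL_TEXTURE_2D );
	glColor3f ( 1.0f, 1.0f, 0.0f ); // bright colour so the lines stand out

	float scale = 10.0f;            // tweak until the lines have a sensible length

	glBegin ( GL_LINES );
	glVertex3f ( vertex1[0], vertex1[1], vertex1[2] );
	glVertex3f ( vertex1[0] + actor.normal[0] * scale,
	             vertex1[1] + actor.normal[1] * scale,
	             vertex1[2] + actor.normal[2] * scale );
	glEnd ( );

	glEnable ( GL_TEXTURE_2D );
	glEnable ( GL_LIGHTING );

Drop that inside your per-polygon loop right after you compute the normal and you can see at a glance which way each face normal points.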

However I think your problem is with your order of operations. For hardware lights you want to do:

- clear screen
- set projection matrix
- set camera position (in modelview matrix)
- enable/set lights
- for each object
-- push modelview matrix
-- position / rotate object via modelview
-- render object
-- pop modelview matrix

The key difference is that you seem to be setting the lights and then setting the camera, which is the wrong way round. When a light position is specified it is transformed by the current modelview matrix into eye space. So do the view and camera setup first, then the lights.
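In code, that order looks roughly like this (a rough sketch with fixed-function GL; setup_camera(), draw_one_object(), light_position and the various values are placeholders, not your actual functions):

void render_frame ( void )
{
	// 1. clear screen
	glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

	// 2. set projection matrix
	glMatrixMode ( GL_PROJECTION );
	glLoadIdentity ( );
	gluPerspective ( 45.0, aspect_ratio, 0.1, 800.0 );

	// 3. set camera position in the modelview matrix
	glMatrixMode ( GL_MODELVIEW );
	glLoadIdentity ( );
	setup_camera ( );               // however you position your camera

	// 4. enable / position lights, *after* the camera is in place
	glEnable ( GL_LIGHTING );
	glEnable ( GL_LIGHT1 );
	glLightfv ( GL_LIGHT1, GL_POSITION, light_position );   // 4 floats, w = 1 for a positional light

	// 5. for each object: push, place, render, pop
	glPushMatrix ( );
	glTranslatef ( object_x, object_y, object_z );
	draw_one_object ( );
	glPopMatrix ( );
}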

Hope that helps.

Thanks for your reply.

First of all, I checked the normals and they look fine.

Then I tried to follow your advice, but it seems I still have problems with it. Since the issue is apparently the order of operations, it's probably in the global draw function that determines that order:

This is the way I used to have this function set up:

void draw_world ( void )
// draws the world
{
	// clear screen buffer, depth buffer and stencil buffer
	glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );

	// reset the matrix
	glLoadIdentity ( );

	float lgt_pos[] = { prt_system.position.x, prt_system.position.y, prt_system.position.z };
	lgt.set_position ( lgt_pos );

	// draw the gui
	draw_gui ( );

	// and draw the map
	map.draw ( );

	// then draw the objects
	draw_objects ( );
}



This is the new function that I now use:

void draw_world ( void )
// draws the world
{
	glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );

	glLoadIdentity ( );

	glMatrixMode ( GL_PROJECTION );

	glPushMatrix ( );

	cam.update ( );

	glPopMatrix ( );

	glMatrixMode ( GL_MODELVIEW );

	glPushMatrix ( );

	float lgt_pos[] = { prt_system.position.x, prt_system.position.y, prt_system.position.z };
	lgt.set_position ( lgt_pos );

	draw_gui ( );
	draw_objects ( );

	glPopMatrix ( );
}



But it still looks exactly the same!

In your new function, you have:
glMatrixMode ( GL_PROJECTION );
glPushMatrix ( );
cam.update ( );
glPopMatrix ( );
Assuming that cam.update sets the camera position, that doesn't have any effect. You don't need any pushing or popping here (although a glLoadIdentity before the camera might be a good idea).

You don't say what your new draw_objects() does. Make sure you're not still doing your object::draw in the same way - you don't want to set the camera at this point, it should already be set!

Your whole matrix setup looks convoluted and error-prone. You really should decide what should happen where and stick with it. It might be an idea to rip out what you've already got and add bits back in slowly (following the order I listed before).

Edit: and you don't appear to be setting up the camera's modelview matrix anywhere (which should happen *before* you set up the lights).

You are right, the whole function is quite messed up. I cleaned it up, but now it doesn't render anything anymore...

This is the new draw_world function (that draws everything):

void draw_world ( void )
// draws the world
{
	// clear screen
	glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );

	// set projection matrix
	glMatrixMode ( GL_PROJECTION );

	// reset
	glLoadIdentity ( );

	// move for camera
	cam.update ( );

	// set modelview matrix
	glMatrixMode ( GL_MODELVIEW );

	// set lights
	float lgt_pos[] = { prt_system.position.x, prt_system.position.y, prt_system.position.z };
	lgt.set_position ( lgt_pos );

	// draw objects
	draw_objects ( );
}




Then this is my draw_objects function (that draws objects):

void draw_objects ( void )
// draws all objects
{
	obj[1].mask ( );

	obj[0].draw_reflected ( );
	obj[2].draw_reflected ( );
	prt_system.draw_reflected ( );

	obj[1].draw_blended ( );

	obj[0].draw ( );
	obj[2].draw ( );
	prt_system.draw ( );
}




And finally this is my object::draw function (that really draws one object):

void object::draw ( )
// draws the object normally to the screen
{
	// set up render modes
	glColorMask ( 1, 1, 1, 1 );
	glColor4f ( 1.0f, 1.0f, 1.0f, 1.0f );
	glDisable ( GL_BLEND );
	glDisable ( GL_CLIP_PLANE0 );
	glEnable ( GL_DEPTH_TEST );
	glDisable ( GL_STENCIL_TEST );

	if ( sphere_mapped == true )
	// if the object is sphere mapped
	{
		// enable sphere mapping
		glEnable ( GL_TEXTURE_GEN_S );
		glEnable ( GL_TEXTURE_GEN_T );
	}
	else
	// if the object is texture mapped
	{
		// disable sphere mapping
		glDisable ( GL_TEXTURE_GEN_S );
		glDisable ( GL_TEXTURE_GEN_T );
	}

	// activate texture
	surface_texture.activate ( );

	// push the matrix
	glPushMatrix ( );

	// move and rotate to place the object
	glTranslatef ( position.x, position.y, position.z );
	glRotatef ( rotation.x, 1.0f, 0.0f, 0.0f );
	glRotatef ( rotation.y, 0.0f, 1.0f, 0.0f );
	glRotatef ( rotation.z, 0.0f, 0.0f, 1.0f );

	// scale
	glScalef ( size, size, size );

	for ( int i = 0; i < actor.num_polygons; i++ )
	// loop through each polygon
	{
		float vertex1[3], vertex2[3], vertex3[3];

		// prepare vertex data
		vertex1[0] = actor.vertex[actor.polygon[i].a ].x;
		vertex1[1] = actor.vertex[actor.polygon[i].a ].y;
		vertex1[2] = actor.vertex[actor.polygon[i].a ].z;
		vertex2[0] = actor.vertex[actor.polygon[i].b ].x;
		vertex2[1] = actor.vertex[actor.polygon[i].b ].y;
		vertex2[2] = actor.vertex[actor.polygon[i].b ].z;
		vertex3[0] = actor.vertex[actor.polygon[i].c ].x;
		vertex3[1] = actor.vertex[actor.polygon[i].c ].y;
		vertex3[2] = actor.vertex[actor.polygon[i].c ].z;

		// get the face normal
		get_face_normal ( actor.normal, vertex1, vertex2, vertex3 );

		// multiply it by -1
		actor.normal[0] *= -1;
		actor.normal[1] *= -1;
		actor.normal[2] *= -1;

		glBegin ( GL_LINES );

		glVertex3f ( vertex1[0], vertex1[1], vertex1[2] );
		glVertex3f ( vertex1[0]+actor.normal[0]*10.0f, vertex1[1]+actor.normal[1]*10.0f, vertex1[2]+actor.normal[2]*10.0f );

		glVertex3f ( vertex2[0], vertex2[1], vertex2[2] );
		glVertex3f ( vertex2[0]+actor.normal[0]*10.0f, vertex2[1]+actor.normal[1]*10.0f, vertex2[2]+actor.normal[2]*10.0f );

		glVertex3f ( vertex3[0], vertex3[1], vertex3[2] );
		glVertex3f ( vertex3[0]+actor.normal[0]*10.0f, vertex3[1]+actor.normal[1]*10.0f, vertex3[2]+actor.normal[2]*10.0f );

		glEnd ( );

		// begin drawing polygon
		glBegin ( GL_TRIANGLES );

		// activate face normal
		glNormal3f ( actor.normal[0], actor.normal[1], actor.normal[2] );

		// draw first vertex
		glTexCoord2f ( actor.mapcoord[actor.polygon[i].a].u, actor.mapcoord[actor.polygon[i].a].v );
		glVertex3f ( actor.vertex[actor.polygon[i].a ].x, actor.vertex[actor.polygon[i].a ].y, actor.vertex[actor.polygon[i].a ].z );

		// draw second vertex
		glTexCoord2f ( actor.mapcoord[actor.polygon[i].b].u, actor.mapcoord[actor.polygon[i].b].v );
		glVertex3f ( actor.vertex[actor.polygon[i].b].x, actor.vertex[actor.polygon[i].b].y, actor.vertex[actor.polygon[i].b].z );

		// draw third vertex
		glTexCoord2f ( actor.mapcoord[actor.polygon[i].c].u, actor.mapcoord[actor.polygon[i].c].v );
		glVertex3f ( actor.vertex[actor.polygon[i].c].x, actor.vertex[actor.polygon[i].c].y, actor.vertex[actor.polygon[i].c].z );

		// done drawing polygon
		glEnd ( );
	}

	// pop the matrix
	glPopMatrix ( );

	// disable sphere mapping
	glDisable ( GL_TEXTURE_GEN_S );
	glDisable ( GL_TEXTURE_GEN_T );
}




The only point on your list I am missing is the "set up the camera's modelview matrix" part, because I am not quite sure what you meant.

My camera::update function does this:

void camera::update ( void )
// updates the camera's position in the world
{
	// move there
	glTranslatef ( position.x, position.y, position.z );
	glRotatef ( rotation.x, 1.0f, 0.0f, 0.0f );
	glRotatef ( rotation.y, 0.0f, 1.0f, 0.0f );
	glRotatef ( rotation.z, 0.0f, 0.0f, 1.0f );
}




Thanks for your help though, I already rated you up. (:

That looks better, but I think your camera setup is wrong. You want your perspective setup on the projection matrix and your actual camera position on the modelview matrix. Basically something like:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective( whatever settings );

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// the stuff you've got in camera.update to set the position

// Rest of your drawing here. Looks ok now though :)

At the moment you're putting your camera orientation/position into the projection matrix, and you're never actually setting up a projection (e.g. with gluPerspective) at all.

Alright, now it renders again. And it's like 12x faster, too. ;) Thanks for that!

But the original problem is still the same: the lighting screws up whenever I move the light.

This is the new and corrected draw_world function:

void draw_world ( void )
// draws the world
{
	// clear screen
	glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );

	// set projection matrix
	glMatrixMode ( GL_PROJECTION );

	// reset
	glLoadIdentity ( );

	// perspective setup
	gluPerspective ( 45.0f, (GLfloat)SCREEN_WIDTH / (GLfloat)SCREEN_HEIGHT, 0.1f, 800.0f );

	// set modelview matrix
	glMatrixMode ( GL_MODELVIEW );

	// reset
	glLoadIdentity ( );

	// move for camera
	cam.update ( );

	// set lights
	float lgt_pos[] = { prt_system.position.x, prt_system.position.y, prt_system.position.z };
	lgt.set_position ( lgt_pos );

	// draw objects
	draw_objects ( );
}


Yes, I'm afraid it is still the same...

Here it is for you to check (it's one line): ;)


void light::set_position ( float light_position[] )
// updates the position of the light
{
	// move the light
	glLightfv ( GL_LIGHT1, GL_POSITION, light_position );
}

I made it myself. It's the first model I ever made, but its mesh is not optimized at all and normally I can't model at all. Look at www.turbosquid.com to find good models.

Does anybody have any suggestions for the problem, though? I really don't know what could be wrong. Once I have moved the light (and the lighting is all messed up), I can suddenly change the way it looks just by moving / rotating the camera. When the light is at the origin and everything looks good, I can't change it that way (which is how it's supposed to be). But I have no idea what could be causing this strange behaviour.

EDIT:

I am sorry for bumping this, but the thread was about to slip off the front page and I really need help here. I have no clue what could still be wrong! Please reply, even if it's only to say that you need more information or that something is unclear. Any help is appreciated big time.

Thanks

1) In your camera setup code, you need to do the rotation first and then the translation (using the negated position), because the view transform is the inverse of the camera's own placement in the world. Like this:


glRotatef (rotation.x, 1.0f, 0.0f, 0.0f );
glRotatef (rotation.y, 0.0f, 1.0f, 0.0f );
glRotatef (rotation.z, 0.0f, 0.0f, 1.0f );
glTranslatef (-position.x,-position.y,-position.z );




2) The light position must be a 4-element array, with the 4th element being 1 for a positional light and 0 for a directional light (you need a positional one):

// set lights
float lgt_pos[] = { prt_system.position.x, prt_system.position.y, prt_system.position.z,1.0 };
lgt.set_position ( lgt_pos );


The way you do it now, you feed a 3-element array where glLightfv expects 4 (x, y, z, w), so in the best case it reads garbage for w and in the worst case it crashes.
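To make the w component concrete, here is a minimal sketch (the light index matches the rest of the thread, the values are just illustrative):

// positional light: w = 1, the xyz part is a point in the scene
float point_light[] = { 10.0f, 5.0f, 0.0f, 1.0f };
glLightfv ( GL_LIGHT1, GL_POSITION, point_light );

// directional light: w = 0, the xyz part is a direction, not a point
float sun_light[] = { 0.0f, -1.0f, 0.0f, 0.0f };
glLightfv ( GL_LIGHT1, GL_POSITION, sun_light );

Either way, the position/direction is transformed by the modelview matrix that is current when glLightfv is called.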

Generally, I suggest you go back and give the Red Book a good read, because you seem to be confused about a lot of the basics.
