OpenGL: How do I load a .3ds file in OpenGL?

This topic is 3632 days old, which is more than the 365-day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Oh, I'm really puzzled. I've found a lot of material on the Internet, but nothing gets loaded from the .3ds file in my program. Here is an example I found; could anyone help me find the errors?

/////////////////////////////////////////////////////////////////////////
When I searched for this problem online a few days ago I found almost no clear, definitive method. I eventually came across one solution on a blog; I haven't verified it myself, but I'm posting it here to share it.

OpenGL -- how to import a model built in 3ds Max

////////// ImportModel.h /////////////////////////////////////////////
#include <math.h>
#include <vector>
#include <windows.h>    // Header File For Windows
#include <stdio.h>      // Header File For Standard Input/Output
#include <gl\gl.h>      // Header File For The OpenGL32 Library
#include <gl\glu.h>     // Header File For The GLu32 Library
#include <gl\glaux.h>   // Header File For The Glaux Library

#define PRIMARY       0x4D4D
#define OBJECTINFO    0x3D3D
#define VERSION       0x0002
#define EDITKEYFRAME  0xB000
#define MATERIAL      0xAFFF
#define OBJECT        0x4000
#define MATNAME       0xA000
#define MATDIFFUSE    0xA020
#define MATMAP        0xA200
#define MATMAPFILE    0xA300
#define OBJ_MESH      0x4100
#define MAX_TEXTURES  100
#define OBJ_VERTICES  0x4110
#define OBJ_FACES     0x4120
#define OBJ_MATERIAL  0x4130
#define OBJ_UV        0x4140
#define MAP_W         32      // size of map along x-axis
#define MAP_SCALE     24.0f   // the scale of the terrain map
#define MAP           (MAP_W*MAP_SCALE/2)
#define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0)
#define RAND_COORD(x) ((float)rand()/RAND_MAX * (x))
#define FRAND (((float)rand()-(float)rand())/RAND_MAX)

using namespace std;

class CVector3 { public: float x, y, z; };
class CVector2 { public: float x, y; };

struct tFace
{
    int vertIndex[3];
    int coordIndex[3];
};

struct tMatInfo
{
    char  strName[255];     // material name
    char  strFile[255];     // texture file name (if any)
    BYTE  color[3];         // diffuse color
    int   texureId;         // texture ID
    float uTile;            // u tiling
    float vTile;            // v tiling
    float uOffset;          // u offset
    float vOffset;          // v offset
};

struct t3DObject
{
    int  numOfVerts;        // number of vertices
    int  numOfFaces;        // number of faces
    int  numTexVertex;      // number of texture coordinates
    int  materialID;        // material ID
    bool bHasTexture;       // does the object have a texture map?
    char strName[255];      // object name
    CVector3 *pVerts;       // vertices
    CVector3 *pNormals;     // vertex normals
    CVector2 *pTexVerts;    // texture coordinates
    tFace    *pFaces;       // faces
};

struct t3DModel                     // model information
{
    int numOfObjects;               // number of objects in the model
    int numOfMaterials;             // number of materials in the model
    vector<tMatInfo>  pMaterials;   // material list
    vector<t3DObject> pObject;      // object list
};

struct tChunk                       // chunk information
{
    unsigned short int ID;          // chunk ID
    unsigned int length;            // chunk length in bytes
    unsigned int bytesRead;         // bytes of chunk data read so far
};

class CLoad3DS // this class holds all the loading code
{
public:
    CLoad3DS();                                            // initialize data members
    virtual ~CLoad3DS();
    void show3ds(int j0, float tx, float ty, float tz, float size); // draw a 3ds model
    void Init(char *filename, int j);
private:
    bool Import3DS(t3DModel *pModel, char *strFileName);   // load a .3ds file into the model structure
    void CreateTexture(UINT textureArray[], LPSTR strFileName, int textureID); // create a texture from a file
    int  GetString(char *);                                // read a string
    void ReadChunk(tChunk *);                              // read the next chunk header
    void ReadNextChunk(t3DModel *pModel, tChunk *);        // read the next chunk
    void ReadNextObjChunk(t3DModel *pModel, t3DObject *pObject, tChunk *); // read the next object chunk
    void ReadNextMatChunk(t3DModel *pModel, tChunk *);     // read the next material chunk
    void ReadColor(tMatInfo *pMaterial, tChunk *pChunk);   // read the object's RGB color
    void ReadVertices(t3DObject *pObject, tChunk *);       // read the object's vertices
    void ReadVertexIndices(t3DObject *pObject, tChunk *);  // read the object's face information
    void ReadUVCoordinates(t3DObject *pObject, tChunk *);  // read the object's texture coordinates
    void ReadObjMat(t3DModel *pModel, t3DObject *pObject, tChunk *pPreChunk); // read the material name assigned to the object
    void ComputeNormals(t3DModel *pModel);                 // compute the vertex normals
    void CleanUp();                                        // close the file and free memory
    FILE *m_FilePointer;                                   // file pointer
    tChunk *m_CurrentChunk;
    tChunk *m_TempChunk;
};

//////////////////////////////////////////////////////////////////////////////////
/////// ImportModel.cpp
#include "StdAfx.h"
//#include "Set3ds.h"
#include "ImportModel.h"
#include <windows.h>    // Header File For Windows
#include <stdio.h>      // Header File For Standard Input/Output
#include <gl\gl.h>      // Header File For The OpenGL32 Library
#include <gl\glu.h>     // Header File For The GLu32 Library
#include <gl\glaux.h>   // Header File For The Glaux Library
#include <math.h>

UINT g_Texture[10][MAX_TEXTURES] = {0};
t3DModel g_3DModel[10];
int  g_ViewMode = GL_TRIANGLES;
bool g_bLighting = true;

// The constructor initializes the tChunk data
CLoad3DS::CLoad3DS()
{
    m_CurrentChunk = new tChunk;  // allocate the current chunk
    m_TempChunk    = new tChunk;  // allocate a temporary chunk
}

CLoad3DS::~CLoad3DS()
{
    CleanUp(); // free memory
    for (int j = 0; j < 10; j++)
        for (int i = 0; i < g_3DModel[j].numOfObjects; i++)
        {
            delete [] g_3DModel[j].pObject[i].pFaces;   // delete all allocated arrays
            delete [] g_3DModel[j].pObject[i].pNormals;
            delete [] g_3DModel[j].pObject[i].pVerts;
            delete [] g_3DModel[j].pObject[i].pTexVerts;
        }
}

////////////////////////////////////////////////////////////////////////
void CLoad3DS::Init(char *filename, int j)
{
    Import3DS(&g_3DModel[j], filename);  // load the .3ds file into the model structure
    for (int i = 0; i < g_3DModel[j].numOfMaterials; i++)
    {
        if (strlen(g_3DModel[j].pMaterials[i].strFile) > 0)  // is there a texture file name?
            CreateTexture(g_Texture[j], g_3DModel[j].pMaterials[i].strFile, i); // load the bitmap by its file name
        g_3DModel[j].pMaterials[i].texureId = i;  // set the material's texture ID
    }
}

// Create a texture from a file
void CLoad3DS::CreateTexture(UINT textureArray[], LPSTR strFileName, int textureID)
{
    AUX_RGBImageRec *pBitmap = NULL;
    if (!strFileName) return;                 // no file name: return immediately
    pBitmap = auxDIBImageLoad(strFileName);   // load the bitmap and keep its data
    if (pBitmap == NULL) exit(0);             // quit if loading the bitmap failed
    // generate the texture
    glGenTextures(1, &textureArray[textureID]);
    // set the pixel alignment
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, textureArray[textureID]);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pBitmap->sizeX, pBitmap->sizeY, GL_RGB,
                      GL_UNSIGNED_BYTE, pBitmap->data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    // mipmapped filters are not valid for GL_TEXTURE_MAG_FILTER; use GL_LINEAR
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    if (pBitmap) // free the bitmap's resources
    {
        if (pBitmap->data) free(pBitmap->data);
        free(pBitmap);
    }
}

void CLoad3DS::show3ds(int j0, float tx, float ty, float tz, float size) // draw a 3ds model
{
    glPushAttrib(GL_CURRENT_BIT); // save the current color attributes
    glPushMatrix();
    glDisable(GL_TEXTURE_2D);
    ::glTranslatef(tx, ty, tz);
    ::glScaled(size, size, size);
    glRotatef(90, 0, 1.0f, 0);
    // walk through all objects in the model
    for (int i = 0; i < g_3DModel[j0].numOfObjects; i++)
    {
        if (g_3DModel[j0].pObject.size() <= 0) break;    // bail out if there are no objects
        t3DObject *pObject = &g_3DModel[j0].pObject[i];  // get the object currently being drawn
        if (pObject->bHasTexture)                        // does this object have a texture map?
        {
            glEnable(GL_TEXTURE_2D);                     // turn on texturing
            glBindTexture(GL_TEXTURE_2D, g_Texture[j0][pObject->materialID]);
        }
        else
            glDisable(GL_TEXTURE_2D);                    // turn off texturing
        // (There was a bug here originally: the model's textures were not bound
        // correctly; g_Texture has to be a two-dimensional array.)
        glColor3ub(255, 255, 255);
        glBegin(g_ViewMode);                             // start drawing in g_ViewMode mode
        for (int j = 0; j < pObject->numOfFaces; j++)    // walk through all faces
        {
            for (int tex = 0; tex < 3; tex++)            // walk through the triangle's vertices
            {
                int index = pObject->pFaces[j].vertIndex[tex]; // index of this vertex of the face
                glNormal3f(pObject->pNormals[index].x, pObject->pNormals[index].y,
                           pObject->pNormals[index].z);  // supply the normal
                if (pObject->bHasTexture)                // if the object is textured
                {
                    if (pObject->pTexVerts)              // make sure there are UV coordinates
                        glTexCoord2f(pObject->pTexVerts[index].x, pObject->pTexVerts[index].y);
                }
                else
                {
                    if (g_3DModel[j0].pMaterials.size() && pObject->materialID >= 0)
                    {
                        BYTE *pColor = g_3DModel[j0].pMaterials[pObject->materialID].color;
                        glColor3ub(pColor[0], pColor[1], pColor[2]);
                    }
                }
                glVertex3f(pObject->pVerts[index].x, pObject->pVerts[index].y, pObject->pVerts[index].z);
            }
        }
        glEnd(); // done drawing
    }
    glEnable(GL_TEXTURE_2D);
    glPopMatrix();
    glPopAttrib(); // restore the previous attributes
}

//////////////////////////////////////////////////////////////////
// Open a .3ds file, read its contents, then free the memory
bool CLoad3DS::Import3DS(t3DModel *pModel, char *strFileName)
{
    char strMessage[255] = {0};
    // open the .3ds file
    m_FilePointer = fopen(strFileName, "rb");
    // make sure we got a valid file pointer
    if (!m_FilePointer)
    {
        sprintf(strMessage, "Unable to find the file: %s!", strFileName);
        MessageBox(NULL, strMessage, "Error", MB_OK);
        return false;
    }
    // Once the file is open, read its first chunk to decide whether this really
    // is a .3ds file: if so, the first chunk ID is PRIMARY.
    ReadChunk(m_CurrentChunk);
    if (m_CurrentChunk->ID != PRIMARY)
    {
        sprintf(strMessage, "Unable to load PRIMARY chunk from file: %s!", strFileName);
        MessageBox(NULL, strMessage, "Error", MB_OK);
        return false;
    }
    // Now read the data. ReadNextChunk() is a recursive function;
    // the call below reads all of the objects out of the file.
    ReadNextChunk(pModel, m_CurrentChunk);
    // after the whole file has been read, compute the vertex normals
    ComputeNormals(pModel);
    // free memory
    // CleanUp();
    return true;
}

// Free all memory and close the file
void CLoad3DS::CleanUp()
{
    fclose(m_FilePointer);  // close the current file pointer
    delete m_CurrentChunk;  // free the current chunk
    delete m_TempChunk;     // free the temporary chunk
}

// Read the main part of the .3ds file
void CLoad3DS::ReadNextChunk(t3DModel *pModel, tChunk *pPreChunk)
{
    t3DObject newObject = {0};   // appended to the object list
    tMatInfo newTexture = {0};   // appended to the material list
    unsigned int version = 0;    // holds the file version
    int buffer[50000] = {0};     // used to skip unneeded data
    m_CurrentChunk = new tChunk; // allocate a new chunk
    // Each time a new chunk is read, check its ID: if it is a chunk we need, read it;
    // otherwise skip it. Keep reading sub-chunks until the declared length is reached.
    while (pPreChunk->bytesRead < pPreChunk->length)
    {
        // read the next chunk header
        ReadChunk(m_CurrentChunk);
        // dispatch on the chunk ID
        switch (m_CurrentChunk->ID)
        {
        case VERSION: // file version
            // This chunk holds the file version in an unsigned short.
            // Read it and add the bytes consumed to bytesRead.
            m_CurrentChunk->bytesRead += fread(&version, 1, m_CurrentChunk->length - m_CurrentChunk->bytesRead, m_FilePointer);
            // warn if the file version is greater than 3
            if (version > 0x03)
                MessageBox(NULL, "This 3DS file is over version 3 so it may load incorrectly", "Warning", MB_OK);
            break;
        case OBJECTINFO: // mesh version information
            // read the next chunk header
            ReadChunk(m_TempChunk);
            // get the mesh version
            m_TempChunk->bytesRead += fread(&version, 1, m_TempChunk->length - m_TempChunk->bytesRead, m_FilePointer);
            // add the bytes read
            m_CurrentChunk->bytesRead += m_TempChunk->bytesRead;
            // descend into the next chunk
            ReadNextChunk(pModel, m_CurrentChunk);
            break;
        case MATERIAL: // material information
            // one more material
            pModel->numOfMaterials++;
            // append a blank material structure to the material list
            pModel->pMaterials.push_back(newTexture);
            // enter the material-loading function
            ReadNextMatChunk(pModel, m_CurrentChunk);
            break;
        case OBJECT: // the object's name
            // This chunk is the header of an object-info block and holds the object's name.
            pModel->numOfObjects++;               // one more object
            pModel->pObject.push_back(newObject); // append a new t3DObject node to the object list
            // zero out the object and all of its members
            memset(&(pModel->pObject[pModel->numOfObjects - 1]), 0, sizeof(t3DObject));
            // read and store the object's name, then add the bytes read
            m_CurrentChunk->bytesRead += GetString(pModel->pObject[pModel->numOfObjects - 1].strName);
            // read the rest of the object's information
            ReadNextObjChunk(pModel, &(pModel->pObject[pModel->numOfObjects - 1]), m_CurrentChunk);
            break;
        case EDITKEYFRAME: // skip the keyframe chunk, adding the bytes consumed
            m_CurrentChunk->bytesRead += fread(buffer, 1, m_CurrentChunk->length - m_CurrentChunk->bytesRead, m_FilePointer);
            break;
        default: // skip the contents of any ignored chunk, adding the bytes consumed
            m_CurrentChunk->bytesRead += fread(buffer, 1, m_CurrentChunk->length - m_CurrentChunk->bytesRead, m_FilePointer);
            break;
        }
        // add the bytes read from the last chunk
        pPreChunk->bytesRead += m_CurrentChunk->bytesRead;
    }
    // free the current chunk
    delete m_CurrentChunk;
    m_CurrentChunk = pPreChunk;
}

// Process all of the object information in the file
void CLoad3DS::ReadNextObjChunk(t3DModel *pModel, t3DObject *pObject, tChunk *pPreChunk)
{
    int buffer[50000] = {0};     // used to read unneeded data
    m_CurrentChunk = new tChunk; // allocate a new chunk
    // keep reading chunks until this sub-chunk ends
    while (pPreChunk->bytesRead < pPreChunk->length)
    {
        // read the next chunk header
        ReadChunk(m_CurrentChunk);
        // decide which kind of chunk was read
        switch (m_CurrentChunk->ID)
        {
        case OBJ_MESH: // a new mesh chunk: recurse to handle it
            ReadNextObjChunk(pModel, pObject, m_CurrentChunk);
            break;
        case OBJ_VERTICES: // the object's vertices
            ReadVertices(pObject, m_CurrentChunk);
            break;
        case OBJ_FACES: // the object's faces
            ReadVertexIndices(pObject, m_CurrentChunk);
            break;
        case OBJ_MATERIAL: // the object's material name
            // This chunk holds the name of the object's material, which may be a color
            // or a texture map, along with the faces the material is assigned to.
            ReadObjMat(pModel, pObject, m_CurrentChunk);
            break;
        case OBJ_UV: // the object's UV texture coordinates
            ReadUVCoordinates(pObject, m_CurrentChunk);
            break;
        default: // skip chunks we don't need
            m_CurrentChunk->bytesRead += fread(buffer, 1, m_CurrentChunk->length - m_CurrentChunk->bytesRead, m_FilePointer);
            break;
        }
        // add the bytes read from the last chunk to the running total
        pPreChunk->bytesRead += m_CurrentChunk->bytesRead;
    }
    // free the current chunk and make the previous chunk current again
    delete m_CurrentChunk;
    m_CurrentChunk = pPreChunk;
}

// Process all of the material information
void CLoad3DS::ReadNextMatChunk(t3DModel *pModel, tChunk *pPreChunk)
{
    int buffer[50000] = {0};     // used to read unneeded data
    m_CurrentChunk = new tChunk; // allocate a new chunk
    // keep reading chunks until this sub-chunk ends
    while (pPreChunk->bytesRead < pPreChunk->length)
    {
        // read the next chunk header
        ReadChunk(m_CurrentChunk);
        // decide which kind of chunk was read
        switch (m_CurrentChunk->ID)
        {
        case MATNAME: // the material's name
            m_CurrentChunk->bytesRead += fread(pModel->pMaterials[pModel->numOfMaterials - 1].strName, 1, m_CurrentChunk->length - m_CurrentChunk->bytesRead, m_FilePointer);
            break;
        case MATDIFFUSE: // the object's RGB color
            ReadColor(&(pModel->pMaterials[pModel->numOfMaterials - 1]), m_CurrentChunk);
            break;
        case MATMAP: // header of the texture information: recurse into it
            ReadNextMatChunk(pModel, m_CurrentChunk);
            break;
        case MATMAPFILE: // the material's texture file name
            m_CurrentChunk->bytesRead += fread(pModel->pMaterials[pModel->numOfMaterials - 1].strFile, 1, m_CurrentChunk->length - m_CurrentChunk->bytesRead, m_FilePointer);
            break;
        default: // skip chunks we don't need
            m_CurrentChunk->bytesRead += fread(buffer, 1, m_CurrentChunk->length - m_CurrentChunk->bytesRead, m_FilePointer);
            break;
        }
        // add the bytes read from the last chunk
        pPreChunk->bytesRead += m_CurrentChunk->bytesRead;
    }
    // delete the current chunk and make the previous chunk current again
    delete m_CurrentChunk;
    m_CurrentChunk = pPreChunk;
}

// Read a chunk's ID and its length in bytes
void CLoad3DS::ReadChunk(tChunk *pChunk)
{
    // Read the chunk ID, which occupies 2 bytes. IDs like OBJECT or MATERIAL
    // tell us what the chunk contains.
    pChunk->bytesRead = fread(&pChunk->ID, 1, 2, m_FilePointer);
    // then read the chunk's length, which occupies 4 bytes
    pChunk->bytesRead += fread(&pChunk->length, 1, 4, m_FilePointer);
}

// Read a NUL-terminated string
int CLoad3DS::GetString(char *pBuffer)
{
    int index = 0;
    // read one byte of data
    fread(pBuffer, 1, 1, m_FilePointer);
    // keep reading one character at a time until NUL
    while (*(pBuffer + index++) != 0)
    {
        fread(pBuffer + index, 1, 1, m_FilePointer);
    }
    // return the string's length, including the terminator
    return strlen(pBuffer) + 1;
}

// Read an RGB color
void CLoad3DS::ReadColor(tMatInfo *pMaterial, tChunk *pChunk)
{
    // read the color chunk header
    ReadChunk(m_TempChunk);
    // read the RGB color
    m_TempChunk->bytesRead += fread(pMaterial->color, 1, m_TempChunk->length - m_TempChunk->bytesRead, m_FilePointer);
    // add the bytes read
    pChunk->bytesRead += m_TempChunk->bytesRead;
}

// Read the vertex indices
void CLoad3DS::ReadVertexIndices(t3DObject *pObject, tChunk *pPreChunk)
{
    unsigned short index = 0; // holds the index currently being read
    // read the number of faces in this object
    pPreChunk->bytesRead += fread(&pObject->numOfFaces, 1, 2, m_FilePointer);
    // allocate storage for all faces and zero it
    pObject->pFaces = new tFace [pObject->numOfFaces];
    memset(pObject->pFaces, 0, sizeof(tFace) * pObject->numOfFaces);
    // walk through all of the object's faces
    for (int i = 0; i < pObject->numOfFaces; i++)
    {
        for (int j = 0; j < 4; j++)
        {
            // read one value (the fourth value per face holds the face flags, which we skip)
            pPreChunk->bytesRead += fread(&index, 1, sizeof(index), m_FilePointer);
            if (j < 3)
            {
                // store the index in the face structure
                pObject->pFaces[i].vertIndex[j] = index;
            }
        }
    }
}

// Read the object's UV coordinates
void CLoad3DS::ReadUVCoordinates(t3DObject *pObject, tChunk *pPreChunk)
{
    // To read the object's UV coordinates, first read their count, then the data itself.
    // read the number of UV coordinates
    pPreChunk->bytesRead += fread(&pObject->numTexVertex, 1, 2, m_FilePointer);
    // allocate memory for the UV coordinates
    pObject->pTexVerts = new CVector2 [pObject->numTexVertex];
    // read the texture coordinates
    pPreChunk->bytesRead += fread(pObject->pTexVerts, 1, pPreChunk->length - pPreChunk->bytesRead, m_FilePointer);
}

// Read the object's vertices
void CLoad3DS::ReadVertices(t3DObject *pObject, tChunk *pPreChunk)
{
    // Before reading the actual vertices we must find out how many there are.
    // read the vertex count
    pPreChunk->bytesRead += fread(&(pObject->numOfVerts), 1, 2, m_FilePointer);
    // allocate vertex storage and zero it
    pObject->pVerts = new CVector3 [pObject->numOfVerts];
    memset(pObject->pVerts, 0, sizeof(CVector3) * pObject->numOfVerts);
    // read the vertex array
    pPreChunk->bytesRead += fread(pObject->pVerts, 1, pPreChunk->length - pPreChunk->bytesRead, m_FilePointer);
    // All vertices have now been read. Because 3D Studio Max models have the Z axis
    // pointing up, the Y and Z axes must be flipped: swap Y and Z, then negate Z.
    // walk through all vertices
    for (int i = 0; i < pObject->numOfVerts; i++)
    {
        float fTempY = pObject->pVerts[i].y;       // save the Y value
        pObject->pVerts[i].y = pObject->pVerts[i].z; // set Y to the Z value
        pObject->pVerts[i].z = -fTempY;            // set Z to the negated Y value
    }
}

// Read the object's material name
void CLoad3DS::ReadObjMat(t3DModel *pModel, t3DObject *pObject, tChunk *pPreChunk)
{
    char strMaterial[255] = {0}; // holds the object's material name
    int buffer[50000] = {0};     // used to read unneeded data
    // A material is either a color or a texture for the object; it may also hold
    // properties such as brightness and shininess.
    // read the name of the material assigned to this object
    pPreChunk->bytesRead += GetString(strMaterial);
    // walk through all materials
    for (int i = 0; i < pModel->numOfMaterials; i++)
    {
        // if the name we read matches this material's name
        if (strcmp(strMaterial, pModel->pMaterials[i].strName) == 0)
        {
            // set the material ID
            pObject->materialID = i;
            // if strFile is a string longer than 0, this material is a texture map
            if (strlen(pModel->pMaterials[i].strFile) > 0)
            {
                // flag the object as textured
                pObject->bHasTexture = true;
            }
            break;
        }
        else
        {
            // if the object has no material, set the ID to -1
            pObject->materialID = -1;
        }
    }
    pPreChunk->bytesRead += fread(buffer, 1, pPreChunk->length - pPreChunk->bytesRead, m_FilePointer);
}

// The functions below compute the vertex normals, which are used for lighting.

// This macro computes the magnitude of a vector
#define Mag(Normal) (sqrt(Normal.x*Normal.x + Normal.y*Normal.y + Normal.z*Normal.z))

// Return the vector between two points
CVector3 Vector(CVector3 vPoint1, CVector3 vPoint2)
{
    CVector3 vVector;
    vVector.x = vPoint1.x - vPoint2.x;
    vVector.y = vPoint1.y - vPoint2.y;
    vVector.z = vPoint1.z - vPoint2.z;
    return vVector;
}

// Add two vectors
CVector3 AddVector(CVector3 vVector1, CVector3 vVector2)
{
    CVector3 vResult;
    vResult.x = vVector2.x + vVector1.x;
    vResult.y = vVector2.y + vVector1.y;
    vResult.z = vVector2.z + vVector1.z;
    return vResult;
}

// Divide a vector by a scalar
CVector3 DivideVectorByScaler(CVector3 vVector1, float Scaler)
{
    CVector3 vResult;
    vResult.x = vVector1.x / Scaler;
    vResult.y = vVector1.y / Scaler;
    vResult.z = vVector1.z / Scaler;
    return vResult;
}

// Return the cross product of two vectors
CVector3 Cross(CVector3 vVector1, CVector3 vVector2)
{
    CVector3 vCross;
    vCross.x = ((vVector1.y * vVector2.z) - (vVector1.z * vVector2.y));
    vCross.y = ((vVector1.z * vVector2.x) - (vVector1.x * vVector2.z));
    vCross.z = ((vVector1.x * vVector2.y) - (vVector1.y * vVector2.x));
    return vCross;
}

// Normalize a vector
CVector3 Normalize(CVector3 vNormal)
{
    double Magnitude;
    Magnitude = Mag(vNormal);  // get the vector's magnitude
    vNormal.x /= (float)Magnitude;
    vNormal.y /= (float)Magnitude;
    vNormal.z /= (float)Magnitude;
    return vNormal;
}

// Compute the object normals
void CLoad3DS::ComputeNormals(t3DModel *pModel)
{
    CVector3 vVector1, vVector2, vNormal, vPoly[3];
    // return if the model has no objects
    if (pModel->numOfObjects <= 0) return;
    // walk through all objects in the model
    for (int index = 0; index < pModel->numOfObjects; index++)
    {
        // get the current object
        t3DObject *pObject = &(pModel->pObject[index]);
        // allocate the storage we need
        CVector3 *pNormals     = new CVector3 [pObject->numOfFaces];
        CVector3 *pTempNormals = new CVector3 [pObject->numOfFaces];
        pObject->pNormals      = new CVector3 [pObject->numOfVerts];
        // walk through all of the object's faces
        for (int i = 0; i < pObject->numOfFaces; i++)
        {
            vPoly[0] = pObject->pVerts[pObject->pFaces[i].vertIndex[0]];
            vPoly[1] = pObject->pVerts[pObject->pFaces[i].vertIndex[1]];
            vPoly[2] = pObject->pVerts[pObject->pFaces[i].vertIndex[2]];
            // compute the face normal
            vVector1 = Vector(vPoly[0], vPoly[2]); // one edge of the polygon
            vVector2 = Vector(vPoly[2], vPoly[1]); // a second edge of the polygon
            vNormal  = Cross(vVector1, vVector2);  // the cross product of the two edges
            pTempNormals[i] = vNormal;             // keep the un-normalized normal
            vNormal = Normalize(vNormal);          // normalize the cross product
            pNormals[i] = vNormal;                 // add it to the face-normal list
        }
        // now derive the vertex normals
        CVector3 vSum = {0.0, 0.0, 0.0};
        CVector3 vZero = vSum;
        int shared = 0;
        // walk through all vertices
        for (int i = 0; i < pObject->numOfVerts; i++)
        {
            for (int j = 0; j < pObject->numOfFaces; j++) // walk through all triangles
            {
                // is this vertex shared by this face?
                if (pObject->pFaces[j].vertIndex[0] == i ||
                    pObject->pFaces[j].vertIndex[1] == i ||
                    pObject->pFaces[j].vertIndex[2] == i)
                {
                    vSum = AddVector(vSum, pTempNormals[j]);
                    shared++;
                }
            }
            pObject->pNormals[i] = DivideVectorByScaler(vSum, float(-shared));
            // normalize the final vertex normal
            pObject->pNormals[i] = Normalize(pObject->pNormals[i]);
            vSum = vZero;
            shared = 0;
        }
        // free the scratch space and move on to the next object
        delete [] pTempNormals;
        delete [] pNormals;
    }
}

//////////////////////////////////////
// Using the model:
class basicpic
{
public:
    basicpic();
    virtual ~basicpic();
    GLUquadricObj *obj;
    CLoad3DS *m_3ds;
    void Scene(int obj, float size);
};

basicpic::basicpic()
{
    m_3ds = new CLoad3DS();
    m_3ds->Init("ccc1.3DS", 0);
    m_3ds->Init("art1.3DS", 1);
    m_3ds->Init("art2.3DS", 2);
    m_3ds->Init("art3.3DS", 3);
    m_3ds->Init("art4.3DS", 4);
    m_3ds->Init("mis1.3DS", 5);
    m_3ds->Init("mis2.3DS", 6);
    m_3ds->Init("mis3.3DS", 7);
    m_3ds->Init("mis4.3DS", 8);
    glEnable(GL_TEXTURE_2D);
}

void basicpic::Scene(int obj, float size)
{
    m_3ds->show3ds(obj, 0, 0, 0, size);
}

Perhaps you should use the source tags, tell us what error(s) you're getting, and explain what effort you made to fix the problem yourself.


There are many .3ds loaders for OpenGL; just look around Google or find a library like lib3ds. And please put all code in [sou.rce] code here [/sou.rce] tags (without the periods), and lastly post the errors. If you're just going to copy/paste code without understanding it, you're not going to get anywhere fast. Never do that: understand everything you copy, why it's there, and so on.

Oh, I'm eager to learn about this; it would be very useful to handle this in my program. And could anybody give me the file structure of *.3DS?

