Community Reputation: 122 Neutral

About V3rt3x
  1. Thank you for your answer, and for the link! Is the switch from one VBO to another (via glBindBuffer) costly, like switching from one texture object to another?
  2. Hello, I have some questions about vertex buffer management. Let's say we have a scene composed of:
     - the world (walls, terrain, etc.), which is static data;
     - the objects (represented by 3D models), which are dynamic data (because they are animated).
     When rendering the scene, I have determined a small part of the world data and a subset of the objects to be drawn. How do I render all that using Vertex Buffer Objects? More precisely (assuming I compute the animated vertices only for the rendering stage, i.e. I can put them in video RAM since I won't read them back): for the world (static data), I would put everything in one static VBO. For the objects (dynamic data), how many VBOs should I use? Is it better to:
     - use one big vertex buffer? This VBO would be filled with all the objects I have to draw, then rendered in "one stage" (with respect to the objects' textures and shaders). The VBO would be cleaned up after each frame and could be reused for the next one. It would avoid multiple memory allocations for the "final" vertex data, but I would have to prepare the animated vertices of all objects before drawing (or maybe I could draw them independently, drawing one object while computing vertices for the next one, even if they are stored in the same buffer?);
     - use one vertex buffer per object? Since the number of objects drawn in the scene varies, I would have to create/destroy a VBO per object every frame. Or maybe I could keep some VBOs and reuse them for other objects in the next frame? I don't know the cost of creating/destroying a VBO;
     - use multiple VBOs of a "medium" size, each holding several objects? I could render a VBO's objects when it is full, then compute vertices for and fill the next VBO for the following objects. This would avoid creating/destroying a lot of VBOs. It is a mix of the two previous solutions.
     Also, do you think it's better to prepare all vertex data (CPU) and then draw everything (GPU), or to prepare the vertex data for one object, render it, prepare the next object, render it, and so on? Thanks.
  3. Hello, My 3D scene is composed of a world and objects:
     - the world has static geometry data;
     - the objects have dynamic geometry data: 3D models with skeletal animation.
     I render the world using a BSP tree or an octree, so that I draw only visible triangles (thanks to frustum culling). Now I need to determine which objects I'll need to draw. Currently, I have to compute the bounding box of each object to see if it fits in or intersects the frustum. My problem is that in order to compute the bounding box of an object, I have to compute *all* vertices of the object's model (given its animated skeleton). I'd like to avoid computing *all* vertices of *every* object present in my world just to draw 10% of them, or maybe less. Since multiple 3D models can share a skeleton (and thus can share animations), it's quite hard to precompute a bounding box. Without skeletal animation, with only model frames (like in the MD2 format), it's easy to precompute the bounding box because I know the geometry of the model for each frame. But with a skeleton, it will depend on the model it is applied to. A solution would be to precompute the bounding box of every 3D model, for each skeleton frame and for each animation I may want it to play. Then at runtime, I'd interpolate the bounding boxes of the current and next skeleton frames to get the real model's bounding box. I'd like to know if there are other culling methods for skeletal models. For those who have implemented a 3D engine like this, how did you cope with that? Thanks.
  4. Could you provide a screenshot? That may be a bug in your OpenGL implementation (i.e. the graphics drivers). Which hardware/operating system/drivers are you using?
  5. The problem comes from compiz, not from your code. You can't do anything but wait for the bug to be fixed (or fix it yourself) in freeglut or in compiz... This bug has been known since the beginning of the compiz era.
  6. I got an outlined teapot with this code:

        glClearStencil (0);
        glClear (GL_STENCIL_BUFFER_BIT);
        glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask (GL_FALSE);
        glEnable (GL_STENCIL_TEST);
        glStencilFunc (GL_ALWAYS, 1, 0xFFFFFFFF);
        glStencilOp (GL_REPLACE, GL_REPLACE, GL_REPLACE);

        // Draw front-facing polygons as filled
        glutSolidTeapot (1.0f);

        glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask (GL_TRUE);
        glStencilFunc (GL_NOTEQUAL, 1, 0xFFFFFFFF);
        glStencilOp (GL_KEEP, GL_KEEP, GL_KEEP);

        // Draw back-facing polygons as red lines
        glEnable (GL_CULL_FACE);
        glFrontFace (GL_CW);
        glLineWidth (5.0f);
        glPolygonMode (GL_BACK, GL_LINE);
        glCullFace (GL_FRONT);
        glColor3f (1.0f, 0.0f, 0.0f);
        glutSolidTeapot (1.0f);

        glLineWidth (1.0f);
        glPolygonMode (GL_BACK, GL_FILL);
        glDisable (GL_CULL_FACE);
        glDisable (GL_STENCIL_TEST);

     Don't forget to enable a stencil buffer when you create your GL window.
  7. You're right rollo! Now it works! Thanks a lot!
  8. In the Orange Book, page 96, it is said that the gl_Vertex attribute, like attribute 0, signals the end of the vertex. Since I send the first frame's vertices through the gl_Vertex attribute, I think it is correct, no? And yes, I forgot to mention it, but the demo works fine: it runs with the working shader; you just have to replace lerp.vert's code with one of the listings I posted in the original thread to see the problem... I have a GeForce FX 5500 and 76.64 drivers. If I have time tomorrow, I'll upgrade my drivers and build a Windows executable. Now it's time for me to sleep :)
  9. @rollo: I have tested my shaders with glslparser; no error was reported. I use recent drivers (I'm running Linux). @_the_phantom_: Here are my rendering functions. The main display func:

        void Display( void )
        {
            // Clean window
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
            glLoadIdentity();

            // some code...

            glUseProgram( lerpProg );
            glBindAttribLocationARB( lerpProg, 5, "firstVertex" );
            glBindAttribLocationARB( lerpProg, 6, "secondVertex" );
            checkOpenGLErrors( "display" );

            // Draw objects
            cyberpunk.DrawObjectItp( bAnimated );
            weapon.DrawObjectItp( bAnimated );

            glUseProgram( 0 );
            glDisable( GL_LIGHTING );
            glDisable( GL_TEXTURE_2D );
        }

     The mesh "setupVertexArrays" func:

        void Mesh::setupVertexArraysItp( int frameA, int frameB, float interp )
        {
            _itpFrame.vertexArray  = _frames[ frameA ].vertexArray;
            _itpFrame.normalArray  = _frames[ frameA ].normalArray;
            _itpFrame2.vertexArray = _frames[ frameB ].vertexArray;
            _itpFrame2.normalArray = _frames[ frameB ].normalArray;

            GLint interpLoc = glGetUniformLocation( lerpProg, "fInterp" );
            glUniform1f( interpLoc, interp );
        }

     The mesh rendering func:

        void Mesh::DrawModelItpWithVertexArrays( void )
        {
            glEnableClientState( GL_VERTEX_ARRAY );
            glEnableClientState( GL_NORMAL_ARRAY );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glEnableVertexAttribArray( 5 );
            glEnableVertexAttribArray( 6 );

            // Upload model data to OpenGL
            glVertexPointer( 3, GL_FLOAT, 0, _itpFrame.vertexArray );
            glNormalPointer( GL_FLOAT, 0, _itpFrame.normalArray );
            glClientActiveTexture( GL_TEXTURE0 );
            glTexCoordPointer( 2, GL_FLOAT, 0, _texCoordArray );
            glClientActiveTexture( GL_TEXTURE1 );
            glTexCoordPointer( 3, GL_FLOAT, 0, _itpFrame2.vertexArray );
            glVertexAttribPointer( 5, 3, GL_FLOAT, GL_FALSE, 0, _itpFrame.vertexArray );
            glVertexAttribPointer( 6, 3, GL_FLOAT, GL_FALSE, 0, _itpFrame2.vertexArray );

            // Bind to model's texture
            glBindTexture( GL_TEXTURE_2D, _texId );

            // Draw the model
            glDrawElements( GL_TRIANGLES, _numTris * 3, GL_UNSIGNED_INT, _vertIndices );

            glDisableClientState( GL_VERTEX_ARRAY );
            glDisableClientState( GL_NORMAL_ARRAY );
            glDisableClientState( GL_TEXTURE_COORD_ARRAY );
            glDisableVertexAttribArray( 5 );
            glDisableVertexAttribArray( 6 );
        }

     You can download the demo at http://tfc.duke.free.fr/old/models/md2opti.zip The Display() function is in Main.cpp, the two others shown above are in Md2.cpp. Shader loading related code is in Shaders.h/.cpp.
  10. Hello, I'm getting strange behaviour with GLSL on nVidia cards (I haven't tried ATI cards since I don't have one). It crashes for no reason: for example, I add 4 blank lines in my shader and it causes a segmentation fault at compile time! o_O I comment out a line, it crashes... unless I remove it, or add blank lines >_< Another (concrete) example: I want to implement linear interpolation between two frames of an MD2 model in hardware. I send GLSL the vertices of the two frames, one array via glVertexPointer, the other via glVertexAttribPointer. Here is the vertex shader:

        uniform float fInterp;
        attribute vec3 secondVertex;

        void main()
        {
            vec4 v1 = gl_Vertex;
            vec4 v2 = vec4( secondVertex, 1.0 );
            vec4 itp = mix( v1, v2, fInterp );
            gl_Position = gl_ModelViewProjectionMatrix * itp;
        }

     At run time, it seems that secondVertex is always 0! To ensure the data was really sent to OpenGL, I changed my shader to this:

        uniform float fInterp;
        attribute vec3 secondVertex;

        void main()
        {
            gl_Position = gl_ModelViewProjectionMatrix * vec4( secondVertex, 1.0 );
        }

     This one worked well! I have seen that if I use the gl_Vertex variable, for example by declaring a dummy temporary variable assigned the gl_Vertex value (vec4 v1 = gl_Vertex;), it breaks the shader: secondVertex becomes null! It makes no sense! Why would using a built-in attribute break the others? I also tried this code, sending the two arrays via glVertexAttribPointer:

        uniform float fInterp;
        attribute vec3 firstVertex;
        attribute vec3 secondVertex;

        void main()
        {
            vec4 v1 = vec4( firstVertex, 1.0 );
            vec4 v2 = vec4( secondVertex, 1.0 );
            vec4 itp = mix( v1, v2, fInterp );
            gl_Position = gl_ModelViewProjectionMatrix * itp;
        }

     It gives the same result as using gl_Vertex: secondVertex is null... Finally, I did get my linear interpolation, passing my second vertex array via... glTexCoordPointer... *vomit*

        uniform float fInterp;
        attribute vec3 firstVertex;
        attribute vec3 secondVertex;

        void main()
        {
            vec4 v1 = gl_Vertex;
            vec4 v2 = gl_MultiTexCoord1;
            vec4 itp = mix( v1, v2, fInterp );
            gl_Position = gl_ModelViewProjectionMatrix * itp;
        }

     I tried multiple attribute locations (6, 7, 3, 4, 0); it doesn't change the result. I have seen some GLSL demos (from nVidia) running fine using only glVertexAttrib to send data. I modified one of them to use glVertex, glNormal, glTexCoord and glVertexAttrib together (it was not via vertex arrays), and it still worked well... Where's the problem with my code? Is nVidia's GLSL implementation so poor? (crashing over a blank line) [Edited by - V3rt3x on August 5, 2005 4:22:24 AM]
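For readers hitting the same symptom: one common cause of a generic attribute reading as zero (not confirmed as the fix in this thread) is that glBindAttribLocation only takes effect at the next glLinkProgram, so binding locations after the program is linked changes nothing. A minimal sketch of the usual ordering, assuming `vs` and `fs` are already-compiled shader objects and `secondFrameVerts` is the application's vertex array:

```c
/* Bind generic attribute names BEFORE linking: the binding
   only takes effect when glLinkProgram is called. */
GLuint prog = glCreateProgram();
glAttachShader( prog, vs );
glAttachShader( prog, fs );
glBindAttribLocation( prog, 5, "firstVertex" );
glBindAttribLocation( prog, 6, "secondVertex" );
glLinkProgram( prog );

/* Alternatively, query the location the linker chose and use that. */
GLint loc = glGetAttribLocation( prog, "secondVertex" );
glEnableVertexAttribArray( loc );
glVertexAttribPointer( loc, 3, GL_FLOAT, GL_FALSE, 0, secondFrameVerts );
```

Note also that attribute 0 is aliased to gl_Vertex on many implementations, so a shader that mixes gl_Vertex with generic attributes must avoid binding any of them to location 0.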
  11. What are the advantages of using the GL_BGR(A) texture format over GL_RGB(A)? I have read somewhere (in an nVidia demo's source code) that BGR was faster than RGB on nVidia cards, but why?
  12. My TGA texture loader demo, in case it can help you (written in C). It handles a lot of TGA types.
  13. I opted for the first method you have described.
  14. I got the solution at doom3world :-) http://www.doom3world.org/phpbb2/viewtopic.php?p=100174#100020 Problem solved.
  15. The bind pose? Is that the bind pose? I'm using the MD5 model format, but vertices have "multiple positions" stored separately with the weight factor and joint index (in order to get access to the quaternion orientation and the joint's position).

        struct Md5Vertex_t
        {
            float st[2];      // Texture coordinates
            int startWeight;  // Start index of weights
            int countWeight;  // Number of weights
        };

        struct Md5Weight_t
        {
            int joint;        // Joint index
            float bias;       // Weight factor
            Vector3f pos;
            Vector3f norm;    // what I would like in my dreams (o_o)
        };

     So if I want to compute normals, I need to use a skeleton, because a vertex's weights give a position that depends on their joint... I'm completely lost with these weight positions... In the books I have, the position is the same but there are multiple transformation matrices (one for each bone). Here we have multiple positions and multiple matrices :(