xelanoimis

Vertex Declarations in OpenGL and DirectX?

Recommended Posts

xelanoimis    172
Hi, what is the OpenGL equivalent of DirectX 9 vertex declarations? I know that OpenGL can use vertex arrays, which are the equivalent of vertex buffers. But how do you set vertex declarations, and can OpenGL use multiple streams (vertex arrays)? I want to know this for use with Cg effects or vertex programs in OpenGL. Can anyone give a small code example (not necessarily functional, I'm just curious about the functions used)? I am also interested in a platform-independent approach to meshes and effects. Can vertex shaders and vertex declarations be wrapped in engine classes for use with both OpenGL and DirectX effects (eventually Cg)? Thanks

jollyjeffers    1570
Moved to OpenGL

For the first part of your question you're more likely to get a better answer from the OpenGL experts - they're the guys who know the API best [wink]

As for an API/platform-independent method: yes, it's probably possible - but you can't really take it any further until you know what forms both APIs can use (or require). From there you can decide whether you want a subset approach (basic functionality that both APIs can handle) or some higher-level approach that involves work-arounds/emulation for the parts of the respective APIs that don't natively handle what you want. For example, if you really want streams but OpenGL couldn't do that, you could (presumably) emulate it by creating your own input assembler - not necessarily efficient, but it'd probably work.
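To make the subset idea concrete, a wrapper might start from a tiny API-neutral element description that either backend can translate into its own calls. This is only a hypothetical sketch - the names (`Elem`, `StreamStride`, the enums) are invented for illustration, not taken from either API:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical API-neutral vertex element: records enough information
// for a D3D9 D3DVERTEXELEMENT9 *or* an OpenGL gl*Pointer call.
enum ElemUsage  { USAGE_POSITION, USAGE_NORMAL, USAGE_COLOR, USAGE_TEXCOORD };
enum ElemFormat { FMT_FLOAT, FMT_UBYTE };

struct Elem
{
    unsigned    stream;   // source stream / vertex array index
    ElemUsage   usage;    // what the data means
    ElemFormat  format;   // component type
    unsigned    count;    // number of components (e.g. 3 for XYZ)
    std::size_t offset;   // byte offset inside the vertex
};

// size of one component, used by both backends to compute strides
std::size_t FormatSize( ElemFormat f )
{
    return f == FMT_FLOAT ? sizeof( float ) : sizeof( unsigned char );
}

// stride of one stream = end of the element that reaches furthest
std::size_t StreamStride( const std::vector<Elem>& decl, unsigned stream )
{
    std::size_t stride = 0;
    for( std::size_t i = 0; i < decl.size(); ++i )
    {
        if( decl[i].stream != stream )
            continue;
        std::size_t end = decl[i].offset
                        + decl[i].count * FormatSize( decl[i].format );
        if( end > stride )
            stride = end;
    }
    return stride;
}
```

The D3D backend would walk such a list to build a real vertex declaration; the GL backend would walk the same list issuing gl*Pointer calls, which is the emulation half of the approach.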

hth
Jack

Alex Baker    172
JollyJeffers,

I can't follow your explanation, so please tell me: what exactly do you mean by the term "subset approach"?

(Do you mean splitting the rendering work into more, simpler subsets by switching only the vertex declaration format?)

Promit    13246
What it comes down to in OpenGL is that you get to define vertex declarations yourself, and then you get to write the rendering backend code that translates a vertex declaration into BindBuffer and Pointer calls.

It's fairly simple to do. You define an array of structures holding all of the information about the vertex declaration -- what its usage is, what type it is, offset and stream index, etc. Then you loop through that array, calling BindBuffer and the appropriate *Pointer function as necessary.

I basically have a header that defines stuff for a declaration:

#ifndef _GEOMETRY_H
#define _GEOMETRY_H

#include "../HoverEngine.h"

#include "RendererTypes.h"
#include "VertexBuffer.h"
#include "IndexBuffer.h"

#include <cassert>
#include <cstddef>
#include <vector>

//type of element, i.e. what it specifies
enum VertexElementType
{
    VET_XYZ,
    VET_NORMAL,
    VET_DIFFUSE,
    VET_SPECULAR,
    VET_TEXCOORD,
};

//'format' of element, basically the data type
//it's recommended that you avoid the INT64 types, as well as long double
enum VertexElementFormat
{
    VEF_SIGNED_BYTE,
    VEF_UNSIGNED_BYTE,

    VEF_SIGNED_SHORT,
    VEF_UNSIGNED_SHORT,

    VEF_UNSIGNED_INT,
    VEF_SIGNED_INT,

    VEF_UNSIGNED_INT64,
    VEF_SIGNED_INT64,

    VEF_FLOAT,
    VEF_DOUBLE,
    VEF_LONG_DOUBLE,
};

//FVF quick-system
#define FVF_XYZ      0x00000001
#define FVF_NORMAL   0x00000002
#define FVF_DIFFUSE  0x00000004
#define FVF_SPECULAR 0x00000008

#define FVF_TEXTURE0 0x00010000
#define FVF_TEXTURE1 0x00020000
#define FVF_TEXTURE2 0x00040000
#define FVF_TEXTURE3 0x00080000
#define FVF_TEXTURE(X) (0x00010000 << (X))

//specifies a single part of a vertex, e.g. one position, or one texcoord, etc.
struct VertexElement
{
    unsigned int Stream;
    unsigned int Count;
    std::size_t Offset; //offset in the structure
    VertexElementFormat Format;
    VertexElementType Type;

    //normal ctor (initializers listed in declaration order)
    VertexElement() : Stream( 0 ), Count( 0 ), Offset( 0 ),
        Format( VEF_FLOAT ), Type( VET_XYZ )
    { }

    //inline ctor for laziness
    VertexElement( unsigned int vStream, unsigned int vCount, std::size_t vOffset,
                   VertexElementFormat vFormat, VertexElementType vType )
        : Stream( vStream ), Count( vCount ), Offset( vOffset ),
          Format( vFormat ), Type( vType )
    { }

    static std::size_t FormatSize( VertexElementFormat vef );

    //Compute the size of this element in bytes
    std::size_t SizeBytes() const { return FormatSize( Format ) * Count; }
};


//specifies a complete vertex, basically just a list of elements
struct VertexDeclaration
{
    typedef std::vector<VertexElement> ElementList;
    typedef ElementList::iterator Iterator;

    ElementList Elements;

    VertexDeclaration()
    {
        Elements.reserve( 4 );
    }

    static VertexDeclaration CreateFromFVF( unsigned int FVF );
};

//in D3D, streams will correspond to real streams
//in OGL, the streams will be somewhat virtualised, but effectively the same
#define MAX_VERTEX_STREAMS 8

//holds all the data for a single stream
struct StreamSource
{
    VertexBuffer* Source;
    std::size_t Offset;
    std::size_t Stride;

    StreamSource() : Source( NULL ), Offset( 0 ), Stride( 0 )
    { }
};


//holds everything geometric about an object
class Geometry
{
protected:
    VertexDeclaration m_Decl;

    RenderMode m_Mode;
    std::size_t m_PrimitiveCount;
    std::size_t m_IndexCount;

    StreamSource m_Streams[MAX_VERTEX_STREAMS];
    IndexBuffer* m_Indices; //if IB is NULL, use non-indexed primitive
    std::size_t m_IndexOffset;

    std::size_t m_FirstVertex;
    std::size_t m_NumVertices;

public:
    Geometry();
    virtual ~Geometry();

    //Sets the rendering mode and primitive count for this geom
    void SetRenderMode( RenderMode Mode, std::size_t IndexCount );

    //Sets the vertex buffer for the specified stream
    void SetStreamSource( unsigned int Stream, VertexBuffer* Source, std::size_t Offset, std::size_t Stride );
    //Sets the indices for this geom (NULL for no indexing -- Offset is used for VB)
    void SetIndices( IndexBuffer* Indices, std::size_t Offset );
    //Sets the range of vertices used by the indices (ignored if not using indices)
    void SetRange( std::size_t First, std::size_t Count );

    //used to access the vertex declaration
    VertexDeclaration& Decl() { return m_Decl; }
    //access the streams
    StreamSource* Stream( unsigned int Idx ) { assert( Idx < MAX_VERTEX_STREAMS ); return &m_Streams[Idx]; }
    //access the indices
    IndexBuffer* Indices() { return m_Indices; }
    //index buffer offset
    std::size_t IndexOffset() const { return m_IndexOffset; }
    //Primitive count
    std::size_t PrimitiveCount() const { return m_PrimitiveCount; }
    std::size_t IndexCount() const { return m_IndexCount; }
    //render mode for this geom
    RenderMode Mode() const { return m_Mode; }
    //Get the index range
    void GetRange( std::size_t& First, std::size_t& Count ) const { First = m_FirstVertex; Count = m_NumVertices; }
};

#endif

And then the function to parse this into OpenGL goes like this:
void OGLRenderer::BeginRender( Geometry* Geom )
{
    if( m_RenderBegun )
        return;
    if( Geom == NULL )
        return;

    //keeps track of tex coord sets in use
    unsigned int TexCoord = 0;

    //first, get the Geom's declaration and set stuff up
    VertexDeclaration::Iterator it = Geom->Decl().Elements.begin();
    while( it != Geom->Decl().Elements.end() )
    {
        if( m_CurrentVB != Geom->Stream( it->Stream )->Source )
        {
            m_CurrentVB = down_cast<OGLVertexBuffer*>( Geom->Stream( it->Stream )->Source );
            if( m_CurrentVB != NULL )
            {
                m_CurrentVB->Bind();
            }
            else if( GLEE_ARB_vertex_buffer_object )
            {
                //unbind the *vertex* buffer binding point, not the index one
                glBindBufferARB( GL_ARRAY_BUFFER_ARB, 0 );
            }
        }

        //initialize the pointers for this element
        GLsizei Stride = (GLsizei) Geom->Stream( it->Stream )->Stride;
        GLsizei StreamOffset = (GLsizei) Geom->Stream( it->Stream )->Offset;
        switch( it->Type )
        {
        case VET_XYZ:
            glEnableClientState( GL_VERTEX_ARRAY );
            glVertexPointer( it->Count, TranslateVertexFormat( it->Format ), Stride, m_CurrentVB->GetPointer() + it->Offset + StreamOffset );
            break;
        case VET_NORMAL:
            glEnableClientState( GL_NORMAL_ARRAY );
            glNormalPointer( TranslateVertexFormat( it->Format ), Stride, m_CurrentVB->GetPointer() + it->Offset + StreamOffset );
            break;
        case VET_DIFFUSE:
            glEnableClientState( GL_COLOR_ARRAY );
            glColorPointer( it->Count, TranslateVertexFormat( it->Format ), Stride, m_CurrentVB->GetPointer() + it->Offset + StreamOffset );
            break;
        case VET_SPECULAR:
            glEnableClientState( GL_SECONDARY_COLOR_ARRAY );
            glSecondaryColorPointer( it->Count, TranslateVertexFormat( it->Format ), Stride, m_CurrentVB->GetPointer() + it->Offset + StreamOffset );
            break;
        case VET_TEXCOORD:
            glClientActiveTexture( GL_TEXTURE0 + TexCoord );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glTexCoordPointer( it->Count, TranslateVertexFormat( it->Format ), Stride, m_CurrentVB->GetPointer() + it->Offset + StreamOffset );
            ++TexCoord;
            break;
        }

        ++it;
    }

    //set up indices if we have any
    if( m_Indices != Geom->Indices() )
    {
        m_Indices = down_cast<OGLIndexBuffer*>( Geom->Indices() );
        if( m_Indices != NULL )
        {
            m_Indices->Bind();
        }
        else if( GLEE_ARB_vertex_buffer_object )
        {
            //0, not NULL: the second argument is a GLuint buffer name
            glBindBufferARB( GL_ELEMENT_ARRAY_BUFFER_ARB, 0 );
        }
    }

    m_RenderBegun = true;
    m_CurGeom = Geom;
}

(Note: This appears to be an older version of my source. The newer version has some modifications that allow arbitrary VertexElementTypes, which is useful for shader stuff.)
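For illustration, here is roughly how a caller might fill in such a declaration for an interleaved position/normal/texcoord vertex. The snippet redeclares trimmed-down copies of the types above so it stands alone; the `MakeDecl` helper is invented for this example and is not part of the engine:

```cpp
#include <cstddef>
#include <vector>

// Trimmed-down copies of the header's types, just enough to show usage.
enum VertexElementType   { VET_XYZ, VET_NORMAL, VET_TEXCOORD };
enum VertexElementFormat { VEF_FLOAT };

struct VertexElement
{
    unsigned int Stream;
    unsigned int Count;
    std::size_t Offset;
    VertexElementFormat Format;
    VertexElementType Type;

    VertexElement( unsigned int s, unsigned int c, std::size_t o,
                   VertexElementFormat f, VertexElementType t )
        : Stream( s ), Count( c ), Offset( o ), Format( f ), Type( t ) { }
};

struct VertexDeclaration { std::vector<VertexElement> Elements; };

// A position/normal/texcoord vertex, all floats, interleaved in stream 0.
// Offsets are running byte offsets, exactly as D3D9 declarations use them.
VertexDeclaration MakeDecl()
{
    VertexDeclaration d;
    d.Elements.push_back( VertexElement( 0, 3,  0, VEF_FLOAT, VET_XYZ ) );      // 12 bytes
    d.Elements.push_back( VertexElement( 0, 3, 12, VEF_FLOAT, VET_NORMAL ) );   // 12 bytes
    d.Elements.push_back( VertexElement( 0, 2, 24, VEF_FLOAT, VET_TEXCOORD ) ); //  8 bytes
    return d; // total stride: 32 bytes, passed to SetStreamSource
}
```

With a declaration like this, BeginRender's loop emits one glVertexPointer, one glNormalPointer, and one glTexCoordPointer call, all sharing the same stride.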

