  • Similar Content

    • By kanageddaamen
      Hello all,
I am currently working on a game engine for use with my game development that I would like to be as flexible as possible. As such, the exact requirements for how things should work can't be nailed down to a specific implementation, and I am looking, at least for now, for a good default average-case design.
      Here is what I have implemented:
- Deferred rendering using OpenGL
- Arbitrary number of lights and shadow mapping
- Each rendered object, as defined by a set of geometry, textures, animation data, and a model matrix, is rendered with its own draw call
- Skeletal animations implemented on the GPU
- Model matrix transformation implemented on the GPU
- Frustum and octree culling for optimization
Here are my questions and concerns:
- Doing the skeletal animation on the GPU currently requires doing the skinning for each object multiple times per frame: once for the initial geometry rendering and once for the shadow-map rendering for each light for which it is not culled. This seems very inefficient. Is there a way to do skeletal animation on the GPU only once across these render calls?
- Without doing the model matrix transformation on the CPU, I fail to see how I can easily batch objects with the same textures and shaders in a single draw call without passing a ton of matrix data to the GPU (an array of model matrices, then an index for each vertex into that array for transformation purposes? See the sketch after this list.)
- If I do the matrix transformations on the CPU, it seems I can't really do the skinning on the GPU, as the pre-transformed vertices will wreak havoc with the calculations, so this seems not viable unless I am missing something.
- Overall it seems like the simplest solution is to just do all of the vertex manipulation on the CPU and pass the pre-transformed data to the GPU, using vertex shaders that do basically nothing. This doesn't seem the most efficient use of the graphics hardware, but it could potentially reduce the number of draw calls needed.
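That matrix-array idea is, for what it's worth, a common way batching is done. A minimal sketch, assuming OpenGL 3.3+, GLEW and GLM (the function name, binding point and 256-matrix cap are illustrative, not from the post): per-object model matrices go into a uniform buffer, each instance picks its own with gl_InstanceID, and one instanced draw call covers every object sharing the same shader and textures. Skinning could still run in the same vertex shader, since the vertices stay in untransformed model space.

#include <vector>
#include <GL/glew.h>
#include <glm/glm.hpp>

// Vertex shader: each instance selects its model matrix from a UBO array.
static const char* kBatchVS = R"(
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(std140) uniform ModelMatrices { mat4 uModel[256]; }; // one slot per object in the batch
uniform mat4 uViewProj;
void main()
{
    gl_Position = uViewProj * uModel[gl_InstanceID] * vec4(aPosition, 1.0);
}
)";

// Upload this frame's model matrices, then draw the whole batch at once.
void DrawBatch(GLuint ubo, const std::vector<glm::mat4> &modelMatrices, GLsizei indexCount)
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, modelMatrices.size() * sizeof(glm::mat4),
                 modelMatrices.data(), GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // binding point 0 = ModelMatrices block
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr,
                            (GLsizei)modelMatrices.size());
}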

Really, I am looking for some advice on how to proceed with this, and how something like this is typically handled. Are the multiple draw calls and skinning calculations not a huge deal? I would LIKE to save as much of the CPU's time per frame as possible so it can be tasked with other things, keeping CPU resources open to the implementation of the engine. However, that becomes a moot point if the GPU becomes a bottleneck.
    • By DiligentDev
I would like to introduce Diligent Engine, a project that I've been recently working on. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, Unity native plugin or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.
True cross-platform
- Exact same client code for all supported platforms and rendering backends
- No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
- No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
- Exact same HLSL shaders run on all platforms and all backends
Modular design
- Components are clearly separated logically and physically and can be used as needed
- Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule)
- No 15,000-line files
- Clear object-based interface
- No global states
Key graphics features:
- Automatic shader resource binding designed to leverage the next-generation rendering APIs
- Multithreaded command buffer generation (50,000 draw calls at 300 fps with the D3D12 backend)
- Descriptor, memory and resource state management
- Modern C++ features to make code fast and reliable
The following platforms and low-level APIs are currently supported:
- Windows Desktop: Direct3D11, Direct3D12, OpenGL
- Universal Windows: Direct3D11, Direct3D12
- Linux: OpenGL
- Android: OpenGLES
- MacOS: OpenGL
- iOS: OpenGLES

API Basics
The engine can perform initialization of the API itself or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:
      #include "RenderDeviceFactoryD3D12.h" using namespace Diligent; // ...  GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr; // Load the dll and import GetEngineFactoryD3D12() function LoadGraphicsEngineD3D12(GetEngineFactoryD3D12); auto *pFactoryD3D11 = GetEngineFactoryD3D12(); EngineD3D12Attribs EngD3D12Attribs; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16; EngD3D12Attribs.NumCommandsToFlushCmdList = 64; RefCntAutoPtr<IRenderDevice> pRenderDevice; RefCntAutoPtr<IDeviceContext> pImmediateContext; SwapChainDesc SwapChainDesc; RefCntAutoPtr<ISwapChain> pSwapChain; pFactoryD3D11->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0 ); pFactoryD3D11->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain ); Creating Resources
Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:
BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:
TextureDesc TexDesc;
TexDesc.Name = "Sample 2D Texture";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

Initializing Pipeline State
Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).
      Creating Shaders
      To create a shader, populate ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
- SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
- SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
- SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:
- Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
- Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples may include diffuse textures, normal maps, etc.
- Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine.
      The following is an example of shader initialization:
ShaderCreationAttribs Attrs;
Attrs.Desc.Name = "MyPixelShader";
Attrs.FilePath = "MyShaderFile.fx";
Attrs.SearchDirectories = "shaders;shaders\\inc;";
Attrs.EntryPoint = "MyPixelShader";
Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

ShaderVariableDesc ShaderVars[] =
{
    {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
    {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
    {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
};
Attrs.Desc.VariableDesc = ShaderVars;
Attrs.Desc.NumVariables = _countof(ShaderVars);
Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

StaticSamplerDesc StaticSampler;
StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
StaticSampler.TextureName = "g_MutableTexture";
Attrs.Desc.NumStaticSamplers = 1;
Attrs.Desc.StaticSamplers = &StaticSampler;

ShaderMacroHelper Macros;
Macros.AddShaderMacro("USE_SHADOWS", 1);
Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
Macros.Finalize();
Attrs.Macros = Macros;

RefCntAutoPtr<IShader> pShader;
m_pDevice->CreateShader(Attrs, &pShader);

Creating the Pipeline State Object
To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:
// This is a graphics pipeline
PSODesc.IsComputePipeline = false;
PSODesc.GraphicsPipeline.NumRenderTargets = 1;
PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, rasterizer state can be defined as in the code snippet below:
// Init rasterizer state
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
//RasterizerDesc.MultisampleEnable = false; // do not allow MSAA (fonts would be degraded)
RasterizerDesc.AntialiasedLineEnable = False;

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:
m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

Binding Shader Resources
Shader resource binding in Diligent Engine is based on grouping variables into three groups (static, mutable and dynamic). Static variables are expected to be set only once; they may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. They are bound directly to the shader object:
      PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV ); Mutable and dynamic variables are bound via a new object called Shader Resource Binding (SRB), which is created by the pipeline state:
      m_pPSO->CreateShaderResourceBinding(&m_pSRB); Dynamic and mutable resources are then bound through SRB object:
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.
      Setting the Pipeline State and Invoking Draw Command
Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:
// Clear render target
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
m_pContext->SetPipelineState(m_pPSO);

Also, all shader resources must be committed to the device context:
m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes the DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example:
DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);

Tutorials and Samples
      The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.
- Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
- Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files and create and use vertex, index and uniform buffers.
- Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object. It shows how to load a texture from file, create a shader resource binding object and sample a texture in the shader.
- Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object using a unique transformation matrix for every copy.
- Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
- Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
- Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
- Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
- Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.
The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface.

      Atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. 

The repository includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

      Integration with Unity
Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube only visible as a reflection in a mirror.

    • By Yxjmir
I'm trying to load data from a .gltf file into a struct, which I then use to load a .bin file. I don't think there is a problem with how the vertex positions are loaded, but there seems to be one with the indices. This is what I get when drawing with glDrawArrays(GL_LINES, ...):

Also, using glDrawElements gives a similar result. Since it looks like it's drawing triangles using the wrong vertices for each face, I'm assuming it needs an index buffer/element buffer. (I'm not sure why there is a line going through part of it; it doesn't look like it belongs to a side. I re-exported the model without texture coordinates checked, and it's not there.)
I'm using jsoncpp to load the glTF file, whose format is based on JSON. Here is the GLTF struct I'm using, and how I parse the file:
#define GLTF_TARGET_ARRAY_BUFFER (34962)
#define GLTF_TARGET_ELEMENT_ARRAY_BUFFER (34963)
#define GLTF_COMPONENT_TYPE_BYTE (5120)
#define GLTF_COMPONENT_TYPE_UNSIGNED_BYTE (5121)
#define GLTF_COMPONENT_TYPE_SHORT (5122)
#define GLTF_COMPONENT_TYPE_UNSIGNED_SHORT (5123)
#define GLTF_COMPONENT_TYPE_INT (5124)
#define GLTF_COMPONENT_TYPE_UNSIGNED_INT (5125)
#define GLTF_COMPONENT_TYPE_FLOAT (5126)
#define GLTF_COMPONENT_TYPE_DOUBLE (5127)
#define GLTF_PARAMETER_TYPE_BYTE (5120)
#define GLTF_PARAMETER_TYPE_UNSIGNED_BYTE (5121)
#define GLTF_PARAMETER_TYPE_SHORT (5122)
#define GLTF_PARAMETER_TYPE_UNSIGNED_SHORT (5123)
#define GLTF_PARAMETER_TYPE_INT (5124)
#define GLTF_PARAMETER_TYPE_UNSIGNED_INT (5125)
#define GLTF_PARAMETER_TYPE_FLOAT (5126)
#define GLTF_PARAMETER_TYPE_FLOAT_VEC2 (35664)
#define GLTF_PARAMETER_TYPE_FLOAT_VEC3 (35665)
#define GLTF_PARAMETER_TYPE_FLOAT_VEC4 (35666)

struct GLTF
{
    struct Accessor
    {
        USHORT bufferView;
        USHORT componentType;
        UINT count;
        vector<INT> max;
        vector<INT> min;
        string type;
    };
    vector<Accessor> m_accessors;

    struct Asset
    {
        string copyright;
        string generator;
        string version;
    } m_asset;

    struct BufferView
    {
        UINT buffer;
        UINT byteLength;
        UINT byteOffset;
        UINT target;
    };
    vector<BufferView> m_bufferViews;

    struct Buffer
    {
        UINT byteLength;
        string uri;
    };
    vector<Buffer> m_buffers;

    vector<string> m_Images;

    struct Material
    {
        string name;
        string alphaMode;
        Vec4 baseColorFactor;
        UINT baseColorTexture;
        UINT normalTexture;
        float metallicFactor;
    };
    vector<Material> m_materials;

    struct Meshes
    {
        string name;
        struct Primitive
        {
            vector<UINT> attributes_indices;
            UINT indices;
            UINT material;
        };
        vector<Primitive> primitives;
    };
    vector<Meshes> m_meshes;

    struct Nodes
    {
        int mesh;
        string name;
        Vec3 translation;
    };
    vector<Nodes> m_nodes;

    struct Scenes
    {
        UINT index;
        string name;
        vector<UINT> nodes;
    };
    vector<Scenes> m_scenes;

    vector<UINT> samplers;

    struct Textures
    {
        UINT sampler;
        UINT source;
    };
    vector<Textures> m_textures;

    map<UINT, string> attributes_map;
    map<UINT, string> textures_map;
};
GLTF m_gltf;

// This is actually in the Mesh class
bool Mesh::Load(string sFilename)
{
    string sFileAsString;
    stringstream sStream;
    ifstream fin(sFilename);
    sStream << fin.rdbuf();
    fin.close();
    sFileAsString = sStream.str();

    Json::Reader r;
    Json::Value root;
    if (!r.parse(sFileAsString, root))
    {
        string errors = r.getFormatedErrorMessages();
        if (errors != "")
        {
            // TODO: Log errors
            return false;
        }
    }
    if (root.isNull())
        return false;

    Json::Value object;
    Json::Value value;

    // Load the accessors array; these are referenced by attributes with their index value
    object = root.get("accessors", Json::Value()); // store the object with key "accessors"; if not found it defaults to Json::Value()
    if (!object.isNull())
    {
        for (Json::ValueIterator it = object.begin(); it != object.end(); it++)
        {
            GLTF::Accessor accessor;
            value = (*it).get("bufferView", Json::Value());
            if (!value.isNull()) accessor.bufferView = value.asUInt(); else return false;
            value = (*it).get("componentType", Json::Value());
            if (!value.isNull()) accessor.componentType = value.asUInt(); else return false;
            value = (*it).get("count", Json::Value());
            if (!value.isNull()) accessor.count = value.asUInt(); else return false;
            value = (*it).get("type", Json::Value());
            if (!value.isNull()) accessor.type = value.asString(); else return false;
            m_gltf.m_accessors.push_back(accessor);
        }
    }
    else return false;

    object = root.get("bufferViews", Json::Value());
    if (!object.isNull())
    {
        for (Json::ValueIterator it = object.begin(); it != object.end(); it++)
        {
            GLTF::BufferView bufferView;
            value = (*it).get("buffer", Json::Value());
            if (!value.isNull()) bufferView.buffer = value.asUInt(); else return false;
            value = (*it).get("byteLength", Json::Value());
            if (!value.isNull()) bufferView.byteLength = value.asUInt(); else return false;
            value = (*it).get("byteOffset", Json::Value());
            if (!value.isNull()) bufferView.byteOffset = value.asUInt(); else return false;
            value = (*it).get("target", Json::Value());
            if (!value.isNull()) bufferView.target = value.asUInt(); else return false;
            m_gltf.m_bufferViews.push_back(bufferView);
        }
    }
    else return false;

    object = root.get("buffers", Json::Value());
    if (!object.isNull())
    {
        for (Json::ValueIterator it = object.begin(); it != object.end(); it++)
        {
            GLTF::Buffer buffer;
            value = (*it).get("byteLength", Json::Value());
            if (!value.isNull()) buffer.byteLength = value.asUInt(); else return false;
            // Store the filename of the .bin file
            value = (*it).get("uri", Json::Value());
            if (!value.isNull()) buffer.uri = value.asString(); else return false;
            m_gltf.m_buffers.push_back(buffer);
        }
    }
    else return false;

    object = root.get("meshes", Json::Value());
    if (!object.isNull())
    {
        for (Json::ValueIterator it = object.begin(); it != object.end(); it++)
        {
            GLTF::Meshes mesh;
            value = (*it).get("primitives", Json::Value());
            for (Json::ValueIterator value_it = value.begin(); value_it != value.end(); value_it++)
            {
                GLTF::Meshes::Primitive primitive;
                Json::Value attributes = (*value_it).get("attributes", Json::Value());
                vector<string> memberNames = attributes.getMemberNames();
                for (size_t i = 0; i < memberNames.size(); i++)
                {
                    Json::Value member = attributes.get(memberNames[i], Json::Value());
                    if (!member.isNull())
                    {
                        primitive.attributes_indices.push_back(member.asUInt());
                        // Each of these refers to an accessor by index, so each index should be unique, and they are when loading a cube
                        m_gltf.attributes_map[member.asUInt()] = memberNames[i];
                    }
                    else return false;
                }
                // Index of the accessor used for indices
                Json::Value indices = (*value_it).get("indices", Json::Value());
                primitive.indices = indices.asUInt();
                mesh.primitives.push_back(primitive);
            }
            m_gltf.m_meshes.push_back(mesh);
        }
    }

    vector<float> vertexData;
    vector<USHORT> indiceData;
    int vertexBufferSizeTotal = 0;
    int elementBufferSizeTotal = 0;
    GLTF::Meshes mesh = m_gltf.m_meshes[0];
    vector<GLTF::Meshes::Primitive> primitives = mesh.primitives; // trying to make the code easier to read
    for (size_t p = 0; p < primitives.size(); p++)
    {
        vector<UINT> attributes = primitives[p].attributes_indices;
        for (size_t a = 0; a < attributes.size(); a++)
        {
            GLTF::Accessor accessor = m_gltf.m_accessors[attributes[a]];
            GLTF::BufferView bufferView = m_gltf.m_bufferViews[accessor.bufferView];
            if (bufferView.target == GLTF_TARGET_ARRAY_BUFFER)
                vertexBufferSizeTotal += bufferView.byteLength;
        }
        UINT indice = primitives[p].indices;
        GLTF::BufferView bufferView = m_gltf.m_bufferViews[indice];
        if (bufferView.target == GLTF_TARGET_ELEMENT_ARRAY_BUFFER)
            elementBufferSizeTotal += bufferView.byteLength;
    }

    // These have already been generated
    glBindVertexArray(g_pGame->m_VAO);
    glBindBuffer(GL_ARRAY_BUFFER, g_pGame->m_VBO);
    glBufferData(GL_ARRAY_BUFFER, vertexBufferSizeTotal, nullptr, GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_pGame->m_EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, elementBufferSizeTotal, nullptr, GL_STATIC_DRAW);

    int offset = 0;
    int offset_indice = 0;
    for (size_t p = 0; p < primitives.size(); p++)
    {
        vector<UINT> attributes = primitives[p].attributes_indices;
        int pos = sFilename.find_last_of('\\') + 1;
        string sFolder = sFilename.substr(0, pos);
        for (size_t a = 0; a < attributes.size(); a++)
        {
            LoadBufferView(sFolder, attributes[a], vertexData, indiceData, offset);
        }
        UINT indice = primitives[p].indices;
        bool result = LoadBufferView(sFolder, indice, vertexData, indiceData, offset_indice);
        if (!result)
            return false;
    }
    return true;
}

bool Mesh::LoadBufferView(string sFolder, UINT a, vector<float> &vertexData, vector<USHORT> &indiceData, int &offset)
{
    ifstream fin;
    GLTF::Accessor accessor = m_gltf.m_accessors[a];
    GLTF::BufferView bufferView = m_gltf.m_bufferViews[accessor.bufferView];
    GLTF::Buffer buffer = m_gltf.m_buffers[bufferView.buffer];
    const size_t count = accessor.count;
    UINT target = bufferView.target;
    int elementSize;
    int componentSize;
    int numComponents;

    string sFilename_bin = sFolder + buffer.uri;
    fin.open(sFilename_bin, ios::binary);
    if (fin.fail())
        return false;
    fin.seekg(bufferView.byteOffset, ios::beg);

    switch (accessor.componentType)
    {
    case GLTF_COMPONENT_TYPE_BYTE:           componentSize = sizeof(GLbyte);   break;
    case GLTF_COMPONENT_TYPE_UNSIGNED_BYTE:  componentSize = sizeof(GLubyte);  break;
    case GLTF_COMPONENT_TYPE_SHORT:          componentSize = sizeof(GLshort);  break;
    case GLTF_COMPONENT_TYPE_UNSIGNED_SHORT: componentSize = sizeof(GLushort); break;
    case GLTF_COMPONENT_TYPE_INT:            componentSize = sizeof(GLint);    break;
    case GLTF_COMPONENT_TYPE_UNSIGNED_INT:   componentSize = sizeof(GLuint);   break;
    case GLTF_COMPONENT_TYPE_FLOAT:          componentSize = sizeof(GLfloat);  break;
    case GLTF_COMPONENT_TYPE_DOUBLE:         componentSize = sizeof(GLfloat);  break;
    default:                                 componentSize = 0;                break;
    }

    if (accessor.type == "SCALAR")    numComponents = 1;
    else if (accessor.type == "VEC2") numComponents = 2;
    else if (accessor.type == "VEC3") numComponents = 3;
    else if (accessor.type == "VEC4") numComponents = 4;
    else if (accessor.type == "MAT2") numComponents = 4;
    else if (accessor.type == "MAT3") numComponents = 9;
    else if (accessor.type == "MAT4") numComponents = 16;
    else return false;

    // I'm pretty sure this is one of the problems, or related to it. If I use vector<USHORT>,
    // only half of the vector is filled; if I use GLubyte, the entire vector is filled,
    // but the data might not be right
    vector<float> fSubdata;
    vector<GLubyte> nSubdata;
    elementSize = componentSize * numComponents;

    // Only fill the vector I'm using
    if (accessor.type == "SCALAR")
    {
        nSubdata.resize(count * numComponents);
        // I commented the size multiplier out since I'm not sure which size the .bin stores
        // the indice values as, and I kept getting runtime errors no matter what type I used for nSubdata
        fin.read(reinterpret_cast<char*>(&nSubdata[0]), count/* * elementSize*/);
    }
    else
    {
        fSubdata.resize(count * numComponents);
        fin.read(reinterpret_cast<char*>(&fSubdata[0]), count * elementSize);
    }

    switch (target)
    {
    case GLTF_TARGET_ARRAY_BUFFER:
    {
        vertexData.insert(vertexData.end(), fSubdata.begin(), fSubdata.end());
        glBindBuffer(GL_ARRAY_BUFFER, g_pGame->m_VBO);
        glBufferSubData(GL_ARRAY_BUFFER, offset, fSubdata.size() * componentSize, &fSubdata[0]);
        int attribute_index = 0; // I'm only loading vertex positions, the only attribute stored in the files for now
        glEnableVertexAttribArray(attribute_index);
        glVertexAttribPointer(0, numComponents, GL_FLOAT, GL_FALSE, componentSize * numComponents, (void*)(offset));
    } break;
    case GLTF_TARGET_ELEMENT_ARRAY_BUFFER:
    {
        indiceData.insert(indiceData.end(), nSubdata.begin(), nSubdata.end());
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_pGame->m_EBO);
        // This is another area where I'm not sure of the correct values, but if componentSize
        // is the correct size for the type being used it should be correct -- glBufferSubData
        // is expecting the size in bytes, right?
        glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, offset, nSubdata.size() * componentSize, &nSubdata[0]);
    } break;
    default:
        return false;
    }

    if (accessor.type == "SCALAR")
        offset += nSubdata.size() * componentSize;
    else
        offset += fSubdata.size() * componentSize;

    fin.close();
    return true;
}

These are the draw calls. I only use one at a time, but neither currently displays properly. g_pGame->m_indices is the same as the indiceData vector, and vertexCount contains the correct vertex count, but I forgot to copy the lines of code where I set them (at the end of Mesh::Load()); I double-checked the values to make sure.
      glDrawElements(GL_LINES, g_pGame->m_indices.size(), GL_UNSIGNED_BYTE, (void*)0); // Only shows with GL_UNSIGNED_BYTE
      glDrawArrays(GL_LINES, 0, g_pGame->m_vertexCount);
So, I'm asking: what type should I use for the indices? It doesn't seem to be unsigned short, which is what I selected with the Khronos Group exporter for Blender. Also, am I reading part or all of the .bin file wrong?
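For reference, a short illustrative sketch (the helper names below are made up, not from the code above) of how the index accessor is usually interpreted: in glTF, componentType 5123 means unsigned short and 5125 unsigned int, and a binary read of the indices needs count * componentSize bytes. Reading only count bytes, as in the commented-out read above, would fill exactly half of a vector<USHORT>, which matches the symptom described.

#include <cstdint>
#include <fstream>
#include <vector>
#include <GL/glew.h>

struct IndexInfo { GLenum glType; size_t componentSize; };

// Map a glTF componentType to the matching GL index type and its byte size.
IndexInfo GetIndexInfo(unsigned componentType)
{
    switch (componentType)
    {
    case 5121: return { GL_UNSIGNED_BYTE,  1 }; // GLTF_COMPONENT_TYPE_UNSIGNED_BYTE
    case 5123: return { GL_UNSIGNED_SHORT, 2 }; // GLTF_COMPONENT_TYPE_UNSIGNED_SHORT
    case 5125: return { GL_UNSIGNED_INT,   4 }; // GLTF_COMPONENT_TYPE_UNSIGNED_INT
    default:   return { 0, 0 };                 // not a valid glTF index type
    }
}

// Read the raw index bytes and keep them as bytes; pass info.glType to
// glDrawElements rather than converting the indices element by element.
std::vector<uint8_t> ReadIndices(std::ifstream &fin, size_t byteOffset,
                                 size_t count, const IndexInfo &info)
{
    std::vector<uint8_t> bytes(count * info.componentSize); // count elements, not count bytes
    fin.seekg(byteOffset, std::ios::beg);
    fin.read(reinterpret_cast<char*>(bytes.data()), bytes.size());
    return bytes;
}

With the unsigned shorts the Blender exporter writes, the draw call would then be, e.g., glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void*)0).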
    • By ritzmax72
That is, how do I use the base DirectX or OpenGL APIs to make a physics-based destruction simulation?
Will it be just smart rendering, or is something else required?
    • By jsquare89
I am somewhat new to game development and am trying to create a basic 3D engine. I have managed to set up a first-person camera, and it seems to be working fine for the most part. While I am able to look up, down, left and right just fine, the camera is constrained to the mouse movement in the window (i.e. when the mouse reaches the edges of the window it discontinues camera rotation, and the mouse goes out of the window bounds). I tried to use SDL_WarpMouseInWindow(window, center.x, center.y), but when I do this it messes up the camera and the camera is stuck; even though there is some slight movement of the camera, it keeps going back to the center.
void Camera::UpdateViewByMouse(SDL_Window &window, glm::vec2 mousePosition)
{
    float xDistanceFromWindowCenter = mousePosition.x - ((float)1024 / 2);
    float yDistanceFromWindowCenter = ((float)720 / 2) - mousePosition.y;
    yaw = xDistanceFromWindowCenter * cameraRotationSpeed;
    pitch = yDistanceFromWindowCenter * cameraRotationSpeed;
    SDL_WarpMouseInWindow(&window, 1024 / 2, 768 / 2);
}
I've been stuck on this for far too long. Any help would be much appreciated.
I have also tried relative mouse movement, using .xrel and .yrel while polling the mouse state with SDL_Event, to no avail. I do also know that SDL_WarpMouseInWindow generates a mouse event, and I have tried ignoring and re-enabling events, also to no avail.
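Two things stand out in the snippet above: SDL_WarpMouseInWindow itself generates a mouse-motion event that feeds straight back into the camera update, and the center is computed with a height of 720 while the warp uses 768/2, which alone would produce a constant pitch offset. A common way to sidestep both problems is SDL 2's relative mouse mode, which reports deltas and never lets the cursor hit the window edge. A minimal sketch, assuming the poster's yaw/pitch/cameraRotationSpeed variables:

#include <SDL.h>

// Once, at initialization: SDL hides the cursor and reports pure deltas,
// so no warping (and no snap-back) is needed at all.
SDL_SetRelativeMouseMode(SDL_TRUE);

// In the event loop:
SDL_Event e;
while (SDL_PollEvent(&e))
{
    if (e.type == SDL_MOUSEMOTION)
    {
        // xrel/yrel are deltas since the last event, not positions,
        // so accumulate them instead of recomputing from the window center.
        yaw   += e.motion.xrel * cameraRotationSpeed;
        pitch -= e.motion.yrel * cameraRotationSpeed; // sign depends on your pitch convention
    }
}

Note the +=: assigning (=) a value derived from the distance to the center, as in the original function, discards all previous rotation each frame, which matches the "keeps going back to the center" symptom.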


This topic is now archived and is closed to further replies.

OpenGL Character Animation in OpenGL


Recommended Posts

Hey All, I have been creating a game engine using OpenGL for a while. It's going fairly well. I have written a model loader that loads a custom model format, I have shadows, lighting, and tons of other snazzy features, and things are looking great. I pull in wonderful framerates... whatever... I only say this cuz I'm proud of my little creation, it being my first serious OpenGL endeavour since I learned the API :D Anyway, one thing I don't know how to do is any sort of model animation, and I was hoping some of you guys might point me in the right direction so that I may learn more about it. I have been toying around with various approaches. I was considering saving my model in a series of "frames", that is, intermediate poses, and cycling through them during animation as you would a 2D sprite. Is this a simple and feasible way of doing this? What is the best way to get your feet wet in what is obviously a very complicated part of 3D programming? Thank you very much.


I was considering saving my model in a series of "frames", that is, intermediate poses, and cycling through them during animation as you would a 2D sprite. Is this a simple and feasible way of doing this?

Certainly. This is the form that the Quake2 MD2 animations take. With this method (sometimes called vertex key-framing) you can use interpolation to smoothly transition from one frame to the next. Keyframes can be generated for, say, every 4 or 5 game display frames, then an interpolant between 0 and 1 is used to smoothly morph from the previous keyframe to the next keyframe. This eliminates the jerkiness that is inherent to key-framed animation (very visible with 2D sprite graphics, as interpolation is not feasible). To start with, this is a good animation route to take.

Another (more advanced) method is skeletal animation. In vertex keyframing, memory usage can get out of hand for large models and long animations, as each keyframe requires a local copy of the vertex data. In skeletal animation, a model is constructed and animated as a "skin" attached to a hierarchical frame, or skeleton, of "bones". Each vertex is transformed by a bone or a set of bones with weighting factors applied. Bones are represented as quaternions or some other form of rotation, a location relative to the parent hierarchy, and an endpoint to which any children in the chain are attached. With skeletal animation, instead of storing each vertex per keyframe, you can merely store the keyframe's bone orientations and maintain one reference copy of the mesh in its rest pose. Vertex transformations are performed on the fly, generated by transforming the reference copy of the mesh either into a temporary buffer or with the use of a vertex shader, and displaying the transformed geometry.
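To make that per-vertex step concrete, here is a rough CPU-side sketch (the type and function names are illustrative; a vertex shader version is structured the same way):

#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; }; // column-major 4x4

// Apply the affine part of a bone matrix to a point.
Vec3 Transform(const Mat4 &M, const Vec3 &v)
{
    return { M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12],
             M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13],
             M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14] };
}

struct SkinnedVertex
{
    Vec3  restPosition; // from the reference copy of the mesh in its rest pose
    int   bone[4];      // indices of up to four influencing bones
    float weight[4];    // weighting factors, summing to 1
};

// Blend the transform of every influencing bone, weighted per vertex.
Vec3 SkinVertex(const SkinnedVertex &v, const std::vector<Mat4> &bonePalette)
{
    Vec3 out = {0, 0, 0};
    for (int i = 0; i < 4; ++i)
    {
        Vec3 p = Transform(bonePalette[v.bone[i]], v.restPosition);
        out.x += p.x * v.weight[i];
        out.y += p.y * v.weight[i];
        out.z += p.z * v.weight[i];
    }
    return out;
}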

Skeletal animation offers additional benefits besides just decreased memory usage. With vertex key-framing, interpolation from one keyframe to the next is fastest accomplished with linear interpolation. However, in the case of mesh components that are hierarchical in nature and constrained, this can result in visual artifacts. Instead of, say, an arm rotating around an elbow joint, the mesh instead linearly morphs from one orientation to the next, causing shortening of the arm over the course of the movement. With keyframes that are spaced closely together this sort of shortening is not obvious. But in the case of rapidly moving animation with keyframes far apart, it becomes more apparent. Skeletal animation interpolates the orientation of the mesh sections, not their final positions, so that the arm is seen to rotate around the elbow joint rather than trying to morph "through" it.

With skeletal animation, it is also easier to compose different animation sequences together. For instance, it is easy to generate an animation sequence for walking, which modifies only the leg bones and perhaps the torso to a slight degree. Other animations might affect only the head, for head turning or gaze tracking, or the arms such as when a sword or other weapon is swung. With vertex key-framing, character multi-tasking in this manner is limited, but with skeletal animation these different animations which affect different parts of the body can easily be combined together, so that the character is seen to walk, turn his head, and swing his sword all at once.

Skeletal animation also allows the possibility of realistic physics, such as ragdoll physics and the like, wherein the members of a body are subject to real-time effects of impact, gravity, application of force, etc... Animations can then be generated that are not limited to what the designer creates in an animation package, but instead follow more (hopefully) realistic sequences created by the laws of the physics system applied.

As is to be expected, these latter methods can get to be extremely complex, and are not always suited to the application. Skeletal animation itself is far more complex, and thus more resource intensive, than simple vertex interpolation. Shaders and vertex programs can offload the grunt work of vertex transformation to the GPU, as long as programs are hardware supported. Realistic physics, too, add their complications, though some of these difficulties can be overcome by using pre-made physics packages such as OpenDE or Tokamak or others, which can handle many of the tricky calculations and allow you to concentrate on the big picture.

But not all games require realistic, powerful physics or modelling simulations. If you are willing to constrain animation to pre-packaged movements, ala traditional 2D animation, vertex key-framing can be more than sufficient for your needs. Even in fantastically modelled simulations, simple vertex morphing can have its place, in animations not so well suited to hierarchical skeletal arrangement.

Blender--The Gimp--Python--Lua--SDL


Thanks a lot for that excellent, well-thought-out, and articulate reply. Because of your thorough explanation, I realize that linear interpolation sounds like the way I want to go.

My game is a turn-based RPG, so it doesn't need fancy physics or anything like that. My models are also not incredibly complex, so I don't think the memory usage will be too over the top.

I found some tutorials using .md2 models over at DigiBen. It has a good explanation of how it's done in the header file. Though my game doesn't use MD2s, but rather a much simpler format devised by myself, the tutorial still gives a good enough understanding to hack through it for my personal needs.

However, if anyone has any advice, links, or relevant suggestions concerning linear interpolation, please, by all means, let me know.

Thank you very much.

Linear interpolation is really very simple.

Say you have two points: PointA and PointB. Each is a vertex in a mesh (x, y, and z components). The standard linear interpolation function is:

c = a + t*(b-a)

Where t is a number in the range [0, 1].

So, if PointA=(5, 2, 12) and PointB=(18, 4, 20) then we can find any location in between using the above formula. For instance, at t=0.5 (the exact midpoint between the two points):

PointC = PointA + t*(PointB-PointA);

PointC.x = 5 + 0.5*(18-5) = 11.5
PointC.y = 2 + 0.5*(4-2) = 3
PointC.z = 12 + 0.5*(20-12) = 16

So, PointC at t=0.5 is equal to (11.5, 3, 16)

Now, each keyframe of the animation is going to consist of an array or list of vertices for the entire mesh for that frame. An animating object needs to keep track of two keyframes: Previous and Next. It will also track a current value for t, which will increase in small increments each time the game logic updates. Each object will track a Previous and Next Location as well.

Now, when you render the scene you need to generate a current "snapshot" of the object as it stands at that point in time, using its t value. You apply the linear interpolation equation above to the PreviousLocation and NextLocation to generate an intermediate location -- the object's location at that point in time. By the same token, you apply the linear interpolation equation to each vertex in PreviousFrame and NextFrame, to generate the in-between frame for time=t.

t can be updated and manipulated to increase in increments as fine as you need or as the frame-rate will allow. Each time it is incremented, you check to see if it goes greater than 1. If so, then you need to wrap it back around by subtracting one, then advance your animation and positional data. PreviousFrame is set to NextFrame, and a new NextFrame is generated to continue the animation. PreviousLocation is set to NextLocation, and a new location is generated for NextLocation. And so on, and so on.

The way I do it is I space all of my key-frames a constant number of frames apart (say, 4 logic frames per keyframe). This way, each time the game logic updates, I can advance t by 0.25 (1 / UpdateRate, or 1/4 for UpdateRate=4) to generate the in between frames. So, in sequence the render will draw frames at t=0 (or, PreviousFrame), t=0.25, t=0.5, t=0.75, t=1.0 (or, NextFrame). At t=1.0, I advance a frame and wrap t back around to 0 to start again.
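Pulling that together, a minimal sketch of the bookkeeping described above (the names are illustrative, not from the posts):

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// c = a + t*(b - a), applied per component
Vec3 Lerp(const Vec3 &a, const Vec3 &b, float t)
{
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

struct KeyframeAnimator
{
    std::vector<Vec3> prev, next; // the two keyframes we are between
    float t = 0.0f;               // 0..1, advanced each logic tick

    // Advance by 1/UpdateRate per logic tick (0.25 for 4 frames per keyframe);
    // when t passes 1, wrap it around and move on to the following keyframe.
    void Tick(float step, const std::vector<Vec3> &followingKeyframe)
    {
        t += step;
        if (t >= 1.0f)
        {
            t -= 1.0f;
            prev = next;
            next = followingKeyframe;
        }
    }

    // Build the in-between "snapshot" of the mesh for rendering at the current t.
    void Snapshot(std::vector<Vec3> &out) const
    {
        out.resize(prev.size());
        for (std::size_t i = 0; i < prev.size(); ++i)
            out[i] = Lerp(prev[i], next[i], t);
    }
};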

There are other methods for interpolation besides linear. Linear interpolation, as the name implies, can determine arbitrary points along a straight line between point a and point b-- thus the flattening or shortening of rotating arms and the like. Other forms of interpolation can approximate a curve between points rather than a straight line, thus possibly resulting in smoother, more realistic animation with less flattening and distortion.

Cosine interpolation is calculated thus:

#include <cmath>

float CosineInterpolate(float a, float b, float t)
{
    float ft = t * 3.1415927f;
    float f = (1 - cos(ft)) * 0.5f;
    return a*(1 - f) + b*f;
}

This form of interpolation connects two points with an approximation of a curve. I say approximation, because that is all it is. A true curve takes into account more information to generate a smoother path. Such a true curve might be cubic interpolation:

Given: Points a, b, c, d as control points (keyframes) on the curve

Point CubicInterpolate(Point a, Point b, Point c, Point d, float t)
{
    Point e = (d - c) - (a - b);
    Point f = (a - b) - e;
    Point g = c - a;
    Point h = b;
    return e*t*t*t + f*t*t + g*t + h;
}

This gives a much smoother, more continuous curve connecting the points, but at the cost of having to remember 4 points: two Previous and two Next points. This may not be suitable for animation which can change state between one frame and the next, so it probably is not appropriate for vertex keyframing.

But, like I said before, if you generate your key-frames close enough together, any distortion from linear interpolation can be minimized so as to be nearly undetectable. More complex interpolation requires more processing time, which can have an effect on framerate when you need to do several hundred thousand interpolations per frame for a lot of objects.

If you structure your game loop correctly, it is possible to not only interpolate from one key frame to the next by frames (ie, advancing t by 0.25 each time, or whatever); it is also possible to go even finer, interpolating between even these intermediate frames to a degree allowed by the video frame rate.

For example, say I am running my simulation logic updating at 25 frames per second, advancing the game logic one step every 40 ms. That means that every 40 ms, t for all object animations advances by 0.25 (assuming, of course, that is the update interval I chose). 25 fps is great for complicated logic, as it gives a decent CPU plenty of time to calculate a frame and do all of the AI, but the drawback is it locks the video frame rate to 25 fps as well. t only increments every 40ms, so in the meantime we are just drawing the same exact scene over and over until t increments again. Our video card might be capable of 900 fps, but the game locks it to 25 by forcing it to redraw the same scene over and over for most of the time.

What we want to be able to do instead is interpolate from the Previous keyframe to the Next keyframe to generate the current frame, then advance the animation even further based on how far into the next frame we are. It's a little complicated, so I won't describe it in depth here. The implementation I use is pretty much exactly as Javier Arevalo details in his Tip of the Day at Flipcode.com. In his algorithm, he performs game logic updates at a fixed time step, and in the meantime calculates an interpolation factor (PercentWithinTick) to determine how far along we are between this tick and the next. All rendering functions can use this factor to further interpolate animation and smooth things out.

Consider the case where we advance t for a model by 0.25 each tick. Now, say at a given point in time the loop calculates PercentWithinTick to be 0.5, meaning we are halfway to the next update tick. We can apply this PercentWithinTick to the update rate (PercentWithinTick * 0.25) and add this to an object's current t value to account for how far into the next tick we are. This has the effect of smoothing out our 25 fps video framerate to take advantage of all the fps the card can pile on. 25 fps jerks or steps in animation are interpolated and smoothed. It works very nicely, but can be a little difficult to understand at first.
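For concreteness, a bare-bones sketch of that loop structure (GameIsRunning, UpdateLogic and Render are hypothetical stand-ins for your own functions):

#include <chrono>

bool GameIsRunning();
void UpdateLogic();                    // advances every object's t by its per-tick step
void Render(double percentWithinTick); // adds percentWithinTick * perTickStep to each t before interpolating

void RunGameLoop()
{
    using clock = std::chrono::steady_clock;
    const double tickSeconds = 1.0 / 25.0; // 40 ms per logic update
    double accumulator = 0.0;
    auto last = clock::now();

    while (GameIsRunning())
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - last).count();
        last = now;

        // Run as many fixed updates as the elapsed time demands.
        while (accumulator >= tickSeconds)
        {
            UpdateLogic();
            accumulator -= tickSeconds;
        }

        // How far we are into the next tick, in [0, 1).
        Render(accumulator / tickSeconds);
    }
}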

Anyway, I hope this helps, and I hope my wandering all over the place hasn't confused you. Good luck and have fun.

Blender--The Gimp--Python--Lua--SDL


I just had to reply with a tremendous thank you!

Your fantastic post totally got my model animated and walking around my 3D world!

Thank you so much.

Guest Anonymous Poster
Superb post, Vertex Normal. Very informative.

