Grumple

Member

  • Content Count: 65
  • Community Reputation: 177 Neutral

About Grumple

  • Rank: Member

Personal Information

  • Interests: Design, Programming
  1. Just to follow up on this in case anyone is interested: I tested the implementation described above and it seems to work fine. After applying a uniform scale to all mesh vertices, to the translation components of the joint transforms and offset matrices, and to any translation key frames in the animations, everything came out at the expected size with no (so far) unexpected problems. That said, I've only tested a few models, and every time I think I understand all the elements at play in skinned animation, something new smacks me in the face...
  2. Hello,

     As part of the 'asset pipeline' for my custom engine I've built a model importer using assimp. The goal is to be able to import models from TurboSquid, etc. into a proprietary format without having to significantly rework them. It has worked fairly well, but my game is built around 1 unit = 1 metre, while FBX (my preferred import format) tends to use 1 unit = 1 cm. On top of that, a lot of assets from the asset stores look great but are bigger or smaller than I'd like, regardless of units of measurement.

     I'd like to 'pre-scale' these models on import into my own model file format without just using a scale matrix (other than this import scale issue I have no need for scaling in the engine, and it complicates a lot of other systems). I've spent a week battling Blender with my entry-level modeling skills, and can sort of work my way through its crazy FBX unit issues to scale down the base model/armature, but as far as I can tell there is no simple way to just 'apply scale' to animations without a lot of manual work in the graph editor. Assimp has a 'global scale' option in newer versions, but that seems to just apply a scale transform to the root node.

     Given that I just want to apply a uniform scale to every aspect of the model, it feels like during my model import 'post process' I should be able to scale the translation of all bone transform/offset matrices and scale the vertices of all bind-pose mesh data. If I apply the same scale to any location/translation key frames in the animations, shouldn't this all 'just work'? (A rough sketch of what I mean follows below.) That way I could store my models in my proprietary/engine format at the scale I want, without any screwing around at run time inside the game itself.

     Any insight would be greatly appreciated. Usually I have no trouble finding good reading about game programming issues, but this seems to be a surprisingly murky subject.
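
     For concreteness, here is a minimal sketch of the kind of post-process I have in mind, assuming the data is still sitting in assimp structures right after import. The two helper functions and the decision to bake the scale this way are my own; only the assimp types and members are real.

     #include <assimp/scene.h>

     // Recursively scale the translation part of every node transform.
     static void ScaleNodeTranslations(aiNode* node, float s)
     {
         node->mTransformation.a4 *= s;   // translation X
         node->mTransformation.b4 *= s;   // translation Y
         node->mTransformation.c4 *= s;   // translation Z
         for (unsigned int i = 0; i < node->mNumChildren; ++i)
             ScaleNodeTranslations(node->mChildren[i], s);
     }

     // Bake a uniform scale into an imported scene: vertices, bone offset
     // matrices, node transforms, and animation position keys all get the
     // same factor.
     static void BakeUniformScale(aiScene* scene, float s)
     {
         for (unsigned int m = 0; m < scene->mNumMeshes; ++m)
         {
             aiMesh* mesh = scene->mMeshes[m];
             for (unsigned int v = 0; v < mesh->mNumVertices; ++v)
                 mesh->mVertices[v] *= s;                    // bind-pose positions

             for (unsigned int b = 0; b < mesh->mNumBones; ++b)
             {
                 aiMatrix4x4& offset = mesh->mBones[b]->mOffsetMatrix;
                 offset.a4 *= s;                             // inverse-bind translation
                 offset.b4 *= s;
                 offset.c4 *= s;
             }
         }

         ScaleNodeTranslations(scene->mRootNode, s);

         for (unsigned int a = 0; a < scene->mNumAnimations; ++a)
             for (unsigned int c = 0; c < scene->mAnimations[a]->mNumChannels; ++c)
             {
                 aiNodeAnim* channel = scene->mAnimations[a]->mChannels[c];
                 for (unsigned int k = 0; k < channel->mNumPositionKeys; ++k)
                     channel->mPositionKeys[k].mValue *= s;  // translation key frames
             }
     }
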
  3. Just an update: after a bunch of debugging I figured out my problem, and it was my fault. Here is a quick summary in case it helps someone else later. When running my animation update I was building a list of 'animation joint transforms' for my model and defaulting them all to identity, then running through the animation's channels, interpolating key frames and updating the transform for every channel. The problem was that when an animation didn't affect a particular joint (i.e. there was no channel for that joint), the joint stayed at identity instead of falling back to the parent-relative 'static pose' transform for that joint, which seems to be the expected default behavior. A small sketch of the fix follows below.
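
     In other words, the per-frame loop now starts from the static pose rather than identity. This is a sketch with my own minimal stand-in types, not engine code:

     #include <vector>

     // Minimal hypothetical types for illustration only.
     struct Mat4  { float m[16]; };
     struct Joint { Mat4 localStaticTransform; };   // parent-relative bind/static pose

     struct Channel
     {
         int  jointIndex;
         Mat4 sampledPose;                          // stand-in for real key-frame interpolation
         Mat4 Interpolate(float /*timeSec*/) const { return sampledPose; }
     };

     struct Skeleton      { std::vector<Joint>   joints;   };
     struct AnimationClip { std::vector<Channel> channels; };

     // Per-frame pose evaluation: every joint starts from its parent-relative
     // static pose; only joints with an animation channel get overwritten.
     void EvaluatePose(const Skeleton& skeleton, const AnimationClip& clip,
                       float timeSec, std::vector<Mat4>& localPose)
     {
         localPose.clear();
         localPose.reserve(skeleton.joints.size());
         for (const Joint& joint : skeleton.joints)
             localPose.push_back(joint.localStaticTransform);   // NOT identity

         for (const Channel& channel : clip.channels)
             localPose[channel.jointIndex] = channel.Interpolate(timeSec);
     }
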
  4. Hi guys, I've been working on a new Vulkan-based engine and doing skinned animation support for the first time. I've got everything set up and seemingly working: I can do a basic import of an FBX model I downloaded from Sketchfab and play its idle animation more or less correctly, which was very exciting to see. However, I'm guessing I'm missing some rule of model import processing, because the 'default pose' for the model comes in oriented so that the character is standing with the Y axis up, but as soon as I launch him into his idle animation he switches to Z axis up. I've seen some mention of applying the inverse bind pose matrix to the joint itself on import and thought that might be part of my issue, but otherwise I can't think of what would be causing this? For reference, the skinning matrix composition I'm assuming is sketched below. Thanks!
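
     The palette computation I'm assuming is the standard one, roughly as sketched here with my own names (the sceneRootInverse term is where a scene-level axis conversion, e.g. FBX Z-up to engine Y-up, would normally live, which is one place this kind of flip can sneak in):

     #include <vector>

     struct Mat4 { float m[16]; };                 // placeholder matrix type
     Mat4 operator*(const Mat4& a, const Mat4& b); // assume a real math library provides this

     struct JointData
     {
         int  parentIndex;          // -1 for the root joint
         Mat4 animatedLocal;        // parent-relative transform from the current frame
         Mat4 inverseBindMatrix;    // the joint's offset matrix from import
     };

     // Standard skinning palette: walk the hierarchy to build each joint's
     // global transform, then multiply by its inverse bind matrix.  If the
     // importer baked an axis conversion into the scene root and the
     // animation path skips that root, the character flips axes as described.
     void BuildPalette(const std::vector<JointData>& joints,
                       const Mat4& sceneRootInverse, std::vector<Mat4>& palette)
     {
         std::vector<Mat4> global(joints.size());
         palette.resize(joints.size());

         for (size_t j = 0; j < joints.size(); ++j)   // parents stored before children
         {
             global[j] = (joints[j].parentIndex < 0)
                             ? joints[j].animatedLocal
                             : global[joints[j].parentIndex] * joints[j].animatedLocal;

             palette[j] = sceneRootInverse * global[j] * joints[j].inverseBindMatrix;
         }
     }
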
  5. Yeah, that could very well be it; I'm just looking to rule out that it 'shouldn't compile' for some reason. I have no good excuse for the old compiler other than this being the first issue that has really made me feel the need to change. Having seen this code compile on a friend's VS2017 setup, I will likely be updating ASAP... Thanks!
  6. I have a general C++ template programming question. I've tried to set up a class that uses a nested template type T, along with variadic template arguments for a constructor of T, so that I can add items to a custom container class without any unnecessary heap usage/copy/move operations.

     Here is my test container code, using perfect forwarding and std::vector::emplace_back() to create items 'in place' with little overhead (TemplateContainerTest.h):

     #pragma once

     #include <vector>
     #include <unordered_map>
     #include <utility>

     template <typename T>
     class TemplateContainerTest
     {
     public:
         TemplateContainerTest() = default;
         virtual ~TemplateContainerTest() = default;

         template <typename... ItemArgs>
         void AddItem( ItemArgs&&... itemArgs )
         {
             m_Container.emplace_back( std::forward<ItemArgs>( itemArgs )... );
         }

     protected:
         template <typename T>
         class ItemTracker
         {
         public:
             template <typename... ItemArgs>
             ItemTracker( ItemArgs&&... itemArgs ) :
                 m_Item( std::forward<ItemArgs>( itemArgs )... )
             {
             }

             bool m_IsValid = false;
             T    m_Item;
         };

         std::vector< ItemTracker<T> > m_Container;
     };

     And here is some code to exercise the template above (main.cpp):

     #include "stdafx.h"
     #include <stdint.h>

     #include "TemplateContainerTest.h"

     class TestItemOfInterest
     {
     public:
         TestItemOfInterest( uint32_t itemVal ) :
             m_ItemVal( itemVal )
         {
         }

         TestItemOfInterest( TestItemOfInterest&& other )
         {
             m_ItemVal = other.m_ItemVal;
             other.m_ItemVal = 0;
         }

         TestItemOfInterest() = default;
         virtual ~TestItemOfInterest() = default;

         uint32_t GetVal() { return m_ItemVal; }

     protected:
         uint32_t m_ItemVal = 0;
     };

     int _tmain(int argc, _TCHAR* argv[])
     {
         TemplateContainerTest<TestItemOfInterest> tmpContainer;
         tmpContainer.AddItem( 42 );
         return 0;
     }

     Here is the kicker: in Visual Studio 2013, the code above fails to compile with the following error:

     templatecontainertest.h(28): error C2664: 'TestItemOfInterest::TestItemOfInterest(const TestItemOfInterest &)' : cannot convert argument 1 from 'TemplateContainerTest<TestItemOfInterest>::ItemTracker<T>' to 'uint32_t'

     However, in Visual Studio 2017 it compiles fine. For some reason, the perfect-forwarding mechanism in Visual Studio 2013 seems to try to pass the ItemTracker itself into the T() constructor instead of just the arguments forwarded from outside. I see that the VS2017 std::vector::emplace_back signature/implementation changed, but I can't understand why it works now and didn't work before. Any insight would be appreciated, as I don't trust this at all without understanding the underlying issues...
  7. Hi, I can't find a direct answer to this anywhere, so hopefully I can get one here. I've implemented a relatively simple transform feedback shader that reads elements from one VBO and writes them to another. Then I do a glGetBufferSubData() to read the results back to client memory. Now, I don't seem to have any trouble just executing my transform feedback draw and then reading back the VBO without an explicit glFinish in between, but I'm worried that I'm just getting lucky timing-wise. I don't want to run into issues with reading partially populated feedback buffers, etc. Does anyone know for certain whether I need a glFinish between the transform feedback draw call and a glGetBufferSubData() that reads the transform feedback output back to client memory? The call sequence I'm using is sketched below. Thanks!
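
     Roughly, the sequence in question looks like this. It is a simplified sketch that assumes the program and source VAO are already bound; the function name and the vec4-per-vertex output layout are placeholders of mine.

     #include <vector>
     // Assumes a GL loader header (e.g. GLEW/glad) is included elsewhere.

     // Run one transform feedback pass into ndcBuffer and read the results back.
     std::vector<float> CaptureAndReadBack(GLuint ndcBuffer, GLsizei vertexCount)
     {
         glEnable(GL_RASTERIZER_DISCARD);                  // no fragments needed
         glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, ndcBuffer);

         glBeginTransformFeedback(GL_POINTS);
         glDrawArrays(GL_POINTS, 0, vertexCount);
         glEndTransformFeedback();

         glDisable(GL_RASTERIZER_DISCARD);

         // The question above: is a glFinish() needed here before reading back?
         std::vector<float> results(vertexCount * 4);      // one vec4 per vertex
         glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                            results.size() * sizeof(float), results.data());
         return results;
     }
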
  8. Thanks a lot for this reply. I knew about the old/deprecated fixed-function feedback system, but didn't realize there was an official replacement for the shader world. I'll do some more reading before diving in, but it looks like a great solution. I know the transformation is relatively cheap, but in my current implementation it happens for 6 stages of rendering, per model, with potentially thousands of models. I'm also going to be doing something similar for label rendering, where I'll need to generate the NDC coordinate buffer and potentially read it back for de-clutter processing on the CPU. Having a shader stage that just populates an NDC coordinate buffer for readback/post-processing would be awesome. Sorry, but I don't quite follow you here; can you describe it a bit more, or link some reading material? Thanks again!
  9. Hello, I am working on a problem where I want to render 3D objects in pseudo-2D by transforming to NDC coordinates in my vertex shader. The models I'm drawing have numerous components rendered in separate stages, but all components of a given model are based on the same single point of origin.

     This all works fine, but each vertex shader for the various stages of the model render redundantly transforms from Cartesian XYZ to NDC coordinates before doing its work. Instead, I'd like to perform an initial conversion stage that populates a buffer of NDC coordinates, so that all subsequent vertex shaders can just accept the NDC coordinate as input. I'm also looking to avoid doing this on the CPU, as I may have many thousands of model instances to work with.

     So, with an input buffer containing thousands of Cartesian positions and an equally sized output buffer to receive the transformed NDC coordinates, what are my best options for performing the work on the GPU? Is this something I need to look to OpenCL for? Being fairly unfamiliar with OpenCL, I was thinking of setting things up so that the first component rendered for my models 'knows' it is first, has its vertex shader do the standard transform to NDC, and somehow writes the results back to an 'NDC coord buffer'; all subsequent vertex shaders for the various model components would then use the NDC coord buffer as input, skipping the redundant conversion. Is this reasonable? (For reference, a sketch of the transform feedback capture setup from the earlier posts follows below.)
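
     A minimal sketch of that capture setup, assuming a vertex shader whose output variable is named ndcPosition; the variable names, the function name, and the one-varying interleaved layout are placeholders of mine.

     // Assumes a GL loader header is included elsewhere.

     // Configure a program + buffer so the vertex shader's "ndcPosition"
     // output is captured into a buffer, one vec4 per input vertex.
     GLuint CreateNdcCaptureBuffer(GLuint program, GLsizei vertexCount)
     {
         const char* varyings[] = { "ndcPosition" };
         glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
         glLinkProgram(program);              // must (re)link after setting varyings

         GLuint ndcBuffer = 0;
         glGenBuffers(1, &ndcBuffer);
         glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, ndcBuffer);
         glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER,
                      vertexCount * 4 * sizeof(float), nullptr, GL_DYNAMIC_COPY);

         // Later render stages can bind ndcBuffer as GL_ARRAY_BUFFER and feed
         // the captured NDC positions in as a regular vertex attribute,
         // skipping the redundant Cartesian-to-NDC math in their shaders.
         return ndcBuffer;
     }
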
  10. This is probably a silly question, but I've managed to get myself turned around and I'm second-guessing my understanding of instancing. I want to implement a label renderer using instanced billboards. I have a VBO of 'label positions', as 2D vertices, one per label. My 'single character billboard' is what I want to instance, and it is in its own VBO. For engine/architectural reasons I also have an index buffer for the billboard, even though it isn't saving me much in terms of vertex counts.

     For various reasons I still want to loop through individual labels for my render, but I planned to call glDrawElementsInstanced, specifying my 'billboard model' along with the character count for a label. However, I can't see how to tell glDrawElementsInstanced where to start in the other attribute array VBOs for a given label. So if I'm storing a VBO of texture coordinates for my font, per character, how do I get glDrawElementsInstanced to start at the first texture coordinate set of the first character of the current label being rendered?

     I see that glDrawElementsInstancedBaseVertex exists, but I'm confused about what the base vertex value would do here. If my raw/instanced billboard vertices are at indices 0..3 in their VBO, but the 'unique' attributes of the current label start at element 50 in their VBO, what does a base vertex of 50 do? I was under the impression that it would just cause GL to try to load billboard vertices from index+basevertex in that VBO, which is not what I want.

     To sum up: if I have an instanced rendering implementation, with various attribute divisors for different vertex attributes, how can I initiate an instanced render of the base model but with the per-instance vertex attributes starting at an offset into their associated VBOs, while abiding by the attribute divisors that have been set up? (One possible approach is sketched below.)

     EDIT: I should mention that I've bound all the related VBOs under a Vertex Array Object. By default I wanted all labels to share VBO memory to avoid state changes, etc. It seems like there must be a way to render just N instances of my model starting at some mid-point of my vertex attribute arrays.
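
     One way this can work, as a sketch rather than a definitive answer: per label, re-point the per-instance attribute at that label's first character before issuing the instanced draw. The function name, attribute index 2, the vec2-per-character layout, and the GL_TRIANGLES billboard are assumptions of mine.

     // Assumes a GL loader header is included elsewhere and the label VAO
     // (with its element buffer) is already bound.
     void DrawLabel(GLuint charAttribBuffer, GLsizei firstChar, GLsizei charCount,
                    GLsizei indexCount)
     {
         const GLsizei stride = 2 * sizeof(float);      // one vec2 per character

         // Point the per-instance attribute at this label's first character so
         // instance 0 of this draw reads that element.
         glBindBuffer(GL_ARRAY_BUFFER, charAttribBuffer);
         glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride,
                               (const GLvoid*)(firstChar * stride));
         glVertexAttribDivisor(2, 1);                   // advance once per instance

         // Draw charCount instances of the shared billboard quad.
         glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                                 nullptr, charCount);
     }

     // Note: on GL 4.2+, glDrawElementsInstancedBaseInstance can achieve the
     // same thing without touching the attribute pointer: baseinstance offsets
     // the fetch of divisor-driven attributes, while basevertex only offsets
     // the mesh indices.
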
  11. Update #2: problem solved! For anyone encountering similar issues: it turns out some older ATI cards (and maybe newer ones) do NOT like vertex array attributes that are not aligned to 4-byte boundaries. I changed my color array to pass 4 unsigned bytes instead of 3 and updated my shader to accept a vec4 instead of a vec3 for that attribute, and everything now works as intended. Kind of a silly issue, but that's what I get for trying to cut corners on memory bandwidth. =P (The corrected attribute setup is sketched below.)
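
     For reference, a sketch of the attribute setup after the change; attribute location 1 and the variable names are my guess at the layout, not the engine's exact code.

     // Assumes a GL loader header is included elsewhere.
     void SetupColorAttribute(GLuint colorBuffer)
     {
         // Colors are now 4 normalized unsigned bytes (RGBA), keeping each
         // per-vertex color 4-byte aligned, which the ATI driver accepts.
         glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
         glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE,
                               4 * sizeof(GLubyte), nullptr);
         glEnableVertexAttribArray(1);

         // Matching GLSL declaration (normalized bytes arrive as a vec4 in 0..1):
         //     layout(location = 1) in vec4 vertexColor;
     }
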
  12. Update: I still haven't figured out the root of the issue, but as a test I switched to using floats for my color attribute instead of GL_UNSIGNED_BYTE. My unsigned-byte colors were being passed in the range 0..255 with normalized set to GL_TRUE, and the floats are passed as 0..1.0 with the normalized parameter set to GL_FALSE. Without really changing anything else, the problem goes away completely, so I'm really suspicious of the ATI driver... Has anyone else seen issues using multiple glDrawElements calls from a single bound VAO containing unsigned-byte color vertex attributes?
  13. Hello, I'm running out of ideas trying to debug an issue with a basic line render in the form of a 'world axis' model. The idea is simple: I create a VAO with float line vertices (3 floats per vertex), int indices (1 per vertex), and unsigned-byte colors (3 per vertex). I allocate room and pack the arrays such that the first 12 vertices/indices/colors are uniquely colored lines representing my +/- world axes, followed by a bunch of lines forming a 2D grid across the XZ plane.

     Once the data is loaded, I render by binding my VAO, activating a basic shader, and then drawing the model in two stages: one glDrawElements call for the axis lines after glLineWidth is set to 2, and the grid lines drawn through a separate glDrawElements with thinner lines.

     Whenever I draw this way, the last 6 lines of my grid (i.e. the end of the VAO array) show up with random colors; the lines themselves are correctly positioned, though. If I instead make a single glDrawElements call for all lines (i.e. world axes and grid lines at once), the entire model appears as expected with correct colors everywhere. This is only an issue on some ATI cards (e.g. Radeon Mobility 5650); it works on NVIDIA without a problem.

     I can't see what I could have done wrong if the lines are positioned correctly (i.e. my VAO index counts/offsets must be OK for glDrawElements), and I don't see how I could be packing the data into the VAO incorrectly if everything appears correctly via a single glDrawElements call instead of two calls separated by a glLineWidth() change. Any suggestions? glGetError, etc. report no problems at all.

     Here is some example render code, although I know it is just a small piece of the overall picture. This causes the problem:

     TFloGLSL_IndexArray *tmpIndexArray = m_VAOAttribs->GetIndexArray();

     //The first AXIS_MODEL_BASE_INDEX_COUNT elements are for the base axis..draw these thicker
     glLineWidth(2.0f);
     glDrawElements(GL_LINES, AXIS_MODEL_BASE_INDEX_COUNT, GL_UNSIGNED_INT,
                    (GLvoid*)tmpIndexArray->m_VidBuffRef.m_GLBufferStartByte);

     //The remaining elements are for the base grid..draw these thin
     int gridLinesElementCount = m_VAOAttribs->GetIndexCount() - AXIS_MODEL_BASE_INDEX_COUNT;
     if(gridLinesElementCount > 0)
     {
         glLineWidth(1.0f);
         glDrawElements(GL_LINES, gridLinesElementCount, GL_UNSIGNED_INT,
                        (GLvoid*)(tmpIndexArray->m_VidBuffRef.m_GLBufferStartByte +
                                  (AXIS_MODEL_BASE_INDEX_COUNT * sizeof(int))));
     }

     This works just fine:

     glDrawElements(GL_LINES, m_VAOAttribs->GetIndexCount(), GL_UNSIGNED_INT,
                    (GLvoid*)tmpIndexArray->m_VidBuffRef.m_GLBufferStartByte);
  14. Your link might be a workable option, but having read a bit more I think I might be confusing people with my description. I think the real compatibility issue is the more general one of accessing a uniform array from within the fragment shader using a non-constant index. The index variable I am using is originally received in the vertex shader as an attribute (in) and passed to the fragment shader (out); the fragment shader then uses it to index into the uniform array of texture samplers. What I've found in hindsight is that GLSL 330 doesn't allow using any form of variable as an index into such a sampler array, even though NVIDIA seems to permit it. =/
  15. Hello, after a lot of programming and debugging I feel like a dumbass. I set up an entire billboard shader system around instancing, and as part of that design I was passing a vertex attribute representing the texture unit to sample from for a given billboard instance. After setting all of this up I was getting strange artifacts, and found that GLSL 330 only officially supports constant (compile-time) indices into a uniform sampler2D array.

     Is there any nice way around this limitation short of compiling against a newer version of GLSL? Is there at least a way to check whether the local driver supports using the sampler index from a vertex attribute, through an extension or something? (A sketch of one such runtime check is below.) I tested my implementation on an NVIDIA card and it worked despite the spec, but ATI (as usual) seems stricter.

     For now I have patched the problem by manually branching *shudder* on the index and accessing the equivalent sampler via a constant in my fragment shader. For example:

     if(passModelTexIndex == 0)
     {
         fragColor = texture2D(texturesArray_Uni[0], passTexCoords);
     }
     else if(passModelTexIndex == 1)
     {
         fragColor = texture2D(texturesArray_Uni[1], passTexCoords);
     }
     else
     {
         fragColor = vec4(0.0, 0.0, 0.0, 1.0);
     }
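
     As a sketch of the 'check at runtime' idea: GL 4.0 / ARB_gpu_shader5 relaxes the constant-index rule to dynamically uniform expressions (which, strictly speaking, still may not cover a per-instance index), so one hedged option is to probe for that support and pick a shader variant accordingly. The HasExtension helper and the function names are my own, not GL calls.

     #include <cstring>
     // Assumes a GL loader header is included elsewhere.

     // My own helper: scan the extension list for a given name.
     static bool HasExtension(const char* name)
     {
         GLint count = 0;
         glGetIntegerv(GL_NUM_EXTENSIONS, &count);
         for (GLint i = 0; i < count; ++i)
         {
             const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, i);
             if (ext && std::strcmp(ext, name) == 0)
                 return true;
         }
         return false;
     }

     // Decide which fragment shader variant to build: the branching fallback
     // above, or a variant that indexes the sampler array directly.
     bool CanTryNonConstantSamplerIndex()
     {
         GLint major = 0;
         glGetIntegerv(GL_MAJOR_VERSION, &major);
         return major >= 4 || HasExtension("GL_ARB_gpu_shader5");
     }
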