Grumple

Member

  • Content Count: 65
  • Joined
  • Last visited

Community Reputation: 177 Neutral

About Grumple

  • Rank: Member

Personal Information

  • Interests: Design, Programming
  1. Just to follow up on this in case anyone is interested: I tested the implementation described above (post 2 in this list) and it seems to work fine. After applying the uniform scale to all mesh vertices, to the translations of the joint transforms and offset matrices, and to the translation key frames in all animations, sizes changed as expected with (so far) no unexpected problems. That said, I've only tested a few models, and every time I think I understand all the elements at play in skinned animation, something new smacks me in the face... (A sketch of this approach appears after this list.)
  2. Hello,

     As part of the 'asset pipeline' for my custom engine I've built a model importer using assimp. The goal is to be able to import models from TurboSquid, etc., into a proprietary format without having to significantly rework them. It has worked fairly well, but while my game is mostly built around 1 unit = 1 metre, FBX (my preferred import format) tends to be 1 unit = 1 cm. On top of that, a lot of assets from the asset stores look awesome but are bigger or smaller than I'd like, regardless of units of measurement.

     I'd like to 'pre-scale' these models on import into my own model file format without just using a scale matrix (other than this import scale issue I have no need for scaling in the engine, and it would complicate a lot of other systems). I've spent a week battling Blender with my entry-level modeling skills, and can sort of work my way through its crazy unit issues with FBX to scale down the base model/armature, but as far as I can tell there is no simple way to just 'apply scale' to animations without a lot of manual work in the graph view. Assimp has a 'global scale' capability in newer versions, but that seems to just apply a scale transform to the root bone.

     Given that I just want to apply a uniform scale to all aspects of the model, it feels like during my model import 'post process' I should be able to just scale the translation of all bone transform/offset matrices, and scale the vertices of all bind-pose mesh data. If I apply the same scale to any location/translation key frames of the animations, shouldn't this all 'just work'? (A sketch of this import-time scaling appears after this list.) That way I could store my models in proprietary/engine format at the scale I want, without any screwing around at run-time within the game itself.

     Any insight would be greatly appreciated. Usually I have no trouble finding lots of good reading about game programming issues, but this seems to be a surprisingly murky subject.
  3. Just an update: after a bunch of debugging I figured out my problem, and it was my fault. Here is a quick summary in case it helps someone else later: when running my animation update I was setting up a list of 'animation joint transforms' for my model and defaulting them all to identity, then running through the channels of my animation doing key-frame interpolation and updating the transforms for all channels. The problem was that when an animation didn't affect a particular joint (i.e. there was no channel for that joint), it stayed at identity instead of being defaulted back to the parent-relative 'static pose' transform for that joint, which seems to be the expected default behavior. (A sketch of the fix appears after this list.)
  4. Hi Guys, I've been working on a new Vulkan-based engine, and doing skinned animation support for the first time. I've got everything set up and seemingly working, so that I can do a basic import of an FBX model I downloaded from Sketchfab and play its idle animation seemingly correctly, which was very exciting to see. However, I'm guessing I'm missing some rule of model import processing, in that the 'default pose' for the model comes in oriented with the character standing Y-axis up, but as soon as I launch him into his idle animation he switches to Z-axis up. I've seen some mention of applying the inverse bind pose matrix to the joint itself on import, and thought that might be part of my issue, but otherwise can't think of what would be causing this? (A sketch of the standard skinning-palette setup appears after this list.) Thanks!
  5. Yeah, that could very well be it; I was just looking to rule out that it 'shouldn't compile' for some reason. I have no good excuse for the old compiler other than this being the first issue that has really made me feel the need to change. Having seen this code compile on a friend's VS2017 setup, I will likely be updating asap... Thanks!
  6. I have a general C++ template programming question. I've set up a class that uses a nested template type T, along with variadic template arguments for T's constructor, to ensure I can add items to a custom container class without any unnecessary heap usage/copy/move operations.

     Here is my test container code, using perfect forwarding and std::vector::emplace_back() to create items 'in-place' with little overhead (TemplateContainerTest.h):

         #pragma once

         #include <vector>
         #include <unordered_map>
         #include <utility>

         template <typename T>
         class TemplateContainerTest
         {
         public:
             TemplateContainerTest() = default;
             virtual ~TemplateContainerTest() = default;

             template <typename... ItemArgs>
             void AddItem( ItemArgs&&... itemArgs )
             {
                 m_Container.emplace_back( std::forward<ItemArgs>( itemArgs )... );
             }

         protected:
             template <typename T>
             class ItemTracker
             {
             public:
                 template <typename... ItemArgs>
                 ItemTracker( ItemArgs&&... itemArgs ) :
                     m_Item( std::forward<ItemArgs>( itemArgs )... )
                 {
                 }

                 bool m_IsValid = false;
                 T m_Item;
             };

             std::vector< ItemTracker<T> > m_Container;
         };

     And here is some code to exercise the template above (main.cpp):

         #include "stdafx.h"
         #include <stdint.h>
         #include "TemplateContainerTest.h"

         class TestItemOfInterest
         {
         public:
             TestItemOfInterest( uint32_t itemVal ) :
                 m_ItemVal( itemVal )
             {
             }

             TestItemOfInterest( TestItemOfInterest&& other )
             {
                 m_ItemVal = other.m_ItemVal;
                 other.m_ItemVal = 0;
             }

             TestItemOfInterest() = default;
             virtual ~TestItemOfInterest() = default;

             uint32_t GetVal() { return m_ItemVal; }

         protected:
             uint32_t m_ItemVal = 0;
         };

         int _tmain( int argc, _TCHAR* argv[] )
         {
             TemplateContainerTest<TestItemOfInterest> tmpContainer;
             tmpContainer.AddItem( 42 );
             return 0;
         }

     Here is the kicker: in Visual Studio 2013, the code above fails to compile with the following error:

         templatecontainertest.h(28): error C2664: 'TestItemOfInterest::TestItemOfInterest(const TestItemOfInterest &)' : cannot convert argument 1 from 'TemplateContainerTest<TestItemOfInterest>::ItemTracker<T>' to 'uint32_t'

     However, in Visual Studio 2017 it compiles fine. For some reason, the perfect-forwarding mechanism in Visual Studio 2013 seems to send the ItemTracker into the T() constructor instead of just the arguments from outside. I see that the VS2017 std::vector::emplace_back signature/implementation changed, but I can't understand why it works now and didn't work before. Any insight would be appreciated, as I don't trust this at all without understanding the underlying issues... (A hardened sketch appears after this list.)
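The sketches referenced in the posts above follow.

A minimal sketch of the import-time uniform scaling described in posts 1 and 2, assuming assimp's C++ data structures. ApplyImportScale, ScaleNodeHierarchy, and ScaleTranslation are hypothetical helper names, not assimp API; translations sit in the fourth column (a4/b4/c4) of assimp's row-major aiMatrix4x4. Rotation and scaling keys are deliberately left untouched, and normals need no change under a uniform scale of positions:

    #include <assimp/scene.h>

    // Scale only the translation column of a row-major aiMatrix4x4.
    static void ScaleTranslation( aiMatrix4x4& m, float s )
    {
        m.a4 *= s;
        m.b4 *= s;
        m.c4 *= s;
    }

    // Scale the translation of every node transform in the hierarchy.
    static void ScaleNodeHierarchy( aiNode* node, float s )
    {
        ScaleTranslation( node->mTransformation, s );
        for ( unsigned int i = 0; i < node->mNumChildren; ++i )
            ScaleNodeHierarchy( node->mChildren[i], s );
    }

    // Apply one uniform scale to every spatial quantity in an imported scene:
    // bind-pose vertices, node transforms, bone offset matrices, and
    // position key frames.
    void ApplyImportScale( aiScene* scene, float s )
    {
        for ( unsigned int m = 0; m < scene->mNumMeshes; ++m )
        {
            aiMesh* mesh = scene->mMeshes[m];
            for ( unsigned int v = 0; v < mesh->mNumVertices; ++v )
                mesh->mVertices[v] *= s;
            for ( unsigned int b = 0; b < mesh->mNumBones; ++b )
                ScaleTranslation( mesh->mBones[b]->mOffsetMatrix, s );
        }

        ScaleNodeHierarchy( scene->mRootNode, s );

        for ( unsigned int a = 0; a < scene->mNumAnimations; ++a )
        {
            aiAnimation* anim = scene->mAnimations[a];
            for ( unsigned int c = 0; c < anim->mNumChannels; ++c )
            {
                aiNodeAnim* channel = anim->mChannels[c];
                for ( unsigned int k = 0; k < channel->mNumPositionKeys; ++k )
                    channel->mPositionKeys[k].mValue *= s;
            }
        }
    }

Writing the scaled data straight into the engine's proprietary format keeps run-time code free of any scale handling, which matches the goal stated in post 2.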
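A minimal sketch of the fix described in post 3, assuming a glm-based engine; Joint, Channel, Animation, InterpolateKeys, and BuildLocalPose are hypothetical names, not from the original posts. The key point is the default value used for joints that no channel animates:

    #include <vector>
    #include <glm/glm.hpp>

    struct Channel
    {
        int jointIndex;            // joint this channel animates
        // ...position/rotation/scale key arrays would live here...
    };

    struct Animation
    {
        std::vector<Channel> channels;
    };

    struct Joint
    {
        glm::mat4 staticPoseLocal; // parent-relative transform from the node hierarchy
    };

    // Stub standing in for real key-frame interpolation.
    glm::mat4 InterpolateKeys( const Channel& /*ch*/, float /*time*/ )
    {
        return glm::mat4( 1.0f );
    }

    // Build the local pose for one frame. The fix from post 3: joints that
    // no channel animates keep their static (bind-hierarchy) transform
    // instead of being left at identity.
    void BuildLocalPose( const std::vector<Joint>& skeleton,
                         const Animation& anim, float time,
                         std::vector<glm::mat4>& localPose )
    {
        localPose.resize( skeleton.size() );
        for ( size_t j = 0; j < skeleton.size(); ++j )
            localPose[j] = skeleton[j].staticPoseLocal; // default: static pose, not identity

        for ( const Channel& ch : anim.channels )       // channels override their joints
            localPose[ch.jointIndex] = InterpolateKeys( ch, time );
    }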
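For post 4, a sketch of the standard skinning-palette computation, assuming glm's column-vector conventions and a skeleton ordered parent-before-child; SkinJoint and BuildSkinningPalette are hypothetical names. The inverse bind pose is applied once per joint when building the palette, rather than baked into the joint transforms at import:

    #include <vector>
    #include <glm/glm.hpp>

    struct SkinJoint
    {
        int       parentIndex;      // -1 for the root
        glm::mat4 inverseBindPose;  // mesh space -> joint space at bind time
    };

    // localPose[j] is the animated parent-relative transform for joint j.
    void BuildSkinningPalette( const std::vector<SkinJoint>& skeleton,
                               const std::vector<glm::mat4>& localPose,
                               std::vector<glm::mat4>& palette )
    {
        std::vector<glm::mat4> globalPose( skeleton.size() );
        palette.resize( skeleton.size() );

        for ( size_t j = 0; j < skeleton.size(); ++j )
        {
            globalPose[j] = ( skeleton[j].parentIndex < 0 )
                ? localPose[j]
                : globalPose[skeleton[j].parentIndex] * localPose[j];

            // The inverse bind pose is applied here, per skinning matrix.
            palette[j] = globalPose[j] * skeleton[j].inverseBindPose;
        }
    }

One possible cause of the Y-up/Z-up flip, offered only as a guess: FBX files often carry the up-axis conversion in nodes above the skeleton root, so if the animated pose replaces node transforms without including those ancestor nodes, the bind pose and the animated pose end up in different spaces.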
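On post 6, two observations, offered as a sketch rather than a definitive diagnosis. First, the nested 'template <typename T> class ItemTracker' redeclares the enclosing class template's parameter T, which is ill-formed ([temp.local]) even though MSVC accepts it. Second, the unconstrained forwarding constructor is a better overload match than ItemTracker's implicitly deleted copy constructor whenever the vector copies a tracker internally, which would forward an ItemTracker straight into T's constructor; that is at least consistent with the VS2013 'cannot convert ... to uint32_t' error. A hardened version addressing both, constraining the constructor with SFINAE (C++11):

    #pragma once

    #include <type_traits>
    #include <utility>
    #include <vector>

    template <typename T>
    class TemplateContainerTest
    {
    public:
        template <typename... ItemArgs>
        void AddItem( ItemArgs&&... itemArgs )
        {
            m_Container.emplace_back( std::forward<ItemArgs>( itemArgs )... );
        }

    protected:
        // No nested 'template <typename T>': the enclosing T is already visible.
        class ItemTracker
        {
        public:
            // Constrained so it never competes with copy/move construction:
            // only participates when T is constructible from the arguments.
            template <typename... ItemArgs,
                      typename = typename std::enable_if<
                          std::is_constructible<T, ItemArgs&&...>::value >::type>
            ItemTracker( ItemArgs&&... itemArgs ) :
                m_Item( std::forward<ItemArgs>( itemArgs )... )
            {
            }

            bool m_IsValid = false;
            T m_Item;
        };

        std::vector<ItemTracker> m_Container;
    };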