Schrompf

Members
  • Content count: 752
  • Joined
  • Last visited

Community Reputation: 1035 Excellent

About Schrompf
  • Rank: Advanced Member
  1. Advanced Terrain Texture Splatting

    You could get the same result by prefiltering the alpha map, avoiding all but one sample in the pixel shader. And if you employ some kind of relief shader, I suggest using the height map instead.   I employed this method to great success, but I didn't use a hard comparison; instead I used a simple smoothstep() to blend between the two. That also works nicely, but has the downside that the order of the materials now matters: it's always "one on top of the other". Which was fine for me, as my terrain blending put one material per pass.
  2. Animating ASSIMP models

    The Assimp documentation says that the data from the animation tracks replaces the node's local transformation. So yes, the animation data is "premultiplied" with the joint transformation. My suggestion is to not transform the animation data into local joint space.
  3. Never Team Up with the Idea Guy

    I sense idea guys with hurt feelings.
  4. Maybe the model creator mapped the texture to -1 .. 0 on one axis, and you activated texture coordinate clamping. The result would then look just like this.
  5. How much of Boost do you use?

    My Boost usage list is: filesystem, bind, signals2, format, any, lexical_cast.   Especially boost::signals2 is worth praising, in my opinion. It's great to use signals to make decoupled systems talk to each other on occasion. You surely don't want to connect/disconnect thousands of times per frame, but calling through it is pretty fast.   Previously used from Boost, now deprecated thanks to C++11: scoped_ptr, shared_ptr, unordered_map, random.   I'm going to remove boost::any at some point in the future, though, because I want an implementation of any that does not allocate for small types.
  6. OpenGL Multipass forward rendering?

    I'm not your debugger. Try it. 
  7. OpenGL Multipass forward rendering?

    You accumulate by using alpha blending: glEnable(GL_BLEND); glBlendFunc(GL_ONE, GL_ONE);   Off the top of my head, no guarantees given.
  8. I like it so far. It's a classic "The lord giveth, the lord taketh away" like any other Visual Studio release. I immediately googled for the registry key TO SWITCH OFF THE ALL-CAPS MENU (god, that was annoying), and I wasted quite some time in the theme editor to make the current line stand out better, breakpoint lines, marked lines, all that... they really wasted a lot of clear communication potential by limiting themselves to either black or white as the background color. Now that the theme is set and Visual Assist is installed again, life could be beautiful.   Except it isn't. It still leaves its whole stack of MSBuild zombie processes behind on occasion. Sometimes the whole UI deadlocks when switching from debugging back to editing. Intellisense has gotten faster, I admit, but it still needs 10s and upwards to find a function defined one page below the cursor. The project properties dialog is still locked in size and uncomfortable to edit, especially with multiple build configs. Edit&Continue, the one major feature that really makes my workflow fly, is in a desolate state. I even had a Skype call scheduled with some guy from Microsoft who contacted me after I took the E&C survey, only for him to never show up at the scheduled date and time, and never since.   The one reason I still think VS2012 is worth upgrading for is the new C++11 features. I'm not a template master, so I'm not that fixated on variadic templates, but range-based for loops really made my life easier, and std::async() is a nice thing to have, too. Just to point out two examples; I can name more if you give me some time. Still waiting for delegating constructors and defaulted operators, though, and the uniform initializers are still unusable, even though my bug report was tagged as "fixed". Delegating constructors are there, technically, but they're implemented in the CTP compiler updates, which are unusable with Edit&Continue and a burden to deploy to all programmers.
So I'm still hoping for them to actually come out with a Service Pack or whatever that incorporates all the new features.   There's a profiler integrated now, which is also a really nice thing. Hasn't been there since... VC6? Nice to have. I have yet to try the graphics debugger. An integrated PIX would be a really cool thing to have, especially because PIX keeps getting worse with every other Windows update.
  9. Convert the quaternion to a rotation matrix, then concatenate a translation matrix to it. Whatever programming language and math classes you are using will already provide a means to do this.
  10. By reading what others already wrote in this thread.
  11. This is because the Direct3D scene file format does not support cameras. If you really need lights and cameras, I suggest looking into the Collada file format. FBX might also support this, but I'm not sure.
  12. Finding 'allowed' resolutions?

    I use exactly this function. It works fine and is simple enough, but be warned: it lies occasionally. Especially the refresh rates it reports are wrong for certain display devices - overhead projectors are notorious for this, older monitors as well, and analogue-cable-connected displays.   So my suggestion for a reasonably well-behaved game is: use this function, filter out the resolutions your game can't handle, always use the default refresh rate no matter what the monitor tells you, and you'll still need one of those obnoxious "Press OK to keep this resolution. Will switch back to previous setting in 15... 14..." dialogs to be safe.
  13. Hey Ill. First off: thanks for using Assimp :-) I once tried the same route you're treading now, making all my animations only add "offsets" to the bind pose. I was hoping to be able to combine animations better, and maybe to scale them in terms of effect strength. I ultimately failed at reliably finding a complete bind pose for all models. I came up with two ways:   a) Expect the model to be in bind pose when importing. If you get ALL of your assets from artists working for you, you could do this. But it's simply not the case for models you get from other sources, and there's no way to detect whether the skeleton is in bind pose.   b) Calculate the bone's bind transformation by inverting the bone's offset matrix and "subtracting" the parent bone's offset matrix. This should give you the bind transformation relative to its parent node. But it fails as soon as you get a skeleton where a bone doesn't affect any vertices. My code assumed an identity matrix in that case, and I saw a similar explosion of the character when trying to animate it.   Case b) can happen quite often. Most character artists today use some sort of predefined skeleton, for example the 3DSMax biped. To my knowledge this is necessary to use the built-in animation tools that really seem to make artists' lives easier. But when skinning the model they don't need some of the bones, and therefore don't assign any vertices to them, and BOOM: no bone is exported, no offset matrix can be found, and the character explodes when animated.   So my suggestion is: only go that route if you can really rely on every asset being made specifically for you, by people you can give orders to if necessary. In all other cases, absolute animation data is the way to go. I converted all of my animation code to this option after wasting several days on the problem.
  14. What keeps you from doing exactly the same in the shader? And don't tell me you're using the fixed-function pipeline - I'm out then.   To stay in your terms: you usually combine the bone's bind transformation (the original position and rotation) in relation to the mesh, plus the current rotation offset and position offset, all into one transformation matrix. You end up with an array of matrices which you upload to the shader. You extend your vertex structure to contain a number of bone indices (normally 4, because they fit nicely into an input register) and an equal count of weights that specify how much the vertex is influenced by each bone. So every vertex in your mesh says: I'm affected by bones 13, 25 and 5, with weights of 0.3, 0.25 and 0.45, respectively. The shader now simply transforms the vertex by each bone matrix, accumulating the results weighted by the corresponding vertex weight, and you're done.