coderchris

Members
  • Rank: Member
  • Content count: 297
  • Community Reputation: 304 Neutral

  1. Nah, I don't think normalized constraints are any better, from what I've seen. I was playing with the normalized distance constraint because in his other paper (strain based dynamics), all the constraints are normalized.

     That said, I ran into an issue getting the damping to work with strain-based constraints: the damping was having little to no effect, precisely because those constraints are normalized. It turns out the damping mechanism as described only works when the constraints are not normalized, so I'm having to unnormalize / scale the strain-based constraints for damping to work on them.

     I haven't figured out the exactly correct scaling yet, though. Still trying to wrap my head around the math.
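
     For what it's worth, a sketch of why the scaling matters, assuming the Rayleigh-style damping force from the paper (this is my reading, not verified):

         f_damp = -\nabla C \, \beta \, \nabla C^T \dot{x}

     Normalizing a distance constraint by its rest length L gives \tilde{C} = C / L and \nabla\tilde{C} = \nabla C / L, so the damping force shrinks by a factor of L^2. That suggests compensating with a per-constraint \tilde{\beta} = L^2 \beta, though I haven't verified that this is the exact scaling.
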
  2. I think you are correct; that seems to work nicely, thanks for the input. Everything behaves as described in the paper, and it's definitely an improvement over regular PBD.

     The stiffness parameter is a bit tricky though, for two reasons.

     First, by their definition (if I'm interpreting correctly), alpha = 1 / stiffness. I found that this does not completely solve a constraint in one step when stiffness = 1, so I use a slightly modified version: alpha = (1 / stiffness) - 1. This gives alpha = 0 when stiffness is 1, which fully solves the constraint in one step.

     Second, although the valid range of the stiffness is [0, 1], how stiff it makes the constraint depends heavily on the constraint's scaling. For example, with the usual distance constraint C(a, b) = |a - b| - restLength, a stiffness of 0.5 and a timestep of 1 give a reasonably stiff constraint under XPBD. The normalized version C(a, b) = (|a - b| / restLength) - 1, however, applies almost zero correction, so you need a much higher stiffness value to achieve the same effect.

     Intuitively, this is because the alpha in the denominator of the delta lambda "overpowers" the normalized constraint. It's not a huge issue, since you can simply scale the stiffness value based on your choice of constraint function.
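
     For reference, the undamped XPBD update from the paper is

         \Delta\lambda_j = \frac{-C_j(x_i) - \tilde{\alpha}_j \lambda_{ij}}{\nabla C_j M^{-1} \nabla C_j^T + \tilde{\alpha}_j}, \qquad \tilde{\alpha}_j = \frac{\alpha_j}{\Delta t^2}

     With alpha = 0 the compliance terms vanish and this reduces to the plain PBD projection, which is why the modified mapping above fully solves a constraint in one step at stiffness = 1.
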
  3. The paper in question is XPBD (http://mmacklin.com/xpbd.pdf). Equation (26) in the paper adds damping to the constraint solve.

     It contains a term that I do not understand how to evaluate: the grad C(x_i - x^n) in the numerator. For example, for a distance constraint we have grad C(a, b) = (a - b) / |a - b|. This takes two positions and is itself a vector-valued function, yet the term in equation (26) evaluates to a scalar.

     What might they mean by this term as it relates to a distance constraint?
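
     For what it's worth, one reading is that the term is the dot product of the (stacked) gradient with the per-particle displacements x_i - x^n. For a distance constraint between particles a and b that would give

         \nabla C \cdot (x_i - x^n) = \frac{a - b}{|a - b|} \cdot \big( (a - a^n) - (b - b^n) \big)

     which is a scalar, as the equation requires.
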
  4. This guy gives a good explanation of solving the system in 1D: http://www.paulinternet.nl/?page=bicubic

     It all makes sense to me, and I would like to apply it to my case, but I'm not sure what my system of equations actually is...
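
     For reference, the 1D cubic from that page; it interpolates between p1 and p2 for x in [0, 1], using p0 and p3 to estimate the end derivatives:

         float cubicInterpolate(float p0, float p1, float p2, float p3, float x)
         {
             // Catmull-Rom style cubic: passes through p1 at x = 0 and p2 at x = 1.
             return p1 + 0.5f * x * (p2 - p0
                  + x * (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3
                  + x * (3.0f * (p1 - p2) + p3 - p0)));
         }
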
  5. In PBD (position based dynamics), we essentially move the predicted positions of each particle / object / fluid / whatever directly in order to solve any constraint (including friction).

     [attachment=27691:PBD.png]

     Pardon the crude drawing and probably poor explanation. Consider this example of a static edge and a moving particle that collides with that edge. The black square is the particle's initial position and the red square is its unconstrained predicted position, where predicted position = current position + current velocity * dt.

     To solve this collision constraint without friction, we project the predicted (red) position onto the edge, which results in the (purple) point. We simply set our predicted position to this purple point; that's the constraint solve.

     To add friction, we also compute the (blue) intersection point between the edge and the particle's trajectory. We then compute a modified (green) position that lies some amount between the (blue) intersection point and the (purple) projected point, based on how much friction we want. For example, for 100% friction we would set the predicted position to the blue point directly. A minimal sketch of this blend follows below.

     It's not the most accurate way to model friction, but I did it this way because it gives a normalized friction parameter in [0, 1].

     I have not implemented motors, but in the PBD framework you do have a velocity to play with, so I'm fairly sure they could be done without too much hassle.
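
     A minimal 2D sketch of that blend, assuming a static edge through point p0 with unit normal n; the names are illustrative, not from the PBD paper:

         struct Vec2 { float x, y; };
         static Vec2  sub(Vec2 a, Vec2 b)  { return { a.x - b.x, a.y - b.y }; }
         static Vec2  add(Vec2 a, Vec2 b)  { return { a.x + b.x, a.y + b.y }; }
         static Vec2  mul(Vec2 a, float s) { return { a.x * s, a.y * s }; }
         static float dot(Vec2 a, Vec2 b)  { return a.x * b.x + a.y * b.y; }

         // curr = current position (black), pred = unconstrained prediction (red).
         Vec2 solveWithFriction(Vec2 curr, Vec2 pred, Vec2 p0, Vec2 n, float friction)
         {
             // Purple point: project the prediction onto the edge.
             Vec2 projected = sub(pred, mul(n, dot(sub(pred, p0), n)));

             // Blue point: where the trajectory curr -> pred crosses the edge.
             Vec2 dir = sub(pred, curr);
             float denom = dot(dir, n);
             if (denom > -1e-6f && denom < 1e-6f) return projected; // trajectory parallel to edge
             Vec2 entry = add(curr, mul(dir, dot(sub(p0, curr), n) / denom));

             // Green point: blend from sliding (friction = 0) to sticking (friction = 1).
             return add(mul(projected, 1.0f - friction), mul(entry, friction));
         }
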
  6. Maybe have a look into position based dynamics. It is most famously used for the cloth simulation in PhysX, but I have had success using the same framework for rigid body simulation. It requires no warm-starting and is very stable even with few iterations. You don't technically need to cache contacts at all; in my implementation I do cache contacts, but only to make handling static friction easier. A minimal sketch of the core loop is below.

     Here's the excellent paper describing the cloth simulation: http://matthias-mueller-fischer.ch/publications/posBasedDyn.pdf

     And here's one describing it for rigid bodies (I have not read this one yet): http://www.interactive-graphics.de/index.php/research/55-position-based-rigid-body-dynamics-all
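
     A minimal sketch of the core loop from the Müller et al. paper, with a single distance-constraint type; the structure and names are mine, not the paper's:

         #include <cmath>
         #include <vector>

         struct Vec3     { float x, y, z; };
         struct Particle { Vec3 x, v, p; float invMass; };   // p = predicted position
         struct DistanceConstraint { int a, b; float rest; };

         // Project one distance constraint directly on the predicted positions.
         void projectDistance(std::vector<Particle>& pts, const DistanceConstraint& c)
         {
             Particle& pa = pts[c.a]; Particle& pb = pts[c.b];
             Vec3 d = { pa.p.x - pb.p.x, pa.p.y - pb.p.y, pa.p.z - pb.p.z };
             float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
             float w = pa.invMass + pb.invMass;
             if (len < 1e-6f || w < 1e-9f) return;
             float s = (len - c.rest) / (len * w);           // mass-weighted violation
             pa.p.x -= pa.invMass * s * d.x; pa.p.y -= pa.invMass * s * d.y; pa.p.z -= pa.invMass * s * d.z;
             pb.p.x += pb.invMass * s * d.x; pb.p.y += pb.invMass * s * d.y; pb.p.z += pb.invMass * s * d.z;
         }

         void step(std::vector<Particle>& pts, const std::vector<DistanceConstraint>& cs,
                   Vec3 g, float dt, int iterations)
         {
             for (auto& pt : pts) {                          // integrate forces, predict positions
                 pt.v.x += dt * g.x; pt.v.y += dt * g.y; pt.v.z += dt * g.z;
                 pt.p = { pt.x.x + dt * pt.v.x, pt.x.y + dt * pt.v.y, pt.x.z + dt * pt.v.z };
             }
             for (int i = 0; i < iterations; ++i)            // Gauss-Seidel constraint projection
                 for (const auto& c : cs) projectDistance(pts, c);
             for (auto& pt : pts) {                          // derive velocities, commit positions
                 pt.v = { (pt.p.x - pt.x.x) / dt, (pt.p.y - pt.x.y) / dt, (pt.p.z - pt.x.z) / dt };
                 pt.x = pt.p;
             }
         }
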
  7. I'm struggling to understand cubic interpolation over a triangle. I am aware of, and mostly understand, Bézier triangles: using one, I can estimate the function over the triangle given 10 control points and some barycentric coordinates (s, t, u).

     However, this function does not pass through all of the control points; it appears to pass through only the 3 corner control points of the triangle.

     I'm looking for an interpolation scheme that uses the same control point layout as a Bézier triangle, but that passes through ALL of the control points. What should I be searching for? Does this type of thing even exist?
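
     For reference, the cubic Bézier triangle over barycentric coordinates (s, t, u), s + t + u = 1, with the 10 control points b_ijk is

         B(s, t, u) = \sum_{i+j+k=3} \frac{3!}{i! \, j! \, k!} \, s^i t^j u^k \, b_{ijk}

     Only the corner control points b_300, b_030, and b_003 are interpolated; the other seven merely pull on the surface.
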
  8. Ah, thanks, that saves having to add the extra attribute at least.
  9. I thought of one possible solution, though I'm not sure it actually works for a general mesh. Flat interpolation uses the attribute from the so-called provoking vertex, so if you could reorder your indices the right way, you could at least keep using indexed triangles. It would still cost an extra vertex attribute. I think this would definitely work for meshes where each vertex is used by at most 3 triangles, but I'm not sure about cases with more; a greedy sketch of the idea is below.
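
     A greedy sketch of that idea, assuming OpenGL's default convention that the provoking vertex is the last vertex of each triangle (rotating an index triple preserves winding); as noted, it can fail for general meshes:

         #include <cstdint>
         #include <vector>

         // Try to rotate each triangle's indices so its last (provoking) vertex
         // is not yet claimed by another triangle; returns false on failure,
         // where a vertex would have to be duplicated instead.
         bool assignProvokingVertices(std::vector<uint32_t>& indices, size_t vertexCount)
         {
             std::vector<bool> used(vertexCount, false);
             for (size_t t = 0; t + 2 < indices.size(); t += 3) {
                 bool assigned = false;
                 for (int r = 0; r < 3 && !assigned; ++r) {
                     if (!used[indices[t + 2]]) {
                         used[indices[t + 2]] = true;   // claim as provoking vertex
                         assigned = true;
                     } else {                           // rotate (a, b, c) -> (b, c, a)
                         uint32_t a = indices[t];
                         indices[t]     = indices[t + 1];
                         indices[t + 1] = indices[t + 2];
                         indices[t + 2] = a;
                     }
                 }
                 if (!assigned) return false;
             }
             return true;
         }
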
  10. So OpenGL 4 has gl_PrimitiveID available as an input to fragment shaders; GL ES 3 does not (presumably because it lacks geometry shaders).

      One way I could emulate it is to convert my indexed triangle list into a non-indexed list, add a vertex attribute holding the triangle ID, and use flat interpolation on that attribute, as sketched below. The problem is that this makes drawing much slower due to all the extra memory and bandwidth being used.

      Does anyone have a better idea of how I might emulate gl_PrimitiveID in the fragment shader under GL ES 3?
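
      A sketch of that fallback, with hypothetical vertex types; each output vertex carries its triangle's index, to be declared flat in the shaders:

          #include <cstdint>
          #include <vector>

          struct Vertex    { float px, py, pz; };
          struct TriVertex { Vertex v; uint32_t triangleID; };

          // De-index the mesh, stamping every vertex with its triangle's index.
          std::vector<TriVertex> expandWithTriangleIDs(const std::vector<Vertex>& vertices,
                                                       const std::vector<uint32_t>& indices)
          {
              std::vector<TriVertex> out;
              out.reserve(indices.size());
              for (size_t i = 0; i < indices.size(); ++i)
                  out.push_back({ vertices[indices[i]], uint32_t(i / 3) });
              return out;
          }
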
  11. Cool, thanks, I'll keep looking into it. Maybe I can create "ghost" vertices to make it look like a Bézier triangle...

      I found this: http://en.wikipedia.org/wiki/Polyharmonic_spline It appears to be able to interpolate arbitrary point clouds. However, it's unclear to me whether it is continuous in the following sense: if, for a given point, I choose say the 10 nearest vertices on my mesh, will the result blend smoothly with that of a nearby point that has chosen a different set of 10 nearest vertices?

      I suppose if I solved for ALL the vertices at once, it would definitely be continuous across my entire mesh. My mesh changes a lot, though, so doing a large linear solve every time it changes probably wouldn't fit my computational budget.
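
      For reference, the polyharmonic spline interpolant from that article has the form

          f(x) = \sum_{i=1}^{N} w_i \, \varphi(\lVert x - c_i \rVert) + v^T \begin{bmatrix} 1 \\ x \end{bmatrix}

      with \varphi(r) = r^k for odd k and r^k \ln r for even k. The weights come from one dense linear solve that couples all N centers, which is exactly the cost concern above.
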
  12. I have a triangle mesh. At each vertex of the mesh I have a color. For a given triangle in the mesh with vertex colors (a, b, c), and a point in that triangle with some barycentric coordinates (u, v, w), I can compute a linearly interpolated color as

      color = a * u + b * v + c * w

      This is fine, but I want a smoother interpolation. Given that I have all the adjacency information for the triangle mesh, is it possible to do some type of cubic interpolation? For a regular grid, the solution is documented: http://en.wikipedia.org/wiki/Bicubic_interpolation However, I can't seem to find any information on doing it on a mesh. Is cubic interpolation even applicable here? If not, is there another type of interpolation which would give smoother results than linear interpolation?

      Just to give a little more context, here is an example of the type of mesh I'm working with:

      [attachment=26547:mesh.png]
  13. Perfect, thanks! gl_SampleMaskIn is exactly what I was looking for.
  14. I know that through some recent extensions (NV_sample_mask_override_coverage) it is possible to override the coverage mask that ultimately decides which samples the rasterizer writes.

      What I want to do is not override the coverage mask, but simply read it and output it to an integer texture. It appears gl_SampleMask is output-only...

      Is this possible through the core API or an extension? I could manually compute pixel coverage in the fragment shader with a bunch of point-in-triangle tests, but if the hardware already does this work, I would rather not repeat it...
  15. Ah, thanks for the derivation! I used the brute-force method to find the solution.

      The reason that snippet has the case distinction is that I wanted to rearrange the point ordering depending on which side it's on, but your explanation makes total sense: you don't really need the case distinction.