JoeJ

Member
  • Content count: 1113
  • Joined
  • Last visited
  • Days Won: 2

JoeJ last won the day on June 7

JoeJ had the most liked content!

Community Reputation: 2795 Excellent

3 Followers

About JoeJ

  • Rank: Contributor

Personal Information

  • Role: Programmer
  • Interests: Art, Programming


  1. It's worth mentioning (again) async compute, which is more similar to CPU multithreading. Here we use multiple queues (instead of 1 or 2 threads per core), and if the GPU supports it, the work executes in parallel. To join the individual workloads, the various APIs offer synchronization commands. The downside is a pretty high cost coming from the overhead of using multiple queues and syncing them, and from the need to split a single command list into multiple command lists. This cost is higher than a simple memory barrier within a single queue. (This is where I see the most need to improve current APIs / drivers.) With Vulkan on a Fury X I noticed that only the combined graphics / compute queue offers full performance; the other 3 compute-only queues seem limited to utilizing only half of the GPU. (This is undocumented, and because I initially used only the latter queues for my tests, I got only disappointing results - which is why I keep posting this.) In the end I got close to optimal results in my tests (though I'm still missing real-world experience). Because all of this seems very hardware and API dependent, it's a good reason to have some node-based abstraction on top of the APIs, so it's easy to experiment and find good configurations. But there is also an easier way to utilize async compute: if you have dispatches that do not depend on each other's results (no memory barriers between them), you can (and should) record them in the same queue, and the GPU will overlap them automatically without any downsides. The N-body example from above with its 4 dispatches is an example of this. (I don't know whether any of this works the same way on Nvidia GPUs.)
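
     For illustration, a minimal C++ / Vulkan sketch of the 'same queue, no barriers' case. All handles and names below are hypothetical placeholders assumed to be created elsewhere - this is just a sketch, not code from my project:

```cpp
// Recording four independent dispatches back to back in ONE queue, with no
// barriers between them, so the GPU is free to overlap them.
#include <vulkan/vulkan.h>

void recordIndependentDispatches(VkCommandBuffer cmd,
                                 VkPipelineLayout layout,
                                 VkPipeline pipelines[4],   // one pipeline per independent workload
                                 VkDescriptorSet sets[4],
                                 uint32_t groupCounts[4])
{
    for (int i = 0; i < 4; ++i)
    {
        vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipelines[i]);
        vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, layout,
                                0, 1, &sets[i], 0, nullptr);
        // No vkCmdPipelineBarrier between these dispatches: they read/write
        // disjoint buffers, so the driver may execute them in parallel.
        vkCmdDispatch(cmd, groupCounts[i], 1, 1);
    }

    // Only when a later dispatch needs the results of the earlier ones do we
    // pay for a barrier (still cheaper than syncing across multiple queues):
    VkMemoryBarrier barrier = {};
    barrier.sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                         0, 1, &barrier, 0, nullptr, 0, nullptr);
}
```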
  2. Can you give a real-world example? Googling 'flip-flap' only gives me children's books. Early on I believed that hip movement plays some role in balancing, and that this might be the reason why I was able to swing left and right while keeping stable balance much faster than my ragdoll could. But after understanding the inverted pendulum I solved this, and the ragdoll can swing as fast as I do now. I concluded that only the center of mass and the support polygon relate to balancing, nothing else (assuming static friction is large enough to prevent any sliding, and you don't make a flywheel out of swinging arms while tipping back over an edge, etc.). This implies that all the balancing we do comes from the ankles, and other internal motion is basically a free variable. In fact I can play back some animation on the upper body and the controller adjusts the ankle torque automatically. So yes, eventually that two-point balance you talk about could help me make this internal motion look natural... (First, sorry for the off-topic stuff.) One of its pioneers said that current machine learning is just very advanced curve fitting. There is not much hope that this is faster than just calculating those curves from the physics equations directly, I guess. But I don't know. What you ask for is no longer physics. It might be the wrong forum, or even the wrong site, or just too early to ask for this. I wonder why Nvidia has not already tried something in this direction. We'll probably see... Anyway, what you want has not yet been done in games, I guess? So it is probably very hard to achieve. Not knowing what you aim for in more detail makes it hard to even estimate whether it is possible or impossible.
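
     For illustration, a tiny sketch of the 'center of mass over the support polygon' check (hypothetical C++, 2D convex support polygon - not code from the actual controller):

```cpp
// The body is statically balanced if the ground projection of the center of
// mass lies inside the convex hull of the contact points (support polygon).
#include <vector>
#include <cstdio>

struct Vec2 { float x, y; };

// Returns true if point p is inside the convex polygon 'support'
// (vertices given in counter-clockwise order).
bool insideConvexPolygon(const std::vector<Vec2>& support, Vec2 p)
{
    const size_t n = support.size();
    for (size_t i = 0; i < n; ++i)
    {
        Vec2 a = support[i];
        Vec2 b = support[(i + 1) % n];
        // Cross product of edge (a->b) with (a->p); negative means p lies
        // on the outer side of this edge, so it is outside the polygon.
        float cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
        if (cross < 0.0f)
            return false;
    }
    return true;
}

int main()
{
    // Two feet roughly forming a rectangle; COM projected onto the ground.
    std::vector<Vec2> supportPolygon = { {0,0}, {0.4f,0}, {0.4f,0.25f}, {0,0.25f} };
    Vec2 comGround = { 0.2f, 0.1f };
    std::printf("balanced: %d\n", insideConvexPolygon(supportPolygon, comGround));
    return 0;
}
```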
  3. I remember a related example that made me wonder: each thread has to process between 0 and 4 nodes (the number differs only by 1 across the threads of a workgroup, e.g. most threads process 4 nodes, but some of the latter threads only 3). Within this outer loop each node has some common work, like loading its data, and, depending on whether it is a leaf or not, some conditional work (so an if AND an else, but no inner loops, just some instructions). I've also implemented this algorithm in a different way: one outer loop to process all interior nodes, followed by a second outer loop for the leaves. Notice that in this case there are more idle threads, the program is almost twice as long, and the previously common work is processed twice. I expected the first approach to be faster. It was faster on Nvidia with Vulkan, but slower on AMD with Vulkan. On AMD with OpenCL the first approach was faster as well. The performance difference is at least large enough that it's worth maintaining both branches. (My code is full of such ifdefs and completely unreadable for this reason, but that's how I optimize for different hardware.) I do not understand how the second approach can be faster although it does twice the work. Assumptions: distributing the loads from memory may help prevent bandwidth peaks; the first approach becomes too complicated, needs more registers and occupancy decreases (I could only check such things with OpenCL); or the compiler acts somehow suboptimally (yeah, it must be this! It's always this!). In any case it was just one out of many examples that taught me: keeping all threads busy is not as important as you think (and I still refuse to believe this lesson). Also important: you can't really learn that much from such special cases. In the next shader the opposite may happen.
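
     A sketch of the two loop structures in plain C++ (the real code is a compute shader; Node and the process functions are hypothetical stand-ins):

```cpp
#include <vector>

struct Node { bool isLeaf; float value; };

static float sum = 0.0f;                 // stand-in for the shader's output

Node loadNode(const std::vector<Node>& nodes, int i) { return nodes[i]; } // common work
void processInterior(const Node& n) { sum += n.value * 2.0f; }            // interior branch
void processLeaf(const Node& n)     { sum += n.value; }                   // leaf branch

// Approach 1: one loop over this thread's nodes, branch per node. Shorter
// program, common work done once per node, but both branches live inside
// the same loop body (divergence within the loop).
void approach1(const std::vector<Node>& nodes, const std::vector<int>& myNodes)
{
    for (int i : myNodes)
    {
        Node n = loadNode(nodes, i);
        if (n.isLeaf) processLeaf(n);
        else          processInterior(n);
    }
}

// Approach 2: two loops, one per node type. Almost twice the program length,
// the common load code appears twice, more threads idle per loop - yet on
// some hardware / API combinations this variant ran faster.
void approach2(const std::vector<Node>& nodes,
               const std::vector<int>& myInterior, const std::vector<int>& myLeaves)
{
    for (int i : myInterior) processInterior(loadNode(nodes, i));
    for (int i : myLeaves)   processLeaf(loadNode(nodes, i));
}
```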
  4. I'll give you a different example. Say we have n-body problems of different sizes between 64 and 512. The most efficient way to process them would be to use the same algorithm but with different workgroup sizes of 64, 128, 256 and 512. Then you sort the problems into the proper workgroup size, so a problem of size 200 is processed by a workgroup of size 256, resulting in the need to dispatch 4 shaders instead of just one with the largest size (512). With enough work the additional overhead pays off. That's all fine, but on average still only about 75% of lanes will have work. There's nothing you can do about it; you have to accept it, it can't be done any better. I've often tried to implement things like a queue inside a workgroup to keep all lanes busy, but rarely was it a win over a simpler algorithm where some lanes work much longer than others, and if it was a win, the win was only a fraction of what I'd hoped for. I've read this too (I think it's mostly about the register cost of maintaining control flow), but in practice you can't choose anyway. I've noticed it is definitely worth it to skip memory accesses with an if, even accesses to LDS (this may have been different a decade ago). If recomputing saves registers, it will eventually be worth it (but often the compiler overrides your decisions). Let's say you have a workgroup size of 256, but only 180 lanes are active. In this case the last wavefront may be able to skip execution entirely. Whether this truly happens, or whether it is even more fine grained (thinking of SIMD units that process only 16 lanes at a time), I do not know, but it may work on some (or at least on future) GPUs, so I try to utilize it. Personally I think using ifs to save work is always good, and avoiding ifs never made much sense. Maybe it made more sense on early GPUs, but all that is really meant by this advice is: be aware that lanes operate in lockstep. You're right. (I skipped some things that got too technical for me to follow quickly, but I don't think they are important anyway.) So by using the stencil you used a higher level mechanism to pack work together (pixel quads where the stencil is zero everywhere will be skipped, but if only one pixel is nonzero, the other lanes will have no work). And this is exactly what you should do: try to pack work so that similarly lengthy workloads likely end up in the same wavefronts (but also pack it so nearby threads access nearby memory, which can contradict the former). Avoiding ifs surely has no priority; that advice seems outdated.
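
     A rough host-side sketch of the binning (hypothetical C++, just to show the idea of one dispatch per workgroup-size bin):

```cpp
#include <cstdint>
#include <vector>
#include <cstdio>

int main()
{
    const uint32_t binSizes[4] = { 64, 128, 256, 512 };
    std::vector<uint32_t> bins[4];                               // problem indices per bin

    std::vector<uint32_t> problemSizes = { 70, 200, 512, 100, 300, 64 }; // example data

    // Sort each problem into the smallest workgroup size that fits it.
    for (uint32_t i = 0; i < problemSizes.size(); ++i)
        for (int b = 0; b < 4; ++b)
            if (problemSizes[i] <= binSizes[b]) { bins[b].push_back(i); break; }

    // One dispatch per non-empty bin; workgroup count = problems in the bin.
    // (In real code this would be a vkCmdDispatch / clEnqueueNDRangeKernel
    // using a shader variant compiled for binSizes[b] lanes per group.)
    for (int b = 0; b < 4; ++b)
        if (!bins[b].empty())
            std::printf("dispatch: workgroup size %u, %zu groups\n",
                        binSizes[b], bins[b].size());
    return 0;
}
```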
  5. Ha, my 'spare time expertise'... I can't resist showing my walking ragdoll again: https://www.youtube.com/watch?v=ULRnlAbtL3s The video is old and looks robotic (the upper body does nothing, a dumb state machine drives the walking), but I hope I'll have time to continue this in the future. Luckily the Newton engine takes care of all the constraint solving stuff we've discussed, so I can focus on the control problem alone. Balancing is really a critical action; humans do it while always being at the edge of what is physically possible. It took me years to figure out how it works. For the simulation I do it analytically on an inverted pendulum model, which I have to map to and from the ragdoll bodies. Finally I set target angular accelerations on the joint motors, and Newton takes care of the rest. No cheating here, and performance is game ready. If I ever make a game on my own again, then only with this kind of characters - otherwise I won't!
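
     To illustrate the idea only (this is NOT the analytic controller from the video), here is a generic PD-style sketch of balancing an inverted pendulum from the ankle:

```cpp
// theta is the lean angle from upright; the motor gets a target angular
// acceleration, converted here into a torque. Gains are hypothetical.
#include <cmath>
#include <cstdio>

struct PendulumState { double theta; double omega; };   // lean angle [rad], angular velocity [rad/s]

double ankleTorque(const PendulumState& s, double mass, double length)
{
    const double g  = 9.81;
    const double kp = 60.0, kd = 12.0;                  // hypothetical gains, would need tuning

    // Target angular acceleration that drives theta and omega towards zero.
    double targetAlpha = -kp * s.theta - kd * s.omega;

    // Pendulum dynamics about the ankle: I * alpha = m*g*l*sin(theta) + torque,
    // with I = m*l^2. Solve for the torque that produces targetAlpha.
    double inertia = mass * length * length;
    return inertia * targetAlpha - mass * g * length * std::sin(s.theta);
}

int main()
{
    // Tiny explicit-Euler test: start leaning 0.2 rad and watch it recover.
    PendulumState s = { 0.2, 0.0 };
    const double mass = 70.0, length = 1.0, dt = 0.01, g = 9.81;
    for (int step = 0; step < 200; ++step)
    {
        double torque = ankleTorque(s, mass, length);
        double alpha  = (mass * g * length * std::sin(s.theta) + torque) / (mass * length * length);
        s.omega += alpha * dt;
        s.theta += s.omega * dt;
    }
    std::printf("lean angle after 2 seconds: %f rad\n", s.theta);
    return 0;
}
```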
  6. JoeJ

    Best way to optimise 3d models

    Btw, this algorithm has been integrated into Luxology Modo, which is also very good and quick for manual retopo. I really like it for character / organic modeling in general. Might be worth checking out; there was a sub-100-bucks version on Steam.
  7. Ok, but the algorithm you describe assumes a static base, which makes it easy. What if it is a walking robot standing with two feet on the ground (or worse: standing on a stack of dynamic boxes)? In that case the feet contacts form a cycle and calculating a solution becomes hard, probably leading to using the algorithm inside iterations in the hope of converging towards an acceptable solution. If I tried to apply your algorithm to the resting contact problem of a pyramid stack (so each body has two bodies below it), I could do so by forming something like a spanning tree and traversing it from the bottom upwards to get quick and pretty exact contact forces. But in the next frame the tree may look slightly different, choosing different paths. The result, compared to the naive approach, would be increased stiffness but unacceptable jitter. (I remember I tried lots of things like this.) So I still do not see a 'simple' way to do rigid body simulation - at least not if we desire more complex simulations than some boring boxes standing around doing nothing and some dead ragdolls. I remember I experimented with shock propagation as well, and it had some bad side effects for me, so I dropped the idea. It's one of those many failures that led me to the conclusion: either you do it right, or it does not work. You can fake graphics, but not physics. But this really depends on your goals of course - they want massive amounts of stuff, not accuracy.
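
     To make the 'support tree' idea concrete, a crude sketch (hypothetical C++) of propagating supported weight through such a graph - and exactly the kind of thing that breaks down once cycles, friction and torque enter the picture:

```cpp
// Each contact is treated as an edge to a supporting body below; the
// supported weight is pushed down so each contact carries (roughly) the
// weight resting on it. Illustration only, not a real solver.
#include <vector>
#include <cstdio>

struct Body
{
    float mass;                     // kg
    std::vector<int> supports;      // indices of bodies directly below (empty = ground)
    float carriedWeight = 0.0f;     // weight resting on this body from above [N]
};

void propagateWeights(std::vector<Body>& bodies /* ordered top of stack -> bottom */)
{
    const float g = 9.81f;
    for (Body& b : bodies)                               // top to bottom
    {
        float total = b.carriedWeight + b.mass * g;
        if (b.supports.empty()) continue;                // resting on the ground
        float share = total / (float)b.supports.size();  // naive equal split
        for (int s : b.supports)
            bodies[s].carriedWeight += share;            // ~normal force at that contact
    }
}

int main()
{
    // Tiny pyramid: body 0 on top, resting on bodies 1 and 2 (both on the ground).
    std::vector<Body> stack(3);
    stack[0].mass = 1.0f; stack[0].supports = { 1, 2 };
    stack[1].mass = 1.0f;
    stack[2].mass = 1.0f;
    propagateWeights(stack);
    std::printf("load carried by body 1: %.2f N\n", stack[1].carriedWeight);
    return 0;
}
```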
  8. How do you do this? Does that mean you build a large system of equations, e.g. for all bodies that form an island, and solve for the exact solution? If so, is this used in games, or is it common to solve contacts independently of each other simply by accumulation and iteration (the 'naive' approach, which worked well enough for me and did not cause jitter, but of course was not perfectly stiff)? You mean inverse dynamics? An IK solver would only be of use to control the target position / velocity / acceleration of the joints. My question is how to calculate the joint torques / forces necessary to get there, e.g. for a robot consisting of many bodies. The problem is pretty similar to the calculation of resting contact forces, but contacts act only in one direction, so I assume the two require / allow different approaches. I would be happy to understand just one of them.
  9. Sounds like a big problem. 1:100 is huge. This means potentially high divergence on the GPU: larger bodies need to check MUCH more nodes / cells for collision detection, and the same goes for calculating the response against many tiny bodies. But this can likely be solved by binning the bodies by size and running one compute shader dispatch per bin. That way divergence can be reduced a lot, and async compute can run all dispatches in parallel. Sounds good, but in practice it may still be an issue, I guess. The larger problem is probably robust simulation. Large size ratios usually mean large mass ratios too - or can you use similar masses for small and big bodies? (I asked how to handle large mass ratios myself just before, in my previous post...) Also, large bodies having many contacts with tiny bodies will likely cause heavy jitter. You'll have to cheat: use large sleep thresholds, things like shock propagation, etc. This solves some problems but also introduces new ones. Another option would be to store and simulate physical properties in grid cells instead of on individual bodies, e.g. average velocity / density per cell, similar to how grid-based fluid simulation works. Then you drive the bodies by the resulting grid velocity vector field. I don't know if PhysX particles already do this, but if you can afford some inaccuracy, this might be the way to go. (And why don't you use PhysX? It's NV exclusive, but you could check out what's possible there.) I can't comment on neural networks, but here's some research I've found: https://homes.cs.washington.edu/~barun/files/icra17_se3nets.pdf If you can, you should tell us more. What kinds of geometry do you want to simulate? Breaking buildings the player can interact with? Or just some debris flying around after explosions without affecting anything? It makes a big difference.
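
     A heavily simplified sketch of the grid idea (hypothetical C++, 2D): average body velocities into cells, then blend the bodies towards the cell average:

```cpp
#include <vector>
#include <algorithm>

struct Body { float x, y, vx, vy; };
struct Cell { float vx = 0, vy = 0; int count = 0; };

void driveBodiesFromGrid(std::vector<Body>& bodies, int gridW, int gridH,
                         float cellSize, float blend /* 0..1 */)
{
    std::vector<Cell> grid(gridW * gridH);

    // Scatter: accumulate body velocities per cell.
    for (const Body& b : bodies)
    {
        int cx = std::min(gridW - 1, std::max(0, (int)(b.x / cellSize)));
        int cy = std::min(gridH - 1, std::max(0, (int)(b.y / cellSize)));
        Cell& c = grid[cy * gridW + cx];
        c.vx += b.vx; c.vy += b.vy; c.count++;
    }
    // Average per cell.
    for (Cell& c : grid)
        if (c.count > 0) { c.vx /= c.count; c.vy /= c.count; }

    // Gather: blend each body's velocity towards its cell average, which damps
    // relative motion inside a cell (cheap, but clearly inaccurate).
    for (Body& b : bodies)
    {
        int cx = std::min(gridW - 1, std::max(0, (int)(b.x / cellSize)));
        int cy = std::min(gridH - 1, std::max(0, (int)(b.y / cellSize)));
        const Cell& c = grid[cy * gridW + cx];
        b.vx += blend * (c.vx - b.vx);
        b.vy += blend * (c.vy - b.vy);
    }
}
```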
  10. What I did to solve for forces is this: calculate forces between pairs of bodies in contact, sum them up for each body, and repeat this process iteratively. This dates back 20 years. I always thought it was a very naive approach and that there should be something better. I got stable stacks from it, but it was not good enough for large mass ratios. I tried caching contacts and reusing forces for the next update, but this alone did not work well; IIRC it did not converge, and bodies did not come to rest, or there was jitter. Are there better methods? The other issue I had was motorized joints for robotics, for similar reasons: joints were too soft, and increasing the iteration count decreased performance quickly but improved results only slowly. So I gave up and used physics engines instead. I noticed they have the same problems, but they had more features. Finally I found the Newton engine, which managed to solve those problems, recently using a graph-based solver similar to Featherstone, but force based - as far as I know. So I will not work on those things again, but if you'd describe the usual way to solve those constraints, I'd be very interested. Having had poor math education, I always need to reinvent wheels, and I'm never sure how others do it, because I do not understand their language. I wonder why you say it is simple, so maybe you have a simple description for dummies... (Collision detection / contact generation is, although complicated, much easier to learn for a person like me. I had no problems there.)
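
     For reference, a bare-bones sketch of that 'accumulate and iterate over pairs' scheme in impulse form (hypothetical C++, contact normals only, no friction, no rotation):

```cpp
#include <vector>
#include <algorithm>

struct RigidBody { float invMass; float vx, vy, vz; };

struct Contact
{
    int a, b;                 // body indices
    float nx, ny, nz;         // contact normal, pointing from a to b
    float accumulatedImpulse = 0.0f;
};

void solveContacts(std::vector<RigidBody>& bodies, std::vector<Contact>& contacts,
                   int iterations)
{
    for (int it = 0; it < iterations; ++it)
    {
        for (Contact& c : contacts)
        {
            RigidBody& A = bodies[c.a];
            RigidBody& B = bodies[c.b];

            // Relative velocity along the normal (negative = approaching).
            float relVel = (B.vx - A.vx) * c.nx + (B.vy - A.vy) * c.ny + (B.vz - A.vz) * c.nz;

            float effMass = A.invMass + B.invMass;
            if (effMass <= 0.0f) continue;

            // Impulse that removes the approaching velocity at this contact.
            float impulse = -relVel / effMass;

            // Clamp the *accumulated* impulse to stay non-negative
            // (contacts can only push, never pull).
            float oldAccum = c.accumulatedImpulse;
            c.accumulatedImpulse = std::max(0.0f, oldAccum + impulse);
            impulse = c.accumulatedImpulse - oldAccum;

            // Apply equal and opposite impulses.
            A.vx -= impulse * c.nx * A.invMass;
            A.vy -= impulse * c.ny * A.invMass;
            A.vz -= impulse * c.nz * A.invMass;
            B.vx += impulse * c.nx * B.invMass;
            B.vy += impulse * c.ny * B.invMass;
            B.vz += impulse * c.nz * B.invMass;
        }
    }
}
```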
  11. I think I already tried to adjust my point to include the ratio given by the word 'most' in the next post. You always pick something I say out of context so it becomes arguable, but you refuse to comment on my arguments. I comment on your points, you quote me just to introduce new points, becoming more and more unrelated. It seems the only reason you quote me at all is to continue the argument, instead of just saying: maybe I was wrong about that tiny detail. To me it is not simple - it is an open research topic. However, if you, somebody who I guess is more involved in physics than myself, say it is simple, I realize my own statement 'physics is harder than graphics' is either too subjective or just stupid. I take it back - oops, I was wrong, and sorry. But again, a statement like 'Physics can even be considered a graphics element most of the time, only there to make things look more realistic or dynamic.' is, IMHO, misplaced in a thread starting with a video about rigid body dynamics, and it downplays the applications of this field. Your words would not fail if you were able to accept later on that they might have been wrong. My fault is that I'm too provocative, so I make it hard for you. Sorry for that - I'll try to improve.

     I can't take the challenge, because I haven't worked on a game in the last decade. But I'm looking forward to the copy anyway - for fun, not to judge you. I think you can do a lot. You know a lot about everything, enough to make games. I don't think I underestimate your skills. But sometimes your thoughts appear shortsighted to me. This is because we have very different perspectives / starting points: your goal is to achieve the state of the art with your work on games; the state of the art is all there is, and there is nothing after it. My goal is to improve the state of the art with my work on technology; the state of the art is just the start, and finding out what comes next is my job. So I see and think about many things you do not. Physics simulation in games is a sleeping princess with huge potential. It opens up more possibilities and is more important for realism than photorealistic rendering.

     You once said you spend some time learning 'most' of any field, but learning all the remaining details would take too much time - better to progress to learning the next field, so that in the end you are capable of everything well enough to produce results. This makes sense and is good practice, but it also implies: you're likely not an expert in most things. That sounds provoking, but that's not my intent; I can only hope you see what I mean. I've written a small physics engine in the past, including constrained multibody stuff, and you remember my work on walking ragdolls. This is why I think I know more about physics than you do, and why I allow myself to try to correct you. So, looking forward to the next debate where we manage to upset each other and hopefully have some fun ;D - but be sure I respect you as a person, your skills, and the help you provide.
  12. JoeJ

    Dark Fantasy Environment and Props

    Great. I can't critique anything except the 2nd chair from the left, the one with the spikes. Not even the devil himself would want this chair in his room - it's a death trap! (Also, thin geometry like this causes aliasing.) All the issues pointed out earlier in the thread are gone. Good progress! :D Edit: I really like the texturing of the wooden furniture. You still use quick repetitive ornaments, but the low contrast makes the resulting issues disappear completely. This is a great way of spending less work on less important details by toning them down, I think. I'm learning from this. Maybe you could try something similar with the bricks on the gothic structure on top of the thin columns. There the bricks still appear too repetitive and bumpy to me. Making the bricks less bumpy and adding a second bump map that causes low-frequency height variation would make the whole thing look more naturally imperfect and smoother. (Similar to the combination of the wood and ornament textures on the furniture.)
  13. Agreed, and as you already pointed out, we can boil pretty much everything down to physics and finally math. However, if we talk about physics in games, probably most of us think about rigid body simulation, including stacks of boxes under gravity, joints for ragdolls, motors, etc. If you ask a gamer what physics in games means to him, he'll probably answer the gravity gun in HL2, a game whose whole world is based on advanced physics simulation. So this is my personal definition of the term, and looking at the OP video, the whole thread seems to follow this definition as well. This thread is not about the majority of games NOT utilizing worlds based on physics simulation; it is about the minority of games that do. Clearly there is interest in massive particle / rigid body simulation involving resting contact. Somebody interested in this probably wants to use it for gameplay and not just eye candy, considering the high performance cost it takes and what you have to sacrifice. Other things, e.g. ballistic or spaceship trajectories, precomputed door animations, PBR shading... all of this involves physics and has been discussed, but not only is it easy to implement compared to the above (no large problems with multiple unknowns), it is also off topic. This is why an argument like 'WE use it more for graphics than we do for gameplay' does not hold here and is unrelated, IMHO.
  14. Computers do not solve problems. People solve problems using computers. I've studied physics simulation, graphics and anatomy, all of them for much, much longer than 3-4 years now, and I can say physics is the hardest problem to solve mathematically. This does not mean an artist who can draw anatomically correct images out of his head deserves less respect than the guy writing a physics engine; there is just no point in comparing such different skills. But writing a physics engine requires more skill in math than a game renderer does, because the problems are much harder to solve. You would agree if you were more skilled in physics yourself. This does not mean physics guys ARE more skilled than graphics guys - I only talk about the minimal requirements to pull those things off. There are difficult fields in CG as well, but I only talk about the current state of the art in games.

     'Of course they can and we do it all the time. Especially in racing games; some of them do use physics but most don't. Right now in my space game, I am faking DeltaV flight using Lerping; a trick I stole from an older game that used it to make a stable network game.' If you make a simple racing game, yes, you can do it without needing to solve anything. A typical space game also has no real physics problem to solve: you can just apply Newton's laws to individual bodies, but that's as easy as rasterizing a circle. I talked about constrained multibody dynamics and made this clear. Try to write a simulator that can handle stacks of rigid bodies with different masses by calculating contact forces; then you know what I mean. (And please do not link to yet another paper from Erin Catto that proposes one way to get there just because everybody else links to them. And please no more laws of Newton - I've heard of them, no need to show me.)

     The rest of your post is not worth addressing. Again you place some unrelated facts everybody already knows in an attempt to strengthen your own weak argumentation, and again you assume the way YOU perceive the process of making games is the way WE make games, which implies that ALL games are made the way YOU think. In other words: you know it all. I prefer the assumption that I know almost nothing - this is much closer to the truth, not only for me but for all of us.
  15. 'Why? The art that you see on your screen in a game is the work of a 3D artist that had to spend 3-4 years learning art, an animator who spent 3-4 years learning animation and a graphics programmer who spent 3-4 years learning visual programming. I believe most physics programmers also spend 3-4 years learning how to code physics. So why would you ever believe that comparing a dedicated professional to another degrades them?' Maybe I used the term 'degrade' just to start a long argument leading nowhere. What I mean is that solving for forces in constrained multibody dynamics is a much harder problem than testing for visibility in graphics. Notice: rendering a frame in a game does not require a single solve. (This changes if we start using realtime GI, but we're not there yet, and even then the problem is very easy to solve, just hard to solve fast enough. In physics it is not just a performance issue.)

     'This is not true at all. We use dumbed down versions of physics all the time. The Newton's cradle simulation problem is a good example of this. Computers just don't have the power to simulate accurate physics. Windows in games aren't liquids, we don't track internal forces, we don't even use accurate collision shapes.' It is true. When you say 'list some games utilizing physics', I answer Half Life 2, Portal, Limbo, Penumbra, racing simulations. All of them try to do proper simulations based on correct equations of motion. This can't be faked the way we can fake graphics with precalculated GI. You can do some fakes in physics, like shock propagation, but this only exchanges limitations on one side for limitations on another. Collision shapes are always accurate; using a different representation for graphics does not make the physics inaccurate, wrong or faked. Actually it's the graphics that is faked, if you want to see it that way. I don't know what you mean by internal forces or liquids. Don't try to teach me about ragdolls - we've gone through this already.

     I'm well aware we use a lot of illusions in our games, and calculating hitpoints is / can be game design, not simulation. But physics simulation is not 'faking stuff until it looks right most of the time'. It has to work in every case, because it is the USER that is unpredictable, not the MATH. This is one reason why we use simulations: a simulation aims to work even when the user performs actions in the game we did not intend him to do. I do not say games have to be realistic; it is still possible to apply external forces on top of an accurate simulation, or to tune physical properties to serve game design. But in my opinion an accurate and working physics simulation of a game world is a better starting point, and improvements in this field will lead to innovation in games. This is subjective, and contrary to your view of faking lots of things and using proper math only in certain situations when desired. But I am aware this is subjective and personal, while you seem not to be. Presenting your personal view as general truth because you think that's 'state of the art' / that's 'how games do it because it's good enough' can have a negative influence on the views and progress of others, mainly newbies. This is what I criticize, not anything technical.