Blog Comments posted by Promit


  1. Quote:
     I see textured, diffuse shading on maybe up to 30 animated objects, maybe 100 particles, and in a few shots some distant terrain. A Raspberry Pi Model A can do the graphics part of that at 30 Hz in full HD using its crappy 4-core Broadcom GPU.

     

    I was surprised then--nay amazed--to find that the 3rd- and 4th-generation iPads actually have comparable GPUs. The Air and even the iPhone 6 aren't much better. I was under the impression that mobile devices were maybe a decade back on the GPU curve. It's looking closer to two.

     

    Given that information, I'm actually impressed you got this level of graphics. For caustics, I was going to suggest some large textured quads--but you're almost certainly fill-rate-bound at this resolution, which also explains your simple shading model. Updating animated geometry, I imagine, is also a significant challenge--especially since it looks like you used almost all of your polygon budget on animated geometry. I'd be interested to hear how you do skinning.

     

    I likewise believe that interactivity trumps quality. Further, I find anything less than 60Hz unplayable. As above, I'm impressed you managed even that.

     

    My research machine has 5,088 GPU cores. You're stuck with 4. Thank you for reminding me why I don't do mobile development; I retract my graphics criticism.

    Thanks - but you're missing somewhere between half and two thirds of the frame time [smile] I think I'll try to do a full breakdown of the graphics in another post. The water surface takes up at least half the frame time, maybe more on lower-end devices. It includes heavy tessellation (because I'm planning to do deformation/splashing down the line), three texture samples (one dependent), and lots of ALU, and it covers a good chunk of the screen. The background is also not flat shaded; it's doing a dynamic gradient, AND I'm adding film grain on many devices to fix gradient banding. On top of that I have to fog-fade the creatures into the background, which means they're all computing the background (with grain!) too.
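
    (For the curious: the grain trick amounts to adding a little noise before the gradient gets quantized to 8 bits, so band edges jitter into grain instead of lining up into visible stripes. A minimal CPU-side sketch of the idea -- the real version runs per-pixel in a fragment shader, and the noise source here is just a stand-in:)

        #include <cstdio>
        #include <random>

        // Quantize a shallow [0,1] gradient to 8 bits, with and without grain.
        // The noise pushes values back and forth across quantization
        // boundaries, so the band edges dissolve instead of forming stripes.
        int main() {
            std::mt19937 rng(1234);
            std::uniform_real_distribution<float> grain(-0.5f, 0.5f);  // +/- half a quantization step
            const int rows = 16;
            for (int y = 0; y < rows; ++y) {
                float g = 0.40f + 0.02f * y / (rows - 1);   // shallow gradient: prime banding territory
                int banded   = (int)(g * 255.0f + 0.5f);                // straight 8-bit quantization
                int dithered = (int)(g * 255.0f + 0.5f + grain(rng));   // add noise first
                std::printf("g=%.4f  banded=%3d  dithered=%3d\n", g, banded, dithered);
            }
        }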

     

    It's the subtle stuff that kills you sometimes. These things are minor and slip under conscious notice on screen, but they're key to the look.

     

    Packing that stuff in at 60 fps, full resolution, was a challenge. You guessed correctly - the graphics are extremely fill-rate and bandwidth bound. Every extra texture sample makes things a lot worse. The poly counts, despite being quite high, basically disappear against how much time is being spent on pixel processing. I don't do anything clever for skinning; we do a simple linear combination of bone matrices (which are derived from physics rigid bodies) in the shader.
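
    (Since you asked about skinning: in shader terms that linear combination is classic linear blend skinning. A CPU-side sketch of the math -- the 4-influence layout and 3x4 matrix format here are illustrative, not our actual vertex format:)

        #include <cstdio>

        // Minimal linear blend skinning: each vertex is transformed by several
        // bone matrices, and the results are blended by per-vertex weights.
        struct Vec3 { float x, y, z; };

        struct Mat34 {                      // 3x4 bone transform: rotation + translation
            float m[3][4];
            Vec3 xform(const Vec3& p) const {
                return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
                         m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
                         m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
            }
        };

        struct SkinnedVertex {
            Vec3  pos;        // bind-pose position
            int   bone[4];    // influencing bone indices (4 influences is a common layout)
            float weight[4];  // blend weights, summing to 1
        };

        // bones[i] = physics-driven bone pose * inverse bind pose, per bone.
        Vec3 skin(const SkinnedVertex& v, const Mat34* bones) {
            Vec3 out{0, 0, 0};
            for (int i = 0; i < 4; ++i) {
                Vec3 p = bones[v.bone[i]].xform(v.pos);
                out.x += v.weight[i] * p.x;
                out.y += v.weight[i] * p.y;
                out.z += v.weight[i] * p.z;
            }
            return out;
        }

        int main() {
            Mat34 shift{{{1,0,0, 0.5f}, {0,1,0, 0}, {0,0,1, 0}}};   // translate +0.5 in x
            Mat34 bones[1] = { shift };
            SkinnedVertex v{{1, 2, 3}, {0, 0, 0, 0}, {1, 0, 0, 0}};
            Vec3 p = skin(v, bones);
            std::printf("skinned: %g %g %g\n", p.x, p.y, p.z);      // prints 1.5 2 3
        }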

     

    (As far as the iPad GPU goes, remember that 2048x1536 @ 60 is 3x as many pixels per second as 1080p30, and the thing has to sink the GPU heat in a very narrow space.)
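
    (Checking that arithmetic: 2048 x 1536 x 60 is roughly 189 million pixels per second, versus 1920 x 1080 x 30 at roughly 62 million -- a ratio of just over 3.)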


  2. Quote:
     Gentle criticism: I'd like to see some better graphics--in particular some fake underwater caustics and some splashing on the surface. It doesn't look like you're taxing the GPU much. Regarding the animation, the characters look far too maneuverable.

     

    All said, knowing how difficult AI and realistic, physical animation is, I am very impressed. Good work!

    GPU-wise, you have to keep in mind that we are driving the whole system at retina resolution and 60 fps. That means as much as 2048x1536 resolution, sixty times a second, on a mobile GPU that overheats in the first minute and clocks down to its actual sustainable speed. The iPad Air still has some GPU muscle to spare, as do the new iPhone 6 and 6+. The iPad Mini Retina and iPhone 5 are running right at the line, and I have to shut off features on the iPad 4, 3, and 2 to hit that golden 60 fps number.

     

    I am not a believer in making gorgeous screenshots at the cost of actual gameplay. I needed a visual that I could create on all devices, solidly at 60 fps, at very high resolutions, and maintain that way beyond an hour of play. While I haven't had the time to push all of the devices all of the way to the edge, these GPUs are all being pushed very hard. Which, by the way, shows up as unbelievably high power consumption by the game. We are a battery destroyer ;)

     

    That said, caustics, splashes, and a variety of other special effects will most likely be added to the top-spec devices in the near future. Splashes in particular were a dev-time issue; I'm running a grid-based water simulation in there, but it still looks crappy when I'm deforming it, and I haven't had time to get the shaders and deformation right aesthetically. Caustics are behind a commented block of code because I'm not quite happy with their aesthetics or performance yet.
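
    (The basic heightfield scheme is simple enough to sketch for anyone curious -- each cell accelerates toward the average of its neighbors, which is what propagates ripples outward. Grid size, wave speed, and damping below are made-up toy values, not what's in the game:)

        #include <cstdio>
        #include <vector>

        // Toy heightfield water: h is the surface height per cell, v its
        // vertical velocity. Poking one cell sends ripples across the grid.
        int main() {
            const int   N = 16;
            const float waveSpeed = 0.5f, damping = 0.99f;
            std::vector<float> h(N * N, 0.0f), v(N * N, 0.0f);
            h[8 * N + 8] = 1.0f;                      // poke the surface once ("splash")

            for (int step = 0; step < 100; ++step) {
                // Each interior cell accelerates toward its neighbors' average.
                for (int y = 1; y < N - 1; ++y)
                    for (int x = 1; x < N - 1; ++x) {
                        float avg = (h[y*N + x - 1] + h[y*N + x + 1] +
                                     h[(y-1)*N + x] + h[(y+1)*N + x]) * 0.25f;
                        v[y*N + x] += waveSpeed * (avg - h[y*N + x]);
                        v[y*N + x] *= damping;        // slow energy loss so ripples die out
                    }
                for (int i = 0; i < N * N; ++i)
                    h[i] += v[i];                     // integrate heights
            }
            std::printf("center height after 100 steps: %f\n", h[8 * N + 8]);
        }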

     

     

    As far as maneuverability goes -

    1) It's a game. More responsive and controllable wins over more realistic.

    2) You might be surprised by what dolphins are actually capable of underwater. Several times we saw the simulation do something that looked implausible, only to visit the dolphins at the Aquarium and realize they actually do it, if you know when and where to look.


  3. Quote:
    Original post by mikeman
    I remember you saying that the computational requirements are minimal; if indeed they are cheap enough that they don't 'steal' cycles from other more basic functions, then it would be a great addition.
    Right now we're bottlenecked entirely in PhysX, which itself isn't multithreaded to any useful degree. So basically what you're seeing is one maxed-out thread of an i7, while seven other threads sit around doing nothing. CPU utilization is capped at something like 20% because of that limitation. Once the new PhysX with proper multithreading is out, or once a different physics engine with better threading is in play, who knows? And let's not forget that full-blown GPU-driven rigid body physics is just around the corner, too.
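    (For scale: with eight hardware threads exposed, a single saturated thread is 1/8 = 12.5% of total capacity; the rendering and game-logic threads plausibly account for the remainder of that ~20%.)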
    Quote:

    But I'm sure you already know that games are just a part of your target... I mean, I don't really have much idea how, say, large LOTR-like battles are handled in movies right now, but I could see it helping there, or with things like documentaries, demonstrations, and such
    Actually, a package called MASSIVE was developed specifically for LOTR's battles. It's a very cool system but if you watch demos, you'll notice that interaction between the actors in the crowds is minimal at best.

    And we have a good Hollywood director friend who is very eager about what ours will be able to do given some more development work [smile]

  4. It's been vague partly because it was only around yesterday that I figured it out myself. Thankfully I at least figured it out before GDC, so we have a clear angle to talk up. The live demo for next week is essentially a spruced-up version of the video; tools are unfortunately a little ways out. A few months, perhaps.

    P.S. The guy who wrote that AIGameDev article? We're sitting down with him at GDC [grin]

  5. Without going into technical details, the simplest thing I can think of is this.

    Euphoria is CPU intensive (i.e. next-gen only), and requires NaturalMotion to do all of the animation work for you, because nobody else can use their tech. See this article for a more in-depth overview. How many Euphoria-based games have shipped? Two, I believe.

    BioReplicants are extremely cheap to compute, and will be available late this year as middleware. We're also seriously considering making it free for non-commercial use, although no decision has been reached yet. We can do that precisely because they're easy to use and easy to integrate. Our goal is to be as close to drop-in as possible for any game, without requiring us to sit down and write everything for you.

    Now we happen to think we're also capable of a level of interactivity that Euphoria doesn't provide, but that's a separate discussion and difficult to assert for sure because frankly no one seems to have any idea what NaturalMotion has been doing for the past year or so.

  6. The system is not trying to maintain balance, actually. Balance is a separate problem entirely--a very important problem, certainly, but one that is well researched, with gobs of literature in the various archives of the ACM and IEEE. What ours does is maintain and return to an animation in a physically plausible way.

    We do want to eventually be able to do full balance-based systems, and it's certainly possible. It is, however, going to take time. Possibly a lot of time. What we have is really something that maintains animations under perturbation. Balance involves changing the underlying animation, and once you've done that in a reasonable way, we can drive the physical simulation using the altered animation.
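
    (One way to picture "maintaining an animation under perturbation": each joint is servoed toward the animated target pose, so a shove gets absorbed and the pose recovers. A toy one-joint PD tracker -- the gains and unit-inertia model are purely illustrative, and this is not our actual controller:)

        #include <cstdio>

        // One joint tracks an animated target angle via a PD controller,
        // while an external hit knocks it away mid-run. Illustrative only.
        int main() {
            const float kp = 50.0f, kd = 8.0f;        // illustrative gains
            const float dt = 1.0f / 60.0f;
            float angle = 0.0f, velocity = 0.0f;
            for (int frame = 0; frame < 120; ++frame) {
                float target = 0.3f;                  // joint angle from the animation track
                if (frame == 30) velocity += 5.0f;    // perturbation: something hits the limb
                float torque = kp * (target - angle) - kd * velocity;
                velocity += torque * dt;              // unit inertia, for simplicity
                angle    += velocity * dt;
                if (frame % 30 == 0)
                    std::printf("frame %3d: angle %.3f (target %.3f)\n", frame, angle, target);
            }
        }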

    Licensing/availability continues to be an open question, based on what people say next week and how much work we need to reach production-ready status. And what business deals are available, of course.

  7. Some of the omissions from his current iteration:
    * No joint constraints
    * No correction force constraints
    * His controller doesn't ever shut off, thus disallowing "death". (Deliberate for this video.)

    It's all mainly a question of time. As these things come in, it'll become much more effective in actual game environments.

    Our simplest use case is this -- imagine a shooter against AI controlled enemies. Current shooter tactics are generally "headshot him". Imagine instead that a hit to the arm throws his aim sideways -- possibly hitting his friends in the process. A hit to his leg brings him to his knees, stopping him in his tracks. And these aren't baked animations to put in, but rather physical responses to physical forces.

  8. Quote:
    Original post by NineYearCycle
    Could SlimTune be capable of doing co-profiling where the program runs on one machine and the profiler runs on another?
    Supported since day 1. It's baked right into the core design.
    Quote:
    I've tried explicit function call profiling (where I've embedded a profiling call within each function) before, and the overhead for simply recording that much information is enormous.
    Actually what I found was that polling timing information (QueryPerformanceCounter) was a dramatic part of the cost in that type of profiling.
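
    (That's easy to measure for yourself -- call it in a tight loop and divide the elapsed time by the call count. Windows-only, obviously:)

        #include <windows.h>
        #include <cstdio>

        // Micro-benchmark: the per-call cost of QueryPerformanceCounter.
        int main() {
            const int N = 1000000;
            LARGE_INTEGER freq, start, end, scratch;
            QueryPerformanceFrequency(&freq);
            QueryPerformanceCounter(&start);
            for (int i = 0; i < N; ++i)
                QueryPerformanceCounter(&scratch);    // the call being measured
            QueryPerformanceCounter(&end);
            double seconds = double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
            std::printf("%.1f ns per QueryPerformanceCounter call\n", seconds * 1e9 / N);
        }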

  9. Quote:
    Original post by mikeman
    I got jobs writing enterprise/DB apps without much difficulty and with no degree while in college studying CS (actually, I considerably delayed my studies because of the job--I'm only now finally getting my degree, at the age of 27(!)), and it wasn't at some small 4-person 'companies', but at relatively large ones with lots of clients.
    It's worth mentioning that there's a fairly large distinction between someone who never went to college, and someone who is simply taking a break. (Though you definitely took that to an extreme.)
    Quote:
    But somehow game industry seems to me like this scary place where you must at least have 1-2 degrees, a dozen finished quality games, maybe some books/articles published and whatnot, in order to get your foot in the door. [...] I don't know, general software jobs seem like one of those run-of-the-mill positions in which anybody with an ounce of programming knowledge can work, in contrast with gamedev jobs which seem more demanding. Maybe it's because there are a lot of people that want to work in the game industry?
    I think the problem might be that there are a lot of unqualified people who want to work in the game industry. I also suspect that game development studios, being generally small and quite busy outfits, have very little patience for training people who aren't already capable. Since the majority of college graduates really aren't that well trained for on-the-job engineering, larger companies in more forgiving industries are more inclined to take that headache on.

    That's my guess, anyway.

  10. It's completely custom but realistic physics, with a custom integrator of our own. The model doesn't get any information apart from the forces being applied to its joints. As for a real robot, it's not clear yet... there's a professor here who's interested, so we'll see what happens.
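
    (For anyone unfamiliar with the term: the integrator is the loop that advances velocities and positions each timestep from the applied forces. The textbook shape of it is semi-implicit Euler -- shown below purely as illustration; it is not our custom integrator:)

        #include <cstdio>

        // Semi-implicit (symplectic) Euler: velocity updates first from the
        // net force, then position uses the new velocity. Textbook form only.
        int main() {
            float pos = 0.0f, vel = 0.0f;
            const float mass = 2.0f, dt = 1.0f / 60.0f;
            for (int step = 0; step < 60; ++step) {   // one simulated second
                float force = mass * -9.81f;          // gravity as the only applied force
                vel += (force / mass) * dt;           // velocity first...
                pos += vel * dt;                      // ...then position, with the new velocity
            }
            std::printf("after 1s of free fall: pos = %.2f m\n", pos);
        }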

    What you're watching is a very old demo, and the current version is much nicer -- but there's still a lot of work to do. We're going to spend time really refining it before actually trying to shop it around to people.

  11. It's not my engineering work, so I'm not really equipped to go into details, but it's my understanding that this provides better and cheaper results than HRTF-based approaches. My friend studied HRTFs in detail; I know next to nothing about them. My main question is: if they have this stuff that can do it, where is it? I would think there'd be at least the occasional game using it. (And reading the wiki page, they sound hideously complex and limited...)

    Plus, this is cheap and requires no special hardware support -- I'm considering porting it to the DS if the homebrew stuff allows for it. And of course it's a very strong offering in the mobile space, where most players are likely to have headphones.