Promit's Ventspace



Bandit's Shark Showdown - Can Video Games Help Stroke Victims?

Posted by Promit, 18 November 2015

It's been a long time since I've posted, but with good reason. For the last couple of months, I've been busy building our newest game: Bandit's Shark Showdown! Released as a launch title for Apple TV and now out on iOS, the game represents the latest generation of our animation/physics tech, and it will power the next round of stroke rehabilitation therapies that we're developing.

Maybe you don't know what I'm talking about - that's okay, because The New Yorker published a feature on us this week for their tech issue:

Can Video Games Help Stroke Victims?

And here's the game on the iTunes App Store:

Bandit's Shark Showdown!

Our Apple TV trailer:



I Am Dolphin - Kinect Prototype

Posted by Promit, 09 October 2014

I'd hoped to write up a nice post for this, but unfortunately I haven't had much time lately. Releasing a game, it turns out, is not at all relaxing. Work doesn't end when you hit that submit button to Apple.

In the meantime, I happened to put together a video showing a prototype of the game running off Kinect control. I thought you all might find it interesting, as it's a somewhat different control scheme from the touch screen. Personally I think it's the best version of the experience we've made, and we've built several (touch screen, mouse, PS Move, Leap, etc.). Unlike the touch screen version, you get full 3D directional control; we don't have to infer your motion intention. This makes a big difference in the feeling of total immersion.




Our New Game: I Am Dolphin

Posted by Promit, 07 October 2014

After an incredibly long stretch of quiet development, our new game, I Am Dolphin, will be available this Thursday, October 9th, on the Apple/iOS App Store. This post covers the background and the game itself; I'm planning to post more technical information about the game and its development in the future. That depends somewhat on people reading and commenting - tell me what you want to know about the work, and I'm happy to answer as much as I can.



For those of you who may not have followed my career path over time: A close friend and I have spent quite a few years doing R&D with purely physically driven animation. There's plenty of work out there on the subject; ours is not based on any of it and takes a completely different approach. About three years ago, we met a neurologist at the Johns Hopkins Hospital who helped us set up a small research group at Hopkins to study biological motion and create a completely new simulation system from the ground up, based around neurological principles and hands-on study of dolphins at the National Aquarium in Baltimore. Unlike many other physical animation systems, including our own previous work, the new work allows the physical simulation to be controlled as a player character. We also developed a new custom in-house framework, called the Kata Engine, to make the simulation work possible.

One of the goals in developing this controllable simulation was to learn more about human motor control, and specifically to investigate how to apply this technology to recovery from motor impairments such as stroke. National Geographic was kind enough to write some great articles on our motivations and approach:

Virtual Dolphin On A Mission

John Krakauer's Stroke of Genius

Although the primary application of our work is medical and scientific, we've also spent our spare time creating a game company, Max And Haley LLC, and a purely entertainment-focused version of the game. This is the version that will be publicly available in a scant few days.


Here is a review of the game by AppUnwrapper.

I got my hands on the beta version of the game, and it’s incredibly impressive and addictive. I spent two hours playing right off the bat without even realizing it, and have put in quite a few more hours since. I just keep wanting to come back to it. iPhones and iPads are the perfect platform for the game, because they allow for close and personal, tactile controls via simple swipes across the screen.

I now have three shipped titles to my name; I'd say this is the first one I'm really personally proud of. It's my firm belief that we've created something that is completely unique in the gaming world, without being a gimmick. Every creature is a complete physical simulation. The dolphins you control respond to your swipes, not by playing pre-computed animation sequences but by actually incorporating your inputs into the drive parameters of the underlying simulation. The end result is a game that represents actual motion control, not gesture-recognition based selection of pre-existing motions.
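
To make that distinction a little more concrete, here is a tiny, purely illustrative sketch (in Python, and emphatically not the Kata Engine or anything from our actual codebase) of what it means for a swipe to feed the drive parameters of a running simulation rather than trigger a pre-baked clip. Every name and constant below is made up for illustration.

    import math

    class ToySwimmer:
        """A crude simulated body: a swipe biases its drive parameters;
        it never selects a pre-recorded animation."""

        def __init__(self):
            self.x = self.y = 0.0
            self.heading = 0.0   # radians
            self.speed = 0.0     # units per second

        def apply_swipe(self, dx, dy, strength):
            # Swipe direction becomes a target heading; swipe magnitude becomes thrust.
            target = math.atan2(dy, dx)
            error = math.atan2(math.sin(target - self.heading),
                               math.cos(target - self.heading))
            self.heading += 0.25 * error   # steer toward the swipe, don't snap to it
            self.speed += strength

        def step(self, dt):
            # Minimal dynamics: drag, then move along the current heading.
            self.speed *= 0.98
            self.x += math.cos(self.heading) * self.speed * dt
            self.y += math.sin(self.heading) * self.speed * dt

    dolphin = ToySwimmer()
    dolphin.apply_swipe(1.0, 0.5, strength=2.0)   # one swipe, up and to the right
    for _ in range(60):                           # simulate one second at 60 Hz
        dolphin.step(1.0 / 60.0)
    print(round(dolphin.x, 2), round(dolphin.y, 2))

The point is simply that what you see on screen is whatever the simulation does with those parameters, which is why no two swipes ever look quite the same.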

As I said at the beginning of the post, this is mostly a promotional announcement. However, this is meant to be a technical blog, not my promotional mouthpiece, and I want to dig into the actual development and technical aspects of this game quite a bit. There's a lot to talk about in the course of developing a game with a three-person (2x coder, 1x artist) team, building a complete cross-platform engine from nothing, all against the backdrop of an academic research hospital environment. Then there's the actual development of the simulation, which included a lot of interaction with the dolphins, the trainers, and the Aquarium staff. We did a lot of filming (but no motion capture!) in the course of development as well; I'm hoping to share some of that footage going forward.

Here's a slightly older trailer - excuse the wrong launch date on this version; we decided to slip the release by two months after it was created, which is worth a story in itself. It's not fully representative of the final product, but our final media isn't quite ready. Note that everything you see in the trailer is real gameplay footage on the iPad. Every last fish, shark, and cetacean is a physical simulation with full AI.




Neuroscience Meets Games

Posted by Promit, 26 September 2011

It's been a long time since I've written anything, so I thought I'd drop off a quick update. I was in San Francisco last week for a very interesting and unusual conference: ESCoNS, the first meeting of the Entertainment Software and Cognitive Neurotherapy Society. Talk about a mouthful! The attendees were mostly doctors and research lab staff, though there were people in from Activision, Valve, and a couple of other industry representatives. The basic idea is that games can have a big impact on cognitive science and neuroscience, particularly as it applies to therapy. The conference was meant to bring together people interested in this work, and at over 200 attendees it drew fairly substantial attendance for what seems like a rather niche pursuit. For comparison's sake, GDC attendance is generally in the vicinity of 20,000 people.

The seminal work driving this effort is really the findings of Daphne Bavelier at the University of Rochester. All of the papers are available for download as PDFs, if you are so inclined. I imagine some background in psychology, cognitive science, or neurology is helpful to follow everything that's going on. The basic take-away, though, is that video games can have dramatic and long-lasting positive effects on our cognitive and perceptual abilities. Here's an NPR article that is probably easier to follow for a layperson with no background. One highlight:

Bavelier recruited non-gamers and trained them for a few weeks to play action video games. [...] Bavelier found that their vision remained improved, even without further practice on action video games. "We looked at the effect of playing action games on this visual skill of contrast sensitivity, and we've seen effects that last up to two years."

Another rather interesting bit:

Brain researcher Jay Pratt, professor of psychology at the University of Toronto, has studied the differences between men and women in their ability to mentally manipulate 3-D figures. This skill is called spatial cognition, and it's an essential mental skill for math and engineering. Typically, Pratt says, women test significantly worse than men on tests of spatial cognition.

But Pratt found in his studies that when women who'd had little gaming experience were trained on action video games, the gender difference nearly disappeared.

As it happens, I've wound up involved in this field as well. I had the good fortune to meet a doctor at the Johns Hopkins medical center/hospital who is interested in doing similar research. The existing work in the field is largely focused on cognition and perception; we'll be studying motor skills. Probably lots of work with iPads, Kinect, Wii, PS Move, and maybe more exotic control devices as well. There are a lot of potential applications, but one early angle will be helping stroke patients recover basic motor ability more quickly and more permanently.

There's an added component as to why we're doing this research. My team believes that by studying the underlying neurology and psychology that drives (and is driven by) video games, we can turn the research around and use it to produce games that are more engaging, more interactive, more addictive, and just more fun. That's our big gambit, and if it pans out we'll be able to apply a more scientific and precise eye to the largely intuitive problem of making a good game. Of course the research is important for its own sake and will hopefully lead to a lot of good results, but I went into games and not neurology for a reason ;)


Our game Slug Bugs released!

Posted by Promit, 28 May 2011

FREE! We've made Slug Bugs free for a limited time. Why wouldn't you try it now?

I've been quiet for a very long time now, and this is why: we've just shipped our new game, Slug Bugs!

It uses our Ghost binaural audio, which I've teased several times in the past, and it's also wicked fun to play. Please check it out! A little later in the week I'll probably discuss the development more.


Understanding Subversion's Problems

Posted by Promit, 12 March 2011

I've used Subversion for a long time, even CVS before that. Recently there's been a lot of momentum behind moving away from Subversion to a distributed system like git or Mercurial. I myself wrote a series of posts on the subject, but I skipped over the reasons WHY you might want to switch away from Subversion. This post is motivated in part by Richard Fine's post, but it's a response to a general trend and not his entry specifically.

SVN is a long-time stalwart as version control systems go, created to patch up the idiocies of CVS. It's a mature, well-understood system that has been and continues to be used in a vast variety of production projects, open and closed source, across widely divergent team sizes and workflows. Never mind the hyperbole: SVN is good by practically any real-world measure. And like any real-world production system, it has a lot of flaws in nearly every respect. A perfect product is a product no one uses, after all. It's important to understand what the flaws are, and in particular I want to discuss them without advocating for any alternative. I don't want to compare it to git or explain why git fixes the problems, because doing so tends to distort the actual problems and implies that distributed version control is the solution. It can be a solution, but the problems reach a bit beyond that.

Committing vs publishing
Fundamentally, a commit creates a revision, and a revision is something we want as part of the permanent record of a file. However, a lot of those revisions are not really meant for public consumption. When I'm working on something complex, there are a lot of points where I want to freeze-frame without actually telling the world about my work. Subversion understands this perfectly well, and its mechanism for doing so is branches. The caveat is that this always requires server round-trips, which is okay as long as you're in a high-availability environment with a fast server. That's fine while you're in the office, but it fails the moment you're traveling or your connection to the server drops for whatever reason. Subversion cannot queue up revisions locally; it has exactly two reference points: the revision you started with and the working copy.

In general, though, we are working in high-availability environments, and making a round trip to the server is not a big deal. Private branches are supposed to be the solution to this problem of work-in-progress revisions: do everything you need, with as many revisions as you want, and then merge to trunk. Simple as that! If only merges actually worked.
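
For reference, that private-branch dance looks roughly like the following. This is just a sketch that drives the stock svn client from Python's subprocess module; the repository URL and branch name are made up, and every one of these steps is a round-trip to the server.

    import subprocess

    def svn(*args):
        # Thin wrapper around the command-line svn client.
        subprocess.run(["svn", *args], check=True)

    REPO = "https://svn.example.com/repo"   # hypothetical repository URL

    # Create and switch to a private branch (both are server operations).
    svn("copy", REPO + "/trunk", REPO + "/branches/my-feature",
        "-m", "Create private branch for work in progress")
    svn("switch", REPO + "/branches/my-feature")

    # ...edit files, then record as many work-in-progress revisions as you like...
    svn("commit", "-m", "WIP: half-finished refactor")

    # When the work is ready, merge the branch back into trunk.
    svn("switch", REPO + "/trunk")
    svn("merge", "--reintegrate", REPO + "/branches/my-feature")
    svn("commit", "-m", "Merge my-feature back into trunk")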

SVN merges are broken
Yes, they're broken. Everybody knows merges are broken in Subversion and that they work great in distributed systems; what tends to get glossed over is why they're broken. There are essentially two problems with merges: the actual merge process, and the metadata about the merge. Neither works in SVN. The fatal mistake in the merge process is one I didn't fully understand until reading HgInit (several times). Subversion's world revolves around revisions, which are snapshots of the whole project. Merges basically take diffs from the common root and smash the results together. But the merged files didn't magically drop from the sky -- we made a whole series of changes to get them where they are. There's a lot of contextual information in those changes which SVN has completely and utterly forgotten. Not only that, but the new revision it spits out has to jam a potentially complicated history into a property field, and naturally that doesn't work.

For added impact, this context problem shows up without branches if two people happen to make non-trivial, unrelated changes to the same trunk file. So not only does the branch approach not work, you get hit by the same bug even if you eschew branches entirely! And invariably the reason this shows up is that you don't want to make small changes to trunk. Damned if you do, damned if you don't.

Newer version control systems are typically designed around changes rather than revisions. (Critically, this has nothing at all to do with decentralization.) By defining a particular 'version' of a file as a directed graph of changes leading to a particular result, there's a ton of context about where things came from and how they got there. Unfortunately, the complex history tends to make assignment of revision numbers complicated (and in fact impossible in distributed systems), so you are no longer able to point people to r3359 for their bug fix. Instead it's a graph node, probably assigned some arcane unique identifier like a GUID or hash.
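
As a toy illustration of the change-graph model (not any particular tool's internal data structures), each change records its parents and is identified by a content hash rather than a sequential revision number:

    import hashlib

    class Change:
        # A node in a directed graph of changes. Real systems store diffs or
        # snapshots plus far more metadata; this only shows the identity model.
        def __init__(self, description, parents=()):
            self.description = description
            self.parents = tuple(parents)
            payload = description + "".join(p.id for p in self.parents)
            self.id = hashlib.sha1(payload.encode()).hexdigest()

    root = Change("initial import")
    a = Change("fix crash in loader", parents=[root])
    b = Change("rewrite merge logic", parents=[root])
    merge = Change("merge the two lines of work", parents=[a, b])

    # There's no meaningful 'r3359' here; you point people at a hash instead.
    print(merge.id[:12], [p.id[:12] for p in merge.parents])

A merge node keeps pointers to both lines of work, which is exactly the context Subversion throws away.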

File system headaches
.svn. This stupid little folder is the cause of so many headaches. Essentially it contains all of the metadata from the repository about whatever you synced, including the pristine, unmodified versions of files. But if you forget to copy it (because it's hidden), Subversion suddenly forgets all about what you were doing; you've just lost its tracking information, after all. Now you get to do a clean update and a hand merge. Overwrite it by accident, and Subversion will get confused. And here's the one that gets me every time with externals like boost: copy folders from a different repository, and all of a sudden Subversion sees folders from something else entirely and will refuse to touch them at all until you go through and nuke the offending folders by hand. Nope, you were supposed to svn export it, never mind that the offending files are marked hidden.

And of course, because Subversion has no understanding of the native file system, move/copy/delete operations are all deeply confusing to it unless it's the one handling those changes. If you're working with an IDE that isn't integrated into source control, you have another headache coming, because IDEs are usually built around rearranging files. (In fact I think this is probably the only good reason to install IDE integration.)

It's not clear to me whether there's any productive way to handle this particular issue, especially cross-platform. I can imagine a particular set of rules -- copying or moving files within a working copy does the same in version control, and moving them out is equivalent to a delete. (What happens if they come back?) This tends to suggest integration at the filesystem layer, and our best bet for that is probably a FUSE implementation for the client. FUSE isn't available on Windows, though apparently a similar tool called Dokan is; its maturity level is unclear.

Changelists are missing
Okay, this one is straight out of Perforce. There's a client side and a server side to this, and I actually have the client side via my favorite client, SmartSVN. The idea on the client is that you group changed files together into changelists and send them off all at once; it's basically a queued commit you can use for staging. Perforce adds a server side, where pending changelists actually exist on the server, so you can see what people are working on (and a description of what they're doing!), and so forth. Subversion has no idea of anything except when files differ from their copies living in the .svn shadow directory, and that's only on the client. If you have a couple of different live streams of work, separating them out is a bit of a hassle. Branches are no solution at all, since it isn't always clear up front what goes in which branch. Changelists are much more flexible.
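
For what it's worth, the stock command-line client does have a client-side version of this feature as of Subversion 1.5 (svn changelist), which covers the grouping but, as noted above, nothing server-side. A rough sketch of the usage, again driving the svn client from Python's subprocess with made-up file names:

    import subprocess

    def svn(*args):
        subprocess.run(["svn", *args], check=True)

    # Group two independent streams of work into named changelists (client-side only).
    svn("changelist", "audio-fixes", "mixer.cpp", "mixer.h")
    svn("changelist", "ui-tweaks", "menu.cpp")

    # Commit one group without touching the other.
    svn("commit", "--changelist", "audio-fixes", "-m", "Fix clipping in the mixer")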

Locking is useless
The point of a lock in version control systems is to signal that it's not safe to change a file. The most common use is for binary files that can't be merged, but there are other useful situations too. Here's the catch: Subversion checks locks when you attempt to commit. That's how it has to work. In other words, by the time you find out there's a lock on a file, you've already gone and started working on it, unless you obsessively check repository status for files. There's also no way to know if you're putting a lock on a file somebody else has pending edits to.

The long and short of it is that if you're going to use a server, really use it. Perforce does. There's no need to have the drawbacks of both centralized and distributed systems at once.

I think that's everything that bothers me about Subversion. What about you?


I am at GDC

Posted by Promit, 01 March 2011

I arrive at SFO for GDC tonight; give me a shout if you'd like to meet!


Evaluation: Git

Posted by Promit, 17 October 2010

Late copy of a Ventspace post.


Last time I talked about Mercurial and was generally disappointed with it. I also evaluated Git, another major distributed version control system (DVCS).

Short Review: Quirky, but a promising winner.

Git, like Mercurial, was spawned as a result of the Linux-BitKeeper feud. It was written initially by Linus Torvalds, apparently during quite a lull in Linux development. It is, for obvious reasons, a very Linux-focused tool, and I'd heard that performance was poor on Windows, so I was not optimistic about it being usable there.

Installation actually went very smoothly. Git for Windows is basically powered by MSYS, the same Unix tools port that accompanies the Windows GCC port called MinGW. The installer is neat and sets up everything for you. It even offers a set of shell extensions that provide a graphical interface. Note that I opted not to install this interface, and I have no idea what it's like. A friend tells me it's awful.

Once the installer is done, git is ready to go. It's added to PATH, and you can start cloning things right off the bat. Command-line usage is simple and straightforward, and there's even a 'config' command that lets you set things up nicely without having to figure out which config file you want and where it lives. It's still a bit annoying, but I like it a lot better than Mercurial. I've heard some people complain about git being composed of dozens of binaries, but I haven't seen this on either my Windows or Linux boxes. I suspect this is a complaint about old versions, where each git command was its own binary (git-commit, git-clone, git-svn, etc.), but that's long since been retired. Most of the installed binaries are just the MSYS ports of core Unix programs like ls.
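
For example, the typical first-run setup is just a couple of config calls; a trivial sketch via Python's subprocess, with placeholder values:

    import subprocess

    # Typical first-run git setup; both values are placeholders.
    subprocess.run(["git", "config", "--global", "user.name", "Your Name"], check=True)
    subprocess.run(["git", "config", "--global", "user.email", "you@example.com"], check=True)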

I was also thrilled with the git-svn integration. Unlike Mercurial, the support is built in and flat-out works with no drama whatsoever. I didn't try committing back into the Subversion repository from git, but apparently there is fabulous two-way support. It was simple enough to create a git repository, but it can be time consuming, since git replays every single check-in from Subversion into itself. I tested on a small repository with only about 120 revisions, which took maybe two minutes.
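
The conversion itself is essentially a one-liner; a sketch with a hypothetical repository URL (--stdlayout assumes the conventional trunk/branches/tags layout):

    import subprocess

    SVN_URL = "https://svn.example.com/repo"   # hypothetical

    # git-svn replays every Subversion revision into the new git repository,
    # which is why this step is slow on large histories.
    subprocess.run(["git", "svn", "clone", SVN_URL, "--stdlayout", "converted-repo"],
                   check=True)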

This is where I have to admit I have another motive for choosing Git. My favorite VCS frontend comes in a version called SmartGit. It's a stand-alone (not shell-integrated) client that is free for non-commercial use and works really well. It even handled SSH beautifully, which I'm thankful for. It's technically still beta, but I haven't noticed any problems.

Now the rough stuff. I already mentioned that Git for Windows comes with a GUI that is apparently not good. What I discovered is that getting git to authenticate from Windows is fairly awful. In Subversion, you configure users and passwords explicitly in a plain-text file. Git doesn't support anything of the sort; its 'git-daemon' server allows fully anonymous pulls and can be configured for anonymous push only. Authentication is entirely dependent on filesystem permissions and the users configured on the server (barring workarounds), which means that most of the time, authenticated Git transactions happen inside SSH sessions. If you want to do something else, it's complicated at best. Oh, and good luck with HTTP integration if you choose a web server other than Apache. I have to imagine running a Windows-based git server is difficult.

Let me tell you about SSH on Windows. It can be unpleasant. Most people use PuTTY (which is very nice), and if you use a server with public-key authentication, you'll end up using a program called Pageant that provides that service to various applications. Pageant doesn't use OpenSSH-compatible keys, so you have to convert your keys over, and watch out, because the current stable version of Pageant won't do RSA keys. Git in turn depends on a third program called Plink, which exists to help programs talk to Pageant, and it finds that program via the GIT_SSH environment variable. The long and short of it is that getting Git to log into a public-key-auth SSH server is quite painful. I discovered that SmartGit simply reads OpenSSH keys and connects without any complications, so I rely on it for transactions with our main server.
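
If you do go the Plink route, the glue is just that one environment variable; a sketch, with a hypothetical plink path and server:

    import os
    import subprocess

    # Point git at PuTTY's plink so it can use Pageant for key authentication.
    # The plink path and the server URL are placeholders.
    env = dict(os.environ, GIT_SSH=r"C:\Program Files\PuTTY\plink.exe")

    subprocess.run(["git", "clone", "ssh://git@server.example.com/project.git"],
                   env=env, check=True)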

I am planning to transition over to git soon, because I think the workflow of a DVCS is better overall. It's really clear, though, that these are raw tools compared to the much more established and stable Subversion. It's also a little more complicated to understand; whether you're using git, Mercurial, or something else, it's valuable to read the free ebooks that explain how to work with them. There are all kinds of quirks in these tools. Git, for example, uses a 'staging area' that snapshots your files for commit, and if you're not careful you can wind up committing older versions of your files than what's on disk. I don't know why -- it seems like the opposite extreme from Mercurial.

It's because of these types of issues that I favor choosing the version control system with the most momentum behind it. Git and Mercurial aren't the only DVCSes out there; Bazaar, monotone, and many more are available. But these tools already have rough (and sharp!) edges, and by sticking to the popular ones you are likely to get the most community support. Both Git and Mercurial have full-blown books dedicated to them that are available electronically for free. My advice is that you read them.





