Embassy of Time

Making a (bad) movie... with AIs


A friend of mine and I realized that we are boring to listen to when we talk about our passions. He is into AI coding, and I love coding asset management systems. So we have gotten together to see how far we can push AIs in animation! We're currently making a very bad movie that the software titled "Ghost machine fights alien monster", because we fed it a ton of bad movie descriptions. It's horrible. But it's fun to watch (I think) and the work is really interesting (I think), so I thought I'd throw what currently exists out there. It's just two minutes (the full movie should end up around 20-30 minutes), so, you know, don't expect anything. Plus, again, it's badly made, because we still haven't built anything but some test assets. You can watch the mess here:

https://www.youtube.com/watch?v=_LJll-_p52w&feature=youtu.be 

Also, because the AIs we're using are obsolete proprietary ones, we can't modify or share them (we're not even supposed to have them; my buddy works with AI article software). So I've started messing with my own stuff. Here's a very, very simple one built along the same lines as the infamous Botnik one, but not nearly as advanced (it took under an hour to code):

http://embassyoftime.com/DocManus.html
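
The basic idea is roughly the same as Botnik's: learn from a corpus which words tend to follow which, then chain or suggest them. As a very rough sketch of that approach (not the actual DocManus code; the corpus filename and function names below are made up purely for illustration), something like this Python gets you most of the way:

import random
from collections import defaultdict

# Rough sketch of a Botnik-style word predictor, NOT the real DocManus code.
# "bad_movie_scripts.txt" is a stand-in for whatever corpus you feed it.

def build_model(text, order=2):
    """Map each run of `order` words to the words seen following it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=50, order=2):
    """Walk the model, always picking a word that followed the current run."""
    key = random.choice(list(model.keys()))
    output = list(key)
    for _ in range(length):
        choices = model.get(tuple(output[-order:]))
        if not choices:  # dead end: restart from a random key
            key = random.choice(list(model.keys()))
            output.extend(key)
            continue
        output.append(random.choice(choices))
    return " ".join(output)

if __name__ == "__main__":
    with open("bad_movie_scripts.txt", encoding="utf-8") as f:
        corpus = f.read()
    print(generate(build_model(corpus)))

Feed it enough bad movie descriptions and you get plausible-looking nonsense; the real Botnik tool adds a human picking from the suggestions, which is where most of the comedy comes from.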

Any feedback is welcome, and ideas on how to do stuff like this are VERY welcome!

26 minutes ago, HappyCoder said:

How much of the work is manual? How much is generated by an AI?

Phew, not an easy question to answer, really. The AI is mostly writing the script at this point, and the challenge is bridging the gap to the actual animation. By the assignment scene, it's already code. By the boardwalk scene (the boat-leaning scene is a bit weird to explain), it's using tailored code (i.e. all code blocks, but heavily edited), and by the hotel street scene where he takes a smoke (that's the hand movement; it's hard to see), it's all generic code, and we're building the first direct parser between the AI manuscripting and the animation. That's why his legs suddenly look so spazzy: the code designed for the AI has no idea about human proportions, it just had a few numbers to go by. We need spatial awareness, which we hope to have by the scene I'm designing now. Kinda.

So in short: it starts out mostly manual, is entirely code by the halfway point, and is the foundation of a primitive AI coding language usable by the manuscript AI by the end. Not sure how much sense that made... my head is in a weird place trying to make this work!
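
To give a rough idea of what "a direct parser between the AI manuscripting and the animation" means in practice, here is a minimal sketch in Python (not our actual code; the manuscript format, the verb table and the command fields are all invented here just to show the shape of the thing):

from dataclasses import dataclass
from typing import Optional

# Very rough illustration of mapping manuscript lines to animation commands.
# Everything below (format, names, numbers) is hypothetical.

@dataclass
class AnimCommand:
    actor: str       # which rig the command targets
    action: str      # e.g. "walk", "smoke", "lean"
    target: str      # where / towards what, if the action needs it
    duration: float  # seconds, guessed from the manuscript wording

# Crude mapping from manuscript verbs to rig actions.
VERB_TABLE = {
    "walks": "walk",
    "smokes": "smoke",
    "leans": "lean",
}

def parse_line(line: str) -> Optional[AnimCommand]:
    """Turn one manuscript line like 'HERO walks to the boardwalk' into a command."""
    words = line.strip().split()
    if len(words) < 2:
        return None
    actor, verb = words[0], words[1].lower()
    action = VERB_TABLE.get(verb)
    if action is None:
        return None  # no idea how to animate this verb yet
    target = " ".join(words[3:]) if len(words) > 3 else ""
    return AnimCommand(actor=actor, action=action, target=target, duration=2.0)

if __name__ == "__main__":
    for line in ["HERO walks to the boardwalk", "HERO smokes by the hotel"]:
        print(parse_line(line))

The problem described above shows up immediately in anything shaped like this: nothing in that structure knows how long a leg is or where "the boardwalk" actually sits in the scene, which is why the spatial-awareness work matters so much.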

Edit: Also, the rig is bad, and very limited. Not only is it hard to use (it was never designed for anything beyond rough posing of background figures), the AI-to-animation coding language also puts demands on it in terms of spatial language. We're still not sure how we will make the AI aware of physical directions, surroundings and the like, but we KNOW it will require a completely different rig...

Edited by Embassy of Time

