Creating a "Music Video Game"?

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

5 replies to this topic

#1   Members   -  Reputation: 113


Posted 09 March 2013 - 07:31 PM

I'm primarily a composer looking to experiment with the development and creation of interactive musical games.  I am using the word "game" liberally here.  I am interested in environments where input by the "player" in a visual/physical space generates or otherwise directly influences the sounds that are heard.

I would probably want to start out with something simple like the Eno/Chilvers collaborations.  I am more interested, however, in direct and perhaps more predictable control over the sounds by the player than is offered by those ambient experiments, where the music and sound are more abstractly or metaphorically related than anything else.  I do have a few more ambitious ideas I would be interested in pursuing if I am able to accomplish simpler tasks first.

Basically, I am just in need of some direction, as I don't really know where to begin.  I have no background in programming, but I am willing to learn.  Max/MSP? Pure Data? A non-visual language? Or should I start with a game engine and work backwards to get the sounds in? Or perhaps I underestimate the difficulty of such a project and should just forget about it? As is probably evident from this post, I am quite clueless.  Any advice would be greatly appreciated!

#2   Crossbones+   -  Reputation: 808


Posted 09 March 2013 - 07:50 PM

I was working on something similar to this: an attempt to manipulate audio based on input from a webcam or camera.


We used C++, SDL, OpenCV and SDL_sound.

Interestingly, capturing video frames with OpenCV, and even basic analysis of the video (like brightness), was quite easy to do. The tricky part was reliably layering many audio tracks without any delays or glitches. SDL_mixer was completely unreliable and caused delays because of its internal threading. SDL_sound required more work, and we never managed to make it glitch-free before our interest faded :)
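The layering problem described above really comes down to sample-accurate mixing: every active track must contribute a sample to every output frame, and any scheduling slip is audible as a glitch. A minimal sketch of just the mixing step (in Python for illustration only; the posters used C++ with SDL, and real code would run inside an audio callback):

```python
def mix_tracks(tracks, num_frames):
    """Sum several tracks sample-by-sample into one output buffer.

    tracks: list of lists of float samples in [-1.0, 1.0].
    Shorter tracks are treated as silent once exhausted; the sum is
    hard-clipped so stacking many loud tracks cannot leave the range.
    """
    out = []
    for i in range(num_frames):
        s = sum(t[i] for t in tracks if i < len(t))
        out.append(max(-1.0, min(1.0, s)))  # hard clip to [-1, 1]
    return out

# Two tracks mixed: overlapping samples sum, the longer tail passes through.
print(mix_tracks([[0.5, 0.5], [0.25, -0.25, 0.1]], 3))  # [0.75, 0.25, 0.1]
```

In a real engine this loop runs inside the audio callback with fixed-size buffers, which is exactly where threading problems like the SDL_mixer delays mentioned above come from.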

If I ever do this again, I would try another, perhaps more reliable audio library, like OpenAL. However, my experience ends there.


Based on your question, my suggestion would be to first get a clearer idea of how your game (or interactive experience) should work. Maybe you won't need to write it from scratch like we tried to do :)

#3   Members   -  Reputation: 1998


Posted 10 March 2013 - 09:07 AM

Since you have never programmed before...

I would NOT recommend Pure Data... there are a lot of nonsensical aspects to it; it is very confusing, buggy, etc.

Visual "languages" have a way of pulling you in, until you realize that learning code would have been easier.

I would absolutely recommend Processing. It is specifically designed for artistic applications like you describe and is very simple to learn.

While Processing has its limitations compared to C++ or whatever, you will find that you can achieve your "dream project" without having to spend too much time learning how everything works, which would have been an issue if you chose to use C++, even with SDL or similar libraries.


If you go ahead with Processing, I found this website that discusses using Processing for audio applications.

Examples of Audio Processing in "Processing"

Edited by minibutmany, 10 March 2013 - 09:12 AM.

Stay gold, Pony Boy.

#4   Members   -  Reputation: 113


Posted 10 March 2013 - 11:35 PM

Thanks for the replies.  Processing looks interesting, but based on what I've read, it seems that most musicians prefer Max/MSP's audio engine.  Would your criticisms of PD also apply to Max? What if I were to do the art/animation in one program (I'm thinking Blender if I wanted to go 3D) and the sounds/music in another? How would I go about creating the "rules" by which the visual affects the aural and vice versa? Or would learning to program in any of these languages provide easy ways of doing this?

#5   Crossbones+   -  Reputation: 808


Posted 12 March 2013 - 03:44 PM

Your best bet for creating a game with max and then distributing it would be this: http://cycling74.com/products/gen/codeexport/.


The exported code assumes familiarity with development environments.


However, this is about as advanced as you would need to get even if you tried to do the same yourself in C++. Plus, a blob of auto-generated code which has to be plugged into your audio output can be very mind-bending.


That is my quick analysis of the Max option.


How would I go about creating the "rules" by which the visual affects the aural and vice versa?


By inventing the system yourself. That's what programmers do. However, there are various tools and libraries, like the aforementioned Processing, which help with many things. Some libraries are too simple: they basically have a play("file.mp3") function and it works! Your task, however, might need access to the raw audio data, or precise layering of tracks. I am sure there are tools for that. But how many tracks at the same time? What do you want to do with the data you read from the file? That is for you to decide and build.
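Concretely, each of those "rules" is just a function from the current visual state to an audio-parameter value. A hypothetical sketch (Python for illustration; every name and constant here is made up, not from any library):

```python
# Each "rule" maps a visual measurement to one audio parameter.
# The state dict and the specific mappings are illustrative assumptions.
rules = {
    "cutoff_hz": lambda state: 200.0 + state["angle_deg"] * 50.0,
    "gain":      lambda state: state["brightness"],      # 0.0 .. 1.0
    "pan":       lambda state: state["x"] * 2.0 - 1.0,   # left .. right
}

def apply_rules(state):
    """Evaluate every rule against the current visual state."""
    return {name: rule(state) for name, rule in rules.items()}

# A square line rotated 20 degrees, mid brightness, right of center:
print(apply_rules({"angle_deg": 20.0, "brightness": 0.5, "x": 0.75}))
# {'cutoff_hz': 1200.0, 'gain': 0.5, 'pan': 0.5}
```

Every environment mentioned in this thread (Processing, Max, plain C++) ultimately expresses the same idea; only the syntax differs.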


Or would learning to program in any of these languages provide easy ways of doing this?


I think that in this case you can't really do it without programming knowledge. And you are right: the programming is mainly for connecting systems together and defining the logic for how everything runs. You may also need 3D models made in tools like Blender, and maybe audio samples prepared with some kind of DAW.


I think the gamedev community can give you more precise direction once you have a more precise thing you want to accomplish.

#6   Members   -  Reputation: 113


Posted 13 March 2013 - 12:34 AM

A very stripped-down example of what I might want to do:


So let's say I have a multi-track audio sequence ready to go.  The player opens the game, sees a square, and the audio sequence begins to play.  I want the player to interact with the square, which will modulate various parameters (filters, comb filters, feedback, distortion, pan, gain, etc.) in the ways that I define.  Ideally, these modulations would occur pre-render (so they are actually modulating parameters inside a virtual instrument sequenced in MIDI, as opposed to affecting an already-rendered audio file), but my understanding is that support for such a system (which borders on procedural audio) is very limited.  So if a player changes the square by moving one of its lines by 20 degrees, a low-pass filter will open, for example.

I think I should start out this simply (though this is probably not easy to implement), but my concern is scalability.  I may be able to do dinky little projects such as this in "Language A," but if I want to create a complete 3D environment (which is my ultimate goal), I may end up having to learn a totally unrelated "Language B," which would not be efficient in the long run.  I'm not as concerned with 3D modeling and such, as I have the tools to get started on that.  I just don't know how to connect the physical world to the musical world.  Ideally, I would want the player to also influence the actual notes being played, but this is more difficult from both a programming and a designing/composing standpoint.
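The square example above, applied post-render to audio samples, can be sketched end to end: map the line's rotation to a cutoff frequency, then run a simple one-pole low-pass filter at that cutoff. This is an illustrative Python sketch under assumed ranges (0-90 degrees mapping to 200-8000 Hz), not anyone's actual implementation, and it operates on rendered samples rather than the pre-render MIDI modulation the post hopes for:

```python
import math

def angle_to_cutoff(angle_deg, min_hz=200.0, max_hz=8000.0):
    """Map a line rotation of 0-90 degrees linearly onto a cutoff
    frequency: more rotation opens the filter further. Ranges assumed."""
    t = max(0.0, min(1.0, angle_deg / 90.0))  # clamp to [0, 1]
    return min_hz + t * (max_hz - min_hz)

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100.0):
    """Textbook one-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]),
    where a is derived from the cutoff frequency."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

# Player rotates a line 20 degrees; filter a burst of full-scale samples.
cutoff = angle_to_cutoff(20.0)
filtered = one_pole_lowpass([1.0] * 100, cutoff)
```

The same two-step shape (visual measurement to parameter, parameter to DSP) scales from this toy square up to a full 3D environment; what changes between "Language A" and "Language B" is mostly which engine hosts the loop.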

Edited by goldenmommy, 13 March 2013 - 12:38 AM.
