
Antialias

ANN question.


I had an idea for a web-based experiment with ANNs this week, which I'd like to try out. Basically, I'm trying to breed neural nets that produce simple musical tunes. What I have is a basic multi-layer perceptron with the twist that some of the output neurons are wired back to the input neurons (thereby giving the network some basic memory). The neurons process their input using the sigmoid function. After each played note, the network is updated and a part of the output is used to determine the next note to play. When viewing the project, the user will download a set of weights for the network from a database and rate the musical output. After a while the sets in the database will be ranked and the fittest of them will be combined by a genetic algorithm.

I don't know if this will work or is a good idea, and I'm not particularly looking for feedback on the nature of the experiment. It's just an experiment and it won't take much time to implement. However, after implementing the ANN part, something happened that I find odd, and perhaps one of the more experienced people on this board can clarify whether it was due to a coding error or whether it's a perfectly normal result (in which case I'll have to drop the project).

For testing, I set up the complete ANN with random weights and random initial output, then let it run for a couple of turns. No matter whether I have ten or a hundred neurons per layer, the output of every neuron in the output layer quickly converges to a fixed number. I expected something much more chaotic. Did I do something wrong, or was I naive in my expectations? Is it due to my weights, which I set randomly between -1 and 1, or is there some other tweaking of parameters I should try? Any hints would be appreciated.
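For concreteness, a minimal sketch of the kind of setup described above, in Python with numpy. The layer sizes and the choice to feed the whole output vector back in as the next input are assumptions for illustration, not the actual project code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RecurrentMLP:
    """Perceptron whose previous output vector is fed back in as the
    next input, giving the net a crude form of memory between notes."""

    def __init__(self, n_neurons, seed=0):
        rng = np.random.default_rng(seed)
        # Weights drawn uniformly from [-1, 1], as in the post.
        self.w_in_hidden = rng.uniform(-1.0, 1.0, (n_neurons, n_neurons))
        self.w_hidden_out = rng.uniform(-1.0, 1.0, (n_neurons, n_neurons))
        # Random initial output, also as described.
        self.output = rng.uniform(0.0, 1.0, n_neurons)

    def step(self):
        hidden = sigmoid(self.w_in_hidden @ self.output)   # last output is the input
        self.output = sigmoid(self.w_hidden_out @ hidden)
        return self.output

net = RecurrentMLP(n_neurons=10)
for turn in range(10):
    print(turn, np.round(net.step(), 3))   # each value quickly stops changing
```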

What are you using for input?
If there is no input apart from the output plugged straight back in, then it sounds like your experiment has fallen into an attractor: the numbers bounce around for a while, but the effect of the output on the net (via the inputs) eventually settles into producing an identical output each turn.

I'm guessing you have no inputs apart from the output wired straight back in.
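A toy illustration of the attractor, reduced to the simplest possible case (the weight and starting value below are arbitrary): a single sigmoid neuron whose output is fed straight back in as its only input settles onto a fixed point within a few turns.

```python
import math

# One sigmoid neuron whose output is fed straight back in as its only input.
w = 0.7           # arbitrary weight in [-1, 1]
x = 0.12          # arbitrary initial output
for turn in range(15):
    x = 1.0 / (1.0 + math.exp(-w * x))   # feed the output back in
    print(turn, round(x, 6))
# The printed values stop changing after a handful of turns:
# x has reached the fixed point x* = sigmoid(w * x*).
```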

Mike

You're right - I don't have any input except for the output of the network (I wouldn't know what else to take). I'm still surprised that the system reaches stability so quickly. At 100 neurons per layer it never takes more than five or six turns. And the numbers hardly bounce to begin with - the attractors they hit are only a tiny fraction away from their initial states.

[edited by - Antialias on January 18, 2004 2:51:23 PM]

I thought about letting the network "hear" slices of the waveforms it produces, but that seemed a bit too random for me, as I wasn't going to update the network with each sample of the wave it played but only with each note.
But perhaps feeding it some sine and cosine waves independent of the musical output, some values oscillating in a basic manner, might actually get me somewhere. Thanks for the hint, I'll consider it. Any other suggestions?
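A rough sketch of what feeding in oscillating values could look like; the sizes, frequencies, and names here are guesses for illustration only. The fed-back outputs are concatenated with two slowly oscillating external values before each update, so the net keeps being perturbed instead of settling:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
n_out, n_hidden, n_osc = 8, 16, 2        # sizes are arbitrary
n_in = n_out + n_osc                     # fed-back outputs + two oscillators

w_in_hidden = rng.uniform(-1.0, 1.0, (n_hidden, n_in))
w_hidden_out = rng.uniform(-1.0, 1.0, (n_out, n_hidden))
out = rng.uniform(0.0, 1.0, n_out)

for turn in range(20):
    osc = np.array([np.sin(0.3 * turn), np.cos(0.2 * turn)])
    x = np.concatenate([out, osc])       # external drive keeps perturbing the net
    out = sigmoid(w_hidden_out @ sigmoid(w_in_hidden @ x))
    print(turn, np.round(out[:4], 3))    # outputs keep moving instead of freezing
```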

Perhaps have some activation-based weighting going on. If a neuron fires a lot, give all of its outputs a multiplicative effect that reduces firing; if it fires very little, give it a multiplicative effect that increases its firing. There are mechanisms that supposedly do this in the brain (AFAIR), and I've certainly seen NN experiments that use such a mechanism.

What "firing a lot" and "firing very little" mean is up to the context of your experiment.
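One possible sketch of such a mechanism, assuming a single recurrent layer and a simple running-average estimate of each neuron's activity; the target level, adaptation rate, and sizes are made-up illustration values, not a definitive implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
n = 10
w = rng.uniform(-1.0, 1.0, (n, n))   # one recurrent layer, for brevity
out = rng.uniform(0.0, 1.0, n)
avg = out.copy()                     # running-average activation per neuron
target, rate = 0.5, 0.05             # desired firing level, adaptation speed

for turn in range(200):
    out = sigmoid(w @ out)
    avg = 0.9 * avg + 0.1 * out
    # Neurons firing above the target get their outgoing weights damped,
    # neurons firing below it get them boosted.
    scale = 1.0 + rate * (target - avg)
    w = w * scale[np.newaxis, :]     # column j holds neuron j's outgoing weights

print(np.round(out, 3))
```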

quote:
Original post by Antialias
I had an idea for a web-based experiment with ANNs this week, which I'd like to try out. Basically, I'm trying to breed neural nets that produce simple musical tunes.


Stepping back from the problem for a moment, I wonder if it wouldn't be more straightforward to either evolve the music directly, or focus instead on a syntactic approach?
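Evolving the music directly could look roughly like the following sketch, where the genome is simply the note sequence and the user ratings stand in for the fitness function; rate_tune and all constants here are hypothetical placeholders, not anything from the actual project:

```python
import random

NOTES = list(range(12))   # one octave of semitones
TUNE_LEN = 16

def random_tune():
    return [random.choice(NOTES) for _ in range(TUNE_LEN)]

def crossover(a, b):
    cut = random.randrange(1, TUNE_LEN)
    return a[:cut] + b[cut:]

def mutate(tune, p=0.05):
    return [random.choice(NOTES) if random.random() < p else n for n in tune]

def rate_tune(tune):
    # Placeholder fitness: in the web experiment this would be the averaged
    # user rating pulled from the database.
    return random.random()

population = [random_tune() for _ in range(20)]
for generation in range(10):
    parents = sorted(population, key=rate_tune, reverse=True)[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
```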

-Predictor
http://will.dwinnell.com





[edited by - Predictor on January 19, 2004 6:31:58 AM]
