How to solve this?


Hi there, here's my problem. On an orthogonal system, where a vehicle can only go up, down, left, or right, let's assign each of those possibilities to an output neuron and set up a NNet to connect to those outputs:

Some Inputs --> NN --> 4 Outputs

Now, when a certain output fires, the vehicle will go in that direction. My problem is: how do I make the NN fire only ONE of the outputs? What system is commonly used for mutually exclusive decisions, where one decision can't be taken at the same time as the others? Thanks for any tips on this.

[Hugo Ferreira][Positronic Dreams][Colibri 3D Engine][Entropy HL2 MOD][Yann L.][Enginuity]
The most irrefutable evidence that there is intelligent life in the Universe is that they haven't contacted us!

The obvious thing to do is to select the output with the highest value. There are ANN architectures where the neurons in the last layer inhibit each other, but I don't have any experience with that.
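For concreteness, here is a minimal sketch of the "highest value" idea, assuming the output layer produces graded activations (the ACTIONS list and the select_action name are purely illustrative, not anyone's actual code):

```python
import numpy as np

ACTIONS = ["up", "down", "left", "right"]

def select_action(outputs):
    """Pick exactly one action: the one whose output neuron is most active."""
    outputs = np.asarray(outputs, dtype=float)
    return ACTIONS[int(np.argmax(outputs))]

# Example: the 'left' neuron (index 2) is the strongest, so 'left' wins.
print(select_action([0.2, 0.1, 0.9, 0.4]))  # -> left
```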

Output with the highest value?

In my designs, outputs only take the form of 0 or 1, fire or not fire, so there isn't really a "higher" value I can take.

Neurons that inhibit each other? This is interesting; can you provide any links to this?

Thanks.


[Hugo Ferreira][Positronic Dreams][Colibri 3D Engine][Entropy HL2 MOD][Yann L.][Enginuity]
The most irrefutable evidence that there is intelligent life in the Universe is that they haven't contacted us!

quote:
Original post by pentium3id
Neurons that inhibit each other? This is interesting; can you provide any links to this?

A connection between two artificial neurons is inhibitory if the connection weight is negative. In other words, if the parent neuron fires, its effect is to decrease the likelihood of the child neuron firing, thus the 'inhibitory' label.
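As a toy illustration of that (the weights and the 0.5 threshold below are made-up numbers, not from any real architecture):

```python
def step(x, threshold=0.5):
    """Simple binary activation: fire (1) if the weighted input reaches the threshold."""
    return 1 if x >= threshold else 0

excitatory_input = 1   # some other parent neuron that is firing
inhibitory_parent = 1  # the inhibiting parent neuron is firing

# +0.8 is an excitatory weight, -0.6 is an inhibitory (negative) weight.
print(step(0.8 * excitatory_input + (-0.6) * inhibitory_parent))  # 0: the child is suppressed
print(step(0.8 * excitatory_input + (-0.6) * 0))                  # 1: parent silent, child fires
```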

Timkin

quote:
Original post by pentium3id
Output with the highest value?

In my designs, outputs only take the form of 0 or 1, fire or not fire, so there isn't really a "higher" value I can take.


Well, you're going to have to do one of 3 things (roughly sketched below):

1. Give up the binary outputs for graded ones, which allows selection of the strongest.

2. Force the neural network to generate a single '1' output (lateral inhibition is one option).

3. Choose an output arbitrarily.
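Rough sketches of those three options, for illustration only (the function names and the 0.5 firing threshold are assumptions, not part of the post above):

```python
import random
import numpy as np

def graded_selection(outputs):
    """Option 1: graded outputs; return the index of the strongest one."""
    return int(np.argmax(outputs))

def winner_take_all(outputs):
    """Option 2: force a single '1' output. This is a crude post-hoc
    winner-take-all; true lateral inhibition would live inside the network."""
    one_hot = np.zeros(len(outputs), dtype=int)
    one_hot[int(np.argmax(outputs))] = 1
    return one_hot

def arbitrary_choice(outputs, threshold=0.5):
    """Option 3: if several binary outputs fire, pick one of them at random."""
    fired = [i for i, o in enumerate(outputs) if o >= threshold]
    return random.choice(fired) if fired else None
```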

-Predictor
http://will.dwinnell.com

I'd model the problem differently, ruling out mutually exclusive options by design: use turn left/right and move forward/back.
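One way to read that encoding, purely as an illustration (the two signed outputs and the dead-zone value are assumptions, not Alex's code):

```python
def decode(turn_output, move_output, dead_zone=0.1):
    """Two signed outputs in [-1, 1] instead of four mutually exclusive ones,
    so conflicting commands are impossible by construction."""
    turn = "left" if turn_output < -dead_zone else "right" if turn_output > dead_zone else "straight"
    move = "back" if move_output < -dead_zone else "forward" if move_output > dead_zone else "stop"
    return turn, move

print(decode(-0.7, 0.3))  # -> ('left', 'forward')
```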

Alex



AiGameDev.com

I'd agree with Alex on this one... why do you feel that an ANN is an appropriate tool for this decision task? What is it about the domain that precludes the many other reliable and less semantically challenged solution methods?

Timkin

Timkin, because the work I'm doing is about a new NN design.
The neurons themselves are different, both in implementation and mathematically, and the way the NN evolves is also a bit unorthodox.

The left/right approach, in which the NN always takes a decision even when it doesn't fire, may be the best solution for the current problem, but it isn't correct (imho) purely from a design standpoint.

An NN shouldn't be forced into making a decision every single time, and that's my point.

In nature, how are "conflicts of opinion" solved?


[Hugo Ferreira][Positronic Dreams][Colibri 3D Engine][Entropy HL2 MOD][Yann L.][Enginuity]
The most irrefutable evidence that there is intelligent life in the Universe is that they haven't contacted us!

quote:
Original post by Predictor
What do you mean when you write "semantically challenged"?

Simply that it is generally very difficult to read an (artificial) neural network and immediately identify how the parameters of the network (which are a kind of statement about the decision space of the network) relate to the inputs or outputs at anything other than a mathematical level. In other words, there are no clear semantics for logical statements relating inputs to outputs. Does that make sense?

quote:
Original post by pentium3id
Timkin, because the work I'm doing is about a new NN design.



Fair enough then...

quote:
Original post by pentium3id
An NN shouldn't be forced into making a decision every single time, and that's my point.



It sounds to me like you need to move to a continuous/graded output space and make decisions based on thresholding this output space. You might want to read the following papers... they aren't directly pertinent to what you have described above, but reading them might give you some ideas...

Kwok, T. & Smith, K. 'Chaotic Dynamics of the Self-Organising Neural Network with Weight Normalisation for Combinatorial Optimisation'. Proceedings of the 3rd Int. NAISO Symposium on Eng. of Intelligent Systems (EIS'2002), Workshop on Chaos & Computation, Spain, 2002.

Kwok, T. & Smith, K. 'A Self-Organising Neural Network with Intermittent Switching Dynamics for Combinatorial Optimisation'. Submitted to the 3rd Int. Conf. on Hybrid Intelligent Systems (HIS'03), Melbourne, 2003.

I'm not sure if the latter was accepted or not... if not, check out Terrence's or Kate's web page at Monash Uni... there might be a copy available there as a tech report... otherwise, just read the first one!

quote:
Original post by pentium3id
In nature, how are "conflicts of opinion" solved?



Normally with swords/guns/stern words, or, in the case of my baby daughter, a stubborn refusal to change one's opinion (she loses, though)!


Good luck,

Timkin

quote:
Original post by pentium3id
An NN shouldn't be forced into making a decision every single time, and that's my point.



I don't see how the model I suggested prevents that.

Besides, looking for biological inspiration for conflict resolution probably won't get you very far, especially since neural networks are more of a mathematical model than a neuro-biological one...

Alex



AiGameDev.com

Kinda depends on the neural net, doesn't it, Alex?

Continuous-time neurons are fairly intractable purely via mathematics.
Spike-time-dependent neural nets are quite heavily influenced by biological modelling.

(Not that you didn't know this, I'm just in a pedantic mood today.)

Mike

quote:
Original post by pentium3id
An NN shouldn't be forced into making a decision every single time, and that's my point.



It sounds like you're adding "do nothing" to the list of legal responses for this system. This makes the case for a continuous-output solution stronger. Let's say that the (continuous-output) neural network yields 4 outputs with range 0.0 to 1.0, one for each direction. As has been suggested, the output exhibiting the maximum strength can be selected. Further, other checks can be performed on the set of outputs. For instance, if none of the outputs exceeds a minimum threshold, it might be concluded that there is not strong enough evidence to move in any particular direction.
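A minimal sketch of that check, assuming the 0.0-1.0 outputs described above (the 0.3 threshold is an arbitrary example value, not something prescribed in the post):

```python
import numpy as np

ACTIONS = ["up", "down", "left", "right"]

def choose(outputs, min_strength=0.3):
    """Take the strongest direction, or do nothing if no output is convincing enough."""
    outputs = np.asarray(outputs, dtype=float)
    best = int(np.argmax(outputs))
    if outputs[best] < min_strength:
        return None  # not enough evidence: take no action
    return ACTIONS[best]

print(choose([0.05, 0.10, 0.08, 0.02]))  # -> None ("do nothing")
print(choose([0.05, 0.80, 0.08, 0.02]))  # -> down
```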

-Predictor
http://will.dwinnell.com

quote:
Predictor wrote
What do you mean when you write "semantically challenged"?



quote:
Timkin answered
Simply that it is generally very difficult to read an (artificial) neural network and immediately identify how the parameters of the network (which are a kind of statement about the decision space of the network) relate to the inputs or outputs at other than a mathematical level. In other words, there are no clear semantics for logical statements relating inputs to outputs. Does that make sense?




Yes, thanks.

-Predictor
http://will.dwinnell.com


quote:
Original post by MikeD
Kinda depends on the neural net, doesn't it, Alex?

Heh, true, some NNs are more accurate than others, but they're all still a long way off neurological accuracy. In the meantime, using mathematical hacks is essential to get useful results!

quote:
Original post by alexjc
Heh, true, some NNs are more accurate than others, but they're all still a long way off neurological accuracy. In the meantime, using mathematical hacks is essential to get useful results!


True, to get a completely accurate picture of how neurons work you'd need to accurately model them all the way down and accept the fact that, whilst we break the brain down into neurons, axons, dendrites, ganglion cells and the like for simplicity's sake, the way it truly works as a dynamical system can only be completely explained by the interactions below the biological level, from the point of view of chemistry and particle physics.

Still, I think you can get useful results from CTRNNs and STDNNs in any problem domain. Sure, the maths isn't tractable for learning solutions such as backprop, and you'd probably need to use a hybrid system to robustly produce any set of multiple behaviours, but I'd rather use CTRNNs for the majority of cases where I'd consider using NNs at all (which is the far minority, in fact 0 in my current project).

Mike

