How to solve this?

16 comments, last by Prozak 20 years, 3 months ago
quote:Original post by Predictor
What do you mean when you write "semantically challenged"?


Simply that it is generally very difficult to read an (artificial) neural network and immediately identify how the parameters of the network (which are a kind of statement about the decision space of the network) relate to the inputs or outputs at anything other than a mathematical level. In other words, there are no clear semantics for logical statements relating inputs to outputs. Does that make sense?
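
To make that concrete, here is a tiny hand-built sketch in Python (the weights are illustrative, picked by hand, not taken from any real trained network): the network below computes XOR correctly, yet nothing in the raw parameters reads as "the output fires when exactly one input fires". The semantics are buried in the arithmetic.

import math

# hidden layer: two units, each with two input weights and a bias
W_HIDDEN = [[20.0, 20.0], [-20.0, -20.0]]
B_HIDDEN = [-10.0, 30.0]
# output unit: two weights and a bias
W_OUT = [20.0, 20.0]
B_OUT = -30.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(a, b):
    # feed the two inputs through the hidden layer, then the output unit
    hidden = [sigmoid(w[0] * a + w[1] * b + bias)
              for w, bias in zip(W_HIDDEN, B_HIDDEN)]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)) + B_OUT)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))  # prints 0, 1, 1, 0 -- i.e. XOR

You can stare at W_HIDDEN and W_OUT all day without recovering the logical rule; that is the sense in which the network is semantically challenged.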

quote:Original post by pentium3id
Timkin, because the work I'm doing is about a new NN design.


Fair enough then...

quote:Original post by pentium3id
An NN shouldn't be forced into making a decision every single time, and that's my point.


It sounds to me like you need to move to a continuous/graded output space and make decisions based on a thresholding of this output space. You might want to read the following papers... they aren't directly pertinent to what you have described above, but reading them might give you some ideas...

Kwok, T. & Smith, K. 'Chaotic Dynamics of the Self-Organising Neural Network with Weight Normalisation for Combinatorial Optimisation'. Proceedings of the 3rd Int. NAISO Symposium on Eng. of Intelligent Systems (EIS'2002), Workshop on Chaos & Computation, Spain, 2002.

Kwok, T. & Smith, K. 'A Self-Organising Neural Network with Intermittent Switching Dynamics for Combinatorial Optimisation'. Submitted to the 3rd Int. Conf. on Hybrid Intelligent Systems (HIS'03), Melbourne, 2003.

I'm not sure if the latter was accepted or not... if not, check out Terrence's or Kate's web page at Monash Uni... there might be a copy available there as a tech report... otherwise, just read the first one!

quote:Original post by pentium3id
In nature, how are "conflicts of opinion" solved?


Normally with swords/guns/stern words or, in the case of my baby daughter, a stubborn refusal to change one's opinion (she loses though)!


Good luck,

Timkin
quote:Original post by pentium3id
An NN shouldn't be forced into making a decision every single time, and that's my point.


I don't see why the model I suggested wouldn't allow that.

Besides, looking for biological inspiration for conflict resolution probably won't get you very far, especially since neural networks are more of a mathematical model than a neuro-biological one...

Alex


AiGameDev.com


Thanks for the replies

[Hugo Ferreira][Positronic Dreams][Colibri 3D Engine][Entropy HL2 MOD][Yann L.][Enginuity]
The most irrefutable evidence that there is intelligent life in the Universe is that they haven't contacted us!

Kinda depends on the neural net, doesn't it, Alex?

Continuous-time neurons are fairly intractable purely via mathematics.
Spike-time-dependent neural nets are quite heavily influenced by biological modelling.

(not that you didn't know this, I'm just in a pedantic mood today)

Mike
quote:Original post by pentium3id
An NN shouldn't be forced into making a decision every single time, and that's my point.



It sounds like you're adding "do nothing" to the list of legal responses for this system. This makes the case for a continuous-output solution stronger. Let's say that the (continuous-output) neural network yields 4 outputs with range 0.0 to 1.0, one for each direction. As has been suggested, the output exhibiting the maximum strength can be selected. Further, though, other checks can be performed on the set of outputs. For instance, if none of the outputs exceeds a minimum threshold, it might be concluded that there is not strong enough evidence to move in any particular direction.
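
A minimal sketch of that selection rule in Python (the direction names and the 0.5 threshold are placeholders I've made up for illustration, not anything specified above):

DIRECTIONS = ["north", "south", "east", "west"]
MIN_CONFIDENCE = 0.5  # assumed threshold; tune for the actual network

def choose_action(outputs):
    # outputs: four floats in [0.0, 1.0], one per direction
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    if outputs[best] < MIN_CONFIDENCE:
        return None  # no output is strong enough: "do nothing"
    return DIRECTIONS[best]

print(choose_action([0.2, 0.1, 0.8, 0.3]))   # "east" clears the threshold
print(choose_action([0.2, 0.1, 0.3, 0.25]))  # None -> the net abstains

The point is simply that abstaining becomes one more legal response, decided after the network has produced its graded outputs.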

-Predictor
http://will.dwinnell.com





quote:Predictor wrote
What do you mean when you write "semantically challenged"?


quote:Timkin answered
Simply that it is generally very difficult to read an (artificial) neural network and immediately identify how the parameters of the network (which are a kind of statement about the decision space of the network) relate to the inputs or outputs at anything other than a mathematical level. In other words, there are no clear semantics for logical statements relating inputs to outputs. Does that make sense?



Yes, thanks.

-Predictor
http://will.dwinnell.com


quote:Original post by MikeD
Kinda depends on the neural net, doesn't it, Alex?


Heh, true, some NNs are more accurate than others, but they're all still a long way off neurological accuracy. In the meantime, using mathematical hacks is essential to get useful results!


quote:Original post by alexjc
Heh, true, some NNs are more accurate than others, but they're all still a long way off neurological accuracy. In the meantime, using mathematical hacks is essential to get useful results!


True, to get a completely accurate picture of how neurons work you'd need to model them accurately all the way down, and accept that, whilst we break the brain down into neurons, axons, dendrites, ganglial cells and the like for simplicity's sake, the way it truly works as a dynamical system can only be completely explained by the interactions below the biological level, from the point of view of chemistry and particle physics.

Still, I think you can get useful results from CTRNNs and STDNNs in any problem domain. Sure, the maths isn't tractable for learning solutions such as backprop, and you'd probably need to use a hybrid system to robustly produce any set of multiple behaviours, but I'd rather use CTRNNs for the majority of cases in which I'd consider using NNs at all (which is a small minority, in fact zero in my current project).

Mike

