Capacity of learning within an NN
The way you stop early is to have unseen validation data: you take the Sum Squared Error at the output on that set, so the net stops learning at a minimum. Techniques like momentum can, however, help the net avoid getting stuck in a local minimum.
So what do you do with the unseen test data, and how do you measure performance with it? Say you present this unseen data to the NN and it produces an error value... what do you do with this error value to evaluate whether your NN is good enough?
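A minimal sketch of the idea being discussed: train on one split, monitor the Sum Squared Error on a held-out validation split every epoch, and stop once validation SSE hasn't improved for a while. The network size, learning rate, momentum coefficient, and patience value below are illustrative assumptions, not anything stated in this thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression task: y = sin(x) plus noise (assumed example data).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.normal(size=(200, 1))
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]          # the "unseen" validation data

H = 16                                   # hidden units (assumed)
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def sse(X, y):
    """Sum Squared Error at the output, as mentioned in the post."""
    _, out = forward(X)
    return float(np.sum((out - y) ** 2))

lr, mom, patience = 0.01, 0.9, 20        # assumed hyperparameters
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
init_val = sse(X_val, y_val)             # validation SSE before any training
best_val, best_epoch, bad = init_val, 0, 0

for epoch in range(2000):
    # Backprop on the training split.
    h, out = forward(X_train)
    err = out - y_train                  # dSSE/dout (up to a factor of 2)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    gW1 = X_train.T @ dh; gb1 = dh.sum(0)

    # Momentum update: the velocity term smooths the steps and can carry
    # the weights through shallow local minima.
    for i, (p, g) in enumerate(zip((W1, b1, W2, b2), (gW1, gb1, gW2, gb2))):
        vel[i] = mom * vel[i] - lr * g / len(X_train)
        p += vel[i]

    # Early stopping: track the best validation SSE seen so far and stop
    # once it hasn't improved for `patience` epochs.
    v = sse(X_val, y_val)
    if v < best_val:
        best_val, best_epoch, bad = v, epoch, 0
    else:
        bad += 1
        if bad >= patience:
            break

print(f"stopped at epoch {epoch}, best val SSE {best_val:.3f} (epoch {best_epoch})")
```

To answer the "good enough" part: on its own the raw validation error number doesn't say much; it's typically compared against a threshold set by the application, or normalized (e.g. SSE per sample) and compared between candidate networks, with the net at the best-validation epoch kept.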
Thanks
DarkStar
UK
-------------------------------
Loves cross-posting because it works
Apologies for hijacking the thread (somewhat)
how would you characterize the difference between expanding an ANN's width (of the hidden layers) vs expanding the number of hidden layers?
I haven't been able to find a decent explanation of this...
"how would you characterize the difference between expanding an ANN's width (of the hidden layers) vs expanding the number of hidden layers?
I haven't been able to find a decent explanation of this..."
ftp://ftp.sas.com/pub/neural/FAQ3.html#A_hl
This topic is closed to new replies.