# Neural Networks - Training-question

This topic is 3462 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Hi! I'm trying to design a NN for forecasting time-series data. Every date has the same set of input data, and from this the network should predict the output, say, 5 days ahead. The data is stored in an XML file with the following structure:

```xml
<root>
  <row date='20080101'>
    <data name='Inputdata1' value='0,5' />
    <data name='Inputdata2' value='0,7' />
    <data name='Inputdata3' value='0,1' />
    ...
  </row>
  <row date='20080102'>
    <data name='Inputdata1' value='0,3' />
    <data name='Inputdata2' value='0,4' />
    <data name='Inputdata3' value='0,8' />
    ...
  </row>
  ...
</root>
```

I'm going to train the NN on a dataset that spans about a year. My question is this: should I train the NN on each date until it reaches an acceptable error, and only then move on to the next date, repeating the process there?

1. Input date 1
2. Train
3. Is the error acceptable?
4. a) Yes: move on to the next date
   b) No: adjust the NN's weights and go to 1

Or do I train the NN by going through each date once, moving on to the next date, and so on until I reach the end of the year, and then repeat the whole process until the error is acceptable?

1. Input date 1
2. Train
3. Adjust weights
4. Move on to the next date and go to step 2
5. At the end of the dataset (1 year), check whether the error is acceptable for all inputs. If not, go to step 1
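As a side note on getting the data in: the XML above can be loaded into plain Python lists with the standard library. This is only a sketch; the function name and file name are my own, and it assumes the values use a comma as the decimal separator, as in the sample.

```python
# Sketch: load the XML layout shown above into (dates, rows) lists.
# `load_rows` and the file name are illustrative, not from the thread.
import xml.etree.ElementTree as ET

def load_rows(path):
    root = ET.parse(path).getroot()
    dates, rows = [], []
    for row in root.findall("row"):
        dates.append(row.get("date"))
        # values use a comma as decimal separator ('0,5'), so convert first
        rows.append([float(d.get("value").replace(",", "."))
                     for d in row.findall("data")])
    return dates, rows
```

Each entry of `rows` is then one input vector for the network, aligned with the date in `dates`.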

##### Share on other sites
Definitely go through all the dates first, then check the error. Sweeping through the data entries a different number of times would effectively weight some entries more heavily than others.

When training a NN on previously known data (offline learning), you should group the training data into a matrix and do one full iteration over it per run, until you are satisfied with the result.

With y being your system's output, x the system's input, w the NN weights (a matrix if y is a vector), l your learning factor and f a function of the error, your system would look something like this:

y = x*w',  Δw = l*f(real_y - y)

By stacking your input and output data into matrices X and Y (such as X = [x(0) .. x(T)]) you get

Y = X*w',  ΔW = l*F(real_Y - Y)  where F(X) = [f(x(0)) .. f(x(T))]'

Correct me if I'm wrong; it's been a while since I took the NN course.

##### Share on other sites
Oh, and since your input is time-shifted I should probably clarify: your system would be something like

y(t) = x(t-5)*w0 + x(t-6)*w1 + .. + x(t-5-n)*wn

with n being the NN degree, i.e. the number of nodes/weights. With T as your dataset length (in your case 365), your X matrix would be:

```
[ x(t-5)    x(t-4)   ..  x(t-5+T)   ]
[ x(t-6)    x(t-5)   ..  x(t-6+T)   ]
[   ..                              ]
[ x(t-5-n)           ..  x(t-5-n+T) ]
```
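Building that lagged matrix in code amounts to slicing the same series at successive offsets. Below is a sketch under my own conventions: `lag_matrix`, the parameter names, and the indexing scheme are assumptions; the horizon of 5 matches the "5 days ahead" target in the question.

```python
# Sketch: build a lagged design matrix from a 1-D series.
# Row k holds the series delayed by one more step than row k-1, and the
# target y is the series `horizon` steps after the newest input in each column.
import numpy as np

def lag_matrix(x, n_lags, horizon=5):
    # x: 1-D array; returns X of shape (n_lags, T) and targets y of shape (T,)
    T = len(x) - n_lags - horizon + 1
    X = np.array([x[n_lags - 1 - k : n_lags - 1 - k + T]
                  for k in range(n_lags)])
    y = x[n_lags - 1 + horizon:]   # value `horizon` steps after each column
    return X, y
```

Each column of `X` is then one training input, paired with the corresponding entry of `y`.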

##### Share on other sites
I'm not sure I understand, since I'm not that math-oriented. Could you explain it in a more pseudo-code-like way?

##### Share on other sites
What tarazu is saying is: use your second method.

Run through all the dates, recording the sum of the errors/accuracy along the way. After each date the weights will be changed (assuming you're using gradient descent).

So you keep running all the dates through the NN until the sum of errors/accuracy is good enough.

pseudo code is something like this (assuming gradient descent learning):

```
while ( accuracy < required accuracy OR error > lowest error required )  // pick one of the two, or use both
{
    for ( every date in year )
    {
        run date's data through NN
        record error/accuracy
        update weights
    }
}
```
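That loop can be made runnable for the simplest possible case, a single linear output unit trained by per-sample gradient descent on squared error. Everything here is a sketch: the function name, stopping threshold, and learning rate are illustrative assumptions.

```python
# Sketch: epoch-style training matching the pseudo code above.
# One weight update per date, error checked at the end of each full sweep.
import numpy as np

def train(X, y, lr=0.01, max_error=1e-4, max_epochs=10000):
    # X: (num_dates, num_inputs), y: (num_dates,) targets; shapes assumed
    w = np.zeros(X.shape[1])
    total_error = float("inf")
    for _ in range(max_epochs):
        total_error = 0.0
        for xi, yi in zip(X, y):        # one pass over every date
            err = yi - xi @ w           # run the date's data through the net
            total_error += err * err    # record the error along the way
            w += lr * err * xi          # update weights after each date
        if total_error <= max_error:    # check at the end of each sweep
            break
    return w, total_error
```

A real forecasting network would have hidden layers and backpropagation, but the control flow, sweep over all dates, then test the accumulated error, is the same.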

no idea why tarazu is bringing up matrices and so on.
