Quote:Original post by Kylotan
It's sane, but not necessarily optimal. There are many other ways of modelling and approximating functions, many of which will be more suitable than neural networks, especially if you know a little about the characteristics of the function.
That's the thing with my particular scenario: I don't know anything about the characteristics of the function, or whether there even is one. What we have is a set of data where a bunch of inputs are suspected to have some relation to a bunch of outputs. What we've been trying to do is model this relation by training various NNs on our sample data (with different subsets of the inputs/outputs, as requested by the field experts) and then running some sensitivity tests on this 'model' to get a very rough idea of how the inputs might affect the outputs, and to gather some evidence for these suspected relations.
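For illustration, here's a minimal sketch of the kind of one-at-a-time sensitivity test I mean, using scikit-learn's MLPRegressor as the network. The data, layer sizes, and perturbation step are all made-up stand-ins, not our actual setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in data: 500 samples, 6 inputs, 3 outputs with a
# nonlinear, partly interacting relation (our real data looks nothing like this).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
Y = np.column_stack([
    np.sin(X[:, 0]) + X[:, 1] * X[:, 2],  # nonlinear, with interacting inputs
    X[:, 3] ** 2,
    X[:, 4],
])

# Train a small feed-forward net on the samples.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                     random_state=0).fit(X, Y)

# One-at-a-time sensitivity: nudge each input around the data mean and
# record how much each output of the trained model moves.
base = X.mean(axis=0)
eps = 0.1
sensitivity = np.zeros((X.shape[1], Y.shape[1]))
for i in range(X.shape[1]):
    lo, hi = base.copy(), base.copy()
    lo[i] -= eps
    hi[i] += eps
    delta = model.predict(hi[None, :])[0] - model.predict(lo[None, :])[0]
    sensitivity[i] = delta / (2 * eps)  # finite-difference slope estimate

print(sensitivity)  # rows: inputs, columns: outputs
```

Note this only measures local sensitivity around one operating point; since some of our inputs interact, in practice you'd repeat the sweep from several base points rather than just the data mean.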
The function/relation is not expected to be linear or even continuous, and we're looking at roughly 10-80 inputs and 1-10 outputs, with some of the inputs being mutually dependent. I'm thoroughly aware of the downsides of this approach and have often wondered about alternatives, but the uncertainty and variation in our scenario seem to be a perfect match for the vagueness of NNs.
I'm genuinely curious what approach you'd recommend in a scenario like this, other than to axe the project and run! [smile]