Quote:Original post by flangazor
Quote:Original post by CoffeeMug
If I'm in the business of writing trading systems I'm only concerned with writing trading systems. I really can't afford to rewrite technologies that already exist and do things much better than I could with my resources.
Considering the rubbishness of most things, yes you can. If you agree with what Peyton Jones says in this paper, a lot of code is based on an ad-hoc misunderstanding of a domain. If you enter a domain with knowledge deep enough to represent it as an algebra or calculus, then you will end up with something very powerful and concise. A working example of this is darcs.
a.) Most programmers/software engineers/software architects are weak at combinatory logic.
b.) Most programmers have only ad-hoc domain knowledge, or are domain experts with only ad-hoc knowledge of program design. Being good at programming and having deep domain knowledge can make you extremely powerful.
John has expert knowledge of both physics and computation. I'm interested in his opinion here.
You're probably thinking of domains most programmers are interested in and, yes, have a lot of work that isn't worth replicating: graphics, audio, network stacks, etc.
I think this is very interesting. I did my PhD with a friend who does the opposite to me in this respect.
I have always been keen to learn the foundations of various ways of solving problems. Consequently, because I try to reinvent everything, my code makes minimal use of others', typically limited to only those libraries that I consider to be excellent. Currently, that means OpenGL. This approach can get out of control, though. I once spent over a month working on a C++ library for handling atomic models that used template partial specialisation. My desire to learn culminated in my biting off more than I could chew, and the library never worked in its intended form (mainly due to bugs in g++).
In contrast, my friend had a tendency to save the time that I would have spent learning new approaches and, instead, spent his time searching for the best existing implementations, opting for reuse rather than reinvention.
Individually, both of these approaches are limiting. However, working together we could get an enormous amount done. For example, when he wanted a quick 3D visualisation of a 10^4-atom model of amorphous silicon, I could knock it up in a few minutes. When I wanted to order the atoms in a model by their position along a space-filling Hilbert curve and couldn't figure out how, he knew of the best implementation on the internet.
So both approaches, reinventing and reusing, have their merits.
As for programming vs domain knowledge, you are spot on in the context of computational physics, IMHO. When I was an undergrad, 80% of my year in physics voted to be taught no programming whatsoever (= entirely domain knowledge, no programming knowledge). Consequently, the vast majority of scientists produced by the University of Cambridge don't know how a computer (or calculator) stores a number, let alone what the implications are. Equally, the overlap in mathematics between physical scientists and computer scientists is so small that a computer scientist is unlikely to be able to communicate with a physical scientist who wanted them to solve a problem on a computer (= entirely programming knowledge and no domain knowledge).
This is great for me, of course, because I can communicate with both natural scientists and computer scientists. I can do science and I can write code. However, it is still very frustrating to see other scientists using completely inappropriate computational tools (most notably Fortran) when it would save them time and heartache to learn "proper" ways of doing things. More recently, the over-use of Perl by bioinformaticians has given me a dicky ticker. ;-)