Lisp Pimpin'

387 comments, last by Mercury 18 years, 8 months ago
Quote:Original post by CoffeeMug
If I'm in the business of writing trading systems I'm only concerned with writing trading systems. I really can't afford to rewrite technologies that already exist and do things much better than I could with my resources.
Considering the rubbishness of most things, yes you can. If you agree with what Peyton-Jones says in this paper, a lot of code is based on an ad-hoc misunderstanding of a domain. If you enter a domain with knowledge deep enough to represent it as an algebra or calculus, then you will end up with something very powerful and concise. A working example of this is darcs.
a.) Most programmers/software engineers/software architects eat at combinatory logic.
b.) Most programmers have ad-hoc domain knowledge or are domain experts with an ad-hoc knowledge of program design. Being good at both programming and having deep domain knowledge can make you extremely powerful.

John has expert knowledge in physics and computation. I'm interested in his opinion here.

You're probably thinking of domains most programmers are interested in and, yes, have a lot of work that isn't worth replicating: graphics, audio, network stacks, etc.

[Edited by - flangazor on August 18, 2005 2:18:47 PM]
Quote:Original post by Nathan Baum
Several times.

Interestingly, nobody has mentioned that Quake 3 was mostly written in C.


So were Doom 3, Halflife 2, Unreal, and, I'm assuming, Duke Nukem Forever. All of which, IIRC, had massive delays.
Doom 3 was id's first real foray into a C++ engine I believe.

Language may have an effect on the delays, but afaik, the mitigating factors were:

Unreal: Built an engine from scratch.

Half-life 2: Hacking together an engine on top of their previous hackjob, while spending more time giving paid advertisements for ATi.

DNF: It takes more to make a game than "we're gonna make a new duke 3d and it will be the bestest game evar and everyone else sucks and it's a 1999 game".
Quote:Original post by flangazor
Considering the rubishness of most things, yes you can.

Well, my original statement was in reference to Erlang, which is an extremely robust, concise and efficient system in its domain. Reimplementing Erlang in Lisp would take me a long time and cost a lot of money. What's the point if Erlang already exists? Originally someone made an argument that using Erlang with Lisp would be too hard because of interoperability issues. So I made an argument in favor of interoperability.
Quote:Original post by flangazor
If you enter a domain with knowledge deep enough to represent it as an algebra or calculus, then you will end up with something very powerful and concise.

This is precisely the reason why VB is so popular: hiring heavyweight experts is way too expensive. Even if your shop is full of experts and the rest of the world is full of mostly idiots who make crap software, what's the point of rewriting everything that's good enough to be reused? Otherwise, I do agree with your argument to a reasonable extent.
Quote:This is precisely the reason why VB is so popular: hiring heavyweight experts is way too expensive. Even if your shop is full of experts and the rest of the world is full of mostly idiots who make crap software, what's the point of rewriting everything that's good enough to be reused? Otherwise, I do agree with your argument to a reasonable extent.
Yes - without VB or VBA people would actually be performing the macros themselves. For example, every Thursday my cousin would have to check two spreadsheets full of numbers and make sure some calculations matched up on both sides. I taught my cousin VBA and helped him write a script that ended up doing all that work, and he either spent his Thursdays browsing the web or got a massive pay raise since he took on more work (I forget which).
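The kind of reconciliation script described above is only a few lines in any scripting language. Here's a sketch in Python rather than VBA, using CSV exports of the two spreadsheets; the file layout (two columns: label, amount) and the tolerance are invented for illustration:

```python
import csv

def load_totals(path):
    """Read a two-column CSV of (label, amount) rows into a dict."""
    with open(path, newline="") as f:
        return {row[0]: float(row[1]) for row in csv.reader(f)}

def reconcile(left, right, tolerance=0.01):
    """Return (label, left_amount, right_amount) for every row that is
    missing from one side or whose amounts disagree beyond tolerance."""
    mismatches = []
    for label in sorted(set(left) | set(right)):
        a = left.get(label)
        b = right.get(label)
        if a is None or b is None or abs(a - b) > tolerance:
            mismatches.append((label, a, b))
    return mismatches
```

A Thursday run would then be something like `reconcile(load_totals("sheet_a.csv"), load_totals("sheet_b.csv"))` (hypothetical filenames); an empty result means the two sheets agree.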

A lot of that code is so specific it's not worth rewriting. However, there are a lot of things definitely worth rewriting!

I think a lot of it depends on knowing what problem to solve. For example, Joel Reymont's online journal has him fluttering between a whole bunch of ideas. I think it's quite interesting to see him try to figure out the problem he wants to solve. IMO, he's on the right track with his understanding that a trading system doesn't need to be super quick; it needs to be scalable (you can buy more quickness with more hardware), it needs to make the tasks that most traders spend the most time on easier, and it needs to make hard things so easy they become second nature.
Quote:Original post by flangazor
Quote:Original post by CoffeeMug
If I'm in the business of writing trading systems I'm only concerned with writing trading systems. I really can't afford to rewrite technologies that already exist and do things much better than I could with my resources.
Considering the rubbishness of most things, yes you can. If you agree with what Peyton-Jones says in this paper, a lot of code is based on an ad-hoc misunderstanding of a domain. If you enter a domain with knowledge deep enough to represent it as an algebra or calculus, then you will end up with something very powerful and concise. A working example of this is darcs.
a.) Most programmers/software engineers/software architects eat at combinatory logic.
b.) Most programmers have ad-hoc domain knowledge or are domain experts with an ad-hoc knowledge of program design. Being good at both programming and having deep domain knowledge can make you extremely powerful.

John has expert knowledge in physics and computation. I'm interested in his opinion here.

You're probably thinking of domains most programmers are interested in and, yes, have a lot of work that isn't worth replicating: graphics, audio, network stacks, etc.


I think this is very interesting. I did my PhD with a friend who does the opposite to me in this respect.

I have always been keen to learn the foundations of various ways of solving problems. Consequently, my code makes minimal use of others', typically limited to only those libraries that I consider to be excellent, because I try to reinvent everything. Currently, that means OpenGL. This approach can get out of control though. I once spent over a month working on a library in C++ for handling atomic models which used templates to perform partial specialisation. My desire to learn culminated in my biting off more than I could chew and the library never worked in its intended form (mainly due to bugs in g++).

In contrast, my friend had a tendency to save the time that I would have spent learning new approaches and, instead, spent his time searching for the best existing implementations, opting for reuse rather than reinvention.

Individually, both of these approaches are limiting. However, when working together we could get an enormous amount done. For example, when he wanted a quick 3D visualisation of a 10^4-atom model of amorphous silicon, I could knock it up in a few minutes. When I wanted to order the atoms in a model by their position along a space-filling Hilbert curve and couldn't figure out how, he knew of the best implementation on the internet.
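The Hilbert-curve ordering mentioned above is worth a sketch. For brevity this is the 2D version (the atoms case would use the 3D analogue) of the standard iterative coordinate-to-distance conversion, in Python; `n` is the grid side length and must be a power of two:

```python
def rot(n, x, y, rx, ry):
    """Rotate/flip a quadrant so each sub-square is traversed in the
    right orientation."""
    if ry == 0:
        if rx == 1:
            x = n - 1 - x
            y = n - 1 - y
        x, y = y, x  # swap
    return x, y

def xy2d(n, x, y):
    """Distance along the Hilbert curve of cell (x, y) in an n-by-n grid."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(n, x, y, rx, ry)
        s //= 2
    return d
```

Sorting atoms by `xy2d` of their (quantised) positions gives the locality property the post is after: points close in index are close in space.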

So both approaches, reinventing and reusing, have their merits.

As for programming vs domain knowledge, you are spot on in the context of computational physics, IMHO. As an undergrad, 80% of my year in physics voted to be taught no programming whatsoever (= entirely domain knowledge, no programming knowledge). Consequently, the vast majority of scientists produced by the University of Cambridge don't know how a computer (or calculator) stores a number, let alone what the implications are. Equally, the overlap in mathematics between physical scientists and computer scientists is so small that computer scientists are unlikely to be able to communicate with a physical scientist who wanted them to solve a problem on a computer (= entirely programming knowledge and no domain knowledge).

This is great for me, of course, because I can communicate with both natural scientists and computer scientists. I can do science and I can write code. However, it is still very frustrating to see other scientists using completely inappropriate computational tools (most notably Fortran) when it would save them time and heartache to learn "proper" ways of doing things. More recently, the over-use of Perl by bioinformaticians has given me a dicky ticker. ;-)
Quote:Original post by flangazor
IMO, he's on the right track with his understanding that a trading system doesn't need to be super quick; it needs to be scalable (you can buy more quickness with more hardware), it needs to make the tasks that most traders spend the most time on easier, and it needs to make hard things so easy they become second nature.

I read his journal. Considering my limited experience with trading systems (I've worked with two equity trading systems written by different teams), he's solving non-existent problems. He's talking about trading software that'll verify patterns in historical data before the trade. I can't think of anyone who would want to use that functionality. There isn't a shred of evidence that verifying a trade against history even means anything! In other words, instead of watching people trade, finding repetitive things and making them simpler (via a better UI of some sort), he is solving a problem that traders don't even know exists. I'm not sure that it's the right approach to writing successful software. Maybe there's a million-dollar idea somewhere in there, but a sure way to make money is to improve upon an existing system by alleviating the limitations its users are screaming about. Planning on making little revolutions by designing things people don't need more often than not results in failure.
Quote:Original post by CoffeeMug
I read his journal. Considering my limited experience with trading systems (I've worked with two equity trading systems written by different teams), he's solving non-existent problems. He's talking about trading software that'll verify patterns in historical data before the trade. I can't think of anyone who would want to use that functionality. There isn't a shred of evidence that verifying a trade against history even means anything!


I'm trying to verify patterns in historical data to see if they can be traded. The visual way of doing this is called technical analysis. People come up with head and shoulder, bull, bear, whatever visual patterns. I'm trying to do the same quantitatively.

Verifying a strategy (not a trade) against history is called backtesting, and it's an absolute must if you want to be successful at trading.
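At its simplest, a backtest just replays a trading rule over historical prices and tallies the hypothetical profit. A toy sketch in Python of a moving-average rule (the price series, window size, and rule are invented for illustration, not anything from the project discussed here):

```python
def moving_average(prices, window):
    """Trailing mean of the last `window` prices."""
    return sum(prices[-window:]) / window

def backtest(prices, window=3):
    """Replay a naive rule over a historical price series: go long when
    price rises above its trailing moving average, exit when it falls
    below. Returns total profit per unit traded."""
    position = 0        # 0 = flat, 1 = long
    entry = 0.0
    profit = 0.0
    for i in range(window, len(prices)):
        ma = moving_average(prices[:i], window)
        price = prices[i]
        if position == 0 and price > ma:
            position, entry = 1, price        # buy
        elif position == 1 and price < ma:
            profit += price - entry           # sell
            position = 0
    if position == 1:
        profit += prices[-1] - entry          # liquidate at the end
    return profit
```

A real backtester adds transaction costs, slippage, and out-of-sample validation, but the skeleton is this loop: rule in, historical data through, P&L out.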

I finally settled on OCaml for the project; please see http://wagerlabs.com/uptick/2005/08/rose-by-any-other-name.html.

What have we learned here?

Why is lisp great?
  • Flexible
  • Good framework for creating DSLs ('the next big thing').
  • Always contemporary (can add language features as a project sees fit).
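To make the DSL bullet concrete: the idea is that the host language absorbs a vocabulary for the problem domain. A toy sketch in Python, using nested tuples as a rough analogue of Lisp s-expressions (Lisp macros do this far more seamlessly, since the DSL shares the host's syntax; the tuple vocabulary here is invented):

```python
# A miniature embedded DSL for arithmetic over named variables.
# Expressions are nested tuples: ("add", a, b), ("mul", a, b),
# ("var", name), or a bare number.

OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def evaluate(expr, env):
    """Recursively evaluate a nested-tuple expression against env."""
    if isinstance(expr, (int, float)):
        return expr
    head, *args = expr
    if head == "var":
        return env[args[0]]
    return OPS[head](*(evaluate(a, env) for a in args))
```

For example, `evaluate(("add", ("mul", ("var", "x"), 3), 1), {"x": 2})` computes 2*3 + 1. In Lisp the interpreter step disappears: macros rewrite the domain vocabulary into ordinary code at compile time.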

Why is lisp not so great?
  • Speed is oversold at the moment.
  • Ignorant people who ask for a ray tracer in lisp disappear when a ray tracer in lisp appears (Max_Payne). As CoffeeMug and Tron3k mention, a lot of people seem to go through this stage and that might need to be addressed.


Correct/flame away.
Quote:Original post by jdh30
I've heard that Haskell has an excellent FFI. I've used OCaml's a bit and it isn't great. I'd expect Lisp to be somewhere between the two.


Although I really love Lisp, the FFI can be a pain. At some point you will have to introduce C or C++ additions, and this is where things get funky.

Each implementation seems to have its own FFI. CLISP has a reasonable one for a widely-used, open product. But for professional development in Lisp, you really need to go to a commercial implementation. Allegro Lisp appears to have the most extensive FFI, and Xanalys LispWorks also has a good one.
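Whatever the implementation, an FFI binding has the same shape: load the foreign library, declare the function's C signature, then call through it. A sketch using Python's ctypes against the C math library, as a language-neutral analogue of what a Lisp FFI declaration does (the library lookup is platform-dependent, which is exactly the kind of wrinkle being complained about above):

```python
import ctypes
import ctypes.util

# Locate and load the C math library; the name differs per platform.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the foreign signature: double cos(double).
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

In Allegro or LispWorks the corresponding declaration is a `def-foreign-call`-style form, but since each vendor spells it differently, bindings written for one implementation rarely move to another unchanged.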

For this and other reasons, if you want to do professional work, you really need to invest in one Lisp implementation and stay with it; I suspect that this is one reason why Naughty Dog (part of Sony Computer Entertainment, and the developer of the PS2 Jak and Daxter games) chose Allegro Lisp for their development platform.

Having said this, there is some remarkable animation work being done by major Hollywood Studios with software based upon the old Symbolics Lisp implementation, which is not widely available (although I think that the Allegro Lisp implementation was started by guys at one time associated with Symbolics).

--random_thinker

As Albert Einstein said: 'Imagination is more important than knowledge'. Of course, he also said: 'If I had only known, I would have been a locksmith'.

