I attended your tutorial, Ferretman, and I agree there were some very interesting things mentioned.
I don't know if there has already been a thread around here on this, so I'll mention it anyway. Those interested in AI hardware should keep tabs on http://www.aiseek.com for when it comes online.
I am most interested to see how they get around some of the obvious implementation issues: are there many AI algorithms that can be abstracted enough to form a usable, generic solution across the board?
One of the more vocal participants in Ferretman's tutorial raised some objections to a representative of this Israeli group doing the AI acceleration hardware. Among these was the need to transfer information to the card in great quantities, and then retrieve it at a fast enough speed. At the time I remember thinking that this objection probably had sufficient grounds to render the hardware impractical on current systems.
However, I have recently been awakened to the upcoming PCI-Express technology. Given the stats quoted for the x16 slots (initially one per board, intended for the graphics card), I don't think data transfer for an AI accelerator is an issue, provided it can be accessed via an x16 slot.
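To put a rough number on that, here is a back-of-envelope sketch. The ~250 MB/s usable-per-lane figure is my assumption for first-generation PCI Express (it isn't from the tutorial), and the 60 fps frame budget is just an illustrative scenario:

```python
# Back-of-envelope: can a PCI-Express x16 slot feed an AI accelerator?
# Assumption: ~250 MB/s usable bandwidth per lane, per direction,
# for first-generation PCI Express (not a figure from the tutorial).
LANES = 16
MB_PER_LANE_PER_S = 250

# Aggregate one-way bandwidth of an x16 slot.
bandwidth_mb_s = LANES * MB_PER_LANE_PER_S   # ~4000 MB/s each direction

# Hypothetical scenario: the card needs a fresh world-state snapshot
# every frame at 60 frames per second.
frame_budget_mb = bandwidth_mb_s / 60        # MB transferable per frame

print(f"{bandwidth_mb_s} MB/s one way, ~{frame_budget_mb:.1f} MB per frame at 60 fps")
```

Even if real-world throughput is well below the theoretical peak, tens of megabytes per frame is far more than a pathfinding or line-of-sight query set is likely to need, which is why the earlier bandwidth objection looks weaker on an x16 slot.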
Anyhow, I am quite keen to see what solution is proposed. Whatever it is, it will be slow to catch on if anything comes of it at all. The industry (here's my great generalisation) seems quick to optimise and improve existing hardware solutions, but slow to welcome new ones.