The only problem with the SD Best Practices show is that the expo is very small - there are maybe ten vendors. I suppose it makes some sense: SD Best Practices seems to be roughly the equivalent of the gaming industry's Austin Game Conference relative to GDC (the software industry has its big SD West conference). But the problem is that it's easy to get burned out if you go to every single class session, which I have tried to do - and failed, due to burnout.
In any case, these were the classes from Day 3:
- Introduction to Financial Engineering for Software Developers by Christian Gross - This was a great overview of the terms and methods used in the financial industry. Gross clearly laid out why software developers going into the financial industry need to understand the business side of financial engineering - it's practically required to have any chance of succeeding in that environment. Not that I'm looking to join the financial industry; my interest lay in the terms and techniques used to build software for it.
- Quantitative Testing: Moving Beyond Unit Testing by Christian Gross - Another one by Mr. Gross, where he discussed how to use statistical methods to verify that your algorithm doesn't just function but is actually right. I'm a firm believer in using data to back up your assertions, so this class was really interesting to me. Gross doesn't advocate abandoning unit tests, but he does believe they are limited in what they can do, as they only cover about 80% of the problem (i.e. does it function?). The other 20% is where the most difficult problems lie, and they may only surface once in a while - but through quantitative analysis of your algorithms you can verify that your algorithm is correct (i.e. is it right?). This is a topic I want to explore more, particularly in how it can be used for 3D simulation and gaming.
- Code Metrics and Analysis for Agile Projects by Neal Ford - This one was mildly disappointing. I was hoping it would be a sort of extension of the Quantitative Testing class and discuss analyzing your code metrics and making strategic decisions based on those metrics, particularly as applied to agile projects. It touched on that a little bit toward the end, but a good chunk of time was spent watching code analysis tools run on the Java struts framework. For one, I didn't even know what struts was until I looked it up (a framework for web apps), and for another, do we really have to watch code analyzers run and look at the statistics they pump out about the code? In any case, I did pick up a few terms to throw into Google and got a better understanding of some metrics tools I had recently heard about (e.g. treemaps). Overall, though, I didn't have any "aha!" moments. For the most part I already collect and analyze the data Ford presented. It was a bit of a low after the fun statistical stuff from earlier in the day.
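The quantitative-testing idea from Gross's second session can be illustrated with a sketch (this is my own toy example, not something from his talk): instead of asserting one hand-picked case in a unit test, run an algorithm against a trusted reference over many random inputs and measure the worst-case error statistically. Here a single-pass (Welford) variance routine is checked against Python's `statistics.variance` across a thousand random datasets.

```python
import random
import statistics

def welford_variance(xs):
    """Single-pass sample variance using Welford's online algorithm."""
    mean = 0.0
    m2 = 0.0
    for n, x in enumerate(xs, start=1):
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return m2 / (len(xs) - 1)

def quantitative_check(trials=1000, size=500, seed=42):
    """Compare the single-pass variance against the library reference
    over many random datasets; return the worst relative error seen."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(size)]
        ref = statistics.variance(xs)
        got = welford_variance(xs)
        worst = max(worst, abs(got - ref) / ref)
    return worst

if __name__ == "__main__":
    # A unit test would ask "does it function?" on one input;
    # this asks "is it right?" across the input distribution.
    print(f"worst relative error over 1000 trials: {quantitative_check():.2e}")
```

A single passing unit test would never reveal, say, a precision loss that only shows up on certain data distributions; sweeping the input space and looking at the error statistics does.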
Cross-posted at Code.Implant.