Inventing on Principle - amazing video!

Saw this on /r/gamedev... you should at least skip through it briefly, it's definitely worth your time!

[media]http://vimeo.com/36579366[/media]
That's pretty bloody impressive. His programming tools are amazing - I hope that sort of thing starts to filter into the mainstream.

And his philosophy is not bad either. Some great words to live by, right there.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

I'm only at the 12 minute mark and my mind is blown. When he started fiddling with "the future", I laughed out loud and said "bugger off" to my computer.
At first I thought this was a joke, but wow, I am amazed just like everyone else. I couldn't help but laugh at his catchphrase, "but it's not good enough." He's right, those tools are far better than good enough!
Denzel Morris (@drdizzy) :: Software Engineer :: SkyTech Enterprises, Inc.
"When men are most sure and arrogant they are commonly most mistaken, giving views to passion without that proper deliberation which alone can secure them from the grossest absurdities." - David Hume
First off, I really like how this guy looks at life and I'm going to snag several ideas he mentioned to guide my own judgement in the future.

However, as far as the presentation itself goes, the only problem I have is the applicability of the concept in a real-world environment. The part about animation is brilliant, and I would expect the next generation of artist tools to employ techniques like the ones he presents. The part about adjusting code parameters in real time is really useful for a subset of people (in particular, anyone working in a scripted environment).

However, when it comes to actual non-scripted programming that involves a non-trivial code base, I don't really see a benefit (although the concept has, in a sense, been around for a while in debug mode - at least in Visual Studio - which lets you adjust values at runtime, albeit without a temporal aspect).
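(To be concrete about the runtime-adjustment part: something as crude as the sketch below already gets you most of that benefit in a plain C++ code base. The tweaks.txt format and the Tweak() helper are made up for illustration - they're not anything from the talk or from Visual Studio.)

[code]
// Illustrative sketch only: a "tweakable" float that re-reads its value from a
// text file each frame, giving script-like live editing in plain C++.
#include <fstream>
#include <string>
#include <unordered_map>

float Tweak(const std::string& name, float fallback)
{
    std::unordered_map<std::string, float> values;
    std::ifstream file("tweaks.txt");   // one "name value" pair per line
    std::string key;
    float v;
    while (file >> key >> v)
        values[key] = v;

    auto it = values.find(name);
    return it != values.end() ? it->second : fallback;
}

// In the game loop: edit tweaks.txt while the program runs.
//   float jumpHeight = Tweak("jump_height", 2.5f);
[/code]

Re-reading the file on every call is obviously wasteful, but the point stands: the value changes without recompiling, which is the part of his demo that maps most directly onto existing workflows.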

In a broader sense, the problems I see are:

- any algorithm you write should be one that you, above all, understand. True, catching erroneous termination conditions (or a lack thereof) is a really nice thing and can speed up debugging considerably; however, if you really need a complex simulator (which the data-driven concept he advocates is not, as it is severely limited by the fixed nature of the input), you will still be stuck writing your own test cases and custom debugging tools. No one-size-fits-all solution is going to prove your code correct for you if you, the programmer, don't understand how it works in your head first
- any algorithm simple enough for a generic debugger like this to handle is, in my view, at best suitable as a learning exercise. Writing any more complex piece of code (or algorithm) will quickly reveal that either the feedback mechanism itself becomes too complex to grasp, or you will misjudge the code, either because your approach is "too artistic" (you rely too much on the test data and not the theory behind it) or because you simply don't grasp the broader picture the algorithm needs to fit into. An example would be any sort of recursion, which generally requires a very specific visual "feel" of the algorithm to get right. A tool like that will help you, yes, but only in the most trivial cases

In summary, there is and always will be a barrier between "the artist" and "the programmer". Bridging the gap between the two is a truly noble effort, but I fear it is flawed from the start. Simple rewording would help here: e.g. "expanding the artist's tools to visualize complex work more easily" or "giving the programmer a new perspective on their code". I do realize that I'm narrowing the grand idea behind his talk (which is saving ideas from destruction by our inability to realize them due to a lack of intuition), but he does try to apply the same principle to all walks of life. As for the programmer and the artist - the two will forever remain two different types of people with separate skillsets and purposes.

PS - thanks for sharing the video!

In summary, there is and always will be a barrier between "the artist" and "the programmer". Bridging the gap between the two is a truly noble effort, but I fear it is flawed from the start... As for the programmer and the artist - the two will forever remain two different types of people with separate skillsets and purposes.

As someone whose job it is to teach programming to art students, I say: meh.

You know what my art students have trouble with? Text editors, file system paths, FTP uploads, and remembering syntax. Know what they are really good at? Visualising the execution of algorithms. Describing the various sorting algorithms to them is arguably easier than describing them to CS students, as long as you draw an example on the whiteboard.

There is a huge empty space to be filled by programming languages that are easier to read/write without complex textual syntax/memorisation, and by tools that help us visualise every component of our program. Over 50% of the human brain is dedicated to visual processing - anything that can be done to help move programming from a textual to a visual medium is going to be a net win in comprehension of code/algorithms and ease of spotting bugs.

Now personally, I don't think his algorithm visualisation in textual columns was particularly compelling. But it's a step in the right direction, and someone needs to keep making those steps.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

He has some good ideas, but after working with such systems I can't say they're really applicable to many problems, or that they actually accelerate development. The problem with immediate-feedback schemes is that they require a nearly complete solution before you can iterate on it the way he describes. That's nice, but programming is problem solving, and by the time you've reached that state, most of the big design decisions have already been made and implemented; you'd just be iterating over the minutiae. It might give you insight into the edge cases, but it won't give you fundamentally new or innovative algorithms or designs.

On top of that, most problem sets are not easily visualized; maybe that's the bigger challenge. Writing a good visualizer would take a major effort that might be better spent with pen and paper, but who knows, maybe you could make it reusable for other circumstances.

-ddn

Edit: there are a few other good talks on that channel; this one is also worth watching. I think it's actually more pertinent and impactful.

http://vimeo.com/9270320
+1 thanks for sharing.

It's a fact of life that when programming, everyone makes obvious mistakes all the time, in simple algorithms and complex ones.
One method for dealing with this is pair programming -- with two people watching the one screen, a lot more of these mistakes get caught immediately, as your partner points out "you forgot a divide by 255 there" or "don't you mean minus two?", etc...

That's not very practical most of the time though, so the method that I use regularly is to use the debugger to step through every line of code that I write, and watch the execution flow and all values via the 'watch' window. Just by watching the data being transformed by your code, you can catch the majority of these simple human errors.
So, any IDE that made this process easier on me -- such as being able to instantly "step into" a function in the IDE using dummy data -- would be an amazing help for writing bug-free code.
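To illustrate what I mean by "step into a function using dummy data", a throwaway harness like the one below is usually all it takes today; the function and the data are just stand-ins for this example.

[code]
// Throwaway harness: feed hand-picked dummy data to the function under
// inspection, put a breakpoint on the call, and step straight into it.
#include <cstdio>
#include <vector>

// Stand-in for whatever function is actually being checked.
static int SumPositives(const std::vector<int>& values)
{
    int sum = 0;
    for (int v : values)
        if (v > 0)
            sum += v;
    return sum;
}

int main()
{
    std::vector<int> dummy = { 3, -1, 4, 0, -5, 9 };   // include a few edge cases
    int result = SumPositives(dummy);                  // breakpoint here, step in
    std::printf("result = %d\n", result);
    return 0;
}
[/code]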

I'm not sure how the rest of you go about writing bug-free code, but personally, watching the code running like this is a vital step in ensuring its correctness. If I haven't inspected each line via the debugger/watch, then I don't trust that there isn't a simple logic error waiting to manifest as a bug at some point in the future. It's like the traditional "desk checking" method, but without the possibility of my "internal computer" making the same logic error I did when writing the code originally.

+1 thanks for sharing.

It's a fact of life that when programming, everyone makes obvious mistakes all the time, in simple algorithms and complex ones.
One method for dealing with this is pair programming -- with two people watching the one screen, a lot more of these mistakes get caught immediately, as your partner points out "you forgot a divide by 255 there" or "don't you mean minus two?", etc...

That's not very practical most of the time though, so the method that I use regularly is to use the debugger to step through every line of code that I write, and watch the execution flow and all values via the 'watch' window. Just by watching the data being transformed by your code, you can catch the majority of these simple human errors.
So, any IDE that made this process easier on me -- such as being able to instantly "step into" a function in the IDE using dummy data -- would be an amazing help for writing bug-free code.

I'm not sure how the rest of you go about writing bug-free code, but personally, watching the code running like this is a vital step in ensuring its correctness. If I haven't inspected each line via the debugger/watch, then I don't trust that there isn't a simple logic error waiting to manifest as a bug at some point in the future. It's like the traditional "desk checking" method, but without the possibility of my "internal computer" making the same logic error I did when writing the code originally.


The problem I find with this is that this kind of debugging is thoroughly incomplete - by picking dummy data or a specific test case without more general testing, you essentially invite more mistakes, or you need to know in advance which cases to introduce into the test data to catch most of the bugs, IMO. While you can indeed catch simple execution ("human") errors that way, this kind of debugging is always a last resort for me, as it tends to be too specific and has little to do with a real-world environment. In fact, it is doubly useless if there is more than one mistake in the code - which is often the case. As for me, I:

1) write the algorithm as best I can
2) spend 5-10 minutes on the initial problems caused by typos, missing early-outs, etc. (I've generally become quite good at catching these, and I automatically employ certain failsafes to help locate the bug faster: for instance, I always use while(1) and include explicit termination conditions so I can quickly detect and test for infinite loops - see the sketch after this list)
3) if the algorithm has any complexity, write very specific debug output that tells me the information I need in the format I need, and, if needed, a custom test case that executes the algorithm using a configuration I can control far more easily than by typing it in (for instance, with the mouse). I've also spent some time implementing a real-time reflection technique for myself in C++ that lets me control floats and bools from the UI, which often helps FAR more than working with a static dataset (the fact that I had to do this in the first place is, of course, kind of blah)
4) only once I've located the general problem area, pinpoint the specific error conditions (eg which iteration, which object), set a breakpoint and step through only that
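
To show what I mean by the while(1) failsafe in step 2, here's roughly what it looks like in practice - the Newton-iteration loop and the names are just an example, not code from any of my projects:

[code]
// Sketch of the failsafe from step 2: an explicit iteration cap turns a
// silently hanging while(1) into a loud, easy-to-locate failure.
#include <cassert>
#include <cmath>

float SolveSqrt(float x)                    // Newton iteration as a stand-in loop
{
    float guess = (x > 1.0f) ? x : 1.0f;
    int guard = 0;
    const int kMaxIterations = 1000;        // generous bound; should never be hit

    while (1)
    {
        float next = 0.5f * (guess + x / guess);
        if (std::fabs(next - guess) < 1e-6f)
            return next;                    // the real termination condition
        guess = next;

        // Failsafe: if this ever fires, the loop is effectively infinite.
        ++guard;
        assert(guard < kMaxIterations && "SolveSqrt failed to converge");
        if (guard >= kMaxIterations)
            return guess;                   // bail out in release builds too
    }
}
[/code]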

Most of the time this catches the error almost immediately, at the cost of having to identify and write debug output for only the data I actually need. Got a 2k or 4k line algorithm you're debugging, with recursion and/or multiple nested loops running for tens or hundreds of iterations? Yeah, you're going to want to be specific about what you actually see (the approach in the video would help if it let you pick specific variables or, better yet, form debug statements on specific lines, involving multiple variables and expressions, that could be added/removed/manipulated at runtime and that the debugger would collate across the execution and display in proper human-readable fashion).
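
As a rough idea of the kind of targeted, collated output I'm describing, even something as simple as the macro below goes a long way; the TRACE macro and the MergeFaces example are made up purely for illustration:

[code]
// Illustrative only: print just the variables I care about, on the line I care
// about, tagged with where they came from, once per iteration.
#include <cstdio>

#define TRACE(fmt, ...) \
    std::printf("[%s:%d] " fmt "\n", __func__, __LINE__, __VA_ARGS__)

static void MergeFaces(int faceCount)
{
    for (int i = 0; i < faceCount; ++i)
    {
        int  vertexCount = 3 + (i % 4);        // stand-in for real computation
        bool flipped     = (i % 7) == 0;       // stand-in for real computation

        // Only the values that matter for this bug, in human-readable form:
        TRACE("face=%d verts=%d flipped=%d", i, vertexCount, (int)flipped);
    }
}

int main()
{
    MergeFaces(5);
    return 0;
}
[/code]

A debugger that could attach something like that to an arbitrary line at runtime, and collate the results across the whole run, is the part of his demo I'd actually want.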

For instance, I spent the past 3 weeks implementing and perfecting my CSG code. At the end of the day I learnt a few things about what my implementation cannot and is not supposed to handle (eg multiple object overlaps across multiple operations), but more importantly I spent 3 weeks on the code because it's far from trivial to debug and is prone to logic errors rather than coding errors (which make up only a small subset). A debugger like that would not have helped me one bit when dealing with a test case of 80 faces, 300 intermediate vertices, 20-way branching, quadruple-nested loops and two execution paths. In fact, IMO it would've been distracting and misleading.

For instance, I ended up needing far more illustrative debug output and a cold shower to figure out that polygon winding played a huge part in the merging process (a problem I deduced visually, and one that traced all the way back down into my triangulation code). For a situation like this, which I believe is much closer to a real-world scenario where the algorithms are far more complex and far more specific, a data-driven debugger would have been useless unless it was really clever about what kind of feedback to give me. As such, I believe there is (at least for now) no substitute for the human programmer when it comes to implementing non-generic algorithms: the flow NEEDS to be visually present in the programmer's head, the programmer NEEDS to understand all execution paths and fringe cases, and the programmer needs to accept that the code will only be as good as he or she is, because no tool will ever fix the biggest problem that causes implementations to fail: logic errors.

Over 50% of the human brain is dedicated to visual processing


I know you were just speaking figuratively, but this is completely untrue. The cerebellum alone houses about 60% of the neurons in the human brain. I would say it's probably closer to 10% of the human brain that is dedicated to visual processing. Neural networks are very good at recognizing patterns, no matter how abstract they may be. On the other hand, things that involve timing and memorization seem to require many more neurons. The reason visual tasks are so easy has less to do with the number of neurons dedicated to the task and more to do with the way neurons work.

