My conclusion: the difference is having feedback data (a lot of it visual) to analyse. When I have feedback data, particularly visual, I do very well; otherwise I mostly fail.
Yep, humans are doing fine with visual information. :)
For instance, recently I was iterating through the pixels of some bitmaps (400×800) using nested loops and doing some calculations during the cycles. During one of the cycles something broke. To find the root cause I would have had to put in at least 5 breakpoints and step through with the debugger. That meant I would have to click 400×800×5 (1,600,000) times, or less!!!
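For illustration, the kind of loop being described might look like the sketch below. The per-pixel calculation and the failure condition are made-up placeholders, but the point stands: one conditional trap replaces the 1.6 million clicks.

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

const int WIDTH = 400, HEIGHT = 800;

void process(std::vector<unsigned char>& pixels) {
    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x) {
            unsigned char& p = pixels[y * WIDTH + x];
            int result = p / 2 + 64;            // stand-in calculation

            // Instead of stepping through 400*800 iterations by hand,
            // trap only the iteration where things actually go wrong:
            if (result < 0 || result > 255) {   // whatever "broke" means here
                std::fprintf(stderr, "bad pixel at x=%d y=%d p=%d result=%d\n",
                             x, y, p, result);
                assert(false);  // or put the one and only breakpoint here
            }
            p = (unsigned char)result;
        }
    }
}
```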
I fully agree that a debugger here is not the answer. When I hit a problem like this, and I don't know the next step, I stop trying (and do other things).
I ponder about it in the shower, while doing the dishes, at 6:40 in the morning when I wake up but the alarm clock is still silent, while walking the dog. Usually in a few days I get a new idea for how to catch the little bugger, and I try that. Repeat until the problem is solved :)
This may look counter-productive, but if you leave it for a few days and then look again, you have a fresher view, you see new/different things, and you get new ideas.
The question I was asking myself afterwards (and like to ask others) was: would game developers be able to develop successful algorithms if the concepts are abstract?
I think the concepts are abstract already. An AI has to work for every situation it may encounter; all puzzles that the game generates must be solvable. Did you try to solve every possible puzzle of your program, in every possible way a user might solve it? My guess is that you didn't. So how do you know it will work?
The only way I know is by reasoning about it in terms of abstract concepts.
Heck, game software in the end only produces abstract interactive animated pictures, nothing more. There are never tanks, airplanes, and fellow soldiers in my apartment when I play an FPS. It's all imagination going wild :p
And do real abstract problems exist in game development?
Sure, they do. Something like path-finding is not solved for each case separately; the A* algorithm has no doubt been invented from an entire class of route problems.
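As a sketch of what that class-level solution looks like (the grid layout, uniform step cost, and Manhattan heuristic are placeholder choices for illustration, not anybody's actual implementation):

```cpp
#include <climits>
#include <cstdlib>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

// Minimal A* on a small tile grid (0 = walkable, 1 = wall).
// Returns the length of the shortest path, or -1 if the goal is unreachable.
const int W = 8, H = 8;

int heuristic(int x, int y, int gx, int gy) {
    return std::abs(x - gx) + std::abs(y - gy);  // Manhattan distance
}

int astar(const int grid[H][W], int sx, int sy, int gx, int gy) {
    using Node = std::tuple<int, int, int>;      // (f = g + h, x, y)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    std::vector<int> g(W * H, INT_MAX);          // best known cost per tile

    g[sy * W + sx] = 0;
    open.emplace(heuristic(sx, sy, gx, gy), sx, sy);

    while (!open.empty()) {
        auto [f, x, y] = open.top();
        open.pop();
        if (f > g[y * W + x] + heuristic(x, y, gx, gy)) continue;  // stale entry
        if (x == gx && y == gy) return g[y * W + x];

        static const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            if (grid[ny][nx] == 1) continue;     // wall
            int cost = g[y * W + x] + 1;
            if (cost < g[ny * W + nx]) {         // found a better route
                g[ny * W + nx] = cost;
                open.emplace(cost + heuristic(nx, ny, gx, gy), nx, ny);
            }
        }
    }
    return -1;
}
```

Nothing in there knows about any particular game; it works for any "landscape of tiles", which is exactly the abstraction.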
How did Leonhard Euler visualise and formulate e^(iπ) + 1 = 0 (i, e and π being weird numbers impossible to visualise)?
Can you visualize 1? Can you visualize 1.5? Can you visualize 3.14? If you can, you have visualized a pretty close approximation to π. The only reason you consider it a 'weird number' is because our decimal number system cannot cope with it, but you can just as well blame the number system for being weird :)
However, this is not how Euler worked. He read and studied math. He read and studied geometry. π is not a weird number there; it's the key value in the entire theory. It pops up everywhere in relations between different concepts in geometry. It's like powers of 2 in computers: you find them everywhere, since they're fundamental to how bits are organized.
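For what it's worth, the identity itself is one substitution away from Euler's formula, which in turn comes from lining up the power series of exp, cos and sin; no visualisation needed:

```latex
% Euler's formula (from matching the power series of e^x, cos x, sin x):
e^{i\theta} = \cos\theta + i\sin\theta
% Substitute \theta = \pi:
e^{i\pi} = \cos\pi + i\sin\pi = -1
\quad\Longrightarrow\quad
e^{i\pi} + 1 = 0
```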
How did physicists formulate the mathematics of 10-11 extra dimensions when it's practically impossible to imagine more than 4 dimensions?
How do you handle
- Scalability
- Speed
- Maintainability
- Extensibility
- Modularity
- Code size
- Memory usage
- Persistence
- Upgrading/updating
- Version migration
of your code? It's practically impossible to think in more than 4 dimensions!
The answer is, of course, that you don't. You don't do all of the above at the same time, and neither do physicists. You use the fact that the different dimensions are loosely coupled: add one dimension at a time, and verify it against all already existing dimensions. Repeat until all dimensions are done.
How did Nikola Tesla visualise that AC was the solution to cheap, efficient electricity?
He probably made a few calculations of what would happen if electricity went nation-wide: how much power would I need, how much would I lose in the cables, that sort of calculation.
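A back-of-the-envelope version of that calculation (all numbers invented for illustration): for a fixed delivered power P = V·I, the cable loss is I²R, so every tenfold increase in voltage cuts the loss a hundredfold, and AC transformers are what made that voltage step-up cheap.

```cpp
#include <cstdio>

// Back-of-the-envelope line-loss calculation (illustrative numbers only).
// Delivering power P at line voltage V pushes a current I = P / V through
// the cable, and the cable burns off I^2 * R as heat.
int main() {
    const double P = 100000.0;  // 100 kW to deliver
    const double R = 5.0;       // cable resistance in ohms (made up)

    const double volts[] = {240.0, 2400.0, 24000.0};
    for (double V : volts) {
        double I = P / V;           // current drawn at this voltage
        double loss = I * I * R;    // power lost in the cable
        std::printf("V = %7.0f V  ->  I = %7.2f A, loss = %10.1f W\n",
                    V, I, loss);
    }
}
```

At 240 V the cable would burn off more power than the load even receives; at 24 kV the loss is negligible.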
I believe there is no single 'concrete' and 'abstract'. They come into existence as soon as you draw a line and call one side 'concrete' and the other side 'abstract'. You can draw infinitely many different lines, and thus there are infinitely many forms of 'concrete' and 'abstract'.
You think images displayed on the screen are concrete? Sorry, but they don't exist; they are just a bunch of lights that are turned on and off by a program. A sort function is concrete? Sorry, but "statement" and "variable" don't exist; both are just values in memory space. Hmm, memory space doesn't exist either; wires split the space into various blobs of silicon chips. Values don't exist either; they are just a bunch of bits that we interpret as "belonging together". Hmm, bits are just a bunch of wires and an electrical charge that we interpret as 1 or 0. I don't know more levels, but I am sure a physicist could give you a few more. Depending on how you look at things, you create a concrete reality for yourself to work in.
What you do with source code and data structures is the same as what the smart people above do. They draw lines. The part below the line they understand and assume to work; then they think about the part above the line, using ideas from below the line. Everybody has several lines. When you write a new function, the new function first lives above the line. After you finish, you draw a new line, where the just-created function is assumed to be available and working below the new line, and you can make a new function on top of it.
If you find a bug in the function, you step back to the previous line, and find out how the function fails.
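In code form, that line-drawing looks something like this toy sketch (the names are hypothetical): `clamp_byte` is finished and lives below the line; `brighten` is the function currently being written above it.

```cpp
#include <vector>

// Below the line: finished, tested, assumed to work from now on.
unsigned char clamp_byte(int v) {
    if (v < 0) return 0;
    if (v > 255) return 255;
    return (unsigned char)v;
}

// Above the line: the function under construction. It uses clamp_byte
// as a given rather than re-deriving it.
void brighten(std::vector<unsigned char>& pixels, int amount) {
    for (unsigned char& p : pixels)
        p = clamp_byte(p + amount);
}
```

If `brighten` misbehaves, you first assume the bug is above the line; only when that fails do you step back and question `clamp_byte` itself.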
The other way around works too. I can speed up a routine, but what happens in the surrounding code then? You look at the local function, creating a new line for the faster version, then go up a line to check how it works in the bigger picture.
Most people draw and switch lines without realizing it. Scientists, however, are very good at this game: they consciously switch between lines, draw new lines, and consider the consequences. Each line gives a new way to look at a problem.
A second thing that everybody does is throw away clutter. All programming languages basically do the same thing: they create CPU instructions. So I just think in CPU instructions instead and drop all the languages; I'll figure out how to express my solution in language X later.
Hmm, I am not really interested in the precise order of CPU instructions; let's just say a CPU can move data and compute data. Then I don't need to consider the CPU at all: I can just think in data values and structures, and how they relate and change. I'll figure out how to tell the CPU what to do later.
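The same computation at those two levels, as a toy sketch (function names are made up): one version spells out the move/compute steps, the other only states the data relation and leaves the CPU to the compiler.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// CPU-level thinking: move data, compute data, one step at a time.
int sum_cpu_style(const std::vector<int>& v) {
    int acc = 0;                              // "load 0 into a register"
    for (std::size_t i = 0; i < v.size(); ++i)
        acc += v[i];                          // "load, add, loop"
    return acc;
}

// Data-level thinking: "the result is the sum of the values";
// how the CPU gets there is the compiler's problem.
int sum_data_style(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0);
}
```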
You stop throwing away clutter when you arrive at a level where you can comfortably think about what you want to achieve. We typically stop at the level of C++ or data structures, as that's where we normally operate.
If you invent an algorithm, you typically stop when you have the smallest number of relevant concepts left (which is also "the point where you can comfortably think"). E.g. path-finding is something like "I have a landscape (of tiles) and a bunch of things that need a route from some point to another point in the landscape."
Scientists typically drop everything but the core elements that they need. Guys like Euler think in e and π, as that's the core of their theory.