I don't find anything compelling in Tutorial Doctor's arguments. My own point of view is that computers are already better than humans at many tasks: arithmetic, finding primitives of functions (a.k.a. indefinite integrals), playing many board games like chess or checkers, making investment decisions, playing Jeopardy, etc. The list will just expand over time until there is nothing a human can do better than a computer. I don't know if "replicating" human intelligence is relevant at all, once computers are better at everything. In any case, it won't be a matter of months.
Ideas about code that can debug or modify itself were very prominent in early attempts at AI (that's why Lisp, with its code-as-data model, was popular in the field). Those attempts failed miserably, probably because that line of thinking is completely misguided. Your introspection about how you reorganize a shelf is just not very insightful, that's all. You certainly didn't have to perform brain surgery on yourself to finish the task, which is the analogous situation to code that debugs itself.
The central problem of AI is how to make decisions. A solution to this problem has existed for at least 50 years: The best decision is one that maximizes the expected value of a utility function. The devil is in the details, of course.
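To make that concrete, here is a minimal sketch of the principle (not anything from a real system; the action names, probabilities, and utility values are made up for illustration):

```python
# Expected-utility decision making: pick the action whose expected
# utility is highest. All numbers below are hypothetical.

def expected_utility(outcomes, utility):
    """Expected utility of an action given (outcome, probability) pairs."""
    return sum(p * utility(o) for o, p in outcomes)

def best_action(actions, utility):
    """Return the action maximizing expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Toy example: carry an umbrella given a 30% chance of rain.
utilities = {"dry, no umbrella": 10, "dry, carried umbrella": 8, "wet": 0}
actions = {
    "take umbrella": [("dry, carried umbrella", 1.0)],
    "leave it":      [("dry, no umbrella", 0.7), ("wet", 0.3)],
}
print(best_action(actions, utilities.get))  # → take umbrella
```

The hard parts ("the details") are all hidden in this toy: where the probabilities come from, how the outcome space is represented, and what the utility function should be.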
There have been several areas where AI has been successful recently: machine translation, visual object classification, self-driving cars... Most of those areas have seen enormous advances thanks to the availability of large amounts of data and the ability to process it using machine learning.
[EDIT: By the way, 1,000,000! only has about 5.5 million digits. I am sure it's not that hard to compute, but I don't know what point you were trying to make: Do you know of any human that can compute it?]
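The digit count is easy to check without computing the factorial at all, since the number of decimal digits of n! is floor(log10(n!)) + 1 and log10(n!) = lgamma(n+1)/ln(10):

```python
import math

def factorial_digits(n):
    """Number of decimal digits of n!, via the log-gamma function
    (lgamma(n+1) == ln(n!)), avoiding any big-integer arithmetic."""
    if n < 2:
        return 1
    return math.floor(math.lgamma(n + 1) / math.log(10)) + 1

print(factorial_digits(1_000_000))  # → 5565709, i.e. about 5.5 million digits
```

This runs in microseconds, whereas computing 1,000,000! exactly and counting its digits is far slower.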