But I guess it's a bit of a question of philosophy, too. If you're 100% positive that you test all conditions that may possibly fire an assertion, it's of course fine... you get the same net effect without runtime cost. I'm not that confident in myself; I still prefer having a minimal amount of runtime checks.
It's a lot of philosophy, actually.
Much of that originates from the kind and scale of programs that you write. For small to medium programs, written by a small number of people, exceptions work. Everybody knows what exceptions to expect at every point in the program, and can decide whether or not to catch each of them in the function they are writing.
One thing you cannot see in the code, however, is whether a developer actually considered handling exception X and concluded he didn't have to, or whether he never even considered that exception X could happen. The absence of a catch doesn't tell you which.
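To make that concrete, here is a minimal C++ sketch (the `parse_port` / `default_or_parsed` names are hypothetical, invented just for illustration). The function catches one exception explicitly, but whether the other one propagating upward is a decision or an oversight is invisible in the code:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical parser, used only for illustration.
int parse_port(const std::string& s) {
    // std::stoi may throw std::invalid_argument or std::out_of_range.
    return std::stoi(s);
}

int default_or_parsed(const std::string& s) {
    try {
        return parse_port(s);
    } catch (const std::invalid_argument&) {
        return 8080;  // explicitly handled: non-numeric input gets a default
    }
    // std::out_of_range propagates to the caller -- but is that a
    // deliberate choice, or did the author never consider it? The
    // code cannot tell you.
}
```

A reviewer sees one catch clause and has no way to distinguish "considered and forwarded" from "never thought about it".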
This is where exceptions break down in bigger projects, with more people, and with people joining and leaving the project (with 100 employees, people do get hired and fired while the project is developed). Nobody has a full overview of all the code, and nobody knows precisely what exceptions to expect from the called sub-routines, since that is typically not documented. If you want to know for some routine P, you have to follow all calls from P recursively down to the OS calls, and then see how every function that it directly or indirectly calls handles each kind of exception it might get from its own child calls. In the worst case, this means examining all code of the project. With a large program, this is not feasible due to time and resource constraints. Even if you had enough time, it's all manual work, which means errors will be made, so your final conclusion about the set of exceptions that may happen at some point in the program is likely not even correct.
So, for big programs and large projects, it's simply not feasible to know what exceptions to expect, which means you have no idea what exceptions you should actually handle in the code. How do you get out of this? This is where the "exceptions are bad" idea comes from. Instead of having exceptions that may or may not happen in some routine P, we return the error result in the return value, and at the next higher level we decode the cases and handle the normal case and all error cases.
It costs a little CPU time, but the code explicitly tells us, at every function, what error cases exist. In the code, I can see what the programmer did for each case. The code looks different when a developer forgot about an error case than when he deliberately forwards it to the higher level. That makes it simpler to spot in code reviews that some error was never considered.
The bigger advantage, however, is that all over the program, for every function P, I can see what errors it currently returns. I can look at the child function calls and see what they can return. I don't need to recursively dig through half the code base to collect that information. The normal case and all error cases can be found locally. Reviewing becomes much simpler, and thus gives better results, leading to higher-quality code.
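The return-value style above can be sketched like this in C++17 (again with hypothetical names; `std::variant` stands in for whatever error-carrying return type the project uses). Every error case is an explicit enumerator, so the full set of outcomes is visible locally, and "handled" versus "forwarded" are different lines of code:

```cpp
#include <string>
#include <variant>

// All error cases of parse_port are enumerated here, locally.
enum class ParseError { Empty, NotANumber, OutOfRange };

std::variant<int, ParseError> parse_port(const std::string& s) {
    if (s.empty()) return ParseError::Empty;
    int value = 0;
    for (char c : s) {
        if (c < '0' || c > '9') return ParseError::NotANumber;
        value = value * 10 + (c - '0');
        if (value > 65535) return ParseError::OutOfRange;
    }
    return value;
}

// The caller decodes every case. A forgotten enumerator is visible in
// review, and a switch over the enum without a default even gets a
// compiler warning (-Wswitch) when a case is missing.
std::variant<int, ParseError> port_or_default(const std::string& s) {
    auto r = parse_port(s);
    if (auto* err = std::get_if<ParseError>(&r)) {
        switch (*err) {
        case ParseError::Empty:      return 8080;  // handled: use a default
        case ParseError::NotANumber: return *err;  // forwarded, deliberately
        case ParseError::OutOfRange: return *err;  // forwarded, deliberately
        }
    }
    return std::get<int>(r);
}
```

A reviewer reading `port_or_default` sees all three error cases and exactly what happens to each, without chasing calls down to the OS level.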
It may cost a millisecond of CPU time, but for a large program it saves you several developers that you don't have to pay, and gives higher-quality code.