I have read that it is quite the opposite: TDD makes change easier and safer. If you want to change some behavior, you first change the related tests; the rest of the suite should still pass....
Yes, but do you realize how much work writing tests actually is? Pretty soon you'll have a few hundred of them.
Also, how do you know you changed all the tests that needed to be changed? How do you know you changed them all in the right way?
You'll need meta-tests to test the tests, I think! Now we only need to solve how to make sure the meta-tests test the right thing....
For small-scale changes this may be feasible, but you're at least doubling the workload, since tests and code have to stay in sync.
Writing this, it sounds more like acceptance tests.
Acceptance tests are tests where the client agrees the result does what you agreed on, and hands over the money. The main interest of the client is typically not whether your lexical parser recognizes a tab character as white space.
As far as I understand, when you are in a design phase, you choose most of the patterns that might implement the use cases. On the other hand, there are people who say that testing disciplines can drive and even improve software design.
These two sentences both use "design", but they don't agree on what "design" actually is.
"Design phase" in the first sentence refers to the process of designing: deciding, starting from 'nowhere', how to make happen what the requirements document promised. What's the overall structure, what are the responsibilities, etc.
"Software design" in the second sentence refers to the result of designing: the internal structure of the software and the exact operations laid out in your source code, all the *.xyz files.
While both things are strongly related, they are not the same thing. TDD doesn't cover how you invent software structure. It won't tell you to use an FSM for a lexical parser.
What it does do is give you concrete test cases (lots of very small ones) against the latter. Since you write them before you write the code, you are not influenced by the structure of the code, and the tests are better (it would be even better if the tests were not written by the person writing the code). In general, this improves the code. Good tests tend to cover edge cases you didn't think of while writing the code, and may even uncover bugs in the design. For example, there may be a test where it turns out you need information that the code doesn't have at that point.
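To make the idea concrete, here is a minimal sketch of such a test-first case, using the tab-as-whitespace example from earlier in the thread. The `is_whitespace` helper is hypothetical; the point is that the test cases (tab, empty string) are written down before the function exists, so they encode the requirement rather than the implementation:

```python
import unittest

# Hypothetical lexer helper -- imagined here for illustration. In test-first
# style, the tests below are drafted before this function is written.
def is_whitespace(ch: str) -> bool:
    """Return True if ch counts as white space for the lexer."""
    # A tuple (not a string) is used for membership, so that the empty
    # string does not accidentally count as white space.
    return ch in (" ", "\t", "\n", "\r")

class TestIsWhitespace(unittest.TestCase):
    def test_space(self):
        self.assertTrue(is_whitespace(" "))

    def test_tab_is_whitespace(self):
        # The edge case from the acceptance-test remark above: a tab
        # character must be recognized as white space.
        self.assertTrue(is_whitespace("\t"))

    def test_letter_is_not_whitespace(self):
        self.assertFalse(is_whitespace("a"))

    def test_empty_string(self):
        # Writing this test first forces a decision the coder might
        # otherwise never consciously make: "" is not white space.
        self.assertFalse(is_whitespace(""))

if __name__ == "__main__":
    unittest.main()
```

Note how two of the four cases (the tab and the empty string) are exactly the kind of input one rarely thinks of while writing the function body.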
The big question here is: is the effort of writing all these tests worth the time? Opinions differ on that. A blog post that points out some weaknesses:
http://pythontesting.net/strategy/why-most-unit-testing-is-waste/
Personally, I only use unit tests for cases where I want to be really, really, really sure the code does what I think it does. This is either the core of an algorithm which must be correct, or a piece of software that is buried under a zillion layers but on whose proper functioning everything rests. Debugging the latter is such a nightmare that I want to avoid it at all costs.
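As a sketch of what I mean by "the core of an algorithm", consider a small hypothetical interval-overlap routine (not from any real code base). An off-by-one bug here would surface many layers up as a mystery failure, so the tests aim squarely at the boundary cases where such bugs live:

```python
# Hypothetical core routine: do two half-open intervals [a_start, a_end)
# and [b_start, b_end) intersect? Small, but everything above depends on it.
def overlaps(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:
    return a_start < b_end and b_start < a_end

# "Really, really sure" means testing exactly the boundaries:
assert overlaps(0, 5, 4, 10)        # one-element overlap
assert not overlaps(0, 5, 5, 10)    # touching endpoints: no overlap (half-open)
assert not overlaps(5, 10, 0, 5)    # same, other order
assert overlaps(0, 10, 3, 4)        # full containment
```

These few assertions cost almost nothing to write, yet they pin down precisely the off-by-one behavior that would otherwise be a nightmare to debug from several layers up.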
In all other cases, the yield of unit tests is too low for me. (Any test that fails to find an error is wasted effort!) As a result, after the software is finished, people use it and find a few bugs, which are usually simple to fix. After a while, the reported bugs are not bugs in my software, but bugs in the code of the people using it.
That does not mean my software is bug-free; it's just that the usual paths that everybody uses have been exercised enough to be bug-free. The other paths very likely still contain bugs, but nobody uses those paths, so they are never encountered by anyone.
Note that having tests does not mean bugs won't happen. My former colleague did like tests, so we have a big set of them for a simulator. This year, people found a bug in the dependency calculations, which apparently was not covered by the tests.