I don't know why I have to keep repeating myself.
It's because you're wrong and folks here have already explained why.
Down-votes only tell me that somebody is panicking for no reason.
I am as panicked as I am worried that motorcycles will take over from cars because they're faster and consume less fuel (conveniently ignoring that cars can transport more people, are more comfortable, withstand adverse weather better, and offer a higher chance of survival in the event of an accident).
Most indie titles might not need the kind of performance DX12 offers, but that doesn't mean you should run away from it (nor was the question limited to indie/niche markets).
Indies don't get scared away by the higher performance; they get scared away by the high maintenance cost.
Not only is it more complex to code, you also have to keep one codepath per vendor (that means three), because the optimization strategies are different. Not to mention you have to update those codepaths as new hardware is released; and if you did something illegal by spec that just happened to work everywhere, you may find yourself fixing your code when it breaks four years later because it's suddenly incompatible with just-released hardware.
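To make "one codepath per vendor" concrete, here's a minimal sketch of how that branching usually starts: by checking the adapter's PCI vendor ID through DXGI. GpuVendor and SelectVendorPath are made-up names for illustration; the DXGI call and the well-known PCI vendor IDs are real:

```cpp
#include <dxgi.h>

// Made-up enum/function names; only the DXGI call and the
// well-known PCI vendor IDs are real.
enum class GpuVendor { Nvidia, Amd, Intel, Unknown };

GpuVendor SelectVendorPath(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    if (FAILED(adapter->GetDesc(&desc)))
        return GpuVendor::Unknown;

    switch (desc.VendorId)
    {
    case 0x10DE: return GpuVendor::Nvidia; // prefer root descriptors
    case 0x1002: return GpuVendor::Amd;    // prefer descriptor tables
    case 0x8086: return GpuVendor::Intel;  // AMD-like rules, different upload path
    default:     return GpuVendor::Unknown;
    }
}
```

Everything downstream (root signature layout, upload strategy, compute scheduling) then keys off that result, which is exactly why it multiplies your maintenance burden.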
Edit: Just an example:
- On NVIDIA you should place CBVs, SRVs and UAVs in the root signature much more often. Also avoid interleaving compute and graphics workloads as much as possible.
- On AMD you should do exactly the opposite: avoid putting things in the root signature (except for a few parameters, especially the ones that change every draw; see the sketch after this list). Also interleave compute & graphics as much as possible.
- On Intel (and AMD APUs) you should follow AMD's rules, but also avoid staging buffers, because host-only memory lives in system RAM anyway (there's no discrete VRAM to copy into). The memory upload strategies are different.
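To illustrate the root-signature half of this, here's a hedged sketch contrasting the two placements of a per-draw constant buffer. The structs and enums are standard D3D12; DescribePerDrawCbv and useRootDescriptor are made-up names, and the NVIDIA/AMD labels are just the rules of thumb above (which may age badly):

```cpp
#include <d3d12.h>

// Describes a one-parameter root signature either with a root CBV
// (the "NVIDIA-ish" route) or via a descriptor table (the "AMD-ish"
// route). `param` and `range` must outlive the returned desc, since
// it points into them.
D3D12_ROOT_SIGNATURE_DESC DescribePerDrawCbv(bool useRootDescriptor,
                                             D3D12_ROOT_PARAMETER& param,
                                             D3D12_DESCRIPTOR_RANGE& range)
{
    param = {};
    range = {};

    if (useRootDescriptor)
    {
        // Root descriptor: no descriptor-heap indirection, cheap to
        // rebind every draw.
        param.ParameterType             = D3D12_ROOT_PARAMETER_TYPE_CBV;
        param.Descriptor.ShaderRegister = 0; // b0
        param.Descriptor.RegisterSpace  = 0;
    }
    else
    {
        // Descriptor table: one extra indirection, but keeps the root
        // signature small.
        range.RangeType          = D3D12_DESCRIPTOR_RANGE_TYPE_CBV;
        range.NumDescriptors     = 1;
        range.BaseShaderRegister = 0; // b0
        range.RegisterSpace      = 0;
        range.OffsetInDescriptorsFromTableStart = 0;

        param.ParameterType                       = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
        param.DescriptorTable.NumDescriptorRanges = 1;
        param.DescriptorTable.pDescriptorRanges   = &range;
    }
    param.ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 1;
    desc.pParameters   = &param;
    return desc;
}
```

Same shader, same data, two different binding layouts to build, test and profile, and that's before you touch the compute interleaving or the upload path differences.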
Have fun dealing with all three without messing up. Also keep up with new hardware: these recommendations may change in the future.