Although Hodgman's advice is likely the better approach, I would point out that there _is_ an advantage to using higher-level primitives in your math: it's clearer to other developers (including future you) what your intent was.
I'd also suspect a good optimizer in fast-math mode to apply the following transformations:
lerp(a, b, t) -> a*(1 - t) + b*t
a += lerp(0, b, t)
a = a + lerp(0, 1, t) // operator expansion
a = a + 0*(1 - t) + b*t // inlining
a = a + 0 + b*t // constant expression resolution
a = a + b*t // identity transformation folding
a = b*t + a // reordering add-multiply to multiply-add
a = fma(b, t, a) // fused-multiply-add
... // instruction selection for target machine
Sure enough, with GCC 5.1 and Clang 3.2 we see that the optimization does indeed happen (link below) when using -O3 -mavx2 -mfma -ffast-math (I didn't play around with the settings much, so I don't know if you need all of that). GCC 5.1 and Clang 3.8 optimize perfectly, while Clang 3.2 - 3.7 select a poorer FMA instruction (they literally translate the add-multiply rather than transforming it into a multiply-add), which requires an extra mov instruction to compensate (Hodgman's suggested simplification _also_ does this, though, since it's still an add-multiply).
Clang 3.0 does not do the optimization, I don't see an option to test 3.1, and I didn't bother with any GCC older than 5.1. The online MSVC compiler doesn't let me set the target architecture or see assembly output, and I'm too lazy to compile locally to test right now, so I'm unsure how well it does on this test (but I've so far been _extremely_ happy with the quality of optimizations in MSVC 2013+). ICC 13 surprisingly never emits an FMA instruction in my testing, but it does optimize down to just two instructions (an add and a multiply, unsurprisingly).
gcc.godbolt.org test

Compilers are neat.
That said, debug performance _does_ matter in games, at least at the higher end of development, so there's an argument to be made that your code should be as fast as possible even with optimizations off. The selection of trade-offs between optimal-debug and optimal-clarity is a constant battle in game code engineering, unfortunately. :)