In the example you give, the generated work will be identical in most languages and most scenarios. That is not always the case.
In your specific example there are some extremely basic optimizations that almost every compiler will make. The first is to eliminate the temporary; the second is to unroll the loop. With both applied, the code trivially optimizes to this:
coolObjectList[0].doWork();
coolObjectList[1].doWork();
coolObjectList[2].doWork();
coolObjectList[3].doWork();
coolObjectList[4].doWork();
In other cases the compiler cannot be certain the optimizations are safe, so it cannot apply them.
Removing temporary objects is a common task for compilers, but various things can prevent it. If the assignment had to go through a custom assignment operator with observable side effects, for example, the compiler would have to preserve every copy, and then writing the code differently could make a real difference.
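As a sketch of that situation (the type and names here are hypothetical, not from your code): once the assignment operator does something observable, such as counting copies, the compiler can no longer delete the temporary without changing the program's behaviour.

```cpp
#include <cassert>

// Hypothetical type whose assignment operator has an observable side
// effect (a copy counter). The compiler must preserve each assignment,
// so the per-iteration temporary cannot simply be optimized away.
struct CoolObject {
    int workDone = 0;
    static inline int copyCount = 0;   // C++17 inline static member

    CoolObject() = default;
    CoolObject& operator=(const CoolObject& other) {
        workDone = other.workDone;
        ++copyCount;                   // side effect: must be preserved
        return *this;
    }
    void doWork() { ++workDone; }
};

int runLoop(CoolObject (&list)[5]) {
    for (int i = 0; i < 5; ++i) {
        CoolObject temp;               // temporary created each iteration
        temp = list[i];                // goes through operator=
        temp.doWork();                 // works on the copy, not list[i]
    }
    return CoolObject::copyCount;
}
```

Note that because the work is done on the copy, the objects in the list are never modified, which may or may not be what the original author intended.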
Hoisting constant values out of loops, hoisting object creation out of loops, hoisting memory allocation and other expensive operations out of loops: these are very common changes to make in a code base, doing something once rather than doing it many times. In this simple case the compiler can do it for you; in more complex cases it may not.
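A minimal sketch of hoisting by hand, with made-up function names: the scale factor below is loop-invariant, so it can be computed once before the loop instead of once per element. Optimizers often perform this loop-invariant code motion themselves, but doing it explicitly guarantees it happens.

```cpp
#include <vector>
#include <cmath>

// Naive version: the loop-invariant sqrt(base) is written inside the
// loop, so unless the compiler hoists it, it is recomputed per element.
double sumScaledNaive(const std::vector<double>& v, double base) {
    double total = 0.0;
    for (double x : v)
        total += x * std::sqrt(base);  // recomputed every iteration
    return total;
}

// Hoisted version: the invariant is computed exactly once.
double sumScaledHoisted(const std::vector<double>& v, double base) {
    const double scale = std::sqrt(base);  // done once, outside the loop
    double total = 0.0;
    for (double x : v)
        total += x * scale;
    return total;
}
```

Both functions return the same result; the second simply does the expensive work once.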
Another point is that you looped a fixed number of times. Each loop iteration carries a tiny overhead, on the order of a fraction of a nanosecond, but do that thousands of times every frame for millions of frames and the time accumulates. If you looped a variable number of times, the compiler might not be able to unroll the loop.
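To illustrate (function names are mine, for the sketch): with a fixed trip count the compiler can effectively emit the unrolled version, eliminating the per-iteration counter and branch. With a count known only at runtime it generally cannot fully unroll.

```cpp
// Trip count known only at runtime: the compiler must keep the loop
// (or at best partially unroll it), paying a compare-and-branch per step.
int workLooped(int* counters, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += ++counters[i];
    return total;
}

// What full unrolling effectively produces for a fixed trip count of 5:
// straight-line code with no loop counter and no branches.
int workUnrolled(int* counters) {
    int total = 0;
    total += ++counters[0];
    total += ++counters[1];
    total += ++counters[2];
    total += ++counters[3];
    total += ++counters[4];
    return total;
}
```

The two produce identical results for five elements; the unrolled form just skips the loop bookkeeping.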
Mainstream compilers are generally quite aggressive with optimizations, and they recognize an enormous number of software patterns they can quietly improve. If you can see that your code should do the work once rather than doing it in a loop, then by all means make the change. Otherwise you can generally trust the optimizer to do its job behind the scenes.