Agreed -- Lighting.
Tim Sweeney of Epic Games -- a graphics wiz every bit as good as Carmack, if not better -- roughly lumps the generations of rendering advancements by how many bounces of light they simulate. In raycasters like Doom, light "bounced" just once: off a surface of the world directly to a pixel on your screen, with nothing intervening. Light didn't come from anywhere in the world; it was just an ambient value -- ever-present, constant, radiating evenly in all directions. In the first generation of polygon games like Quake or Unreal, light "bounced" twice: the light affecting a particular surface had an origin in the world, and each origin could have its own properties, while an ambient factor continued to stand in for all the indirect bounces; pre-baked light maps helped color in the illusion of localized lighting and occlusion. For a long time after that, lighting advances came from increasing the number of lights in a scene, not the number of bounces -- even Doom 3 was more lights, not more bounces. Modern games simulate ~3 bounces -- IIRC, light bounces off one surface, picking up its properties, propagates to a nearby surface, and finally the sum of that indirect contribution and the localized, two-bounce direct lighting reaches your eye. The coming generation -- maybe today's bleeding edge -- should make a good run at subsurface scattering.
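To make the generational split concrete, here's a rough sketch in C of what each extra "bounce" adds when shading a single point. The Vec3 type, function names, and sample terms are mine for illustration, not any engine's actual code:

    /* Illustrative only -- each generation adds one more additive term. */
    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 scale(Vec3 v, float s) { return (Vec3){ v.x*s, v.y*s, v.z*s }; }
    static Vec3 add(Vec3 a, Vec3 b) { return (Vec3){ a.x+b.x, a.y+b.y, a.z+b.z }; }
    static Vec3 mul(Vec3 a, Vec3 b) { return (Vec3){ a.x*b.x, a.y*b.y, a.z*b.z }; }

    /* Raycaster era: no light sources, just a constant ambient value. */
    Vec3 shade_ambient(Vec3 albedo, Vec3 ambient) {
        return mul(albedo, ambient);
    }

    /* First polygon generation: one more "bounce" -- direct light from a
       source with its own position and color, plus the old ambient term
       still standing in for everything indirect. */
    Vec3 shade_direct(Vec3 albedo, Vec3 normal, Vec3 to_light,
                      Vec3 light_color, Vec3 ambient) {
        float ndotl = fmaxf(dot(normal, to_light), 0.0f);
        return add(mul(albedo, ambient),
                   mul(albedo, scale(light_color, ndotl)));
    }

    /* Modern generation: add one indirect bounce -- light that already hit a
       nearby surface, picked up its color, and arrives here from that
       direction. A single sample stands in for the full gather. */
    Vec3 shade_one_bounce(Vec3 albedo, Vec3 normal, Vec3 to_light,
                          Vec3 light_color, Vec3 ambient,
                          Vec3 bounce_dir, Vec3 bounce_radiance) {
        Vec3 direct = shade_direct(albedo, normal, to_light, light_color, ambient);
        float ndotb = fmaxf(dot(normal, bounce_dir), 0.0f);
        return add(direct, mul(albedo, scale(bounce_radiance, ndotb)));
    }

In a real engine the indirect term is gathered from lightmaps, probes, or ray tracing rather than a single sample, but the shape of it -- one more additive term per generation -- is the same.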
Adaptive animation is another of the current frontiers that build on realism. After that, probably AI/behavior that amounts to more than a simple choice between pre-canned responses, blended together at best -- something more natural than that will be needed to cross the uncanny valley once we reach realistic-looking humans in real time.
A parallel advancement has been physically-based lighting, which gives materials a logical consistency like you see in the real world. In the past, materials were often bespoke, with knobs that could be tuned to wildly different values just to achieve an appearance consistent with the scene as a whole.
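A sketch of what that difference looks like parameter-wise -- the struct and field names follow the common metallic/roughness convention, not any particular engine's materials:

    /* Illustrative only. Vec3 helpers repeated so this stands alone. */
    typedef struct { float x, y, z; } Vec3;
    static Vec3 scale(Vec3 v, float s) { return (Vec3){ v.x*s, v.y*s, v.z*s }; }
    static Vec3 add(Vec3 a, Vec3 b)    { return (Vec3){ a.x+b.x, a.y+b.y, a.z+b.z }; }

    /* Old-style material: independent knobs. Nothing stops ambient + diffuse
       + specular from reflecting more light than arrives, so artists tuned
       each material by eye to match the scene it sat in. */
    typedef struct {
        Vec3  ambient_color;
        Vec3  diffuse_color;
        Vec3  specular_color;
        float shininess;
    } LegacyMaterial;

    /* Physically-based material: fewer knobs, each with a physical meaning. */
    typedef struct {
        Vec3  base_color;
        float metallic;    /* 0 = dielectric, 1 = metal */
        float roughness;   /* 0 = mirror-smooth, 1 = fully rough */
    } PbrMaterial;

    /* Diffuse and specular reflectance are derived from base_color/metallic
       rather than set independently; 0.04 is the commonly assumed specular
       reflectance of dielectrics at normal incidence. */
    void pbr_reflectance(PbrMaterial m, Vec3 *diffuse, Vec3 *f0) {
        Vec3 dielectric_f0 = { 0.04f, 0.04f, 0.04f };
        *diffuse = scale(m.base_color, 1.0f - m.metallic);
        *f0 = add(scale(dielectric_f0, 1.0f - m.metallic),
                  scale(m.base_color, m.metallic));
    }

Because the derived reflectance can't return more energy than the surface receives, the same asset holds up when you move it between scenes or change the lighting, which is exactly what the old hand-tuned knobs couldn't guarantee.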