I'm actually surprised that, to my knowledge, no one has used this principle for upscaling video resolution.
It's used in a lot of video games. Every frame, you shift the screen projection by a fraction of a pixel, which results in edges maybe shifting by a pixel, or not. NVIDIA uses the TXAA buzzword to refer to this technique combined with MSAA and temporal reprojection.
It's also an old-school trick people used to render "magazine quality" screenshots from game engines. You'd take thousands of sub-pixel-shifted screenshots and then average them, which gives you a screenshot you can claim is "in engine" but that looks pre-rendered.
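The averaging trick described above can be sketched in a few lines. This is a toy illustration, not any engine's actual pipeline: `render` is a hypothetical stand-in for a game engine that point-samples a hard edge once per pixel, so a single frame is fully aliased, while the average of many jittered frames converges to the edge's fractional pixel coverage.

```python
import numpy as np

W = 8  # tiny 1-D "screen", enough to show the effect

def render(offset):
    """Aliased render: sample a step edge at x = 3.25 exactly once
    at each pixel's (jittered) center -- no filtering at all."""
    x = np.arange(W) + 0.5 + offset      # jittered sample positions
    return (x > 3.25).astype(float)      # hard edge: every pixel is 0 or 1

rng = np.random.default_rng(1)

# "Thousands of sub-pixel shifted screenshots", then average them.
shots = [render(rng.uniform(-0.5, 0.5)) for _ in range(4000)]
avg = np.mean(shots, axis=0)

# A single shot snaps the edge to a pixel boundary; in the average,
# the edge pixel settles near 0.75 -- the fraction of that pixel
# actually covered by the bright side of the edge.
```

Each averaged pixel ends up estimating the integral of the scene over its footprint, which is exactly what a high-quality box-filtered ("pre-rendered looking") image contains.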
Yeah, that's pretty close to what I'm referring to, but I was imagining using that principle to actually increase the number of pixels (rather than just increasing the number of samples averaged into each pixel) by reconstructing the sub-pixel shift after the fact, from the video data alone.
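What this comment describes is essentially multi-frame super-resolution. A minimal shift-and-add sketch, under simplifying assumptions (the sub-pixel shifts are known rather than estimated, the shifts exactly tile a 2x grid, and edges wrap around): each low-resolution frame's samples are placed back at the fine-grid positions they were measured at, so four half-pixel-jittered frames yield one measurement per fine pixel instead of one per coarse pixel.

```python
import numpy as np

N = 64                                   # fine ("true") resolution
rng = np.random.default_rng(0)
hi = rng.random((N, N))                  # stand-in for the real scene

def capture(scene, dx, dy):
    """One low-res frame: shift the scene by (dx, dy) fine pixels
    (i.e. half a low-res pixel), then 2x2 box-average down."""
    s = np.roll(np.roll(scene, dy, axis=0), dx, axis=1)
    return s.reshape(N // 2, 2, N // 2, 2).mean(axis=(1, 3))

# Four captures jittered by half a low-res pixel in each axis.
offsets = [(0, 0), (1, 0), (0, 1), (1, 1)]
frames = [capture(hi, dx, dy) for dx, dy in offsets]

# Shift-and-add: put every low-res sample back at the fine-grid
# position it was actually measured at. The four offsets tile the
# 2x grid, so every fine pixel receives its own measurement.
recon = np.zeros_like(hi)
for (dx, dy), frame in zip(offsets, frames):
    rows = (2 * np.arange(N // 2) - dy) % N
    cols = (2 * np.arange(N // 2) - dx) % N
    recon[np.ix_(rows, cols)] = frame

# Baseline: nearest-neighbour upscale of one frame just repeats each
# low-res pixel across a 2x2 block -- no new information.
naive = np.kron(frames[0], np.ones((2, 2)))
```

Here `recon` carries genuinely distinct values at every fine pixel (a full-resolution box-blurred view of the scene), while `naive` is constant on 2x2 blocks. The hard part in the video setting, which this sketch skips, is estimating those sub-pixel shifts from the frames themselves, e.g. with sub-pixel-accurate motion estimation.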