Actually, the time to be measured is from the end of one present, through a poll, an update, a render, and the next present.
That's only the same as poll-to-photon time if you do nothing between present and polling.
So it's really the time from one present to the next that includes the results of the polling after the first present, and that's the same as input->photon only if you poll immediately after present.
Nah, that's too conservative to cover input->photon time.
Given two frames (which poll, update, draw, present), and four keyboard inputs A, B, C, and D:
|      Frame 1     |      Frame 2     |
|  A       B      C|D                 |
| Pol,Up,Drw,Prsnt | Pol,Up,Drw,Prsnt |
The user pressed C immediately before a Poll, so it spends essentially zero time in the hardware and OS input queues before being picked up by the poll. So C has no extra delay.
B spends about half a frame waiting in an OS queue.
A spends about a whole frame waiting in an OS queue.
D occurred only momentarily after C, but just missed the Poll, so it will have to wait around for a whole frame (like A did).
So input->photon latency must also include a variable term of between zero and one frame, i.e. from 0 to 1000/PollingHz milliseconds.
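To make that variable term concrete, here's a minimal sketch (hypothetical values; a fixed 60Hz poll rate is assumed) that simulates key presses arriving at random times and reports how long each one sits in a queue before the next poll picks it up:

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double frameMs = 1000.0 / 60.0;  // assumed 60Hz poll rate
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> pressAt(0.0, 1000.0);

    for (int i = 0; i < 8; ++i) {
        double t = pressAt(rng);  // a key press at a random time within one second
        // Polls happen at frame boundaries; the press waits for the next one.
        double polledAt = std::ceil(t / frameMs) * frameMs;
        std::printf("press at %7.2fms -> polled at %7.2fms (waited %5.2fms)\n",
                    t, polledAt, polledAt - t);
    }
    // Every wait lands in [0, frameMs): the 0..1 frame term described above.
    return 0;
}

Every simulated press waits somewhere between zero and one full frame, exactly like A, B, C and D above.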
If you just count present->present, you're also not including any GPU processing time whatsoever. The GPU and CPU do not run in close synchronization, and usually have at least one frame of latency between them: graphics drivers deliberately introduce a frame of buffering to ensure that no pipeline stalls can occur and throughput is maintained.
LCDs also buffer inputs for at least one frame.
So assuming a decent graphics driver (and rendering code), and a decent LCD, your timeline looks like:
+-------------------+-------------------+-------------------+
|    CPU Frame 0    |    CPU Frame 1    |    CPU Frame 2    |
| Pol,Up,Drw,Prsnt0 | Pol,Up,Drw,Prsnt1 | Pol,Up,Drw,Prsnt2 |
+-------------------+-------------------+-------------------+
|                   |    GPU Frame 0    |    GPU Frame 1    |
|                   |  Render, Prsnt0   |  Render, Prsnt1   |
+-------------------+-------------------+-------------------+
|                   |                   |    LCD Frame 0    |
|                   |                   |  Buffer, Prsnt0   |
+-------------------+-------------------+-------------------+
^^ Just to be clear, this is what the timeline of your game basically looks like right now ^^ three different processors handling each frame in a serial pipeline.
So just measuring the CPU's present->present time will give you a value that's potentially only a third of the real input->photon value.
When you add the effect of input polling causing events to linger in a buffer, your actual input->photon latency is between 3x and 4x the numbers you're calculating.
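If you want to see that CPU/GPU gap directly, one way is to drop a fence right after present. This is a sketch only: it assumes an OpenGL 3.2+ context is current with a loader initialized, and MeasureGpuLagMs is just a name made up for illustration.

#include <chrono>
// ...plus your GL loader header (e.g. glad) providing the standard sync API.

// Call right after your SwapBuffers/Present call. Returns how far the GPU
// was trailing the CPU, in milliseconds, at the moment you presented.
double MeasureGpuLagMs() {
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    auto cpuDone = std::chrono::steady_clock::now();

    // Block until the GPU has actually executed everything up to the fence
    // (wait up to one second, expressed in nanoseconds).
    glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000ull);
    auto gpuDone = std::chrono::steady_clock::now();
    glDeleteSync(fence);

    return std::chrono::duration<double, std::milli>(gpuDone - cpuDone).count();
}

Note that blocking on the fence every frame destroys the very pipelining you're trying to observe, so treat it as an occasional measurement probe, not production code.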
The exception to this "at least three frames" rule is when the CPU/GPU/LCD update rates are all very different.
e.g. if your CPU framerate is 15Hz, GPU framerate is 30Hz, and LCD framerate is 60Hz, then you get:
Max time an event can linger in a queue before being picked up by a Poll: 15Hz / up to 66.7ms
CPU present->present time: 15Hz / 66.7ms
GPU present->present time: 30Hz / 33.3ms
LCD buffering time: 60Hz / 16.7ms
Total: from 116.7ms to 183.3ms, or from 1.75x to 2.75x the CPU frame time (instead of the 3x to 4x of the general rule of thumb).
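The same arithmetic as a throwaway sketch (the rates are the example values above, and the assumptions are the ones already stated: one frame each on CPU, GPU and LCD, plus 0..1 CPU frame of polling jitter):

#include <cstdio>

int main() {
    const double cpuMs = 1000.0 / 15.0;  // CPU frame time
    const double gpuMs = 1000.0 / 30.0;  // GPU frame time
    const double lcdMs = 1000.0 / 60.0;  // LCD frame time

    const double pipelineMs = cpuMs + gpuMs + lcdMs;  // fixed serial pipeline cost
    const double minMs = pipelineMs;                  // pressed right before a poll
    const double maxMs = pipelineMs + cpuMs;          // just missed a poll

    std::printf("input->photon: %.1f to %.1f ms (%.2fx to %.2fx the CPU frame)\n",
                minMs, maxMs, minMs / cpuMs, maxMs / cpuMs);
    // Prints: input->photon: 116.7 to 183.3 ms (1.75x to 2.75x the CPU frame)
    return 0;
}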
You shouldn't just calculate these values and trust the theory, though; get a 240Hz camera, film your keyboard+screen while you strike a key, and count the 240Hz frames that tick by between your finger first touching the keyboard and the LCD showing a response.
On a regular 60Hz game, it should be at least 11 frames in the 240Hz footage (somewhere around 50ms).
On a 15Hz game like the example above, it should be somewhere between 28 and 44 frames in the 240Hz footage (the 116.7ms to 183.3ms range, typically around 35 frames / 150ms).
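Reading the footage back is just frames * 1000 / 240; a tiny sketch, with the camera rate as the only assumption:

#include <cstdio>

int main() {
    const double cameraHz = 240.0;
    // Camera frames counted between first finger contact and first pixel change:
    const int framesCounted = 12;  // example: ~50ms, a healthy 60Hz game
    std::printf("measured latency: ~%.1f ms\n", framesCounted * 1000.0 / cameraHz);
    return 0;
}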
If you fix the Caveman download links, I can do some empirical tests with a 240Hz camera for you.
Humans do not act "at 5Hz".
I can push a button for less than 10 ms and routinely demonstrate how HMIs cannot handle button presses that quick (and we show on an oscilloscope that the button was indeed pressed for 7~12 ms).
Again, 5Hz only got brought up as the rate of high-level cognition / conscious experience, and it's also approximately the human conscious reaction rate. If you had no idea how far away the button was, forcing you to actually think about whether you've touched it yet and should now release (instead of using muscle memory), you'd end up with much longer press/hold times due to that large thinking/reaction delay.
This is off topic, but striking a button can be controlled by a conscious decision-making process at 1Hz for all it matters, and still achieve a 10ms contact time, as long as you don't have to think too hard about the process itself once it's begun :P