Each packet needs to be timestamped with the (global) tick it is intended for.
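A minimal sketch of what that could look like; the struct and field names are illustrative assumptions, not from the original:

```rust
/// Illustrative sketch (names are assumptions): every packet carries the
/// global tick it is intended for, so the receiver knows which simulation
/// step to apply it to.
struct InputPacket {
    /// Global simulation tick this packet targets.
    tick: u64,
    /// Serialized player commands for that tick.
    payload: Vec<u8>,
}

fn main() {
    let pkt = InputPacket { tick: 1234, payload: vec![7, 7] };
    println!("packet for tick {} ({} bytes)", pkt.tick, pkt.payload.len());
}
```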
The client needs to adapt to how late packets typically arrive from the server. Packets that don't arrive in time are assumed to be lost. If a packet arrives after it's already considered lost, bump up the estimate of the server latency by some amount (say, the amount the packet is late + 10 ms), but cap the bump allowed per packet to something reasonable like 50 ms.
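A sketch of that bump rule, with assumed names and fields; the 10 ms and 50 ms constants are the ones mentioned above:

```rust
use std::time::Duration;

/// Adaptive estimate of how late server packets arrive (sketch, not a full
/// implementation: there is no decay back down when packets arrive early).
struct LatencyEstimator {
    estimated_latency: Duration,
}

impl LatencyEstimator {
    /// Called when a packet arrives after it was already considered lost.
    /// `lateness` is how far past the deadline it arrived.
    fn on_late_packet(&mut self, lateness: Duration) {
        // Bump by (lateness + 10 ms), but never more than 50 ms per packet.
        let bump = (lateness + Duration::from_millis(10))
            .min(Duration::from_millis(50));
        self.estimated_latency += bump;
    }
}

fn main() {
    let mut est = LatencyEstimator {
        estimated_latency: Duration::from_millis(40),
    };
    est.on_late_packet(Duration::from_millis(25)); // bumps by 35 ms
    println!("new estimate: {:?}", est.estimated_latency); // 75 ms
}
```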
On the client, things will be displayed (and client prediction resolved) at time (estimated global tick time + estimated transmission latency). Meanwhile, client commands will be issued at tick (estimated global tick + estimated transmission latency in ticks). I.e., the server needs to tell the client how late (or early) its packets arrive so the client can update its estimate, and the client needs to measure how late (or early) server packets arrive. There are then two functions on the client: one that turns a client-based timestamp into an estimate of the global tick, and one that turns a server-based global tick into an estimated client timestamp (see the sketch below).
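A sketch of those two conversions, plus the forward offset for command ticks, assuming a fixed 60 Hz tick rate and a millisecond client clock; all names and constants are illustrative:

```rust
const TICK_DURATION_MS: f64 = 1000.0 / 60.0; // assumed 60 Hz simulation

/// Assumed bookkeeping: when (in client time) tick 0 is estimated to have
/// started, and the current estimate of server-to-client latency.
struct ClockSync {
    estimated_tick0_client_time_ms: f64,
    estimated_latency_ms: f64,
}

impl ClockSync {
    /// Client-based timestamp -> estimate of the current global tick.
    fn client_time_to_global_tick(&self, client_time_ms: f64) -> i64 {
        ((client_time_ms - self.estimated_tick0_client_time_ms) / TICK_DURATION_MS) as i64
    }

    /// Server-based global tick -> estimated client timestamp at which that
    /// tick's data arrives (tick start + estimated transmission latency).
    fn global_tick_to_client_time(&self, tick: i64) -> f64 {
        self.estimated_tick0_client_time_ms
            + tick as f64 * TICK_DURATION_MS
            + self.estimated_latency_ms
    }

    /// Tick to stamp outgoing commands with: the estimated global tick pushed
    /// forward by the transmission latency expressed in ticks.
    fn command_tick(&self, client_time_ms: f64) -> i64 {
        self.client_time_to_global_tick(client_time_ms)
            + (self.estimated_latency_ms / TICK_DURATION_MS).ceil() as i64
    }
}

fn main() {
    let sync = ClockSync {
        estimated_tick0_client_time_ms: 0.0,
        estimated_latency_ms: 80.0,
    };
    let now_ms = 5_000.0;
    let tick = sync.client_time_to_global_tick(now_ms);
    println!(
        "estimated global tick {}, displayed at client time {:.1} ms, commands issued for tick {}",
        tick,
        sync.global_tick_to_client_time(tick),
        sync.command_tick(now_ms),
    );
}
```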
After accounting for the RTT and storing client predictions for future ticks, it seems to work well!