My question regards this, though: Assume that my round trip time is 100ms, but of this 100ms about 75ms is consumed going from the client to the server (which is logical, as most home connections have slower/crappier upload than download).
What you care about is ordering all events in a strictly increasing sequence, not the specifics of syncing with the server. It doesn't matter how the latency is distributed.
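As a minimal sketch of what I mean (the names and structure here are just illustrative, not from any particular engine): let the server stamp every event it accepts with a strictly increasing sequence number, and have everyone apply events in that order.

```python
import itertools

class EventLog:
    """Server-side log that gives every accepted event a strictly increasing sequence number."""
    def __init__(self):
        self._next_seq = itertools.count(1)   # 1, 2, 3, ... never repeats, never goes backwards
        self.events = []                      # list of (seq, payload), already in order

    def accept(self, payload):
        seq = next(self._next_seq)
        self.events.append((seq, payload))
        return seq                            # echoed to clients so they all apply the same order

log = EventLog()
log.accept("player A fires")    # seq 1
log.accept("player B dodges")   # seq 2 -- order is fixed regardless of who had more latency
```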
Also, your assumption that upload bandwidth matters for client latency isn't really true. Let's do some math, assuming a cable connection with really good downstream and really constrained upstream:
User download == 1 MB / sec
User upload == 20 kB / sec
Client command packet to server == 300 bytes
Server update packet to client == 3000 bytes
So, one packet takes 300 bytes / 20 kB/sec == 15 milliseconds to transmit up, for the first hop (from the client). Note that, if the command packet is smaller, this number changes significantly!
One packet down takes 3000 bytes / 1 MB/sec == 3 milliseconds to transmit down, for the last hop (to the client).
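Just to make the arithmetic explicit (these sizes and rates are the hypothetical numbers above, not measurements):

```python
upload_rate   = 20_000      # bytes/sec  (20 kB/sec upstream)
download_rate = 1_000_000   # bytes/sec  (1 MB/sec downstream)
command_size  = 300         # bytes, client -> server
update_size   = 3_000       # bytes, server -> client

up_ms   = command_size / upload_rate   * 1000   # 15.0 ms to push the command upstream
down_ms = update_size  / download_rate * 1000   #  3.0 ms to receive the update downstream
print(up_ms, down_ms)                            # 15.0 3.0
```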
However, as soon as you're outside of the client connection, you're on routed internet infrastructure, where upload and download throughputs are usually symmetric, and always aggregated across many consumers, and thus a lot faster. At that point, it's the signal propagation speed in copper (about 2/3 the speed of light) and the routing latency that matter, not the client bandwidth limitations.
Thus, any amount of your latency greater than (15+3) == 18 milliseconds in this case will be evenly divided between "up" and "back," so the maximum you'll be off in your estimate would be (15-3) == 12 milliseconds.
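In other words (again using the hypothetical numbers above), the asymmetry is bounded by the last-mile serialization times:

```python
rtt_ms            = 100.0   # measured round trip
up_serialize_ms   = 15.0    # last-mile upstream serialization (from above)
down_serialize_ms = 3.0     # last-mile downstream serialization (from above)

# Everything beyond the first/last hop is assumed symmetric.
symmetric_ms = rtt_ms - (up_serialize_ms + down_serialize_ms)   # 82.0 ms, split evenly
up_leg   = up_serialize_ms   + symmetric_ms / 2                 # 56.0 ms
down_leg = down_serialize_ms + symmetric_ms / 2                 # 44.0 ms
print(up_leg - down_leg)    # 12.0 ms -- the most the "up" and "back" legs can differ
```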
But, as I said, as long as you have a strict ordering of events, all of that doesn't matter much :-)