I don't really want to measure it all out, as that would require testing with multiple connected clients, plus doing a bunch of stuff to them to see how much it takes to max out and get laggy. I was just looking for a generalised answer for a starting point I can work from.
You can do it in theory instead of testing in practice.
e.g. let's say you've got a client-server game where:
* a bone is represented with a quaternion rotation and a 3D vector position, which is 7 floats (28 bytes)
* a character has 64 bones (1792 bytes)
* a character state packet has one 32-bit character ID and 64 bone states (1796 bytes)
* there are 32 characters (57472 bytes per snapshot)
* the physics is updated at 30Hz (1724160 bytes/second == 1.6 MiB/s download per client)
* the server is sending this to 32 clients (55173120 B/s == 52.6 MiB/s upload at the server)
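Those figures are easy to sanity-check with a quick back-of-the-envelope script (plain Python, numbers taken straight from the bullets above):

```python
# Back-of-the-envelope bandwidth estimate for the example above.
FLOAT_BYTES = 4
BONE_BYTES = 7 * FLOAT_BYTES              # quaternion (4 floats) + position (3 floats) = 28
BONES_PER_CHARACTER = 64
CHARACTER_BYTES = 4 + BONES_PER_CHARACTER * BONE_BYTES  # 32-bit ID + bone states = 1796
CHARACTERS = 32
SNAPSHOT_BYTES = CHARACTERS * CHARACTER_BYTES           # 57472 bytes per tick
TICK_RATE = 30                            # physics updates per second
CLIENTS = 32

per_client = SNAPSHOT_BYTES * TICK_RATE   # bytes/second download per client
server_upload = per_client * CLIENTS      # bytes/second upload at the server

print(per_client, per_client / 2**20)         # 1724160 B/s ~= 1.6 MiB/s
print(server_upload, server_upload / 2**20)   # 55173120 B/s ~= 52.6 MiB/s
```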
You can tweak those numbers until they fall within typical DSL speeds (maybe 100 KiB/s download per client, or 3-10 KiB/s if you want to be friendly).
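To illustrate that tweaking, here's a small sketch that asks what update rate fits a given per-client budget. The 12-byte quantised bone is a hypothetical figure (e.g. compressed rotation plus 16-bit position components), not something from the example above:

```python
def max_tick_rate(budget_bytes_per_sec, characters, bytes_per_character):
    """Highest whole update rate (Hz) that fits the per-client download budget."""
    snapshot = characters * bytes_per_character
    return budget_bytes_per_sec // snapshot

FULL_CHARACTER = 4 + 64 * 28       # full-float bones, as above: 1796 bytes
QUANTISED_CHARACTER = 4 + 64 * 12  # hypothetical 12-byte quantised bones: 772 bytes
BUDGET = 100 * 1024                # 100 KiB/s download budget per client

print(max_tick_rate(BUDGET, 32, FULL_CHARACTER))       # 1 Hz -- way too slow
print(max_tick_rate(BUDGET, 32, QUANTISED_CHARACTER))  # 4 Hz -- better, still needs interpolation/delta tricks
```

Even with aggressive quantisation the full-state approach struggles, which is why real games also send deltas, prioritise nearby objects, or (per L.Spiro's point) don't send bone state at all.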
Also, your choice of networking architecture makes a big difference. The above is based on the typical "FPS model" of networking, where an authoritative server updates the state of all clients (and it also assumes the state of the bones needs to be synced explicitly, which L.Spiro is trying to tell you usually isn't the case). If you were instead using the typical "RTS model" of peer-to-peer lock-stepping, things would be completely different: you could have unlimited moving bones at a fixed cost, because the state of the objects is never sent over the wire, only the inputs to the sim...
What kind of game are you making?