This confused me a bit. I realize that obviously how much data you can send/receive affects how fast you can ACK and what you ACK, etc., but to me the window size is an artificial window over the sequenced packets you send, whereas you seem to be talking about the window size as a receive buffer here? Maybe I'm just confused, but it's not clear what you're trying to say, to me at least!
No, I'm just saying that usually you have a fixed-size buffer (say, 64K) where you push data in one end and transmit to the client from the other end (often done as a ring buffer), and if your buffer becomes full because your client hasn't acknowledged fast enough (you're trying to send faster than it can receive), then you need to deal with that case.
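A minimal sketch of what that fixed-size ring buffer might look like. The names and layout are my own invention, and the capacity is shrunk to 16 bytes just so the "buffer full" case is easy to see; in practice you'd use something like the 64K mentioned above:

```c
#include <stddef.h>

/* Hypothetical fixed-size send buffer; capacity is tiny for illustration. */
#define BUF_CAP 16

typedef struct {
    unsigned char data[BUF_CAP];
    size_t head;   /* next byte to transmit to the client */
    size_t tail;   /* next free slot for outgoing data */
    size_t used;   /* bytes currently sitting in the buffer */
} send_buf;

/* Push as much of `len` as fits; returns the number of bytes actually
   buffered, which may be less than `len` when the client is acking
   too slowly and the buffer has filled up. */
size_t send_buf_push(send_buf *b, const unsigned char *src, size_t len) {
    size_t space = BUF_CAP - b->used;
    size_t n = len < space ? len : space;   /* clamp to free space */
    for (size_t i = 0; i < n; i++) {
        b->data[b->tail] = src[i];
        b->tail = (b->tail + 1) % BUF_CAP;  /* wrap around the ring */
    }
    b->used += n;
    return n;
}
```

When `send_buf_push` returns less than `len`, that's exactly the case being described: the caller has to decide whether to queue the remainder elsewhere, throttle, or drop the client.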
For example, if you call send() with TCP/IP, the function returns the number of bytes actually added to the buffer, which may be less than the number of bytes you tried to send, because the buffer is running out of space.
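The usual way to cope with that partial-send behavior is to loop until everything is written. A sketch of that pattern (the `send_all` name is mine; a blocking socket is assumed, and real code would also handle EAGAIN/EWOULDBLOCK on non-blocking sockets):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <stddef.h>

/* Keep calling send() until all `len` bytes are written or a real
   error occurs. Returns 0 on success, -1 on failure. */
int send_all(int fd, const char *buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted by a signal: just retry */
            return -1;      /* genuine error */
        }
        sent += (size_t)n;  /* send() may have written fewer bytes */
    }
    return 0;
}
```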
Not everyone cares, though; it's nitpicking. Some just use dynamic allocation with geometric growth and shrinking to adapt to slow/lossy connections. Again, I would just kick them and not bother, but you may have 'spikes' at some points in the game where the buffer fills up in a short amount of time, so kicking a client because he hasn't acknowledged X bytes quickly enough can be a rather arbitrary rule.
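For what the geometric-growth alternative might look like: a dynamically grown send queue that doubles its capacity when full instead of kicking the client, so short spikes get absorbed. This is an illustrative sketch with made-up names, not anyone's actual implementation:

```c
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical growable send queue. */
typedef struct {
    unsigned char *data;
    size_t len;   /* bytes queued */
    size_t cap;   /* allocated capacity */
} dyn_buf;

/* Append `n` bytes, doubling capacity as needed (geometric growth).
   Returns 0 on success, -1 if allocation fails. */
int dyn_buf_push(dyn_buf *b, const unsigned char *src, size_t n) {
    if (b->len + n > b->cap) {
        size_t cap = b->cap ? b->cap : 64;      /* start small */
        while (cap < b->len + n)
            cap *= 2;                           /* double until it fits */
        unsigned char *p = realloc(b->data, cap);
        if (!p)
            return -1;
        b->data = p;
        b->cap = cap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}
```

Doubling keeps the amortized cost of appends constant; the shrinking half (freeing memory back after a spike passes) is left out here but works the same way in reverse.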