There's no good reason to keep data in memory that you're not using and may never use again (e.g., if a player never comes back).
Shh! Don't tell the Redis folks!
(FWIW: We run the biggest single Redis instance I know of, with 768 GB of RAM. This turned out to be a mistake, because the entire kernel locks up for 10 seconds each time it forks to checkpoint the data.)
"How would I go about implementing this?"
At that point, you're building the lowest-level component of a database (or, for that matter, a file system): indirect block allocation and management.
A very simple way to do that is to split your file into chunks, say 1 MB each, and have each chunk link to the next chunk when it gets full. To read all the data, you follow the chain of links and concatenate all the data.
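For concreteness, here's a minimal sketch of the read path for that scheme in C, assuming each 1 MB chunk reserves its last 8 bytes for the file offset of the next chunk, with 0 marking the end of the chain. The layout, names, and error handling are illustrative, not any particular system's format:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK_SIZE (1024u * 1024u)                 /* 1 MB chunks      */
    #define PAYLOAD    (CHUNK_SIZE - sizeof(uint64_t)) /* data bytes/chunk */

    /* Walk the chunk chain starting at 'offset', handing each chunk's
       payload to 'emit'. The last 8 bytes of a chunk hold the offset of
       the next chunk; 0 means end of chain. Real code would use
       fseeko()/off_t to seek past 2 GB. */
    static int read_chain(FILE *f, uint64_t offset,
                          void (*emit)(const char *buf, size_t len))
    {
        char *buf = malloc(CHUNK_SIZE);
        if (!buf)
            return -1;
        while (offset != 0) {
            if (fseek(f, (long)offset, SEEK_SET) != 0)
                break;
            if (fread(buf, 1, CHUNK_SIZE, f) != CHUNK_SIZE)
                break;
            emit(buf, PAYLOAD);                            /* data bytes */
            memcpy(&offset, buf + PAYLOAD, sizeof offset); /* next link  */
        }
        free(buf);
        return offset == 0 ? 0 : -1;
    }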
A slightly more sophisticated way is to make the first chunk an array of chunk offsets. Each time you need another 1 MB chunk, you add its offset to the table; when the table runs out of slots, you either say "file is full," apply the linked-list trick to the table chunks themselves, or add a second layer of indirection.
(Chunk size varies by application -- 1 MB may be way too big or not big enough, depending on what you're doing.)
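Here's a sketch of the write side of that table scheme, under the same illustrative assumptions: 1 MB chunks, 64-bit offsets, the table living in the file's first chunk (so offset 0 can safely mean "unused slot," since no data chunk can start there). A real implementation would keep the table cached in memory rather than re-scanning the file on every append:

    #include <stdint.h>
    #include <stdio.h>

    #define CHUNK_SIZE  (1024u * 1024u)
    #define TABLE_SLOTS (CHUNK_SIZE / sizeof(uint64_t))  /* 128K entries */

    /* Append one CHUNK_SIZE-byte data chunk: write it at the end of the
       file, then record its offset in the first free table slot. Assumes
       the file was created with a zeroed CHUNK_SIZE-byte table at offset
       0 and opened "r+b". Returns the slot index, or -1 when the table
       is full ("file is full" in the simple scheme). */
    static long append_chunk(FILE *f, const char *data)
    {
        uint64_t slot_off, chunk_off;
        for (long i = 0; i < (long)TABLE_SLOTS; i++) {
            fseek(f, i * (long)sizeof(uint64_t), SEEK_SET);
            if (fread(&slot_off, sizeof slot_off, 1, f) != 1)
                return -1;
            if (slot_off != 0)
                continue;                      /* slot already in use */
            fseek(f, 0, SEEK_END);
            chunk_off = (uint64_t)ftell(f);    /* new chunk lands here */
            if (fwrite(data, 1, CHUNK_SIZE, f) != CHUNK_SIZE)
                return -1;
            fseek(f, i * (long)sizeof(uint64_t), SEEK_SET);
            if (fwrite(&chunk_off, sizeof chunk_off, 1, f) != 1)
                return -1;
            return i;
        }
        return -1;  /* table full: chain tables or add another layer */
    }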
An even more sophisticated way of doing this is to structure your data in an ordered index -- at this point, you'll want to read up on B-trees, B*-trees, and other such structures, because you're well on your way to building your own database!
Simple math example:
Let's assume 1 MB chunks and 64-bit (8-byte) file offsets.
A 1 MB table chunk holds 1 MB / 8 bytes == 128K file-offset pointers. Each pointer references a 1 MB chunk of file data.
Maximum size of data stored in the file: 128K * 1 MB == 128 GB.
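That arithmetic, plus the lookup it implies, fits in a few lines of C. The constants match the example above; the logical position used is an arbitrary made-up value:

    #include <assert.h>
    #include <stdint.h>

    #define CHUNK_SIZE  (1024ull * 1024ull)              /* 1 MB          */
    #define TABLE_SLOTS (CHUNK_SIZE / sizeof(uint64_t))  /* 128K pointers */

    int main(void)
    {
        /* Capacity: 128K pointers * 1 MB per chunk == 128 GB. */
        uint64_t max_bytes = TABLE_SLOTS * CHUNK_SIZE;
        assert(max_bytes == 128ull << 30);   /* 137,438,953,472 bytes */

        /* Locating a logical position is one division and one modulo: */
        uint64_t pos  = 5 * CHUNK_SIZE + 42; /* arbitrary example */
        uint64_t slot = pos / CHUNK_SIZE;    /* table index: 5    */
        uint64_t rem  = pos % CHUNK_SIZE;    /* byte in chunk: 42 */
        assert(slot == 5 && rem == 42);
        return 0;
    }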