Let me try and explain this a bit better:
Say you have a set of game servers that are all exposed to the internet (port forwarded, just behind a firewall, in the DMZ, whatever). These servers obviously have to talk to the database, so clearly there's an open connection between them and the database, even if the database is behind another firewall, on a different network segment, or separated by any number of other network design decisions.
But let us take the example of a set of port forwards to the different game servers, with the database server on the same network as the game servers.
If all of the game servers are on the same network, then once one is compromised it becomes fairly trivial to move laterally to all the other machines *on that segment of the network*. I.e. if I compromise the server at port 6000, I have control over that machine, and I can now establish LOCAL connections from it to all of the other machines on that network segment, INCLUDING the database server.
The chances of you detecting and stopping such an intrusion in time to prevent damage, or the theft of data, are rather minuscule.
Once someone is ON your servers, no amount of encryption will help you. They can poke directly into memory to yank the keys, passwords, algorithms, or anything else they want out. They can decrypt your configuration files and database connection strings with minimal effort. Frankly, once they're on that segment of the network, that segment is compromised and NOTHING on it is guaranteed.
Now, if we change the network layout slightly, we improve security immensely. Specifically, if we move the database server off the local network and onto an internal LAN, then set up a firewall between that LAN and the DMZ the game servers occupy, we have introduced an additional barrier. No longer can I simply remote straight into the database server and, say, make a copy of the raw database file for later examination. I'm reliant on connecting via the game servers into the database and running queries. Those queries will only have the permissions of the user the game servers connect as, although those permissions are usually pretty wide, since the game servers have to write changes to the database. Nevertheless, this introduces an additional step and increases the chances of my detection significantly. Any serious queries I write will introduce a noticeable database load, and it becomes a lot harder for me to disguise my hacking as random crap getting thrown at the server. While this is good, it's not great: the chances of detection are still really low, and I can still run bulk queries and pull large amounts of data out of the system before you ever notice.
Now, if we add an intermediary between the client and the game servers, we can impose additional layers of security, and we get additional scalability options as well. Say we have a DMZ with a series of proxy servers on it, with those proxy servers doing nothing but translating client packets into game events, which are sent to the game servers. Then we can scale the networking side of things fairly easily (provided we have a decent backbone between the game servers and the proxies). Too much traffic for one proxy to handle? Toss up another and tell the login server about it.
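To make the "proxies only translate packets into game events" idea concrete, here's a minimal sketch. All the names here (`GameEvent`, the 2-byte type / 2-byte length wire format, the event table) are made up for illustration; the point is that raw client bytes get parsed and filtered at the proxy, and only typed events ever travel to the game servers.

```python
import struct
from dataclasses import dataclass
from typing import Optional

# Hypothetical wire format: 2-byte event type, 2-byte payload length, payload.
HEADER = struct.Struct("!HH")

@dataclass
class GameEvent:
    kind: int
    payload: bytes

# Event types this proxy knows how to translate; anything else is dropped.
KNOWN_EVENTS = {1: "move", 2: "chat", 3: "attack"}

def translate(raw: bytes) -> Optional[GameEvent]:
    """Turn a raw client packet into a typed game event, or reject it."""
    if len(raw) < HEADER.size:
        return None                       # truncated packet
    kind, length = HEADER.unpack_from(raw)
    if kind not in KNOWN_EVENTS:
        return None                       # unknown event type, never forwarded
    if length != len(raw) - HEADER.size:
        return None                       # declared length doesn't match reality
    return GameEvent(kind, raw[HEADER.size:])
```

Anything that returns `None` never reaches a game server, so malformed or oversized client traffic dies at the DMZ boundary.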
The game servers are then put behind the firewall with the database. The only way in or out of the local area network is via communication with the game servers. The database is not exposed to the proxy servers at all. Since the game servers only communicate with the proxy servers via game events (i.e. internal packets), the only way to attack the game servers, and thus gain direct access to the database, is to work out YET ANOTHER EXPLOIT and inject it into one of the game servers. In addition, since the game servers only respond to game events, I cannot issue bulk queries and obtain copies of your database WITHOUT hacking my way through that barrier.
Now, if your proxies are doing the validation and translation of packets, batching up zone events and sending them to the correct game server, etc., then your game servers should be doing validation of those events, such as running physics checks. This does not exempt the game servers from basic validation either, such as ensuring packet format and length are correct for the event type specified.
This significantly increases the chances of detection. Something as simple as attempting a buffer overflow on, say, a player position update request would likely be caught. Too many such requests might get that account banned, but if you kept seeing those requests in the admin log, you would probably open up your sniffer and see who's running a bot.
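The "too many such requests" logic above can be as simple as a per-account counter. This is a toy sketch (the threshold and class names are arbitrary), but it shows how rejected events naturally become an intrusion signal once everything funnels through the event layer:

```python
from collections import Counter

MALFORMED_LIMIT = 5   # illustrative threshold before an account is flagged

class IntrusionLog:
    """Count malformed events per account and flag repeat offenders."""

    def __init__(self):
        self.malformed = Counter()
        self.flagged = set()

    def record_malformed(self, account: str) -> None:
        self.malformed[account] += 1
        if self.malformed[account] >= MALFORMED_LIMIT:
            # Surface in the admin log / ban queue for a human to look at.
            self.flagged.add(account)
```

Every event the validators reject gets fed into `record_malformed`, so a bot hammering the server with garbage lights up in the admin log on its own.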
Now, it's not perfect. Someone can STILL get in, and you can still fail to notice them. But the barrier to entry has been significantly increased with relatively little cost. For any REAL MMO you're going to have to have multiple servers running a single shard ANYWAYS.
As a reference, here is the architecture of the EVE node system, the VPN links are firewalled (i.e. DMZ):
(This is from their GDC 2009 presentation)