I purchase, deploy and manage network and server equipment for VoIP services out of many different tier 3 and tier 4 data centers across the US, UK and Australia.
The smart answer is: don't buy hardware until you have to. Ramp up using IaaS hosts so you can run on OpEx instead of CapEx. This gives you time to raise the capital necessary to get into the data center space. Only put stuff into your own data center if you can make it reliable, and define what "reliable" means to you and your end users before even thinking about data center space.
First - data center space is very expensive. Pricing varies immensely by contract, but it all boils down to power and cooling. The denser you get per rack, the more cooling your equipment needs. Depending on your architecture and the layout of the facility, they will most likely hold you to a power-to-cooling ratio per square foot. For example, if you start running redundant 3-phase 60-amp 208 V circuits per rack to power a bunch of blade chassis, you can easily end up needing thousands of square feet of floor space just to accommodate the cooling. I will use the term "space" to include power and cooling from here on. This is your biggest cost.
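To see why a dense rack drags in so much cooling, here is a rough sketch of the arithmetic behind that example. The 80% continuous-load derate and the watts-to-BTU and BTU-to-tons conversions are standard electrical/HVAC figures; the specific circuit is the one named above.

```python
import math

def circuit_kw(volts, amps, phases=3, derate=0.8):
    """Usable continuous power on a circuit, in kW.
    Common practice (per NEC-style rules) limits continuous load
    to 80% of the breaker rating, hence the derate."""
    if phases == 3:
        return math.sqrt(3) * volts * amps * derate / 1000
    return volts * amps * derate / 1000

# The circuit from the text: 3-phase, 60 A, 208 V
kw = circuit_kw(208, 60)           # ~17.3 kW usable per circuit
btu_per_hr = kw * 3412             # nearly every watt of IT load becomes heat
cooling_tons = btu_per_hr / 12000  # 1 ton of cooling = 12,000 BTU/hr

print(f"{kw:.1f} kW -> {cooling_tons:.1f} tons of cooling per circuit")
```

So a single such circuit, fully loaded, demands roughly five tons of cooling, and with redundant circuits per rack across a row of racks, the facility's power-to-cooling ratio quickly dictates how much raw floor space you must lease around your footprint.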
Second - do you need a carrier hotel, or are you okay with 1 or 2 major carriers? This translates into cost for cross connects and DIA circuits, and most of all reachability. In most cases, with that many concurrent players, you will want to peer with multiple carriers like Level3, Verizon, and Comcast (and other residential cable/broadband providers) so you can reduce the number of hops it takes for the majority of your players to reach you, as well as provide contingency against major fiber cuts that force providers to route over long distances, drastically increasing latency (Ohio, I am looking in your direction...). This is your second biggest cost.
The monthly recurring costs of a data center, even with only 2 racks, can easily exceed $100k depending on hardware choices, network requirements, and space (remember, power and cooling have a defined ratio with square footage). If you want to narrow in on a more accurate number, it really comes down to specifying hardware (blade chassis and blades, standalone servers, etc.) and how many units you intend to put in a rack. You also need to decide whether each rack will be deployed as a standalone entity with top-of-rack switching, or whether you will deploy centralized networking with patch panels. It is usually a combination of both, but this will play a major role in your ability to automate and orchestrate via smart hands, which either way will add further overhead costs.
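As a way to structure that estimate, you can itemize the recurring pieces and amortize the hardware. Every dollar figure below is a hypothetical placeholder for illustration only; substitute quotes from your own facility, carriers, and vendors.

```python
# Back-of-envelope monthly recurring cost for a small colo footprint.
# All figures are made-up placeholders, not market rates.
monthly_costs = {
    "rack_space_and_power":  2 * 3_500,      # 2 racks: space + committed power
    "cross_connects":        4 * 300,        # per carrier connection
    "dia_circuits":          2 * 2_000,      # DIA/transit, 2 carriers
    "remote_hands":          1_500,          # smart-hands retainer
    "hardware_amortization": 250_000 / 36,   # CapEx spread over 36 months
}

total = sum(monthly_costs.values())
print(f"estimated monthly recurring: ${total:,.0f}")
```

The point of the exercise is less the total than seeing which line item dominates: once you plug in real numbers, space/power and hardware amortization usually dwarf the network line items, which tells you where to negotiate.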
Again, I cannot stress this enough - don't get into the data center game unless you absolutely have to. It is rare these days to have an application that cannot work within the offerings/confines of an IaaS provider. There is a tipping point at which IaaS is more costly (in operating margin) than running your own data center, but by that time your recurring revenue should pay for that transition, or you have done something very, very wrong.
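That tipping point can be sketched as a simple break-even: cumulative IaaS spend versus up-front CapEx plus a lower colo OpEx. The function and all inputs below are hypothetical illustrations, not real pricing.

```python
def months_to_breakeven(iaas_monthly, colo_monthly, capex, max_months=120):
    """First month where cumulative colo cost (CapEx up front plus
    lower monthly OpEx) drops below cumulative IaaS cost.
    Returns None if it never happens within max_months."""
    for m in range(1, max_months + 1):
        if capex + colo_monthly * m < iaas_monthly * m:
            return m
    return None

# Hypothetical: $40k/mo on IaaS vs $15k/mo colo after $300k of hardware
print(months_to_breakeven(40_000, 15_000, 300_000))  # breaks even in month 13
```

If the break-even lands a year or more out, as here, you have that long to let recurring revenue fund the move, which is the "by that time" the advice above is pointing at.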