There are several things you need to consider.
If your goal is mainly to create a file of this length, then mapping the view is not necessary at all. Creating the file mapping object (assuming proper access rights and protection mode) already extends the file to the desired length.
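As an illustration, here is a minimal sketch of that approach (Windows API; error handling omitted, and the file name is just a placeholder):

```c
#include <windows.h>

int main(void)
{
    /* 16 GiB, split into the high/low DWORDs that CreateFileMapping expects. */
    ULONGLONG size = 16ULL * 1024 * 1024 * 1024;

    HANDLE file = CreateFileW(L"huge.bin", GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    /* Creating the mapping object alone already extends the file to `size`
       bytes; no MapViewOfFile call is needed for that. */
    HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READWRITE,
                                        (DWORD)(size >> 32), (DWORD)size, NULL);

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```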
Mapping the view is another story. As you've pointed out, your computer should be able to address 16 exabytes. Yes, but only if your program is compiled as a 64-bit binary (on a 64-bit computer running a 64-bit operating system). You didn't specify what compiler you're using, but if -- like most people -- you use a 32-bit compiler, mapping such a large memory area will simply fail, even if your OS is 64-bit.
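To make that concrete: in a 32-bit process, SIZE_T (the type of MapViewOfFile's dwNumberOfBytesToMap parameter) is only 32 bits wide, so a 16 GiB view cannot even be expressed. A trivial compile-time guard, purely as an illustration:

```c
/* _WIN64 is predefined by the compiler for 64-bit Windows targets. */
#if !defined(_WIN64)
#error "Build this as a 64-bit binary: a 32-bit process cannot map a 16 GiB view."
#endif
```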
Note that creating a mapping also consumes considerable amounts of memory and is non-negligible work, so you wouldn't want to actually map such a huge amount of memory unless it is really necessary. 16 GiB of memory corresponds to somewhat over 4 million page table entries, which consume 64 MiB of physical memory.
Mapping 16 GiB when your address space allows for it but you do not have the physical RAM (you'll probably need upwards of 20 GiB) is yet another story, and of course there are the working-set size limits.
The maximum working set is limited to a ridiculously small size by default, unless you change it, which means that pages will be added to and removed from your working set all the time with such a huge dataset. That means literally millions of page faults (although they are "soft" faults as long as the OS does not run low on zero pages).
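If you really do need to keep that much mapped memory resident, the working-set limits can be raised explicitly. A rough sketch; the sizes below are arbitrary examples, and the call may fail if quotas or privileges do not permit them:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Arbitrary example values: request a 1 GiB minimum / 4 GiB maximum
       working set instead of the tiny defaults (assumes a 64-bit build). */
    SIZE_T minWs = (SIZE_T)1 * 1024 * 1024 * 1024;
    SIZE_T maxWs = (SIZE_T)4 * 1024 * 1024 * 1024;

    if (!SetProcessWorkingSetSize(GetCurrentProcess(), minWs, maxWs))
        printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());

    return 0;
}
```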
Lastly, the number of zeroed memory pages is not unlimited. Normally you never notice this, because the idle task constantly zeroes unused pages and you normally don't consume that many, so there are always spare ones. However, asking for 16 GiB in one go (and touching it, too!) may make you feel it. The OS is required to zero everything, both the file on the disk and the pages that are mapped into your address space. Since both are "opaque" to your application, the OS cheats as much as it can to hide the overhead. For example, it will allocate a file without initializing it and merely "remember" that anything read from those sectors is zero. The same goes for the pages in your address space: it merely "remembers" that these are new pages you have never accessed.
However, eventually the OS has to write all those zero sectors to disk. Also, eventually it has to provide a physically present zeroed memory page (at the latest the moment you try to access one -- but possibly sooner). At some point the OS cannot cheat any more and has to do the actual work. This is when you start to feel it.
Unfortunately, Windows is not particularly intelligent when it comes to dirty-page writeback either (though I don't know whether other operating systems are much smarter). I experienced this when I wrote a quick-and-dirty free-sector eraser for my wife's computer. She got a computer upgrade with the requirement to turn the old machine in, but with a still intact Windows installation, so a "kind of security wipe" was in order: first delete the documents folder using Explorer, then overwrite the now-free sectors on the disk 5-6 times simply by writing huge files until the disk was full, thus overwriting all available disk space with random values.
My little tool would map a 1 GiB file, fill the memory with random data, close the mapping, and create/map the next 1 GiB file. The idea was that memory mapping was probably the most efficient way of writing huge amounts of "data" to the disk, since pages that you unmap become "free" as soon as they are no longer "dirty".
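Reconstructed from memory, the loop looked roughly like this (not the original code; file names, the random fill, and error handling are placeholders):

```c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (1ULL * 1024 * 1024 * 1024)    /* 1 GiB per file */

static void fill_random(unsigned char *p, SIZE_T n)
{
    /* Placeholder: any cheap pseudo-random fill will do for a wipe. */
    for (SIZE_T i = 0; i < n; ++i)
        p[i] = (unsigned char)rand();
}

int main(void)
{
    for (int i = 0; ; ++i)                   /* run until the disk is full */
    {
        wchar_t name[64];
        swprintf(name, 64, L"wipe_%04d.bin", i);

        HANDLE file = CreateFileW(name, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

        HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READWRITE,
                                            (DWORD)(CHUNK >> 32), (DWORD)CHUNK, NULL);
        if (mapping == NULL)                 /* most likely the disk is full */
        {
            CloseHandle(file);
            break;
        }

        unsigned char *view = MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, CHUNK);

        fill_random(view, CHUNK);            /* touch every page: all dirty now */

        UnmapViewOfFile(view);               /* dirty pages are now the OS's problem */
        CloseHandle(mapping);
        CloseHandle(file);
    }
    return 0;
}
```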
New zero pages that you allocate do not need to physically exist until you touch them, at which point they fault in. So yes, there is a bit of racing for physical pages, but the worst that can happen is that the writer thread stalls on a page fault, waiting for a page to become free and zeroed. That's OK, because all we want is for the disk to keep writing at the maximum possible speed; we don't actually care about the application's performance. This theory held true until the machine ran out of physical memory.
At that point, Windows started copying dirty pages (which it should simply have flushed to disk and been done with) to the page file -- no, I'm not joking! -- in order to make physical pages available for newly touched pages, then paged the dirty pages in again (paging out the now freshly touched pages), and finally wrote them to disk. Result: writing at ca. 110 MiB/s for the first 2 seconds (pretty much the disk drive's theoretical maximum), then a drop to ca. 2 MiB/s.
Now, for the funny part: the "fix" was simply inserting a Sleep(5000); after unmapping each file (*cough*). This gave the disk just about enough time to flush enough pages to disk and throw them away before the application asked for new pages. What a crap solution, but it worked...