To me that would make write(int*) unnecessary, since I could just dereference my pointers and it would resolve to the write(int) overload.
You can always dereference a pointer, of course, but I think it's more than a matter of opinion and preference. First, as a user of write, I don't particularly want to juggle that information unless it's explicitly important to me -- I want a convenience function to take care of it, and function overloading does just that without even a character's difference in spelling. In fact, I'd recommend implementing write(int*) in terms of write(int) to avoid code duplication; with inlining, performance will be equivalent. I'll come back to whether this is a matter of personal preference in a bit.
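The forwarding approach might look like the sketch below. The Writer class and its little-endian record format are hypothetical, just enough to show write(int*) delegating to write(int):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical serializer: write(int*) forwards to write(int), so the
// encoding logic lives in one place and, after inlining, the convenience
// overload costs nothing.
class BasicWriter {
public:
    void write(int value) {
        // Minimal little-endian encoding into an in-memory buffer.
        auto bits = static_cast<std::uint32_t>(value);
        for (int i = 0; i < 4; ++i)
            buffer_.push_back(static_cast<unsigned char>((bits >> (8 * i)) & 0xFF));
    }
    void write(const int* value) {
        write(*value);  // delegate: zero duplication, same spelling for the caller
    }
    const std::vector<unsigned char>& buffer() const { return buffer_; }

private:
    std::vector<unsigned char> buffer_;
};
```

The caller writes `w.write(x)` or `w.write(&x)` and gets identical bytes either way; the library, not the caller, decides what the pointer form means.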
Unless you're using write(int*) because it handles null pointers somehow. However, more often than not the calling code is already handling null pointers and missing data, because they may need to be handled differently depending on what's being serialized (sometimes you write nothing, sometimes zero or some other sentinel value, sometimes you throw an exception, and so on).
That's one distinction, yes, and I agree that the client often checks. Indeed, if the client needs to throw an exception or abort the serialization immediately because the data is malformed, it has to be the client, because the serialization library doesn't know about the client's data structures. Likewise if the client wants fine-grained control over whether a particular null pointer should be initialized to some valid default. However, if a value is written to encode a null, that representation probably belongs to the serialization library, not the client; and if the client wants only coarse-grained control over whether null pointers should be initialized to some default, it's easier and safer for the serialization library to let the client set that behavior (either globally for all null pointers, or on a per-type basis). The job of a library is not to provide a minimum footprint, but to make doing the right thing easy (a minimal footprint and making the right thing easy are not usually at odds, but sometimes they are).
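One way to sketch that coarse-grained control is a policy the client sets once on the writer. The NullPolicy enum and PolicyWriter below are my own invention, not a real API, but they show the shape of the idea:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Hypothetical: the client picks, once, how the library treats null
// pointers -- skip them, substitute a default, or refuse them entirely.
enum class NullPolicy { Skip, WriteDefault, Throw };

class PolicyWriter {
public:
    explicit PolicyWriter(NullPolicy policy, int default_value = 0)
        : policy_(policy), default_value_(default_value) {}

    void write(int value) { values_.push_back(value); }

    void write(const int* value) {
        if (value) { write(*value); return; }
        switch (policy_) {
            case NullPolicy::Skip:
                return;                  // write nothing at all
            case NullPolicy::WriteDefault:
                write(default_value_);   // library-owned null representation
                return;
            case NullPolicy::Throw:
                throw std::runtime_error("null pointer in serialization");
        }
    }
    const std::vector<int>& values() const { return values_; }

private:
    NullPolicy policy_;
    int default_value_;
    std::vector<int> values_;
};
```

A client that wants truly fine-grained, per-pointer decisions still checks for null itself before calling write; the policy only covers the common bulk case.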
Better still, putting my serialization-library-writer hat back on, providing those "convenience" functions means I can change the implementation as I see fit. For example, I might notice that the user writes the same integer via its address several times. If I know -- by policy, or because its value is const -- that the value doesn't change between calls to write during the same file serialization, then perhaps I can coalesce the on-disk storage of the integer value itself and encode a smaller means of referencing it in the file stream. I could do something similar for plain integers -- write(int) -- too, but (and here's that second point) I can do neither if I can't tell the difference between several integers that happen to have the same value and several pointers that all point to the same integer. Integers that happen to have the same value share only equivalence, while pointers to the same integer share identity -- it's the difference between "has the same value" and "is the same entity", and that is (or at least can be) important. But you can't make the distinction if you wipe away the pointer-ness by prematurely dereferencing.
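The coalescing idea can be sketched as follows. The record layout is hypothetical; the point is only that the first write through an address stores the value and remembers the address, while later writes through the same address store a small back-reference instead:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical: exploit identity (same address), which plain values
// cannot offer, to avoid re-storing repeated writes.
class InterningWriter {
public:
    // payload is either the value itself or, when is_ref is true,
    // the index of the earlier record this one refers back to.
    struct Record { bool is_ref; int payload; };

    void write(int value) {
        records_.push_back({false, value});  // equivalence only: always stored
    }
    void write(const int* value) {
        auto it = seen_.find(value);
        if (it != seen_.end()) {
            records_.push_back({true, it->second});  // identity: back-reference
        } else {
            seen_[value] = static_cast<int>(records_.size());
            records_.push_back({false, *value});
        }
    }
    const std::vector<Record>& records() const { return records_; }

private:
    std::map<const int*, int> seen_;  // address -> index of first record
    std::vector<Record> records_;
};
```

Two distinct integers that both hold 5 are stored twice, because they are merely equivalent; writing the same integer's address twice stores it once plus a reference, because the writes share identity.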
You could pull the same kind of space-saving trick by remembering the contiguous memory ranges you've already written (say, arrays or vectors), and transforming pointers into those ranges into a potentially smaller index, which could lead to considerably smaller files when the element type is large. Smaller files are great and all, but really it's the ability to distinguish equivalence from identity that matters.
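A minimal sketch of the range trick, with a made-up LargeElement type and record shape: after an array is serialized, its address range is remembered, and a later pointer into that range is encoded as (range id, element index) rather than as the full element. std::less is used so the pointer comparisons are well-defined even across unrelated allocations:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

struct LargeElement { double data[16]; };  // large enough that an index pays off

class RangeWriter {
public:
    // Serialize a contiguous array and remember its address range.
    // (Actual element bytes would be emitted here; omitted in this sketch.)
    int write_array(const LargeElement* begin, std::size_t count) {
        ranges_.push_back({begin, begin + count});
        return static_cast<int>(ranges_.size()) - 1;  // range id
    }

    // Returns {range id, element index} if p falls inside a known range,
    // or {-1, 0} meaning the element must be written out in full.
    std::pair<int, std::size_t> encode(const LargeElement* p) const {
        std::less<const LargeElement*> lt;  // total order over pointers
        for (std::size_t r = 0; r < ranges_.size(); ++r) {
            if (!lt(p, ranges_[r].first) && lt(p, ranges_[r].second))
                return {static_cast<int>(r),
                        static_cast<std::size_t>(p - ranges_[r].first)};
        }
        return {-1, 0};
    }

private:
    std::vector<std::pair<const LargeElement*, const LargeElement*>> ranges_;
};
```

Again, none of this is possible once the caller has dereferenced the pointer: an index into a previously written range is an identity claim, not a value claim.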