I don't think it's much of an issue. Generally, you will want some kind of checksum (or rather, a MAC) to be on the safe side where it matters, such as when downloading a huge file, or for making sure your executables or data files are undamaged and untampered with.
But... as part of your application-level network protocol... no.
Ethernet typically has something like a 10^-8 to 10^-10 bit error rate (depending on what kind of network, and depending on whom you ask; some claim that on your LAN you can expect 10^-12, but... whatever). The IEEE 802 functional requirements as of 1991 require (5.6.1) 10^-8 or better for a device to be compliant, so 10^-10 is probably not too unreasonable an expectation today.
Anyway, seeing that your traffic goes over the internet and you don't know how barely standards-conforming the cables somewhere along the way may be, I will assume the worst case: 10^-8.
Ethernet frames are terminated by a 32-bit CRC which is guaranteed (neglecting collisions) to catch all single-, double-, and triple-bit errors. That doesn't mean that 4-bit or 5-bit errors cannot (or will not) be detected; it just isn't guaranteed by the mathematical model. This means that (neglecting collisions), in order to have a "silent" bit error, i.e. one that isn't directly discarded by the hardware and actually makes it to the IP layer, you need at least 4 bit errors in one frame.
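This guarantee is easy to check empirically. A minimal sketch in Python, using `zlib.crc32` (the same CRC-32 polynomial Ethernet uses) to exhaustively flip every 1-, 2-, and 3-bit combination in a short message and confirm that the checksum always changes:

```python
import itertools
import zlib

def flip_bits(data: bytes, positions) -> bytes:
    """Return a copy of data with the given bit positions flipped."""
    buf = bytearray(data)
    for p in positions:
        buf[p // 8] ^= 1 << (p % 8)
    return bytes(buf)

msg = b"ethernet"            # 8 bytes = 64 bits, small enough to brute-force
nbits = len(msg) * 8
good = zlib.crc32(msg)

# Exhaustively flip every combination of 1, 2 and 3 bits and verify that
# the CRC always changes, i.e. the error is always detected.
for k in (1, 2, 3):
    for positions in itertools.combinations(range(nbits), k):
        assert zlib.crc32(flip_bits(msg, positions)) != good
print("all 1-, 2- and 3-bit errors detected")
```

The message here is tiny so the brute force stays cheap, but the mathematical guarantee for CRC-32 covers all 3-bit errors well beyond Ethernet frame lengths.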
Assuming no jumbo frames (internet, eh!), you have a maximum of 1,500 bytes of payload, or 12,000 bits, in a frame (well, a bit more on the wire, something like 1,538 bytes, but it makes no difference). In order to encounter a single silent bit error in that frame, you thus need 4 bit errors happening among 12,000 bits on the wire. That's 1 in 3,000, quite a different number from 10^-8. Also, if you aren't doing bulk transfers, your frames will usually be smaller than the maximum size, so it's even less likely for this to happen (same number of bit errors on the wire, but spread over more frames, with more checksums and more interpacket gaps).
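To put a number on how unlikely 4 bit errors in one frame really are, here is a quick binomial-tail sketch (assuming independent bit errors at the worst-case BER of 10^-8):

```python
from math import comb

ber = 1e-8          # assumed worst-case bit error rate
bits = 12_000       # maximum-size Ethernet frame, roughly

# P(at least 4 bit errors in one frame), binomial tail.
# The k=4 term dominates, so summing a few terms is plenty.
p_ge4 = sum(comb(bits, k) * ber**k * (1 - ber)**(bits - k)
            for k in range(4, 8))
print(f"P(>=4 errors per frame) ~ {p_ge4:.2e}")
```

The result is on the order of 10^-17 per frame, which is why a silent error needs both bad luck and pathological clustering.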
Or, look at it from the opposite side. Gigabit Ethernet can push about 81,274 frames over the wire each second (that's for maximum-sized frames; the likelihood of getting erroneous bits is smaller with smaller frames, both because you have more checksums and because the interframe gaps, which are always 96 bits, relatively grow in size). The 81,273 interpacket gaps correspond to 7,802,208 bits (which are harmless; the network card doesn't look at them). If you count in preambles and destination MACs, which is reasonable because they are "harmless" too (any bit error there and the packet will not arrive), that's 16,904,896 bits (about 1.7%) which can contain errors and be totally harmless to begin with.
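The wire-level arithmetic can be reproduced with a few lines (assuming 1,500-byte payloads, an 8-byte preamble plus start-of-frame delimiter, and the standard 12-byte interframe gap):

```python
# On-wire cost of one maximum-size frame on gigabit Ethernet.
payload  = 1500                    # bytes of payload
frame    = payload + 18            # + MAC header and 32-bit FCS = 1518 bytes
preamble = 8                       # preamble + start-of-frame delimiter
ifg      = 12                      # interframe gap (96 bit times)
on_wire  = (frame + preamble + ifg) * 8    # 12,304 bits per frame

fps = 1_000_000_000 // on_wire     # maximum-size frames per second
gap_bits = (fps - 1) * ifg * 8     # bits spent in interpacket gaps
# Preamble and destination MAC (48 bits) are "harmless" locations too:
harmless = gap_bits + fps * (preamble * 8 + 48)
print(fps, harmless, f"{harmless / 1e9:.1%}")
```

This is where the roughly 81,274 frames per second and the ~1.7% of "harmless" bits come from.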
At a BER of 10^-8, a billion bits will contain 10 error bits. Actually only about 9.8 bits if we consider the "harmless" locations, and there is a fair chance that one of them disappears in an interpacket gap, leaving only 9 bits to deal with. But let's be pessimistic and say 10. Spread out, that's up to 10 affected packets, each with a single-bit error that the CRC is guaranteed to pick up. If we instead assume a "somewhat malicious" clustering of our bad bits, so that 4 of them make it into a single packet, we have a maximum of 2 packets where it's not guaranteed that the CRC will pick up the error. IEEE 802 requires (5.6.2) a maximum tolerable likelihood for that case too: 10^-14.
(cough) On the other hand, if we assume that the bad bits cluster up in such a way, we should also consider that there's a fair chance they disappear altogether in a single interpacket gap, too...
So, we have 2 out of 81,274 frames (about 0.002%) that are problematic. They contain an error, and we don't know for sure that the network card will discard the frame (it probably will, but we don't know... there is a 10^-14 chance that it won't). That's in the worst, theoretical case, on a network that only just barely conforms to a 25-year-old standard.
Unless you plan to do a transmission that saturates your gigabit link for about 1.58 million years, that 10^-14 chance should be no biggie; you can let the TCP or UDP checksum deal with it.
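For reference, the 1.58-million-year figure falls straight out of the worst-case numbers above: at most 2 suspect frames per second, each slipping past the CRC with probability at most 10^-14:

```python
suspect_frames_per_s = 2     # worst-case frames per second with >=4 bit errors
p_undetected = 1e-14         # IEEE 802 ceiling for an undetected frame error

rate = suspect_frames_per_s * p_undetected    # undetected errors per second
mean_seconds = 1 / rate                       # expected time to first one
mean_years = mean_seconds / (365.25 * 24 * 3600)
print(f"one undetected error every ~{mean_years:,.0f} years")
```

That works out to roughly 1.58 million years of a fully saturated gigabit link per expected undetected error.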