Guess who they will blame when the game doesn't work? Unless you have a very clear meter for quality, one that very clearly shows how that quality affects whether the game works, they will blame you.
Ah yes, I can see that kind of thing happening. But luckily, to your advantage, no game (except a turn-based one like chess) will really "work" in such an environment, and if you display something like "packet loss!" (or even a number) in the corner to clue players in, that should hopefully be enough.
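A loss indicator like that is cheap to build if each datagram already carries a sequence number. Here's a minimal sketch (the class and method names are my own, and it deliberately ignores sequence-number wraparound and reordering subtleties):

```python
class LossMeter:
    """Estimate packet loss from per-datagram sequence numbers.

    Assumes each incoming datagram carries a monotonically increasing
    sequence number starting at 0. Illustrative only: no wraparound
    or duplicate handling.
    """

    def __init__(self):
        self.highest_seen = None
        self.received = 0

    def on_packet(self, seq):
        self.received += 1
        if self.highest_seen is None or seq > self.highest_seen:
            self.highest_seen = seq

    def loss_percent(self):
        if self.highest_seen is None:
            return 0.0
        expected = self.highest_seen + 1  # seqs 0..highest_seen
        lost = expected - self.received
        return 100.0 * lost / expected
```

Feed it the sequence numbers 0, 1, 3, 4 (packet 2 lost) and `loss_percent()` reports 20.0, which is exactly the kind of number you'd stick in the corner of the screen.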
To avoid that, one solution would be to make sure that each datagram is small enough that it's unlikely to be fragmented anywhere on the way to the recipient.
The problem with a homebrew fragmentation implementation is that you don't really know when fragmentation happens; at least, there is no easy way to find out. Under IPv6 this works "magically" via ICMPv6 (routers don't fragment; they send back "Packet Too Big" messages and the kernel updates its path-MTU cache), but you cannot easily access that info from an unprivileged user process. Under IPv4, fragmentation happens automatically at the router, and you never know it happened.
If you do your own fragmentation, your only option is setting the "don't fragment" bit, and that's not very straightforward (on Linux, for example, you can't flip DF per-packet on a UDP socket; you either enable it socket-wide via the IP_MTU_DISCOVER option or craft the IP header yourself on a raw socket, which requires root), and in my opinion it's something that stinks (for UDP, at least). Other than that, your only way of knowing that you've exceeded the MTU is by looking into your crystal ball, pretty much.
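For the record, the socket-wide knob looks like this. A sketch assuming a Linux kernel; the numeric constants come from `<linux/in.h>`, since not every Python version exports them in the `socket` module:

```python
import socket

# Linux-specific values from <linux/in.h>; Python's socket module
# does not export these constants on every version.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2  # always set DF; oversized sends fail with EMSGSIZE

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
# Every datagram now goes out with DF set: instead of being silently
# fragmented, a send() exceeding the known path MTU fails, which at
# least makes the condition visible to the application.
```

Note this sets DF for the whole socket, not per packet, which is part of why it feels clumsy for fine-grained UDP use.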
TCP does the same thing behind your back to discover the path MTU and to size its window, sure, but that's a different story. TCP is "bulk data", not "real time". There, it's perfectly OK to have packets dropped on purpose to find out your limits.
For UDP, I would rather keep the datagram size to something reasonable that will likely pass on 99% of all routes, simply by sending less stuff at a time at the application level. Something like 1280 bytes should fly: that's IPv6's minimum MTU, and since virtually all modern routing infrastructure has to support IPv6 (even though most end users are still on IPv4), links that can't carry 1280 bytes are rare. Maybe this will cause fragmentation for a few people with a lower MTU (in theory it could be as low as 576, but I doubt you'll find many of those; I can't even remember a time when it was anything lower than 1492 on my end), but you probably won't need to care. First, it just means a select few people get 1 datagram split into 2 fragments, what a tragedy -- it's not like they're getting 50 of them. And second, it'll be very few people, and those likely won't have much fun playing your game on their low-end internet connection anyway.
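To put numbers on that: 1280 bytes is the whole IP packet, so the UDP payload you can safely send is a bit smaller. A quick sketch (the 1232-byte cap and the `chunk` helper are my own illustration, not anything the protocol mandates):

```python
# IPv6 guarantees a minimum link MTU of 1280 bytes. Subtracting the
# fixed IPv6 header (40 bytes) and the UDP header (8 bytes) leaves a
# conservative per-datagram payload cap.
SAFE_MTU = 1280
MAX_PAYLOAD = SAFE_MTU - 40 - 8  # 1232 bytes

def chunk(data: bytes, limit: int = MAX_PAYLOAD) -> list:
    """Split application data into datagram-sized pieces."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]

pieces = chunk(b"x" * 3000)
# 3000 bytes -> pieces of 1232, 1232, and 536 bytes
```

If you cap your application messages at that size to begin with, the chunking step disappears entirely, which is really the point being made above.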
On the other hand, doing your own MTU discovery will necessarily and regularly drop datagrams for everybody. Which is perfectly OK if you use TCP to download a 50 MiB file: if TCP drops and resends 10 or 20 out of 50,000 packets to discover the best MTU and window size, your 20-second download takes maybe 0.01 seconds longer, if it takes longer at all (most likely it doesn't) -- who cares. There is no real difference.
With UDP data in a game, though, you'd wish packets were never dropped on purpose. You'll lose packets every now and then anyway, and it's bad enough when that happens. Sure, your application must be able to cope with packet loss somehow, but still: every lost packet interferes with gameplay, more or less severely. In any case, I would never provoke losses deliberately, on a planned schedule.