I was assuming that losing a fragment would invalidate the entire packet anyway - i.e. one lost fragment means you lose them all, just like with IP fragmentation.
I didn't look either, but I doubt it. I would deem myself very arrogant if I assumed I could write a fragmentation layer that does exactly what IP's does, only better. In all likelihood, it would only be worse! Carmack and the other people involved in Q3 should know better than to try such a thing.
Of course, it might simply be that they wanted to allow packets that exceed the maximum UDP datagram size.
That would be insane, however. If they have that much bulk data, they'd be better off using TCP in the first place. UDP already allows you to send datagrams close to 64 KiB in size, which is way too big for "realtime" anyway.
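To put a number on "close to 64 KiB": the IPv4 total-length field is 16 bits, so the largest possible IP packet is 65,535 bytes; subtracting the 20-byte IPv4 header and the 8-byte UDP header leaves 65,507 bytes of payload. A quick sketch (the loopback discard-port target is purely illustrative) that checks whether the local stack will even accept a single datagram that large:

```python
import socket

# IPv4 total length is a 16-bit field: 65,535 bytes maximum.
# Subtract the minimal 20-byte IPv4 header and the 8-byte UDP header.
MAX_UDP_PAYLOAD = 65_535 - 20 - 8  # 65,507 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # Target is the loopback "discard" port - purely illustrative.
    sent = sock.sendto(b"\x00" * MAX_UDP_PAYLOAD, ("127.0.0.1", 9))
    print(f"stack accepted a single {sent}-byte datagram")
except OSError as exc:
    # Some stacks cap the UDP send buffer well below 64 KiB and
    # report EMSGSIZE ("Message too long") here instead.
    print(f"stack refused the datagram: {exc}")
finally:
    sock.close()
```

Even when the stack accepts it, a datagram that size gets chopped into dozens of link-layer fragments on a typical 1500-byte-MTU Ethernet path, and losing any one of them discards the whole datagram.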
If you use UDP, you do so because you have hard realtime requirements on data arriving: you want low latency, and you simply cannot afford to wait. So you certainly do not want to send messages of several hundred or several thousand kilobytes, and you certainly do not want to wait for dozens or hundreds of fragments to come in before you can reassemble them.
You probably won't die if a single datagram gets fragmented every now and then - if it happens, well, it happens, bad luck - but planning to send datagrams several dozen kilobytes in size is just a bad idea.