Why do your videos / animated textures need to be re-compressed at all? Are you using a lot of them at once or something? Why not simply decode from a lossy video format like VP8/9 and upload as uncompressed? Also, why load-time re-compression or compressing to multiple formats? Why not simply re-compress to a single compressed format during installation? That way you can have faster load times, avoid wasted disk space, and use a better (slower) compression algorithm for higher quality.
typically there are multiple videos being streamed at the same time.
these are generally used for incidental things like fires, water effects, torches, and similar.
most are 256x256 or 512x512, occasionally 1024x1024 (though the larger ones are mostly tests).
like, say, you might wander around and catch a view with water, slime, lava, a fire, and a torch, all visible at once:
in this case 5 concurrent videos.
thus far, counts of 3-8 seem fairly common (out of 14 video maps currently active in the game).
typically, since it all happens in the render thread, the CPU overhead needs to be kept low enough not to hurt the framerate.
in my tests, decoding and streaming uncompressed video to the GPU costs somewhat more than doing a quick-and-dirty conversion into a compressed-texture format and uploading that instead; the trade-off is that quickly compressed frames tend to have "lower than ideal" image quality, and the performance still isn't as good as it could be.
in the faster forms of the dynamic-conversion strategy, the block-encoding logic is essentially shimmed directly on top of the IDCT or IWHT transforms, so, roughly:
read in and decode the DCT blocks for a given macroblock, and feed them through the IDCT (Inverse DCT);
convert the macroblock to a 4x4 grid of BCn blocks;
send the BCn blocks to the output.
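the "convert to BCn blocks" step above can be sketched in miniature. below is a hedged illustration of the quick-and-dirty style of encoder being described, not the actual engine code: it encodes a single 4x4 RGB block to BC1 (DXT1) using the block's per-channel min/max as endpoints (a real encoder would fit the endpoints more carefully, and this sketch ignores BC1's 3-color/transparent mode). the name `bc1_encode_block` is made up for the example.

```c
#include <stdint.h>

/* pack 8-bit RGB into RGB565 */
static uint16_t rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* quick-and-dirty BC1 encode of one 4x4 block.
 * pix: 16 RGB pixels (48 bytes), out: 8 bytes of BC1 data.
 * endpoints are just the per-channel min/max of the block. since
 * max >= min in every channel, c0 >= c1 holds; if they tie (a solid
 * block) all indices land on 0 anyway, so the 3-color mode that a
 * decoder selects when c0 <= c1 is ignored in this sketch. */
void bc1_encode_block(const uint8_t *pix, uint8_t *out)
{
    uint8_t mn[3] = {255, 255, 255}, mx[3] = {0, 0, 0};
    uint8_t pal[4][3];
    uint32_t idxbits = 0;
    int i, j, k;

    for (i = 0; i < 16; i++)
        for (k = 0; k < 3; k++) {
            uint8_t v = pix[i * 3 + k];
            if (v < mn[k]) mn[k] = v;
            if (v > mx[k]) mx[k] = v;
        }

    uint16_t c0 = rgb565(mx[0], mx[1], mx[2]);
    uint16_t c1 = rgb565(mn[0], mn[1], mn[2]);

    /* 4-entry palette: c0, c1, and two interpolated colors */
    for (k = 0; k < 3; k++) {
        pal[0][k] = mx[k];
        pal[1][k] = mn[k];
        pal[2][k] = (uint8_t)((2 * mx[k] + mn[k]) / 3);
        pal[3][k] = (uint8_t)((mx[k] + 2 * mn[k]) / 3);
    }

    /* 2-bit index per pixel: nearest palette entry */
    for (i = 0; i < 16; i++) {
        int bestd = 1 << 30, best = 0;
        for (j = 0; j < 4; j++) {
            int d = 0;
            for (k = 0; k < 3; k++) {
                int dv = (int)pix[i * 3 + k] - (int)pal[j][k];
                d += dv * dv;
            }
            if (d < bestd) { bestd = d; best = j; }
        }
        idxbits |= (uint32_t)best << (i * 2);
    }

    out[0] = (uint8_t)(c0 & 0xFF); out[1] = (uint8_t)(c0 >> 8);
    out[2] = (uint8_t)(c1 & 0xFF); out[3] = (uint8_t)(c1 >> 8);
    out[4] = (uint8_t)(idxbits);       out[5] = (uint8_t)(idxbits >> 8);
    out[6] = (uint8_t)(idxbits >> 16); out[7] = (uint8_t)(idxbits >> 24);
}
```

the min/max endpoint choice is what keeps this cheap enough to run per-frame in the render thread; it is also a big part of why the quality is "lower than ideal" compared to an offline encoder.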
typically this happens each time a given frame is displayed, with a clip cycling roughly once per second to once per several seconds depending on its length. effectively, it is like a video player running in a loop, decoding and uploading individual frames into a texture (often for several concurrent videos).
VP8 and VP9 are actually fairly expensive to decode (vs a lot of other options).
they are not a particularly good choice for streaming video to textures (good for achieving a low bitrate, not so much for decoding speed).
currently I don't really have any formal installation step, though it could be possible to do a "conversion cache" (say, if the engine tries to access a not-yet-converted asset, it converts it and saves the result off to a file or similar).
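the conversion-cache idea can be sketched as "build a cache path, convert on first access, reuse afterwards". this is a hedged illustration only; the function names, the `.bcn` suffix, and the callback shape are all invented for the example:

```c
#include <stdio.h>

/* hypothetical signature for the (slow, higher-quality) offline
 * converter: transcode src into dst, return 0 on success. */
typedef int (*convert_fn)(const char *src, const char *dst);

/* stub converter for illustration: a real one would transcode the
 * clip into BCn-format frames; this just creates an empty file. */
static int stub_convert(const char *src, const char *dst)
{
    FILE *f = fopen(dst, "wb");
    (void)src;
    if (!f) return -1;
    fclose(f);
    return 0;
}

/* build "<src>.bcn" as the cache path; convert if not present yet. */
int get_converted_path(const char *src, convert_fn convert,
                       char *dst, size_t dstsz)
{
    FILE *f;
    if (snprintf(dst, dstsz, "%s.bcn", src) >= (int)dstsz)
        return -1;                  /* path too long */
    if ((f = fopen(dst, "rb"))) {   /* already converted earlier */
        fclose(f);
        return 0;
    }
    return convert(src, dst);       /* first access: convert now */
}
```

this way the slow conversion only happens once per asset, so later loads get both the faster load times and the better quality of an offline encoder, without having to ship pre-converted copies.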
but, yeah, having a video where the data is already more-or-less in the desired texture format does make things a little faster, and potentially allows higher image quality; it just seems silly to distribute multiple copies of a given short video clip.
for larger things, like possibly cutscenes or similar, it probably makes more sense to always just ship a single generic version.
likewise, typically only a single cutscene plays at a time, so decoding speed is less critical.