cr88192

OpenGL misc: texture compression and video textures...


Basically, a fair amount of my recent work has gone into video textures, mostly trying to find a good solution to the various problems that come up when doing this sort of thing.

 

Sorry, not sure where best to post this (I am using OpenGL, at least...).

 

 

Basically, here is the issue:

 

One can use a more general-purpose video format (MJPEG / Theora / ...), which might internally represent the frames in a YUV colorspace or similar, and then transcode to a compressed texture format (DXT1, DXT5, BC6H, BC7, or whatever else). The advantage is that the video is more general-purpose and gets a good size/quality tradeoff; the disadvantage is that decoding is slower and the final image quality isn't particularly great, since the texture conversion has to be done in real time.
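As a rough illustration of the per-texel cost in that generic path, here is a minimal full-range BT.601-style YCbCr-to-RGB conversion with fixed-point coefficients (a hypothetical sketch, not this engine's actual code); something like it has to run for every texel of every frame before any block compression can even start:

```c
#include <stdint.h>

/* Clamp an intermediate value back into the 8-bit range. */
static uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

/* Full-range BT.601 YCbCr -> RGB, using 16.16 fixed-point coefficients
 * (1.402, 0.344, 0.714, 1.772 scaled by 65536). Illustrative sketch only. */
void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr, uint8_t rgb[3])
{
    int d = cb - 128, e = cr - 128;  /* center the chroma channels */
    rgb[0] = clamp8(y + ((91881 * e) >> 16));             /* R = Y + 1.402*Cr' */
    rgb[1] = clamp8(y - ((22554 * d + 46802 * e) >> 16)); /* G */
    rgb[2] = clamp8(y + ((116130 * d) >> 16));            /* B = Y + 1.772*Cb' */
}
```

A real decoder would do this per macroblock with chroma subsampling, but the point stands: the generic path pays for a colorspace conversion on top of the block encoding itself.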

 

For example, for normal texture maps I generally use load-time conversion, since speed matters less there and a slightly better-looking conversion can be done.

 

 

Alternatively, one can have a codec that specifically targets a given compressed texture format. Such a codec can invest more effort at encoding time into better final image quality, decodes faster, and can support more features for video textures (such as mipmaps, ...).

 

The problem: this requires multiple versions of each video, one per target format, and potentially a codec specific to each compressed texture format (for example, one codec that does DXT1 and DXT5, and another that does BC6H and BC7), plus possibly still keeping around a "generic" version (for decoding to RGBA or similar).

 

For example, it isn't great if a person needs 2 or 3 versions of a given video texture, each using a different codec (wasting space and so on). (There end up being a lot of specialized codecs in use, mostly because no single one does particularly well at everything...)

 

And what about other compressed-texture formats?

 

 

for example:

textures/base_vid/sometex_dxt5.avi  //DXT5 or similar

textures/base_vid/sometex_bptc.avi  //BC6H or BC7

textures/base_vid/sometex.avi  //Generic (RGBA or real-time conversion)

 

Or (what I have often been doing so far):

only having the version intended for decoding to DXTn, with a fallback path that decodes to RGBA or similar if needed (issue: weak image quality in the fallback case).
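A minimal sketch of how that variant selection might look at load time, assuming the naming convention from the example above and hypothetical capability flags (which would come from querying the GL extensions):

```c
/* Pick the best-matching clip for the GPU's supported compressed-texture
 * formats, falling back to the generic version. The file names follow the
 * hypothetical textures/base_vid/sometex_<fmt>.avi convention from the post. */
const char *pick_video_variant(int has_bptc, int has_s3tc)
{
    if (has_bptc) return "textures/base_vid/sometex_bptc.avi"; /* BC6H / BC7 */
    if (has_s3tc) return "textures/base_vid/sometex_dxt5.avi"; /* DXT5 */
    return "textures/base_vid/sometex.avi"; /* generic RGBA / real-time path */
}
```

The ordering encodes a simple preference: use the newer BPTC formats when available, otherwise DXTn, otherwise the generic clip with its weaker real-time conversion.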

 

Does anyone have a particularly good strategy here?

 

(Well, granted, besides the obvious, like not using videos as animated textures in the first place...)

 

 


Why do your videos / animated textures need to be re-compressed at all? Are you using a lot of them at once or something? Why not simply decode from a lossy video format like VP8/9 and upload as uncompressed? Also, why load-time re-compression, or compressing to multiple formats? Why not simply re-compress to a single compressed format during installation? That way you get faster load times, avoid wasted disk space, and can use a better (slower) compression algorithm for higher quality.


Why do your videos / animated textures need to be re-compressed at all? Are you using a lot of them at once or something? Why not simply decode from a lossy video format like VP8/9 and upload as uncompressed? Also, why load-time re-compression, or compressing to multiple formats? Why not simply re-compress to a single compressed format during installation? That way you get faster load times, avoid wasted disk space, and can use a better (slower) compression algorithm for higher quality.

 

Typically there are multiple videos being streamed at the same time.

These are generally used for incidental things like fires, water effects, torches, and similar.

Most are 256x256 or 512x512, with a few at 1024x1024 (though those are mostly tests).

 

Say you wander around and catch a view with water, slime, lava, a fire, and a torch all visible at once:

in this case, 5 concurrent videos.

 

So far, counts of 3-8 seem fairly common (out of the 14 video maps currently active in the game).

 

Since it all happens in the render thread, the CPU overhead has to be kept low enough not to hurt the framerate.

 

 

In my tests, the cost of decoding and streaming uncompressed video to the GPU is a bit higher than the cost of a quick-and-dirty conversion into a compressed texture format plus uploading that to the GPU. However, with quickly compressed frames the image quality tends to be lower than ideal, and the performance still isn't as good as I would like.

 

In the faster forms of the dynamic-conversion strategy, the block-encoding logic is shimmed directly on top of the IDCT or IWHT transforms, so pretty much:

read in and decode the DCT blocks for a given macroblock, and feed them through the IDCT (inverse DCT);

convert the macroblock to a 4x4 grid of BCn blocks;

send the BCn blocks to the output.
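The "convert to BCn blocks" step above could be sketched as a quick-and-dirty min/max DXT1 block encoder like the following (a hypothetical illustration of the fast approach, not the poster's actual code; real fast encoders are fancier, which is part of why quick conversions look worse than offline ones):

```c
#include <stdint.h>

/* Pack an 8-bit-per-channel RGB color into RGB565. */
static uint16_t pack_rgb565(const uint8_t c[3])
{
    return (uint16_t)(((c[0] >> 3) << 11) | ((c[1] >> 2) << 5) | (c[2] >> 3));
}

/* Encode one 4x4 block of RGB texels (row-major) into an 8-byte DXT1 block. */
void encode_dxt1_block(const uint8_t pixels[16][3], uint8_t out[8])
{
    /* Crude endpoint choice: per-channel min and max over the block. */
    uint8_t lo[3] = {255, 255, 255}, hi[3] = {0, 0, 0};
    for (int i = 0; i < 16; i++)
        for (int c = 0; c < 3; c++) {
            if (pixels[i][c] < lo[c]) lo[c] = pixels[i][c];
            if (pixels[i][c] > hi[c]) hi[c] = pixels[i][c];
        }

    /* Since hi >= lo per channel, pack(hi) >= pack(lo), keeping 4-color mode. */
    uint16_t c0 = pack_rgb565(hi), c1 = pack_rgb565(lo);

    /* Palette in DXT1 index order: c0, c1, 2/3*c0+1/3*c1, 1/3*c0+2/3*c1. */
    int pal[4][3];
    for (int c = 0; c < 3; c++) {
        pal[0][c] = hi[c];
        pal[1][c] = lo[c];
        pal[2][c] = (2 * hi[c] + lo[c]) / 3;
        pal[3][c] = (hi[c] + 2 * lo[c]) / 3;
    }

    /* Pick the nearest palette entry per texel (2-bit index each). */
    uint32_t bits = 0;
    for (int i = 0; i < 16; i++) {
        int best = 0, best_d = 1 << 30;
        for (int p = 0; p < 4; p++) {
            int d = 0;
            for (int c = 0; c < 3; c++) {
                int e = pixels[i][c] - pal[p][c];
                d += e * e;
            }
            if (d < best_d) { best_d = d; best = p; }
        }
        bits |= (uint32_t)best << (i * 2);
    }
    if (c0 == c1)  /* equal endpoints select DXT1's 3-color/transparent mode; */
        bits = 0;  /* force index 0 so flat blocks can't emit the alpha index */

    out[0] = (uint8_t)(c0 & 0xFF); out[1] = (uint8_t)(c0 >> 8);
    out[2] = (uint8_t)(c1 & 0xFF); out[3] = (uint8_t)(c1 >> 8);
    out[4] = (uint8_t)(bits & 0xFF);
    out[5] = (uint8_t)((bits >> 8) & 0xFF);
    out[6] = (uint8_t)((bits >> 16) & 0xFF);
    out[7] = (uint8_t)(bits >> 24);
}
```

Min/max endpoints plus nearest-index selection is about the cheapest thing that produces valid DXT1; a better encoder would fit the endpoints to the color distribution, which is exactly the extra work an offline targeted codec can afford.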

 

Typically this happens each time a given frame is displayed (so, every rendered frame, with the clip cycling about once per second to once per several seconds depending on its length). Effectively, it is like a video player running in a loop, decoding and uploading individual frames into a texture (often for several concurrent videos).
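Since the frame's compressed data is just a dense array of fixed-size blocks, deciding where each macroblock's output lands (so the whole frame can be uploaded in one compressed-texture update) reduces to index arithmetic. A sketch, assuming DXT1 (8 bytes per 4x4-texel block), 16x16 macroblocks, and a frame width that is a multiple of 16:

```c
#include <stddef.h>

/* Byte offset of the BCn block at (bx,by) inside macroblock (mb_x,mb_y),
 * within the frame's compressed output buffer. Hypothetical sketch assuming
 * DXT1's 8 bytes per 4x4 block; BC7/DXT5 would use 16. */
size_t dxt1_block_offset(int mb_x, int mb_y, int bx, int by, int frame_w)
{
    int blocks_per_row = frame_w / 4;  /* 4x4 texels per block */
    int gx = mb_x * 4 + bx;            /* global block column */
    int gy = mb_y * 4 + by;            /* global block row */
    return (size_t)(gy * blocks_per_row + gx) * 8;
}
```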

 

 

VP8 and VP9 are actually fairly expensive to decode (versus a lot of other options).

They are not a particularly good choice for streaming video into textures (good for low bitrate, not so much for decoding speed).

 

Currently I don't really have a formal installation step, though a "conversion cache" could be possible (say, if the engine tries to access a not-yet-converted asset, it converts it and potentially saves the result off to a file).
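The conversion-cache idea could be as simple as deriving a cache-file path from the source path, converting on the first miss, and reusing the cached file afterward. A sketch of just the path derivation (the cache/ prefix and .dxtn extension are made up for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Map a source clip path to its cache-file path, mirroring the tree under
 * cache/ and swapping the extension. Returns 1 on success, 0 if the output
 * buffer was too small. Hypothetical naming scheme for illustration. */
int cache_path_for(const char *src, char *out, size_t n)
{
    const char *base = src;
    if (strncmp(src, "textures/", 9) == 0)
        base = src + 9;  /* mirror the asset tree under cache/ */
    const char *dot = strrchr(base, '.');
    size_t stem = dot ? (size_t)(dot - base) : strlen(base);
    int len = snprintf(out, n, "cache/%.*s.dxtn", (int)stem, base);
    return len > 0 && (size_t)len < n;
}
```

At load time, a miss on the cache path would trigger the (slow, higher-quality) conversion once; every later run gets the pre-converted data.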

 

 

But, yeah, having a video whose data is already more or less in the desired texture format does make things a little faster, and potentially allows higher image quality; it just seems silly to distribute multiple copies of a given short video clip.

 

 

For larger things, like cutscenes, it probably makes more sense to always just ship a generic version.

 

Likewise, typically only a single cutscene plays at a time, so decoding speed is less critical (ADD: for 1080p30 you need around 62 megapixels/second, which is pretty doable for a more conventional DCT-based codec with direct conversion to DXTn / BCn).
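For the arithmetic behind that figure, the decode budget is just frame area times framerate:

```c
/* Decode throughput needed for a clip, in megapixels per second. */
double mpix_per_sec(int w, int h, int fps)
{
    return (double)w * h * fps / 1e6;
}
/* 1920 * 1080 * 30 = 62,208,000 texels/s, i.e. about 62.2 Mpix/s. */
```

By comparison, a 512x512 clip at 30 Hz needs only about 7.9 Mpix/s, which is why several of them can stream concurrently from the render thread.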

Edited by BGB


Show us a video of the effects. I don't think you need animated textures for water/fire; with an idea of what you have, we could maybe suggest another way to do it.

I'm not sure what your question really was. You don't need one buffer for each format, you need a buffer for each video. Load the full frames into a texture array of a DXT type.


Show us a video of the effects. I don't think you need animated textures for water/fire; with an idea of what you have, we could maybe suggest another way to do it.

I'm not sure what your question really was. You don't need one buffer for each format, you need a buffer for each video. Load the full frames into a texture array of a DXT type.

 

The issue would be needing one video per format, not one buffer per format.

 

This arises mostly because the videos are encoded so that they decode directly into a specific compressed-texture format (rather than representing a more generic RGB or YUV source, as a general-purpose video codec would).

 

But, yeah, the issue is mostly that some kinds of effects are easier to pull off with video than with shader effects or geometry.

 

Also, animated textures via frame-cycling put fairly severe limits on the length, resolution, and framerate of an animated texture (video allows higher framerates and resolutions at more-or-less arbitrary length).

 

 

ADD (Edit: Replaced Video):

 

I didn't sit through the whole thing, and there is no audio (video textures in my engine don't currently support audio), but it should give a basic idea at least...

 

Basically, in the (updated) video, multiple internet TV shows are playing at the same time.

The framerates are not drastically impacted in this case.

 

However, the videos are 512x512 at 30Hz.

Also, the video decoder was built with debug settings when this was recorded.

 

The (lame-looking) fire effect is also visible at one point (it is also a video).

 

ADD: tested and confirmed, the framerate still holds with 1024x1024 versions of the videos.

Edited by BGB
