random: blue-screen, video compositing, ...

so, off in this strange place known as the physical world, I recently set up a blue-screen.
technically, it is just a blue bed sheet, as this is pretty much the only thing I had on hand (yes, green cloth would be better, but I don't currently have any).

I did do a few tests and put them up.

and, as part of the process, I wrote a tool to composite the video streams.

pardon the awful blue-screening quality, my current setup is pretty bad (my camcorder can barely see the blue, ...).
some work is still needed in all this to try to make it not suck...
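for reference, a minimal sketch of the kind of keying involved (this is a generic blue-screen key, not necessarily what my tool does; the threshold/softness numbers are made up for the demo): alpha is estimated from how much blue dominates the other channels.

```python
def blue_key_alpha(r, g, b, threshold=0.1, softness=0.2):
    """Estimate foreground alpha for one pixel (channels as 0..1 floats).

    Pixels where blue strongly dominates red/green are treated as
    background (alpha 0); others as foreground (alpha 1), with a soft
    linear ramp in between. Threshold/softness are illustrative values.
    """
    dominance = b - max(r, g)           # how strongly "blue" the pixel is
    if dominance <= threshold:
        return 1.0                      # clearly foreground
    if dominance >= threshold + softness:
        return 0.0                      # clearly background (keyed out)
    # soft edge between the two regions
    return 1.0 - (dominance - threshold) / softness
```

a weak camera signal (like mine) shows up here as foreground/background dominance values crowding the ramp region, which is roughly where the "awful quality" comes from.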

the video streams are basically treated as layers, which can be placed independently (*) and are blended together into the output. currently this is done on the CPU, using a 16-bit fixed-point representation for pixels (though a big part of the process at present uses a floating-point representation).
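as a rough illustration of the per-channel math (not the tool's actual code), here is a "normal" blend in 16-bit fixed point, assuming the Q3.12 reading of the format (where 1.0 == 4096; see the exponent discussion in the comments):

```python
ONE = 1 << 12   # 1.0 in Q3.12 (one possible reading of the 16-bit format)

def fx_mul(a, b):
    """Multiply two Q3.12 fixed-point values (result is also Q3.12)."""
    return (a * b) >> 12

def blend_normal_fx(src, dst, alpha):
    """'normal' blend of one pixel channel in Q3.12 fixed point:
    out = src*alpha + dst*(1-alpha)."""
    return fx_mul(src, alpha) + fx_mul(dst, ONE - alpha)
```

the appeal over plain bytes is headroom: intermediate results can go above 1.0 or below 0.0 without clamping until the final store.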

currently it mimics a GLSL-like interface and the video composition isn't particularly high-performance.
a simple batch-style command-oriented language is also used to drive the process.
I may or may not consider supporting use of my scripting-language for pixel-level calculations (it could be nifty, but is fairly likely to be slow).

*: each layer has both a bounding box and, currently, two transformation matrices: one allows the layer to be placed in various orientations within the video frame, and the other transforms local coordinates within texture-coordinate space.
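the two-matrix setup can be sketched like this (2x3 affine matrices for simplicity; the actual tool's matrix shapes/conventions may differ): an output-frame pixel goes through the inverse of the placement matrix to get layer-local coordinates, and those go through the second matrix to get texture coordinates.

```python
def apply_affine(m, x, y):
    """Apply a 2x3 affine matrix [[a, b, tx], [c, d, ty]] to a point."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def frame_to_texcoord(frame_inv, tex_mat, fx, fy):
    """Map an output-frame pixel to a layer texture coordinate.

    frame_inv: inverse of the layer's placement matrix (frame -> local)
    tex_mat:   local -> texture-coordinate transform
    """
    lx, ly = apply_affine(frame_inv, fx, fy)
    return apply_affine(tex_mat, lx, ly)
```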

I might later also consider triangles or polygons and a simple rasterizer (or take the naive-and-slow route of checking pixel-coordinates against triangles).
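the naive-and-slow route is at least simple; a standard point-in-triangle test via edge functions (a generic sketch, not anything from my code) looks like:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed edge function: positive if P is to the left of edge A->B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def point_in_triangle(p, a, b, c):
    """True if point p lies inside (or on) triangle abc, either winding."""
    e0 = edge(*a, *b, *p)
    e1 = edge(*b, *c, *p)
    e2 = edge(*c, *a, *p)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or \
           (e0 <= 0 and e1 <= 0 and e2 <= 0)
```

a proper rasterizer amounts to evaluating the same edge functions incrementally per scanline instead of from scratch per pixel.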

while it probably seems like a waste to do all this on the CPU (vs the GPU), I figured for what I was doing it was likely to be less effort, and performance isn't really critical for batch-tool video composition.

as-is, it implements both Photoshop style and OpenGL style blending modes (for example: "normal"/"overlay"/"color_burn"/... or "src_color one_minus_src_color", ...), though at present they are mutually exclusive for a given layer.
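to make the two styles concrete (generic formulas for these well-known modes, in floats for readability): a Photoshop-style mode like "color_burn" is a fixed function of the two colors, while GL-style blending composes the colors via configurable factors.

```python
def blend_color_burn(src, dst):
    """Photoshop-style 'color_burn' on one channel (0..1 floats):
    darkens dst based on src; src == 0 clamps to black."""
    if src <= 0.0:
        return 0.0
    return max(0.0, 1.0 - (1.0 - dst) / src)

def blend_gl(src, dst, src_factor, dst_factor):
    """OpenGL-style blend: out = src*sf + dst*df, where the factors are
    themselves functions of (src, dst)."""
    return src * src_factor(src, dst) + dst * dst_factor(src, dst)

# the "src_color one_minus_src_color" pair from the post:
src_color           = lambda s, d: s
one_minus_src_color = lambda s, d: 1.0 - s
```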

when using GL-style blending, layer-opacity still behaves in a PS-like manner (IOW: it affects overall layer blending, rather than being factored directly into the blending-calculations).
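in other words, opacity is applied as a final mix between the underlying color and the fully-blended result (a sketch of that behavior, not the tool's literal code):

```python
def apply_layer_opacity(dst, blended, opacity):
    """PS-like layer opacity: lerp from the underlying dst toward the
    fully-computed blend result, instead of folding opacity into the
    blend factors themselves."""
    return dst + (blended - dst) * opacity
```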

in more messing with video, I made another observation:

current major video codecs (H.263, H.264, XviD, Theora, ...) actually do a pretty poor job with things like image quality, conversion loss, generational loss, and similar. after transcoding a video a few times, the image quality has basically gone to crap: they generally look ok in motion, but if you pause and/or look closely, the poor image quality becomes more obvious, even with maxed-out settings (100% quality), and it gets worse with each transcode.

in contrast, you can save out an M-JPEG at 90% or 95% quality, and although the video file is huge, the image quality is generally pretty good. (and 90% quality in JPEG is somewhat higher than what passes for 100% in Theora or XviD).

I don't know of any obvious reason in the bitstream formats for why the major video codecs should have lackluster image quality, but suspect it is mostly because the existing encoders are tuned mostly for "streaming" bitrates, rather than ones more useful for editing or similar.

in all, though, the size/quality/speed tradeoff has mostly been leaning in favor of my custom RPZA variant (BTIC1C). even if, technically, both the quality and the compression are pretty bad, for intermediate processing it still seems to fare a little better than XviD or Theora (encoding/decoding is considerably faster, and generation loss seems to be a fair bit lower).

the most obvious quality limitation of BTIC1C at present for this use case, though, is the limitation of 15-bit color.

I was left considering a possible extension to squeeze a little more image quality out of this (for dedicated 32-bit and 64-bit RGBA / HDR paths): namely, using the main image to store a tone-mapped version, and then storing an additional extension-layer to recover the "true"/"absolute" RGBA values.

in this case, the RGB555 values wouldn't be absolute colors, but rather themselves treated as interpolated values.
the tone-mapping layer is likely to be stored in a format vaguely-similar to a byte-oriented PNG variant (stored at 1/4 resolution).
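for context on the 15-bit limitation, RGB555 packing/unpacking looks like this (a generic sketch; the bit order here is an assumption, and BTIC1C's actual layout may differ). the point is that 8-bit channels get truncated to 5 bits, so round-tripped values land on a coarse grid:

```python
def pack_rgb555(r, g, b):
    """Pack 8-bit RGB into RGB555 (5 bits per channel, 15 bits total).
    Bit order (R high, B low) is an assumption for this demo."""
    return ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)

def unpack_rgb555(p):
    """Unpack RGB555 back to 8-bit, replicating high bits into the low
    bits (approximately x * 255 / 31)."""
    r5 = (p >> 10) & 31
    g5 = (p >> 5) & 31
    b5 = p & 31
    return ((r5 << 3) | (r5 >> 2),
            (g5 << 3) | (g5 >> 2),
            (b5 << 3) | (b5 >> 2))
```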

or such...

Recommended Comments



I might eventually consider developing some more graphical tools (vs the current batch ones),

as well as maybe work more on making the general blending faster.


it also depends somewhat on interpretation whether the fixed-point format is Q11.4 or Q3.12.
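the ambiguity is just where the unit sits: the same 16 bits read as different magnitudes depending on how many fraction bits you assume.

```python
def q11_4_to_float(v):
    """Interpret a 16-bit value as Q11.4 (4 fraction bits, 1.0 == 16)."""
    return v / 16.0

def q3_12_to_float(v):
    """Interpret the same bits as Q3.12 (12 fraction bits, 1.0 == 4096)."""
    return v / 4096.0
```

e.g. an 8-bit channel value 255, stored as 255 << 4 == 4080, reads as 255.0 under Q11.4 or as 255/256 (~0.996) under Q3.12 — same bits, different convention for where 1.0 lives.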

raw float could be better, but float textures would eat a lot more memory.

half-float is also possible; it would have less precision than the fixed-point, but a larger dynamic range.


a more remote possibility is a "shared exponent packed vector", which could have a larger dynamic range at similar precision and the same memory footprint (64 bits/pixel).

likely: 4x(S.14) E4

or, maybe: 3x(S.13)E6 (RGB), SE5.10 (A)
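roughly how the 4x(S.14)E4 layout could work (my sketch of the idea, not a spec: the exponent bias and encoding strategy here are assumptions): all four components share one 4-bit exponent, chosen so the largest value still fits in a signed S.14 mantissa.

```python
BIAS = 7  # assumed exponent bias (made up for this sketch)

def pack_shared_exp(vals):
    """Pack 4 floats as 4 signed S.14 mantissas plus one shared 4-bit
    exponent (4*15 + 4 = 64 bits total). Value model (assumed):
    v = m / 2**14 * 2**(e - BIAS)."""
    m = max(abs(v) for v in vals)
    e = 0
    # grow the shared exponent until the largest value fits in S.14
    while e < 15 and m * (2 ** 14) / (2 ** (e - BIAS)) > 16383:
        e += 1
    mants = [max(-16384, min(16383,
                 int(round(v * (2 ** 14) / (2 ** (e - BIAS))))))
             for v in vals]
    return mants, e

def unpack_shared_exp(mants, e):
    """Decode the mantissas back to floats using the shared exponent."""
    return [m * (2 ** (e - BIAS)) / (2 ** 14) for m in mants]
```

the tradeoff vs half-float: components near the per-pixel maximum keep near-15-bit precision, while small components alongside a large one lose precision to the shared exponent.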


