cr88192

Posted 04 February 2013 - 01:08 AM


not entirely sure here, but OpenGL seems able to compress textures relatively quickly.
or, is the idea that the built-in texture-compressor provided by OpenGL isn't "good", or isn't very fast, or something else?...

Generally, if a DXT compressor is very fast, then it's probably producing low quality results.
One other downside of asking GL to compress your data for you is that (on Windows) this is implemented in the graphics driver code, which is likely to be different on each of your users' PCs. This means that maybe one user's driver has a slow DXT compressor, while another's is fast. Maybe one user gets really bad quality textures, while others get decent quality? It's hard to ensure a consistent experience when you outsource some behaviour of your game to an unknown 3rd party plugin like this.


interesting.

will have to look into this.

generally, it seems moderately fast, and has "tolerable" quality, at least on the cards I have typically used (recent ATI and NVIDIA cards), though it sometimes introduces a slight banded/patchy look.

I had generally used it because it seems to help with the framerate.
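
(for reference, "letting GL compress" here just means uploading with a compressed internal format and letting the driver run whatever encoder it ships with. a minimal sketch, where 'pixels', 'width', and 'height' are assumed to hold a decoded RGBA image, and EXT_texture_compression_s3tc is assumed available:)

/* ask the driver to DXT5-compress the texture at upload time */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* confirm the driver actually compressed it (it is allowed to fall back) */
GLint was_compressed = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED,
                         &was_compressed);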

For an example of objectively measuring the quality of different compression approaches, see L. Spiro's DXT compression blog post here, where he talks about measuring signal-to-noise ratios; the blog post that LS links to also has some good visual examples of how different the results of different algorithms can look.
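
(a quick note on what "signal to noise ratio" means in this context: typically PSNR of the decompressed output against the original image. a minimal sketch, assuming both buffers are same-size 8-bit RGB:)

#include <math.h>

/* PSNR in dB between an original image and its compressed/decompressed copy;
 * higher is better, with decent DXT output usually landing around 30-40 dB */
double psnr_rgb8(const unsigned char *orig, const unsigned char *test,
                 int width, int height)
{
    double mse = 0.0;
    long n = (long)width * height * 3;
    for (long i = 0; i < n; i++) {
        double d = (double)orig[i] - (double)test[i];
        mse += d * d;
    }
    mse /= (double)n;
    if (mse <= 0.0) return INFINITY; /* images identical */
    return 10.0 * log10((255.0 * 255.0) / mse);
}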
 
 
As for video, there's probably not much point in DXT compressing individual frames (unless, yes, you somehow could directly transcode from the video format to DXT blocks!). The time spent performing the DXT compression would probably outweigh the theoretical benefits, which include:
* quicker time to transfer the frame to VRAM (but this time is probably already small compared to the MPEG/etc decoding time)
* faster pixel shader execution due to faster texture fetching (but pixel shading isn't likely a bottleneck)
* reduced VRAM usage (which isn't that important as you only need a frame at a time)

(edit, added after a quick skim of the blog post (will probably read more):
ok, so I guess the idea is that the common patchy/banded look of DXT-compressed textures isn't an inherent property of the format, but rather a side effect of quick/dirty encoders not really doing any dithering? nifty... well, I guess this gives more reason to look further into these matters.)


fair enough...


for the codecs I am using (Motion-JPEG and Motion-BTJ), converting from macroblocks to DXT blocks could be possible, but admittedly I don't know if it would save that much over going the full YUV (blocks) -> RGB -> DXT route.
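
(the YUV -> RGB step being the standard full-range JPEG YCbCr transform; per pixel, in fixed point, it looks something like this:)

/* standard full-range JPEG YCbCr -> RGB, per pixel (Y, Cb, Cr in 0..255) */
static unsigned char clamp255(int v)
{
    return (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

void ycbcr_to_rgb(int y, int cb, int cr,
                  unsigned char *r, unsigned char *g, unsigned char *b)
{
    cb -= 128; cr -= 128;
    *r = clamp255(y + ((91881  * cr) >> 16));              /* 1.402   */
    *g = clamp255(y - ((22554  * cb + 46802 * cr) >> 16)); /* 0.34414, 0.71414 */
    *b = clamp255(y + ((116130 * cb) >> 16));              /* 1.772   */
}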

even so, it could very well still be slower than the (current) strategy of using uncompressed textures for video, or it might not really make a big difference (since, as noted, decoding video frames isn't entirely free).
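
(concretely, the uncompressed strategy is just re-uploading each decoded frame into the same texture; a minimal sketch, with 'video_tex', 'vid_w', 'vid_h', and 'frame_rgba' assumed to come from the decoder side:)

/* stream the freshly decoded frame into an existing GL texture, uncompressed */
glBindTexture(GL_TEXTURE_2D, video_tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, vid_w, vid_h,
                GL_RGBA, GL_UNSIGNED_BYTE, frame_rgba);

/* the DXT route would instead CPU-encode blocks and upload them via:
 * glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, vid_w, vid_h,
 *     GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
 *     (vid_w / 4) * (vid_h / 4) * 8, dxt_blocks); */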


note (going off original topic, mostly general information):

in Motion-JPEG, each frame is basically an independent JPEG image (the video format is, essentially, just playing a series of JPEG images).

Motion-BTJ is basically similar to Motion-JPEG, except that BTJ supports an alpha channel, lossless coding, normal maps, luminance and specular maps, layer stacks, embedded shader-info files, ... so it can be used for some more elaborate effects (BTJ is essentially a JPEG containing a collection of other modified-format JPEG images inside of a makeshift TLV container format). the "BTJ" basically means "BGBTech JPEG", but I now call it BTJ mostly because "it isn't really JPEG anymore..." (and it has since broken strict backwards compatibility).
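
(for illustration, a TLV container is just a flat sequence of (tag, length, value) records; here is a purely hypothetical walker, assuming a 4-byte tag and 4-byte big-endian length per record, which is not necessarily the actual BTJ layout:)

#include <stdio.h>
#include <stddef.h>

/* hypothetical TLV walk: 4-byte tag, 4-byte big-endian length, then payload */
void walk_tlv(const unsigned char *buf, size_t len)
{
    size_t pos = 0;
    while (pos + 8 <= len) {
        const unsigned char *tag = buf + pos;
        size_t vlen = ((size_t)buf[pos+4] << 24) | ((size_t)buf[pos+5] << 16) |
                      ((size_t)buf[pos+6] <<  8) |  (size_t)buf[pos+7];
        pos += 8;
        if (vlen > len - pos) break; /* truncated record */
        printf("tag %.4s, %zu bytes\n", (const char *)tag, vlen);
        pos += vlen; /* the value could itself be another (modified) JPEG */
    }
}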

in both cases, an AVI texture is slightly abnormal (it only works correctly if drawn via the "shader system"; compare Quake 3 "shaders" or Doom 3 "materials").


note: since BTJ descended directly from my JPEG codec, there is the side effect that a few basic BTJ features (such as alpha channels) work with JPEG images (with a ".jpg" extension), and also with the "MJPG" FOURCC, though this is technically non-standard. since then, however, the codecs have been forked (mostly as I had reason to have both a "sane" JPEG codec and a "highly customized mutant format").


wandering further off original topic / aside:

BTJ was originally developed mostly because AVI didn't offer any good way to carry this stuff otherwise, and using a stack of parallel AVIs was not desirable (and I didn't feel like switching to a different container format), so it seemed preferable to basically just unleash some serious hacks on the JPEG format.

the analogy is basically as if something like RIFF were shoved inside of a JPEG image, which in turn contained more JPEG images.

so, decoding a frame generally consists of decoding the base JPEG image, along with any "component layers" (such as the alpha channel or normal map), followed by any contained "tag-layers" (essentially independent images). a shader-info file or script can refer to these layers, treating them like images (the video is then basically an animated layer stack). (note that any images contained in a given frame will be uploaded to their respective GL textures.)
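
(in rough pseudo-C, with every name below being a hypothetical stand-in rather than the actual engine API, the per-frame flow is something like:)

/* hypothetical types and helpers; this just mirrors the decode order above */
typedef struct Image Image;
typedef struct Frame Frame;

extern Image *btj_decode_base(Frame *frm);                 /* base JPEG image  */
extern Image *btj_decode_component(Frame *frm, int which); /* alpha/normal/... */
extern Image *btj_decode_tag_layer(Frame *frm, int idx);   /* independent image */
extern int    btj_num_components(Frame *frm);
extern int    btj_num_tag_layers(Frame *frm);
extern void   gl_upload_layer(Frame *frm, const char *kind, Image *img);

void btj_play_frame(Frame *frm)
{
    /* 1. base image */
    gl_upload_layer(frm, "Base", btj_decode_base(frm));

    /* 2. component layers riding on the base image (alpha, normal map, ...) */
    for (int i = 0; i < btj_num_components(frm); i++)
        gl_upload_layer(frm, "Component", btj_decode_component(frm, i));

    /* 3. tag-layers, each ending up in its own GL texture */
    for (int i = 0; i < btj_num_tag_layers(frm); i++)
        gl_upload_layer(frm, "TagLayer", btj_decode_tag_layer(frm, i));
}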

currently, the AVIs are typically "compiled" from a pile of PNG (or BTJ) images, plus some number of control files (such as shader-info files and a frame-list).

BTJ images have some use as standalone images as well, basically as a feature for "compound" or "layered" images; there is a Paint.NET plugin, and the format supports many of the same features as the native PDN format. the engine then basically treats each layer as if it were its own image, for example "textures/base_foo/bar.btj::Background" or "textures/base_foo/bar.btj::Foreground", and may refer to components like "textures/base_foo/bar.btj::Background:Normal".
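
(splitting such references is straightforward; an illustrative sketch only, since the engine's actual parsing isn't shown here:)

#include <stdio.h>
#include <string.h>

/* split a "file.btj::Layer:Component" style reference into its parts */
void split_btj_ref(const char *ref)
{
    char file[256] = "", layer[256] = "", comp[256] = "";
    const char *sep = strstr(ref, "::");

    if (!sep) {
        snprintf(file, sizeof file, "%s", ref); /* plain file reference */
    } else {
        snprintf(file, sizeof file, "%.*s", (int)(sep - ref), ref);
        const char *colon = strchr(sep + 2, ':');
        if (!colon) {
            snprintf(layer, sizeof layer, "%s", sep + 2);
        } else {
            snprintf(layer, sizeof layer, "%.*s",
                     (int)(colon - (sep + 2)), sep + 2);
            snprintf(comp, sizeof comp, "%s", colon + 1);
        }
    }
    printf("file='%s' layer='%s' component='%s'\n", file, layer, comp);
}

/* split_btj_ref("textures/base_foo/bar.btj::Background:Normal")
 *   -> file='textures/base_foo/bar.btj' layer='Background' component='Normal' */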

like with AVI videos, BTJ images are currently only really usable via the shader system.


thus far, I haven't done a whole lot "notable" with all this, apart from making a few random animation videos and putting them on my YouTube channel.
