Has anyone here done a performance analysis measuring the difference between using DXT3/5 textures and textures that are half-sized? Both yield a 4:1 compression ratio, yet the quality loss from DXT5 under most compressors is consistently worse than that of a simple downscale.
My understanding of hardware DXT decompression is that the cost of decompression is next to free, but not necessarily cheaper than a fetch from a half-sized but uncompressed texture, given that both textures end up being swizzled/tiled (terminology depending on your platform) and occupy the same space in the cache. Am I completely wrong in this assumption at the hardware level, or are there edge cases where it holds or breaks down (such as anisotropic filtering, textures of a certain size, and so on)?
Hopefully some of the hardware gurus on here can shed some light.
I haven't had a chance to play around with multiple adapter configurations yet, as I do the majority of my development on a laptop, but I was hoping a few people here might have and could answer some questions.
Can a single DX10/11 Device object be used to render to monitors connected to different video cards, or do I have to create one device per adapter?
Does SLI/Crossfire count as just a single adapter, or does it show up as 2? And in the case of it just being one adapter, are there any special device setup requirements?
From reading the DX10 docs, it seems the only resource that can be shared between multiple devices is a non-mipmapped 2D rendertarget. Does DX11 have this limitation with multiple devices too?
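For reference, the documented route for sharing a texture between two devices goes through a misc flag and a DXGI share handle. A rough sketch (assumes the Windows SDK / D3D11 headers, with hypothetical `deviceA`/`deviceB` pointers; error handling omitted, and I haven't verified it across physically separate adapters):

```cpp
// Device A: create the texture with the shared-resource misc flag.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 1024; desc.Height = 1024;
desc.MipLevels = 1; desc.ArraySize = 1;   // shared surfaces must be non-mipmapped
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
ID3D11Texture2D* texA = nullptr;
deviceA->CreateTexture2D(&desc, nullptr, &texA);

// Pull the OS share handle out through the DXGI interface...
IDXGIResource* dxgiRes = nullptr;
texA->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
HANDLE shared = nullptr;
dxgiRes->GetSharedHandle(&shared);

// ...and open the same surface on device B.
ID3D11Texture2D* texB = nullptr;
deviceB->OpenSharedResource(shared, __uuidof(ID3D11Texture2D), (void**)&texB);
```

Whether anything beyond this narrow case is shareable in DX11 is exactly the open question.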
Has anyone had any success with using the GL libs that ship with the latest Windows SDK?
Simple window setup code that has always worked in the past with the libs supplied with VS2005 Enterprise now hangs on the first call to wglMakeCurrent(HDC, HGLRC), somewhere deep inside ntdll.dll, and for the life of me I can't understand why. Hoping someone here can shed some light on this. I can just use the older libs, but I'd much rather use what should be the newer ones.
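For comparison, this is the classic setup sequence I'd sanity-check against (sketched from memory, assuming a valid `hwnd`; error checks trimmed), since a hang here usually means something went wrong before wglMakeCurrent rather than inside it:

```cpp
HDC hdc = GetDC(hwnd);

PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;

int fmt = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, fmt, &pfd);   // must succeed before context creation

HGLRC ctx = wglCreateContext(hdc);
wglMakeCurrent(hdc, ctx);         // the call that reportedly hangs in ntdll.dll
```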
OK, firstly, I know how old the tech is, but it's generally easier to set up than DirectShow (which is also deprecated anyway, right?). Onto the question...
I'm using VFW to capture video from a webcam, which requires a capture control to be set up. The problem is I just want the video feed and don't want the control displayed. If I hide the control (even by moving it off-screen), the capture callback stops triggering.
Has anyone had experience with VFW who knows how to keep the callbacks triggering without having the control visible?
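For anyone following along, this is roughly the setup in question (the `vfw.h` macros, linked against `vfw32.lib`; `hwndParent` is assumed, error handling omitted). Whether the callback keeps firing with the window hidden is exactly the open question; the sketch just shows the moving parts, with a 1x1 non-visible child window as one thing to try before moving it off-screen:

```cpp
// Per-frame callback; hdr->lpData / hdr->dwBytesUsed hold the raw frame.
LRESULT CALLBACK OnFrame(HWND hwnd, LPVIDEOHDR hdr) {
    return (LRESULT)TRUE;
}

// The capture control is itself a window; try creating it 1x1 and
// WS_CHILD without WS_VISIBLE.
HWND cap = capCreateCaptureWindowA("cap", WS_CHILD, 0, 0, 1, 1, hwndParent, 0);
capDriverConnect(cap, 0);             // connect to the first capture driver
capSetCallbackOnFrame(cap, OnFrame);  // preview-frame callback
capPreviewRate(cap, 33);              // ~30 fps; preview is what drives the callback
capPreview(cap, TRUE);
```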
OK, we're running into a few small issues between our platform implementations when it comes to dealing with depth textures. The problem lies with the Windows Direct3D implementation of D3DFMT_D24S8.
Has anyone had any luck with using a depth-stencil render target as a texture for doing post effects? Or are we stuck with essentially duplicating the depth buffer in another color target?
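In D3D9 there's no portable way to bind D3DFMT_D24S8 as a texture (vendors expose hacks like the INTZ FOURCC format instead), but under D3D10/11 the documented route is a typeless resource with two views over it. A sketch of that approach (assumes a `device` pointer and `width`/`height`; untested, error handling omitted):

```cpp
// Create the depth buffer typeless so it can back both a DSV and an SRV.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width; desc.Height = height;
desc.MipLevels = 1; desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R24G8_TYPELESS;
desc.SampleDesc.Count = 1;
desc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D* depthTex = nullptr;
device->CreateTexture2D(&desc, nullptr, &depthTex);

// Depth-stencil view for rendering the scene...
D3D11_DEPTH_STENCIL_VIEW_DESC dsv = {};
dsv.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsv.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
ID3D11DepthStencilView* dsView = nullptr;
device->CreateDepthStencilView(depthTex, &dsv, &dsView);

// ...and a shader-resource view exposing the 24-bit depth for the post pass.
// (Unbind the DSV before sampling; it can't be bound for writing and
// reading at the same time.)
D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
srv.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srv.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srv.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* srView = nullptr;
device->CreateShaderResourceView(depthTex, &srv, &srView);
```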