MysteryX

Convert YV12 to YV24 using Shader


Using DX9, I can take in YV24 video frames, convert them to RGB, execute various HLSL shaders, convert back to YV24 and transfer the data back to the CPU.
 
I'm using texture formats D3DFMT_X8R8G8B8, D3DFMT_A16B16G16R16 or D3DFMT_A16B16G16R16F based on what's needed.
 
Question: Would it be possible to do the YV12 to YV24 conversion via shader as well? The problem is that the U and V planes are half the size of the Y plane in each dimension, so they need to be resized.
 
I see that there are texture formats such as D3DFMT_A8 that may allow passing a single plane, so it may be possible for a shader to take in all 3 planes as separate samplers.
 
Would that be possible? If so, what HLSL code would I need to perform such a conversion?
 
If I take in the 3 planes separately and merge them into one texture, I believe this should work in theory. But then I need HLSL code to perform the chroma resizing in both directions. So far I have a bicubic HLSL resize shader, but I find it softens the chroma too much. Testing in AviSynth, the Spline36 resizer works better, but I don't have a Spline36 shader. Does anyone have code that would work for this purpose?
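 
To illustrate what I have in mind, here is a rough, untested sketch of the merge pass, assuming the three planes are bound as separate D3DFMT_L8 textures on samplers s0 to s2 and the chroma samplers use D3DTEXF_LINEAR, so the hardware does a plain bilinear upscale of the half-size U and V planes (not the Spline36 quality I'm after):

// Untested sketch: merge separate Y, U and V planes into one 4:4:4 output.
// D3DFMT_L8 replicates the sampled value into .rgb, so .r holds the plane value.
sampler2D YPlane : register(s0);
sampler2D UPlane : register(s1);
sampler2D VPlane : register(s2);

float4 MergePlanes(float2 tex : TEXCOORD0) : COLOR0
{
    float y = tex2D(YPlane, tex).r;
    float u = tex2D(UPlane, tex).r; // same normalized coordinates; the half-size
    float v = tex2D(VPlane, tex).r; // U/V textures get upscaled by the sampler filter
    return float4(y, u, v, 1.0);    // alpha is unused
}

The chroma quality would then depend entirely on the sampler filter, which is why I would still need a proper resize shader.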
 
Thanks
 
 
Edit: I'm trying to implement it, but the code below fails on the StretchRect call. How can I transfer a single-plane texture from the CPU to the GPU?

HRESULT D3D9RenderImpl::CopyAviSynthToBuffer(const byte* src, int srcPitch, int index, int width, int height, IScriptEnvironment* env) {
	// Copies source frame into main surface buffer, or into additional input textures
	if (index < 0 || index >= maxTextures)
		return E_FAIL;
	CComPtr<IDirect3DSurface9> destSurface = m_InputTextures[index].Memory;

	D3DLOCKED_RECT d3drect;
	HR(destSurface->LockRect(&d3drect, NULL, 0));
	BYTE* pict = (BYTE*)d3drect.pBits;

	env->BitBlt(pict, d3drect.Pitch, src, srcPitch, width * m_ClipPrecision[index], height);

	HR(destSurface->UnlockRect());

	// Copy to GPU
	return (m_pDevice->StretchRect(m_InputTextures[index].Memory, NULL, m_InputTextures[index].Surface, NULL, D3DTEXF_POINT));
}

Edit2: Even if I manage to get the 3 planes in, it won't be possible to read the data back as YV12 because the shader can only return one texture. Unless I find a way to hack the YV12 data into an appropriate format? For every 4 pixels, YV12 has 4 Y samples, 1 U and 1 V, which means 6 bytes per 4 pixels, or 12 bits per pixel. Perhaps packing the YUV data into a format such as D3DFMT_A8P8? I see quite a few challenges there...



I'm not even able to call CreateOffscreenPlainSurface with formats such as D3DFMT_R8G8B8, so I can forget about this.

I was hoping to reduce memory transfers by not transferring the unused alpha channel, but it doesn't look like that's an option.


I found how to get this to work: replacing StretchRect with D3DXLoadSurfaceFromSurface lets me use D3DFMT_L8.
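 
For reference, the replacement for the StretchRect call at the end of CopyAviSynthToBuffer above looks roughly like this (it requires including d3dx9.h and linking against d3dx9.lib):

	// Copy to GPU: D3DXLoadSurfaceFromSurface accepts single-channel formats
	// such as D3DFMT_L8, where StretchRect was failing.
	return D3DXLoadSurfaceFromSurface(
		m_InputTextures[index].Surface,  // destination: GPU texture surface
		NULL, NULL,                      // no destination palette, full rect
		m_InputTextures[index].Memory,   // source: system-memory surface
		NULL, NULL,                      // no source palette, full rect
		D3DX_FILTER_NONE,                // surfaces are the same size, no filtering
		0);                              // no color key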
