gpu_noob

Members
  • Content count: 10
  • Community Reputation: 114 Neutral

About gpu_noob
  • Rank: Member
  1.   I'm feeding it to a video encoding algorithm. I don't really want a plain copy of the pixels; rather, I want to reformat the interleaved pixel data in system memory from
         B G R A B G R A ...
       to planar form:
         R R R R R R R R ...
         G G G G G G G G ...
         B B B B B B B B ...
         A A A A A A A A ...
       so that I can feed the red channel, which carries the luminance information of the image, to the video encoding algorithm.
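       A minimal sketch of that interleaved-to-planar split, assuming a tightly packed BGRA buffer with no row padding (the function and variable names here are only illustrative):

         #include <cstdint>

         // Split a packed BGRA image into four separate planes.
         void DeinterleaveBGRA(const uint8_t* bgra, int width, int height,
                               uint8_t* b, uint8_t* g, uint8_t* r, uint8_t* a)
         {
             const int pixelCount = width * height;
             for (int i = 0; i < pixelCount; ++i)
             {
                 b[i] = bgra[i * 4 + 0];
                 g[i] = bgra[i * 4 + 1];
                 r[i] = bgra[i * 4 + 2];   // red plane, i.e. the luminance data
                 a[i] = bgra[i * 4 + 3];
             }
         }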
  2.   I just used byte* r = new byte[Height*Width] and delete[] r outside the render loop, and I get about a 0.2 ms reduction. I'm running on an AMD Phenom 945. Also, I'm getting about 5 ms when the render target is around 1920x1080.
  3.   I need to do this in real time, at up to a 60 FPS frame rate. How can I test whether GetRenderTargetData has completed? I'm using the following lines of code
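       One common way to check for completion in D3D9 is an event query: as far as I know GetRenderTargetData itself is a blocking call, but a D3DQUERYTYPE_EVENT query tells you when the GPU has finished everything issued before it, so the readback no longer has to wait. A rough sketch (device is assumed to be the IDirect3DDevice9 pointer):

         // Create once.
         IDirect3DQuery9* query = NULL;
         device->CreateQuery(D3DQUERYTYPE_EVENT, &query);

         // ... issue rendering commands ...

         query->Issue(D3DISSUE_END);

         // Poll until the GPU has processed everything up to the query.
         while (query->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
         {
             // Other CPU work could go here instead of busy-waiting.
         }

         // The render target is now up to date, so GetRenderTargetData
         // should not stall on outstanding GPU work.
         query->Release();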
  4. I'm not sure how to profile properly, but I used QueryPerformanceCounter to check the execution times: LockRect (0.003 ms), GetRenderTargetData (0.5 ms), extracting the red channel (5 ms). The BGRA image actually holds YUV-formatted data in which the red channel is the luminance. I need to record the red-channel data because it contains Y, which is used by a video encoding algorithm.
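     For reference, a minimal QueryPerformanceCounter timing pattern (variable names are illustrative). Note that timing API calls this way measures how long the CPU waits, which for GPU work is not necessarily how long the GPU itself takes:

         #include <windows.h>

         LARGE_INTEGER freq, t0, t1;
         QueryPerformanceFrequency(&freq);

         QueryPerformanceCounter(&t0);
         // ... code to measure, e.g. device->GetRenderTargetData(rtSurface, sysmemSurface); ...
         QueryPerformanceCounter(&t1);

         double elapsedMs = (t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;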
  5. This is indeed a bottleneck for me. I suspect it comes down to how many memory read/write operations happen at a low level. The problem is that it uses a lot of CPU power for larger images (1000x1000). As an alternative, how can I use the GPU to obtain only the red channel? I'm currently rendering the BGRA bitmap image in Direct3D9, bringing it into system memory with GetRenderTargetData() and LockRect(), and then copying the red channel using the method above. Is there any Direct3D way of copying only the red channel to system memory?
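     One possible approach (a sketch, not something spelled out in the thread): render the red channel into a single-channel render target and read that back, so only width*height bytes cross the bus. This assumes the device supports D3DFMT_L8 as a render-target format, which is worth verifying with CheckDeviceFormat; device, width and height are placeholder names:

         // Single-channel (8-bit) render target plus matching system-memory surface.
         IDirect3DTexture9* redTexture = NULL;
         IDirect3DSurface9* redSurface = NULL;
         IDirect3DSurface9* redSysmem  = NULL;

         device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                               D3DFMT_L8, D3DPOOL_DEFAULT, &redTexture, NULL);
         redTexture->GetSurfaceLevel(0, &redSurface);

         device->CreateOffscreenPlainSurface(width, height, D3DFMT_L8,
                                             D3DPOOL_SYSTEMMEM, &redSysmem, NULL);

         // Each frame: draw a full-screen quad into redSurface with a pixel shader
         // that samples the BGRA texture and outputs only its red component, then:
         device->GetRenderTargetData(redSurface, redSysmem);

         D3DLOCKED_RECT lr;
         redSysmem->LockRect(&lr, NULL, D3DLOCK_READONLY);
         // lr.pBits now holds one byte per pixel (mind lr.Pitch per row).
         redSysmem->UnlockRect();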
  6.   I'm not sure what you mean by copying a red render target to RAM. I thought render targets have to be 32-bit aligned. Is there an example of how to extract only the red channel from a render-target texture using pixel shaders?
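     A minimal example of such a pixel shader, sketched here as a ps_2_0 HLSL source string embedded in the C++ code (the names are made up; it assumes the full BGRA frame is bound to sampler 0 and a full-screen quad is drawn into the single-channel target):

         // Pixel shader that outputs only the red channel of the source texture.
         // Rendered into a D3DFMT_L8 target, this leaves one byte per pixel.
         const char* g_ExtractRedPS =
             "sampler2D srcTex : register(s0);              \n"
             "float4 main(float2 uv : TEXCOORD0) : COLOR0   \n"
             "{                                             \n"
             "    float r = tex2D(srcTex, uv).r;            \n"
             "    return float4(r, r, r, r);                \n"
             "}                                             \n";

         // Compile once at startup, e.g. with D3DXCompileShader(..., "main", "ps_2_0", ...),
         // then bind it with device->SetPixelShader() while drawing the full-screen quad.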
  7.   Would it be possible to transfer the single-channel render-target data to system memory via D3D10 CopyResource or D3D9 GetRenderTargetData? I need to be able to access the red channel on the CPU.
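     In D3D10 the usual pattern (again only a sketch; redRenderTarget, device, width and height are placeholder names) is to copy the render target into a staging texture of the same format and then Map it. With a DXGI_FORMAT_R8_UNORM render target that gives one byte per pixel on the CPU side:

         // Readback of a single-channel render target via a staging texture.
         D3D10_TEXTURE2D_DESC desc = {};
         desc.Width            = width;
         desc.Height           = height;
         desc.MipLevels        = 1;
         desc.ArraySize        = 1;
         desc.Format           = DXGI_FORMAT_R8_UNORM;   // must match the render target
         desc.SampleDesc.Count = 1;
         desc.Usage            = D3D10_USAGE_STAGING;
         desc.CPUAccessFlags   = D3D10_CPU_ACCESS_READ;

         ID3D10Texture2D* staging = NULL;
         device->CreateTexture2D(&desc, NULL, &staging);

         // Each frame: GPU-side copy, then map for CPU access.
         device->CopyResource(staging, redRenderTarget);

         D3D10_MAPPED_TEXTURE2D mapped;
         staging->Map(0, D3D10_MAP_READ, 0, &mapped);
         // mapped.pData holds one byte per pixel, rows separated by mapped.RowPitch.
         staging->Unmap(0);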
  8. byte* bgra = ...;   // byte array of a BGRA-formatted bitmap image
     byte* r = new byte[Height*Width];
     for (int i = 0; i < Height; i++)
     {
         for (int j = 0; j < Width; j++)
         {
             int offset = i*Width + j;
             r[offset] = bgra[offset*4 + 2];   // R component of pixel (i, j)
         }
     }
     delete[] r;

     I'm using the above code to obtain the red-channel values from a byte array of a BGRA bitmap image. The image is laid out as B G R A B G R A ... (size W*H*4). I want to obtain a byte array of R R R R ... (size W*H). Is there a more efficient way of doing this than plain for loops?
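     One way to speed this up on the CPU (a sketch, assuming an SSSE3-capable processor and a tightly packed BGRA buffer) is to pull the red byte out of four pixels at a time with a byte shuffle:

         #include <cstring>
         #include <tmmintrin.h>   // SSSE3: _mm_shuffle_epi8

         // Extract the red byte of every BGRA pixel, four pixels per iteration.
         void ExtractRedSSSE3(const unsigned char* bgra, unsigned char* r, int pixelCount)
         {
             // Pick bytes 2, 6, 10, 14 (the R of each BGRA pixel) into the low 4 bytes.
             const __m128i mask = _mm_set_epi8(
                 (char)0x80, (char)0x80, (char)0x80, (char)0x80,
                 (char)0x80, (char)0x80, (char)0x80, (char)0x80,
                 (char)0x80, (char)0x80, (char)0x80, (char)0x80,
                 14, 10, 6, 2);

             int i = 0;
             for (; i + 4 <= pixelCount; i += 4)
             {
                 __m128i px  = _mm_loadu_si128((const __m128i*)(bgra + i * 4)); // 4 BGRA pixels
                 __m128i red = _mm_shuffle_epi8(px, mask);                      // RRRR in low dword
                 int red32   = _mm_cvtsi128_si32(red);
                 std::memcpy(r + i, &red32, 4);
             }
             for (; i < pixelCount; ++i)          // remaining pixels
                 r[i] = bgra[i * 4 + 2];
         }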
  9. GPU Readback in YUV format

    Thank you. I'm trying to convert the RGB data into YUV420 (specifically I420). libyuv only seems to have NV12. I'm a newbie in GPU / DirectX programming, so I'm not really sure how to work with YUV textures (filling / writing to them) or how to write pixel shaders. Are there any good tutorials on this? Here's what I'm planning to do:

       // Create 3 8-bit textures (luminance, blue chrominance, red chrominance)
       d3d->CreateTexture(width,   height,   1, 0, D3DFMT_L8, D3DPOOL_DEFAULT, &yTexture, NULL);
       d3d->CreateTexture(width/2, height/2, 1, 0, D3DFMT_L8, D3DPOOL_DEFAULT, &uTexture, NULL);
       d3d->CreateTexture(width/2, height/2, 1, 0, D3DFMT_L8, D3DPOOL_DEFAULT, &vTexture, NULL);

       // Use a pixel shader to calculate the Y, U, V texture pixel values
       // (not really sure how to do this yet)

       // LockRect yTexture and copy pBits; repeat for uTexture and vTexture
       // (can they even be LockRect'ed, since they're in video memory, and GetRenderTargetData may not work?)

    Am I on the right track or completely wrong?
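    A couple of notes on that plan, sketched under two assumptions: the formulas below are the usual (rounded) BT.601 full-range ones, and the device supports D3DFMT_L8 render targets. Textures a pixel shader renders into need D3DUSAGE_RENDERTARGET, and D3DPOOL_DEFAULT render targets can't be locked directly; they have to be copied into D3DPOOL_SYSTEMMEM surfaces with GetRenderTargetData first (ySysmem below is a hypothetical name for such a surface):

       // Y-plane render target plus matching system-memory surface.
       // The U and V planes would be set up the same way at width/2 x height/2.
       d3d->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_L8, D3DPOOL_DEFAULT, &yTexture, NULL);
       d3d->CreateOffscreenPlainSurface(width, height, D3DFMT_L8,
                                        D3DPOOL_SYSTEMMEM, &ySysmem, NULL);

       // Per-pixel conversion the Y/U/V pixel shaders would implement (BT.601, full range, 8-bit values):
       //   Y =  0.299*R + 0.587*G + 0.114*B
       //   U = -0.169*R - 0.331*G + 0.500*B + 128
       //   V =  0.500*R - 0.419*G - 0.081*B + 128

       // Each frame: render a full-screen quad into yTexture's level-0 surface with the
       // Y shader, then GetRenderTargetData(ySurface, ySysmem) and LockRect ySysmem to read pBits.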
  10. Hello, I'm trying to capture each frame of the renderer in YUV420P format for encoding. I don't really understand how to do this yet. Right now I'm getting the data in RGB format like this:

       // initialize
       d3d->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &backBuffer, NULL);
       backBuffer->GetSurfaceLevel(0, &backBufferSurface);
       d3d->CreateOffscreenPlainSurface(width, height, D3DFMT_X8R8G8B8, D3DPOOL_SYSTEMMEM, &tempSurface, NULL);
       d3d->GetRenderTargetData(backBufferSurface, tempSurface);

       // in render loop
       D3DLOCKED_RECT lr;
       tempSurface->LockRect(&lr, 0, D3DLOCK_READONLY);
       // Do processing with lr.pBits
       tempSurface->UnlockRect();

    From what I understand from reading other threads, I can either:
      - convert lr.pBits to Y, U and V planes on the CPU, or
      - use a pixel shader to convert the RGB texture to YUV textures and LockRect those to get pBits in YUV.

    I'm really not sure how to do either of these. Could someone please explain how I would go about doing either of them, or whether there is a better method (perhaps using DXVA)?
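    For the CPU route, a minimal sketch of converting the locked X8R8G8B8 data to I420 planes. Assumptions: rounded BT.601 full-range coefficients, 2x2-averaged chroma, even width and height, and srcPitch taken from lr.Pitch; the function name is illustrative:

       #include <cstdint>

       // Convert locked X8R8G8B8 (B,G,R,X in memory) pixels to I420 (Y, U, V planes).
       void BGRXToI420(const uint8_t* src, int srcPitch, int width, int height,
                       uint8_t* yPlane, uint8_t* uPlane, uint8_t* vPlane)
       {
           for (int row = 0; row < height; ++row)
           {
               const uint8_t* p = src + row * srcPitch;
               for (int x = 0; x < width; ++x)
               {
                   int b = p[x * 4 + 0], g = p[x * 4 + 1], r = p[x * 4 + 2];
                   yPlane[row * width + x] = (uint8_t)((299 * r + 587 * g + 114 * b) / 1000);
               }
           }

           // Chroma: one U and one V sample per 2x2 block, averaged.
           for (int row = 0; row < height; row += 2)
           {
               for (int x = 0; x < width; x += 2)
               {
                   int r = 0, g = 0, b = 0;
                   for (int dy = 0; dy < 2; ++dy)
                       for (int dx = 0; dx < 2; ++dx)
                       {
                           const uint8_t* p = src + (row + dy) * srcPitch + (x + dx) * 4;
                           b += p[0]; g += p[1]; r += p[2];
                       }
                   r /= 4; g /= 4; b /= 4;
                   int ci = (row / 2) * (width / 2) + (x / 2);
                   uPlane[ci] = (uint8_t)((-169 * r - 331 * g + 500 * b) / 1000 + 128);
                   vPlane[ci] = (uint8_t)(( 500 * r - 419 * g -  81 * b) / 1000 + 128);
               }
           }
       }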