How do I convert an array to an accelerated texture?

2 comments, last by Nik02 9 years, 3 months ago

I have the following code. It blasts 32-bit pixels to the screen from an array. The array is filled by the CPU rather than the graphics card, and I hear doing this via a texture would be faster in the long run. How can I move the array into an accelerated texture?

 
UCHAR buffer32[SCREEN_WIDTH * SCREEN_HEIGHT * 4];
 
void plotPixel32(int x, int y, int r, int g, int b, int a = 255)
{
    int i = 4 * (y * SCREEN_WIDTH + x);  // byte offset of the pixel
    buffer32[i + 0] = r;
    buffer32[i + 1] = g;
    buffer32[i + 2] = b;
    buffer32[i + 3] = a;
}
 
void Game_Render()
{    
    gl.clear();  // clear the back buffer

    for (int i = 0; i < 5000; i++)
        plotPixel32(rand() % SCREEN_WIDTH, rand() % SCREEN_HEIGHT,
                    rand() % 256, rand() % 256, rand() % 256);

    // Blit the buffer; flip vertically so row 0 lands at the top-left of the screen.
    glRasterPos2f(0, 1);
    glPixelZoom(1, -1);
    glDrawPixels(SCREEN_WIDTH, SCREEN_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, buffer32);
 
    gl.present();  // swap buffers
}

The way you are doing this right now, you are creating the pixels in system memory (RAM). On most modern computers, the graphics card keeps textures in its own dedicated memory, so you have to copy your buffer from one memory to the other every frame to get your desired effect. That per-frame copy is inefficient, but good enough for one-off use.

A better approach, then, is to not create the array in System Memory but to do things directly on the graphics device. There are two approaches that I know of for this:

1. Use shaders. Use the vertex shader to draw a rectangle the size of the screen, and generate your random colors in the pixel shader.

OR

2. Use OpenCL. OpenCL runs C-like kernel programs on the GPU, and it can be integrated with your graphics pipeline as well. There are tutorials out there for this - I'm not too familiar with how to do this myself.
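For option 1, here's a minimal sketch of the pixel-shader route. The GLSL below is an illustration, not code from this thread: the `hash` function and the `u_time` uniform are assumptions, just one common way to get per-pixel pseudo-random values without the CPU touching any pixel buffer.

```glsl
#version 330 core
// Fragment shader: runs once per pixel on the GPU.
// Draw it over a screen-sized quad/triangle from the vertex stage.
out vec4 fragColor;
uniform float u_time;  // vary per frame so the noise animates

// Cheap hash-based pseudo-random value in [0,1) -- an assumption,
// any per-pixel noise function would do here.
float hash(vec2 p)
{
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

void main()
{
    vec2 p = gl_FragCoord.xy + vec2(u_time);
    fragColor = vec4(hash(p), hash(p + 1.0), hash(p + 2.0), 1.0);
}
```

The vertex shader side only needs to emit a full-screen rectangle; no per-frame data crosses the CPU/GPU boundary at all.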

Thanks for the thoughts. What I'm mostly wondering is whether this is possible:

1) Create a texture in video memory

2) Copy the initial data to it (usually all 0's, black)

3) Obtain a pointer to the texture's data for manipulation.
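The steps above might look like the sketch below in classic OpenGL. This is an illustration only: the function names `CreateTexture` and the quad-drawing code are assumptions about the surrounding application, and the plotting still happens on the CPU with one upload per frame.

```cpp
// Assumed globals from the original post: SCREEN_WIDTH, SCREEN_HEIGHT, buffer32.
GLuint tex = 0;

void CreateTexture()
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Step 1+2: allocate video-memory storage once, initialized from
    // buffer32 (all zeros = black).
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer32);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}

void Game_Render()
{
    // ... plotPixel32 calls into buffer32 as before (still CPU work) ...

    // Step 3 equivalent: re-upload the changed pixels each frame.
    // glTexSubImage2D updates the existing storage without reallocating.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
                    GL_RGBA, GL_UNSIGNED_BYTE, buffer32);

    // Draw a screen-sized textured quad (fixed-function style, to match
    // the glDrawPixels-era code earlier in the thread).
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 1); glVertex2f(-1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1,  1);
        glTexCoord2f(0, 0); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```

This mainly replaces glDrawPixels with a texture-upload path that drivers tend to optimize better; it does not move the plotting itself onto the GPU.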

You'd still be doing the "manipulation" step on the CPU, so that wouldn't accelerate the process at all.

Any acceleration you get from the GPU comes from performing the drawing logic on the GPU itself. Copying the data back and forth (CPU<->GPU) not only fails to accelerate things, it actually makes them slower.

Niko Suni

This topic is closed to new replies.
