SDL 2.0 - Enable VSync

3 comments, last by Kaptein 10 years, 3 months ago

The image containing the map tiles is approximately 11,000 pixels wide. When I load it into an SDL_Texture with software rendering, it works. When I use this, however, it does not:

screen = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_SOFTWARE);
Using SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_SOFTWARE without SDL_RENDERER_ACCELERATED does not work either. If I want the tiles to render, I must supply SDL_RENDERER_SOFTWARE only. How do I enable VSync in SDL 2.0 with software rendering, or why can I not store the image as a texture in the GPU, yet I can store it in the CPU just fine?

Thanks!


First off, I don't understand the question.

Why are you using both SDL_RENDERER_ACCELERATED and SDL_RENDERER_SOFTWARE? Use one or the other; I'm pretty sure they are mutually exclusive.

By the way, you can't store a texture in the CPU either; I think you mean in RAM. Although even that makes no sense, since you have to load it into VRAM to use it...

To enable VSync, make sure double buffering is enabled.
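For what it's worth, here is a minimal sketch of asking for VSync up front and then checking whether SDL actually granted it. SDL_SetHint with SDL_HINT_RENDER_VSYNC and SDL_GetRendererInfo are standard SDL 2.0 calls, but whether the software renderer honors the request depends on the platform and video backend:

/* request VSync before the renderer is created, then verify what we got */
SDL_SetHint(SDL_HINT_RENDER_VSYNC, "1");
SDL_Renderer *screen = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);

SDL_RendererInfo info;
if (SDL_GetRendererInfo(screen, &info) == 0) {
    SDL_Log("renderer: %s, vsync: %s", info.name,
            (info.flags & SDL_RENDERER_PRESENTVSYNC) ? "on" : "off");
}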

Thanks for the response.

I am using SDL without OpenGL, using SDL_RenderCopy to display graphics. Double buffering seems to be for OpenGL only: http://wiki.libsdl.org/SDL_GL_SetAttribute#Remarks

Tiles render, but VSync doesn't work:

screen = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);

Nothing renders (this should enable VSync alongside software rendering):

screen = SDL_CreateRenderer(window, -1, SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_SOFTWARE);

Everything renders except the tiles (a 2,000x8 image renders fine, but the full 11,000x8 image is necessary for this project):

screen = SDL_CreateRenderer(window, -1, SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_ACCELERATED);

What I am asking:

How do I (a) enable VSync in SDL 2.0 with software rendering, or (b) store the 11,000x8-pixel image as a texture on the GPU? It stores in RAM properly, since it renders with software rendering.
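For reference, a renderer will report both things if you ask it. A short sketch, assuming screen is the renderer created above (SDL_GetRendererInfo and its max_texture_width/max_texture_height fields are standard SDL 2.0):

SDL_RendererInfo info;
if (SDL_GetRendererInfo(screen, &info) == 0) {
    SDL_Log("renderer: %s", info.name);
    SDL_Log("vsync granted: %s",
            (info.flags & SDL_RENDERER_PRESENTVSYNC) ? "yes" : "no");
    /* 0 means no limit reported; many GPUs report 4096 or 8192 here,
       which is why an 11,000-pixel-wide texture can fail */
    SDL_Log("max texture size: %d x %d",
            info.max_texture_width, info.max_texture_height);
}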

There is a texture size limit on GPUs. A common limit is 4096 x 4096. Break your image into smaller sections and it should work fine.
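A rough sketch of that splitting step, assuming the image is first loaded into an SDL_Surface (the file name, the 2048-pixel chunk width, and the parts array are placeholders; the rest is standard SDL 2.0 API):

/* needs <SDL.h> and <stdlib.h>; screen is the renderer from the posts above */
enum { CHUNK_W = 2048 };                              /* stays under common GPU limits */

SDL_Surface *sheet = SDL_LoadBMP("tiles.bmp");        /* or IMG_Load(...) */
SDL_SetSurfaceBlendMode(sheet, SDL_BLENDMODE_NONE);   /* straight copy, no blending */

int chunks = (sheet->w + CHUNK_W - 1) / CHUNK_W;
SDL_Texture **parts = malloc(chunks * sizeof *parts);

for (int i = 0; i < chunks; i++) {
    int w = (i == chunks - 1) ? sheet->w - i * CHUNK_W : CHUNK_W;
    SDL_Rect src = { i * CHUNK_W, 0, w, sheet->h };

    /* copy one vertical slice into a small surface, then upload it */
    SDL_Surface *slice = SDL_CreateRGBSurface(0, w, sheet->h,
            sheet->format->BitsPerPixel,
            sheet->format->Rmask, sheet->format->Gmask,
            sheet->format->Bmask, sheet->format->Amask);
    SDL_BlitSurface(sheet, &src, slice, NULL);
    parts[i] = SDL_CreateTextureFromSurface(screen, slice);
    SDL_FreeSurface(slice);
}
SDL_FreeSurface(sheet);

When drawing, pick whichever chunk(s) overlap the camera and SDL_RenderCopy each one with the matching source rectangle.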

Create a blitter function for your image, blit whatever parts you need into pixel arrays, and then upload those to the GPU.

It's also possible to upload the entire thing to the GPU and fetch whatever parts of it you need, but that is an advanced topic for another day.

There is nothing wrong with just uploading what you need (what you will be seeing).
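A sketch of that approach with a single view-sized streaming texture. Here the "pixel array" is just a pointer into the big surface kept in RAM (sheet, as in the earlier snippet), using its pitch to skip the rest of each row; view_w and cam_x are placeholders, and the surface format is assumed to be a plain 32-bit one the renderer accepts:

/* one view-sized texture, updated each frame with just the visible slice */
int view_w = 640;                                  /* width of the visible area */
SDL_Texture *view = SDL_CreateTexture(screen, sheet->format->format,
        SDL_TEXTUREACCESS_STREAMING, view_w, sheet->h);

int cam_x = 0;                                     /* left edge of the camera, in image pixels */
const Uint8 *src = (const Uint8 *)sheet->pixels
                 + cam_x * sheet->format->BytesPerPixel;
SDL_UpdateTexture(view, NULL, src, sheet->pitch);  /* copies view_w x h pixels starting at cam_x */
SDL_RenderCopy(screen, view, NULL, NULL);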

I don't think you need this, but if you need a zoomed-out view of your image, that is something you can discuss in the Graphics Programming forum later on. However, since your image is only 8 pixels in height, I don't see what good it would do.

