In SDL, how do I blit an image on the screen at floating-point coordinates?
I use SDL_BlitSurface, but its position parameter is an SDL_Rect, which, if I'm not wrong, has integer x and y members.
Float coordinates
You're going to have to convert them to integers, or more accurately, "whole pixel" values. There's no such thing as the 4.72th pixel from the left.
My guess is you want to use the floating-point coordinates to blit at interpolated locations? Usually blitting just puts pre-rendered objects (to include fully-rendered back buffers) onto the screen at screen-contextual coordinates.
If you're asking how to get from, say, normalized screen coordinates with (0, 0) as top-left and (1, 1) as bottom-right to pixel coordinates with (0, 0) as top-left and (1024, 768) as bottom-right:
If your coordinates are (0.5, 0.25), simply multiply them by the maximum absolute integer coordinates and convert to integer.
Or: ((int)(0.5 * 1024), (int)(0.25 * 768)).
If you want something more precise, you'll have to supply more information.
Well, uhm, I was asking because in a game I'm currently making, I want to set the gravity low (like 0.5), then add it to the velocity, then add the velocity to the coordinates from *destrect when blitting. I guess I'll just stick to integers for the moment.
Sorry if I'm misunderstanding, but can't you just keep track of coordinates as floats in the background, and only convert (cast, round, whatever) to integers when it is time to blit? This is typically what I do (admittedly never used SDL, but I don't think it matters).
If you really want to *blit*, you are stuck with integer coordinates. Neither SDL nor any underlying graphics API or hardware supports anything else.
However, blitting is only the second best thing you can do anyway. Draw a textured quad (or two triangles) instead. Using OpenGL through SDL is well supported.
Not only does this solve your coordinate issue, but it is also fully accelerated in hardware on every card sold in almost two decades.
The graphics card (driver?) does the same exact thing.
Also note that SDL 1.3 (now known as SDL 2) renders with OpenGL, so it's probably better to use that, unless you have a special case that needs manual optimization (like getting millions of sprites on the screen, which you will never really need).
"The graphics card (driver?) does the same exact thing."
1. No, it doesn't. It's a completely different thing.
"Also note that SDL 1.3 (now known as 2) renders with OpenGL"
2. Yes, but as I am trying to point out, there is a huge difference between e.g. glDrawPixels and glDrawElements.
One is deprecated OpenGL 1.2 functionality, an entirely different pipeline (with kernels, color matrix and whatnot; see the imaging subset) that has always been kind of half-heartedly supported.
The other is the fully accelerated, native way of drawing textured geometry (such as a quad) using dedicated hardware.
Which, on my system, makes a difference of roughly 1 to 10. Your mileage may vary.
Normally you keep track of the object's position/velocity etc... using floats, and convert them to integers each frame when rendering. In this way you get to keep float accuracy for computations, and you only convert back to integers when you actually color pixels.
There is a way to reach subpixel level, it's called multisampling, but you don't need it now. It's notably used to "smooth out" the edges in games (anti-aliasing), but can be used to simulate the effect of a displacement of, say, half a pixel. For instance, if you were to displace a black pixel, half a pixel to the left, you would end up (theoretically) with two adjacent, gray, pixels. If you were simulating a displacement of a quarter of a pixel, you would end up with one light gray pixel, and another darker pixel. Get the idea?
But you don't need it now. Just do your logic in floating-point and render as integers, that's how it's usually done when drawing directly into pixels.
Neither is deprecated, and with GPU-side pixel buffers I can imagine that drawing pixel rectangles is much more efficient than before. I have not looked into pixel drawing in the modern era, though, so I would like to know how modern use of pixel rectangles actually performs.
edit: Actually, I take part of the above back: glDrawPixels itself is in fact deprecated. I realized that shortly after posting. There is, however, glBlitFramebuffer for somewhat the same functionality instead.