SDL - Alpha channel issues


Heya! I've been playing around with alpha transparency in SDL and I'm quite impressed by what it can do (and with so much ease, too!). However, I've run into a bit of a snag. As you can see from the window, I can only seem to set alpha transparency for the full surface, not just part of it (I'm trying to make the inside of the window 50% translucent).

The obvious solution would be to use a PNG or another format with an alpha channel, but that raises another issue: loading it. SDL itself only has a function for loading BMPs, and I don't want to add support for a format I only plan on using in one place (as I understand it, that would mean adding support for a host of decompression algorithms and the like; I'm not being lazy, I'm just trying to avoid something I really don't need).

All I really need to do is change 0x00101010 to 0x80101010 (and keep 0x00FF00FF as my color key, too). What can I do?

Dude, you need to be using SDL_image. Go to www.libsdl.org and get SDL_image. It supports all kinds of file types: PNG, BMP, and a bunch of others. Oh, and it's extremely easy to use. The difference is this:

surface = SDL_LoadBMP("image.bmp");

becomes this:

surface = IMG_Load("image.whatever");

That's the ONLY difference!
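
A minimal sketch of what loading through SDL_image can look like, with error checking (the filename is just a placeholder):

#include <stdio.h>
#include "SDL.h"
#include "SDL_image.h"

/* IMG_Load picks the decoder for you; PNGs keep their alpha channel. */
SDL_Surface *image = IMG_Load("window.png");   /* placeholder filename */
if (image == NULL) {
    fprintf(stderr, "IMG_Load failed: %s\n", IMG_GetError());
    /* bail out or fall back to a default graphic */
}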

First of all, you should know about SDL_image, which can load virtually any format you'd ever want to use. Get it from www.libsdl.org.

You can use alpha in two ways in SDL: per-surface or per-pixel. Per-surface alpha you set with SDL_SetAlpha. Per-pixel alpha you can set with, for example, the putpixel() function from the SDL docs, or any other function that sets pixel colors, or by accessing the pixels array directly.
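
To make the distinction concrete, here's a rough sketch of both approaches (SDL 1.2 style; "panel", "x" and "y" are assumed to exist, and the surface is assumed to be 32-bit):

/* Per-surface alpha: the whole surface blits at one opacity. */
SDL_SetAlpha(panel, SDL_SRCALPHA, 128);   /* roughly 50% opaque */

/* Per-pixel alpha: write RGBA values straight into a 32-bit surface. */
SDL_LockSurface(panel);
Uint32 *pixels = (Uint32 *)panel->pixels;
pixels[y * (panel->pitch / 4) + x] =
    SDL_MapRGBA(panel->format, 0x10, 0x10, 0x10, 0x80);   /* half-transparent dark grey */
SDL_UnlockSurface(panel);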


---
Just trying to be helpful.

Sebastian Beschke
Just some student from Germany
http://mitglied.lycos.de/xplosiff

Just be aware that anything with per-pixel alpha (PNG graphics) will completely destroy your frame rate if you have large images or many smaller ones. So while it's very nice, use surface-level alpha or no alpha at all in cases where you don't absolutely, positively require per-pixel alpha.

For example, take your box there. You want an edge and a middle. Load two images, one for each. Blit the interior BMP at 50% surface alpha, then the separate rim BMP with the interior blacked out (set the color key to full transparency on that color; SDL skips those pixels, so the blit is faster). Also, any fully transparent pixel is skipped (I believe), so you could do a BMP interior with surface-level alpha, then a second PNG rim that blends nicely on top (just make sure the center is 100% transparent so it doesn't blit all those pixels, which would be a waste of time).
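
A rough sketch of that two-layer idea (the surface and rect names are just placeholders):

/* Interior: one blit at 50% per-surface alpha. */
SDL_SetAlpha(interior, SDL_SRCALPHA, 128);
SDL_BlitSurface(interior, NULL, screen, &interiorRect);

/* Rim: the blacked-out middle is color keyed, so those pixels are skipped entirely. */
SDL_SetColorKey(rim, SDL_SRCCOLORKEY | SDL_RLEACCEL, SDL_MapRGB(rim->format, 0x00, 0x00, 0x00));
SDL_BlitSurface(rim, NULL, screen, &rimRect);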


[edited by - leiavoia on April 4, 2004 4:33:00 PM]

No, once loaded they are precisely the same. Well, almost.

It's often worthwhile to use SDL_DisplayFormat to convert ANY loaded image to the same pixel format as the screen (there's also an alpha version, SDL_DisplayFormatAlpha).

Yes, I've heard a lot about SDL_DisplayFormat()... but I don't understand what it does.

"DisplayFormat to convert ANY loaded image to the same pixel format as the screen (there's also an alpha version)"

What does it mean to convert it to the same pixel format? What is meant by pixel format? Are we talking about the bpp here, as in 32-bit, 24-bit, etc.? If so, all my images are 24-bit and my screen is 24-bit as well, so this shouldn't be needed, correct?

Well, it's like this:

SDL_image sets the format of the loaded surface to whatever suits the data in that image. If you load a PNG, it gives each pixel an alpha value. If you don't want that (and I hear your argument for less disk space), you need to convert the image to the screen format (boring 3-channel BMP style), which of course discards any pixel-level alpha the original image had.

You should get into the habit of converting every surface you load, even BMPs. I never thought it was of any value either, until I tried it and my frame rate for blitting a 1024x768 background BMP just about doubled. Try it and see.

If your screen is 32-bit and your image is not, each pixel has to get converted every time you blit. That's a lot! So convert everything to the proper format after loading. Not everyone runs a true-color screen.
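
A rough sketch of the load-then-convert habit (hypothetical helper; use SDL_DisplayFormatAlpha instead if you want to keep a PNG's per-pixel alpha):

/* Load any image SDL_image understands and convert it to the screen's pixel format up front. */
SDL_Surface* LoadConverted(const char* path)
{
    SDL_Surface* loaded = IMG_Load(path);
    if(loaded == NULL)
        return NULL;

    SDL_Surface* converted = SDL_DisplayFormat(loaded);   /* matches the SDL_SetVideoMode surface */
    SDL_FreeSurface(loaded);
    return converted;
}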

The above poster's explanation looks long, so I didn't read it.

The short of it is, it converts images loaded as BGR to the screen's RGB. It also does 24bpp->16bpp or whatever, as needed.

I _think_ it will also put the surface into video memory, if you used HWSURFACE or whatever it is when you called SetVideoMode.

In short, it'll probably make blitting a lot faster, in SOME cases.

quote:
Original post by leiavoia
Not everyone runs a true-color screen.


HEATHENS! o.o

I'm running into another issue right now. I've loaded the (newly converted to) PNG version of my system graphics, but... in short, it doesn't work.

What I do in my game is pretty simple. I have a bitmap containing three by three 16x16 tiles used to render the window, as well as the entire game font. I load this into a surface. I then have a window class and a label class, each of which creates an offscreen surface and renders either a window or some text onto it. Then I just call the window or label object's render method to update the display (which blits the surface to the specified position on the screen, saving the overhead of calculating the position of every letter or building the window/label out of a number of small blits).

This works fine.

The translucency, however, does not.

The alpha channel seems to stay at 0 all the time unless I set it myself, in which case it changes to whatever I set it to. Back to square one. Apparently the blit doesn't carry over the alpha channel? That rather sucks... is there something I can do? :/

Edit: the alpha channel on the loaded PNG is fine. The one on the rendered window or label is not; that's the one that doesn't seem to carry over when I blit parts of my system graphics. Which makes sense, I suppose... if it does per-pixel blending right away instead of storing the alpha channel in my offscreen surface along with the rest of the image data...

[edited by - RuneLancer on April 4, 2004 8:18:43 PM]

Hey guys, thanks for the help, but I'm still a little confused about what you mean by "screen". Are you talking about the user's desktop BPP, or the BPP of the screen you set up when calling SDL_SetVideoMode? Actually, this whole topic is confusing me. Let's say I call SetVideoMode with 24 BPP. Now, what if the user's desktop is set to 32-bit? Is that what you mean by it converting each image to 32-bit on the fly?

Or do you mean: what if you call SetVideoMode with 24-bit, and all the images you blit are 16-bit? Then it converts the images' BPP from 16 to 24 on the fly?

Which one do you mean? Now I'm really confused... So does the screen you set up have to match the user's desktop? Or do the images have to match the screen you set up? Please, someone clarify... thank you!

[edited by - graveyard filla on April 4, 2004 9:06:44 PM]

Here's a screenshot of what I have.

I've blitted the loaded system bitmap (well, PNG): translucency in the window is apparent. So obviously it loads fine.

Ignore the pink window, I haven't worked on the window class yet.

As can be seen from the label, though, translucency doesn't work at all (black should be 100% translucent; color-keyed in fact, though I'm using 100% translucency for it at the moment).

@RuneLancer

Well, when you blit to another surface, the target surface does not take on the properties of the source (blit-from) surface. All a blit does is "project" the contents of one surface onto another. Obviously you cannot "project alpha", i.e. blow holes into the target surface; if that were the case, you could never overlay one graphic on another, you'd just replace it every time. If you want the target surface to have the same per-pixel alpha info, you need to set that yourself (although it isn't something I usually do, so don't ask me to show you how. It's a lot of work; I think you have to go and set the alpha on every pixel every time).

@graveyard filla

When I say "the screen", I mean the user's desktop. Mine is set to 24-bit color (8 bits for each RGB channel). Some users only use 16. When you start SDL or create any new surface, you have to tell it what bit depth you are working at. If you create, say, a 32-bit surface and then blit it to a 16-bit surface (including the "screen" surface), or vice versa, everything has to be reconverted every time you blit. If your desktop is not at the same depth as your surfaces, you take another hit. Try it and see: set your desktop to 16-bit, use 32-bit when you start SDL, and watch what happens to your frame rate counter.

@RuneLancer

For your "temp surface", one alternative is to just create a whole new surface every time you change its contents, or to create a new surface from one you already have (i.e. a copy). I think it's called SDL_CreateRGBSurfaceFrom() or something to that effect. That way you don't have to reset every pixel every time you change the contents.

This is basically how font-rendering libs for SDL work. If you want a new font string surface, the lib just creates a whole new surface for you instead of mapping onto an old one. When you're through with it, you trash it and get a new one.

@leiavoia:
That's what I do. In short...

TextLabel::SetText(...)
{
    ...
    if(SDL_Label) SDL_FreeSurface(SDL_Label);

    // I load my string resource here.
    LoadString(hInst, ResID, Text, sizeof(Text));

    ...

    SDL_Label = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_SRCALPHA, W, H, 32,
                                     0xFF, 0xFF00, 0xFF0000, 0xFF000000);

    // I render my label's contents here...
    for(i = 0; i < (int)strlen(Text); i++)
    {
        ...
    }
}


And whenever I want to render, I just call my class's Render method.

TextLabel::Render()
{
    SDL_Rect DstRect = {X, Y, 0, 0};
    SDL_BlitSurface(SDL_Label, NULL, *SDL_Screen, &DstRect);
}


Simple as pie.

So there's really no "easy" way out, is there? Shame. There usually isn't, anyway. I guess I could just colorkey my image (for the transparent pixels) and, when building the surface, manually set the alpha of my black pixels to 0x80. Meh, not too hard... It probably won't slow down my window creation enough to matter.
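
Something like this is what I have in mind; a rough sketch, assuming the 32-bit SDL_Label surface, W and H from the code above, and the 0x101010 interior color from my first post:

/* Walk the offscreen surface and make the dark interior pixels half-transparent. */
SDL_LockSurface(SDL_Label);
for(int y = 0; y < H; y++)
{
    Uint32 *row = (Uint32 *)((Uint8 *)SDL_Label->pixels + y * SDL_Label->pitch);
    for(int x = 0; x < W; x++)
    {
        Uint8 r, g, b, a;
        SDL_GetRGBA(row[x], SDL_Label->format, &r, &g, &b, &a);
        if(r == 0x10 && g == 0x10 && b == 0x10)                      /* the 0x101010 interior color */
            row[x] = SDL_MapRGBA(SDL_Label->format, r, g, b, 0x80);  /* 50% translucent */
    }
}
SDL_UnlockSurface(SDL_Label);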

quote:
So there's really no "easy" way out, is there?


Well, *ahem*, there is...

It's called SDL_ttf. It uses FreeType2 to render .ttf fonts to SDL surfaces. They look nicer, and someone's already taken the time to code up a lib for you, so I'd suggest making use of their hard work. Plus, you get to use the thousands of free TTF fonts people have already made. It's not a very well documented lib, but it doesn't have many functions, so it's no big deal. It comes with a sample program you can dissect, but looking at the header file alone will give you everything you need to know.

SDL_ttf 2.0
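
For a taste of it, a minimal sketch (the font path, size and text are placeholders, and "screen" and "pos" are assumed to exist):

#include "SDL_ttf.h"

TTF_Init();
TTF_Font* font = TTF_OpenFont("somefont.ttf", 16);    /* placeholder font file and size */

SDL_Color white = { 255, 255, 255, 0 };
/* Blended rendering gives an antialiased 32-bit surface with per-pixel alpha. */
SDL_Surface* text = TTF_RenderText_Blended(font, "Hello, world!", white);

SDL_BlitSurface(text, NULL, screen, &pos);
SDL_FreeSurface(text);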

That doesn't solve the window issue. I have no problems at all with my label class if I use a colorkey.

Edit: Read the OP; the text is fine. The problem is getting part of my windows translucent (the inside). If they were a constant size it'd be fine, since I could just prerender a PNG, but they're not, so I have to create them on the fly. I'm using the Label class as an example because both use very similar code and essentially turn out the same with different graphics (well, the window is drawn with simpler code, but...). If I can't manipulate that PNG the way I intend to, neither works anyway.

But the real issue is the window. There IS an easy way out: set a colorkey and make every other pixel transparent. But that's pretty bleh.


[edited by - RuneLancer on April 5, 2004 1:10:58 AM]

Did you try overlapping two images like I suggested earlier? Basically, blit one surface at 50% surface alpha, then blit a border on top of it with the center keyed out. That doesn't make it very flexible (you'd still need fixed-size windows). You could do it the way you make tables in HTML, though: have 8 graphics arranged around the transparent center graphic,

topleft   top   topright
left             right
botleft   bot   botright

and have a class that arranges them based on your "window size" (a sketch of that is below). You manually place the corner graphics (their size never changes) and tile the sides so they span all the way across. Then do whatever you want with the center. Might work. No overlapping that way.
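
Something along these lines, roughly (the tile coordinates, surface names and the 16x16 tile size from earlier in the thread are all assumptions):

/* Hypothetical helper: blit one 16x16 tile (tx, ty) from the system graphic. */
void BlitTile(SDL_Surface* tiles, int tx, int ty, SDL_Surface* dst, int x, int y)
{
    SDL_Rect src = { tx * 16, ty * 16, 16, 16 };
    SDL_Rect pos = { x, y, 0, 0 };
    SDL_BlitSurface(tiles, &src, dst, &pos);
}

/* Corners are fixed; edges are tiled to span the window's width and height. */
void BuildWindowBorder(SDL_Surface* tiles, SDL_Surface* win, int w, int h)
{
    BlitTile(tiles, 0, 0, win, 0, 0);             /* top-left     */
    BlitTile(tiles, 2, 0, win, w - 16, 0);        /* top-right    */
    BlitTile(tiles, 0, 2, win, 0, h - 16);        /* bottom-left  */
    BlitTile(tiles, 2, 2, win, w - 16, h - 16);   /* bottom-right */

    for(int x = 16; x < w - 16; x += 16)
    {
        BlitTile(tiles, 1, 0, win, x, 0);         /* top edge     */
        BlitTile(tiles, 1, 2, win, x, h - 16);    /* bottom edge  */
    }
    for(int y = 16; y < h - 16; y += 16)
    {
        BlitTile(tiles, 0, 1, win, 0, y);         /* left edge    */
        BlitTile(tiles, 2, 1, win, w - 16, y);    /* right edge   */
    }
}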

[edited by - leiavoia on April 5, 2004 1:09:17 AM]

I could do it like that, but then the problem is getting variable window sizes. That, or rendering every possible window size used in the game. Which, come to think of it, isn't that many... There's the text window, the character name window, the character portrait window, and the "You found a (insert item name)!" window. The problem is the status screen, which has quite a few of them.

I'm thinking of doing something rather similar to what you propose, come to think of it. I'd have two surfaces: the translucent bit and the solid bit overlaid on top. It would mean two blits per render (per window), but I don't think it'd be too bad...

So, two options: that, or I set the alpha manually. With the former, I have a little more overhead rendering my windows (probably only noticeable if I had a few dozen onscreen at once; not happening). The manual method means a little more overhead when drawing the window, but blitting to the screen would be faster.

Hrm... I'll try both and post back with my results. I should be able to get it working now; thanks a bunch for your advice, everyone!

For the overlay method, remember that any pixel that is completely transparent or color keyed is skipped (no point blitting nothing, right?), so it's really not that much of an increase, especially if the two layers adjoin but do not overlap, if that makes any sense (you only draw the visible portions of the two layers, so you pay the price of one).

Like, layer one only draws these pixels:

*********
*       *
*       *
*********

layer two only draws these:

 *******
 *******

but combined, you get the visual appearance of:

*********
*********
*********
*********

which is what a single-layer blit would have done anyway.

quote:
Original post by leiavoia
@graveyard filla

When I say "the screen", I mean the user's desktop. Mine is set to 24-bit color (8 bits for each RGB channel). Some users only use 16. When you start SDL or create any new surface, you have to tell it what bit depth you are working at. If you create, say, a 32-bit surface and then blit it to a 16-bit surface (including the "screen" surface), or vice versa, everything has to be reconverted every time you blit. If your desktop is not at the same depth as your surfaces, you take another hit. Try it and see: set your desktop to 16-bit, use 32-bit when you start SDL, and watch what happens to your frame rate counter.


Hey, thanks for the reply. OK, so I made a function which uses two temp surfaces: the first one loads in an image, and the second one is that surface converted with SDL_DisplayFormat, which gets returned. But this doesn't stop my screen from being the wrong format, correct? Should I check which BPP the user currently has, and then set up my screen via SetVideoMode based on the user's BPP? I'm sure there must be functions that come with SDL to do this, right? Thanks for any help!

Use SDL_GetVideoInfo to get info on what the user has, then either let the user pick a resolution or switch by default to one the user supports.

You can then map everything to that particular color depth, and you won't have to worry about incompatibility issues as much as you would if you forced your own resolution (which is what I'm doing right now; even at 640x480, though, problems can occur, so I'll have to fix that eventually).

Hi,

This is what I tried, and it's making my program crash!!

// Note: SDL_GetVideoInfo() returns NULL if SDL's video subsystem hasn't been initialised yet.
const SDL_VideoInfo* current_screen = SDL_GetVideoInfo();

if(current_screen == NULL)
    printf("SDL_GetVideoInfo failed: %s\n", SDL_GetError());

int bpp = current_screen->vfmt->BitsPerPixel;   // BitsPerPixel is a Uint8

data.screen = SDL_SetVideoMode(800, 600, bpp, SDL_HWSURFACE | SDL_DOUBLEBUF);

Do you know what I'm doing wrong? At first I had put current_screen->vfmt->BitsPerPixel directly into the SetVideoMode call, but I thought maybe casting it to an int would help. It didn't, though. Do you know what I could be doing wrong? Thanks for any help.

Well, frankly, that looks OK to me... Just use a Uint8 instead of an int, though; that's what BitsPerPixel is supposed to be.

I doubt that'll fix it, though; I have no clue what could be wrong. It IS rather late, though. Or early, if you want to call it that.
