Renran

Graphics Mode programming SDL..... GDI.....


Hello,

I have a small terrain program in SDL + OpenGL, but I thought its speed was not so good (120 fps). So I did some searching into programming for 16-bit DOS, but VGA is limited, as most posts on this site say, so 16-bit DOS is not a better way. Then I tried GDI programming, which is too slow according to posts on this site.

But what if I want to write some fast code (in asm, for instance) to use inside SDL, the same way OpenGL exists inside SDL? Or write fast code for the GDI? What I want to know is: how can I write fast code in SDL or GDI without going through wrapper statements such as SetPixel(hdc, 100, 100, RGB(255,0,0)); (a GDI call), but instead using my own code in asm or C++, for example a fast triangle-drawing routine?

Where can I find low-level info? The "Focus on SDL" book only gives high-level (SDL statement) info. So how can I program at a low level like OpenGL (but with my own code) in SDL or GDI, just like we used to write our own code for DOS in the old days? Books, info...?

Hope you understand!

Thanks,
Renran
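For reference, a minimal sketch of what "own code" pixel access looks like with the plain SDL 1.2 surface API that "Focus on SDL" covers; the 640x480 software surface and the plot_pixel helper are illustrative assumptions, not code from this thread:

#include "SDL.h"

/* Write one pixel straight into the 32-bit framebuffer of a software surface. */
void plot_pixel(SDL_Surface *surface, int x, int y, Uint8 r, Uint8 g, Uint8 b)
{
    Uint32 *row = (Uint32 *)((Uint8 *)surface->pixels + y * surface->pitch);
    row[x] = SDL_MapRGB(surface->format, r, g, b);
}

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);

    if (SDL_MUSTLOCK(screen))
        SDL_LockSurface(screen);            /* required before touching pixels directly */

    for (int x = 0; x < 640; ++x)
        plot_pixel(screen, x, 240, 255, 0, 0);   /* horizontal red line, no wrapper call per pixel */

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);

    SDL_Flip(screen);                        /* present the finished frame */
    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}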

From my understanding, in Windows, thou shalt not touch the hardware directly. Have you found any spots in your current code that might be slowing it down?

Hey,

Yes, I know, and you are right!
But did we lose all of our freedom by choosing Windows as an OS?
Did we give up the creativity of making our own home-made graphics libraries?
Do we have to stick with OpenGL or DirectX and never again see the open space of creativity that something like DOS gave us?
See for instance: http://www.zephyrsoftware.com/order/zchgcc.html
or: http://www.nondot.org/sabre/graphpro/
Home-made, and still working under XP.
But we gave it all up. It's gone!
Can you make a new graphics library? No, you cannot!
That does not mean I don't use OpenGL or DirectX.
I like them, but they are a wall, an end, aren't they?
So why shouldn't I try to overcome that wall and go further in freedom?
Why not make a whole new library on top of the GDI?
But where to start?
Even the assembly guys are using OpenGL or DirectX calls under Windows!
Just a thought!

Greetings and thank you,

Renran

Why reinvent the wheel? The entire purpose of things like OpenGL and DirectX is so you don't have to.

Do these APIs take away your creativity to make graphics? I don't see how they would. If anything, they can increase your creativity, because you can focus on being creative instead of designing low-level routines just to get pixels on the screen.

Quote:
Even the assembly guys are using OpenGL or DirectX calls under Windows!


Except you don't understand why.

If you rewrite SetPixel(hdc, 100, 100, RGB(255,0,0)) in assembly, it will be exactly as slow.

If you send an entire texture to the GPU instead, it will be 100,000 times as fast; the cost of the OGL function call will be too small to measure.

The reason everyone is using OGL is that, used *properly*, it leaves the CPU running at 0% while the GPU does all the work. Not even the most optimized assembly can reach 1% of the performance of today's GPUs. The reason to use OGL is so that it is the GPU that does the work, not the CPU.

Also: GetPixel/SetPixel is the slowest possible method of modifying pixels.
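To make the comparison concrete, here is a minimal illustrative sketch (not code from this thread) of the "upload the whole image to the GPU" approach with the fixed-function OpenGL that SDL can set up; the 256x256 size and the pixels buffer are assumptions:

#include "SDL_opengl.h"   // pulls in the OpenGL headers the way SDL expects

// A CPU-side buffer you fill yourself (e.g. with your own rasterizer); assumed 256x256 RGBA.
unsigned char pixels[256 * 256 * 4];
GLuint tex;

// One-time setup: create a texture object and allocate GPU storage for it.
void init_texture(void)
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

// Every frame: one call moves all 65,536 pixels to the GPU at once,
// then the GPU stretches them over a quad; the CPU barely does anything.
void draw_frame(void)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
}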

If you want to directly modify pixels in a bitmap rather than use SetPixel(), use a device-independent bitmap (DIB). I was once looking for a fast alternative to SetPixel too, and this is what I came up with. You may be able to tweak it to work for you.



BITMAPINFO bmi = {};            // zero everything first, then fill in the fields we need
HBITMAP MyBitmap;
unsigned char * pPixelArray;    // this will become a pointer directly to the bitmap pixel data

bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = 800;
bmi.bmiHeader.biHeight = -600;  // negative height makes it a top-down DIB, so row 0 is the top of the image
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;  // 24 bit saves memory, but 32 bit is faster: a nice even 4 bytes per pixel
bmi.bmiHeader.biCompression = BI_RGB;
// The remaining fields (biSizeImage, the resolutions, the color counts) can stay zero for an uncompressed BI_RGB bitmap.

MyBitmap = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, (VOID**)&pPixelArray, NULL, 0);

And here is how you would write data to the bitmap:
int PixelLocation = 4 * (X + Y * 800);   // 4 bytes per pixel, 800 pixels per row

pPixelArray[PixelLocation]     = 255;    // blue
pPixelArray[PixelLocation + 1] = 255;    // green
pPixelArray[PixelLocation + 2] = 255;    // red
// pPixelArray[PixelLocation + 3] is the unused high byte in a 32-bit BI_RGB DIB
In my program, I select this bitmap into a memory device context and then use BitBlt to copy from the memory DC to the main window's DC.

If all you are doing is plotting pixels, this is about as fast as it gets for the effort; the hardware handles copying the bitmap to the screen during the BitBlt.
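A minimal sketch of that last step, assuming the MyBitmap created above and a window handle hwnd; the Present name and the 800x600 size are illustrative, not from the original post:

#include <windows.h>

// Present the DIB section: select it into a memory DC, then let BitBlt
// copy the whole 800x600 frame to the window's DC in one call.
void Present(HWND hwnd, HBITMAP MyBitmap)
{
    HDC hdcWindow = GetDC(hwnd);
    HDC hdcMem    = CreateCompatibleDC(hdcWindow);

    HBITMAP hOld = (HBITMAP)SelectObject(hdcMem, MyBitmap);
    BitBlt(hdcWindow, 0, 0, 800, 600, hdcMem, 0, 0, SRCCOPY);
    SelectObject(hdcMem, hOld);

    DeleteDC(hdcMem);
    ReleaseDC(hwnd, hdcWindow);
}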

Quote:
Original post by Renran
So how can I program at a low level like OpenGL (but with my own code) in SDL or
GDI, just like we used to write our own code for DOS in the old days?

You can't write low-level code with SDL, because SDL takes care of the low-level code for you. Essentially, what you want to do is rewrite SDL or OpenGL, which is not very plausible: they are so widely used that if it were possible for one person to easily make them faster, it would have been done already.

If Crytek could make Crysis in DirectX, and Carmack has been using OpenGL since Quake 2 (that is, as soon as the first consumer 3D hardware appeared on the market, perhaps even a bit sooner), then chances are this is the right choice, even performance-wise.
On your PC you essentially have two processing units. For graphical applications, one of the two is way, way more powerful than the other, so I believe using it is the best thing. You only need to use it appropriately...

Hey, thanks all!

CET Kaerf, I will try your suggestions.
And I know I will have to settle on OpenGL or DirectX; there is no other option.
But I think it is OK to keep thinking about faster graphics, isn't it?
Please look into:
http://www.asmcommunity.net/board/index.php?PHPSESSID=pjt9ubq96g2l4qq2ugjokr09p6&topic=28760.0
It is also trying to overcome these things, although it is about DirectX rather than OpenGL, which is what I use.
I wish we could have insight into programming the GPU directly, while still following the Windows API rules.
I know that the bottleneck is the link between the CPU and the GPU.
Unfortunately, the way to overcome it is undocumented. But it exists, it is there, right?
I think you understand me now.

Greetings,

Renran

The bottleneck is really the bus between the GPU and the motherboard... but I don't see how getting direct hardware access would make your graphics any faster.
