# OpenGL Draw Image (Not Textures) to screen

## Recommended Posts

Hey guys, I might be asking something stupid, but the similarities between OpenGL and Direct3D9 are being swamped by the differences. In my project I have a bunch of image resources (specifically 565 16-bit images) that are usually not square or power-of-2, and I know that not all machines support non-square or non-power-of-2 textures. In OpenGL it's as simple as calling glDrawPixels()... but from my understanding of Direct3D9 so far, it may not be as simple as that for me. I did find a few pages regarding D3DX and sprites, but I'm not that far into fully understanding DirectX, and I'm just trying to learn plain D3D9 on its own for the time being. I also have little to no choice in changing the resources to work better as textures, nor am I able to link D3DX into my project (I'm using GNU GCC MinGW; it produces much smaller executables and they seem to run marginally faster than the MSVC++ 2008 executables, and they don't seem as prone to crashing with the bad programming done back in the late 90s on the game engine I'm trying to ramp up).

I'll give a few details as to why I'm in such a sticky situation with these restrictions: I've got the source code (legit) to an old game, and I've begun rewriting the D3D7 code into D3D9. So far there have been no Direct3D-related crashes on any system this game has been tested on since the upgrade. I'm still a ways off completing this, but before I move on to managing the textures for the game geometry, I want to have the user interface working (the default game code already prints quite a bit of useful information to the screen, and I need some of these image resources to work).

Basically, all I want to do is take the image data (whether it be RGBs, RGBAs, or 565) and print it onto the screen directly in a fashion that does not temporarily stall the game the way writing to the backbuffer does (locking the back buffer causes a stall that is considerably noticeable even at 640x480 resolution, which is absolutely unacceptable). Also note I'm on a 5-year-old machine, and I want this game to work on new computers but also old ones, so fixes that only work on new systems are unacceptable.

Also, this fits into the same subject: I want to be able to print a single pixel out on the screen. Don't worry, I don't intend to draw any more than 100 or so pixels manually, but if D3D7 could do it easily... why shouldn't D3D9 be able to? OpenGL has allowed me to do it as well.

Thank you all in advance if you answer, your help is appreciated when given :)

##### Share on other sites
Quote:
 Original post by RexHunter99:
Basically all I want to do is take the Image data (whether it be RGB's RGBA's or 565) and print it onto the screen directly in a fashion that does not temporarily stall the game as writing to the backbuffer does (Locking the back buffer causes a stall that is considerably noticeable even at 640x480 resolution, which is absolutely unacceptable.) Also note I'm on a 5 year old machine, and I want this game to work on new computers, but also old ones, so fixes that only work on new systems are unacceptable. Also, this fits into the same subject, I want to be able to print a single pixel out on the screen, don't worry, I don't intend to draw any more than 100 or so pixels manually, but if D3D7 could do it easily... why shouldn't D3D9 be able to? OpenGL has allowed me to do it as well.
Why use D3D9 for this then? It sounds like you either want DirectDraw, or you'll need to do a bit of work to get it to work on D3D9. I don't know why OpenGL would be faster, locking the backbuffer should be pretty much the same performance as calling glDrawPixels (So long as you lock with the correct flags).

I'd create a surface (not a texture), LockRect() it at the start of the frame, draw whatever you like into it manually, then UnlockRect() and call UpdateSurface to copy it to the backbuffer.
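A minimal sketch of that approach, assuming you already have a valid `IDirect3DDevice9* device` and a non-multisampled X8R8G8B8 backbuffer (the 640x480 size and format here are just illustrative, and all error handling is omitted):

```cpp
// Sketch only: assumes 'device' is a valid IDirect3DDevice9* and the
// backbuffer is a non-multisampled D3DFMT_X8R8G8B8 surface at 640x480.
// UpdateSurface requires the source in D3DPOOL_SYSTEMMEM and the
// destination (the backbuffer) in D3DPOOL_DEFAULT, with matching formats.
IDirect3DSurface9* sysmemSurface = NULL;
device->CreateOffscreenPlainSurface(640, 480, D3DFMT_X8R8G8B8,
                                    D3DPOOL_SYSTEMMEM, &sysmemSurface, NULL);

// Each frame: lock the system-memory surface (no GPU stall, unlike
// locking the backbuffer itself), write pixels into it, unlock.
D3DLOCKED_RECT lr;
sysmemSurface->LockRect(&lr, NULL, 0);
// ... convert/copy your RGB, RGBA, or 565 image data into lr.pBits,
//     advancing lr.Pitch bytes per row ...
sysmemSurface->UnlockRect();

// Let the driver copy it to the backbuffer asynchronously.
IDirect3DSurface9* backbuffer = NULL;
device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backbuffer);
device->UpdateSurface(sysmemSurface, NULL, backbuffer, NULL);
backbuffer->Release();
```

The key point is that the lock happens on a system-memory surface the GPU never touches directly, so the CPU doesn't have to wait for the GPU to finish with it.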

##### Share on other sites
How fast does that work? glDrawPixels() is fast enough that the framerate is visibly unaffected (it drops, but the human eye will never notice unless the hardware you run the game on is really old).

The original D3D7 code Locked the backbuffer surface, dropped the image onto it, then Unlocked the surface. Of course D3D is much different now than it was then... but I am trying as hard as I can to avoid pulling in any unnecessary libraries, and if the method you've supplied with UpdateSurface won't visibly affect an already good frame rate, then I'll use it.

##### Share on other sites
No, glDrawPixels is still slow, it was removed from modern versions of OpenGL for that reason. Using textures is the way to go.

##### Share on other sites
So what do I do about users who have Graphics Hardware that do not allow non-square/power-of-2 textures? waste texture space by generating a power of 2 (and square if I must) texture and Blt the data to that? That's a waste tbh and I'm trying my best to support new and old machines (said so above)

glDrawPixels() does not visibly stall my hardware which is quite old, so I'm happy with saying it's not too slow (yes, in truth it is slow, but in the application it isn't really slow enough to matter since I'm not doing anything advanced)

##### Share on other sites
Quote:
 Original post by RexHunter99:
The original D3D7 code Locked the backbuffer surface, dropped the image onto it, then Unlocked the surface, of course D3D is much different now than it was then... but I am trying as hard as I can to avoid putting in any unnecessary libraries and if the method you've supplied with UpdateSurface won't visibly affect an already good frame rate, then I'll use it.
D3D7 and D3D9 are pretty similar under the hood for direct backbuffer access - I wouldn't expect much in the way of performance differences from doing something like locking the backbuffer - which will be slow no matter how you do it.
UpdateSurface() is probably the fastest way to update the backbuffer directly.

Quote:
 Original post by RexHunter99:
So what do I do about users who have Graphics Hardware that do not allow non-square/power-of-2 textures? waste texture space by generating a power of 2 (and square if I must) texture and Blt the data to that? That's a waste tbh and I'm trying my best to support new and old machines (said so above)
If you're using the UpdateSurface method, you're not using textures, so this doesn't apply. The power-of-2 and square restrictions only apply to textures, not surfaces.
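And if you ever do go the texture route later, you don't have to assume the worst hardware; you can query the device caps once at startup. A sketch, assuming an already-created `IDirect3DDevice9* device` (the flag names below are the real D3D9 caps bits):

```cpp
// Sketch only: query the texture-size restrictions of an existing device.
D3DCAPS9 caps;
device->GetDeviceCaps(&caps);

bool pow2Only    = (caps.TextureCaps & D3DPTEXTURECAPS_POW2) != 0;
bool condNonPow2 = (caps.TextureCaps & D3DPTEXTURECAPS_NONPOW2CONDITIONAL) != 0;
bool squareOnly  = (caps.TextureCaps & D3DPTEXTURECAPS_SQUAREONLY) != 0;

// !pow2Only               -> full non-power-of-2 texture support
// pow2Only && condNonPow2 -> non-pow-2 works with restrictions
//                            (clamp addressing, no mipmaps, no wrapping)
// pow2Only only           -> pad dimensions up to powers of two
// squareOnly              -> additionally pad to width == height
```

That way the padding fallback only kicks in on the old machines that actually need it.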

##### Share on other sites
It'll be too much of a hassle to use UpdateSurface, I'd have to completely rebuild how the entire game works (and I will try out every option available to me before I begin rewriting around 6,000 lines of code out of 20,000+)

Looks like LPD3DXSPRITE might be something I'll have to use... now if only I could get the damn D3DX headers to work with MinGW in Code::Blocks >_>
Has this problem been solved before? IIRC MinGW dislikes the stray "SUB" character (0x1A) at the end of the header, and MinGW only ships with outdated standard d3d7/8/9 headers, no D3DX...
