robolee

Members
  • Content count: 18
  • Joined
  • Last visited

Community Reputation

111 Neutral

About robolee
  • Rank: Member
  1. Drastic performance loss

    Ah yes, taking out the Enable/disable calls; I should have thought of that. However, it made little if any difference. And the builds are exactly the same: same compiler settings, same code except for that draw code.

    The array rebuilding can't be avoided, to an extent: the tiles are drawn from the same texture but with differing sub-bitmap positions, and they aren't pre-set, so that array will have to be filled with different values for every tile regardless, unless I set hundreds of bytes of data aside... which I'd rather not do when immediate mode is so much faster, at around 300fps even when I put the array creation into it for no reason at all other than to see how much it would bog it down. And the draw code is generic, not just for drawing tilemaps; any further optimization in that direction (pre-set multi-array grids) would make it less useful for freely positioned sprites.

    Seeing as immediate mode is so much faster, and making this work with optimized array creation etc. would take more work than it's worth, I think it would be pointless. So is there a better way of doing it? A more OpenGL ES way? FBOs? (I'm too tired to look into that right now.)
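One way to cut the per-tile overhead discussed above, while keeping the draw code generic, is to fill a single client-side array for the whole tilemap and issue one glDrawArrays call per frame instead of one per tile. A minimal sketch (the `Tile` struct and `batch_tiles` name are illustrative, not from the actual code); the same array could also be uploaded into a VBO with glBufferData(GL_ARRAY_BUFFER, ..., GL_DYNAMIC_DRAW):

```cpp
#include <vector>

// Hypothetical tile description; field names are illustrative.
struct Tile { float x, y, w, h;        // screen position and size
              float u0, v0, u1, v1; }; // sub-bitmap texture coordinates

// Fill one interleaved array (x,y,u,v per vertex, 4 vertices per tile,
// GL_QUADS order) so the whole map can be submitted in a single draw call.
static void batch_tiles(const std::vector<Tile>& tiles, std::vector<float>& out)
{
    out.clear();
    out.reserve(tiles.size() * 16); // 4 vertices * 4 floats per tile
    for (const Tile& t : tiles) {
        const float v[4][4] = {
            { t.x,       t.y,       t.u0, t.v0 },
            { t.x + t.w, t.y,       t.u1, t.v0 },
            { t.x + t.w, t.y + t.h, t.u1, t.v1 },
            { t.x,       t.y + t.h, t.u0, t.v1 },
        };
        for (const auto& p : v)
            out.insert(out.end(), p, p + 4);
    }
    // Then, once per frame (stride = 4 floats, position first, texcoord after):
    //   glVertexPointer(2, GL_FLOAT, 4 * sizeof(float), out.data());
    //   glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(float), out.data() + 2);
    //   glDrawArrays(GL_QUADS, 0, (GLsizei)out.size() / 4);
}
```

The point is that the per-tile work becomes a handful of float writes, while the GL call count drops from one per tile to one per frame.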
  2. When using immediate-mode drawing I get roughly 500 fps (60 rendered frames per second, so roughly 9 frames of logic per rendered frame). Changing absolutely nothing other than switching to a vertex array and glDrawArrays, I drop down to roughly 120 fps. http://i.imgur.com/ToaW9Ur.png (linked because it's a pretty big image; also, "frames displayed last second" is a mislabel, it's no longer the number of displayed frames -used to be- it's the number of logic loops; the actual rendered fps is a solid 60.)   The reason I tried vertex arrays was that I thought they were supposed to be faster than immediate mode (a single operation stream instead of many separate calls). Am I doing something drastically wrong?   Would vertex buffer objects be faster? How would I implement them in the current code?   Note: I don't show any other code because the builds are otherwise identical, and I made sure of that.
  3. GL returns no errors using either GL_TEXTURE_2D or GL_TEXTURE_RECTANGLE_NV (just tested both). - "So, first of all I'll just say that it's all fully working 2d sprite rendering code... That is to say fully working for me". [i]If[/i] there's an issue, then I believe it is an issue of compatibility. Like I said, I think it's a problem on his end (dodgy driver, copying the wrong files, or some other user mistake); I just want to know whether I'm not crazy and it should be working, or whether there's some genuine compatibility problem. And of course texture rectangles aren't necessary; they just make the texture generation a little simpler (you use 1:1 pixel coordinates instead of normalized coordinates), but they aren't compatible with old hardware. However, this friend's hardware is relatively new, and [i]neither method[/i] worked. Texture rectangles were just a new thing I tried more recently, which is why they're in the main code instead of TEXTURE_2D. [b]Anyway[/b], has anyone got any advice on the second issue (rotation)?
  4. You can't do jack without the code, so I'll post it right off the bat:
     [url="http://pastebin.com/6C2RTMb2"]http://pastebin.com/6C2RTMb2[/url] - draw.h
     [url="http://pastebin.com/6ysGygxp"]http://pastebin.com/6ysGygxp[/url] - draw.cpp (using texture rectangles)
     [url="http://pastebin.com/R7Daf8n5"]http://pastebin.com/R7Daf8n5[/url] - draw.cpp (replacement functions for using texture_2d instead of texture rectangles)
     So, first of all, I'll just say that it's all fully working 2D sprite rendering code... that is to say, fully working [i]for me[/i]. My friend [i]claims[/i] that no matter what I change, my draws just render as white rectangles, despite the code fully working on my own machine (nvidia 8800gt) and on an old crappy laptop I own which only has Intel integrated graphics. I just want to know whether it should be working properly and it's simply a driver error/problem on his end. Beyond that, I would like general advice on how I could improve it and on what I can do to make it as compatible as possible.
     Secondly, when doing smooth rotation using linear interpolation on the min and mag filters, I get magenta fuzz around the sprite (magenta is set to 0 opacity on image load), and image zoom becomes linear as well. Nearest-neighbour zoom is good, but the rotation is horrible. Is it possible to get smooth rotation without the magenta fuzz while still scaling up with NN? I guess ideally what I want is AA for the edge of the sprite whilst using NN for body rotation and zoom, which I hadn't thought of until I just came to write it down. Is that easily achievable?
     Some notes (/brain farts - ignorable): texture rectangles seem a lot easier to set up, but support should be a lot less common, especially on older hardware. The last thing I was going to try with TEXTURE_2D was matching/square power-of-two width and height (32*32, 64*64, etc.); the last thing I actually did was make the width and height each a power of two, but rounded up separately (so it could be 32*64, 64*128, etc.), which I thought should be extremely compatible?
     Here's my self-written image loader if you want to view it: [url="http://pastebin.com/ejkj2hvm"]http://pastebin.com/ejkj2hvm[/url] At the moment it only loads and saves 24-bit BMPs; honestly I don't need any more functionality than that, and it was fun to learn. If I wanted to load more types I wouldn't try it myself; BMPs are far enough along the enjoyment/difficulty scale to implement. And just out of interest, would porting this code to OpenGL ES be straightforward? This is just a secondary question I don't plan to pursue any time soon, but my friend thinks it would be cool to port an app to the iPhone or iPad (though honestly I personally hate Apple products), and besides, it won't be happening for a while even if I decided I wanted to.
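The separate rounding-up of width and height described above can be sketched as a next-power-of-two helper (the function name is illustrative; the pastebin may do it differently), so a 200*90 image would be stored in a 256*128 texture:

```cpp
#include <cstdint>

// Round v up to the next power of two (v = 0 maps to 1), mirroring the
// "power of two, nearest increment for w & h separately" padding above.
static uint32_t next_pow2(uint32_t v)
{
    if (v == 0) return 1;
    --v;                        // so exact powers of two map to themselves
    v |= v >> 1;  v |= v >> 2;  // smear the highest set bit downwards...
    v |= v >> 4;  v |= v >> 8;
    v |= v >> 16;
    return v + 1;               // ...then step up to the next power of two
}
```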
  5. For a bit of info, I'm using async TCP, and here's my networking code in its entirety, excluding the header file, which basically contains just the function prototypes and the "Socket" and "client_data_struct" structs. Message handling:
[CODE]
case WM_SOCKET:
    switch (WSAGETSELECTEVENT(lParam)){
    case FD_CONNECT: connected(WSAGETSELECTERROR(lParam)); break; // successfully connected to server (client)
    case FD_ACCEPT:  accept_connection(); break;                  // received connect attempt (server)
    case FD_CLOSE:   close_connection(wParam, WSAGETSELECTERROR(lParam)); break; // socket closed (server)
    case FD_READ:    recieve_data(wParam); break;                 // data recv'd, wParam = socket (both)
    }
    break;
[/CODE]
     Networking code: [url="http://pastebin.com/8rxTvFJK"]http://pastebin.com/8rxTvFJK[/url] So yeah, I was just wanting tips on how to deal with the errors, and any tips on how to improve the code's error checking in general would be great. Also, are there any noticeable problems/instabilities in my code? (Note it's all working fine except that my code doesn't really handle errors; I have used it to send and receive data across a network.)
     edit: Okay, it seems nobody is interested in helping with generic issues like this; I admit it's not a direct problem with a simple answer. I just wanted to hear some helpful tips and confirmation that my networking code is at least okay (it works well enough so far). Whatever, it's not essential and I was going to go ahead with it regardless; if you want to use my code, feel free.
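On the "noticeable problems" question: one thing async FD_READ handlers commonly have to deal with is that TCP is a byte stream, so a single recv() can return half a message or several messages at once. A sketch (not taken from the linked code; the length-prefix framing scheme and all names here are assumptions) of accumulating received bytes and extracting only complete messages:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Per-socket accumulation buffer: append whatever recv() returned, then pull
// out every complete message, leaving any partial tail for the next FD_READ.
// Framing assumed here: 2-byte little-endian length prefix, then the payload.
static std::vector<std::string> extract_messages(std::string& buffer,
                                                 const char* data, size_t len)
{
    buffer.append(data, len);
    std::vector<std::string> out;
    while (buffer.size() >= 2) {
        uint16_t msg_len = (uint8_t)buffer[0] | ((uint8_t)buffer[1] << 8);
        if (buffer.size() < 2u + msg_len) break;   // payload not all here yet
        out.push_back(buffer.substr(2, msg_len));  // one complete message
        buffer.erase(0, 2u + msg_len);             // consume prefix + payload
    }
    return out;
}
```

The FD_READ handler would call this after each recv() and dispatch every returned message; anything left in `buffer` is simply carried over to the next notification.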
  6. OpenGL glTexImage2D problem

    Sorry, disregard this: I was messing up my width and height (had them swapped around, in fact). :/ Gawd, I hate it when hours of bug checking come down to something so trivial. Perhaps I was a little too hasty in making this thread...
  7. Hi all, for some reason glTexImage2D only seems to work (as in, display the correct image rather than garbage) in my rendering function... well, possibly not the only place; I should say that it just doesn't seem to work here:
[code]
void OGL_Texture::gen_texture(uint16_t w, uint16_t h, uint8_t *PixData){
    glGenTextures(1, &Texture_ID);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, Texture_ID);
    // target, mipmap level, internal format (3 = RGB), width, height, border, pixel format, component type, data
    glTexImage2D(GL_TEXTURE_2D, 0, 3, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, PixData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glDisable(GL_TEXTURE_2D);
}
[/code]
     But when I put the glTexImage2D call in my rendering function, it works (just a standard glEnable(GL_TEXTURE_2D), bind texture, glTexImage2D, draw quads)... Any reason why? (Just a note that this is after window and context creation; literally the only OpenGL calls before rendering are: glFlush(); glClearColor(0.7f, 0.7f, 0.7f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); and they don't seem to affect it whether or not they are run before the function, and the pointer to the pixel data remains unchanged.)
  8. OpenGL Texture rendering problems

    GL_CLAMP seemed to work (GL_CLAMP_TO_EDGE produced a "not declared in this scope" error, though...), and it also seems to have fixed the double-pixel issue with NPOT textures. [img]http://i.cubeupload.com/e1SNan.png[/img] Thanks, everybody.
  9. OpenGL Texture rendering problems

    [img]http://i.cubeupload.com/k3GZVf.png[/img] These are my problems. Somebody said the texels must be 0.5 out, so I did a half-texel shift and it worked... and now it messes up my textures instead. I just about give up on OpenGL.
  10. OpenGL Texture rendering problems

    Well, that does seem to line the texture up properly (weirdly; I don't see how the position of the quad the texture is drawn on should affect the texture at all), but it messes up the position of the quad, and I need to keep the -0.5 or it offsets the [i]pixel[/i] position by one... It seems like I can have either pixel accuracy or texel accuracy, but not both (and I need both; OpenGL was clearly never truly designed for 2D). :/ [Edit] You edited your post with that quote, and I don't understand how it is supposed to help... I am using orthographic projection: "glOrtho(0.0f,width,height,0.0f,-1.0f,1.0f);". I don't get what you are pointing out.
  11. OpenGL Texture rendering problems

    I mean, look at the image in the first post: there are some pixels in the bottom left that shouldn't be there (and that seem to have looped to the top-left corner as well, due to the double-pixel weirdness). Here's how it looks at 128*128: [img]http://i.cubeupload.com/YCrPaJ.png[/img] Obviously the image is shifted to the right by a pixel somehow, but the double-pixel weirdness is gone. Okay, I followed that wiki link about NPOT textures, and it says rectangle textures have never had that limitation, but I can't find an example of using "texture rectangles". Should I use texture rectangles? [Edit] Oh, and about the -0.5: yeah, it was a problem with things not being in the right place, and -0.5 fixed it.
  12. OpenGL Texture rendering problems

    [quote name='karwosts' timestamp='1296860791' post='4769783'] This is just a consequence of rasterization. You've specified your texture to sample the nearest texel, so when you're drawing it at a different resolution than native, screen pixels fall in between texture texels and it has to choose between them. In your particular case you may get a better result if you use LINEAR sampling instead of NEAREST:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    [/quote]
    The problem is that the resolutions [i]are[/i] the same; the image isn't being resized at all, and I need pixel-perfect accuracy, but GL_LINEAR blurs the image and still has the same problems, just masked by the blur. Okay, using a texture size of 128 actually got rid of the double-pixel problem, but it still displays the image shifted a pixel to the right, and I need it to be an NPOT texture.
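For reference on the half-texel discussion in this thread: with glOrtho(0, width, height, 0) each screen pixel is rasterized at its centre (x + 0.5), and GL_NEAREST then picks texel floor(u * texture_size). A small model of that lookup (a sketch for reasoning about the artifact, not actual GL API code) shows that when the quad size and texture size match, every pixel maps 1:1 to its texel with no shift and no doubled rows, while a size mismatch forces some texels to repeat:

```cpp
#include <cmath>

// Which texel does a given screen pixel end up showing, for a quad covering
// [0, quad_size) with texture coordinates running [0, 1]?
static int sampled_texel(int pixel, int quad_size, int tex_size)
{
    float u = (pixel + 0.5f) / (float)quad_size;  // texcoord at the pixel centre
    return (int)std::floor(u * (float)tex_size);  // GL_NEAREST lookup
}
```

Under this model, any extra half-texel shift applied when the sizes already match moves every sample onto a texel boundary, which is exactly where rounding starts duplicating or skipping rows.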
  13. I'm using OpenGL and have a texture from memory, and for some reason it is being drawn oddly: some rows have double-height pixels, and it's being drawn a pixel to the right. Using SDL, I saved the data as a bitmap just to prove that the memory is correct and that it is a rendering problem.
     Here's what it should look like (output using SDL with exactly the same image data):
     [url="http://i.cubeupload.com/lFHvim.png"][img]http://i.cubeupload.com/lFHvim.png[/img][/url]
     Here's how OpenGL displays it:
     [img]http://i.cubeupload.com/jGUc8q.png[/img]
     As you can see, some pixels are double height (I can draw extra pixels to the image and it affects whole rows), and it wraps around by a pixel. (The outline is not part of the image.)
     I'm assuming it's a problem with texture parameters or environment or something, but really I have no idea... I'd really appreciate it if somebody were to offer a solution. Here's the source: [url="http://pastebin.com/rVkk02SY"]http://pastebin.com/rVkk02SY[/url] I removed a bit from main, but render() still has some unnecessary extra code which I left in just in case it's somehow affecting how the texture is rendered.
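One classic cause of exactly these symptoms with 24-bit data (rows shearing or doubling, and the image wrapping sideways by a pixel) is row alignment: glTexImage2D assumes by default that each row of the source data starts on a 4-byte boundary (GL_UNPACK_ALIGNMENT = 4), while a tightly packed RGB image whose width isn't a multiple of 4 has rows that don't. A hedged sketch of the two strides involved (the helper names are mine, not from the pastebin):

```cpp
#include <cstdint>

// Bytes per row if RGB pixels are packed with no padding at all.
static uint32_t tight_row_bytes(uint32_t w)  { return w * 3; }

// Bytes per row when each row is padded up to a 4-byte boundary, which is
// both the BMP on-disk convention and GL's default unpack alignment.
static uint32_t padded_row_bytes(uint32_t w) { return (w * 3 + 3) & ~3u; }

// When the two differ, either pad the pixel buffer to padded_row_bytes(w)
// per row, or tell GL the data is tightly packed before uploading:
//   glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
```

If the loader hands GL tightly packed rows but GL reads them with the padded stride, every successive row is sampled with a small cumulative offset, which looks like the shifted/wrapped image above.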
  14. OpenGL memory leak

    It's actually a fairly reliable indicator. For example, in a game I forgot to delete old maps when loading a new one (silly mistake); I found it by checking the task manager: I noticed the total memory just kept increasing even when loading a smaller map (a huge difference in map size), where the expected memory usage should decrease, so I checked the loading function and realized.
    Hmmm... it seems I didn't let it run long enough; it eventually stopped rising... but then again, when I went fullscreen the memory rose to 14,952k, and shrinking the window didn't decrease the memory usage, which I would have expected. Here's what caused me to think there was a memory leak (taken at roughly five-second intervals):
    Even though the memory usage stops climbing after a while, I've never seen this kind of behavior from a program; memory usually climbs very quickly while things are being loaded and then stops, or jitters up and down a bit. If you don't think this is a memory leak, then my second question still stands.
  15. OpenGL memory leak

    Task manager, increasing memory usage. How else would I know if there was a memory leak?