cyotek

Member
  • Content count

    5

Community Reputation

104 Neutral

About cyotek

  • Rank
    Newbie

  1. Kaptein and mhagain, thanks for the responses. I am enabling 2D textures prior to binding anything, so that's not the issue. However, mhagain has hit the nail on the head, it seems. Originally I tried to keep all textures power-of-two, until I read somewhere that it wasn't actually required - so I got lazy with textures used as single images, rather than as part of a tilesheet. And this seems to be exactly what the problem is: as a quick test I changed one of the text graphics to be power-of-two, and it loaded fine in the VM. So I just need to update my tilesheet code (which splits textures into grids of equally sized cells) to support differently sized sub-images, and then I think I'm sorted. Thanks very much - I've been trying to fix this for a while, and the fact that I'd lazily switched image sizes never occurred to me!

Regards; Richard Moss
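For anyone hitting the same issue, here is a minimal sketch of the workaround described above - padding a bitmap up to the next power-of-two size before upload. The helper names are illustrative, not from the actual engine:

[source lang="csharp"]
// Round a dimension up to the next power of two, e.g. 72 -> 128.
private static int NextPowerOfTwo(int value)
{
    int result = 1;

    while (result < value)
        result <<= 1;

    return result;
}

// Copy a bitmap onto a power-of-two sized canvas so that glTexImage2D
// will accept it on drivers without NPOT texture support.
private static Bitmap PadToPowerOfTwo(Bitmap source)
{
    int width = NextPowerOfTwo(source.Width);
    int height = NextPowerOfTwo(source.Height);

    if (width == source.Width && height == source.Height)
        return source; // already a power of two

    Bitmap result = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    using (Graphics g = Graphics.FromImage(result))
        g.DrawImageUnscaled(source, 0, 0);

    return result;
}
[/source]

Texture coordinates for the padded texture then need scaling by source.Width / (float)width and source.Height / (float)height, so that the blank padding is never sampled.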
  2. Hello. While testing a build of my game on an XP machine some weeks back, I found that some of the textures, even those not using alpha blending, were being rendered as white rectangles on the VM. On the physical Windows 7 machine, they displayed perfectly. From various Google searches, I am led to believe that white textures are pretty much an error state, and therefore I'm doing something wrong (which I then verified with glGetError).

The game used to work on the XP VM, so I'm at a loss as to what I did to cause this - after I originally got textures working way back when, I haven't really touched the code, as it hasn't been necessary, and as I'm still pretty much a beginner at OpenGL I don't want to break things by mistake. Part of the problem is that I'm not even sure if I really have broken the game, or if it's the VM itself - while testing, I noted that if I changed the colour depth of the VM from 32-bit to anything lower, the game window always displayed a solid red colour, nothing else. Of course, that could still be a fault with my code ;)

Here are examples of what the game looks like running on Windows 7: [attachment=11290:good1.png][attachment=11291:good2.png]

And this is what happens on the XP VM: [attachment=11288:bad1.png][attachment=11289:bad2.png]

What is truly frustrating about this whole issue is that it only affects some graphics, and it's always the same ones. I added a call to glGetError after calling glTexImage2D, and this is returning InvalidValue. But that doesn't make sense to me - why would it return fine on one machine and not another, and why wouldn't it affect all graphics instead of just some? This is the output of a debug log I added:

[CODE]
15/09/2012 20:04:45: Debug: Assigning texture 1 (splash) [Hint: Linear, Flags: None]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Activated Scene:
15/09/2012 20:04:45: Debug: Assigning texture 2 (gamebackground) [Hint: Nearest, Flags: None]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 3 (bonusbackground) [Hint: Nearest, Flags: None]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 4 (exitbackground) [Hint: Nearest, Flags: None]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 5 (statusbanner) [Hint: Nearest, Flags: None]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 6 (maintitle) [Hint: Linear, Flags: None]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 7 (optionstitle) [Hint: Linear, Flags: Alpha]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 8 (howtotitle) [Hint: Linear, Flags: Alpha]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 9 (creditstitle) [Hint: Linear, Flags: Alpha]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 10 (pausedtitle) [Hint: Linear, Flags: Alpha]
15/09/2012 20:04:45: Debug: InvalidValue
15/09/2012 20:04:45: Debug: Assigning texture 11 (debug) [Hint: Linear, Flags: Alpha]
15/09/2012 20:04:45: Debug: NoError
15/09/2012 20:04:45: Debug: Assigning texture 12 (rock-72_0) [Hint: Nearest, Flags: Alpha]
15/09/2012 20:04:45: Debug: NoError
15/09/2012 20:04:45: Debug: Assigning texture 13 (rock-32s_0) [Hint: Nearest, Flags: Alpha]
15/09/2012 20:04:45: Debug: NoError
15/09/2012 20:04:45: Debug: Assigning texture 14 (rock-36s_0) [Hint: Nearest, Flags: Alpha]
15/09/2012 20:04:45: Debug: NoError
15/09/2012 20:04:45: Debug: Assigning texture 15 (uni564-12s_0) [Hint: Nearest, Flags: Alpha]
15/09/2012 20:04:45: Debug: NoError
15/09/2012 20:04:45: Debug: Assigning texture 16 (JewelRush) [Hint: Nearest, Flags: Alpha]
15/09/2012 20:04:45: Debug: NoError
[/CODE]

Clearly it doesn't like a lot of the graphics, independent of the filter and flag settings. The code is scattered about in different classes; hopefully I've gathered up all the relevant bits here. As mentioned, I'm still a noob when it comes to OpenGL, so I'm currently doing all texture rendering via glDrawArrays.

Initializing texture support:

[source lang="csharp"]
GL.Disable(EnableCap.CullFace);
GL.Enable(EnableCap.Texture2D);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
[/source]

Binding a System.Drawing.Bitmap to OpenGL:

[source lang="csharp"]
public override void Rebind()
{
    BitmapData data;
    Bitmap bitmap;
    int magnificationFilter;
    int minificationFilter;

    bitmap = (Bitmap)this.Image;

    if (this.TextureId == 0)
    {
        GL.GenTextures(1, out _textureId);
        Log.WriteLog(LogLevel.Debug, "Assigning texture {0} ({1}) [Hint: {2}, Flags: {3}]", this.TextureId, this.Name, this.Hint, this.Flags);
    }

    OpenGL.BindTexture(this.TextureId);

    data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    switch (this.Hint)
    {
        case TextureHintMode.Nearest:
            magnificationFilter = (int)TextureMagFilter.Nearest;
            minificationFilter = (int)TextureMinFilter.Nearest;
            break;
        default:
            magnificationFilter = (int)TextureMagFilter.Linear;
            minificationFilter = (int)TextureMinFilter.Linear;
            break;
    }

    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, minificationFilter);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, magnificationFilter);
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
    Log.WriteLog(LogLevel.Debug, GL.GetError().ToString());

    bitmap.UnlockBits(data);
}
[/source]

Although not shown in the above code, I experimented with creating a new bitmap object with an explicit RGBA pixel format and drawing the original image onto it prior to calling LockBits etc.; this had no effect.

Part of my SpriteBatch class, which handles drawing with as few calls to glBindTexture as possible:

[source lang="csharp"]
public override void Draw()
{
    if (this.Size != 0)
    {
        if (_requiresBlend)
            GL.Enable(EnableCap.Blend);

        OpenGL.BindTexture(_textureId);
        this.SetupPointers();
        GL.DrawArrays(BeginMode.Triangles, 0, this.Size);
        this.Size = 0;

        // disable blending again if it was enabled for this batch
        if (_requiresBlend)
        {
            GL.Disable(EnableCap.Blend);
            _requiresBlend = false;
        }
    }
}

private void SetupPointers()
{
    GL.EnableClientState(ArrayCap.ColorArray);
    GL.EnableClientState(ArrayCap.VertexArray);
    GL.EnableClientState(ArrayCap.TextureCoordArray);

    GL.VertexPointer(VertexDimensions, VertexPointerType.Double, 0, _vertexPositions);
    GL.ColorPointer<BrColor>(ColorDimensions, ColorPointerType.Float, 0, _vertexColors);
    GL.TexCoordPointer<Point>(UVDimensions, TexCoordPointerType.Float, 0, _vertexUVs);
}
[/source]

Apologies for the somewhat rambling post; if anyone has any suggestions as to where I'm going wrong, I'd be grateful.

Thanks; Richard Moss
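A quick check that can help narrow this kind of problem down - assuming the same OpenTK bindings as the code above - is to log the driver details and whether it advertises GL_ARB_texture_non_power_of_two. On pre-2.0 drivers without that extension, non-power-of-two sizes cause glTexImage2D to fail with InvalidValue, which matches the log above. A minimal sketch:

[source lang="csharp"]
// Log the driver details and NPOT support once at startup, so logs from
// different machines (e.g. the Windows 7 box vs. the XP VM) can be compared.
string renderer = GL.GetString(StringName.Renderer);
string version = GL.GetString(StringName.Version);
bool npotSupported = GL.GetString(StringName.Extensions).Contains("GL_ARB_texture_non_power_of_two");

Log.WriteLog(LogLevel.Debug, "Renderer: {0}, GL version: {1}, NPOT textures: {2}",
             renderer, version, npotSupported);
[/source]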
  3. Thanks for the follow-up. I didn't realise you could have multiple viewports - I was setting up a single default one, sized to the window, when the game started. Cool - the solution seems to be easier than what I was thinking (no horrible calculations to work out bits of tiles!). I shall go and do some more experimenting.
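A minimal sketch of the multiple-viewport idea mentioned above, assuming OpenTK; the window and region sizes here are illustrative values, not from the actual game:

[source lang="csharp"]
// Restrict rendering to the map region of the window. glViewport takes its
// origin from the bottom-left corner, so the y coordinate is flipped
// relative to the usual top-left window coordinates.
int windowWidth = 800;   // illustrative values
int windowHeight = 600;
int fieldX = 16;         // top-left of the map region, in window pixels
int fieldY = 16;
int fieldWidth = 640;
int fieldHeight = 480;

GL.Viewport(fieldX, windowHeight - (fieldY + fieldHeight), fieldWidth, fieldHeight);

// ... draw the tile map here ...

// Restore the full-window viewport before drawing UI elements.
GL.Viewport(0, 0, windowWidth, windowHeight);
[/source]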
  4. Thanks for the reply. I do set a co-ordinate system, but I set it once when the window is created (and reset it if the window is resized); I hadn't thought about creating a sub co-ordinate system as such. When I say "viewport", I'm just referring to the region of the screen that I'm drawing in - I'm not actually creating viewports with OpenGL - but your reply in point 3 seems to imply that OpenGL can in fact do this. I guess I need to go and look over the API reference to find out how. The screenshots are from my clone of an old arcade game called Boulder Dash, which I used to play on an Atari 800XL, using some custom graphics I'm currently trying to draw too. Thanks again for your reply!

Regards; Richard Moss
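As a sketch of that sub co-ordinate system idea: a viewport can be paired with its own orthographic projection, so drawing inside the region works in tile units regardless of where the region sits on the window. This uses the legacy matrix stack, matching the vertex-array rendering elsewhere in the thread; visibleColumns and visibleRows are illustrative names:

[source lang="csharp"]
// Map the current viewport to a co-ordinate system measured in tiles,
// with (0, 0) at the top-left and the y axis pointing down.
double visibleColumns = 20; // illustrative field size, in tiles
double visibleRows = 15;

GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, visibleColumns, visibleRows, 0, -1, 1);

GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
[/source]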
  5. Hello. Apologies if this is a somewhat idiotic question; I'm still somewhat new to OpenGL programming. In fact, I'm not even sure if this is the right forum, or if it should have been posted in Beginners!

Firstly, some background. I currently have a basic 2D tiling engine which uses OpenGL via the OpenTK bindings. For drawing textures, the engine currently uses the somewhat obsolete vertex arrays rather than VBOs (one challenge at a time!). Each map is a simple 2D array of tiles. I define a field size, which is the visible viewport on the screen, and set up an array of vertices describing each tile position in the viewport; this is defined once. There is also an origin, which is simply the top/left co-ordinate of the map in relation to the player. Then, each update, the engine cycles through the visible tiles in the current map and updates the vertex array with texture coordinates from a tilesheet texture. If the player moves and the origin changes, this is naturally handled as part of the cycle through the visible tiles. It's a very simple system.

So far so good. However, the disadvantage is that it "scrolls" in full tiles. So if the character charges from one side of the screen to the other, it will jump along one block at a time. What I want to do is implement a form of smooth scrolling, so that instead of jumping the entire width of a tile, it moves in smaller increments until it reaches the desired end position. I tried implementing smooth scrolling in a test project some months ago, and while it worked, it had one nasty drawback: on one side of the scroll you'd have a gap, as technically the subsequent column wasn't visible so it wouldn't be drawn; and on the other side you'd have an excess, because it was trying to draw the last column beyond the boundaries of the viewport. So, assuming you had UI elements, they could be overwritten. If I were using GDI, I might just set a clip region and then draw the visible tiles +/- one column/row, and everything would be displayed without overwriting any element. Of course, that would mean extra rendering being done for no purpose, which is a bit daft.

I'm not sure I'm explaining this terribly well, so here are a couple of images to partially demonstrate: [attachment=8688:jwlrush 2012-05-05 13-49-45-90.png] [attachment=8689:jwlrush 2012-05-05 13-49-47-30.png]

In the second screenshot, the player has moved one tile to the right. As the engine currently tries to keep the player character centred in the map, it automatically scrolls along one tile - which is fine as the end result, but I want to smoothly bump it along instead of it just jumping.

[attachment=8693:OpenGLTileMap 2012-05-05 14-23-26-95.png]

This third image demonstrates a test program which tried smoothly scrolling the background from right to left while also scrolling the water from left to right. The jagged edges at the top and bottom show how my rendering is currently flawed.

So, my questions would be:

  • Does OpenGL support anything like GDI's clip regions, so I could draw extra tiles without them going beyond what I define as a viewport? (The lazy approach, I suppose.)
  • Or should I recalculate the first/last column/row and draw only the visible portion of the texture? (The hurt-your-brain-with-maths approach.)
  • Or is there another technique that other developers use that I haven't thought of?

Again, apologies if this is a really dumb question, but thanks in advance for any advice that can be offered.

Regards; Richard Moss
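On the first question: OpenGL's closest analogue to a GDI clip region is the scissor test, which discards anything drawn outside a given rectangle. A minimal sketch in OpenTK, with illustrative region values matching the earlier viewport example:

[source lang="csharp"]
// Clip all drawing to the map region, GDI-clip-region style. The scissor
// rectangle, like the viewport, is specified from the bottom-left corner.
int windowHeight = 600;  // illustrative values
int fieldX = 16;
int fieldY = 16;
int fieldWidth = 640;
int fieldHeight = 480;

GL.Enable(EnableCap.ScissorTest);
GL.Scissor(fieldX, windowHeight - (fieldY + fieldHeight), fieldWidth, fieldHeight);

// ... draw the visible tiles plus one extra row/column at the scrolled
//     edges; anything outside the rectangle is discarded ...

GL.Disable(EnableCap.ScissorTest);
[/source]

With the scissor test enabled, the "draw visible tiles +/- one column/row" approach from the post works without overwriting surrounding UI elements.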