
Member Since 28 Jun 2000
Offline Last Active Apr 30 2016 12:35 PM

Topics I've Started

Help with VBO usage

11 March 2011 - 12:18 AM

Update: solved. See 1st reply, below.

I could really use some help debugging the use of a Vertex Buffer Object in LWJGL.

In this screenshot, you're looking at two viewports. On the left, I've drawn a grid, a 1x1 outline, and a textured quad, all using immediate mode. I'm trying to draw the same textured quad from a VBO, through the managing class SpritesBatchRenderer. For now, SpritesBatchRenderer draws a transparent blue quad instead of a textured one, since I've had no luck reproducing the same quad output.

Attached File: bad-render.png (75.33 KB)

There's a commented-out section in render() that prints the occupied contents of the buffer; with it, I've confirmed that the vertex positions and texture coordinates are what they're supposed to be: a unit square. I've tried randomly positioning the center of that square in the [-3, 3] range, and get different scrambled quads each time. My guess is the floats are being interpreted oddly, or read from the wrong offsets, but I don't understand why or how.
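For reference, the layout the buffer is supposed to use is five floats per vertex (X,Y,Z then U,V), four vertices per sprite. Here's a quick standalone sketch of the stride/offset arithmetic that layout implies (constant names mirror the class below; this is just index math, not LWJGL code — note the texture-coordinate data, when a pointer to it is enabled, starts SZ_VER * ELEMENT_SIZE bytes into each vertex):

```java
public class LayoutCheck {
    // Mirrors the constants in SpritesBatchRenderer (below).
    static final int SZ_VER       = 3;   // X,Y,Z floats per vertex
    static final int SZ_TEX       = 2;   // U,V floats per vertex
    static final int SZ_UNIT      = SZ_VER + SZ_TEX;
    static final int V_PER_SPRITE = 4;
    static final int ELEMENT_SIZE = 4;   // bytes per float

    // Byte stride between consecutive vertices in the interleaved buffer.
    static int strideBytes () {
        return SZ_UNIT * ELEMENT_SIZE;
    }

    // Float index of vertex v of sprite s.
    static int vertexFloatIndex (int s, int v) {
        return (s * V_PER_SPRITE + v) * SZ_UNIT;
    }

    // Byte offset of the first texture coordinate within a vertex (after X,Y,Z).
    static int texCoordByteOffset () {
        return SZ_VER * ELEMENT_SIZE;
    }

    public static void main (String[] args) {
        System.out.println (strideBytes ());          // 20
        System.out.println (vertexFloatIndex (1, 0)); // 20 (sprite 1 starts one sprite-block in)
        System.out.println (texCoordByteOffset ());   // 12
    }
}
```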

Relevant source follows: SpritesBatchRenderer in full, and its only non-OpenGL external call, SpriteAsset.fillBuffer(). SpritesBatchRenderer.render() is near the bottom of the first block.

package assets;

import java.nio.ByteBuffer;
import java.nio.FloatBuffer;
import java.util.HashMap;

import lwjgl.Texture;

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL15;

import assets.graphics.SpriteAsset;

public final class SpritesBatchRenderer
{
	private static final int V_PER_SPRITE = 4;
	private static final int ELEMENT_SIZE = 4;
	private static final int SZ_TEX       = 2;                  // U,V
	private static final int SZ_VER       = 3;                  // X,Y,Z
	private static final int SZ_UNIT      = SZ_TEX + SZ_VER;

	private final Texture texture;
	private HashMap<Long, Integer> idToIdx = new HashMap<Long, Integer> ();
	private HashMap<Integer, Long> idxToId = new HashMap<Integer, Long> ();
	private long        nextID;
	private FloatBuffer vertexBuffer;
	private boolean     bDirty;
	private int         capacity = 64;
	private int         size     = 0;
	private int         vertexBO;

	public SpritesBatchRenderer (Texture aTexture)
	{
		texture = aTexture;
		vertexBO = GL15.glGenBuffers ();
		resize (capacity);
	}

	public long addSprite (final float aX, final float aY, final float aScale, final boolean flipX,
			final boolean flipY, SpriteAsset anAsset)
	{
		int theIndex = size++;
		if( size == capacity )
			resize ((int) (capacity * 1.5));
		long theID = ++nextID;
		idToIdx.put (theID, theIndex);
		idxToId.put (theIndex, theID);
		updateSprite (theID, aX, aY, aScale, flipX, flipY, anAsset);
		return theID;
	}

	private void updateSprite (long theID, float aX, float aY, float aScale, boolean flipX,
			boolean flipY, SpriteAsset anAsset)
	{
		int theIndex = idToIdx.get (theID);
		bDirty = true;
		anAsset.fillBuffer (vertexBuffer, aX, aY, aScale, flipX, flipY, theIndex * SZ_UNIT
				* V_PER_SPRITE);
	}

	public void rmSprite (long aID)
	{
		// Fetch and remove the binding for the current item
		int theIndex = idToIdx.get (aID);
		idToIdx.remove (aID);
		// If we removed an item from anywhere but the end of the list...
		if( theIndex != size )
		{
			// Replace the removed item with the last item in the list.
			for( int i = 0; i < V_PER_SPRITE * SZ_UNIT; ++i )
				vertexBuffer.put (theIndex + i, vertexBuffer.get (size * i));
			// Update the binding for the moved item
			final long theMovedId = idxToId.get (size);
			idToIdx.put (theMovedId, theIndex);
			idxToId.put (theIndex, theMovedId);
			bDirty = true;
		}
	}

	private void resize (int aCap)
	{
		final FloatBuffer oldVB = vertexBuffer;
		vertexBuffer = ByteBuffer.allocateDirect (aCap * V_PER_SPRITE * SZ_UNIT * ELEMENT_SIZE)
				.asFloatBuffer ();
		if( oldVB != null )
		{
			oldVB.position (0);
			vertexBuffer.put (oldVB);
		}
		capacity = aCap;
		bDirty = true;
	}

	public void render ()
	{
		if( bDirty )
		{
//			for( int i = 0; i < size * V_PER_SPRITE; ++i )
//			{
//				for( int j = 0; j < SZ_UNIT; ++j )
//				{
//					System.out.print (" ");
//					System.out.print (vertexBuffer.get (i * SZ_UNIT + j));
//				}
//				System.out.println ();
//			}
			vertexBuffer.position (0);
			GL15.glBindBuffer (GL15.GL_ARRAY_BUFFER, vertexBO);
			GL15.glBufferData (GL15.GL_ARRAY_BUFFER, vertexBuffer, GL15.GL_DYNAMIC_DRAW);
			bDirty = false;
		}
		GL11.glEnableClientState (GL11.GL_VERTEX_ARRAY);
		GL11.glEnableClientState (GL11.GL_TEXTURE_COORD_ARRAY);
		GL15.glBindBuffer (GL15.GL_ARRAY_BUFFER, vertexBO);
		GL11.glVertexPointer (SZ_VER, GL11.GL_FLOAT, SZ_UNIT * ELEMENT_SIZE, 0);
		// TODO For testing purposes; change to 1,1,1,1 and bind() later
		GL11.glColor4f (0, 1, 1, 0.4f);
//		texture.bind ();
		GL11.glDrawArrays (GL11.GL_TRIANGLES, 0, size * V_PER_SPRITE);
//		texture.unbind ();
		GL15.glBindBuffer (GL15.GL_ARRAY_BUFFER, 0);
		GL11.glDisableClientState (GL11.GL_VERTEX_ARRAY);
		GL11.glDisableClientState (GL11.GL_TEXTURE_COORD_ARRAY);
	}

	protected void finalize () throws Throwable
	{
		dispose ();
	}

	public void dispose ()
	{
		if( vertexBO != 0 )
		{
			GL15.glDeleteBuffers (vertexBO);
			vertexBO = 0;
		}
	}
}

	// From SpriteAsset. The fields ua/ub/va/vb aren't shown in this excerpt;
	// they're presumably the sprite's texture-coordinate extents.
	public void fillBuffer (FloatBuffer aBuffer, float aX, float aY, float aScale,
			boolean flipX, boolean flipY, int idx)
	{
		final float uea = flipX ? ub : ua;
		final float ueb = flipX ? ua : ub;
		final float vea = flipY ? vb : va;
		final float veb = flipY ? va : vb;
		aScale *= 0.5;
		aBuffer.put (idx++, aX - aScale);
		aBuffer.put (idx++, aY - aScale);
		aBuffer.put (idx++, 0);
		aBuffer.put (idx++, uea);
		aBuffer.put (idx++, vea);
		aBuffer.put (idx++, aX + aScale);
		aBuffer.put (idx++, aY - aScale);
		aBuffer.put (idx++, 0);
		aBuffer.put (idx++, ueb);
		aBuffer.put (idx++, vea);
		aBuffer.put (idx++, aX + aScale);
		aBuffer.put (idx++, aY + aScale);
		aBuffer.put (idx++, 0);
		aBuffer.put (idx++, ueb);
		aBuffer.put (idx++, veb);
		aBuffer.put (idx++, aX - aScale);
		aBuffer.put (idx++, aY + aScale);
		aBuffer.put (idx++, 0);
		aBuffer.put (idx++, uea);
		aBuffer.put (idx++, veb);
	}
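As an aside: the swap-with-last copy in rmSprite above indexes the buffer per-float (`theIndex + i` and `size * i`) rather than per-sprite-block. Here's a minimal, standalone sketch of the index math for moving the last sprite's block of V_PER_SPRITE * SZ_UNIT floats into the removed slot. This is an illustrative sketch, not the fix from the thread's first reply (which isn't in this excerpt):

```java
import java.nio.FloatBuffer;

public class SwapRemove {
    static final int FLOATS_PER_SPRITE = 4 * 5;   // V_PER_SPRITE * SZ_UNIT

    // Copies the last sprite's floats over the removed sprite's slot,
    // then returns the shrunken logical size.
    static int swapRemove (FloatBuffer buf, int removedIndex, int size) {
        int last = size - 1;
        if (removedIndex != last) {
            int dst = removedIndex * FLOATS_PER_SPRITE;
            int src = last * FLOATS_PER_SPRITE;
            for (int i = 0; i < FLOATS_PER_SPRITE; ++i)
                buf.put (dst + i, buf.get (src + i));
        }
        return last;
    }

    public static void main (String[] args) {
        FloatBuffer buf = FloatBuffer.allocate (3 * FLOATS_PER_SPRITE);
        for (int i = 0; i < buf.capacity (); ++i)
            buf.put (i, i);                       // sprite s occupies values [s*20, s*20+19]
        int newSize = swapRemove (buf, 0, 3);     // remove sprite 0; sprite 2 moves into slot 0
        System.out.println (newSize);             // 2
        System.out.println (buf.get (0));         // 40.0 (first float of what was sprite 2)
    }
}
```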

I am ashamed. C++ simple question [resolved]

06 June 2008 - 06:55 PM

I haven't worked with C++ in so long (nearly 5 years, since early in the third year of my B.Sc program) that I've forgotten how to do this. I've been doing things the Java way, or the O'Caml way, or the PHP way, or the C way, and I've forgotten the C++ idiom for it. I have a few numbers and a few std::basic_string<char_t> values. I want to concatenate them together and put them into a std::basic_string<wchar_t>. However, the following FPS counter, written the way I thought I remembered it should be done, just produces zero-length strings.
basic_stringstream<wchar_t> buff;
buff << fps << " fps";


What did I do wrong now? [Edited by - Wyrframe on June 9, 2008 11:15:06 AM]

[java] Java Class Usage

22 January 2008 - 06:34 AM

I'm working with a massive Java project (18+ MB of source code), in which about 40% of the classes are deprecated or legacy and never used at runtime. The problem is that they're mixed in with the rest of the classes. Is there any way to get the Java VM to dump a list, preferably newline-delimited, of all classes it loads or instantiates, and ideally which classpath root element each class was selected from? If I can get a list of used classes, I can derive the list of unused ones. Any suggestions?

Complete Redirection of OS Design - Opinions?

25 March 2007 - 06:04 AM

For some time, I've been designing a Smalltalk-based operating system. The one part I could never get off the ground is the bootloader and the kernel, mostly because I don't want to initially target the horrifyingly ugly exercise in archaic design that the IA-32 line represents. I've wanted to start by targeting a PowerPC or some other well-designed Motorola processor, but there is a distinct lack of information out there about their BIOS equivalents and their boot process.

So I had an idea. Maybe I could, instead, build it on top of GRUB and the IA-64 Linux kernel. That would eliminate the first year or two of my work right off the bat, and I'd still avoid IA-32: go straight to IA-64, and never even consider IA-32 as a target. I have a memory-management scheme down pat (the slab allocator used in SunOS), but I don't know whether I'd add it on top of the kernel, or have to modify the kernel to replace its existing memory management. The kernel would also take several device-management tasks out of my hands and leave me with just their interfaces. Still a fair bit to learn, but far less than before.

That leaves me with: hunting down the interface to the Linux kernel so I can wrap the object system around it; hunting down more information on the IA-64 so I can create my compiler (I'd be using nasm+gcc for bootstrap image builds); and implementing everything that's left.

Opinions? Am I still planning to do too much work? Are there better ways to get this part of the workload pre-done? Are there better alternatives out there? Am I still targeting an obscene architecture?

Using GNOME's default Open With

08 January 2007 - 03:13 PM

I'm writing an interface application for a version control system, and there's something I'm finding hard to do: how do I open a file with the application GNOME has registered as its default handler? Under Windows 98 there was c:\windows\start.exe; give it the file you want to open as an argument, and it would open the file with its assigned application. How do you do that under GNOME?
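In case it helps a future reader: on a freedesktop.org-compliant setup the usual answer is the `xdg-open` helper from xdg-utils (GNOME 2-era systems also shipped `gnome-open`), and Java 6+ wraps the same idea as `java.awt.Desktop.open()`. A minimal sketch; the file path is purely illustrative:

```java
import java.io.File;
import java.io.IOException;

public class OpenWithDefault {
    // Builds the command for the freedesktop.org opener; split out so the
    // command construction can be checked without launching anything.
    static String[] buildCommand (String path) {
        return new String[] { "xdg-open", path };
    }

    // Opens the file with the desktop's registered default application.
    static void open (File file) throws IOException {
        if (java.awt.Desktop.isDesktopSupported ()) {
            // java.awt.Desktop (Java 6+) delegates to the desktop environment.
            java.awt.Desktop.getDesktop ().open (file);
        } else {
            Runtime.getRuntime ().exec (buildCommand (file.getPath ()));
        }
    }

    public static void main (String[] args) {
        String path = args.length > 0 ? args[0] : "example.txt";   // illustrative path
        System.out.println (String.join (" ", buildCommand (path)));
        // To actually launch the handler: open (new File (path));
    }
}
```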