Chananya Freiman

Member Since 12 Dec 2012
Last Active Sep 14 2013 09:37 AM

Topics I've Started

WebGL: Pseudo instancing being slow

06 July 2013 - 02:15 PM

I am working on a 3D viewer with WebGL, and want to optimize the particle emitters that it supports.

The information for each particle is its rectangle in world space, texture coordinates, and a color, all of which can change over time.

 

At first I did it the simple way, with a big buffer where each vertex had all the above information, for a total of 54 floats per particle (2 triangles, 6 vertices, 9 floats per vertex: [X, Y, Z] [U, V] [R, G, B, A]).

Note that the vertex positions here are already in world space and form a rectangle.

 

This works fine, but is a bit on the slow side on the CPU, simply because updating the buffer each frame takes a lot of work.

 

So the next stage was to implement pseudo-instancing.

 

WebGL doesn't support anything besides 2D textures, so I have to use those for arbitrary storage.

The idea is, then, to make a 2D texture that can hold all the data for each particle (so now only 9 floats are needed for the per-particle data), and for the vertex attributes use just the instance ID, and vertex ID.

For example, the first particle is [0, 0, 0, 1, 0, 2, 0, 0, 0, 2, 0, 3], where each pair is the instance and vertex IDs.
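
For reference, this is roughly how such a static ID buffer can be filled (a sketch, not my exact code; the names are made up):

// Sketch: one [instanceID, vertexID] pair per vertex, two triangles per particle.
var MAX_PARTICLES = 256;
var corners = [0, 1, 2, 0, 2, 3];
var ids = new Float32Array(MAX_PARTICLES * 6 * 2);

for (var i = 0; i < MAX_PARTICLES; i++) {
  for (var j = 0; j < 6; j++) {
    ids[(i * 6 + j) * 2 + 0] = i;          // a_instanceID
    ids[(i * 6 + j) * 2 + 1] = corners[j]; // a_vertexID
  }
}

var idBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, idBuffer);
gl.bufferData(gl.ARRAY_BUFFER, ids, gl.STATIC_DRAW);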

 

Instead of sending world positions that already form a rectangle, I just send the center and size of each particle. A normalized rectangle is computed once and sent as a uniform, and each particle adds it to its position, scaled by its size.
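
For example, the normalized rectangle can be uploaded once like this (a sketch; "program" stands for the linked shader program, and the corner values are just one possible choice that has to match the ID buffer's vertex order):

// A unit rectangle, one corner per column of the mat4 (vertex IDs 0-3).
var plane = new Float32Array([
  -0.5, -0.5, 0, 0,  // vertex ID 0
   0.5, -0.5, 0, 0,  // vertex ID 1
   0.5,  0.5, 0, 0,  // vertex ID 2
  -0.5,  0.5, 0, 0   // vertex ID 3
]);
gl.uniformMatrix4fv(gl.getUniformLocation(program, "u_plane"), false, plane);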

 

Instead of sending precomputed texture coordinates, I send an index that says which sub-texture this particle uses, and the actual coordinates are computed in the shader.

 

So every particle now takes a total of 21 floats instead of 54 (for some reason I can't use non-float attributes; is this a WebGL-only limitation? It has been quite some time since I touched OpenGL), and of those, only 9 need to be updated every frame.

 

For a start, I wanted to get it done quickly and not waste too much time on further optimizations, so I just picked a square power-of-two texture size that fit my needs for a test model, which happened to be 32x32 pixels.

While only 9 floats are really needed per particle, I chose a 4x4 matrix layout for now and padded the data with zeroes.

So a 32x32 RGBA texture, in this scenario, can hold 256 particles (32*32/4, since each particle takes four RGBA texels).
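
For reference, each particle's slot in the texture's backing Float32Array ends up looking roughly like this (a sketch, not my exact code; the layout just has to match what the shader reads back):

// Sketch: 16 floats (4 RGBA texels) per particle.
// Texel 0: position, texel 1: sub-texture index + scale, texel 2: color, texel 3: padding.
function packParticle(hwarray, i, x, y, z, texIndex, scale, r, g, b, a) {
  var o = i * 16;
  hwarray[o + 0]  = x;        hwarray[o + 1]  = y;     hwarray[o + 2]  = z; hwarray[o + 3]  = 0;
  hwarray[o + 4]  = texIndex; hwarray[o + 5]  = scale; hwarray[o + 6]  = 0; hwarray[o + 7]  = 0;
  hwarray[o + 8]  = r;        hwarray[o + 9]  = g;     hwarray[o + 10] = b; hwarray[o + 11] = a;
  hwarray[o + 12] = 0;        hwarray[o + 13] = 0;     hwarray[o + 14] = 0; hwarray[o + 15] = 0;
}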

 

Even though I chose not to optimize it fully, it still requires far less bandwidth and far fewer updates than the original design.

 

Every frame, after updating all the particles, I upload the new texture data and render them all.

 

But here's the issue: for some reason, this is a whole lot slower than just using the flat, much bigger buffer.

I simply can't understand how that's even possible: I am uploading much less data and doing a lot less work on the CPU.

Is it possible that glTexSubImage2D is somehow much slower than glBufferSubData?

 

These are the most relevant pieces of code, and after that the vertex shader:

// Setup
this.textureColumns = 32;
this.textureRows = 32;

this.particleTexture = gl.createTexture();

gl.bindTexture(gl.TEXTURE_2D, this.particleTexture);
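// Note: a gl.FLOAT texture in WebGL 1 needs the OES_texture_float extension;
// it is assumed to have been enabled earlier with gl.getExtension("OES_texture_float").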
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,  this.textureColumns, this.textureRows, 0, gl.RGBA, gl.FLOAT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.bindTexture(gl.TEXTURE_2D, null);


...


// After updating all the particles
gl.activeTexture(gl.TEXTURE3);
gl.bindTexture(gl.TEXTURE_2D, this.particleTexture);
// hwarray is a Float32Array object with all the particle data
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, this.textureColumns, this.textureRows, gl.RGBA, gl.FLOAT, this.hwarray);
// Bind it to the sampler uniform
viewer.setParameter("u_particles", 3);
// This buffer holds all the instance and vertex IDs, it never changes
gl.bindBuffer(gl.ARRAY_BUFFER, this.buffer);
// viewer is my WebGL wrapper
gl.vertexAttribPointer(viewer.getParameter("a_instanceID"), 1, gl.FLOAT, false, 8, 0);
gl.vertexAttribPointer(viewer.getParameter("a_vertexID"), 1, gl.FLOAT, false, 8, 4);
// Finally draw all the particles, each one has 6 vertices
gl.drawArrays(gl.TRIANGLES, 0, 256 * 6);

The vertex shader:

uniform mat4 u_mvp; // Model-view-projection matrix
uniform mat4 u_plane; // This is the plane that each particle uses
uniform float u_cells; // This is the number of sub-textures inside the particle image
uniform float u_pixel_size; // This is the size of each pixel in relation to the texture size, so 1 / 32 in this scenario.
uniform float u_pixels; // This is the number of pixels for each row in the particle texture, 32 in this scenario
uniform sampler2D u_particles; // The actual particle data
uniform mat4 u_uvs; // This holds a normalized UV rectangle, every column's XY values are a coordinate

attribute float a_instanceID;
attribute float a_vertexID;

varying vec2 v_uv;
varying vec4 v_color;

// Gets the index-th particle as a matrix
mat4 particleAt(float index) {
  float x = u_pixel_size * mod(index, u_pixels);
  float y = u_pixel_size * floor(index / u_pixels);
  return mat4(texture2D(u_particles, vec2(x, y)),
              texture2D(u_particles, vec2(x + u_pixel_size, y)),
              texture2D(u_particles, vec2(x + u_pixel_size * 2., y)),
              texture2D(u_particles, vec2(x + u_pixel_size * 3., y)));
}

void main() {
  mat4 particle = particleAt(a_instanceID);
  vec3 position = particle[0].xyz; // Particle's position
  vec3 offset = u_plane[int(a_vertexID)].xyz; // The plane's vertex for the current vertex ID
  float index = particle[1][0]; // This is the sub-texture index for the texture coordinates
  float scale = particle[1][1]; // The size of this particle
  vec4 color = particle[2]; // The color of this particle
  vec2 uv = u_uvs[int(a_vertexID)].xy; // The texture coordinate for the current vertex ID
  
  // Final texture coordinate calculations
  vec2 cell = vec2(mod(index, u_cells), index / u_cells); // Which sub-texture cell this particle uses
  v_uv = cell + vec2(1.0 / u_cells, 1.0 / u_cells) * uv;
  
  v_color = color;
  
  // And set the vertex to the particle's position offset by the plane and scaled to its size
  gl_Position = u_mvp * vec4(position + offset * scale, 1.0);
}

Thanks for any help!


WebGL: hardware skinning with a bone texture

30 June 2013 - 07:09 PM

I have WebGL code running hardware skinning in a vertex shader for animations, using a uniform array of matrices for my bones.

The problem arises when I run out of uniform vector slots, which happens when there are more than 62 bones (I don't control the models or the number of bones, and I've already seen a model with 173 bones, which is crazy).
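
For context, the available budget can be queried at runtime; this is a rough sketch of the limit I'm running into (reservedVectors is a placeholder for whatever the rest of the shader's uniforms use up):

// Each mat4 bone costs 4 uniform vectors; the rest of the vertex shader's
// uniforms (MVP matrix, etc.) come out of the same budget.
var maxVectors = gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS);
var reservedVectors = 8; // placeholder for the non-bone uniforms
var maxBones = Math.floor((maxVectors - reservedVectors) / 4);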

 

I tried using a float texture to store all my bone matrices and fetch them in the shader, but I can't seem to get it working correctly.

 

There is no texelFetch in WebGL's version of GLSL, no 1D textures and obviously no texture buffers or uniform buffers.

 

What I tried was creating an X by 1 2D float texture, where X is the number of pixels required to hold all the matrices, and filling it with all the matrix data.

 

I send the shader the size of each matrix and the size of each vector, relative to the size of the texture, so I can get to any matrix with a normal texture fetch.

 

I believe this should work...in theory. But it doesn't.

 

This is the texture initialization code:

var buffer = new Float32Array(...);
... 
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, buffer.byteLength / 16, 1, 0, gl.RGBA, gl.FLOAT, buffer);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);

And the part of the vertex shader that constructs a matrix from the texture, given a matrix index:

...
uniform sampler2D u_bone_map;
uniform float u_matrix_fraction;
uniform float u_vector_fraction;
...
mat4 boneMatrix(float bone) {
  return mat4(texture2D(u_bone_map, vec2(u_matrix_fraction * bone, 0)),
              texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction, 0)),
              texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction * 2.0, 0)),
              texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction * 3.0, 0)));
}
...

u_matrix_fraction and u_vector_fraction are the relative sizes I wrote above.

E.g., if the texture holds 512 floats in its single row, each matrix spans 16/512 of it and each vector 4/512, so to get the i-th matrix the code goes to i * u_matrix_fraction and grabs 4 texels, each u_vector_fraction apart.
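
In code, computing and uploading the two fractions looks roughly like this (a sketch, not my exact code; "program" stands for the linked shader program):

var totalFloats = buffer.length;     // 16 floats per bone matrix
var texelsPerRow = totalFloats / 4;  // the texture is texelsPerRow x 1 RGBA texels
// A matrix spans 4 texels (16 floats), a vector 1 texel (4 floats).
gl.uniform1f(gl.getUniformLocation(program, "u_matrix_fraction"), 4 / texelsPerRow);
gl.uniform1f(gl.getUniformLocation(program, "u_vector_fraction"), 1 / texelsPerRow);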

 

These matrices result in my meshes going crazy, so something is wrong, but I am just not sure what.

 

Got any ideas?

 

Thanks for any help.


Making a node attached to a node hierarchy face the camera (billboard)

23 June 2013 - 08:44 AM

I have a node hierarchy, and I need to make some nodes inside it billboards.

Every time I update the hierarchy, I get the local translation and rotation of each node, and use them to compute that node's world matrix and world rotation.

The rotation is stored as a quaternion, and it gets multiplied by the parent's rotation if a parent exists.

 

After this, the billboarding part begins.

 

What I tried is computing the difference between the camera's quaternion and the world rotation of the current node, and then applying that difference to the node via a rotation matrix. The intent is difference = camera * inverse(nodeWorldRotation), so that rotating the node by the difference should leave it oriented like the camera.

This sounds like it should work to me, but it didn't.

 

No matter what other variations I tried, I simply can't get this to work.

 

Can you spot any errors in my code? (Or is this calculation not even right in the first place?)

 

Thanks!

// The position (pivot) of the node
var pivot = node.pivot;
// Local translation and rotation
var translation = this.localTranslation(node, time);
var rotation = this.localRotation(node, time);

var rotationMatrix = [];
var localMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

math.Quaternion.toRotationMatrix4(rotation, rotationMatrix);

// Create this node's local transformation matrix
math.Mat4.translate(localMatrix, translation[0], translation[1], translation[2]);
math.Mat4.translate(localMatrix, pivot[0], pivot[1], pivot[2]);
math.Mat4.multMat(localMatrix, rotationMatrix, localMatrix);
math.Mat4.translate(localMatrix, -pivot[0], -pivot[1], -pivot[2]);

// World matrix + accumulated rotation
if (node.parent) {
  math.Mat4.multMat(node.parent.worldMatrix, localMatrix, node.worldMatrix);

  math.Quaternion.concat(rotation, node.parent.worldRotation, node.worldRotation);
} else {
  node.worldMatrix = localMatrix;

  node.worldRotation = Object.copy(rotation);
}

if (node.billboarded) {
  // This is the camera's quaternion 
  var cam = Object.copy(camera.rotate);
  
  // And the node's quaternion
  var initial = Object.copy(node.worldRotation);

  // Negate it because I want the difference which is Camera * Node^-1
  math.Quaternion.conjugate(initial, initial);

  var difference = [];

  // The difference
  math.Quaternion.concat(cam, initial, difference);

  var rotationMatrix2 = [];

  // Get a rotation matrix from the difference
  math.Quaternion.toRotationMatrix4(difference, rotationMatrix2);

  var finalMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

  // Finally calculate final matrix that rotates in the same vector space
  math.Mat4.translate(finalMatrix, pivot[0], pivot[1], pivot[2]);
  math.Mat4.multMat(finalMatrix, rotationMatrix2, finalMatrix);
  math.Mat4.translate(finalMatrix, -pivot[0], -pivot[1], -pivot[2]);

  // And apply it
  math.Mat4.multMat(node.worldMatrix, finalMatrix, node.worldMatrix);
}

Not sure if this matters, but this is the code that moves the camera (it simply orbits around the center while you drag the mouse, in the same direction the mouse moves):

if (mouse.left) {
  var anglez = math.toRad(mouse.x - mouse.x2);
  var nrotz = math.Quaternion.fromAxisAngle([0, 0, 1], anglez, [0, 0, 0, 0]);

  math.Quaternion.concat(camera.rotate, nrotz, camera.rotate);

  var anglex = math.toRad(mouse.y - mouse.y2);
  var nrotx = math.Quaternion.fromAxisAngle([1, 0, 0], anglex, [0, 0, 0, 0]);

  math.Quaternion.concat(nrotx, camera.rotate, camera.rotate);

  transform.rotation[2] += mouse.x - mouse.x2;
  transform.rotation[0] += mouse.y - mouse.y2;
}

Efficient tile map rendering and editing in different OpenGL versions

12 December 2012 - 04:00 PM

I am starting to work on a 2D tile map editor (and yes, I know there are existing ones), and wondering about techniques to make both rendering and real-time editing fast and efficient.

With OpenGL 3+ I don't have a problem.
A combination of instanced rendering, a texture buffer, and texture arrays means I can efficiently render only the visible tiles in instanced mode (which I believe is damn fast), and every tile only takes 1 byte, for its texture index (assuming 256 textures are enough), which is amazing for memory consumption.
Editing-wise, changing a tile merely means changing one byte in the texture buffer, which should be very fast (I can't imagine tiny changes like these take much time, yes?).
So all in all, OpenGL 3 is amazing for rendering and editing big tile maps.

OpenGL 2 (assuming the features needed are not available in extensions) is a little more problematic.
The texture array can be replaced with a procedurally generated texture atlas, which is a bit more work, but fine.
The lack of instancing, though, is a problem: do you store the whole vertex data per tile, or do you store indices into constant vectors in the shaders?
The first approach is more straightforward and relies less on shader tricks; the second takes far less memory and bandwidth.
The second way uses the fact that every tile is just a unit-sized rectangle, so the rectangle can be stored in a constant 2x4 matrix (or an array, not sure it matters). The actual tile data coming from the buffers is then the index of the tile (this emulates instancing), the index of the vertex within the tile (0, 1, 2 or 3), and the index of the texture: 6 bytes per vertex, or 24 bytes per tile (sketched below). The texture coordinates are also a unit rectangle, so the same data is reused for them; with some additional uniforms that describe the texture atlas, you can easily calculate the real coordinates for the right texture.
In the first, more obvious approach, every vertex is 16 bytes (x, y, s, t), for a total of 64 bytes per tile.
Editing is where another big bonus of approach 2 shows up: to edit a tile you only need to change 4 bytes (the texture index in each of its vertices), whereas with the first approach you need to edit the actual texture coordinates, so 32 bytes per tile.
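
To make approach 2 concrete, here is a rough sketch of one possible 6-byte vertex layout (illustrated with a JavaScript typed array; the names and the exact byte split are just an example, and "view" is a DataView over the vertex buffer's data):

// 6 bytes per vertex: tile index (uint32, emulates the instance ID),
// corner index (uint8, 0-3), texture index (uint8). 4 vertices per tile = 24 bytes.
var VERTEX_SIZE = 6;

function packTile(view, tile, textureIndex) {
  for (var corner = 0; corner < 4; corner++) {
    var o = (tile * 4 + corner) * VERTEX_SIZE;
    view.setUint32(o, tile, true);       // which tile this vertex belongs to
    view.setUint8(o + 4, corner);        // which corner of the unit rectangle
    view.setUint8(o + 5, textureIndex);  // which atlas texture to use
  }
}

// Editing a tile later only touches its 4 texture-index bytes:
function setTileTexture(view, tile, textureIndex) {
  for (var corner = 0; corner < 4; corner++) {
    view.setUint8((tile * 4 + corner) * VERTEX_SIZE + 5, textureIndex);
  }
}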

OpenGL 1 has no shaders to begin with (again, assuming the extensions aren't there), so I am wondering whether it's even worth using buffers in the first place, rather than just the old and slow glBegin/glEnd path. Buffers in OpenGL 1 would work the same way as approach 1 in OpenGL 2.

If you have other neat suggestions for any GL version, feel free to share them.

Also, I realize this question isn't very practical, since I would probably need gigantic tile maps (hundreds of thousands of tiles or more) to actually notice a difference besides how easy each approach is to code, but I am interested in this anyway.

Thanks for any answer!
