
Chananya Freiman
Member

Content Count: 16
Joined
Last visited
Community Reputation: 140 Neutral

About Chananya Freiman
Rank: Member
How to do animation and IK with OpenGL?
Chananya Freiman replied to lucky6969b's topic in Graphics and GPU Programming
That isn't misusing OpenGL, that's using your GPU properly. The whole idea of shaders is to give you the programmable ability to do anything you want with the GPU. Your GPU is far better than your CPU at these things, and by "these things" I mean applying the same calculations over big, vector-like data sets. Bones (matrices) happen to fit that description. GPGPU just makes this more obvious (either through OpenCL/CUDA, or compute shaders, if they exist already?). Updating the animation skeletons and running IK (which isn't really related to rendering in any way, but rather to physics), though, is usually done on the CPU. There will probably be a time when that, too, won't be true anymore, but we are not there yet. 
app crashes when binding buffers and call draw function
Chananya Freiman replied to Prognatus's topic in Graphics and GPU Programming
Sorry if this offends you, but it's crashing because there is hardly a line of code there that isn't an error. I'll add a full reply once I get back to my computer, if nobody answers by then. 
You will use all the vertices. The idea is that you don't use the vertices alone to describe your mesh; you use indices to describe it. In the end, to actually render a rectangle, you have to give OpenGL two triangles, which are six vertices. So you can either give OpenGL the six vertices up front, or you can give it the four vertices that actually define the rectangle, and tell GL "I gave you four vertices, I want you to make two triangles out of them like this index buffer tells you".

When using indices, GL simply grabs the vertices from your vertex buffer. Your first value in the index buffer is 0? OK, grab the first value in the vertex buffer. The second value is 1? Grab the second value in the vertex buffer. 2? Grab the third. Now we have grabbed three vertices, which form the first triangle. Then it continues with indices 0, 2 and 3. Note that we reused two vertices - 0 and 2. Even though we sent them only once, we actually used them twice.
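The index lookup described above can be sketched in plain JavaScript (the vertex values here are illustrative, not from the post):

```javascript
// What the GPU conceptually does with an index buffer: dereference
// each index into the vertex buffer to produce the triangle vertices.
var vertices = [
  [-1, -1], // 0: bottom-left
  [ 1, -1], // 1: bottom-right
  [ 1,  1], // 2: top-right
  [-1,  1]  // 3: top-left
];

// Two triangles that share vertices 0 and 2
var indices = [0, 1, 2, 0, 2, 3];

// Expand the four stored vertices into the six the rasterizer consumes
var expanded = indices.map(function (i) { return vertices[i]; });
```

Four vertices are stored, but six are used; vertices 0 and 2 are fetched twice without ever being duplicated in memory.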

What if I have more models per level than available VBO memory?
Chananya Freiman replied to Labrasones's topic in Graphics and GPU Programming
Regarding the above comment: drivers are free to swap your buffers from RAM to video RAM (VRAM) and back at any point in time. While giving them hints as to the usage of buffers might (and again, might not) make them initialize the buffer where you want, drivers are free to do run-time analysis of the actual usage of your buffers and swap them if they want to; in fact, I read years ago that they indeed do this. 
Suppose you have a 2D rectangle of size [2, 2]. These are the vertices it uses: [-1, -1], [1, -1], [1, 1], [-1, 1]. While we only need four vertices to represent this rectangle, the graphics card wants triangles, each one having three vertices. So if we were to split the above rectangle into triangles, we would need to send six coordinates instead of four.

Another way to do this is to send only those four vertices, but together with them also tell the graphics card how to form triangles from them. This is where the index buffer (element buffer in OpenGL) comes in. The element buffer has numbers that index your vertices. E.g. 0 would be the first vertex, 1 the second, and so on. So with an element buffer, to form the triangles needed for the rectangle, we need to send these indices: [0, 1, 2, 0, 2, 3]. If you replace the numbers with the actual vertices they index, you will see you get the original six vertices that form the triangles. Indexing reduces memory and bandwidth (except for very uncommon worst-case scenarios), which in turn helps rendering speed.

Most file formats (to which you export from Blender, 3ds Max, etc.) support indexing, but there are two variants of indexing in file formats. In modeling tools (and in fact, in your OpenGL code too!), a "vertex" isn't a position; it's a combination of a position, a normal vector, a color, a texture coordinate, and so on. Every one of these is called a vertex attribute, and is only one part of the whole vertex. OpenGL (and Direct3D) only allow 1D indexing, or in other words, you have one index that points to all the vertex attributes. For example, if you have an array of vertex positions and another array of vertex normals, then index 0 would refer to the first position and the first normal. This might seem obvious, but some file formats don't actually store their data this way. In most cases, a model doesn't actually need the same number of positions, normals, and so on.

If there are many vertices that share the same normal, the file might store only one normal and let them all share it. You then have different indices for each vertex attribute, which you can't directly use for rendering. In this case, you will have to "flatten" the arrays and fill them up with all the shared data. This can be seen in the completely obsolete *.OBJ format (it's the most terrible format in existence, but for some reason it's used everywhere).
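A minimal sketch of that flattening step, with made-up OBJ-style data (the separate position/normal index pairs are my assumption about the input, not code from any real loader):

```javascript
// OBJ-style source data: positions and normals are indexed separately,
// and one normal is shared by every corner of the face.
var positions = [[0, 0, 0], [1, 0, 0], [0, 1, 0]];
var normals = [[0, 0, 1]];

// Each face corner is a (positionIndex, normalIndex) pair
var corners = [[0, 0], [1, 0], [2, 0]];

// Flatten into single-index arrays that GL can render directly:
// corner k now owns position k and normal k in lockstep.
var flatPositions = [];
var flatNormals = [];

for (var k = 0; k < corners.length; k++) {
  var pi = corners[k][0];
  var ni = corners[k][1];

  flatPositions.push(positions[pi][0], positions[pi][1], positions[pi][2]);
  flatNormals.push(normals[ni][0], normals[ni][1], normals[ni][2]);
}
```

The shared normal gets duplicated once per corner; that duplication is the price of GL's single-index model.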

You don't need a loop... Every frame, check once if the sound finished playing. Only if it finished, THEN you can delete the buffer and source.

alGetSourcei(source, AL_SOURCE_STATE, &state);

if (state != AL_PLAYING) {
    // Delete the buffer and source here.
    // But you probably want to do something smarter, like not calling
    // alGetSourcei for this source anymore; that's up to you.
}

Why would you delete the buffers and source before the sound finishes? That's the whole point of the state argument. Only when the state is not AL_PLAYING do you delete them.

Probably by just not running a do-while? Call it once every frame while checking if it finished playing.

OpenGL WebGL: Pseudo instancing being slow
Chananya Freiman posted a topic in Graphics and GPU Programming
I am working on a 3D viewer with WebGL, and want to optimize the particle emitters that it supports. The information for each particle is its rectangle in world space, texture coordinates, and a color, all of which can change over time.

At first I did it the simple way, with a big buffer where each vertex had all the above information, for a total of 54 floats per particle (2 triangles, 6 vertices, 9 floats per vertex: [X, Y, Z] [U, V] [R, G, B, A]). Note that the vertex positions here are already in world space and form a rectangle. This works fine, but is a bit on the slow side on the CPU, simply because updating the buffer each frame takes a lot of work.

So the next stage was to do pseudo-instancing. WebGL doesn't support anything besides 2D textures, so I have to use those for arbitrary storage. The idea, then, is to make a 2D texture that can hold all the data for each particle (so now only 9 floats are needed for the per-particle data), and for the vertex attributes use just the instance ID and vertex ID. For example, the first particle is [0, 0, 0, 1, 0, 2, 0, 0, 0, 2, 0, 3], where each pair is the instance and vertex IDs. Instead of sending the world positions that already form a rectangle, I just send the center and size of each particle. A normalized rectangle is computed once and sent as a uniform, and then all the particles add it to their position, scaled by their scale. Instead of using the computed texture coordinates, I instead send an ID that says where in the texture this particle is, and the actual coordinates are computed in the shader. So every particle has a total of 21 floats instead of 54 (for some reason I can't use non-float attributes; is this only in WebGL? It has been quite some time since I touched OpenGL), and out of those only 9 need updates every frame. 
For a start, I wanted to get it done quickly and not waste too much time on further optimizations, so I just picked a square power-of-2 texture size that fit my needs for a test model, which happened to be 32x32 pixels. While only 9 floats were really needed for each particle, I just chose a 4x4 matrix format for now and padded the data with zeroes. So a 32x32 RGBA texture, in this scenario, can hold 256 particles (32*32/4). Even though I chose not to optimize it, it still requires far less bandwidth and fewer updates than the original design. Every frame, after updating all the particles, I upload the new texture data and render all the particles.

But here's the issue: for some reason, this is a whole lot slower than just using the flat, much bigger buffer. I simply can't understand how that's even possible; I am uploading much less data and doing a lot less work on the CPU. Is it possible that glTexSubImage2D is somehow much slower than glBufferSubData?

These are the most relevant pieces of code, and after that the vertex shader:

// Setup
this.textureColumns = 32;
this.textureRows = 32;

this.particleTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, this.particleTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this.textureColumns, this.textureRows, 0, gl.RGBA, gl.FLOAT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.bindTexture(gl.TEXTURE_2D, null);
...

// After updating all the particles
gl.activeTexture(gl.TEXTURE3);
gl.bindTexture(gl.TEXTURE_2D, this.particleTexture);

// hwarray is a Float32Array object with all the particle data
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, this.textureColumns, this.textureRows, gl.RGBA, gl.FLOAT, this.hwarray);

// Bind it to the sampler uniform
viewer.setParameter("u_particles", 3);

// This buffer holds all the instance and vertex IDs, it never changes
gl.bindBuffer(gl.ARRAY_BUFFER, this.buffer);

// viewer is my WebGL wrapper
gl.vertexAttribPointer(viewer.getParameter("a_instanceID"), 1, gl.FLOAT, false, 8, 0);
gl.vertexAttribPointer(viewer.getParameter("a_vertexID"), 1, gl.FLOAT, false, 8, 4);

// Finally draw all the particles, each one has 6 vertices
gl.drawArrays(gl.TRIANGLES, 0, 256 * 6);

The vertex shader:

uniform mat4 u_mvp; // Model-view-projection matrix
uniform mat4 u_plane; // This is the plane that each particle uses
uniform float u_cells; // This is the number of sub-textures inside the particle image
uniform float u_pixel_size; // This is the size of each pixel relative to the texture size, so 1 / 32 in this scenario
uniform float u_pixels; // This is the number of pixels in each row of the particle texture, 32 in this scenario
uniform sampler2D u_particles; // The actual particle data
uniform mat4 u_uvs; // This holds a normalized UV rectangle, every column's XY values are a coordinate

attribute float a_instanceID;
attribute float a_vertexID;

varying vec2 v_uv;
varying vec4 v_color;

// Gets the index-th particle as a matrix
mat4 particleAt(float index) {
  float x = u_pixel_size * mod(index, u_pixels);
  float y = u_pixel_size * floor(index / u_pixels);

  return mat4(texture2D(u_particles, vec2(x, y)),
              texture2D(u_particles, vec2(x + u_pixel_size, y)),
              texture2D(u_particles, vec2(x + u_pixel_size * 2., y)),
              texture2D(u_particles, vec2(x + u_pixel_size * 3., y)));
}

void main() {
  mat4 particle = particleAt(a_instanceID);

  vec3 position = particle[0].xyz; // Particle's position
  vec3 offset = u_plane[int(a_vertexID)].xyz; // The plane's vertex for the current vertex ID
  float index = particle[1][0]; // This is the sub-texture index for the texture coordinates
  float scale = particle[1][1]; // The size of this particle
  vec4 color = particle[2]; // The color of this particle
  vec2 uv = u_uvs[int(a_vertexID)].xy; // The texture coordinate for the current vertex ID

  // Final texture coordinate calculations
  vec2 cell = vec2(mod(index, u_cells), index / u_cells);
  v_uv = cell + vec2(1.0 / u_cells, 1.0 / u_cells) * uv;

  v_color = color;

  // And set the vertex to the particle's position, offset by the plane and scaled to its size
  gl_Position = u_mvp * vec4(position + offset * scale, 1.0);
}

Thanks for any help! 
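The padded packing described above can be sketched like this (the helper name is mine, but the layout matches the shader: column 0 is the position, column 1 holds the sub-texture index and scale, column 2 is the color, column 3 is padding):

```javascript
// Each particle occupies a padded 4x4 "matrix" slot: 16 floats = 4 RGBA texels.
var columns = 32;
var rows = 32;
var floatsPerParticle = 16;

// RGBA float texture: 4 floats per texel, so 32*32*4 floats total
var capacity = (columns * rows * 4) / floatsPerParticle; // 256 particles

var hwarray = new Float32Array(columns * rows * 4);

function writeParticle(i, x, y, z, textureIndex, scale, r, g, b, a) {
  var o = i * floatsPerParticle;

  // Column 0: position, column 1: sub-texture index and scale,
  // column 2: color, column 3: unused zero padding.
  hwarray.set([x, y, z, 0,
               textureIndex, scale, 0, 0,
               r, g, b, a,
               0, 0, 0, 0], o);
}
```

Only 9 of the 16 floats per slot carry data; the padding buys a fixed stride so the shader can fetch particle i with simple index arithmetic.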
What if I have more models per level than available VBO memory?
Chananya Freiman replied to Labrasones's topic in Graphics and GPU Programming
Shaders and uniforms are not bound by VAOs. I suggest you read this page; it will explain the subject much better than I could. 
What if I have more models per level than available VBO memory?
Chananya Freiman replied to Labrasones's topic in Graphics and GPU Programming
There are versions of the instanced drawing functions that take a range of vertices. The general idea (I believe) is to batch static objects together, ones that you know will never move anyway. But this also hinders you when culling them. I assume Google can give more information; I have never had to handle big scenes yet.

Sending positions, transformations, etc. to batches (and instanced draws) is really an application-specific thing, but you'll usually use something to identify each mesh (done for you in instanced rendering), and based on that select the correct data from a uniform buffer / texture buffer / whatever.

VAOs are mainly for convenience; I am not sure if they actually improve performance, and if so it's probably by a little. They are used to store the current state of your context related to rendering (so vertex and element VBO bindings, shader attribute bindings, etc.), so that when you want to draw something you only need to bind the VAO and it restores that state for you. 
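As a sketch of that convenience, here is roughly how a VAO captures buffer and attribute state once at setup time (WebGL2-style API; the function and attribute names are my own, and `gl` is assumed to be an existing context):

```javascript
// Record a mesh's vertex/element buffers and attribute layout into a VAO
// once, so that drawing later only needs gl.bindVertexArray(vao).
function createMeshVAO(gl, program, vertexData, indexData) {
  var vao = gl.createVertexArray();
  gl.bindVertexArray(vao);

  // Vertex buffer and attribute pointer: captured by the bound VAO
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);

  var position = gl.getAttribLocation(program, "a_position");
  gl.enableVertexAttribArray(position);
  gl.vertexAttribPointer(position, 3, gl.FLOAT, false, 0, 0);

  // The element buffer binding is part of VAO state too
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indexData, gl.STATIC_DRAW);

  // Unbind so later buffer calls don't leak into this VAO
  gl.bindVertexArray(null);

  return vao;
}
```

At draw time the whole setup collapses to `gl.bindVertexArray(vao)` followed by `gl.drawElements(...)`.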
WebGL: hardware skinning with a bone texture
Chananya Freiman replied to Chananya Freiman's topic in Graphics and GPU Programming
I did not know that inspector existed; I'll check it out. Thanks.

Edit: Thanks a lot, that inspector really helped. Got the skinning working this time! 
WebGL: hardware skinning with a bone texture
Chananya Freiman replied to Chananya Freiman's topic in Graphics and GPU Programming
I don't have access to files from JavaScript. In any case, the data is the same data I otherwise use in the matrix uniform array (where it works as expected), so it's correct. There's probably something really stupid and obvious that I am doing and not noticing. 
WebGL: hardware skinning with a bone texture
Chananya Freiman posted a topic in Graphics and GPU Programming
I have WebGL code running hardware skinning in a vertex shader for animations, using a uniform array of matrices for my bones. The problem arises when I don't have enough vector slots, which happens when there are more than 62 bones (I don't control the models or the number of bones, and I've already seen a model with 173 bones, which is crazy).

I tried using a float texture to store all my bones in and fetch them in the shader, but I can't seem to do that correctly. There is no texelFetch in WebGL's version of GLSL, no 1D textures, and obviously no texture buffers or uniform buffers. What I tried was creating an X-by-1 2D float texture, where X is the number of floats required for all the matrices, and feeding it with all the matrices. I send to the shader the size of each matrix and the size of each vector, relative to the size of the texture, so I can get to any matrix with a normal texture fetch. I believe this should work... in theory. But it doesn't.

This is the texture initialization code:

var buffer = new Float32Array(...);
...
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, buffer.byteLength / 16, 1, 0, gl.RGBA, gl.FLOAT, buffer);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);

And the part of the vertex shader that constructs a matrix from the texture, given a matrix index:

...
uniform sampler2D u_bone_map;
uniform float u_matrix_fraction;
uniform float u_vector_fraction;
...

mat4 boneMatrix(float bone) {
  return mat4(texture2D(u_bone_map, vec2(u_matrix_fraction * bone, 0)),
              texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction, 0)),
              texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction * 2.0, 0)),
              texture2D(u_bone_map, vec2(u_matrix_fraction * bone + u_vector_fraction * 3.0, 0)));
}
...

u_matrix_fraction and u_vector_fraction are the relative sizes described above. E.g., if the texture is 512x1, each matrix is 16/512 and each vector is 4/512, so to get the i-th matrix the code needs to go to i*u_matrix_fraction and grab 4 texels, each the size of u_vector_fraction.

These matrices result in my meshes going crazy, so something is wrong, but I am just not sure what. Got any ideas? Thanks for any help. 
Making a node attached to a node hierarchy face the camera (billboard)
Chananya Freiman posted a topic in Math and Physics
I have a node hierarchy, and I need to make some nodes inside it billboards. Every time I update the hierarchy, I get the local translation and rotation of each node. I then use them to get the world matrix and world rotation of the node. The rotation is stored as a quaternion, and it gets multiplied by the parent's rotation if a parent exists.

After this, the billboarding part begins. What I tried is getting the difference between the camera's quaternion and the world rotation of the current node, and then simply adding that to the node via a rotation matrix. This sounds like it should work to me, but it didn't. No matter what other variations I tried, I simply can't get this to work. Can you spot errors of any sort in my code? (Or is this calculation not even right in the first place?) Thanks!

// The position of the node
var pivot = node.pivot;

// Local translation and rotation
var translation = this.localTranslation(node, time);
var rotation = this.localRotation(node, time);

var rotationMatrix = [];
var localMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

math.Quaternion.toRotationMatrix4(rotation, rotationMatrix);

// Create this node's local transformation matrix
math.Mat4.translate(localMatrix, translation[0], translation[1], translation[2]);
math.Mat4.translate(localMatrix, pivot[0], pivot[1], pivot[2]);
math.Mat4.multMat(localMatrix, rotationMatrix, localMatrix);
math.Mat4.translate(localMatrix, -pivot[0], -pivot[1], -pivot[2]);

// World matrix + accumulated rotation
if (node.parent) {
  math.Mat4.multMat(node.parent.worldMatrix, localMatrix, node.worldMatrix);
  math.Quaternion.concat(rotation, node.parent.worldRotation, node.worldRotation);
} else {
  node.worldMatrix = localMatrix;
  node.worldRotation = Object.copy(rotation);
}

if (node.billboarded) {
  // This is the camera's quaternion
  var cam = Object.copy(camera.rotate);
  // And the node's quaternion
  var initial = Object.copy(node.worldRotation);

  // Conjugate it because I want the difference, which is Camera * Node^-1
  math.Quaternion.conjugate(initial, initial);

  var difference = [];

  // The difference
  math.Quaternion.concat(cam, initial, difference);

  var rotationMatrix2 = [];

  // Get a rotation matrix from the difference
  math.Quaternion.toRotationMatrix4(difference, rotationMatrix2);

  var finalMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

  // Finally calculate the final matrix that rotates in the same vector space
  math.Mat4.translate(finalMatrix, pivot[0], pivot[1], pivot[2]);
  math.Mat4.multMat(finalMatrix, rotationMatrix2, finalMatrix);
  math.Mat4.translate(finalMatrix, -pivot[0], -pivot[1], -pivot[2]);

  // And apply it
  math.Mat4.multMat(node.worldMatrix, finalMatrix, node.worldMatrix);
}

Not sure if this matters, but this is the code that moves the camera (it simply orbits around the center when you drag your mouse, in the same direction your mouse moves):

if (mouse.left) {
  var anglez = math.toRad(mouse.x - mouse.x2);
  var nrotz = math.Quaternion.fromAxisAngle([0, 0, 1], anglez, [0, 0, 0, 0]);

  math.Quaternion.concat(camera.rotate, nrotz, camera.rotate);

  var anglex = math.toRad(mouse.y - mouse.y2);
  var nrotx = math.Quaternion.fromAxisAngle([1, 0, 0], anglex, [0, 0, 0, 0]);

  math.Quaternion.concat(nrotx, camera.rotate, camera.rotate);

  transform.rotation[2] += mouse.x - mouse.x2;
  transform.rotation[0] += mouse.y - mouse.y2;
}
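To sanity-check the "difference" idea in isolation: with minimal quaternion helpers of my own (in [x, y, z, w] order; this is not the post's math library), diff = camera * conjugate(node) is exactly the rotation that takes the node's orientation to the camera's:

```javascript
// Minimal unit-quaternion helpers, [x, y, z, w] order.
function conjugate(q) {
  return [-q[0], -q[1], -q[2], q[3]];
}

// Hamilton product a * b
function multiply(a, b) {
  return [
    a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1],
    a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0],
    a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3],
    a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2]
  ];
}

// The rotation that takes `node` to `camera`:
// multiply(difference(camera, node), node) equals camera for unit quaternions.
function difference(camera, node) {
  return multiply(camera, conjugate(node));
}
```

One thing this check cannot catch is multiplication-order conventions: if the post's concat composes in the opposite order to multiply here, the difference ends up on the wrong side of the node's rotation.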
