Arjan B

Members
  • Content count: 113
Community Reputation: 1136 Excellent

About Arjan B

  • Rank: Member
  1. 'Remove' direction from velocity

    Just to add to Alvaro's comment: the projection of A onto B gives you the component of A that lies in the direction of B. This is why he/she subtracts that projection from A: what remains is the part of A perpendicular to B. The Wikipedia page on vector projection calls this the rejection of A from B: https://en.wikipedia.org/wiki/Vector_projection
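    A minimal sketch of that computation (the Vec3 helpers here are illustrative, not from Alvaro's post; B must not be the zero vector):

        struct Vec3 { float x, y, z; };

        static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static Vec3  mul(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
        static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Rejection of a from b: strip the component of a that lies along b.
        // The result is perpendicular to b.
        Vec3 reject(Vec3 a, Vec3 b)
        {
            Vec3 proj = mul(b, dot(a, b) / dot(b, b)); // projection of a onto b
            return sub(a, proj);
        }

    To remove a direction from a velocity, you would call reject(velocity, direction).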
  2. Wow, thanks a lot guys!

    I ended up doing what Juliean said, which brought me right to the glUniform1f() function. And exactly as Nanoha stated, it generated a GL_INVALID_OPERATION error, which was fixed by simply replacing the 'f' with an 'i'. Wish I'd posted here sooner; I spent tons of hours sadly staring at my screen as well.

    Thanks again!
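    For anyone finding this thread later: sampler uniforms take the texture unit index as an integer, so the corrected version of the call from the post below is:

        glUniform1i(glGetUniformLocation(shaderProgram, "tf0"), 17); // 'i', not 'f': samplers are set as integers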
  3. Hi! I first create a lookup table for a transfer function and then try to upload it as a 1D texture as follows:

        for (unsigned i = 0; i < 1024; i++)
            tfm0[i] = qfeGetProbability(tf0, (float)i / 1023.f);

        glActiveTexture(GL_TEXTURE17);
        if (glIsTexture(tfmTex0))
            glDeleteTextures(1, &tfmTex0);
        glGenTextures(1, &tfmTex0);
        glBindTexture(GL_TEXTURE_1D, tfmTex0);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_R16F, 1024, 0, GL_RED, GL_FLOAT, tfm0);

    Right before the rendering call, I make sure all the textures I need are bound to the right texture units:

        glActiveTexture(GL_TEXTURE17);
        glBindTexture(GL_TEXTURE_1D, tfmTex0);

    Then I set my uniform variable for the 1D texture:

        glUniform1f(glGetUniformLocation(shaderProgram, "tf0"), 17);

    And this is how the 1D texture is declared and sampled in the fragment shader, where sampleNorm is a value between 0 and 1:

        uniform sampler1D tf0;
        vec4 tfValue = texture1D(tf0, sampleNorm);

    Somehow, all of the tfValues end up being (0, 0, 0, 1), which I suspect is a default fallback value.

    To be sure that I uploaded the values to the graphics card correctly, I also have this check right before the draw call:

        float values[1024];
        glActiveTexture(GL_TEXTURE17);
        glGetTexImage(GL_TEXTURE_1D, 0, GL_RED, GL_FLOAT, values);

    It retrieves the values of the texture I uploaded back into "normal" memory, and they turn out to be exactly the values I expect.

    Does anyone have an idea of where things might be going wrong? What would cause the sampler in the fragment shader to return (0, 0, 0, 1) when it should be returning my values in the R channel?

    Thank you in advance,
    Arjan
  4. Is ray tracing hard or is it just me?

    I think it's appropriate here to link to Bacterius' journal: http://www.gamedev.net/blog/2031-ray-tracing-devlog/. He does a good job of thoroughly explaining the process of writing a raytracer.
  5. The Rendering Equation

    Loving this blog. Keep up the good work!
  6. Depth of field

    My friend had added depth of field to the path tracer, which shows some nice results.

    Without DoF: [image]

    With DoF: [image]

    This effect was achieved by picking a focal point on the focal plane for every pixel, and then jittering our camera rays to go through this focal point, as sketched at the end of this post.

    Finished report

    After some significant revisions of our two reports, for the two subjects under which we did this project, we are finally finished. I feel like I've learned an awful lot more about rendering, mostly from looking at it from a different angle than the approach I'm used to (rasterization). Working on this project has been a joy for me and I'm happy with the results.

    Having finished the report does not mean that we're finished with this project. We do intend to find some time to add more features, but in reality, time might be sparse. My interests have shifted to learning how to implement these kinds of effects (AA, DoF, GI) in a rasterization setting.

    I hope people enjoyed having a look at this series of blog posts. Maybe there'll be more. Thanks for reading!
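    Returning to the jittering step mentioned above: this is not our project's code, just a minimal thin-lens sketch of the idea (all names are illustrative):

        #include <cmath>
        #include <cstdlib>

        struct Vec3 { float x, y, z; };
        struct Ray  { Vec3 origin, dir; };

        static Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static Vec3  mul(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
        static Vec3  normalize(Vec3 v)    { float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); return mul(v, 1.0f / l); }
        static float rand01()             { return std::rand() / (float)RAND_MAX; }

        // Jitter a pinhole camera ray so it still passes through its focal
        // point: geometry on the focal plane stays sharp, the rest blurs.
        Ray jitterForDoF(Ray pinhole, Vec3 camRight, Vec3 camUp,
                         float focalDistance, float apertureRadius)
        {
            Vec3 focalPoint = add(pinhole.origin, mul(pinhole.dir, focalDistance));

            // Uniform random sample on the lens disk.
            float r   = apertureRadius * std::sqrt(rand01());
            float phi = 2.0f * 3.14159265f * rand01();
            Vec3 origin = add(pinhole.origin,
                              add(mul(camRight, r * std::cos(phi)),
                                  mul(camUp,    r * std::sin(phi))));

            return { origin, normalize(sub(focalPoint, origin)) };
        }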
  7. For a school project, I'm implementing SPH fluid simulation, according to the paper by Müller et al. from 2003: "Particle-Based Fluid Simulation for Interactive Applications". I've implemented the calculation of density and pressure, the pressure forces, viscosity forces and gravity. Now that I'm adding a bounding box, things start going wrong.

    My response to a collision (sketched in code below) is to move the particle back to the contact point, reflect its velocity around the normal of the box at the contact point, and damp the magnitude a bit by some bounce factor.

    Now I have the following scenario. Two particles, p2 above p1, start by floating somewhere in the bounding box. They are too far away from each other for the pressure or viscosity forces to work on them, so gravity starts pulling them both down. p1 reaches the bottom of the bounding box, bounces a little bit and then stays on the bottom. p2 is still too far away for pressure/viscosity forces, and then, within one timestep, p2 hits the bounding box as well and is placed at the bottom. Now p1 and p2 are incredibly close to each other, causing the pressure force to be incredibly large. This makes the particles propel away from each other with extreme speed.

    What kind of solution would you suggest? Just decrease the timestep? Use penalty forces instead of projection?

    Thanks in advance!
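    For reference, the collision response described above in simplified code, here for the floor plane y = 0 only (names are illustrative, not my actual project code):

        struct Vec3 { float x, y, z; };
        struct Particle { Vec3 pos, vel; };

        // Floor response: move the particle back to the contact point,
        // reflect its velocity around the normal (0, 1, 0), and damp the
        // magnitude by a bounce factor in [0, 1].
        void collideWithFloor(Particle& p, float bounce)
        {
            if (p.pos.y < 0.0f) {
                p.pos.y  = 0.0f;          // back to the contact point
                p.vel.y  = -p.vel.y;      // reflect around the +y normal
                p.vel.x *= bounce;        // damp the reflected
                p.vel.y *= bounce;        // velocity's magnitude
                p.vel.z *= bounce;        // a bit
            }
        }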
  8. Beginning GLSL - Quick Question

    Good luck! This online book helped me greatly: http://www.arcsynthesis.org/gltut/index.html
  9. Beginning GLSL - Quick Question

    You calculate your view matrix and you calculate your projection matrix. Instead of telling your shader "here are both of them", you simply multiply them once on the CPU and tell your shader "here is the view*projection matrix". There's no need to work out how to calculate both of them in one go.

    Since the result of that matrix multiplication is the same for the whole draw call anyway, you might as well compute it just once on the CPU, instead of computing it again and again for every vertex in the shader.

    Yes, every time you update your view matrix, you will have to multiply it with the projection matrix again and then feed the result to your shaders.
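    A sketch of that, assuming GLM for the matrix math (the uniform name uViewProj and the variables eye, target, aspect and shaderProgram are illustrative):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/type_ptr.hpp>

        // Whenever the camera moves: combine once on the CPU...
        glm::mat4 view     = glm::lookAt(eye, target, glm::vec3(0.0f, 1.0f, 0.0f));
        glm::mat4 proj     = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);
        glm::mat4 viewProj = proj * view; // column-vector convention: projection applied last

        // ...and upload the single combined matrix. The vertex shader now
        // does one matrix multiply per vertex instead of two.
        glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "uViewProj"),
                           1, GL_FALSE, glm::value_ptr(viewProj));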
  10. Beginning GLSL - Quick Question

    Well, uniform variables in GLSL remain constant over a drawing call, like glDrawArrays() for example. So with a separate call, I meant that you set your uniform matrix for drawing the background tiles, draw them, then set the uniform matrix for your character and draw that one.

    If you were to use a different shader for drawing your character, one that uses an extra matrix that you don't want to have in the drawing of your background tiles, then you would have to create a separate shader and program for it. You then just bind the right program right before you draw.

    Now, my initial suggestion of using an extra matrix might be overkill for simple quads (I assume you draw 2D sprites on quads). If all your sprites need is different positions, then you could use a uniform offset/position variable: for each character, first set the uniform position variable, then draw that character (roughly as sketched below). This might be very similar to what you are doing now, I guess.

    I'd like to add that I'm by no means an expert, I'm just putting my thoughts out there.
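    A rough sketch of that flow (the uniform name uOffset and the vertex counts are illustrative):

        GLint offsetLoc = glGetUniformLocation(program, "uOffset");

        // Background tiles: set the uniform, then draw.
        glUniform2f(offsetLoc, 0.0f, 0.0f);
        glDrawArrays(GL_TRIANGLES, 0, tileVertexCount);

        // Character: same program, new uniform value, its own draw call.
        glUniform2f(offsetLoc, characterX, characterY);
        glDrawArrays(GL_TRIANGLES, 0, 6); // one quad as two triangles

        // With a dedicated character shader you would instead bind that
        // program first: glUseProgram(characterProgram); set its uniforms; draw.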
  11. Beginning GLSL - Quick Question

    Instead of altering a buffer of positions, wouldn't it be a good idea to draw your character in a separate call and alter the matrix that specifies where it will be drawn?

    Usually, when working with a 3D mesh, the vertex positions are specified in the mesh's own model coordinate system. Using a set of matrices, you transform those positions from model space to a place in the world, and then apply perspective projection and so on. The perspective projection stays mostly the same for all meshes, while the matrix that moves a mesh from model space to world space changes from mesh to mesh.

    For your 2D game, you could have an extra matrix that specifies a translation and a rotation for the sprite you are drawing, built from the position and orientation of your character, for example.
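    For the 2D case, that extra matrix could be built like this (assuming GLM; uModel, uProjection and the variable names are illustrative):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/type_ptr.hpp>

        // Model matrix from the character's position and orientation:
        // translate to where the sprite is, rotate around the z axis.
        glm::mat4 model(1.0f);
        model = glm::translate(model, glm::vec3(characterX, characterY, 0.0f));
        model = glm::rotate(model, characterAngle, glm::vec3(0.0f, 0.0f, 1.0f));

        glUniformMatrix4fv(glGetUniformLocation(program, "uModel"),
                           1, GL_FALSE, glm::value_ptr(model));

        // In the vertex shader, something like:
        //   gl_Position = uProjection * uModel * vec4(inPosition, 0.0, 1.0);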
  12. The future of VR in gaming

    I'm sorry to say that we did not work with the Vrui Toolkit. The CAVE was at a neighbouring college, and I believe the libraries we used were developed in-house. It was called Cavelib3, if I remember correctly. Here's a link to some information about the CAVE: http://www.fontysvr.nl/facilities-and-systems-in-vrlab/virtual-reality-cave. If you go to the wiki, there should be some information about Cavelib3 too. I hope most of it is in English; the site did not seem to be very consistent about which language it uses.

    That library made things very simple for us, since it handled pretty much like GLUT, but with extras. One of those extras was trigger volumes, which we used to detect whether someone hit an axe or a hole.

    I think we made the models using a trial version of 3ds Max.

    @slicer4ever That Omni thing looks awesome!