About Kepakiano

  1. Thank you for your answers.

     "Of course, it all depends on your exact situation, but are you sure that it is worth precalculating and storing the values at all rather than just calculating them on the fly per fragment?"

     My 'boss' asked the same question :D

     Okay, this is what I'm doing. Short description: "grid" defines a distorted 2D grid of points like this one:

         (-1.0  1.0)  ( 0.0  1.0)  (1.0  1.0)
         (-1.0  0.0)  (-0.2  0.0)  (1.0  0.0)
         (-1.0 -1.0)  ( 0.0 -1.0)  (1.0 -1.0)

     Note: These four quads are not enough; I need at least 16x16, maybe even 24x24 or 32x32.

     So, for every fragment, I need to find out in which of the quads the current fragment lies (lines 88 - 105, a pretty naive implementation with linear complexity) by dividing each candidate quad into two triangles and calculating the point's barycentric coordinates in those triangles (lines 19 - 30). Once the correct quad is found, the point is bilinearly interpolated and the value is returned. The resulting vec2 is what I want to store for every fragment.

     I'm pretty sure a texture/SSBO lookup would be faster than 1-16 point-in-triangle checks (assuming I come up with a faster way to find the correct quad) plus one bilinear interpolation, but I have no experience in that matter. What do you think?

     @samoth: The second is the case. I chose the SSBO because of its ability to store arrays of arbitrary length, which would be nice. My target platform (a virtual reality lab) fully supports OpenGL 4.3 :) But you are probably right: the drawbacks outweigh the benefits compared to a texture.

     The lab's projectors have a resolution of 1400x1050 pixels, so the buffer would need 1400 * 1050 * 8 bytes (two floats per pixel) = 11,760,000 bytes = 11.21 MiB of space, which is probably too much for the L1 cache of a GTX 460 :/

     So, assuming I really precalculate, you think I should go for a texture?
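The per-fragment test described above (barycentric coordinates to locate the containing triangle) can be sketched roughly as follows. This is not the shader code from the post; the names (`Vec2`, `barycentric`, `inTriangle`) and the CPU-side `double` math are my own stand-ins for illustration:

```cpp
#include <array>

// Minimal 2D point type; hypothetical stand-in for GLSL's vec2.
struct Vec2 { double x, y; };

// Barycentric coordinates of p with respect to triangle (a, b, c).
std::array<double, 3> barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c)
{
    const double det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    const double l0  = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / det;
    const double l1  = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / det;
    return {l0, l1, 1.0 - l0 - l1};
}

// p lies inside (or on the edge of) the triangle iff all three
// barycentric coordinates are non-negative.
bool inTriangle(Vec2 p, Vec2 a, Vec2 b, Vec2 c)
{
    const auto l = barycentric(p, a, b, c);
    return l[0] >= 0.0 && l[1] >= 0.0 && l[2] >= 0.0;
}
```

Each grid quad would be split into two such triangles; once a containing triangle is found, the barycentric weights (or the quad-local coordinates derived from them) drive the interpolation.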
  2. Hey there,

     I am writing a fragment shader which needs two float values (with some additional information) for every fragment it processes. These values are calculated once at the beginning of the program and never change, so they could be stored in a 2D or 1D lookup table/array.

     My first two choices would be:
     1. rendering the values in a separate fragment shader to a 2D GL_RG32F texture, or
     2. calculating the values in a compute shader and storing them in a vec2 array in a shader storage buffer object (the size of the table is not known at compile time!).

     What is the best/fastest way to store them and look them up? Is there another fast method?

     Thank you for your help.
  3. This is what I would do (needs C++11):
     1. Generate a random unit vector.
     2. Make it orthogonal to your given vector by using the Gram-Schmidt process.

     (Edit: The snippet does not check whether v and the random u are linearly independent.)
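The snippet referred to above did not survive here, so the following is my own reconstruction of the two steps (random vector, then Gram-Schmidt projection); `Vec3` and `randomOrthogonal` are hypothetical names, and `<random>` is the C++11 part the post alludes to:

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns a unit vector orthogonal to v (v must be non-zero).
Vec3 randomOrthogonal(const Vec3 &v)
{
    std::mt19937 gen(std::random_device{}());
    std::uniform_real_distribution<double> dist(-1.0, 1.0);

    // 1. Draw a random vector u.
    const Vec3 u{dist(gen), dist(gen), dist(gen)};

    // 2. Gram-Schmidt: subtract the component of u parallel to v.
    const double s = dot(u, v) / dot(v, v);
    Vec3 w{u.x - s * v.x, u.y - s * v.y, u.z - s * v.z};

    // As noted in the post's edit, this does not handle the (vanishingly
    // unlikely) case that u and v are linearly dependent, i.e. w ~ 0.
    const double len = std::sqrt(dot(w, w));
    return {w.x / len, w.y / len, w.z / len};
}
```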
  4. This thread can be closed.

     If someone comes across the same problems and finds this thread, here is what I did: my previous post suggested that I wanted a perspective projection of the texture onto the quad, which I don't. This is what I wanted:

     -> A rectangular texture mapped onto a quadrilateral via bilinear interpolation. Since OpenGL does not provide any means to do this in the FFP (afaik), it is achieved in a fragment shader using the calculations from this thread:
  5. Hi there, after two days of research and trial and error, I give up and need help.

     I'm trying to map a texture (not necessarily quadratic or with a size of 2^x) to an arbitrary convex quadrilateral. However, since the OpenGL pipeline divides the quad into two triangles, there is an ugly seam.

     Original texture:

     Texture, if the upper left point is set to (0, 0.5):

     Currently, the texture coordinates are mapped in the most naive way:

     ```cpp
     glTexCoord4f(0.0f, 1.0f, 0.0f, 1.0f);
     glVertex3f(_wall_properties.matrix[y][x].first, _wall_properties.matrix[y][x].second, 0.0f);

     glTexCoord4f(0.0f, 0.0f, 0.0f, 1.0f);
     glVertex3f(_wall_properties.matrix[y + 1][x].first, _wall_properties.matrix[y + 1][x].second, 0.0f);

     glTexCoord4f(1.0f, 0.0f, 0.0f, 1.0f);
     glVertex3f(_wall_properties.matrix[y + 1][x + 1].first, _wall_properties.matrix[y + 1][x + 1].second, 0.0f);

     glTexCoord4f(1.0f, 1.0f, 0.0f, 1.0f);
     glVertex3f(_wall_properties.matrix[y][x + 1].first, _wall_properties.matrix[y][x + 1].second, 0.0f);
     ```

     I have already read tons of stuff on the topic, including Cass' explanation. Strangely, changing the q-coordinate of the textures does not have any effect on the texture. Besides, I wouldn't know how to calculate the correct q. I also tried changing the texture matrix following this post, with unsatisfying results. Would a correct texture matrix solve the problem, as this post proposes?
  6. I know this is a VERY old thread, but I encountered this problem as well and found the only solution here. Unfortunately, I think erissian's formula is flawed. There is a small error which kept me busy for a while, but I solved it and thought I could share the solution. erissian wrote:

     M_x = (1-s)A_x + st(A-D+C-B)_x - t(A-D)_x
     M_y = (1-s)A_y + st(A-D+C-B)_y - t(A-D)_y

     But it should be (note the added first term):

     M_x = sB_x + (1-s)A_x + st(A-D+C-B)_x - t(A-D)_x
     M_y = sB_y + (1-s)A_y + st(A-D+C-B)_y - t(A-D)_y

     This simplifies to these coefficients of the quadratic equation at^2 + bt + c = 0, where × is the 2D cross product P × Q = P_x Q_y - P_y Q_x:

     a = C×D + D×B + A×C + B×A
     b = M×C + D×M + B×M + M×A + C×A + B×D + 2(A×B)
     c = M×B + A×M + B×A

     I don't know if these are the same values diegovar calculated in the end, but they work.
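Since these coefficients are easy to get wrong, here is a small self-contained sketch of the full inverse mapping in C++ (`inverseBilinear` and the `Vec2` type are my own names, not from the thread). It assumes M lies inside the convex quad, with A mapping to (s,t) = (0,0), B to (1,0), C to (1,1) and D to (0,1), as in the formula above:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// 2D cross product P x Q = Px*Qy - Py*Qx.
static double cross(Vec2 p, Vec2 q) { return p.x * q.y - p.y * q.x; }

// Recover (s, t) such that M = sB + (1-s)A + st(A-D+C-B) - t(A-D).
// Assumes M lies inside the convex quad A, B, C, D.
Vec2 inverseBilinear(Vec2 M, Vec2 A, Vec2 B, Vec2 C, Vec2 D)
{
    // Coefficients of a*t^2 + b*t + c = 0 (corrected version from the post).
    const double a = cross(C, D) + cross(D, B) + cross(A, C) + cross(B, A);
    const double b = cross(M, C) + cross(D, M) + cross(B, M) + cross(M, A)
                   + cross(C, A) + cross(B, D) + 2.0 * cross(A, B);
    const double c = cross(M, B) + cross(A, M) + cross(B, A);

    double t;
    if (std::fabs(a) < 1e-12) {
        t = -c / b;                       // quad is a parallelogram: linear case
    } else {
        const double disc = std::sqrt(b * b - 4.0 * a * c);
        const double t0 = (-b + disc) / (2.0 * a);
        const double t1 = (-b - disc) / (2.0 * a);
        t = (t0 >= 0.0 && t0 <= 1.0) ? t0 : t1;  // pick the root in [0, 1]
    }

    // Recover s from M = A + s(B-A) + t(D-A) + st(A-B+C-D),
    // using whichever component gives the better-conditioned division.
    const Vec2 E{B.x - A.x, B.y - A.y};
    const Vec2 F{D.x - A.x, D.y - A.y};
    const Vec2 G{A.x - B.x + C.x - D.x, A.y - B.y + C.y - D.y};
    const double denomX = E.x + t * G.x;
    const double denomY = E.y + t * G.y;
    const double s = std::fabs(denomX) > std::fabs(denomY)
                   ? (M.x - A.x - t * F.x) / denomX
                   : (M.y - A.y - t * F.y) / denomY;
    return {s, t};
}
```

A quick round-trip check: for A=(0,0), B=(2,0), C=(3,2), D=(0,1), the forward formula maps (s,t) = (0.4, 0.7) to M = (1.08, 0.98), and the function above recovers (0.4, 0.7) from that M.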