
lallish (Member)
Content Count: 14
Community Reputation: 191 Neutral
Rank: Member
OpenGL Adaptive tessellation of the same object
lallish posted a topic in Graphics and GPU Programming
Hello! I'm wondering if it is possible to do adaptive tessellation of a single object. Not level-of-detail in the sense of switching vertex buffers, but having the GPU generate new vertices to increase the detail locally. For example, if I move closer to a big sphere, I want only the polygons closest to the camera to subdivide, not the ones on the far side of the sphere. 1. Is this possible, and if so, how is it technically done in OpenGL/WebGL? 2. What is the correct terminology for this technique? Thanks in advance! 
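For reference, in desktop OpenGL 4.x this is usually done with tessellation shaders, where the tessellation control shader picks a per-edge subdivision level based on distance to the camera (WebGL 1/2 has no tessellation stage, so there the subdivision would have to happen on the CPU). A minimal sketch of the distance-to-level mapping, in JavaScript with hypothetical names (`edgeTessLevel`, the `falloff` constant):

```javascript
// Sketch: map camera distance to a tessellation level per patch edge.
// In OpenGL 4.x this computation would live in the tessellation control
// shader (writing gl_TessLevelOuter/Inner); the names and the falloff
// constant here are illustrative assumptions, not a fixed recipe.
function edgeTessLevel(edgeMidpoint, cameraPos, maxLevel) {
  const dx = edgeMidpoint[0] - cameraPos[0];
  const dy = edgeMidpoint[1] - cameraPos[1];
  const dz = edgeMidpoint[2] - cameraPos[2];
  const dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
  // Halve the level every `falloff` units of distance; clamp to [1, maxLevel].
  const falloff = 10.0;
  const level = maxLevel / Math.pow(2.0, dist / falloff);
  return Math.max(1.0, Math.min(maxLevel, level));
}

console.log(edgeTessLevel([0, 0, 1], [0, 0, 0], 64));   // close edge: high level
console.log(edgeTessLevel([0, 0, 100], [0, 0, 0], 64)); // far edge: clamped to 1
```

Because each edge gets its own level, only the patches facing the camera subdivide, which is exactly the "local" refinement asked about; the usual term is adaptive (or dynamic) hardware tessellation.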
OpenGL Creating a mosaic texture, doing offset lookups in WebGL/OpenGL shader.
lallish posted a topic in Graphics and GPU Programming
I have a block of geometry that will need a mosaic (atlas) texture. Parts of the geometry will need to offset their UV coordinates to fetch the correct sub-texture within this mosaic. I'm reusing the geometry, so it will need to change texture and offsets a couple of times during runtime. How is this best solved? Send the texture through a uniform, with offset attributes on each vertex? Or change the UV coordinates on the geometry accordingly? Does it matter if the mosaic texture is 1D or 2D? Thanks 
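One common pattern (a sketch, not the only answer): keep the geometry's UVs fixed in [0, 1], and send a per-object offset and scale as uniforms, so the shader remaps with something like `atlasUV = localUV * uScale + uOffset` and nothing on the geometry ever changes. The function name and the square-atlas assumption below are mine:

```javascript
// Sketch: given a tile index into an N x N atlas, compute the UV offset
// and scale that remap a [0, 1] UV into that tile. These two values would
// be uploaded as uniforms each time the object switches sub-texture.
function atlasTransform(tileIndex, tilesPerSide) {
  const col = tileIndex % tilesPerSide;
  const row = Math.floor(tileIndex / tilesPerSide);
  const scale = 1.0 / tilesPerSide;
  return {
    offset: [col * scale, row * scale],
    scale: [scale, scale],
  };
}

// Tile 5 in a 4x4 atlas sits at column 1, row 1.
console.log(atlasTransform(5, 4)); // offset [0.25, 0.25], scale [0.25, 0.25]
```

Changing two uniforms is far cheaper than re-uploading a UV buffer, which is why the uniform route is usually preferred when the whole block switches tiles at once.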
Computing big numbers in shaders of WebGL/OpenGL
lallish replied to lallish's topic in Graphics and GPU Programming
Ah cool. Yeah, I worded it a bit badly. What I meant was: when we do this: float cosPhi = cos(index_mod_2pi * inc_mod_2pi); doesn't the result of the inner multiplication, index_mod_2pi * inc_mod_2pi, get stored temporarily in a 32-bit register before we compute cos() of it, hence losing precision? Or am I completely wrong? Because I seem to get much higher precision if I calculate index_mod_2pi * inc_mod_2pi on the CPU in 64-bit doubles and then send it in as a uniform or attribute. 
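That suspicion can be illustrated directly in JavaScript: `Math.fround` rounds a double to the nearest float32, which approximates what a 32-bit shader stores for an intermediate value before `cos()` ever sees it. The value below is the magnitude quoted elsewhere in this thread:

```javascript
// Sketch: a 32-bit shader float stores intermediates rounded to float32,
// while the CPU/JavaScript path keeps 64-bit doubles throughout.
// Math.fround simulates the float32 rounding step.
const phi = 5476389.695241543;  // the thread's example magnitude
const phi32 = Math.fround(phi); // what a float32 actually stores

console.log(phi - phi32);                    // ~0.2 radians lost at this magnitude
console.log(Math.cos(phi), Math.cos(phi32)); // so the two paths diverge
```

At values around 5.5 million, adjacent float32 values are 0.5 apart, so up to a quarter of a radian vanishes in the rounding, and the angle fed to `cos()` is already wrong before any trigonometry happens.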
Computing big numbers in shaders of WebGL/OpenGL
lallish replied to lallish's topic in Graphics and GPU Programming
Thanks for the replies guys, appreciate it. But... (A * B) mod C != (A mod C) * (B mod C); rather, (A * B) mod C = ((A mod C) * (B mod C)) mod C. But how is this compiled when you multiply two floats like this and the product needs more than 32 bits of precision? Can the hardware keep extra precision until the result is stored in a float? I still have precision errors. 
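The corrected identity is easy to verify with exact integers (doubles represent integers exactly up to 2^53, so every operation below is exact):

```javascript
// Sketch verifying the modular identity the thread settles on:
// (A * B) mod C == ((A mod C) * (B mod C)) mod C,
// while (A mod C) * (B mod C) alone generally is not equal to it.
const A = 123456789;
const B = 987654;
const C = 1000;

const direct = (A * B) % C;               // reduce the full product
const reduced = ((A % C) * (B % C)) % C;  // reduce the factors, then the product
const wrong = (A % C) * (B % C);          // missing the outer mod

console.log(direct, reduced, wrong);
```

Note this only demonstrates the algebra; it says nothing about float rounding, which is a separate problem: once a product is rounded to float32, no amount of modular bookkeeping recovers the lost digits.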
OpenGL Computing big numbers in shaders of WebGL/OpenGL
lallish posted a topic in Graphics and GPU Programming
Hello. I'm in need of computing numbers larger than a 32-bit float can precisely hold in my vertex shaders. float phi = index * inc; // big number, "5476389.695241543"-ish float cosPhi = cos(phi); phi is unnecessarily big: it contains a lot of full trigonometric loops, and the important decimal precision is lost because it's a 32-bit float. Is there a way to remove all the unnecessary 2*PI loops from phi before it gets stored in the float? 
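One way to do exactly that, sketched in JavaScript: perform the range reduction in 64-bit doubles on the CPU, so only the already-reduced angle (which fits comfortably in a float32) ever reaches the shader. The function name is mine:

```javascript
// Sketch: strip the unnecessary 2*PI loops in double precision on the CPU,
// before the angle is ever narrowed to a 32-bit shader float.
const TWO_PI = 2.0 * Math.PI;

function reduceMod2Pi(x) {
  // Remainder in [0, 2*PI), computed entirely in doubles.
  return x - TWO_PI * Math.floor(x / TWO_PI);
}

const inc = Math.PI * (3.0 - Math.sqrt(5.0)); // golden-angle increment
const index = 2500000;
const phi = index * inc;              // huge; a float32 would destroy the fraction
const phiReduced = reduceMod2Pi(phi); // small, safe to send as a float32 attribute

console.log(phi, phiReduced);
console.log(Math.cos(phi), Math.cos(phiReduced)); // same cosine, to double precision
```

The key point is ordering: the reduction must happen while the value is still a double. Reducing after the narrowing to float32 cannot help, because the fractional digits are already gone.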
JavaScript's “Math.sin/Math.cos” and WebGL's “sin/cos” give different results
lallish replied to lallish's topic in Graphics and GPU Programming
I have an icosahedron sphere, which is divided into almost equally big triangles, or hexagons with an occasional pentagon, so they create a tile system if you're inside the sphere, and each triangle has an index assigned to it. Why I've precomputed indices and not the actual points is a good question! It's a matter of reducing the size of the files: especially when a client will download the file from your web server, halving the transfer size might be a better choice than the small performance hit the client takes from computing the points themselves. I'm still experimenting. Right now, the indices in the interval [0, 2500000] take 18 MB. But as the indices grow higher, the points will remain at a consistent precision and size. So in the future, when I know more about JavaScript compression, I'll probably consider precomputing the points instead. 
JavaScript's “Math.sin/Math.cos” and WebGL's “sin/cos” give different results
lallish replied to lallish's topic in Graphics and GPU Programming
Yep, that's correct. Thanks! It would work, except I don't do it iteratively; I fetch precomputed indices from the surroundings and only generate the needed points. 
JavaScript's “Math.sin/Math.cos” and WebGL's “sin/cos” give different results
lallish replied to lallish's topic in Graphics and GPU Programming
I want to distribute points evenly over a sphere and I don't know another way of doing so; do you know a better formula? They are static, yep. What I will try now is to use Web Workers to calculate the points, to offload the main thread in JavaScript. The only thing I'm afraid of is how many attributes I can update at the same time without a major performance hit, as it is three floats now instead of just a single index float. 
JavaScript's “Math.sin/Math.cos” and WebGL's “sin/cos” give different results
lallish replied to lallish's topic in Graphics and GPU Programming
Thanks for that information, Cornstalks. Do you suggest calculating the fibPoint on the CPU instead and throwing the vec3 in as an attribute? Because I'll be dealing with much larger numbers later; index might go as high as 400 000 000. Yes, I could do that, with something like: float tphi = phi - twoPI * ceil(phi / twoPI - 1.0); But I would still need the precision of phi to do that, and of "yy" and "off" too, if I expand my index in the future. 
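The ceil-based formula quoted above is a valid mod-2π reduction; a quick check of it in double precision (the caveat in the post stands: this only helps while phi still has its full precision, e.g. on the CPU, not after it has been narrowed to a float32):

```javascript
// Sketch checking tphi = phi - twoPI * ceil(phi / twoPI - 1.0) in doubles.
// For positive phi it maps the angle into (0, 2*PI], so cos/sin are unchanged.
const TWO_PI = 2.0 * Math.PI;

function reduceCeil(phi) {
  return phi - TWO_PI * Math.ceil(phi / TWO_PI - 1.0);
}

const phi = 5476389.695241543; // the thread's example magnitude
const tphi = reduceCeil(phi);
console.log(tphi);                           // small angle in (0, 2*PI]
console.log(Math.cos(phi) - Math.cos(tphi)); // ~0 in double precision
```

Algebraically, for phi = 2π(n + f) with 0 < f ≤ 1 the ceil term evaluates to n, so tphi = 2πf, i.e. exactly the fractional turn; that's why the cosine is preserved.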
JavaScript's “Math.sin/Math.cos” and WebGL's “sin/cos” give different results
lallish posted a topic in Graphics and GPU Programming
I'm trying to generate Fibonacci-distributed points on a sphere. However, when I do so in the shader on the GPU, the calculation of sin/cos becomes very different from what I get with Math.sin/Math.cos on the CPU. Here is the part of the vertex shader with the fibonacci function:

attribute float index; // array with numbers between [1, 2500000]
float inc = 3.141592653589793238462643383279 * (3.0 - sqrt(5.0));
float off = 2.0 / 2500000.0;
float yy = index * off - 1.0 + (off / 2.0);
float rr = sqrt(1.0 - yy * yy);
float phi = index * inc; // big number, "5476389.695241543"-ish
vec3 fibPoint = vec3(cos(phi) * rr, yy, sin(phi) * rr); // calculates sin/cos wrong

This gives the wrong fibPoint vectors, with locations that look like this: http://i.imgur.com/Z1crisy.png

When I instead calculate Math.sin(phi) and Math.cos(phi) in JavaScript and pass these values into the shader as attributes, the code looks like this:

/* short version of the JavaScript code */
var y = index * off - 1 + (off / 2.0);
var r = Math.sqrt(1 - y * y);
var phi = index * inc;
var cosphi = Math.cos(phi);
var sinphi = Math.sin(phi);
/* pass cosphi/sinphi into the shader as attributes along with the index */

/* vertex shader */
attribute float index;
attribute float sinphi;
attribute float cosphi;
float inc = 3.141592653589793238462643383279 * (3.0 - sqrt(5.0));
float off = 2.0 / 2500000.0;
float yy = index * off - 1.0 + (off / 2.0);
float rr = sqrt(1.0 - yy * yy);
//float phi = index * inc;
vec3 fibPoint = vec3(cosphi * rr, yy, sinphi * rr);

This gives the correct fibPoint vectors; now the locations look like this: http://i.imgur.com/DeRoXkL.png

So my question is: why doesn't WebGL's sin/cos give the same or similar result as JavaScript's Math.sin/Math.cos? As far as I know, both take radians as input and both output values in [-1, 1]. Could it be that "phi" is too large a number, so some part gets truncated in the cos/sin? Thanks for reading. 
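The suspected cause (phi too large for float32) can be reproduced entirely in JavaScript: compute the Fibonacci-sphere point in 64-bit doubles (the Math.sin/Math.cos path) and again with intermediates rounded to float32 via Math.fround, which roughly approximates what a WebGL shader stores. The constants match the post; the function names and the comparison loop are mine:

```javascript
// Sketch: Fibonacci-sphere points in doubles vs. float32-rounded
// intermediates (Math.fround), approximating the shader's precision.
const N = 2500000;
const inc = Math.PI * (3.0 - Math.sqrt(5.0)); // golden-angle increment
const off = 2.0 / N;

function fibPoint64(index) {
  const y = index * off - 1.0 + off / 2.0;
  const r = Math.sqrt(1.0 - y * y);
  const phi = index * inc; // huge angle, but doubles keep its fraction
  return [Math.cos(phi) * r, y, Math.sin(phi) * r];
}

function fibPoint32(index) {
  const f = Math.fround;
  const inc32 = f(inc);
  const off32 = f(off);
  const y = f(f(index * off32) - 1.0 + f(off32 / 2.0));
  const r = f(Math.sqrt(f(1.0 - f(y * y))));
  const phi = f(index * inc32); // float32: most of phi's fraction is gone
  return [f(Math.cos(phi) * r), y, f(Math.sin(phi) * r)];
}

// Worst-case x/z error over a stretch of large indices.
let maxErr = 0;
for (let idx = 2000000; idx < 2000100; idx++) {
  const p64 = fibPoint64(idx);
  const p32 = fibPoint32(idx);
  maxErr = Math.max(maxErr, Math.abs(p64[0] - p32[0]), Math.abs(p64[2] - p32[2]));
}
console.log(maxErr); // well above float32 rounding noise: points visibly misplaced
```

At indices in the millions, phi lands where adjacent float32 values are 0.5 apart, so the angle is off by up to a quarter radian before sin/cos even run, which matches the scattered points in the first screenshot.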
Rendering a billion objects with Geoclipmaps or Geomipmaps
lallish replied to lallish's topic in Graphics and GPU Programming
The middle plane will be hard to make perfect, and how often you'll have to grab a new mosaic texture will be up for experimentation. The reason I fear grabbing the mosaic texture during real-time rendering is that each time we do it, we're moving around a big chunk of geometry, maybe as many as a thousand times to fill a whole cube map, prioritizing the far frustum of your current 60-degree FOV and then the textures out of view. And at each translation we're displacing the 4 roof vertices to new individual height values for the buildings, most likely coming from a huge height map. The height map may be compressed to save memory, but it's still a texture lookup. At very far distances there will be no height lookup, since the buildings will be over your head with their roof normals almost pointing at you. So we'll have a near frustum, a far frustum, and a static frustum at very far distance. Do you think it's doable? I'm no expert in estimating complexity on the parallel computing power of graphics cards. 
Rendering a billion objects with Geoclipmaps or Geomipmaps
lallish replied to lallish's topic in Graphics and GPU Programming
Alright, I understand. At the moment this is the only way I can wrap my head around solving this problem, so it may be the way I go. Since the buildings will be pointing up on the inside of a sphere and we'll be inside it, I will probably need to create these cubemaps before running in real time, since we need a cubemap for all the different perspectives, right? Let's say I divide this sphere into volume tiles, with several volumes going between the center of the sphere and the full radius. Each tile will individually generate a cubemap by moving a large building block around, taking smaller snapshot textures and mosaicing them into the larger cubemap texture. Then, instead of the bumpmap effect, maybe even interpolate between the surrounding cubemap tiles so the transition as the camera moves through tiles becomes smoother. Like this, but imagine it in 3D, with buildings pointing up on the inside of the sphere surface: You'd only load the geometry of the surface tile underneath the camera, for collision detection and whatnot. I don't think the cubemaps have to be extremely high resolution, since the buildings will be so far off anyway. Are we on the same idea, or am I completely crazy? 
Rendering a billion objects with Geoclipmaps or Geomipmaps
lallish replied to lallish's topic in Graphics and GPU Programming
It is becoming more evident that I need some kind of fog, but it would take away a bit of the wow feeling of this project. I've had billboards and mipmapping in mind; they are next on my list if I don't find another solution. But I don't think they would reduce the load to the level I want. Mipmapping is really good to have either way. My buildings will actually be extruded on the inside of a sphere, pointing inwards. So if you looked up (being inside the sphere), you would not see the geometry sticking out much, rather just a night sky of tiny stars in the far distance. So at very far distances I'll just do a texture lookup and not actually create the objects unless I fly there. Is that something you mean by curvature? All these buildings are one same vertex buffer. One draw call.

Quote: "Geometrical clipmaps (as intended here) don't do that. Geometrical clipmapping does not 'merge' anything but its different LOD levels, which are themselves a representation of a single, coherent 2D -> 1D function. If geometry is static and you can afford the extra memory budget, just pre-transform everything to a single batch. Nonetheless, 'far' rendering is serious trouble, especially when there must be a connection between the 'near' and 'far' geometry. I'm currently inclined towards render-to-texture effects to somehow 'splat' the 'far' rendering on the background, but I'm afraid doing that robustly would require some effort to work in the general case."

So the clipmapping would render the buildings as mountains instead, right? If I were to link different scenes together between the far plane and the background, I would need to create the background with some geometry. Don't you think I could take the same geometry that is underneath me, say a block of 80k buildings, move it forward, snap a texture, move it to the side, snap a texture, move it a bit further back, snap a texture, and so on, until I fill the horizon and link the textures together? And possibly create a bumpmap, so that if you moved the camera you would see the distant objects slightly change their shading. These textures wouldn't have to be full resolution, kind of like a depth mipmapping. Or is that too inefficient? Thanks a lot for replying so fast. 
OpenGL Rendering a billion objects with Geoclipmaps or Geomipmaps
lallish posted a topic in Graphics and GPU Programming
Hello! I need to render an enormous number of building-shaped objects, and I came across the terms geoclipmapping and geomipmapping for handling seemingly endless terrains. I was wondering if these would work for strict shapes like this: a building here only has 8 vertices. The limit on my graphics card so far has been 200k buildings, but I need hundreds of millions. My very optimistic hope was for geoclipmapping to kind of merge the vertices and triangles together at far distances, so it becomes possible to render at a decent fps. But I'm sceptical of my approach, as I've never tried it before. If you have any hints on how to begin, or better ideas, they would be much appreciated. The main language I'm using is WebGL/OpenGL. Thanks. Lalle
