ArKano22

Members
  • Content count

    305
  • Joined

  • Last visited

Community Reputation

650 Good

About ArKano22

  • Rank
    Member
  1. I've never used voxel technology, but I've been thinking about UD and an efficient way to store voxels, and I've come up with this. It might be nonsense, but oh well...

    Suppose we are drawing an infinite array of voxels, all in the same direction: the positive x axis, for example. We can use one bit to tell whether the next voxel is at x+1 or not. If not, it must be at y+1, y-1, z+1 or z-1, so we use 2 bits for that (2^2 = 4). Looking at a model, it seems fairly common to have long runs of voxels in the same direction: walls, trees, bricks, etc. We can use n bits to say how many voxels ahead of us continue in the same direction, and store no information at all for them:
    - If a voxel starts with 0: n bits represent up to 2^n successive voxels after this one (n+1 bits total).
    - If a voxel starts with 1: 2 bits indicate a new direction (3 bits total).

    Using n = 4: in the worst case (every voxel changes direction), we can store 1 million voxels in 10^6*3/8/1024 = 366 KB. In the best case (every voxel has up to 2^n = 16 neighbours facing the same direction), we can store 1 million voxels in just 38 KB. If we know the best value of n beforehand, it could be even lower. It would be possible to preprocess a surface and find an optimal representation of it in terms of n (bits used for the average run length) and the path followed through the voxels. Color info could be stored in a similar way, adding bits to indicate a relative displacement over a palette.

    Drawbacks: n and the path must be chosen very carefully or you might end up wasting space like crazy. The "worst case" above is not the real worst case, which is runs of just two voxels, where half the voxels waste (n+1) bits just to say that the next one needs no info. Traversing this structure to render it is not efficient (load it into an octree at runtime?). And lots of other things. What do you think?

    EDIT: On second thought, this is overcomplicated (lol). Just use one bit per voxel to indicate whether the next one changes direction or not: 3 bits at worst and 1 at best per voxel, averaging 2 bits per voxel, so 1 million voxels in 244 KB.
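    A minimal sketch of the run-length scheme above, in Python. The exact bit layout (runs stored as length minus one, two-bit turn codes for the four perpendicular directions) is my own assumption, not part of the original idea:

```python
# Hypothetical encoding of a voxel path as described above:
#   '0' + n bits -> a run of 1..2**n voxels continuing in the current direction
#   '1' + 2 bits -> a direction change into one of 4 perpendicular directions
TURNS = {'+y': '00', '-y': '01', '+z': '10', '-z': '11'}
TURN_BITS = {v: k for k, v in TURNS.items()}

def encode(moves, n=4):
    """Encode a list of moves ('F' for forward, or a turn key) as a bit string."""
    bits = []
    i = 0
    while i < len(moves):
        if moves[i] == 'F':
            run = 1
            while i + run < len(moves) and moves[i + run] == 'F' and run < 2 ** n:
                run += 1
            bits.append('0' + format(run - 1, f'0{n}b'))  # run length, minus one
            i += run
        else:
            bits.append('1' + TURNS[moves[i]])
            i += 1
    return ''.join(bits)

def decode(bits, n=4):
    """Inverse of encode(): recover the move list from the bit string."""
    moves, i = [], 0
    while i < len(bits):
        if bits[i] == '0':
            run = int(bits[i + 1:i + 1 + n], 2) + 1
            moves.extend(['F'] * run)
            i += 1 + n
        else:
            moves.append(TURN_BITS[bits[i + 1:i + 3]])
            i += 3
    return moves
```

    With n = 4, an all-straight path of 1,600 voxels encodes into 500 bits, which is consistent with the roughly 38 KB per million voxels figure above.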
  2. This is total WIN. Is the framerate shown in the video the actual framerate? If so, congrats on doing an unbelievable job. If not, congrats on doing a believable but still amazing one :).
  3. GLSL Droste

    Here's the code. Pass your UV <0,1> coordinates to the droste() function and it will return a color. There are three adjustable parameters: the scale factor between self-similar images, the number of branches the spiral has (0.0 for a discrete Droste effect, >0.0 for continuous), and the animation speed. Note that you must pass "time" as a float uniform for the animation (infinite zoom) to work. This version has rectangular self-similarity, but making a circular one should not be difficult. Hope someone finds this useful.

    uniform sampler2D Texture0;
    uniform float time;

    const float TWO_PI = 3.141592*2.0;

    //ADJUSTABLE PARAMETERS:
    const float branches = 1.0;
    const float scale = 0.25;
    const float speed = 2.0;

    //Complex math:
    vec2 complexExp(in vec2 z){
        return vec2(exp(z.x)*cos(z.y), exp(z.x)*sin(z.y));
    }
    vec2 complexLog(in vec2 z){
        return vec2(log(length(z)), atan(z.y, z.x));
    }
    vec2 complexMult(in vec2 a, in vec2 b){
        return vec2(a.x*b.x - a.y*b.y, a.x*b.y + a.y*b.x);
    }
    float complexMag(in vec2 z){
        return float(pow(length(z), 2.0));
    }
    vec2 complexReciprocal(in vec2 z){
        return vec2(z.x / complexMag(z), -z.y / complexMag(z));
    }
    vec2 complexDiv(in vec2 a, in vec2 b){
        return complexMult(a, complexReciprocal(b));
    }
    vec2 complexPower(in vec2 a, in vec2 b){
        return complexExp( complexMult(b, complexLog(a)) );
    }

    //Misc functions:
    float nearestPower(in float a, in float base){
        return pow(base, ceil( log(abs(a))/log(base) ) - 1.0);
    }
    float map(float value, float istart, float istop, float ostart, float ostop){
        return ostart + (ostop - ostart) * ((value - istart) / (istop - istart));
    }

    vec4 droste(in vec2 co){
        //SHIFT AND SCALE COORDINATES TO <-1,1>
        vec2 z = (co - 0.5)*2.0;

        //ESCHER GRID TRANSFORM:
        float factor = pow(1.0/scale, branches);
        z = complexPower(z, complexDiv(vec2(log(factor), TWO_PI), vec2(0.0, TWO_PI)));

        //RECTANGULAR DROSTE EFFECT:
        z *= 1.0 + fract(time*speed)*(scale - 1.0);
        float npower = max(nearestPower(z.x, scale), nearestPower(z.y, scale));
        z.x = map(z.x, -npower, npower, -1.0, 1.0);
        z.y = map(z.y, -npower, npower, -1.0, 1.0);

        //UNDO SHIFT AND SCALE:
        z = z*0.5 + 0.5;
        return texture2D(Texture0, z);
    }
  4. GLSL Droste

    Thanks guys, I got it working. And most importantly, I understand why it works :). The Droste math I was looking at is not really the Droste effect itself, but a transform that makes the Droste (feedback) effect look like a spiral. I got the transform right thanks to Pragma and then coded a quick-and-dirty feedback effect to test it on. Once I optimize the code a little I'll post it here in case someone needs it. Here's a pic:
  5. GLSL Droste

    Quote: Original post by Pragma
    If you know the forward transformation, then there are two ways to proceed: 1. Get out a pen and paper, and calculate the inverse transformation. This doesn't look like it will be too difficult, since it's just made up of rotation, scaling, log and exp, which are all easily invertible. Then you can do the whole effect in a pixel shader. Hope that is helpful. Looks like a cool effect, now I'm tempted to try it...

    Well, I did the inverse transform (first exp(z), then the transform, then log(z)), from this article (page 4 diagram): http://www.ams.org/notices/200304/fea-escher.pdf

    If I only do the complex exp and then the log, the image stays the same, which means that transforming forth and back works. However, to do the rotation transform I think I should multiply z by 1+1i (45 degrees). If I do that alone, it works, but multiplying after complexExp() only translates the image, and when untransforming with complexLog() I get my original image back, but offset.

    vec2 complexExp(in vec2 z){
        return vec2(exp(z.x)*cos(z.y), exp(z.x)*sin(z.y));
    }
    vec2 complexLog(in vec2 z){
        return vec2(log(length(z)), atan(z.y, z.x));
    }
    vec2 complexMult(in vec2 a, in vec2 b){
        return vec2(a.x*b.x - a.y*b.y, a.x*b.y + a.y*b.x);
    }

    vec4 droste(in vec2 co){
        vec2 z = (co - 0.5)*2.0;
        z = complexExp(z);
        z = complexMult(z, vec2(1.0, 1.0));
        z = complexLog(z);
        z = z*0.5 + 0.5;
        vec4 final = texture2D(Texture0, fract(z));
        return final;
    }

    Is there something obvious that I'm missing here? I think the transforms are correct and it should work, but...
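    The offset is exactly what the math predicts: multiplying after the exponential turns into an additive constant under the log, since log(w * exp(z)) = z + log(w). A quick check of the identity with Python's built-in complex numbers (just the math, not the shader):

```python
# log(w * exp(z)) = z + log(w): applying the multiplication AFTER exp is a
# pure translation by log(w) once you take the log again, which is why the
# image comes back merely offset. To actually rotate the Escher grid, the
# multiplication has to happen in the log domain (between log and exp).
import cmath

def log_after_mult(z, w):
    """Mirror of the shader's exp -> mult -> log chain."""
    return cmath.log(w * cmath.exp(z))

z = 0.3 + 0.7j       # arbitrary sample point (my choice, within the branch cut)
w = 1.0 + 1.0j       # the 45-degree rotation (and sqrt(2) scale) from the post
offset_result = log_after_mult(z, w)   # equals z + log(w): a translation
```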
  6. GLSL Droste

    I'm trying to achieve a Droste effect in GLSL, like this: http://www.josleys.com/article_show.php?id=82 (scroll down past the explanation to see some beautiful examples). However, the math explained there isn't valid as-is, since it assumes you can write to arbitrary positions in the image. I've been trying alternative transforms and have managed to create a pseudo-Droste using an Archimedean spiral instead of a logarithmic one (convert Cartesian to polar, adjust the radius, convert back to Cartesian). Not surprisingly, it doesn't look good. Can anyone point me to more Droste effect material? Are there GLSL Droste effects out there that I can take a look at?
  7. If you need to render artifact-free translucent objects with lots of triangles, the easiest way is to use an order-independent blending mode, e.g. additive blending. It's not true transparency, but done correctly it can look like it.
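    A toy numeric sketch (made-up color values) of why additive blending is order independent while classic alpha "over" blending is not: addition commutes, the "over" operator does not, which is what forces back-to-front sorting.

```python
def additive(dst, src):
    """glBlendFunc(GL_ONE, GL_ONE)-style blending, clamped to 1.0."""
    return min(dst + src, 1.0)

def alpha_over(dst, src, alpha=0.5):
    """glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)-style blending."""
    return src * alpha + dst * (1.0 - alpha)

def blend_all(layers, op, dst=0.0):
    """Composite a list of layer colors onto dst, in order."""
    for src in layers:
        dst = op(dst, src)
    return dst

layers = [0.25, 0.5, 0.125]   # three hypothetical translucent layers
```

    Blending `layers` forwards and backwards with `additive` gives the same result; with `alpha_over` the two orders disagree.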
  8. http://beta.codeproject.com/KB/graphics/Basic_Illumination_Model.aspx?msg=2167060 On that page you can see the result of ambient+diffuse+specular, and each term isolated from the rest. First of all, they are not uniformly distributed across the surface (only ambient is), so while there are points where they sum to 1.0, most of the time they do not. By the way, I don't know what you mean by "pleasant dark spots and edges" produced by the diffuse term, but if your lighting is correctly implemented, it should not produce dark spots or edges on the model...
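    A numeric sketch of that point using a textbook Phong model (the light/view directions and coefficients here are made up for illustration): ambient is constant, but diffuse and specular depend on the normal, so the three terms only sum near 1.0 at particular points.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(l, n):
    """Reflection of light direction l about normal n: 2(n.l)n - l."""
    d = dot(l, n)
    return tuple(2.0 * d * ni - li for ni, li in zip(n, l))

def phong(normal, light=(0, 0, 1), view=(0, 0, 1),
          ka=0.1, kd=0.6, ks=0.3, shininess=16):
    """Ambient + diffuse + specular for one surface point (hypothetical setup)."""
    n, l, v = normalize(normal), normalize(light), normalize(view)
    diffuse = kd * max(dot(n, l), 0.0)
    specular = ks * max(dot(reflect(l, n), v), 0.0) ** shininess
    return ka + diffuse + specular

facing = phong((0, 0, 1))    # normal pointing straight at the light: sums to ~1.0
tilted = phong((1, 0, 1))    # tilted normal: noticeably less than 1.0
```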
  9. Generating a Depth Map

    You cannot bake that information into a texture, since it depends on the light direction. Your best bet is to render the scene from the light's point of view with front-face culling, grab the z-buffer, then render again with back-face culling, grab the new z-buffer, and subtract the two. The result is the distance traveled by the light inside the object, which is what you're after.
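    A minimal sketch of the two-pass idea with made-up depth buffers (assuming linear depth, where the difference is directly a distance): pass one keeps back faces, pass two keeps front faces, and the per-pixel thickness is their difference.

```python
FAR = 1.0  # depth-clear value; pixels the object doesn't cover stay at FAR

back_depth  = [0.70, 0.80, FAR]   # hypothetical z-buffer, back faces kept
front_depth = [0.40, 0.65, FAR]   # hypothetical z-buffer, front faces kept

def thickness(back, front, far=FAR):
    """Per-pixel distance the light travels inside the object.

    Pixels still at the clear value were never covered, so thickness is 0.
    """
    return [0.0 if f >= far else b - f for b, f in zip(back, front)]

t = thickness(back_depth, front_depth)   # [~0.30, ~0.15, 0.0]
```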
  10. DX11 nurbs vs subdivision surfaces

    I vote for subdivision surfaces. I use face extrusion intensively while modelling, so I suppose subdivision fits my style. It's just a matter of taste, but I think most people would prefer subdivision to patches.
  11. realtime ao

    Yep, it looks wrong. However, it looks pretty fast for what it is; keep refining it!
  12. 2D Lighting in Java (Android)

    Casting rays is probably going to be slow as hell. I'm not an Android developer and I know nothing about SurfaceViews, but chances are you can implement this: http://www.gamedev.net/reference/programming/features/2dsoftshadow/ without having to use the OpenGL API directly.
  13. MultiAPI Renderer

    Well, I don't see anything wrong with having an abstract base renderer and then extending it. Even if you end up using HLSL or GLSL, that should not affect the renderer interface exposed to the rest of the engine at all. In my engine I've done it exactly as you describe, keatoon. I'm using only OpenGL at the moment, but if someday I decide to give DirectX a try, having all rendering code in one class is nice because I won't need to hunt down gl* calls all over the entire engine. If you want to use a new API, you won't have to change all your code, as long as you can reimplement your renderer with the new API. There might be better ways to do it, but abstracting the renderer is a clear future-proof win for me.
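    A toy sketch of that abstraction (the class and method names here are invented for illustration): the engine talks only to the base interface, and each graphics API lives in exactly one subclass.

```python
from abc import ABC, abstractmethod

class Renderer(ABC):
    """Abstract base renderer: the only rendering interface the engine sees."""
    @abstractmethod
    def draw_mesh(self, mesh): ...
    @abstractmethod
    def set_texture(self, slot, texture): ...

class GLRenderer(Renderer):
    """All gl* calls would live here; nothing else in the engine touches them.
    Strings stand in for real API calls in this sketch."""
    def draw_mesh(self, mesh):
        return f"GL draw {mesh}"
    def set_texture(self, slot, texture):
        return f"GL bind {texture} to unit {slot}"

# A DXRenderer(Renderer) could later be added without touching engine code.
```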
  14. packet fragmentation

    Quote: Original post by hplus0603
    Quote: Original post by ArKano22
    Ok. So, if a fragment is considered a new packet, and there is no hierarchy, each fragment has offset data relative to the original packet, not to the last time it was fragmented, right?
    That's correct!

    Thank you both very much, now it's crystal clear :)
  15. packet fragmentation

    Quote: Original post by Antheus
    Quote: Original post by ArKano22
    Ok. So the router does not glue anything back together; that's B's responsibility.
    It can't - packets travel along arbitrary routes, so the same router isn't guaranteed to receive all fragments.
    Quote: I must assume that a fragment can get fragmented again to pass through a net with a smaller MTU? (you'd have a fragment tree instead of a fragment list...)
    IP packets get fragmented. These fragments are IP packets again. There is no hierarchy. Fragments are marked as indices into an array. If all elements of the array arrive at the destination, the original packet can be reassembled.

    Ok. So, if a fragment is considered a new packet, and there is no hierarchy, each fragment has offset data relative to the original packet, not to the last time it was fragmented, right? If P is the original packet:

    P  --subnet A--> P1, P2
    P1 --subnet B--> P11, P12
    P2 --subnet B--> P2 (no fragmentation)

    Gluing it back together: P = P11 + P12 + P2. Is this right?
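    A sketch of that exchange in Python. Because every fragment's offset is relative to the original datagram, re-fragmenting needs no hierarchy: the receiver just places each payload at its offset. (Real IPv4 offsets are in 8-byte units and fragments carry a more-fragments flag; this sketch keeps only the offset idea.)

```python
def fragment(data, offset, mtu):
    """Split (data, offset) into fragments of at most mtu bytes each.
    Offsets stay relative to the ORIGINAL packet, even when re-fragmenting."""
    return [(offset + i, data[i:i + mtu]) for i in range(0, len(data), mtu)]

def reassemble(fragments, total_len):
    """Rebuild the original payload; fragment arrival order doesn't matter."""
    buf = bytearray(total_len)
    for offset, data in fragments:
        buf[offset:offset + len(data)] = data
    return bytes(buf)

payload = b"ABCDEFGHIJ"                # original packet P
p1, p2 = fragment(payload, 0, 5)       # subnet A, MTU 5: P -> P1, P2
p11, p12 = fragment(p1[1], p1[0], 3)   # subnet B, MTU 3: P1 -> P11, P12
# P2 fits and travels unchanged; reassembly works in any arrival order:
result = reassemble([p12, p2, p11], len(payload))
```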