IsItSharp

Member

  • Content Count: 93
  • Joined
  • Last visited

Community Reputation: 369 Neutral

About IsItSharp

  • Rank: Member

Personal Information

  • Interests: Programming

  1. The vertices are placed in the shape of a sphere (as you can see in the attached images, but I used GL_POINTS instead of GL_TRIANGLE_STRIP, so I don't get a closed surface). Is there any other way to get a closed surface with the above algorithm and OpenGL?
  2. I would like to create a sphere with variable tessellation (in both the horizontal and the vertical direction). This is my algorithm:

```cpp
const int amount = (sectorCount + 1) * (stackCount + 1);
M3DVector3f *verts = (M3DVector3f*)malloc(sizeof(float) * 3 * amount);
M3DVector4f *colors = (M3DVector4f*)malloc(sizeof(float) * 4 * amount);

float sectorStep = 2 * GL_PI / sectorCount;
float stackStep = GL_PI / stackCount;
float sectorAngle, stackAngle;
int vCount = 0;

for (int i = 0; i <= stackCount; ++i) {
    stackAngle = GL_PI / 2 - i * stackStep;   // starting from pi/2 down to -pi/2
    float xy = radius * cosf(stackAngle);     // r * cos(u)
    float z = radius * sinf(stackAngle);      // r * sin(u)

    // add (sectorCount+1) vertices per stack;
    // the first and last vertices have the same position and normal, but different tex coords
    for (int j = 0; j <= sectorCount; ++j) {
        sectorAngle = j * sectorStep;         // starting from 0 to 2pi
        // vertex position (x, y, z)
        float x = xy * cosf(sectorAngle);     // r * cos(u) * cos(v)
        float y = xy * sinf(sectorAngle);     // r * cos(u) * sin(v)
        m3dLoadVector3(verts[vCount], x, y, z);
        m3dLoadVector4(colors[vCount], 0, 1, 0, 1);
        vCount++;
    }
}
```

This code works fine in terms of correct vertex coordinates. Unfortunately, the vertices aren't produced in the right order for GL_TRIANGLE_STRIP (or any other kind of triangle placement) to give a smooth sphere; instead it looks like the attached images. I also have no clue how to change the algorithm to make GL_TRIANGLE_STRIP work. Any ideas?
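A common fix is to keep the vertex grid exactly as generated above and add an index buffer: for each stack, walk two adjacent rings and emit two triangles per quad (drawn with GL_TRIANGLES, or zig-zagged between the two rings for GL_TRIANGLE_STRIP). A sketch of the index generation only; the function name is mine, not from the post, and the index math is language-independent:

```python
def sphere_indices(stack_count, sector_count):
    """Triangle indices for a (stack_count+1) x (sector_count+1) vertex grid,
    generated ring by ring as in the post's vertex loop."""
    indices = []
    for i in range(stack_count):
        k1 = i * (sector_count + 1)        # first vertex of the current ring
        k2 = k1 + sector_count + 1         # first vertex of the next ring
        for j in range(sector_count):
            # two triangles per quad, skipping the degenerate ones at the poles
            if i != 0:
                indices += [k1 + j, k2 + j, k1 + j + 1]
            if i != stack_count - 1:
                indices += [k1 + j + 1, k2 + j, k2 + j + 1]
    return indices
```

With this index list the vertex order produced by the algorithm above stays untouched; you would draw it with glDrawElements(GL_TRIANGLES, ...).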
  3. I am very new to OpenGL, so I am using GLTools to learn the basics. To start, I would like to create a circle with adjustable tessellation (i.e. the number of triangles which form the circle). I have a toolbar with a tessellation variable:

```cpp
TwBar *bar;

void InitGUI() {
    bar = TwNewBar("TweakBar");
    TwDefine(" TweakBar size='200 400'");
    TwAddVarRW(bar, "Tesselation", TW_TYPE_UINT32, &tesselation, "");
}
```

Then I have these two functions and global variables which calculate the vertices for the circle:

```cpp
GLBatch circle;
unsigned int tesselation = 8;

void topAndBottomGeometry(M3DVector3f *vertices, M3DVector4f *colors, const int vertOffset) {
    m3dLoadVector3(vertices[0], 0, 0, vertOffset);
    m3dLoadVector4(colors[0], 0, 0, 1, 1);
    int i = 1;
    int color = 1;
    for (int step = 0; step <= tesselation; step++) {
        const double angle = step * ((2.0 * GL_PI) / tesselation);
        const double x = width * sin(angle);
        const double y = width * cos(angle);
        if ((color % 2) == 0) {
            m3dLoadVector4(colors[i], 0, 0, 1, 1);
        } else {
            m3dLoadVector4(colors[i], 0, 1, 0, 1);
        }
        m3dLoadVector3(vertices[i], x, y, vertOffset);
        color++;
        i++;
    }
}

void CreateGeometry() {
    const int amount = tesselation + 2;
    M3DVector3f *vertices = (M3DVector3f*)malloc(sizeof(float) * 3 * amount);
    M3DVector4f *colors = (M3DVector4f*)malloc(sizeof(float) * 4 * amount);
    topAndBottomGeometry(vertices, colors, 0);
    circle.Begin(GL_TRIANGLE_FAN, amount);
    circle.CopyVertexData3f(vertices);
    circle.CopyColorData4f(colors);
    circle.End();
    free(vertices);
    free(colors);
}
```

Now, everything works fine in my GUI if the tessellation value is below or equal to the initial value (in this case 8). But if the value is any higher than 8 (it doesn't matter which value), the GUI "freezes" and the circle stays the way it looked when tessellation was set to 8:

Tesselation at 8
Tesselation at 6
https://i.stack.imgur.com/2lzvi.png
  4. I don't understand how this could be helpful. I would apply the same algorithms I implemented in my code, so there would be no difference between my debugged ray and the one the program calculates, right? Maybe there is something wrong with the diffuse reflection; there is no way the lower sphere could be so bright on the bottom with the light being above it. https://picload.org/view/dapopdwr/test.png.html

Edit: I don't know why, but somehow removing these two lines of code:

```python
if(dot(rnd,nrm) < 0.0): #wrong hemisphere
    rnd = mult(rnd,-1.0);
```

fixed the lighting problem: https://picload.org/view/dapopwaw/test.png.html

But now I am wondering why the result lacks anti-aliasing even though I use the Halton sequence:

```c
const double fov = 160.0 * M_PI / 180.0;
const double zdir = 1.0 / tan(fov);
const double aspect = (double)h / (double)w;

const double jitterX = (halton(2, samples)) - 0.5;
const double jitterY = (halton(3, samples)) - 0.5;
const double xH = x + jitterX;
const double yH = y + jitterY;
const double xdir = (xH / (double)w) * 2.0 - 1.0;
const double ydir = ((yH / (double)h) * 2.0 - 1.0) * aspect;
const struct Point dir = norm((struct Point){.x = xdir, .y = ydir, .z = zdir});
return (struct Ray){.origin = c.pos, .dir = dir};
```

```c
double halton(int base, int ix) {
    double r = 0;
    double f = 1.0 / base;
    int i = ix;
    while (i > 0) {
        r += f * (i % base);
        f /= base;
        i = (int)floor((double)i / base);
    }
    return r;
}
```
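Jittering alone does not anti-alias: the jittered evaluations have to be averaged per pixel, and the Halton index must advance for every sample (if `samples` stays constant for a whole frame, every pixel gets the same offset and edges remain hard). A minimal sketch of the accumulation loop, with hypothetical names (`render_sample` stands in for tracing one camera ray and returning a scalar intensity):

```python
def halton(base, ix):
    """Radical inverse of ix in the given base, as in the post's C version."""
    r, f, i = 0.0, 1.0 / base, ix
    while i > 0:
        r += f * (i % base)
        f /= base
        i = i // base
    return r

def pixel_color(x, y, num_samples, render_sample):
    """Average num_samples jittered evaluations of one pixel."""
    acc = 0.0
    for s in range(1, num_samples + 1):   # the Halton index advances per sample
        jx = halton(2, s) - 0.5           # jitter in [-0.5, 0.5)
        jy = halton(3, s) - 0.5
        acc += render_sample(x + jx, y + jy)
    return acc / num_samples
```

If the renderer already accumulates samples and still shows no smoothing, the next thing to check is whether the accumulated buffer is actually divided by the sample count before display.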
  5. Yea, you are right. I switched width and height in my for-loops. That's kinda embarrassing. But I think there is still something wrong with the rendering, because this just looks wrong: https://picload.org/view/dapagoil/test.png.html

My intersection code (from https://en.wikipedia.org/wiki/Line–sphere_intersection):

```c
double intersectSphere(const struct Sphere s, const struct Ray r) {
    double a = dot(r.dir, sub(r.origin, s.center));
    double delta1 = a * a;
    double delta2 = length(sub(r.origin, s.center)) * length(sub(r.origin, s.center));
    double rsqr = s.radius * s.radius;
    double delta = delta1 - delta2 + rsqr;
    if (delta < 0)
        return -1.0;
    else {
        double left = -a;
        double d1 = left + sqrt(delta);
        double d2 = left - sqrt(delta);
        return fmin(d1, d2);
    }
}
```

I guess fmin(d1, d2) could be wrong?
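fmin(d1, d2) is indeed suspect: the smaller root can be negative when the ray origin is inside the sphere, or when the sphere lies behind the ray, and then the function reports an invalid hit distance. The usual rule is: take the smaller root if it is positive, otherwise the larger one, otherwise report a miss. A sketch of the same quadratic with that rule, using plain tuples for vectors (helper names are mine):

```python
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def intersect_sphere(center, radius, origin, direction):
    """Nearest positive hit distance along a normalized direction, or -1.0."""
    oc = sub(origin, center)
    a = dot(direction, oc)
    delta = a*a - dot(oc, oc) + radius*radius
    if delta < 0.0:
        return -1.0                      # no real roots: the ray misses
    d1 = -a - math.sqrt(delta)           # near root
    d2 = -a + math.sqrt(delta)           # far root
    if d1 > 1e-9:
        return d1                        # sphere is in front of the origin
    if d2 > 1e-9:
        return d2                        # origin is inside the sphere
    return -1.0                          # sphere is entirely behind the ray
```

The small epsilon also avoids re-hitting the surface a reflected ray just left.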
  6. Okay, I now noticed that this interlacing problem appears if WIDTH != HEIGHT.
  7. This is my generateCameraRay() function:

```c
const double fov = 105.0 * M_PI / 180.0;
const double zdir = 1.0 / tan(fov);
double aspect = (double)h / (double)w;

double xH = x + (halton(2, samples)) - 0.5;
double yH = y + (halton(3, samples)) - 0.5;
double xdir = (xH / (double)w) * 2.0 - 1.0;
double ydir = ((yH / (double)h) * 2.0 - 1.0) * aspect;
const struct Point dir = norm((struct Point){.x = xdir, .y = ydir, .z = zdir});
return (struct Ray){.origin = c.pos, .dir = dir};
```

I checked only the first hit, and even that shows this interlacing effect.
  8. Thanks for your answers! Does anyone have an idea about what is going wrong here? It's kinda hard to debug a pathtracer (for me at least): https://picload.org/view/dapdacrr/test.png.html

My trace function:

```c
struct RGB trace(const struct Ray ry, int tdepth) {
    if (tdepth == TRACEDEPTH)
        return (struct RGB){.r = 0, .g = 0, .b = 0};

    double hitDistance = 1e20f;
    struct Sphere hitObject = {};
    for (int i = 0; i < (sizeof(spheres) / sizeof(spheres[0])); i++) {
        double dist = intersectSphere(spheres[i], ry);
        if (dist > -1.0 && dist < hitDistance) {
            hitDistance = dist;
            hitObject = spheres[i];
        }
    }
    if (hitDistance == 1e20f)
        return (struct RGB){.r = 0, .g = 0, .b = 0};
    if (hitObject.isEmitter)
        return hitObject.color;

    const struct Point hitPoint = add(ry.origin, mult(ry.dir, hitDistance * 0.998));
    const struct Point nrml = sphereNormal(hitObject, hitPoint);
    struct Point rnd = diffuse();
    /* Wrong hemisphere */
    if (dot(rnd, nrml) < 0.0)
        rnd = mult(rnd, -1.0);

    const struct Ray reflectionRay = (struct Ray){.origin = hitPoint, .dir = norm(rnd)};
    struct RGB returnColor = trace(reflectionRay, tdepth + 1);

    int r = hitObject.color.r * returnColor.r;
    int g = hitObject.color.g * returnColor.g;
    int b = hitObject.color.b * returnColor.b;
    r /= 255.0;
    g /= 255.0;
    b /= 255.0;
    return (struct RGB){.r = r, .g = g, .b = b};
}
```
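One detail worth checking in that trace function: r, g and b are declared int, so `r /= 255.0` truncates at every bounce and small products round toward black. Keeping colors as floats in [0, 1] during tracing and only quantizing to 0-255 at output avoids that entirely. A sketch (colors as plain tuples; the name `modulate` is mine):

```python
def modulate(surface, incoming):
    """Multiply two colors stored as floats in [0, 1]; nothing truncates."""
    return tuple(s * i for s, i in zip(surface, incoming))

# For comparison, the 8-bit integer version loses a fraction each bounce:
# in C, 'int r = 100 * 100; r /= 255.0;' leaves r == 39, not 39.2.
```

Over a recursion depth of several bounces those truncations compound, which can darken the image noticeably.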
  9. Thanks for your reply! I don't think that approach checks whether the point is actually outside the intersected sphere? What about u and v? Wikipedia told me they are calculated like this:

```c
double u = p.x / length(p);
double v = p.y / length(p);
```

But what is the intersected sphere used for?
  10. For my simple pathtracer written in C I need a function which calculates a random, diffuse reflection vector (for a diffuse / Lambert material). This is my function header:

```c
struct Ray diffuse(const struct Sphere s, const struct Point hitPoint)
```

where s is the intersected sphere and hitPoint is the point where a ray intersected the sphere. Of course there are several projects with their source code available on GitHub, but most of the time they do far more than I can understand at once. I don't want to just copy their code (I want to understand it myself), and I did not get any useful results from Google. I don't know where to start.
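A common starting point is uniform hemisphere sampling: draw a random unit vector and flip it into the hemisphere around the surface normal, which for a sphere is (hitPoint - s.center) normalized. A sketch in Python (the names are mine; rejection sampling keeps the distribution uniform over directions):

```python
import math, random

def random_unit_vector():
    """Uniform random direction: rejection-sample the unit ball, then normalize."""
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        z = random.uniform(-1.0, 1.0)
        n2 = x*x + y*y + z*z
        if 1e-6 < n2 <= 1.0:               # inside the ball, not too close to zero
            n = math.sqrt(n2)
            return (x/n, y/n, z/n)

def diffuse_direction(normal):
    """Uniform random direction in the hemisphere around `normal` (unit length)."""
    d = random_unit_vector()
    if d[0]*normal[0] + d[1]*normal[1] + d[2]*normal[2] < 0.0:
        d = (-d[0], -d[1], -d[2])          # flip into the correct hemisphere
    return d
```

With this you would weight each bounce by cos(theta) = dot(d, normal) in the rendering equation; cosine-weighted sampling, which folds that factor into the distribution and reduces noise, is a natural next step.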
  11. I implemented a new class called "Sampler" which generates either random values, Halton (2,3) values, or Sobol values and stores them in an array before rendering. I tried each one and compared the results at 10 samples, and I am kind of disappointed by Halton and Sobol; they both look horrible. Why is that?

This is my computeSample(x,y) function:

```python
def computeSample(x, y):
    samplePoint = sampler.getSamplePoint(x, y)
    jitterX = samplePoint[0]
    jitterY = samplePoint[1]
    x += jitterX
    y += jitterY
    xdir = (x / width) * 2.0 - 1.0
    ydir = ((y / height) * 2.0 - 1.0) * aspect
    direction = Point3D(xdir, ydir, zdir).normalize()
    ray = Ray(camera, direction)
    return trace(ray, 1)
```

And this is my "Sampler":

```python
class Sampler:
    def __init__(self, type, width, height):
        self.type = type
        self.width = width
        self.height = height
        self.samplePoints = []
        self.counter = 0
        for x in range(width):
            for y in range(height):
                if(self.type is SampleType.Random):
                    self.samplePoints.append([random(), random()])
                elif(self.type is SampleType.Halton):
                    self.samplePoints.append([Halton(2, self.counter), Halton(3, self.counter)])
                elif(self.type is SampleType.Sobol):
                    sobolValue = i4_sobol(2, self.counter)
                    self.samplePoints.append([sobolValue[0][0], sobolValue[0][1]])
                elif(self.type is SampleType.NoneS):
                    self.samplePoints.append([0, 0])
                self.counter += 1

    def getSamplePoint(self, x, y):
        return self.samplePoints[x * self.height + y]
```
  12. Thanks, the floor function fixed it. I am wondering how I could use the Halton sequence to sample a pixel. Would it be better to store the precalculated values in an array and choose the value according to the current sample / pixel? For example, if I am in the 2nd sample, would I choose the 2nd value of the base-2 and base-3 Halton series?
  13. I am trying to implement the Halton sequence with this simple code:

```python
def Halton(base, ix):
    r = 0
    f = 1.0 / base
    i = ix
    while(i > 0):
        r += f * (i % base)
        f /= base
        i /= base
    return r
```

I thought the output values were limited to [0..1], but sometimes I get values like "1.01953125" or "1.1861979166666667". Why is that, and how can I limit the Halton sequence to the given interval?
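The culprit is `i /= base`: in Python 3 that is float division, so i never follows the integer digit pattern the radical inverse needs, and the fractional remainders push r past 1. Integer (floor) division restores the invariant that r is a sum of digit/base^k terms in [0, 1):

```python
def halton(base, ix):
    """Radical inverse of ix in the given base; all values lie in [0, 1)."""
    r = 0.0
    f = 1.0 / base
    i = ix
    while i > 0:
        r += f * (i % base)   # next digit of ix in this base
        f /= base
        i //= base            # integer division, not '/'
    return r
```

The same code under Python 2 happened to work because `/` on two ints was already floor division there.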
  14. Hi, I would like to implement a glass material in my pathtracer. I tried to adapt the code from "Physically Based Rendering: From Theory to Implementation", but I am not sure if I did it right. My current approach looks like this: the index of refraction outside the glass is 1.0, inside it is 1.3. First I convert the direction vector to an angle called "cosThetaI":

```python
def getAngle(direction, normal):
    dotPr = direction.dot(normal)
    angle = math.acos(dotPr / (direction.length() * normal.length()))
    return angle
```

If cosThetaI is greater than 0, I switch the refractive indices:

```python
etaI = 1.0
etaT = 1.3
cosThetaI = Point3D.getAngle(ray.direction, normal)
cosThetaI = Point3D.clamp(cosThetaI, -1, 1)
if(cosThetaI > 0.0):
    etaI = 1.3
    etaT = 1.0
cosThetaT = Point3D.snellsLaw(cosThetaI, etaI, etaT)
Rparl = ((etaT * cosThetaI) - (etaI * cosThetaT)) / ((etaT * cosThetaI) + (etaI * cosThetaT))
Rperp = ((etaI * cosThetaI) - (etaT * cosThetaT)) / ((etaI * cosThetaI) + (etaT * cosThetaT))
dir = Point3D.getPoint((Rparl * Rparl + Rperp * Rperp) / 2)
reflectionRay = Ray(hitPoint, dir.normalize())
```

```python
def getPoint(angle):
    return Point3D(math.cos(angle), math.sin(angle), 1)

def snellsLaw(cosThetaI, etaI, etaT):
    sinThetaI = math.sqrt(max(0, 1 - cosThetaI * cosThetaI))
    sinThetaT = etaI / etaT * sinThetaI
    cosThetaT = math.sqrt(max(0, 1 - sinThetaT * sinThetaT))
    return cosThetaT
```

I am not quite sure how to convert an angle into a 3D vector, though...
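You don't actually need to convert an angle into a vector: the refracted direction can be built directly from the incident direction, the normal, and the eta ratio, which is the standard vector form of Snell's law (the Fresnel terms Rparl/Rperp only decide how much energy goes to reflection versus refraction, not the direction). A sketch with plain tuples, assuming `d` and `n` are both normalized (names are mine):

```python
import math

def refract(d, n, eta_i, eta_t):
    """Refracted direction for incident d and surface normal n (unit length).
    Returns None on total internal reflection."""
    cos_i = -(d[0]*n[0] + d[1]*n[1] + d[2]*n[2])
    if cos_i < 0.0:                      # ray is leaving the medium:
        n = (-n[0], -n[1], -n[2])        # flip the normal and
        cos_i = -cos_i
        eta_i, eta_t = eta_t, eta_i      # swap the indices of refraction
    eta = eta_i / eta_t
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)   # Snell's law, squared
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    return (eta*d[0] + k*n[0], eta*d[1] + k*n[1], eta*d[2] + k*n[2])
```

A typical glass material then uses the Fresnel value from the post's code as the probability of spawning the reflection ray instead of this refraction ray.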
  15. Hi, I am currently doing ray-circle intersection, and I want to reflect the ray off a circle with a random vector when they intersect. This is the picture I have in my mind: the orange point is the intersection point, the blue line is the normal from the center of the circle to the intersection point, and the red dots are possible random points. How could I accomplish this?
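In 2D this is the hemisphere trick in one dimension fewer: the normal at the orange point is (hit - center) normalized, and a random bounce direction is any random unit vector flipped so it lies on the normal's side of the tangent line, which is exactly where the red dots sit. A sketch (function name is mine):

```python
import math, random

def reflect_diffuse_2d(hit, center):
    """Random bounce direction on the outward side of a circle at point `hit`."""
    nx, ny = hit[0] - center[0], hit[1] - center[1]
    n_len = math.hypot(nx, ny)
    nx, ny = nx / n_len, ny / n_len            # outward unit normal
    theta = random.uniform(0.0, 2.0 * math.pi)
    dx, dy = math.cos(theta), math.sin(theta)  # random unit vector
    if dx*nx + dy*ny < 0.0:                    # wrong side of the tangent: flip
        dx, dy = -dx, -dy
    return (dx, dy)
```

If a mirror-like component is wanted later, the specular direction d - 2(d·n)n can be blended with this random one.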