# OrangyTang

Member

6170

1298 Excellent

• Rank
Legend

## Personal Information

• Role
Technical Director
• Interests
Art
Design
Programming

@OrangyTang
1. ## Perspective correct barycentric coordinates

Oh, I also found this presentation, which superficially makes more sense, but again the actual formulas escape me, probably because it seems to introduce new variables / letters without actually defining them. >_< If anyone can give me some hints as to how to understand the 'edge vectors' in it then that would be helpful. Thanks.
2. ## Perspective correct barycentric coordinates

As title, I'm trying to do perspective correct texturing in a software renderer. So far I have non-perspective-correct texturing working by looping over the bounding box of a triangle, calculating the barycentric coords, then either rejecting the pixel or using the coords to do texturing. Simple (and slow) but it works, and I'm trying not to worry *too* much about performance right now.

In particular, I'm trying to implement this as described by Erik Rufelt here - that is, some magic barycentric coord formula, which I can't quite find/grasp, that generates perspective-correct interpolants, instead of going the long way and correcting them manually for every value to be interpolated. Maxest's thesis lists pseudocode for this, but I can't figure out what the distance() pseudo-instruction does, nor how it relates to my existing barycentric coord calculation, as it appears to use a completely different approach. So far, I have this:

```java
public void rasteriseTriangle(Vector3f ndc0, Colour4f c0, Vector2f uv0,
                              Vector3f ndc1, Colour4f c1, Vector2f uv1,
                              Vector3f ndc2, Colour4f c2, Vector2f uv2)
{
    Vector2f p0 = ndcToScreen(ndc0);
    Vector2f p1 = ndcToScreen(ndc1);
    Vector2f p2 = ndcToScreen(ndc2);

    // First find screen rect (clamped to screen bounds)
    final int screenXMin = Math.max((int)Math.floor(Math.min(Math.min(p0.x, p1.x), p2.x)), 0);
    final int screenYMin = Math.max((int)Math.floor(Math.min(Math.min(p0.y, p1.y), p2.y)), 0);
    final int screenXMax = Math.min((int)Math.ceil(Math.max(Math.max(p0.x, p1.x), p2.x)), backbuffer.getWidth()-1);
    final int screenYMax = Math.min((int)Math.ceil(Math.max(Math.max(p0.y, p1.y), p2.y)), backbuffer.getHeight()-1);

    final float detT = (p1.y - p2.y) * (p0.x - p2.x) + (p2.x - p1.x) * (p0.y - p2.y);

    // If detT == 0 the triangle is degenerate, so skip it
    if (detT != 0.0f)
    {
        // Loop over screen bounds
        for (int y = screenYMin; y <= screenYMax; y++)
        {
            for (int x = screenXMin; x <= screenXMax; x++)
            {
                // l1 = ((y2 - y3)(x - x3) + (x3 - x2)(y - y3)) / detT
                // l2 = ((y3 - y1)(x - x3) + (x1 - x3)(y - y3)) / detT
                // l3 = 1 - l1 - l2
                final float l1 = ((p1.y - p2.y) * ((float)x - p2.x) + (p2.x - p1.x) * ((float)y - p2.y)) / detT;
                final float l2 = ((p2.y - p0.y) * ((float)x - p2.x) + (p0.x - p2.x) * ((float)y - p2.y)) / detT;
                final float l3 = 1.0f - l2 - l1;

                // If the barycentric coords are all in the range [0, 1] then we lie in the triangle
                // - we add a small epsilon on the compare-with-zero check to fill the edges correctly
                // - we don't actually have to compare against one because, with the equations above,
                //   the coords always sum to one
                final float epsilon = -0.000000059604645f;
                if (l1 >= epsilon // && l1 <= 1.0f
                 && l2 >= epsilon // && l2 <= 1.0f
                 && l3 >= epsilon // && l3 <= 1.0f
                )
                {
                    // Within triangle! Find depth by lerping vertex depths
                    final float z = ndc0.z * l1 + ndc1.z * l2 + ndc2.z * l3;

                    // Depth buffer just holds raw depth values
                    final int pixelLoc = x + y * backbuffer.getWidth();
                    if (z <= depthBuffer[pixelLoc]) // TODO: also test z < 1 for far plane clipping
                    {
                        // Write depth buffer
                        depthBuffer[pixelLoc] = z;

                        // Lerp vertex colours
                        final float r = c0.r * l1 + c1.r * l2 + c2.r * l3;
                        final float g = c0.g * l1 + c1.g * l2 + c2.g * l3;
                        final float b = c0.b * l1 + c1.b * l2 + c2.b * l3;
                        final float a = c0.a * l1 + c1.a * l2 + c2.a * l3;

                        // Lerp texture coords
                        final float u = uv0.x * l1 + uv1.x * l2 + uv2.x * l3;
                        final float v = uv0.y * l1 + uv1.y * l2 + uv2.y * l3;

                        // Sample texture and modulate with the vertex colour
                        Colour4f textureColour = testTexture.sample(u, v);
                        final int red   = (int)((r * 255.0f) * textureColour.r);
                        final int green = (int)((g * 255.0f) * textureColour.g);
                        final int blue  = (int)((b * 255.0f) * textureColour.b);
                        final int alpha = (int)((a * 255.0f) * textureColour.a);
                        final int argb = packRgba(red, green, blue, alpha);

                        int[] pixels = ((DataBufferInt)backbuffer.getRaster().getDataBuffer()).getData();
                        pixels[pixelLoc] = argb;
                    }
                }
            }
        }
    }
}
```

Any pointers as to what I'm conceptually not getting would be helpful. Thanks.
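For reference, the usual trick (sketched here in my own naming - this is not code from the thread): weight each screen-space barycentric coordinate by 1/w of its vertex, where w is the clip-space w before the perspective divide, then renormalise so the weights sum to one. The corrected weights then interpolate colours/uvs correctly under perspective. The method and class names below are illustrative.

```java
public class PerspectiveBary {
    /** Returns perspective-correct weights {b1, b2, b3} from screen-space
     *  barycentrics {l1, l2, l3} and the vertices' clip-space w values. */
    public static float[] correct(float l1, float l2, float l3,
                                  float w0, float w1, float w2) {
        // Weight each coordinate by 1/w of its vertex...
        final float q1 = l1 / w0;
        final float q2 = l2 / w1;
        final float q3 = l3 / w2;
        // ...then renormalise so the weights sum to one again
        final float sum = q1 + q2 + q3;
        return new float[] { q1 / sum, q2 / sum, q3 / sum };
    }

    public static void main(String[] args) {
        // Screen-space midpoint of an edge whose endpoints have w = 1 and w = 3:
        // the corrected weights are biased towards the near (small-w) vertex.
        float[] b = correct(0.5f, 0.5f, 0.0f, 1.0f, 3.0f, 1.0f);
        System.out.printf("%.3f %.3f %.3f%n", b[0], b[1], b[2]);
        // prints 0.750 0.250 0.000
    }
}
```

Under an orthographic projection every vertex has w = 1, so the correction degenerates to the plain screen-space weights - which is why one rasteriser can serve both projection types.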
3. ## Camera setup for panorama image

I must be being a little slow - how does the pick_from_cubemap go from an x,y,z to a cube face and an x,y position?
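For reference, the standard direction-to-cube-face lookup works like this (my reconstruction, not the pick_from_cubemap from the thread): the axis with the largest absolute component selects the face, and the other two components divided by that magnitude give coordinates in [-1, 1] on the face. The face numbering and u/v orientation below are illustrative - real cubemap conventions (e.g. OpenGL's) flip some axes per face.

```java
public class CubeLookup {
    /** Returns {face, u, v}: face 0..5 = +X,-X,+Y,-Y,+Z,-Z; u,v in [-1,1]. */
    public static float[] pick(float x, float y, float z) {
        float ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
        if (ax >= ay && ax >= az) {
            // X-major: project the other two axes onto the face
            return new float[] { x > 0 ? 0 : 1, y / ax, z / ax };
        } else if (ay >= az) {
            return new float[] { y > 0 ? 2 : 3, x / ay, z / ay };
        } else {
            return new float[] { z > 0 ? 4 : 5, x / az, y / az };
        }
    }

    public static void main(String[] args) {
        // Mostly +X direction: lands on face 0 at (0.25, -0.25)
        System.out.println(java.util.Arrays.toString(pick(2f, 0.5f, -0.5f)));
        // prints [0.0, 0.25, -0.25]
    }
}
```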
4. ## OpenGL Camera setup for panorama image

I have a 3d landscape that I'd like to convert into an equirectangular panorama image (more specifically, one that I can plug into Google Maps like this: http://code.google.c...eatingPanoramas ). Normally I'd just render out a cubemap. However Google Maps doesn't want a cubemap, and I can't see a good way of converting from a cubemap to an equirectangular projection. Does anyone know what camera positions I'll need to render out an equirectangular image? Thanks.

Edit: should probably say that I'm doing the rendering with plain old fixed-function OpenGL, so I'd like to figure out how to do this without needing shaders. It doesn't have to be 100% accurate though.
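For reference, the core equirectangular mapping: each output pixel corresponds to a (longitude, latitude) pair, which gives a world-space ray direction; a conversion pass walks the output image and samples the source (e.g. a rendered cubemap) along that ray. This is my own sketch with illustrative names and an assumed axis convention (y up, z forward at longitude zero) - panorama formats may differ on where longitude zero sits.

```java
public class Equirect {
    /** Ray direction {x, y, z} for pixel (px, py) in a width x height panorama. */
    public static double[] rayForPixel(int px, int py, int width, int height) {
        // Pixel centre -> longitude in [-pi, pi), latitude in (+pi/2, -pi/2)
        double lon = (px + 0.5) / width  * 2.0 * Math.PI - Math.PI;
        double lat = Math.PI / 2.0 - (py + 0.5) / height * Math.PI;
        return new double[] {
            Math.cos(lat) * Math.sin(lon), // x
            Math.sin(lat),                 // y (up)
            Math.cos(lat) * Math.cos(lon)  // z (forward at lon = 0)
        };
    }
}
```

No single fixed-function camera produces this projection directly, which is why the usual route is several perspective renders (a cubemap or a ring of narrow views) resampled through this mapping.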
5. ## Clipping lines and triangles in a software renderer

So I also found this reference, which seems to be the same technique. My understanding is that the equation for the right plane in clip space is (w + x = 0). So using the rearranged equation I can find 'a', which gives me the intersection distance of the plane against my line segment to be clipped. However, when I do this, my 'a' ends up being a negative number. Surely this isn't correct, as it means the intersection point is not actually on the line. Test code currently looks like this:

```java
if ((startClassification & RIGHT_CODE) != 0 || (endClassification & RIGHT_CODE) != 0)
{
    // For right plane:
    final float top = clipStart.w + clipStart.x;
    final float bottom = (clipStart.w + clipStart.x) - (clipEnd.w + clipEnd.x);
    final float a = top / bottom;
    final float invA = 1f - a;

    // Clip start or end?
    final float dx = clipEnd.x - clipStart.x;
    final float dy = clipEnd.y - clipStart.y;
    final float dz = clipEnd.z - clipStart.z;
    final float dw = clipEnd.w - clipStart.w;
    final float len = (float)Math.sqrt(dx*dx + dy*dy + dz*dz + dw*dw);

    clipEnd.x = clipStart.x * invA + clipEnd.x * a;
    clipEnd.y = clipStart.y * invA + clipEnd.y * a;
    clipEnd.z = clipStart.z * invA + clipEnd.z * a;
    clipEnd.w = clipStart.w * invA + clipEnd.w * a;
}
```

Is my understanding of this 'a' value correct? And is a negative number valid, or does it mean I've done something else wrong?
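For reference, a self-contained sketch of the intersection parameter (my reconstruction, not the code from the post), using the sign convention where the *right* plane's inside test is x <= w, i.e. signed distance d = w - x, positive inside. Note that w + x = 0 is the *left* plane under this convention, so a sign mix-up between the two is one common source of an out-of-range 'a'. With a = d0 / (d0 - d1), a lies in [0, 1] exactly when the segment straddles the plane; a value outside that range means both endpoints classify to the same side.

```java
public class ClipSketch {
    /** Intersection parameter along p0->p1 for the right plane w - x = 0. */
    public static float intersectRightPlane(float x0, float w0, float x1, float w1) {
        final float d0 = w0 - x0; // signed distance of start point (positive = inside)
        final float d1 = w1 - x1; // signed distance of end point
        return d0 / (d0 - d1);
    }

    public static void main(String[] args) {
        // Start inside (x=0, w=1, d0=1), end outside (x=3, w=1, d1=-2):
        // crossing at a = 1 / (1 - (-2)) = 1/3
        System.out.println(intersectRightPlane(0f, 1f, 3f, 1f));
        // prints 0.33333334
    }
}
```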
6. ## Clipping lines and triangles in a software renderer

I'm trying to get clipping and culling working for my software renderer, but am having a bit of trouble understanding how to do the actual clipping. I've been following this paper on clipping but can't quite wrap my head around what space the clipping needs to be performed in. I was expecting to clip in normalised device coords, with their nice easy -1 to 1 range, but apparently that throws up some unpleasant edge cases, so it's better to clip in homogeneous space before the w divide to NDC space. Am I right in thinking this?

I'm starting with lines, so I figure I've got to do something like:

1. Transform vertices from world to homogeneous clip space by the model, view and projection matrices.
2. Classify each vertex.
3. Reject the whole line, accept the whole line, or clip the line.
4. Rasterise.

1 and 4 I've got, but how do I start on 2 and 3? What planes am I actually classifying/clipping against? Pointers appreciated - I'm a bit unsure where I'm supposed to be going next...
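For reference, a sketch of the classification step (my own reconstruction, not from the thread): in homogeneous clip space the view volume is bounded by the six planes -w <= x <= w, -w <= y <= w, -w <= z <= w (the z range assumes the OpenGL convention), so each vertex gets a bitmask of the planes it is outside. Then for a line with endpoint codes codeA and codeB: if (codeA & codeB) != 0 both endpoints are outside the same plane (trivially reject); if (codeA | codeB) == 0 both are inside (trivially accept); otherwise clip against each plane whose bit is set.

```java
public class Outcodes {
    public static final int LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8, NEAR = 16, FAR = 32;

    /** Bitmask of the clip planes that clip-space point (x, y, z, w) is outside. */
    public static int classify(float x, float y, float z, float w) {
        int code = 0;
        if (x < -w) code |= LEFT;
        if (x >  w) code |= RIGHT;
        if (y < -w) code |= BOTTOM;
        if (y >  w) code |= TOP;
        if (z < -w) code |= NEAR;
        if (z >  w) code |= FAR;
        return code;
    }
}
```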
7. ## Resources for writing a software renderer?

Thanks guys, I've got a triangle rasterised now with interpolated vertex colours. I can certainly see the advantage of the barycentric approach, it makes a lot of the preceding steps much more elegant.
8. ## Resources for writing a software renderer?

(Ignoring perspective correction for now.) So if I'm understanding you, I don't need to explicitly calculate the 4th point - I can go directly from the three triangle verts to a barycentric coord via that equation? (Although there seem to be several constants that could be extracted from the inner loop.) Then I'd use the barycentric coords to interpolate my colour/uvs and from that calculate my actual pixel colour?

I'm still slightly worried by looping over the screen-space bounding box for a triangle - that seems like I'll be looping over a lot of pixels and doing the costly barycentric coord calculation only to find that the point is actually outside of the triangle.

maxest: your project (and slides) look interesting. I'll have to have a look over the rest of it when I've got time. Thanks.
9. ## Resources for writing a software renderer?

I seem to have fallen at the first hurdle - how do I calculate the 4th point for barycentric coords? I'm afraid the notation on the Wikipedia article is somewhat beyond me right now.
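For reference, no 4th point is needed - the standard formula goes straight from the three screen-space vertices to the coordinates. A minimal sketch (my naming, not from the thread):

```java
public class Bary {
    /** Returns {l1, l2, l3} for point (px, py) against triangle (x1,y1)..(x3,y3). */
    public static float[] coords(float px, float py,
                                 float x1, float y1,
                                 float x2, float y2,
                                 float x3, float y3) {
        // Twice the signed triangle area; zero means the triangle is degenerate
        final float det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3);
        final float l1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det;
        final float l2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det;
        // The three coords always sum to one, so the last is implied
        return new float[] { l1, l2, 1.0f - l1 - l2 };
    }
}
```

The point is inside the triangle exactly when all three values are in [0, 1], and the same weights interpolate any per-vertex attribute.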
10. ## Resources for writing a software renderer?

How does the barycentric coordinates approach gel with perspective correction? I'd like to support both ortho and perspective projections, which from what I've been told would traditionally require two triangle rasterisers (one with perspective correction, one without).
11. ## OpenGL Resources for writing a software renderer?

I'd like to write a minimal software renderer - nothing fancy, just flat and single-textured polys, ideally matching the output from OpenGL (in terms of input positions and matrix setup) as closely as possible. I've used OpenGL loads, but never actually drilled down into the guts of a rasteriser before, so I know bits of the theory but could do with some resources to help me get started. What's a good place to start?
12. ## Donationware distribution and donation collection

Could be, feel free to move it if you think it's in the wrong place. At the moment I'm fighting with the technical aspects though. Turns out JustGiving has a REST api, so I'm trying to use that at the moment, but butting up against cross-site-scripting security measures.
13. ## Donationware distribution and donation collection

I'd like to change one of my programs from freeware to donationware - it'd still be free to download and use, but users could go to a webpage and donate (to a charity I'd choose). Users who donate would get a special 'supporter' icon shown next to their avatar in the program. Ideally this would all be automated, so someone else (like JustGiving) is handling the icky cash side and providing some kind of web feed I can query to see who's donated before. Has anyone any experience with this? Is there a good website/service for this? Thanks.
14. ## IDE without WORKSPACES/SOLUTIONS

If you're using Eclipse, then you're looking for scrapbook pages, which let you type and run test code without creating a project. For anything bigger, I do what Key_46 said and have a 'testbed' project for quick tests.