OrangyTang
  1. Oh, I also found [url=http://fileadmin.cs.lth.se/cs/Education/EDAN35/lectures/L2-rasterization.pdf]this presentation[/url], which superficially makes more sense, but again the actual formulas escape me, probably because it seems to introduce new variables/letters without actually defining them. >_< If anyone can give me some hints on how to understand the 'edge vectors' in it then that would be helpful. Thanks.
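    Edit: to save anyone else the confusion, here's my reading of what an 'edge vector' boils down to - just a sketch, and the exact form in the slides may differ:

[code]
// Sketch: each edge (a -> b) gets an edge function that is zero on the
// edge, positive on one side and negative on the other; a pixel lies
// inside the triangle when all three edge functions agree in sign
// (which sign counts as "inside" depends on the triangle winding).
float edgeFunction(Vector2f a, Vector2f b, float px, float py)
{
    return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
}
[/code]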
  2. As title, I'm trying to do perspective-correct texturing in a software renderer. So far I have non-perspective-correct texturing by looping over the bounding box of a triangle, calculating the barycentric coords, then either rejecting the pixel or using the coords to do texturing. Simple (and slow) but works, and I'm trying not to worry *too* much about performance right now. In particular, I'm trying to implement this as described by [url=http://www.gamedev.net/topic/595604-resources-for-writing-a-software-renderer/page__view__findpost__p__4775563]Erik Rufelt here[/url] - that is, some magic barycentric coord formula that I can't quite find/grasp which generates perspective-correct interpolants, instead of going the long way and correcting them manually for every value to be interpolated. [url=http://maxest.gct-game.net/vainmoinen/bachelor_thesis.pdf]Maxest's thesis[/url] lists pseudocode for this, but I can't figure out what the distance() pseudo-instruction does, nor how it relates to my existing barycentric coord calculation, as it appears to be using a completely different approach. So far, I have this:

[code]
public void rasteriseTriangle(Vector3f ndc0, Colour4f c0, Vector2f uv0,
                              Vector3f ndc1, Colour4f c1, Vector2f uv1,
                              Vector3f ndc2, Colour4f c2, Vector2f uv2)
{
    Vector2f p0 = ndcToScreen(ndc0);
    Vector2f p1 = ndcToScreen(ndc1);
    Vector2f p2 = ndcToScreen(ndc2);

    // First find screen rect (clamped to screen bounds)
    final int screenXMin = Math.max((int)Math.floor(Math.min(Math.min(p0.x, p1.x), p2.x)), 0);
    final int screenYMin = Math.max((int)Math.floor(Math.min(Math.min(p0.y, p1.y), p2.y)), 0);
    final int screenXMax = Math.min((int)Math.ceil(Math.max(Math.max(p0.x, p1.x), p2.x)), backbuffer.getWidth()-1);
    final int screenYMax = Math.min((int)Math.ceil(Math.max(Math.max(p0.y, p1.y), p2.y)), backbuffer.getHeight()-1);

    // Triangle determinant - if detT == 0 the triangle is degenerate, so skip it
    final float detT = (p1.y - p2.y) * (p0.x - p2.x) + (p2.x - p1.x) * (p0.y - p2.y);
    if (detT != 0.0f)
    {
        // Loop over screen bounds
        for (int y=screenYMin; y<=screenYMax; y++)
        {
            for (int x=screenXMin; x<=screenXMax; x++)
            {
                // l1 = [ (y2 - y3)(x - x3) + (x3 - x2)(y - y3) ] / detT
                // l2 = [ (y3 - y1)(x - x3) + (x1 - x3)(y - y3) ] / detT
                // l3 = 1 - l2 - l1
                final float l1 = ((p1.y - p2.y) * ((float)x - p2.x) + (p2.x - p1.x) * ((float)y - p2.y)) / detT;
                final float l2 = ((p2.y - p0.y) * ((float)x - p2.x) + (p0.x - p2.x) * ((float)y - p2.y)) / detT;
                final float l3 = 1.0f - l2 - l1;

                // If the barycentric coords are all in the range [0, 1] then we lie in the triangle
                // - we allow a small negative epsilon on the zero check to fill the edges correctly
                // - we don't actually have to compare against one because, with the equations above,
                //   the coords always sum to one
                final float epsilon = -0.000000059604645f;
                if (l1 >= epsilon // && l1 <= 1.0f
                 && l2 >= epsilon // && l2 <= 1.0f
                 && l3 >= epsilon // && l3 <= 1.0f
                   )
                {
                    // Within triangle!
                    // Find depth by lerping vertex depths
                    final float z = ndc0.z * l1 + ndc1.z * l2 + ndc2.z * l3;

                    // Depth buffer just has raw depth values - test against it
                    final int pixelLoc = x + y * backbuffer.getWidth();
                    if (z <= depthBuffer[pixelLoc]) // TODO: also test z < 1 for far plane clipping
                    {
                        // Write depth buffer
                        depthBuffer[pixelLoc] = z;

                        // Lerp vertex colours
                        final float r = c0.r * l1 + c1.r * l2 + c2.r * l3;
                        final float g = c0.g * l1 + c1.g * l2 + c2.g * l3;
                        final float b = c0.b * l1 + c1.b * l2 + c2.b * l3;
                        final float a = c0.a * l1 + c1.a * l2 + c2.a * l3;

                        // Lerp texture coords
                        final float u = uv0.x * l1 + uv1.x * l2 + uv2.x * l3;
                        final float v = uv0.y * l1 + uv1.y * l2 + uv2.y * l3;

                        // Sample texture and modulate with the vertex colour
                        Colour4f textureColour = testTexture.sample(u, v);
                        final int red   = (int)((r * 255.0f) * textureColour.r);
                        final int green = (int)((g * 255.0f) * textureColour.g);
                        final int blue  = (int)((b * 255.0f) * textureColour.b);
                        final int alpha = (int)((a * 255.0f) * textureColour.a);

                        final int argb = packRgba(red, green, blue, alpha);
                        int[] pixels = ((DataBufferInt)(backbuffer.getRaster().getDataBuffer())).getData();
                        pixels[pixelLoc] = argb;
                    }
                }
            }
        }
    }
}
[/code]

    Any pointers as to what I'm conceptually not getting would be helpful. Thanks.
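    Edit: for reference, my current understanding of what the perspective-correct version would look like - just a sketch, assuming I kept each vertex's clip-space w around from before the perspective divide (w0/w1/w2 are hypothetical names, not anything in the code above):

[code]
// Sketch: perspective-correct weights built from the screen-space
// barycentrics. Assumes w0, w1, w2 are the clip-space w values of the
// three vertices (hypothetical - not kept by the code above yet).
final float b0 = l1 / w0;
final float b1 = l2 / w1;
final float b2 = l3 / w2;
final float invSum = 1.0f / (b0 + b1 + b2);

// These replace l1/l2/l3 when lerping colours and texture coords
final float pc1 = b0 * invSum;
final float pc2 = b1 * invSum;
final float pc3 = b2 * invSum;

final float u = uv0.x * pc1 + uv1.x * pc2 + uv2.x * pc3;
final float v = uv0.y * pc1 + uv1.y * pc2 + uv2.y * pc3;
[/code]

    With an orthographic projection w is constant across the triangle, so these reduce to the plain screen-space coords, which is presumably how one rasteriser can handle both cases.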
  3. OpenGL Camera setup for panorama image

    I must be being a little slow - how does pick_from_cubemap go from an x,y,z direction to a cube face and an x,y position?
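    Edit: my current guess at how that lookup works, as a sketch - the face numbering and s/t orientation here are assumptions, since the exact convention depends on how the cubemap was rendered:

[code]
// Sketch: pick a cube face from a direction (x, y, z). The component with
// the largest magnitude selects the face (its sign picks +/-), and the
// other two components divided by that magnitude give coords in [-1, 1]
// on that face. Face indices and s/t orientation are assumptions.
int pickFace(float x, float y, float z, float[] st)
{
    final float ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
    if (ax >= ay && ax >= az)
    {
        st[0] = y / ax;  st[1] = z / ax;
        return (x > 0.0f) ? 0 : 1;  // +X / -X
    }
    else if (ay >= az)
    {
        st[0] = x / ay;  st[1] = z / ay;
        return (y > 0.0f) ? 2 : 3;  // +Y / -Y
    }
    else
    {
        st[0] = x / az;  st[1] = y / az;
        return (z > 0.0f) ? 4 : 5;  // +Z / -Z
    }
}
[/code]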
  4. I have a 3d landscape that I'd like to convert into an equirectangular panorama image (more specifically, one that I can plug into Google Maps like this: [url="http://code.google.com/apis/maps/documentation/javascript/services.html#CreatingPanoramas"]http://code.google.c...eatingPanoramas[/url]).

    Normally I'd just render out a cubemap. However, Google Maps doesn't want a cubemap, and I can't see a good way of converting from a cubemap to an equirectangular projection. Does anyone know what camera positions I'll need to render out an equirectangular image? Thanks.

    Edit: I should probably say that I'm doing the rendering with plain old fixed-function OpenGL, so I'd like to figure out how to do this without needing shaders. It doesn't have to be 100% accurate though.
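    Edit 2: the cubemap route discussed above works per output pixel - map each equirectangular pixel to a latitude/longitude, turn that into a direction, then look the direction up in a rendered cubemap. A minimal sketch of the pixel-to-direction part (the y-up axis convention and the Vector3f(x, y, z) constructor are assumptions):

[code]
// Sketch: map an equirectangular output pixel to a direction vector,
// which can then be looked up in a cubemap (y-up convention assumed)
Vector3f equirectPixelToDirection(int px, int py, int width, int height)
{
    final double lon = ((px + 0.5) / (double)width) * 2.0 * Math.PI - Math.PI;   // -pi .. +pi
    final double lat = Math.PI * 0.5 - ((py + 0.5) / (double)height) * Math.PI;  // +pi/2 .. -pi/2
    return new Vector3f((float)(Math.cos(lat) * Math.sin(lon)),
                        (float)Math.sin(lat),
                        (float)(-Math.cos(lat) * Math.cos(lon)));
}
[/code]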
  5. So I also found [url=http://omega.di.unipi.it/web/IUM/Waterloo/node51.html]this reference[/url], which seems to be the same technique. My understanding is that the equation for the right plane in clip space is (w + x = 0). So using the rearranged equation I can find 'a', which gives me the intersection distance of the plane against my line segment to be clipped. However, when I do this my 'a' ends up being a negative number. Surely this isn't correct, as it means the intersection point is not actually on the line. Test code currently looks like this:

[code]
if ((startClassification & RIGHT_CODE) != 0
 || (endClassification & RIGHT_CODE) != 0)
{
    // For right plane:
    final float top = clipStart.w + clipStart.x;
    final float bottom = (clipStart.w + clipStart.x) - (clipEnd.w + clipEnd.x);
    final float a = top / bottom;
    final float invA = 1f - a;

    // Clip start or end?
    final float dx = clipEnd.x - clipStart.x;
    final float dy = clipEnd.y - clipStart.y;
    final float dz = clipEnd.z - clipStart.z;
    final float dw = clipEnd.w - clipStart.w;
    final float len = (float)Math.sqrt(dx*dx + dy*dy + dz*dz + dw*dw);

    clipEnd.x = clipStart.x * invA + clipEnd.x * a;
    clipEnd.y = clipStart.y * invA + clipEnd.y * a;
    clipEnd.z = clipStart.z * invA + clipEnd.z * a;
    clipEnd.w = clipStart.w * invA + clipEnd.w * a;
}
[/code]

    Is my understanding of this 'a' value correct? And is a negative number valid, or does it mean I've done something else wrong?
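    Edit: writing out the general form to check myself - each clip plane can be written as a boundary coordinate BC(P) that is positive inside the view volume. In Blinn's convention the right plane is BC = w - x (it's the left plane that is w + x), and the crossing parameter along the segment is then:

[code]
// Sketch: parametric intersection against one clip plane. bc0 and bc1 are
// the boundary coordinates of the segment's two endpoints (e.g.
// bc = w - x for the right plane, bc = w + x for the left plane).
// A genuine crossing gives a in [0, 1]; a negative result suggests the
// boundary coordinate's sign is flipped.
float intersectionParam(float bc0, float bc1)
{
    return bc0 / (bc0 - bc1);
}
[/code]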
  6. I'm trying to get clipping and culling working for my software renderer, but am having a bit of trouble understanding how to do the actual clipping. I've been following [url=http://research.microsoft.com/pubs/73937/p245-blinn.pdf]this paper[/url] on clipping, but can't quite wrap my head around what space the clipping needs to be performed in. I was expecting to clip in normalised device coords, with their nice easy -1 to 1 range, but apparently that throws up some unpleasant edge cases, so it's better to clip in homogeneous space before the w divide to NDC space. Am I right in thinking this?

    I'm starting with lines, so I figure I've got to do something like:
    1. Transform vertices from world to homogeneous clip space by transforming by the model, view and projection matrices.
    2. Classify each vertex.
    3. Reject the whole line, accept the whole line, or clip the line.
    4. Rasterise.

    1 and 4 I've got, but how do I start on 2 and 3? What planes am I actually classifying/clipping against? Pointers appreciated, I'm unsure where I'm supposed to be going next...
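    Edit: for step 2, my current understanding (a sketch - the bit values are arbitrary and a four-component Vector4f is assumed) is that each vertex gets an outcode against the six homogeneous clip planes -w <= x <= w, -w <= y <= w, -w <= z <= w:

[code]
// Sketch: classify a clip-space vertex against the six homogeneous clip
// planes as a bitmask 'outcode' (bit values arbitrary, Vector4f assumed)
static final int LEFT_CODE   = 1;
static final int RIGHT_CODE  = 2;
static final int BOTTOM_CODE = 4;
static final int TOP_CODE    = 8;
static final int NEAR_CODE   = 16;
static final int FAR_CODE    = 32;

int classify(Vector4f p)
{
    int code = 0;
    if (p.x < -p.w) code |= LEFT_CODE;
    if (p.x >  p.w) code |= RIGHT_CODE;
    if (p.y < -p.w) code |= BOTTOM_CODE;
    if (p.y >  p.w) code |= TOP_CODE;
    if (p.z < -p.w) code |= NEAR_CODE;
    if (p.z >  p.w) code |= FAR_CODE;
    return code;
}
[/code]

    Step 3 then falls out of the codes: if (codeStart & codeEnd) != 0 both ends are outside the same plane, so trivially reject; if (codeStart | codeEnd) == 0 both ends are inside, so trivially accept; otherwise clip against each plane whose bit is set.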
  7. Thanks guys, I've got a triangle rasterised now with interpolated vertex colours. I can certainly see the advantage of the barycentric approach; it makes a lot of the preceding steps much more elegant.
  8. [quote name='Erik Rufelt' timestamp='1297970264' post='4775529']
    Are you trying to do perspective correction or simple 2D barycentric coordinates? The 2D coordinates are the formulas on the page that look like this:
    l1 = [ (y2 - y3)(x - x3) + (x3 - x2)(y - y3) ] / [ (y2 - y3)(x1 - x3) + (x3 - x2)(y1 - y3) ]
    l2 = ...
    l3 = 1 - l1 - l2
    Triangle points xi,yi, and point in triangle x,y. Then if you have a color [b]ci[/b] for each triangle point, the color at x,y is [b]l1 * c1 + l2 * c2 + l3 * c3[/b]. Any float value can be interpolated in the same way, so in the color you would do it for R, G and B, if each channel is a float.
    [/quote]
    (Ignoring perspective correction for now.) So if I'm understanding you, I don't need to explicitly calculate the 4th point - I can go directly from the three triangle verts to a barycentric coord via that equation (although there seem to be several constants that could be extracted from the inner loop)? Then I'd use the barycentric coords to interpolate my colour/uvs and from that calculate my actual pixel colour?

    I'm still slightly worried by looping over the screen-space bounding box for a triangle - that seems like I'll be looping over a lot of pixels and doing the costly barycentric coord calculation only to find that the point is actually outside of the triangle.

    maxest: your project (and slides) look interesting. I'll have to have a look over the rest of it when I've got time. Thanks.
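    Edit: on extracting constants - the barycentric numerators are affine in x and y, so the inner-loop work can be reduced to a couple of adds by stepping them incrementally. A sketch for l1 (l2 is handled the same way; the names here are made up):

[code]
// Sketch: step l1 incrementally across the bounding box instead of
// re-evaluating the full barycentric formula per pixel
void rasteriseIncremental(float x2, float y2, float x3, float y3,
                          float detT, int xMin, int yMin, int xMax, int yMax)
{
    final float l1dx = (y2 - y3) / detT;  // change in l1 per +1 in x
    final float l1dy = (x3 - x2) / detT;  // change in l1 per +1 in y
    float l1Row = ((y2 - y3) * (xMin - x3) + (x3 - x2) * (yMin - y3)) / detT;

    for (int y = yMin; y <= yMax; y++, l1Row += l1dy)
    {
        float l1 = l1Row;
        for (int x = xMin; x <= xMax; x++, l1 += l1dx)
        {
            // ...same incremental update for l2, then l3 = 1 - l1 - l2,
            // then the inside test and shading as before...
        }
    }
}
[/code]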
  9. I seem to have fallen at the first hurdle - how do I calculate the 4th point for barycentric coords? I'm afraid the notation in the Wikipedia article is somewhat beyond me right now.
  10. How does the barycentric coordinates approach gel with perspective correction? I'd like to support both ortho and perspective projections, which from what I've been told would traditionally require two triangle rasterisers (one with perspective correction, one without).
  11. I'd like to write a minimal software renderer - nothing fancy, just flat and single-textured polys, ideally matching the output from OpenGL (in terms of input positions and matrix setup) as closely as possible. I've used OpenGL loads, but never actually drilled down into the guts of a rasteriser before, so I know bits of the theory but could do with some resources to help me get started. What's a good place to start?
  12. Could be, feel free to move it if you think it's in the wrong place. At the moment I'm fighting with the technical aspects though. Turns out JustGiving has a REST API, so I'm trying to use that, but I'm butting up against cross-site scripting security measures.
  13. I'd like to change one of my programs from freeware to donationware - it'd still be free to download and use, but users could go to a webpage and donate (to a charity I'd choose). Users who donate would get a special 'supporter' icon shown next to their avatar in the program. Ideally this would all be automated, so someone else (like JustGiving) handles the icky cash side and provides some kind of web feed I can query to see who's donated before. Has anyone any experience with this? Is there a good website/service for this? Thanks.
  14. IDE without WORKSPACES/SOLUTIONS

    If you're using Eclipse, then you're looking for scrapbook pages, which let you type and run test code without creating a project. For anything bigger, I do what Key_46 said and have a 'testbed' project for quick tests.
  15. Quote: Original post by davepermen
    if your antivir is crap you can't blame windows. but no, it doesn't get slower over time on it's own.

    I feel like this is my main gripe with Windows, and it highlights the fundamental difference in philosophy. *nix tends to assume that all programs are hostile, so access rights are limited by default and everything is nicely sandboxed. The whole setup (not just the system tools, but the whole stack) is designed to keep running in the face of rogue programs. Windows tends to assume all programs are benign and perfectly written, and that if a program wants to do something (even if it's something incredibly stupid) then it's going to let it.

    This means that Windows is incredibly sensitive to badly-written programs. And since most programs are badly written in some way (due in no small part to MS 1. not defining proper house rules up front and 2. changing those house rules arbitrarily between major versions), this results in Windows feeling much less stable. Installing a new piece of software feels like walking on eggshells, in case it updates some critical shared library or similar and hoses your whole system - or worse, silently degrades your performance in a way which is near impossible to track down.

    As an example - on an old work machine we had to have several bits of crap installed by IT which couldn't be removed. One of those (probably the anti-virus) was so bad that when it kicked in it could freeze up the whole machine for up to several minutes. So badly, in fact, that the mouse cursor would stop responding, none of the keyboard system commands would work, and it would stop updating the screen. I don't care how badly written any given program is - a good OS should *never* let a rogue program lock up a computer that badly.