Help with vertex pipeline for 3d software renderer

5 comments, last by lasse42 1 year, 7 months ago

Hello, I'm trying to write a simple 3D software renderer in C#/.NET, but I've run into problems with the matrices/projection part.

Below I've included the relevant part of my code; it's capable of projecting a triangle in 3D space and then rasterizing it:

public void RenderTriangle(System.Numerics.Vector4 v1, System.Numerics.Vector4 v2, System.Numerics.Vector4 v3, Color4 color, Camera camera, float interaction)
{
	// create matrices
	var viewMatrix = Matrix4x4.CreateLookAt(new System.Numerics.Vector3(camera.Position.X, camera.Position.Y, camera.Position.Z), new System.Numerics.Vector3(camera.Target.X, camera.Target.Y, camera.Target.Z), System.Numerics.Vector3.UnitY);
	var modelMatrix = Matrix4x4.CreateFromYawPitchRoll(0, 0, 0) * Matrix4x4.CreateTranslation(0f, 0f, 0f);
	var transformMatrix = modelMatrix * viewMatrix;
	var projectionMatrix = Matrix4x4.CreatePerspectiveFieldOfView(0.78f, (float)Width / Height, 0.01f, 1000.0f);

	// transform modelview
	var transformed1 = System.Numerics.Vector4.Transform(v1, transformMatrix);
	var transformed2 = System.Numerics.Vector4.Transform(v2, transformMatrix);
	var transformed3 = System.Numerics.Vector4.Transform(v3, transformMatrix);

	// transform projection
	var fv1 = System.Numerics.Vector4.Transform(transformed1, projectionMatrix);
	var fv2 = System.Numerics.Vector4.Transform(transformed2, projectionMatrix);
	var fv3 = System.Numerics.Vector4.Transform(transformed3, projectionMatrix);

	// perspective divide (W now holds 1/W, kept for later perspective-correct interpolation)
	fv1.W = 1f / fv1.W;
	fv1 = new System.Numerics.Vector4(fv1.X * fv1.W, fv1.Y * fv1.W, fv1.Z * fv1.W, fv1.W);
	fv2.W = 1f / fv2.W;
	fv2 = new System.Numerics.Vector4(fv2.X * fv2.W, fv2.Y * fv2.W, fv2.Z * fv2.W, fv2.W);
	fv3.W = 1f / fv3.W;
	fv3 = new System.Numerics.Vector4(fv3.X * fv3.W, fv3.Y * fv3.W, fv3.Z * fv3.W, fv3.W);

	// viewport transform (NDC to screen coordinates)
	fv1.X = fv1.X * Width + Width / 2.0f;
	fv1.Y = -fv1.Y * Height + Height / 2.0f;
	fv2.X = fv2.X * Width + Width / 2.0f;
	fv2.Y = -fv2.Y * Height + Height / 2.0f;
	fv3.X = fv3.X * Width + Width / 2.0f;
	fv3.Y = -fv3.Y * Height + Height / 2.0f;

	// rasterization:
	// 1. sort vertices based on winding order and y
	// 2. line drawing algorithm to populate minx/maxx arrays for each y
	// 3. go over each minx/maxx pair and draw the span
	// 4. profit
}
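The four commented rasterization steps could be sketched roughly like this (a hypothetical, unoptimized scanline fill; `FillTriangle` and the `putPixel` callback are illustration names, not part of the original code):

```csharp
// Hypothetical sketch of steps 2-3: scanline-fill a screen-space triangle.
// Assumes integer screen coordinates and a putPixel(x, y) callback.
static void FillTriangle(int x0, int y0, int x1, int y1, int x2, int y2,
                         int height, System.Action<int, int> putPixel)
{
    var minX = new int[height];
    var maxX = new int[height];
    for (int y = 0; y < height; y++) { minX[y] = int.MaxValue; maxX[y] = int.MinValue; }

    // Step 2: walk each edge, recording the min/max x per scanline.
    void Edge(int ax, int ay, int bx, int by)
    {
        int steps = System.Math.Max(System.Math.Abs(bx - ax), System.Math.Abs(by - ay));
        for (int i = 0; i <= steps; i++)
        {
            int x = ax + (bx - ax) * i / System.Math.Max(steps, 1);
            int y = ay + (by - ay) * i / System.Math.Max(steps, 1);
            if (y < 0 || y >= height) continue;
            if (x < minX[y]) minX[y] = x;
            if (x > maxX[y]) maxX[y] = x;
        }
    }
    Edge(x0, y0, x1, y1);
    Edge(x1, y1, x2, y2);
    Edge(x2, y2, x0, y0);

    // Step 3: fill each span (rows never touched have minX > maxX and are skipped).
    for (int y = 0; y < height; y++)
        for (int x = minX[y]; x <= maxX[y]; x++)
            putPixel(x, y);
}
```

Color or UV interpolation would then happen per pixel inside the span loop.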



var mera = new Camera();

mera.Position = new Vector3(0, 0, 3.0f);

mera.Target = new Vector3(0, 0f, 0f);

TheDevice.RenderTriangle(new System.Numerics.Vector4(0.0f, 0.5f, 0f, 1f), new System.Numerics.Vector4(0.5f, -0.5f, 0f, 1f), new System.Numerics.Vector4(-0.5f, -0.5f, 0f, 1f), new Color4(1f, 1f, 1f, 1f), mera, interaction);

So far so good. The next two steps are clipping and texture mapping/colors interpolated across the triangle, and herein lies my problem. This is the vertex pipeline for OpenGL: https://www.scratchapixel.com/images/upload/perspective-matrix/vertex-transform-pipeline.png. Have I gotten the pipeline right? Is my code between "// transform projection" and "// perspective divide" in homogeneous clip space? And is the W of my v1/v2/v3 after my transforms the correct one to divide by for perspective-correct texturing? My knowledge of matrices is superficial and gained from playing with OpenGL, where these things are taken care of behind the scenes; I only really understand that matrices move coordinates from one space into another.

I've used the built-in System.Numerics vector and matrix structs; their source is here:

https://github.com/dotnet/runtime/blob/main/src/libraries/System.Private.CoreLib/src/System/Numerics/Matrix4x4.cs

https://github.com/dotnet/runtime/blob/main/src/libraries/System.Private.CoreLib/src/System/Numerics/Vector4.cs

The Matrix4x4 type uses a right-handed coordinate system where z points toward the viewer, and it is row-major.
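A quick way to see what that row-major convention means in practice (a small sketch of my own, not from the code above): `Vector4.Transform` treats the vector as a row vector, computing v * M, so matrices concatenate left to right in the order they are applied:

```csharp
using System.Numerics;

// System.Numerics multiplies row vectors on the left: v' = v * M.
// Transforming by (model * view) therefore applies model first, then view.
var model = Matrix4x4.CreateTranslation(1f, 0f, 0f);
var view  = Matrix4x4.CreateTranslation(0f, 2f, 0f);

var combined = model * view;
var a = Vector4.Transform(new Vector4(0f, 0f, 0f, 1f), combined);
var b = Vector4.Transform(Vector4.Transform(new Vector4(0f, 0f, 0f, 1f), model), view);
// a and b are both (1, 2, 0, 1): model applied first, then view.
```

This is why `modelMatrix * viewMatrix * projectionMatrix` is the natural concatenation order here, the opposite of the column-vector convention common in OpenGL texts.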

Would very much appreciate any advice or help.


While I sadly don't have much time right now to describe this in more detail, I can provide a link to 'yet another implementation' (shameless self-promotion) that I wrote: https://github.com/Zgragselus/SoftwareRenderer

It is NOT a production-level, high-performance rasterizer; it was my attempt to write an understandable and easy-to-read one back in the day. It wasn't that bad, though - I remember using it to render some old BSP files and animated MD5 models.

What you're interested in (in relation to perspective-correct texturing) is the _dev_rasterize_triangles function.

In short, the vertex transformations are done in the vertex shader part, which is programmable (there is a shaders.h file, which contains them). For each processed vertex a vertex shader is invoked, then _dev_rasterize_triangles is called. There is a rule that position is always the first 'varying' data channel (sent from the vertex shader further into the pipeline).

This function first performs clipping. Now, this is a tricky one, because you may end up not rasterizing at all, rasterizing a single triangle, or rasterizing several triangles (since you are clipping a triangle against a box viewport, you can end up with a convex polygon of up to 7 vertices, and therefore up to 5 triangles). The process goes as follows:

  1. Clip triangle against viewport box, obtaining polygon
  2. Triangulate polygon (which is later used for rendering)

For the data carried through clipping, you NEED to correctly perform perspective correction and interpolation during clipping, e.g. for texture coordinates.
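To illustrate that point (a hedged sketch of my own, not code from the linked renderer): one Sutherland-Hodgman pass against the near plane, interpolating an attribute along each clipped edge. Because clipping here happens before the perspective divide, plain linear interpolation of the attribute is still valid; the convention (camera looks down +z) and the names `ClipVertex`/`ClipAgainstNear` are assumptions for the example:

```csharp
using System.Collections.Generic;
using System.Numerics;

// Hypothetical sketch: Sutherland-Hodgman clip of a polygon against the
// near plane z = near (convention here: "in front" means z >= near).
// Each vertex carries an attribute (e.g. a UV) in Attr.
struct ClipVertex
{
    public Vector3 Pos;
    public Vector2 Attr;
}

static List<ClipVertex> ClipAgainstNear(List<ClipVertex> poly, float near)
{
    var result = new List<ClipVertex>();
    for (int i = 0; i < poly.Count; i++)
    {
        var a = poly[i];
        var b = poly[(i + 1) % poly.Count];
        bool aIn = a.Pos.Z >= near;
        bool bIn = b.Pos.Z >= near;

        if (aIn) result.Add(a);
        if (aIn != bIn) // edge crosses the plane: emit the intersection point
        {
            float t = (near - a.Pos.Z) / (b.Pos.Z - a.Pos.Z);
            result.Add(new ClipVertex
            {
                Pos = Vector3.Lerp(a.Pos, b.Pos, t),
                Attr = Vector2.Lerp(a.Attr, b.Attr, t), // linear is OK pre-divide
            });
        }
    }
    return result;
}
```

Clipping a triangle with one vertex behind the plane this way yields a quad, which is then triangulated before rasterization, as step 2 above says.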

Then, halfspace rasterization is done. During rasterization a _dev_pixel_shader function is called, which executes whatever pixel-shading function is currently bound. There is an option to perform z-testing (I think only the less-than-or-equal variant).

I hope it helps a bit. Let us know how it goes!

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

@lasse42 I haven't done this for 20 years, and this is probably the old-school variant, but if you are after a simple solution, here is one:

For projection, you can use this simple formula from the 90's:
pos2dx = fovx * pos3dx / pos3dz + center2dx;
pos2dy = - fovy * pos3dy / pos3dz + center2dy;

This will "transform" 3D world space to screen space. The values used above can be "extracted" from a D3D projection matrix. (I don't have my hobby code here, but if you are interested I'll dig it up for you. Your sample code is very similar, btw.)
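In System.Numerics terms, those fovx/fovy scale factors correspond to the M11 and M22 entries of a standard perspective matrix; a small sketch of my own tying the two together:

```csharp
using System;
using System.Numerics;

// The fov scales in the 90's-style formula correspond to the diagonal of a
// perspective matrix. For CreatePerspectiveFieldOfView(fovY, aspect, ...):
//   M22 = 1 / tan(fovY / 2)    (the "fovy" factor)
//   M11 = M22 / aspect         (the "fovx" factor)
float fovY = 0.78f, aspect = 16f / 9f;
var proj = Matrix4x4.CreatePerspectiveFieldOfView(fovY, aspect, 0.01f, 1000f);

float expectedM22 = 1f / MathF.Tan(fovY / 2f);
// proj.M22 ≈ expectedM22, and proj.M11 ≈ expectedM22 / aspect
```

So `pos2dx = M11 * x / z * cx + cx` (with cx = half the screen width) reproduces the formula above, modulo the sign conventions of the handedness you pick.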

Before projecting you should clip the triangles. You can either use the Sutherland-Hodgman algorithm in 3D against the frustum planes, or clip against the near plane only (which produces 0, 1, or 2 triangles) and clip the rest in screen space during rasterization.

Regarding UV interpolation, you can't do it in a linear fashion, and the same applies to pixel depth.
But you can with tx/z, ty/z, and 1/z, and you just have to do the "same" (inverse) math when the time comes (at every pixel):
pixeltx = {interpolated tx/z} / {interpolated 1/z}
This is what we used back in high school; I hope I remember correctly!
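A tiny worked version of that recipe (my own sketch; `PerspectiveCorrectU` is a hypothetical helper): interpolate u/z and 1/z linearly in screen space, then divide per pixel.

```csharp
// Hypothetical sketch: perspective-correct interpolation of a texture
// coordinate u along an edge between two vertices at depths z0 and z1.
// Only u/z and 1/z are linear in screen space; the per-pixel divide
// recovers the true u.
static float PerspectiveCorrectU(float u0, float z0, float u1, float z1, float t)
{
    float uOverZ = (u0 / z0) * (1 - t) + (u1 / z1) * t; // interpolated u/z
    float invZ   = (1f / z0) * (1 - t) + (1f / z1) * t; // interpolated 1/z
    return uOverZ / invZ;
}
```

At the endpoints this returns u0 and u1 exactly; at the midpoint of an edge with u0 = 0, z0 = 1, u1 = 1, z1 = 3 it returns 0.25 rather than the naive 0.5, which is exactly the perspective foreshortening you see on textured floors.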

@vilem otte

Thank you very much for the code, such straightforward code is exactly what I need to learn from. I've downloaded it and will look through it when time permits.

@bmarci

Also, thanks to you for the formula; when I have the time I'll play around with it. I'd also love the code you speak of.

In the meantime I now have fully perspective-correct texture mapping in my engine: with matrices it's tx/W and 1/W, linearly interpolated in screen space, and then the two are divided per pixel. But I would still like to learn how to do it more "manually", if that makes sense.

lasse42 said:
Also thanks to you for the formula, when i have the time i will try play around with it, also i would love the code you speak of.

Sure, I use this to convert to screen coordinates:

void ConvertToScreenCoordinates(const TVector& pt, TVector2& res)
// pt  - Source position in camera space
// res - Projected point in screen coords
{
	if (pt.z <= 0) return; // Behind the screen plane

	float cx = (float)screen_size_x / 2;
	float cy = (float)screen_size_y / 2;

	res.x = cx + ( pt.x * projection_matrix.x.x / pt.z) * cx;
	res.y = cy + (-pt.y * projection_matrix.y.y / pt.z) * cy;
}

@bmarci It was really more the code you use to project from object/model space into world space and then through the perspective projection.

This topic is closed to new replies.
