OpenGL Material on C# 3D Software Renderer

This topic is 1229 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.


I want to create a 3D Software Renderer, with a few limitations - it can't use OpenGL or DirectX (shocker, huh?), and it has to be written in either C# or Java. I will be using C#. 

I've searched the web quite a lot and failed to find any material that would actually help. Many of the links posted are outdated, and most of the material that still, well, exists online is purely theoretical and mostly limited to 'how to draw this line'.

Can you recommend a source online that explains this subject in a bit more detail, preferably something not too outdated?


"Many of the links posted are outdated, and most of the material that still, well, exists online is purely theoretical and mostly limited to 'how to draw this line'."

Well, that's generally how you implement anything. You read material that explains the general methods you use to create a software renderer. Then you implement it yourself using the programming language of your choice.

 

The algorithms, math, and underlying fundamentals haven't changed since the '90s. You could read about software rendering from 1990 and implement the very same thing in a modern programming language, using modern single- and multi-core parallelism, or even the GPU, to run it more in parallel. That particular aspect might not be very interesting for you though, so if you want to keep it simple - stick to the original methods and try to understand them well.

 

Anyways, I googled for 5 mins and found a C# tutorial series:

http://blogs.msdn.com/b/davrous/archive/2013/06/13/tutorial-series-learning-how-to-write-a-3d-soft-engine-from-scratch-in-c-typescript-or-javascript.aspx

 

A more detailed series here:

http://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index/

 

Also, keep in mind that if you are looking for strictly C# code, you are going to have a bad time. You really need to be able to learn from and understand articles that explain things from a more academic perspective: in terms of algorithms, pseudo-code, and algebra (and unfortunately, sometimes logic).

 

To make a software renderer you might want to make sure you have covered at least the basics in linear algebra first.
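To get a feel for what those basics buy you, here is a minimal sketch in Java (the requirements allow either Java or C#; all names here are illustrative, not from any particular library): a 3D vector as a plain double[3], plus the perspective projection that turns a camera-space point into pixel coordinates.

```java
// Minimal sketch of the linear algebra a software renderer needs:
// vector operations and a simple perspective projection.
public class MiniMath {
    static double[] sub(double[] a, double[] b) {
        return new double[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }
    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
    static double[] cross(double[] a, double[] b) {
        return new double[]{
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
    }
    // Project a camera-space point onto a screen of the given size.
    // 'focal' plays the role of the distance to the projection plane.
    static double[] project(double[] p, double focal, int width, int height) {
        double x = p[0] * focal / p[2];   // the perspective divide by depth
        double y = p[1] * focal / p[2];
        // shift origin to screen center, flip y so +y is up in camera space
        return new double[]{x + width / 2.0, height / 2.0 - y};
    }
    public static void main(String[] args) {
        double[] s = project(new double[]{1, 1, 2}, 100, 640, 480);
        System.out.println(s[0] + " " + s[1]); // lands right of and above center
    }
}
```

Everything a rasterizer does downstream (clipping, shading, texture mapping) is built out of operations like these.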

Edited by Kaptein


I wanted something a little bit more advanced than just drawing a line, because I don't really know how to properly scale this line into an entire scene, and I'm looking for something on a larger scale. 

The thing is, it's a school project, and the requirements are either Java or C#, and no OpenGL or DirectX. The first link uses WPF (which uses DirectX for rendering) and SharpDX. 

Edited by iYossi


Well, in your original post you said you wanted to "create a software renderer," which is the same thing in any language. The no OpenGL or DirectX requirement also doesn't change anything. The underlying fundamentals are still the same. All that changes is how you present the image on the screen.

Imagine I were allowed to use OpenGL: what would change? I could rasterize to a buffer in memory and present the finished image using OpenGL. In other words, OpenGL would just be my in-program image viewer.

 

Just get started. It doesn't matter what you draw on. You could draw to a buffer that you write out as a .bmp file and open in the Windows image viewer. If the goal is a software renderer, the rest doesn't matter. Just focus on the rasterizing part; you can worry about what you draw on later.
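As a concrete version of the "draw to a buffer, dump it to a .bmp" idea, here is a minimal sketch in Java (ImageIO ships with the JDK and can encode BMP; the gradient is just a placeholder for whatever you rasterize):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Fill an in-memory framebuffer, then dump it to a .bmp file
// that any image viewer can open.
public class FramebufferDump {
    public static void main(String[] args) throws Exception {
        int width = 320, height = 240;
        int[] pixels = new int[width * height];   // packed RGB framebuffer

        // "Rasterize" something trivial: a horizontal grayscale gradient.
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                int c = 255 * x / (width - 1);
                pixels[y * width + x] = (c << 16) | (c << 8) | c;
            }

        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, width, height, pixels, 0, width);
        ImageIO.write(img, "bmp", new File("frame.bmp"));
    }
}
```

Your renderer only ever touches the `pixels` array; how the result gets on screen (file, window, OpenGL quad) is a detail you can swap out later.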

 

The links I posted will tell you everything you need to know to get started.

From the series I posted, here is a good article on triangle rasterization:

http://fgiesen.wordpress.com/2013/02/08/triangle-rasterization-in-practice/
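A minimal sketch of the half-space approach that article describes, in Java (no clipping, no fill-rule tie-breaking, and vertex order matters; a real rasterizer handles all three):

```java
// Half-space (edge function) triangle fill over an integer grid.
public class TriangleFill {
    // Twice the signed area of triangle (a, b, p);
    // the sign tells you which side of edge ab the point p lies on.
    static int edge(int ax, int ay, int bx, int by, int px, int py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    static void fill(boolean[][] buf, int x0, int y0, int x1, int y1, int x2, int y2) {
        int minX = Math.min(x0, Math.min(x1, x2)), maxX = Math.max(x0, Math.max(x1, x2));
        int minY = Math.min(y0, Math.min(y1, y2)), maxY = Math.max(y0, Math.max(y1, y2));
        // Walk the bounding box; a pixel is inside when all three
        // edge functions agree in sign for this vertex winding.
        for (int y = minY; y <= maxY; y++)
            for (int x = minX; x <= maxX; x++) {
                int w0 = edge(x1, y1, x2, y2, x, y);
                int w1 = edge(x2, y2, x0, y0, x, y);
                int w2 = edge(x0, y0, x1, y1, x, y);
                if (w0 >= 0 && w1 >= 0 && w2 >= 0)
                    buf[y][x] = true;
            }
    }

    public static void main(String[] args) {
        boolean[][] buf = new boolean[64][64];
        // This vertex order makes all three edge values positive inside;
        // the reverse winding would need the <= 0 test instead.
        fill(buf, 5, 5, 50, 10, 20, 50);
        int count = 0;
        for (boolean[] row : buf) for (boolean b : row) if (b) count++;
        System.out.println(count + " pixels filled");
    }
}
```

The pattern scales directly: once you can evaluate the three edge functions per pixel, interpolating depth or color across the triangle is the same barycentric arithmetic.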

 

And for the "scaling" - that's where linear algebra, lots of theory, and many hours come in.

http://www.arcsynthesis.org/gltut/Basics/Intro%20Graphics%20and%20Rendering.html

and

http://www.arcsynthesis.org/gltut/Positioning/Tut05%20Boundaries%20and%20Clipping.html#d0e5543

This gives an overview of the process, which should help you get started. You need to start at the beginning just like everyone else.

 

And a gamedev forum post about this subject:

http://www.gamedev.net/topic/595604-resources-for-writing-a-software-renderer/

 

A more advanced series, on perspective texture mapping:

http://chrishecker.com/Miscellaneous_Technical_Articles

As you can see, it gets a bit harder when you move on to texture mapping, which is why my advice is to keep it simple.
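The step up in difficulty is largely perspective correction: texture coordinates can't be interpolated linearly in screen space, but u/z and 1/z can, with a divide per pixel to recover u. A sketch of the difference, for one coordinate along a screen-space span (Java, illustrative names):

```java
// Affine vs. perspective-correct interpolation of a texture coordinate
// along a screen-space span; t is the linear screen-space blend factor.
public class PerspCorrect {
    // Plain linear interpolation: correct in 2D, wrong for 3D surfaces.
    static double affine(double u0, double u1, double t) {
        return u0 + (u1 - u0) * t;
    }
    // Perspective-correct: interpolate u/z and 1/z linearly in screen
    // space, then divide per pixel.
    static double correct(double u0, double z0, double u1, double z1, double t) {
        double uOverZ = affine(u0 / z0, u1 / z1, t);
        double invZ   = affine(1.0 / z0, 1.0 / z1, t);
        return uOverZ / invZ;
    }
    public static void main(String[] args) {
        // Midpoint of a span whose endpoints sit at depths 1 and 3:
        System.out.println(affine(0.0, 1.0, 0.5));            // 0.5
        System.out.println(correct(0.0, 1.0, 1.0, 3.0, 0.5)); // 0.25: near end dominates
    }
}
```

The affine version is what causes the characteristic warping on large textured polygons in old software renderers; the divide per pixel (or per small span) is the classic fix Hecker's series works through.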

 

Start with a 2D renderer. It will simplify things. Rasterize some lines.

Move on to a 3D wireframe rasterizer. Then rasterize solid triangles with a single color. Then flat-shaded materials (e.g. each triangle or side of a cube gets its own color).
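The "rasterize some lines" first step can be as small as the classic integer Bresenham algorithm (a sketch; the buffer-printing is just for eyeballing the result):

```java
// Classic integer Bresenham line rasterizer: the first building block
// of a 2D renderer, and later of the 3D wireframe stage.
public class Lines {
    static void line(boolean[][] buf, int x0, int y0, int x1, int y1) {
        int dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                 // running error term
        while (true) {
            buf[y0][x0] = true;            // plot current pixel
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
            if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
        }
    }

    public static void main(String[] args) {
        boolean[][] buf = new boolean[16][24];
        line(buf, 2, 3, 20, 15);
        for (boolean[] row : buf) {
            StringBuilder sb = new StringBuilder();
            for (boolean b : row) sb.append(b ? '#' : '.');
            System.out.println(sb);
        }
    }
}
```

Swap the boolean buffer for the pixel array from earlier and you have the wireframe stage; the solid-triangle stage then replaces line calls with triangle fills.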

Edited by Kaptein
