
OpenGL 3D algorithm


Hey everyone, it's been a while since I've programmed, but I'm back. :)

I know that rewriting existing code is usually pointless, but I don't mind rewriting some; I want to learn how it's done.
I want to make a 3D polygon and draw it in C++, without using DirectX or OpenGL.
I've been looking around for source code and tutorials but haven't found a single one that helps me.

Very simple example of what I want to do:

m_Polygon = new Polygon(vertex1, vertex2, vertex3, vertex4); // where the vertices each have an X, Y, Z
m_Polygon->Draw();


So I want to draw the 3D polygon on my 2D screen. Can someone help me with this? :/


Thank you and kind regards,
Jonathan

Look up coordinate transformations, perspective projection, and rasterization. Basically, you'll be reimplementing parts of the GPU pipeline, in the form of a software renderer. Knowledge of linear algebra helps a lot if you want to understand half of the code you're writing.

It typically goes roughly like this:
- transform each vertex to world coordinates, if that's not already done. This is useful if you want multiple instances of the same mesh without duplicating vertices: suppose you want two bunnies side by side; you could duplicate every vertex to make two separate bunnies, or you could reuse the same mesh and simply move each vertex of the second bunny off to the side (a translation)
- transform each vertex to camera coordinates (rotating the world around the camera, so that you're facing what you want to face)
- transform each of those vertices to perspective coordinates (so that vertices further away from the camera tend to the point at infinity, giving the illusion of depth)
- from this, work out where each vertex would appear on the screen, in normalized screen coordinates (ranging from -1 to +1 in the X and Y dimensions); this is the step where you project your 3D vertices onto your screen
- upscale these vertex locations to your desired resolution (e.g. 1024 x 768)
- for each pixel on the screen, work out which triangle it belongs to (depending on how you defined your triangles, e.g. a triangle list or triangle strip, or even using indices); this is the rasterization step
- shade the pixel (often by interpolating important data like normals, etc. from the three vertices of the triangle)

As you can see, it can be a lot of work, and that's a very high-level overview (I ignored the depth and stencil buffers, as well as a lot of other stuff), but it's a good learning exercise. Make sure to take it step by step so that you don't get overwhelmed! Edited by Bacterius
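
To make that concrete, here is a minimal C++ sketch of the per-vertex half of that pipeline (the Vec4/Mat4 types, the row-major layout and the column-vector convention are just assumptions for illustration, not the only way to do it):

#include <array>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>; // row-major 4x4 matrix

// Multiply a 4x4 matrix with a column vector.
Vec4 transform(const Mat4& m, const Vec4& v)
{
    return {
        m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
        m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
        m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
        m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w
    };
}

// Take a model-space position through world, view and projection,
// then map the normalized [-1, 1] coordinates to pixel coordinates.
void projectVertex(const Vec4& modelPos,
                   const Mat4& world, const Mat4& view, const Mat4& proj,
                   int screenW, int screenH, int& pixelX, int& pixelY)
{
    Vec4 p = transform(world, modelPos); // model space -> world space
    p = transform(view, p);              // world space -> camera space
    p = transform(proj, p);              // camera space -> clip space

    // Perspective divide: clip space -> normalized device coordinates.
    p.x /= p.w;  p.y /= p.w;  p.z /= p.w;

    // NDC [-1, 1] -> pixel coordinates.
    pixelX = (int)((p.x + 1.0f) * 0.5f * screenW);
    pixelY = (int)((1.0f - p.y) * 0.5f * screenH); // flip Y: screen origin is top-left
}

Rasterization and shading then work on the resulting pixel positions.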

A billion percent thank you!

In Win32 there are some standard functions, for example SetWorldTransform, SetGraphicsMode, etc., and structs like XFORM.
Should I make use of those functions and structs, or should I rather write my own?


Kind regards,
Jonathan

I remember those functions; they seem to be remnants of a past era of software rendering using GDI (or they might be hardware-accelerated and used to draw desktop controls, I'm not sure). I wouldn't use them, especially if you want to learn how transformation matrices work, since they hide all of that. All you really need is a form to display the results on and a 2D array of pixels to render to, but of course you can use them if you find they help. You can absolutely do 100% of the rendering and computations without built-in functions. You can even make your own vector and matrix classes and roll with those, and I feel this is important if you've never done it before and are interested in low-level rendering.

It's up to you how deep you want to go. If you want to build on Win32 GDI to create your software renderer, you can absolutely do that and it's still fun (though you'll have to deal with the API, and you might not find as many tutorials as you'd like). If you prefer to do everything from scratch, that's fun too, but be ready to do a lot of linear algebra!

By the way, rewriting existing code is not useless; it's important if you want to learn. Nobody would tell a beginner not to code Pong because it's already been done! It's only bad to reinvent the wheel if you want to get something done quickly, like a game. If you're doing it for fun or for experience, reinventing the wheel is the right thing to do.

EDIT: The GDI API seems to be limited to 2D, so you might not find much use for GDI beyond accelerated pixel manipulation. I could be wrong though. Edited by Bacterius
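
If you do end up keeping GDI just for presentation, a minimal (untested) sketch of that last step could look like this: the renderer writes into a plain array of pixels in system memory, and StretchDIBits blits it to the window. The buffer format and names here are assumptions for illustration only.

#include <windows.h>
#include <vector>
#include <cstdint>

// The software renderer draws into this plain array of 0x00RRGGBB pixels.
const int kWidth = 640, kHeight = 480;
std::vector<uint32_t> framebuffer(kWidth * kHeight, 0);

// Copy the system-memory framebuffer to a window using GDI.
void presentFrame(HWND hwnd)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = kWidth;
    bmi.bmiHeader.biHeight      = -kHeight; // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC hdc = GetDC(hwnd);
    StretchDIBits(hdc,
                  0, 0, kWidth, kHeight,  // destination rectangle
                  0, 0, kWidth, kHeight,  // source rectangle
                  framebuffer.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, hdc);
}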

I'm looking up some information on what GDI exactly is, and this is a quote from Wikipedia:
"Simple games that do not require fast graphics rendering use GDI. However, GDI is relatively hard to use for advanced animation, and lacks a notion for synchronizing with individual video frames in the video card, lacks hardware rasterization for 3D, etc. Modern games usually use DirectX or OpenGL instead, which let programmers exploit the features of modern hardware."

What makes DirectX such a fast graphics renderer? How do you determine the speed of rendering?
I just want to go as low-level as the programming allows. I really want to get into my head how it's done.
If GDI is limited to 2D, how do I get 3D into my program? I'm assuming I need to calculate every pixel and draw it manually; please correct me if I'm wrong.

I want to achieve real-time rendering: dragging a 3D model into your window and being able to translate, scale and rotate it in your project.
But that's not for now, first the baby steps. I can't do this alone, sadly enough; I need some help from people who have experience with this, because I can hardly find anything that helps me. Maybe I'm not looking in the right place. :/


Kind regards,
Jonathan

Your graphics card already has all the steps I highlighted in my first post implemented in hardware (not software). This means you can pump ridiculous amounts of triangles into it, shade them in complex ways with multiple textures and so on, and it'll still run in realtime. Furthermore, your graphics card is the component that's connected to your display, and since the frame that's about to be rendered already exists on the graphics card, it's very cheap to send it to the screen. On the other hand, if you're using software rendering, the frame is in system memory and must be sent to the graphics card before being displayed, which is quite expensive (CPU-to-GPU transfers, and vice versa, are quite slow).

If you want to achieve realtime rendering, that is possible to some extent with a software renderer, but you certainly will not reach the speed of a dedicated graphics card, and not with ease; the hardware is just superior in terms of performance. Unfortunately, you cannot really learn from the hardware, so if you really want to know how it's done you want to look at software rendering, which is great for learning. Just remember it'll probably be quite slow unless you use a lot of optimization tricks and keep the triangle count down.

Now it depends on your definition of "real-time" and "3D model". If you're talking about white triangles on a black screen, on a model with around 10,000 triangles, then sure, easily real-time. But if you want to draw a nice 1M-polygon model with texturing and lighting... it's going to be harder for your software renderer than for the graphics card.

This is not to discourage you - graphics cards are designed with graphics performance in mind and are meant to be faster than processors for these kinds of operations, that's why you don't see a lot of software rendering around. But to learn the inner workings of graphics cards, there's nothing better.

I would like to write a small copy of DirectX. :) How do they handle the vertices and rendering of 3D models? How should it be handled for an optimal framework/engine?
If my GPU can handle all the math, how can I render my 3D model with it? I still need to use some of the math from your first post, don't I?

I'm looking up coordinate transformations and it's quite fun. :)
And how about particles? Say I have 1 million particles; how should I best handle them? Do I need to do all the calculations and rendering on the GPU? Or are there better/faster ways?

You are already helping me, but I'm still a bit confused about how it all works with the GPU/CPU thing. I want to build an optimal render system for low-poly models with materials, shaders, lighting, bouncing and so on. I know the pipeline for graphics rendering, but I can't get any further than the first step at the moment. For example, I want to be able to load a whole environment like COD, with all the players, bullets and effects in it.


Kind regards,
Jonathan

So basically you want to write a software renderer which runs on the GPU, right?
Remember that DirectX and OpenGL can communicate with your graphics driver, and I'm afraid you as a developer won't have that luxury (if you want to call it that). This only leaves you the option of resorting to GPGPU solutions, and while it's completely possible to write a software renderer with those, you probably won't be able to beat or even come near the performance a library like DirectX or OpenGL will give you. The reason is that DirectX and OpenGL are able to use the rasterizer hardware on your GPU, while GPGPU solutions aren't.


Now the essential question here (before I ramble on) is: why do you want to write your own renderer?

If this is just for learning, I'd say do a very simple CPU-based renderer and leave the GPU out of the picture (except maybe for presenting your final result to the screen). This will teach you a lot about how the entire rasterization process works without having to worry about CPU-GPU communication and all that stuff. Try to implement each of the steps Bacterius laid out in his first post completely by yourself, from writing your own structures for managing vertices and indices to writing the systems which transform and shade them into a final texture which you can then present to the screen.
Writing a simple software renderer which can do all of this is a very rewarding and fun experience that will teach you a lot. If you implement it well you could even use it for some very basic 3D games.

If this is about writing a production-quality renderer, I'm going to be harsh and say: don't bother.
You're talking about rendering millions of particles and huge numbers of objects, so I'm guessing that this is actually what you want to do. I know reinventing the wheel can be a lot of fun sometimes, but trying to implement something like this will put you in a lot of nasty, sticky situations, and in the end you'll have a system which sub-optimally solves a problem for which we already have two fast and proven solutions (DirectX and OpenGL), if you even manage to complete your renderer at all (and I'm going to be harsh again and say that this is very unlikely, as you have no previous experience writing software renderers). Edited by Radikalizm
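
For the learning route, the "structures for managing vertices and indices" really can start out this simple (a sketch with made-up names; positions only, no normals or UVs yet):

#include <vector>
#include <cstdint>

// A vertex for a very simple software renderer: just a position for now.
// Normals, UV coordinates and colors can be added later as extra members.
struct Vertex
{
    float x, y, z;
};

// A mesh is a vertex buffer plus an index buffer.
// Every 3 indices reference the corners of one triangle, so vertices
// shared by neighbouring triangles are stored only once.
struct Mesh
{
    std::vector<Vertex>   vertices;
    std::vector<uint32_t> indices;
};

// Example: a single triangle.
Mesh makeTriangle()
{
    Mesh m;
    m.vertices = { { -1.0f, -1.0f, 0.0f },
                   {  1.0f, -1.0f, 0.0f },
                   {  0.0f,  1.0f, 0.0f } };
    m.indices  = { 0, 1, 2 };
    return m;
}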

If you want to do simple 3D lines and such, an easy way to get going is to try simple perspective. This is what I played around with before I got into matrix math. You will probably get into matrices at some point, but if you just want something basic, you can start with this.

3d without rotation or translation:

A single perspective transform is as easy as this:

//sw is screen width, sh is screen height
//zcutoff is the z-plane cutoff: don't render anything closer to or behind it. Why? z = 0 is forbidden,
//z > 1 means the point is in front of the camera, z = -1 means it is behind the camera.
//A zcutoff of .1, .01, etc. is reasonable.

int perspective(float x, float y, float z, int* sx, int* sy)
{
    if (z < zcutoff)
        return 0; //refuse to transform

    *sx = (int)(x * sw / z); //scale X by screen width and distance.
                             //So, an object sw pixels wide at Z=1 is the width of the screen.
                             //Z=2, half the width of the screen. Z=.5, twice the width of the screen. etc.
    *sy = (int)(y * sh / z);
    return 1;
}


To draw a line in 3D:

void line3d(float x, float y, float z, float x2, float y2, float z2)
{
    int sx, sy, sx2, sy2;
    //only draws if both points are in front of the camera
    //later, if you want to get fancy: if one is in front and the other behind, clip at z = zcutoff

    if (perspective(x, y, z, &sx, &sy) && perspective(x2, y2, z2, &sx2, &sy2))
        draw_line(sx, sy, sx2, sy2);
}


With the above snippets, you should be able to draw a 3d perspective object from the point of view of the origin.

To move the camera around, just subtract the camera position from the coordinates:


int perspective(float x, float y, float z, int* sx, int* sy)
{
    //translate to camera position
    x -= camera_x;
    y -= camera_y;
    z -= camera_z;

    if (z < zcutoff)
        return 0; //refuse to transform

    *sx = (int)(x * sw / z); //scale X by screen width and distance, as before
    *sy = (int)(y * sh / z);
    return 1;
}


With that, you should be able to move around the 3D environment, but the view is constrained to always looking down the Z axis. But it's a start.

The last thing you can do is allow camera rotation about the Y axis (like Wolf3D). It's been a while, but if http://www.siggraph....tran/2drota.htm is correct, then:


int perspective(float x, float y, float z, int* sx, int* sy)
{
    float xr, yr, zr;

    //translate to camera position
    x -= camera_x;
    y -= camera_y;
    z -= camera_z;

    //rotate 2D about the Y axis:
    xr = x * cos(camera_angle) - z * sin(camera_angle);
    zr = z * cos(camera_angle) + x * sin(camera_angle);
    yr = y; // height does not change

    if (zr < zcutoff)
        return 0; //refuse to transform

    *sx = (int)(xr * sw / zr); //scale X by screen width and distance, as before
    *sy = (int)(yr * sh / zr);
    return 1;
}



That should give you 5 degrees of freedom: you can move up/down, left/right, forward/back and rotate about Y. So, it's 'DOOM' controls. You can add another rotation to look up/down, but at that point you should consider trying to understand matrices. Edited by DracoLacertae
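
As a quick test of the snippets above, you could draw a wireframe cube with nothing but line3d (this assumes the perspective/line3d functions and a draw_line from earlier; the cube size and placement are arbitrary):

// Draws the 12 edges of an axis-aligned cube centered at (cx, cy, cz).
void cube3d(float cx, float cy, float cz, float halfSize)
{
    float s = halfSize;
    // The 8 corners of the cube.
    float c[8][3] = {
        { cx - s, cy - s, cz - s }, { cx + s, cy - s, cz - s },
        { cx + s, cy + s, cz - s }, { cx - s, cy + s, cz - s },
        { cx - s, cy - s, cz + s }, { cx + s, cy - s, cz + s },
        { cx + s, cy + s, cz + s }, { cx - s, cy + s, cz + s }
    };
    // Pairs of corner indices that form the 12 edges.
    int e[12][2] = { {0,1},{1,2},{2,3},{3,0},   // near face
                     {4,5},{5,6},{6,7},{7,4},   // far face
                     {0,4},{1,5},{2,6},{3,7} }; // connecting edges
    for (int i = 0; i < 12; ++i)
        line3d(c[e[i][0]][0], c[e[i][0]][1], c[e[i][0]][2],
               c[e[i][1]][0], c[e[i][1]][1], c[e[i][1]][2]);
}

// e.g. cube3d(0.0f, 0.0f, 5.0f, 1.0f); draws a cube 5 units in front of the camera.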

I wrote a software renderer to the OpenGL 1 Spec last year, compiled it and linked it to QuakeGL. It worked pretty well.

This is the book that made software rendering click for me: http://www.amazon.com/Building-3D-Game-Engine-C/dp/0471123269/ref=sr_1_6?ie=UTF8&qid=1354824709&sr=8-6&keywords=3d+game+engine+programming+C%2B%2B

It's pretty concise and straightforward. It's not one of those books that you want to copy the code out of verbatim, due to its age; you really need to read it and understand it. This is what I had after a few weeks with the book:

That's pretty cool. Writing a software renderer can be fun. I wrote my first 3D programs in DOS QBasic using the simple perspective and rotation transforms I posted above, just lines and such; I made little wireframe cube mazes to walk through. Later I moved on to Borland C and messed around with filling triangles with solid colors, and tried some texturing, which I could get working on the walls but never properly on the floors. I made the jump to OpenGL and had to learn matrices. I did eventually play around with software rendering again, and figured out the floors, but it was so much slower than hardware.

I resisted matrices for a while because they didn't make sense to me: 4x4 for 3D? Four dimensions? When I first started using matrices, I started with translation plus 3x3 matrices, and a lot of my code still uses 3x3 matrices to track rotations. They make sense for 3D: for each axis of the rotated frame, there are 3 components in 'world' coordinates that make up that axis vector. Now add in a translation, and you have a 3x3 matrix plus an awkward extra 'add' for the translation. But it just so happens that if you put that 3x3 matrix inside a 4x4 matrix, put the translation on the side, and pad out the rest with 0s and 1s, a 4x4 matrix multiply does the same thing as the 3x3 multiply followed by adding the translation. It's just a really cheap trick. Now you get your CPU or GPU to do 4x4 matrix multiplies really fast, and all your transform operations can be done by one heavily optimized math routine.
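
In code, that cheap trick looks something like this (a sketch; this uses the column-vector convention with the translation in the last column, other conventions transpose it):

// Build a 4x4 homogeneous matrix from a 3x3 rotation r and a translation t,
// so that a single 4x4 multiply does "rotate, then translate":
//
//   | r00 r01 r02 tx |   | x |     |           |
//   | r10 r11 r12 ty | * | y |  =  |  R*p + t  |
//   | r20 r21 r22 tz |   | z |     |           |
//   |  0   0   0   1 |   | 1 |     |     1     |
//
void buildTransform(const float r[3][3], const float t[3], float out[4][4])
{
    for (int row = 0; row < 3; ++row) {
        for (int col = 0; col < 3; ++col)
            out[row][col] = r[row][col]; // copy the rotation block
        out[row][3] = t[row];            // translation in the last column
    }
    out[3][0] = out[3][1] = out[3][2] = 0.0f;
    out[3][3] = 1.0f;                    // padding row
}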

Thanks everyone!

Radikalizm, I'm not trying to make a commercial software renderer :D DirectX and OpenGL can't be beaten, so I won't try to.
It's all for learning purposes. You say that I first need to work with the CPU; how do I do that? How can I choose which one I use to get my project running? I know the difference between the two, though: a CPU only does one thing at a time while a GPU does multiple things at the same time. But I have no idea when I'm using the CPU or the GPU.

DracoLacertae, thanks for the examples! :) So drawing lines isn't that hard to do in 3D :D Before I get to it, I bought a math book which covers all the mathematics relevant to game programming. So I hope to get a line or a 3D polygon on my screen today!

uglybdavis, nicely done! :)

If you're not sure, you're definitely using a CPU :)

You only "use" the GPU (at least the current generation) by writing little programs called shaders (in HLSL or GLSL) which perform very specific tasks (e.g. transforming vertices, or shading pixels). All the rest is done by the driver automatically. You don't actually write complete software on it. Everything that you code in C++ or C# or Java or whatever language, really, runs on your CPU.

Damn, I had it mixed up then. Thank you for clearing this up! :)

That's why they introduced technologies like C++ AMP and CUDA, which are basically extensions on top of C++, so that you don't have to bother with DirectX and OpenGL and their shaders. With these "extensions" you can write your whole program in C++ and launch parallel kernels which are executed on the GPU.

But just start with a standard programming language running solely on the CPU, and get your software rasterizer working there before you bother using the GPU to increase performance. Edited by CryZe

Is Direct3D still a separate set of API calls, or is it all part of the same interface to DirectX? I would think getting a polygon on the screen in DirectX would be fairly simple.

I do recognize "SetGraphicsMode" as possibly a WinG call from back when keyboards were made with dinosaur bones. :D


"I know the difference between the two, though: a CPU only does one thing at a time while a GPU does multiple things at the same time."


A CPU can do multiple things at a time when using multiple cores (and it can 'fake' doing multiple things on a single core), but it does indeed do this differently than a GPU :)
Modern GPUs are designed around wide SIMD execution (doing the same operation on a large set of data in parallel), hence their ability to do extremely fast floating-point work compared to CPUs. Modern CPUs have SIMD instructions too (like SSE on x86 architectures), but these operate on smaller sets of data (mostly 128 bits).
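
As a tiny, stand-alone illustration of CPU-side SIMD (SSE intrinsics on x86; just a sketch, nothing renderer-specific), here are four float additions done in one instruction:

#include <xmmintrin.h> // SSE intrinsics

// Adds two arrays of 4 floats each using a single 128-bit SIMD addition.
void addFloat4(const float a[4], const float b[4], float out[4])
{
    __m128 va = _mm_loadu_ps(a);    // load 4 floats (unaligned load)
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(va, vb); // one instruction, 4 additions
    _mm_storeu_ps(out, vr);
}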

Keep us posted on your progress by the way, I'm always very interested in these kinds of projects.

I will gladly post my progress, but it will take a while; today I realized I'd better learn matrices in 2D first, and when I understand everything I'll move on to 3D.

I'm making my own matrix class, which will be backed by the SetWorldTransform function. That function works with XFORM, but I'm not using XFORM for the math itself; it's better to write the calculations myself so I can learn from them. Once everything is calculated I just put the result into an XFORM and transform the world. When I understand the matrix and transform concepts I'll move on to 3D. :)

Or is there a better way to transform my objects (rectangle, bitmap, etc.)?
I know matrices need to be used to transform such an object, but I mean, is there something else to transform my world with?

You might want to consider decoupling your renderer from systems such as GDI and using them purely as a back-end to present your final image; this will give you a cleaner system which is somewhat future-proof, as GDI can be considered old by now.
You could eventually use a system like D2D as a back-end to present your image to the screen as well, without having to make any changes to your renderer.

Have you ever written shaders before? If you haven't, I'd advise trying some simple shaders in DirectX or OpenGL to get a feel for transformations and rasterization in general. They're a great instant-gratification way to learn about all of this stuff, and they will probably make it easier for you to understand the steps you should take for a software rasterizer.
With a knowledge of shaders, the concept of how to build systems for setting your own transformation matrices for your renderer becomes a piece of cake as well.

I would, however, be careful about writing your own matrix and vector implementations. They can be a great learning experience, but it's very easy to make small errors here and there in operations like matrix-matrix or matrix-vector multiplication, and those errors can introduce huge bugs in your renderer.
So if you intend to take the DIY route when it comes to math, be sure to write a ton of test cases before you actually start working with it; you want to make sure your math library works completely as intended.
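
For example, two properties that any pure rotation matrix must satisfy can be checked with a few asserts (a self-contained sketch; the function and its arguments are made up for illustration):

#include <cassert>
#include <cmath>

// Two quick invariants of a 2D rotation matrix:
//  1. its columns are unit length and perpendicular (R * R^T = I)
//  2. its determinant is +1 (no reflection, no scaling)
// Checks like these catch sign mistakes in the sin/cos terms early.
void checkRotationMatrix(float m00, float m01, float m10, float m11)
{
    const float eps = 1e-5f;
    assert(std::fabs(m00 * m00 + m10 * m10 - 1.0f) < eps); // column 0 is unit length
    assert(std::fabs(m01 * m01 + m11 * m11 - 1.0f) < eps); // column 1 is unit length
    assert(std::fabs(m00 * m01 + m10 * m11) < eps);        // columns are perpendicular
    assert(std::fabs(m00 * m11 - m01 * m10 - 1.0f) < eps); // determinant is +1
}

int main()
{
    float a = 1.2345f; // arbitrary angle
    checkRotationMatrix(std::cos(a), -std::sin(a),
                        std::sin(a),  std::cos(a));
    return 0;
}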

Thanks for the help! I made a color shader last year, but I didn't understand a single line of the code I was writing. Shaders are quite hard for me to understand.

How does DirectX handle its transformations?
I just made a matrix class that can translate, scale and rotate. I tested it and it works.
Don't mind how I use the methods; it's just a testing project.

Matrix matTranslate, matRotate, matScale, matWorld;
matTranslate.SetAsTranslate(-0.7f, -0.4f);
matRotate.SetAsRotate(80.0);
matScale.SetAsScale(0.5,0.5);
matWorld = matRotate * matScale * matTranslate;
Matrix::SetAsWorld(hDC, matWorld);

Rectangle(hDC, -1, -1, 1, 1);


This is what I use for my transformations:

Matrix::Matrix():
eM11(0.0f), eM12(0.0f), eM13(0.0f),
eM21(0.0f), eM22(0.0f), eM23(0.0f),
eM31(0.0f), eM32(0.0f), eM33(0.0f)
{
}

void Matrix::SetAsTranslate(float x, float y)
{
eM11 = 1.0f;
eM12 = 0.0f;
eM13 = 0.0f;
eM21 = 0.0f;
eM22 = 1.0f;
eM23 = 0.0f;
eM31 = x;
eM32 = y;
eM33 = 1.0f;
}

void Matrix::SetAsRotate(float radians)
{
eM11 = (float)cos(radians);
eM12 = (float)sin(radians);
eM13 = 0.0f;
eM21 = (float)-sin(radians);
eM22 = (float)cos(radians);
eM23 = 0.0f;
eM31 = 0.0f;
eM32 = 0.0f;
eM33 = 1.0f;
}

void Matrix::SetAsRotate(double degrees)
{
float radians = (float)(degrees/180 * M_PI);
eM11 = (float)cos(radians);
eM12 = (float)sin(radians);
eM13 = 0.0f;
eM21 = (float)-sin(radians);
eM22 = (float)cos(radians);
eM23 = 0.0f;
eM31 = 0.0f;
eM32 = 0.0f;
eM33 = 1.0f;
}

void Matrix::SetAsScale(float x, float y)
{
eM11 = x;
eM12 = 0.0f;
eM13 = 0.0f;
eM21 = 0.0f;
eM22 = y;
eM23 = 0.0f;
eM31 = 0.0f;
eM32 = 0.0f;
eM33 = 1.0f;
}

Matrix operator*(const Matrix& ref1, const Matrix& ref2)
{
Matrix mat;
mat.eM11 = ref1.eM11 * ref2.eM11 + ref1.eM12 * ref2.eM21 + ref1.eM13 * ref2.eM31;
mat.eM12 = ref1.eM11 * ref2.eM12 + ref1.eM12 * ref2.eM22 + ref1.eM13 * ref2.eM32;
mat.eM13 = ref1.eM11 * ref2.eM13 + ref1.eM12 * ref2.eM23 + ref1.eM13 * ref2.eM33;
mat.eM21 = ref1.eM21 * ref2.eM11 + ref1.eM22 * ref2.eM21 + ref1.eM23 * ref2.eM31;
mat.eM22 = ref1.eM21 * ref2.eM12 + ref1.eM22 * ref2.eM22 + ref1.eM23 * ref2.eM32;
mat.eM23 = ref1.eM21 * ref2.eM13 + ref1.eM22 * ref2.eM23 + ref1.eM23 * ref2.eM33;
mat.eM31 = ref1.eM31 * ref2.eM11 + ref1.eM32 * ref2.eM21 + ref1.eM33 * ref2.eM31;
mat.eM32 = ref1.eM31 * ref2.eM12 + ref1.eM32 * ref2.eM22 + ref1.eM33 * ref2.eM32;
mat.eM33 = ref1.eM31 * ref2.eM13 + ref1.eM32 * ref2.eM23 + ref1.eM33 * ref2.eM33;

return mat;
}

void Matrix::SetAsWorld(HDC hDC, const Matrix& mat)
{
XFORM form;
form.eM11 = mat.eM11;
form.eM12 = mat.eM12;
form.eM21 = mat.eM21;
form.eM22 = mat.eM22;
form.eDx = mat.eM31;
form.eDy = mat.eM32;

SetWorldTransform(hDC, &form);
}


I wrote the calculations out on paper first, with help from a book, so I didn't copy anything from the internet. I'm getting the hang of matrices very fast with this book; the next chapter is linear transformations! :D

But indeed, I'm using GDI, and it's getting pretty dated. I really don't want to use a single external library, because then it's too "easy". :D I'm not trying to be stubborn. Any suggestions for how to avoid the classic functions? Does DirectX also build on such classic functions, or does it work at a very low level and do all of its rendering itself?
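
For reference, applying such a 3x3 matrix to a point directly (instead of handing it to SetWorldTransform) is just two dot products. A sketch, assuming the member names of the Matrix class above are accessible:

// Transforms a 2D point by a 3x3 matrix laid out like the Matrix class above
// (row-vector convention, [x y 1] * M, so the translation lives in eM31/eM32).
void TransformPoint(const Matrix& m, float x, float y, float& outX, float& outY)
{
    outX = x * m.eM11 + y * m.eM21 + m.eM31;
    outY = x * m.eM12 + y * m.eM22 + m.eM32;
}

// Usage idea: transform the four corners of a rectangle yourself, then draw
// plain lines between them (MoveToEx/LineTo), and SetWorldTransform is no longer needed.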

Take your mind off of GDI for now; completely forget it exists, as you won't be needing it if you want to build your renderer from the ground up (unless you want to use it to present your final image to a window).

Let's imagine this situation:
You have a 3D model you want to render; let's make it a cube to keep things simple. Let's assume your model is made up entirely of vertices (we're leaving indices out for the time being), and to make things even simpler we'll say each vertex is just a plain old position (a 3D vector); we're not going to worry about things such as normals, UV coords, colors, etc.

Now let's say you want to render this model at a certain position with a certain rotation and a certain scale. You also want all the pixels occupied by this model on screen to have a certain color; let's take red, for example.

Let's have a look at the requirements to realize all of this. You'll need:

  1. A data structure which can contain your model's data. In our simple setting this can be a plain old array of vertices, and as mentioned our vertices are just positions right now. We'll assume each consecutive group of 3 vertices makes up a triangle.
  2. A way of representing where you want your model to end up relative to the world origin, and which rotation and scale it should have. This is your world transformation, as you have probably already figured out; it is a single matrix containing all 3 of these aspects at once.
  3. A way of representing where you are in the world and how you're looking at it. This is usually abstracted away behind the concept of a camera: a data structure which holds its own world position, a look-at vector and a vector defining which way is up. These 3 vectors are used to build our second important transformation matrix, the view transformation.
  4. A way of projecting what we 'see' through our camera onto our final image. Projections can be done in a lot of ways to achieve different results, but what you probably want is a perspective projection, defined mostly by a field of view (FoV) and an aspect ratio. This information is stored in our final important transformation matrix, the projection transformation, which maps a 3D position to a 2D position where each component ranges over [-1, 1].
  5. A way of storing our final image in full color. This can be done by creating a texture data structure, which is basically just a 2D array of values with a resolution of your choosing (this will be the resolution of your final image). How wide these values are, and how many of them define a single color element, is determined by the color format you use. For simple applications the R8G8B8 format, which defines 3 color channels per element (red, green and blue) with 8 bits (1 byte) per channel, will do just fine. You'll create such a texture to act as the screen buffer you render to.


OK, so now we've defined our requirements; the only thing left is an overview of how we're going to use them to get from our model to our final image.
I'll give you a simplified overview of what you should do:

  • Tell our renderer that we want to render to our screen buffer (see #5 in the previous overview). The renderer could have created this screen buffer itself, or you could create it yourself and pass it in (e.g. renderer->setRenderTarget(some_texture)).
  • We now want the screen coordinates for each vertex in our model; this happens in a few steps. Some pseudo-code to explain the process:

    for each vertex in our model do
        position = transform(vertex_position, world_transformation) // This transforms our vertex from local space to world space
        position = transform(position, view_transformation)         // This transforms our world space position to a view space (camera or eye space) position
        position = transform(position, projection_transformation)   // This transforms our view space position to a screen space position

    • This gives us a bunch of positions, of which we only want the first 2 components (X and Y) for our simple setup. As mentioned, X and Y will both be in the range [-1, 1], but this won't do if we want to determine which pixels to write to. To fix this we apply 2 simple transformations: the first remaps the range from [-1, 1] to [0, 1] using the formula n = (n + 1) * 0.5; the second scales the [0, 1] range up to our chosen screen buffer resolution, which is just a multiplication of the X and Y components by the screen buffer width and height respectively.
    • We now have a bunch of screen coordinates which map directly to pixels in our screen buffer, which means we can set colors for the pixels we want to write to. We assumed our vertices are ordered so that they make up triangles, so for each group of 3 positions we first form a triangle. Once we have this triangle we only need to walk over its surface to determine which pixels it covers. Remember that we just wanted to color everything red, so for every pixel covered by our triangle we set the red channel to 255 (we're working in R8G8B8!) in our screen buffer.
    • Once you've done this for every group of 3 transformed vertices, your screen buffer will contain the projected image of your cube. The only thing left to do is present it to the screen, which is where a library like GDI or D2D can come into play.


That about covers it, I think. It could be that I left out a few details or made some errors, but please don't shoot me for that.

EDIT:

I want to mention a few things I left out which weren't needed for such an extremely simple example, but which will play a major part once you get further into your renderer. To name a few:
- Usage of a Z-buffer for depth testing (really important for ordering and avoiding overdraw when rendering multiple objects)
- Indices (all kinds of uses, from determining triangle winding order to reducing vertex buffer footprints)
- Materials, lighting, texturing and all that stuff
- Culling and clipping
- Probably a million more things which I can't immediately think of right now Edited by Radikalizm
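
To tie the steps above together, here is a compact and deliberately naive sketch of the last two steps: mapping projected positions to pixels and filling a triangle with red using edge functions (all names and the buffer layout are illustrative, not a finished renderer):

#include <vector>
#include <cstdint>
#include <algorithm>

const int W = 640, H = 480;
std::vector<uint32_t> screen(W * H, 0); // packed as 0x00RRGGBB

struct Vec2 { float x, y; };

// Map a projected position from [-1, 1] to pixel coordinates.
Vec2 toScreen(float ndcX, float ndcY)
{
    return { (ndcX + 1.0f) * 0.5f * W,
             (1.0f - ndcY) * 0.5f * H }; // flip Y so +Y points up on screen
}

// Signed area of the parallelogram spanned by (a->b) and (a->c);
// its sign tells on which side of the edge a->b the point c lies.
float edge(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Fill one triangle with solid red by testing every pixel in its bounding box.
void fillTriangleRed(Vec2 v0, Vec2 v1, Vec2 v2)
{
    int minX = std::max(0,     (int)std::min({ v0.x, v1.x, v2.x }));
    int maxX = std::min(W - 1, (int)std::max({ v0.x, v1.x, v2.x }));
    int minY = std::max(0,     (int)std::min({ v0.y, v1.y, v2.y }));
    int maxY = std::min(H - 1, (int)std::max({ v0.y, v1.y, v2.y }));

    float area = edge(v0, v1, v2);
    if (area == 0.0f) return; // degenerate triangle

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            Vec2 p = { x + 0.5f, y + 0.5f }; // sample at the pixel center
            float w0 = edge(v1, v2, p);
            float w1 = edge(v2, v0, p);
            float w2 = edge(v0, v1, p);
            // Inside if all edge functions have the same sign as the triangle area.
            if ((area > 0 && w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (area < 0 && w0 <= 0 && w1 <= 0 && w2 <= 0))
                screen[y * W + x] = 0x00FF0000; // red
        }
}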

    • By EddieK
      Hello. I'm trying to make an android game and I have come across a problem. I want to draw different map layers at different Z depths so that some of the tiles are drawn above the player while others are drawn under him. But there's an issue where the pixels with alpha drawn above the player. This is the code i'm using:
      int setup(){ GLES20.glEnable(GLES20.GL_DEPTH_TEST); GLES20.glEnable(GL10.GL_ALPHA_TEST); GLES20.glEnable(GLES20.GL_TEXTURE_2D); } int render(){ GLES20.glClearColor(0, 0, 0, 0); GLES20.glClear(GLES20.GL_ALPHA_BITS); GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT); GLES20.glBlendFunc(GLES20.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA); // do the binding of textures and drawing vertices } My vertex shader:
      uniform mat4 MVPMatrix; // model-view-projection matrix uniform mat4 projectionMatrix; attribute vec4 position; attribute vec2 textureCoords; attribute vec4 color; attribute vec3 normal; varying vec4 outColor; varying vec2 outTexCoords; varying vec3 outNormal; void main() { outNormal = normal; outTexCoords = textureCoords; outColor = color; gl_Position = MVPMatrix * position; } My fragment shader:
      precision highp float; uniform sampler2D texture; varying vec4 outColor; varying vec2 outTexCoords; varying vec3 outNormal; void main() { vec4 color = texture2D(texture, outTexCoords) * outColor; gl_FragColor = vec4(color.r,color.g,color.b,color.a);//color.a); } I have attached a picture of how it looks. You can see the black squares near the tree. These squares should be transparent as they are in the png image:

      Its strange that in this picture instead of alpha or just black color it displays the grass texture beneath the player and the tree:

      Any ideas on how to fix this?
       
      Thanks in advance
       
       
    • By DiligentDev
      This article uses material originally posted on Diligent Graphics web site.
      Introduction
      Graphics APIs have come a long way from small set of basic commands allowing limited control of configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. Next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered industry standard. New APIs can provide substantial performance and functional improvements, but may not be supported by older hardware. An application targeting wide range of platforms needs to support Direct3D11 and OpenGL. New APIs will not give any advantage when used with old paradigms. It is totally possible to add Direct3D12 support to an existing renderer by implementing Direct3D11 interface through Direct3D12, but this will give zero benefits. Instead, new approaches and rendering architectures that leverage flexibility provided by the next-generation APIs are expected to be developed.
      There are at least four APIs (Direct3D11, Direct3D12, OpenGL/GLES, Vulkan, plus Apple's Metal for iOS and osX platforms) that a cross-platform 3D application may need to support. Writing separate code paths for all APIs is clearly not an option for any real-world application and the need for a cross-platform graphics abstraction layer is evident. The following is the list of requirements that I believe such layer needs to satisfy:
      Lightweight abstractions: the API should be as close to the underlying native APIs as possible to allow an application leverage all available low-level functionality. In many cases this requirement is difficult to achieve because specific features exposed by different APIs may vary considerably. Low performance overhead: the abstraction layer needs to be efficient from performance point of view. If it introduces considerable amount of overhead, there is no point in using it. Convenience: the API needs to be convenient to use. It needs to assist developers in achieving their goals not limiting their control of the graphics hardware. Multithreading: ability to efficiently parallelize work is in the core of Direct3D12 and Vulkan and one of the main selling points of the new APIs. Support for multithreading in a cross-platform layer is a must. Extensibility: no matter how well the API is designed, it still introduces some level of abstraction. In some cases the most efficient way to implement certain functionality is to directly use native API. The abstraction layer needs to provide seamless interoperability with the underlying native APIs to provide a way for the app to add features that may be missing. Diligent Engine is designed to solve these problems. Its main goal is to take advantages of the next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes common C++ front-end for all supported platforms and provides interoperability with underlying native APIs. It also supports integration with Unity and is designed to be used as graphics subsystem in a standalone game engine, Unity native plugin or any other 3D application. Full source code is available for download at GitHub and is free to use.
      Overview
      Diligent Engine API takes some features from Direct3D11 and Direct3D12 as well as introduces new concepts to hide certain platform-specific details and make the system easy to use. It contains the following main components:
      Render device (IRenderDevice  interface) is responsible for creating all other objects (textures, buffers, shaders, pipeline states, etc.).
      Device context (IDeviceContext interface) is the main interface for recording rendering commands. Similar to Direct3D11, there are immediate context and deferred contexts (which in Direct3D11 implementation map directly to the corresponding context types). Immediate context combines command queue and command list recording functionality. It records commands and submits the command list for execution when it contains sufficient number of commands. Deferred contexts are designed to only record command lists that can be submitted for execution through the immediate context.
      An alternative way to design the API would be to expose command queue and command lists directly. This approach however does not map well to Direct3D11 and OpenGL. Besides, some functionality (such as dynamic descriptor allocation) can be much more efficiently implemented when it is known that a command list is recorded by a certain deferred context from some thread.
      The approach taken in the engine does not limit scalability as the application is expected to create one deferred context per thread, and internally every deferred context records a command list in lock-free fashion. At the same time this approach maps well to older APIs.
      In current implementation, only one immediate context that uses default graphics command queue is created. To support multiple GPUs or multiple command queue types (compute, copy, etc.), it is natural to have one immediate contexts per queue. Cross-context synchronization utilities will be necessary.
      Swap Chain (ISwapChain interface). Swap chain interface represents a chain of back buffers and is responsible for showing the final rendered image on the screen.
      Render device, device contexts and swap chain are created during the engine initialization.
      Resources (ITexture and IBuffer interfaces). There are two types of resources - textures and buffers. There are many different texture types (2D textures, 3D textures, texture array, cubmepas, etc.) that can all be represented by ITexture interface.
      Resources Views (ITextureView and IBufferView interfaces). While textures and buffers are mere data containers, texture views and buffer views describe how the data should be interpreted. For instance, a 2D texture can be used as a render target for rendering commands or as a shader resource.
      Pipeline State (IPipelineState interface). GPU pipeline contains many configurable stages (depth-stencil, rasterizer and blend states, different shader stage, etc.). Direct3D11 uses coarse-grain objects to set all stage parameters at once (for instance, a rasterizer object encompasses all rasterizer attributes), while OpenGL contains myriad functions to fine-grain control every individual attribute of every stage. Both methods do not map very well to modern graphics hardware that combines all states into one monolithic state under the hood. Direct3D12 directly exposes pipeline state object in the API, and Diligent Engine uses the same approach.
      Shader Resource Binding (IShaderResourceBinding interface). Shaders are programs that run on the GPU. Shaders may access various resources (textures and buffers), and setting correspondence between shader variables and actual resources is called resource binding. Resource binding implementation varies considerably between different API. Diligent Engine introduces a new object called shader resource binding that encompasses all resources needed by all shaders in a certain pipeline state.
      API Basics
      Creating Resources
      Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. Graphics APIs usually have a native object that represents linear buffer. Diligent Engine uses IBuffer interface as an abstraction for a native buffer. To create a buffer, one needs to populate BufferDesc structure and call IRenderDevice::CreateBuffer() method as in the following example:
      BufferDesc BuffDesc; BufferDesc.Name = "Uniform buffer"; BuffDesc.BindFlags = BIND_UNIFORM_BUFFER; BuffDesc.Usage = USAGE_DYNAMIC; BuffDesc.uiSizeInBytes = sizeof(ShaderConstants); BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE; m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer ); While there is usually just one buffer object, different APIs use very different approaches to represent textures. For instance, in Direct3D11, there are ID3D11Texture1D, ID3D11Texture2D, and ID3D11Texture3D objects. In OpenGL, there is individual object for every texture dimension (1D, 2D, 3D, Cube), which may be a texture array, which may also be multisampled (i.e. GL_TEXTURE_2D_MULTISAMPLE_ARRAY). As a result there are nine different GL texture types that Diligent Engine may create under the hood. In Direct3D12, there is only one resource interface. Diligent Engine hides all these details in ITexture interface. There is only one  IRenderDevice::CreateTexture() method that is capable of creating all texture types. Dimension, format, array size and all other parameters are specified by the members of the TextureDesc structure:
      TextureDesc TexDesc; TexDesc.Name = "My texture 2D"; TexDesc.Type = TEXTURE_TYPE_2D; TexDesc.Width = 1024; TexDesc.Height = 1024; TexDesc.Format = TEX_FORMAT_RGBA8_UNORM; TexDesc.Usage = USAGE_DEFAULT; TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS; TexDesc.Name = "Sample 2D Texture"; m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex ); If native API supports multithreaded resource creation, textures and buffers can be created by multiple threads simultaneously.
      Interoperability with native API provides access to the native buffer/texture objects and also allows creating Diligent Engine objects from native handles. It allows applications seamlessly integrate native API-specific code with Diligent Engine.
      Next-generation APIs allow fine level-control over how resources are allocated. Diligent Engine does not currently expose this functionality, but it can be added by implementing IResourceAllocator interface that encapsulates specifics of resource allocation and providing this interface to CreateBuffer() or CreateTexture() methods. If null is provided, default allocator should be used.
      Initializing the Pipeline State
      As it was mentioned earlier, Diligent Engine follows next-gen APIs to configure the graphics/compute pipeline. One big Pipelines State Object (PSO) encompasses all required states (all shader stages, input layout description, depth stencil, rasterizer and blend state descriptions etc.). This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs as it eliminates pipeline misconfiguration errors. With many individual calls tweaking various GPU pipeline settings it is very easy to forget to set one of the states or assume the stage is already properly configured when in fact it is not. Using pipeline state object helps avoid these problems as all stages are configured at once.
      Creating Shaders
      While in earlier APIs shaders were bound separately, in the next-generation APIs as well as in Diligent Engine shaders are part of the pipeline state object. The biggest challenge when authoring shaders is that Direct3D and OpenGL/Vulkan use different shader languages (while Apple uses yet another language in their Metal API). Maintaining two versions of every shader is not an option for real applications and Diligent Engine implements shader source code converter that allows shaders authored in HLSL to be translated to GLSL. To create a shader, one needs to populate ShaderCreationAttribs structure. SourceLanguage member of this structure tells the system which language the shader is authored in:
      SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source language matches the underlying graphics API: HLSL for Direct3D11/Direct3D12 mode, and GLSL for OpenGL and OpenGLES modes. SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter, so this value should only be used for OpenGL and OpenGLES modes. There are two ways to provide the shader source code. The first way is to use Source member. The second way is to provide a file path in FilePath member. Since the engine is entirely decoupled from the platform and the host file system is platform-dependent, the structure exposes pShaderSourceStreamFactory member that is intended to provide the engine access to the file system. If FilePath is provided, shader source factory must also be provided. If the shader source contains any #include directives, the source stream factory will also be used to load these files. The engine provides default implementation for every supported platform that should be sufficient in most cases. Custom implementation can be provided when needed.
      When sampling a texture in a shader, the texture sampler was traditionally specified as separate object that was bound to the pipeline at run time or set as part of the texture object itself. However, in most cases it is known beforehand what kind of sampler will be used in the shader. Next-generation APIs expose new type of sampler called static sampler that can be initialized directly in the pipeline state. Diligent Engine exposes this functionality: when creating a shader, textures can be assigned static samplers. If static sampler is assigned, it will always be used instead of the one initialized in the texture shader resource view. To initialize static samplers, prepare an array of StaticSamplerDesc structures and initialize StaticSamplers and NumStaticSamplers members. Static samplers are more efficient and it is highly recommended to use them whenever possible. On older APIs, static samplers are emulated via generic sampler objects.
      The following is an example of shader initialization:
      ShaderCreationAttribs Attrs; Attrs.Desc.Name = "MyPixelShader"; Attrs.FilePath = "MyShaderFile.fx"; Attrs.SearchDirectories = "shaders;shaders\\inc;"; Attrs.EntryPoint = "MyPixelShader"; Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL; Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL; BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories); Attrs.pShaderSourceStreamFactory = &BasicSSSFactory; ShaderVariableDesc ShaderVars[] = {     {"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC},     {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},     {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC} }; Attrs.Desc.VariableDesc = ShaderVars; Attrs.Desc.NumVariables = _countof(ShaderVars); Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC; StaticSamplerDesc StaticSampler; StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR; StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR; StaticSampler.TextureName = "g_MutableTexture"; Attrs.Desc.NumStaticSamplers = 1; Attrs.Desc.StaticSamplers = &StaticSampler; ShaderMacroHelper Macros; Macros.AddShaderMacro("USE_SHADOWS", 1); Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4); Macros.Finalize(); Attrs.Macros = Macros; RefCntAutoPtr<IShader> pShader; m_pDevice->CreateShader( Attrs, &pShader );
      Creating the Pipeline State Object
      After all required shaders are created, the rest of the fields of the PipelineStateDesc structure provide depth-stencil, rasterizer, and blend state descriptions, the number and format of render targets, input layout format, etc. For instance, rasterizer state can be described as follows:
      PipelineStateDesc PSODesc; RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc; RasterizerDesc.FillMode = FILL_MODE_SOLID; RasterizerDesc.CullMode = CULL_MODE_NONE; RasterizerDesc.FrontCounterClockwise = True; RasterizerDesc.ScissorEnable = True; RasterizerDesc.AntialiasedLineEnable = False; Depth-stencil and blend states are defined in a similar fashion.
      Another important thing that pipeline state object encompasses is the input layout description that defines how inputs to the vertex shader, which is the very first shader stage, should be read from the memory. Input layout may define several vertex streams that contain values of different formats and sizes:
      // Define input layout InputLayoutDesc &Layout = PSODesc.GraphicsPipeline.InputLayout; LayoutElement TextLayoutElems[] = {     LayoutElement( 0, 0, 3, VT_FLOAT32, False ),     LayoutElement( 1, 0, 4, VT_UINT8, True ),     LayoutElement( 2, 0, 2, VT_FLOAT32, False ), }; Layout.LayoutElements = TextLayoutElems; Layout.NumElements = _countof( TextLayoutElems ); Finally, pipeline state defines primitive topology type. When all required members are initialized, a pipeline state object can be created by IRenderDevice::CreatePipelineState() method:
      // Define shader and primitive topology PSODesc.GraphicsPipeline.PrimitiveTopologyType = PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; PSODesc.GraphicsPipeline.pVS = pVertexShader; PSODesc.GraphicsPipeline.pPS = pPixelShader; PSODesc.Name = "My pipeline state"; m_pDev->CreatePipelineState(PSODesc, &m_pPSO); When PSO object is bound to the pipeline, the engine invokes all API-specific commands to set all states specified by the object. In case of Direct3D12 this maps directly to setting the D3D12 PSO object. In case of Direct3D11, this involves setting individual state objects (such as rasterizer and blend states), shaders, input layout etc. In case of OpenGL, this requires a number of fine-grain state tweaking calls. Diligent Engine keeps track of currently bound states and only calls functions to update these states that have actually changed.
      Binding Shader Resources
      Direct3D11 and OpenGL utilize fine-grain resource binding models, where an application binds individual buffers and textures to certain shader or program resource binding slots. Direct3D12 uses a very different approach, where resource descriptors are grouped into tables, and an application can bind all resources in the table at once by setting the table in the command list. Resource binding model in Diligent Engine is designed to leverage this new method. It introduces a new object called shader resource binding that encapsulates all resource bindings required for all shaders in a certain pipeline state. It also introduces the classification of shader variables based on the frequency of expected change that helps the engine group them into tables under the hood:
      Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps etc. Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly. Shader variable type must be specified during shader creation by populating an array of ShaderVariableDesc structures and initializing ShaderCreationAttribs::Desc::VariableDesc and ShaderCreationAttribs::Desc::NumVariables members (see example of shader creation above).
      Static variables cannot be changed once a resource is bound to the variable. They are bound directly to the shader object. For instance, a shadow map texture is not expected to change after it is created, so it can be bound directly to the shader:
      PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV ); Mutable and dynamic variables are bound via a new Shader Resource Binding object (SRB) that is created by the pipeline state (IPipelineState::CreateShaderResourceBinding()):
m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Note that an SRB is only compatible with the pipeline state it was created from. The SRB object inherits all static bindings from the shaders in the pipeline, but is not allowed to change them.
      Mutable resources can only be set once for every instance of a shader resource binding. Such resources are intended to define specific material properties. For instance, a diffuse texture for a specific material is not expected to change once the material is defined and can be set right after the SRB object has been created:
m_pSRB->GetVariable(SHADER_TYPE_PIXEL, "tex2DDiffuse")->Set(pDiffuseTexSRV);

In some cases it is necessary to bind a new resource to a variable every time a draw command is invoked. Such variables should be labeled as dynamic, which will allow setting them multiple times through the same SRB object:
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

Under the hood, the engine pre-allocates descriptor tables for static and mutable resources when an SRB object is created. Space for dynamic resources is allocated dynamically at run time. Static and mutable resources are thus more efficient and should be used whenever possible.
As you can see, Diligent Engine does not expose the low-level details of how resources are bound to shader variables. One reason for this is that these details differ greatly between APIs. The other reason is that using low-level binding methods is extremely error-prone: it is very easy to forget to bind some resource, or to bind an incorrect resource, such as binding a buffer to a variable that is in fact a texture, especially during shader development when everything changes fast. Diligent Engine instead relies on its shader reflection system to automatically query the list of all shader variables. Grouping variables into the three types mentioned above allows the engine to create an optimized layout and do the heavy lifting of matching each resource to its API-specific resource location, register, or descriptor table entry.
      This post gives more details about the resource binding model in Diligent Engine.
      Setting the Pipeline State and Committing Shader Resources
      Before any draw or compute command can be invoked, the pipeline state needs to be bound to the context:
m_pContext->SetPipelineState(m_pPSO);

Under the hood, the engine sets the internal PSO object in the command list or calls all the required native API functions to properly configure all pipeline stages.
The next step is to bind all required shader resources to the GPU pipeline, which is accomplished by the IDeviceContext::CommitShaderResources() method:
m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

The method takes a pointer to the shader resource binding object and makes all resources the object holds available for the shaders. In the case of D3D12, this only requires setting appropriate descriptor tables in the command list. For older APIs, this typically requires setting all resources individually.
Next-generation APIs require the application to track the state of every resource and explicitly inform the system about all state transitions. For instance, if a texture was used as a render target before, while the next draw command is going to use it as a shader resource, a transition barrier needs to be executed. Diligent Engine does the heavy lifting of state tracking. When the CommitShaderResources() method is called with the COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES flag, the engine commits and transitions resources to the correct states at the same time. Note that transitioning resources does introduce some overhead. The engine tracks the state of every resource and will not issue a barrier if the state is already correct, but checking the resource state is itself an overhead that can sometimes be avoided. The engine provides the IDeviceContext::TransitionShaderResources() method, which only transitions resources:
m_pContext->TransitionShaderResources(m_pPSO, m_pSRB);

In some scenarios it is more efficient to transition resources once and then only commit them.
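As a minimal sketch of that pattern, and assuming that passing no flags to CommitShaderResources() simply commits the bindings without re-checking or transitioning resource states, the calls might look like this:

// Transition everything once, e.g. after the scene resources have been prepared
m_pContext->TransitionShaderResources(m_pPSO, m_pSRB);

// In the per-draw path, commit the already-transitioned resources without the transition flag
m_pContext->CommitShaderResources(m_pSRB, 0);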
      Invoking Draw Command
The final step is to set the states that are not part of the PSO, such as render targets and vertex and index buffers. Diligent Engine uses a Direct3D11-style API that is translated to the other native API calls under the hood:
ITextureView *pRTVs[] = {m_pRTV};
m_pContext->SetRenderTargets(_countof( pRTVs ), pRTVs, m_pDSV);

// Clear render target and depth buffer
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);
m_pContext->ClearDepthStencil(nullptr, CLEAR_DEPTH_FLAG, 1.f);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);

Different native APIs use various sets of functions to execute draw commands, depending on the command details (whether the command is indexed, instanced or both, what offsets into the source buffers are used, etc.). For instance, there are 5 draw commands in Direct3D11 and more than 9 commands in OpenGL, with something like glDrawElementsInstancedBaseVertexBaseInstance not uncommon. Diligent Engine hides all these details behind a single IDeviceContext::Draw() method that takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example:
DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);

For compute commands, there is the IDeviceContext::DispatchCompute() method, which takes a DispatchComputeAttribs structure that defines the compute grid dimensions.
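For completeness, a compute dispatch might look roughly like the sketch below; the DispatchComputeAttribs member names used here are an assumption made for illustration, the structure simply carrying the thread-group counts of the compute grid:

// Launch a 16 x 16 x 1 grid of thread groups (member names assumed)
DispatchComputeAttribs DispatchAttrs;
DispatchAttrs.ThreadGroupCountX = 16;
DispatchAttrs.ThreadGroupCountY = 16;
DispatchAttrs.ThreadGroupCountZ = 1;
m_pContext->DispatchCompute(DispatchAttrs);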
      Source Code
The full engine source code is available on GitHub and is free to use. The repository contains sample applications, an asteroids performance benchmark, and an example Unity project that uses Diligent Engine in a native plugin.
The AntTweakBar sample is Diligent Engine’s “Hello World” example.

       
The atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to multiple render targets, using compute shaders and unordered access views, etc.

The asteroids performance benchmark is based on this demo developed by Intel. It renders 50,000 unique textured asteroids and allows comparing the performance of the Direct3D11 and Direct3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

      Finally, there is an example project that shows how Diligent Engine can be integrated with Unity.

      Future Work
The engine is under active development. It currently supports Windows desktop, Universal Windows, and Android platforms. The Direct3D11, Direct3D12, and OpenGL/GLES backends are feature complete. A Vulkan backend is coming next, and support for more platforms is planned.
    • By reenigne
For those that don't know me: I am the individual whose two videos are listed under setup at https://wiki.libsdl.org/Tutorials.
      I also run grhmedia.com where I host the projects and code for the tutorials I have online.
Recently, I received a notice from YouTube that they will be implementing their new policy on protecting video content, under which I won't be monetized until I meet their required number of viewers and views each month.

Frankly, I'm pretty sick of YouTube. I put up a video, someone else learns from it and puts up another video, and because of the way YouTube does their placement, they end up with more views.
That includes guys that clearly post false information, such as one individual who said GLEW 2.0 was broken because he didn't know how to compile it. In short, he didn't know how to modify the script he used, because he didn't understand makefiles or that the changed requirements of the compiler and library needed different flags.

At the end of the month when they implement this, I will take down the content and host it purely on my own server, and it will be a paid system and/or Patreon.

I get that my videos may be a bit dry; I generally figure people are there to learn how to do something, and I'd rather not waste their time.
I used to also help people for free, even those coming from the other videos. That won't be the case any more. I used to just take anyone's emails and work with them; my email is posted on the site.

I don't expect to get the required number of subscribers or increased views in that time. Even if I did, it wouldn't take care of each recurring month.
I figure this is simpler, and I don't plan on putting some sort of exorbitant fee on a monthly subscription or the like.
I was thinking along the lines of a few dollars: 1, 2, and 3, where the larger subscription gets you assistance with the content in the tutorials, if needed, that month.
Maybe another fee if the question is related to, but not directly covered by, the content.
The fees would serve to cut down on the number of people who ask for help, and maybe encourage some of them to actually pay attention to what is said rather than do their own thing, which actually turns out to be 90% of the issues. I spent 6 hours helping one individual last week. I must have asked him 20 times whether he did exactly what I said in the video, and even pointed directly to the section. When he finally sent me a copy of what he entered, I knew then and there he had not. I circled it and pointed out that it wasn't what I said to do in the video. I didn't tell him what was wrong or how I knew, so that he would go back and actually follow what it said to do. He then reported it worked. Yeah, no kidding, following directions works. But hey, he isn't alone, and it's part of the learning process.

So the point of this isn't to be a gripe session. I'm just looking for a bit of feedback. Do you think the fees are unreasonable?
Should I keep the YouTube channel and just do the fees through Patreon, or do you think locking the content to my site and requiring a subscription is a better idea?

I'm just looking at the fact that it is unrealistic to think YouTube/Google will actually get things right, or that YouTube viewers will actually bother to start looking for more accurate videos.
    • By Balma Alparisi
I got error 1282 in my code.
sf::ContextSettings settings;
settings.majorVersion = 4;
settings.minorVersion = 5;
settings.attributeFlags = settings.Core;

sf::Window window;
window.create(sf::VideoMode(1600, 900), "Texture Unit Rectangle", sf::Style::Close, settings);
window.setActive(true);
window.setVerticalSyncEnabled(true);

glewInit();

GLuint shaderProgram = createShaderProgram("FX/Rectangle.vss", "FX/Rectangle.fss");

float vertex[] =
{
    -0.5f,  0.5f, 0.0f,    0.0f, 0.0f,
    -0.5f, -0.5f, 0.0f,    0.0f, 1.0f,
     0.5f,  0.5f, 0.0f,    1.0f, 0.0f,
     0.5,  -0.5f, 0.0f,    1.0f, 1.0f,
};

GLuint indices[] =
{
    0, 1, 2,
    1, 2, 3,
};

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex), vertex, GL_STATIC_DRAW);

GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glVertexAttribPointer(0, 3, GL_FLOAT, false, sizeof(float) * 5, (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, false, sizeof(float) * 5, (void*)(sizeof(float) * 3));
glEnableVertexAttribArray(1);

GLuint texture[2];
glGenTextures(2, texture);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

sf::Image* imageOne = new sf::Image;
bool isImageOneLoaded = imageOne->loadFromFile("Texture/container.jpg");
if (isImageOneLoaded)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageOne->getSize().x, imageOne->getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageOne->getPixelsPtr());
    glGenerateMipmap(GL_TEXTURE_2D);
}
delete imageOne;

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

sf::Image* imageTwo = new sf::Image;
bool isImageTwoLoaded = imageTwo->loadFromFile("Texture/awesomeface.png");
if (isImageTwoLoaded)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageTwo->getSize().x, imageTwo->getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageTwo->getPixelsPtr());
    glGenerateMipmap(GL_TEXTURE_2D);
}
delete imageTwo;

glUniform1i(glGetUniformLocation(shaderProgram, "inTextureOne"), 0);
glUniform1i(glGetUniformLocation(shaderProgram, "inTextureTwo"), 1);

GLenum error = glGetError();
std::cout << error << std::endl;

sf::Event event;
bool isRunning = true;
while (isRunning)
{
    while (window.pollEvent(event))
    {
        if (event.type == event.Closed)
        {
            isRunning = false;
        }
    }

    glClear(GL_COLOR_BUFFER_BIT);

    if (isImageOneLoaded && isImageTwoLoaded)
    {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texture[1]);
        glUseProgram(shaderProgram);
    }

    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
    glBindVertexArray(0);

    window.display();
}

glDeleteVertexArrays(1, &vao);
glDeleteBuffers(1, &vbo);
glDeleteBuffers(1, &ebo);
glDeleteProgram(shaderProgram);
glDeleteTextures(2, texture);

return 0;
}

and this is the vertex shader
#version 450 core

layout(location=0) in vec3 inPos;
layout(location=1) in vec2 inTexCoord;

out vec2 TexCoord;

void main()
{
    gl_Position = vec4(inPos, 1.0);
    TexCoord = inTexCoord;
}

and the fragment shader
#version 450 core

in vec2 TexCoord;

uniform sampler2D inTextureOne;
uniform sampler2D inTextureTwo;

out vec4 FragmentColor;

void main()
{
    FragmentColor = mix(texture(inTextureOne, TexCoord), texture(inTextureTwo, TexCoord), 0.2);
}

I was expecting awesomeface.png on top of container.jpg.

    • By khawk
We've just released all of the source code for the NeHe OpenGL lessons on our GitHub page at https://github.com/gamedev-net/nehe-opengl. In total, 43 platforms, configurations, and languages are included.
      Now operated by GameDev.net, NeHe is located at http://nehe.gamedev.net where it has been a valuable resource for developers wanting to learn OpenGL and graphics programming.
