I would like to write a small copy of DirectX. How do they handle the vertices and rendering of 3D models? How should it be handled for an optimal framework/engine?
If my GPU can handle all the math, how can I render my 3D model with it? I still need to use some of the math from your first post, don't I?
I'm looking up coordinate transformations and it's quite fun.
And how about particles? Like when I have 1 million particles. How should I best handle them? Do I need to do all calculations and rendering on the GPU? Or are there better/faster ways?
You're already helping me, but I'm still a bit confused about how it all works with the GPU/CPU split. I want to build an optimal render system for low-poly models with materials, shaders, lighting, bouncing, etc. I know the graphics rendering pipeline, but at the moment I can't get past the first step. For example, I want to be able to load a whole environment like COD, with all the players, bullets and effects in it.
So basically you want to write a software renderer which runs on the GPU, right?
Remember that DirectX and OpenGL can communicate with your graphics driver, and I'm afraid you as a developer won't have that luxury (if you want to call it that). This only leaves you with the option to resort to GPGPU solutions, and while it's completely possible to write a software renderer with these, you probably won't be able to beat or even come near the performance a library like DirectX or OpenGL will give you. The reason for this is that DirectX and OpenGL are able to use the rasterizer hardware available on your GPU, while GPGPU solutions aren't able to do so.
Now the essential question here (before I ramble on) is: why do you want to write your own renderer?
If this is just for learning, I'd say write a very simple CPU-based renderer and leave the GPU out of the picture (except maybe for presenting your final result to the screen). This will teach you a lot about how the entire rasterization process works without you having to worry about CPU-GPU communication and all that. Try to implement each of the steps Bacterius laid out in his first post completely by yourself: write your own structures for managing vertices and indices, then build the systems which transform and shade them into a final texture you can present to the screen.
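To give you an idea of what that first step can look like, here's a minimal sketch of the "structures for vertices and indices, plus a transform step" part (Python for brevity, even though you'd likely write this in C++; all names here are illustrative, not from any real API):

```python
# Sketch: a mesh as vertex positions + indices, and the transform that takes
# one vertex from world space through projection to pixel coordinates.
# Illustrative code only -- not DirectX/OpenGL API calls.
import math

def mat_mul_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def perspective(fov_y, aspect, near, far):
    """Build a simple right-handed perspective projection matrix."""
    f = 1.0 / math.tan(fov_y / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), (2 * far * near) / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project_to_screen(pos, proj, width, height):
    """Transform one vertex position into pixel coordinates."""
    clip = mat_mul_vec(proj, [pos[0], pos[1], pos[2], 1.0])
    ndc = [clip[i] / clip[3] for i in range(3)]   # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width              # viewport transform
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height     # flip y for screen space
    return (x, y, ndc[2])

# A "mesh" is just vertex positions plus indices into that list.
vertices = [(-1.0, -1.0, -5.0), (1.0, -1.0, -5.0), (0.0, 1.0, -5.0)]
indices = [0, 1, 2]  # one triangle

proj = perspective(math.radians(60.0), 16 / 9, 0.1, 100.0)
screen = [project_to_screen(vertices[i], proj, 1280, 720) for i in indices]
```

Once you have screen-space triangles like this, the next steps are rasterizing them into your own framebuffer and shading each covered pixel.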
Writing a simple software renderer which can do all of this is a very fun and rewarding experience that will teach you a lot. If you implement it well, you could even use it to build some very basic 3D games.
If this is about writing a production-quality renderer I'm going to be harsh and say: don't bother.
You're talking about rendering millions of particles and huge numbers of objects, so I'm guessing this is actually what you want to do. I know re-inventing the wheel can be a lot of fun sometimes, but trying to implement something like this will land you in a lot of nasty, sticky situations. In the end you'll have a system which solves, sub-optimally, a problem for which we already have two fast and proven solutions (DirectX and OpenGL), if you even manage to complete your renderer at all. I'll be harsh again and say that's very unlikely, since you don't have any previous experience writing software renderers.
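To make those particle numbers concrete, here's what a naive CPU-side update looks like (Python, illustrative names only). Touching every particle every frame is O(n) work, and at a million particles that per-frame cost is exactly the kind of thing DirectX and OpenGL let you move onto the GPU:

```python
# Naive CPU particle update in a struct-of-arrays layout.
# Sketch only -- real engines batch this work on the GPU.
import random

class ParticleSystem:
    def __init__(self, count):
        # One flat list per attribute (struct-of-arrays).
        self.px = [random.uniform(-1.0, 1.0) for _ in range(count)]
        self.py = [random.uniform(0.0, 2.0) for _ in range(count)]
        self.vx = [0.0] * count
        self.vy = [0.0] * count

    def update(self, dt, gravity=-9.81):
        # Every particle is touched every frame: O(n) per frame on the CPU.
        for i in range(len(self.px)):
            self.vy[i] += gravity * dt
            self.px[i] += self.vx[i] * dt
            self.py[i] += self.vy[i] * dt

ps = ParticleSystem(10_000)
ps.update(1.0 / 60.0)
```

Scale that loop up to a million particles at 60 frames per second and you can see why you'd want the hardware-accelerated path rather than rolling your own.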