lipsryme

Design help on software rasterizer / renderer


Recommended Posts

Hey guys, I've recently started working on a software renderer/rasterizer (for fun and for my portfolio), and I've been thinking of trying to emulate the way the GPU pipeline works.

My questions are:

 

1. Is it even reasonable to try to emulate the GPU pipeline? [e.g. IA-VS-PA-RS-PS-OM]

2. How would I design the shading part (vertex and pixel shading) to make it flexible enough that I can write shaders with different inputs and outputs? (Currently having a hard time figuring that out...) Or should I just settle for a static approach using only what I need for the type of scene I'm trying to display?


To answer your question, it would be useful to know why you want to emulate the pipeline. For a software alternative? For learning? For debugging? For experimental profiling? To add alternative stages? To figure out what games are doing? ...

Like I said, for fun/learning I guess. But I take it from your answer that trying to emulate the GPU is a bad idea?


You can follow that paradigm, especially if you are planning on learning.  Just keep in mind that you probably won't break any speed records.  I did something similar back in the D3D9 days (i.e. VS+PS).  It is a great learning exercise, and informs you about what is happening on the GPU when you do regular rendering.


Like I said, for fun/learning I guess. But I take it from your answer that trying to emulate the GPU is a bad idea?

No, I didn't mean to imply that. But answering whether it's 1. 'reasonable' and 2. whether you should use shaders or hardcoded stuff is not really possible without knowing what you aim for.

 

If it's just for fun, then start as small as possible and increase the scope of your project as you go. The GPU pipeline, although it seems simple, can be quite exhausting; learning stage by stage might help you and also allow you to validate your results against the hardware at a smaller scope.

 

So:

 

1. Replace the vertex assembly + vertex shader with your 'emulation'. You can start with a simple hardcoded matrix*vertex transform (see the first sketch after this list) and hardcode more complex stuff, e.g. some water wave simulation and skinning. Then, if you feel like it, disassemble shaders (there is a tool in the D3DX lib) and try to emulate them line by line (no matter how slow it is, it should still be fast enough to draw some boxes). The next step could be to create an optimized vectorized version before, as a last step, you transcompile that disassembly to native CPU code. As an alternative you could also try to 'compile' HLSL source to native CPU code with the help of e.g. http://ispc.github.io/

All the while you need some simple real vertex shader that just reads your 'output' and passes it on to the next stages.

2. Emulate the geometry shader as well; you still end up with vertices that you pass through with the previous pass-through shader.

3. Emulate the hull + domain shaders; still pass vertices through with the pass-through shader.

4. Replace the rasterizer. (If you feel like you want to start with this stage, you could use the 'stream out' feature of geometry shaders to write out all vertex shader transforms.) There are tons of papers online on how to write one; at the beginning nothing really matters as long as you get those triangles visualized somehow. Start with maybe just outlines (wireframe), go on to filled, flat-shaded triangles (the color might be some triangle ID, see the second sketch after this list), try to add interpolation of z, then make it perspective correct, and add interpolation of every 'interpolator'. As a simple step, create a G-buffer that you shade as usual on the GPU hardware.
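
To make step 1 concrete, here is a minimal sketch of what the hardcoded matrix*vertex stage could look like in C++. The float4/float4x4 types and the VertexStage name are just placeholders made up for illustration; a real implementation would likely use a proper math library and SIMD.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Minimal vector/matrix types; placeholders, not from any existing library.
struct float4 { float x, y, z, w; };
struct float4x4 { std::array<float, 16> m; }; // row-major

float4 mul(const float4x4& m, const float4& v)
{
    return {
        m.m[0]  * v.x + m.m[1]  * v.y + m.m[2]  * v.z + m.m[3]  * v.w,
        m.m[4]  * v.x + m.m[5]  * v.y + m.m[6]  * v.z + m.m[7]  * v.w,
        m.m[8]  * v.x + m.m[9]  * v.y + m.m[10] * v.z + m.m[11] * v.w,
        m.m[12] * v.x + m.m[13] * v.y + m.m[14] * v.z + m.m[15] * v.w
    };
}

// Hardcoded "vertex shader": transform object-space positions to clip space.
// worldViewProj stands in for the usual world*view*projection constant.
void VertexStage(const std::vector<float4>& positions,
                 const float4x4& worldViewProj,
                 std::vector<float4>& clipSpaceOut)
{
    clipSpaceOut.resize(positions.size());
    for (std::size_t i = 0; i < positions.size(); ++i)
        clipSpaceOut[i] = mul(worldViewProj, positions[i]);
}
```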
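
And a similarly minimal sketch for step 4: a flat-shaded triangle fill using edge functions over a screen-space bounding box. Clipping, sub-pixel precision, perspective correction and the framebuffer layout are all glossed over, and names like RasterizeTriangle are only illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Twice the signed area of triangle (a, b, c); its sign tells on which
// side of the edge a->b the point c lies.
static float EdgeFunction(float ax, float ay, float bx, float by, float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

// Flat-shaded triangle fill over a hypothetical 32-bit framebuffer.
// Vertices are already in screen space; no clipping or perspective correction yet.
void RasterizeTriangle(std::vector<uint32_t>& framebuffer, int width, int height,
                       float x0, float y0, float x1, float y1, float x2, float y2,
                       uint32_t color)
{
    // Bounding box of the triangle, clamped to the screen.
    int minX = std::max(0,          (int)std::floor(std::min({x0, x1, x2})));
    int maxX = std::min(width  - 1, (int)std::ceil (std::max({x0, x1, x2})));
    int minY = std::max(0,          (int)std::floor(std::min({y0, y1, y2})));
    int maxY = std::min(height - 1, (int)std::ceil (std::max({y0, y1, y2})));

    float area = EdgeFunction(x0, y0, x1, y1, x2, y2);
    if (area <= 0.0f) return; // reject degenerate triangles and one winding order

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            float px = x + 0.5f, py = y + 0.5f; // sample at the pixel center
            float w0 = EdgeFunction(x1, y1, x2, y2, px, py);
            float w1 = EdgeFunction(x2, y2, x0, y0, px, py);
            float w2 = EdgeFunction(x0, y0, x1, y1, px, py);
            if (w0 >= 0.0f && w1 >= 0.0f && w2 >= 0.0f) // inside all three edges
                framebuffer[y * width + x] = color;
        }
}
```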

 

I'm sure those points, if you don't have any previous experience/knowledge, can keep you busy for some years :) (if you really dive into all details).


Every time I see a question posed by a member, the first answer always seems to be 'why do it', followed by a pointer to some library that does the same thing. There is reinventing the wheel out of ignorance (by that I mean not knowing that there is something out there that does the same thing), and there is the learning aspect. The black-box approach to programming doesn't suit everyone, and I admire your attempt to learn the fundamentals. I could go on about knowledge of the basics, but that wouldn't answer your question. So for your second question, you could approach your programmable shading pipeline the same way any of the prominent 3D APIs, OpenGL/DirectX, approach it. By that I mean:

1. Shading language specification: Create your own shader language or reuse one that's already available, e.g. GLSL/HLSL. This step is akin to writing a compiler, so you would have to write your lexer/parser, compiler, and linker. You would probably want to look into how compilers work in general and study how GLSL/HLSL get converted into bytecode.

2. Based on the design in 1, you would then tie your shader pipeline into the respective pipeline stages (a minimal sketch of one way to do this follows).
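
As a rough illustration of point 2 (and assuming you skip the compiler at first and just write shaders as native C++ callables), one simple way to tie a "shader" into a pipeline stage is a per-stage callback slot. The VSInput/VSOutput/Pipeline names below are hypothetical:

```cpp
#include <functional>
#include <vector>

// Hypothetical per-vertex data passed between stages.
struct VSInput  { float position[3]; float normal[3]; float uv[2]; };
struct VSOutput { float clipPos[4];  float normal[3]; float uv[2]; };

// The pipeline exposes one callback per programmable stage; a "shader" is
// whatever the user binds here (hand-written C++ now, compiled code later).
struct Pipeline
{
    std::function<VSOutput(const VSInput&)> vertexShader;

    void DrawVertices(const std::vector<VSInput>& vertices,
                      std::vector<VSOutput>& transformed) const
    {
        transformed.clear();
        transformed.reserve(vertices.size());
        for (const VSInput& v : vertices)
            transformed.push_back(vertexShader(v)); // run the bound "shader"
        // ...primitive assembly, clipping and rasterization would follow here.
    }
};
```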


2. If you write your own renderer, why would you even want to write generic vertex and pixel shader support? You can just code your effects directly inside the renderer pipeline, as you please.


I would recommend implementing the shader portion in C++, as throwing your own language compiler into the mix doesn't seem like a useful addition in the beginning. You could always add script-based shader programs later on. Regarding your second question, you should look at how D3D defines the stage boundaries - there are specific rules for each stage's input and output data. You need to be able to mimic that in C++.

 

There are many ways to do that. You could just accept a vector of attributes at the input of each stage, and map to those inputs based on the 'register' location within the vector. Or you could use a template to define what your input and output structures will look like, and then have your shader program key off of members in your template parameters. Does that make sense as a way to get started?
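
For instance, a bare-bones version of the template idea might look something like this (all type and function names below are made up purely for illustration):

```cpp
#include <vector>

// Hypothetical vertex formats; each shader declares the layout it expects.
struct BasicVertexIn  { float position[3]; float color[3]; };
struct BasicVertexOut { float clipPos[4];  float color[3]; };

// A "shader" is any type providing In/Out typedefs and operator().
struct PassThroughVS
{
    using In  = BasicVertexIn;
    using Out = BasicVertexOut;

    Out operator()(const In& v) const
    {
        Out o;
        o.clipPos[0] = v.position[0];
        o.clipPos[1] = v.position[1];
        o.clipPos[2] = v.position[2];
        o.clipPos[3] = 1.0f;
        o.color[0] = v.color[0];
        o.color[1] = v.color[1];
        o.color[2] = v.color[2];
        return o;
    }
};

// The vertex stage is templated on the shader, so input/output layouts are
// checked at compile time instead of going through a generic attribute vector.
template <typename Shader>
std::vector<typename Shader::Out>
RunVertexStage(const Shader& shader, const std::vector<typename Shader::In>& vertices)
{
    std::vector<typename Shader::Out> out;
    out.reserve(vertices.size());
    for (const auto& v : vertices)
        out.push_back(shader(v));
    return out;
}
```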
