Abstracting my renderer

28 comments, last by cozzie 7 years, 4 months ago

Those who read my earlier topic on managing meshes and buffers will probably recognize this follow-up.

Otherwise:

- I've decided to 'correctly' abstract my renderer, to be able to support multiple APIs in the future

- I'm not that far into development of the engine yet (although there's quite some work in it), so I decided it's better to do it now than when the codebase grows larger

The basic lessons I've learned on 'abstraction', in this case, are as follows:

1. Create API-independent Ixxxx classes (interfaces) for anything you'd like to abstract

(IRenderer, IDevice, IDeviceContext, IMeshBuffer, ICBuffer, IRenderTarget and so on)

2. Create the D3D11 versions, inherited from the Ixxxx versions.

So D3DDevice will inherit from IDevice

3. Define which member functions of the abstract I-classes need an API-specific implementation, an API-independent one, or a combination:

If API-specific: virtual bool ...() = 0;

If API-independent: bool ...();

If combined: virtual bool ...();

With this goal in mind, I went through the whole codebase and wrote down exactly what and when I use the (D3D11) device, context, renderer, buffers etc.

This list is the basis for the v0.1 class definitions I've made, which you'll find in the code paste below.

My questions:

- am I on the right track with 'abstraction' in my renderer?

- what strange things do you see in the v0.1 setup?

Any input is appreciated as always.


// Classes D3DRenderer, D3DDevice, D3DDeviceContext will inherit from I classes
// functions without '= 0' will be implemented in parent + child
// functions with '= 0' will only be implemented in child/ inherited D3D11 classes

class IRenderer
{
public:
	// API independent
	bool LoadSettings();

	// API dependent
	virtual bool SetupRenderer() = 0;
	virtual bool CreateSwapchain() = 0;
	virtual bool CreateDepthStencil() = 0;

	virtual bool SetRenderTarget(const IRenderTarget &pRenderTarget) = 0;
	virtual bool OnResize() = 0;

	virtual bool PrepareRender() = 0;
	virtual bool Present() = 0;

	// update settings API independent, rest in D3D11Renderer (inherited)
	virtual bool SetFullscreen(const bool pFullscreen);
	virtual bool ChangeMSAASettings(const bool pEnabled, const uint pNrSamples);
	virtual bool SetVSync(const bool pEnabled);

private:
	CD3dSettings	mSettings;

	// note: abstract types can't be members by value; store pointers to the API-specific instances
	IDevice			*mDevice;
	IDeviceContext	*mDeviceContext;
};

#define	VERTEX_BUFFER	0
#define INDEX_BUFFER	1

class IDevice
{
public:
	// API dependent
	virtual ~IDevice() = default;	// note: constructors can't be virtual

	virtual bool Create() = 0;

	virtual ITexture*	CreateTexture(const std::string &pFilename) = 0; // add more params here
	virtual IBuffer*	CreateMeshBuffer(const uint pBufferType, void *pBufferData, const size_t pBufferSize, const bool pDynamic, const bool pGPUWrite) = 0;
};

#define TRIANGLE_LIST	0
#define TRIANGLE_STRIP	1
// etc.

class IDeviceContext
{
public:
	IDeviceContext() = default;
	virtual ~IDeviceContext() = default;	// note: constructors can't be virtual
	
	virtual bool SetShaderPack(const uint pId);
	
	virtual bool SetInputLayout(const uint pHandle) = 0;
	virtual bool SetPrimitiveTopology(const uint pTopology) = 0;

	virtual bool SetSamplerState(const ISamplerState &pSamplerState) = 0;
	virtual bool SetConstantBuffer(const ICBuffer &pCBuffer) = 0;
		
	virtual bool SetTexture(const ITexture &pTexture) = 0;

	virtual bool SetVertexBuffer(const IBuffer &pBuffer) = 0;
	virtual bool SetIndexBuffer(const IBuffer &pBuffer) = 0;

	virtual bool DrawIndexed(const uint pNrIndices, const uint pStartIndex, const uint pStartVtx) = 0;

	virtual bool SetViewPort(const IViewport &pViewport) = 0;

};

// PRIMARY TODOs

// 1. Add abstract/ Iclasses (or structs) for: ConstantBuffer, RenderTarget, SamplerState, RenderState, Texture, Viewport
// 2. decide how/where to place ShaderManager/ ShaderPack/ CBufferManager / RenderTargetManager (also abstract/ IClasses?)
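To give an idea of step 2 (nothing implemented yet, just a sketch; the member and type names are assumptions):

#include <d3d11.h>

class D3D11Renderer : public IRenderer
{
public:
	// API-specific: these only get an implementation here
	bool SetupRenderer() override;
	bool CreateSwapchain() override;
	bool CreateDepthStencil() override;
	bool SetRenderTarget(const IRenderTarget &pRenderTarget) override;
	bool OnResize() override;
	bool PrepareRender() override;
	bool Present() override;

	// combined: calls IRenderer::SetFullscreen for the shared part, then does the D3D11 work
	bool SetFullscreen(const bool pFullscreen) override;

private:
	ID3D11Device	*mD3DDevice;	// D3D11-only state stays hidden in here
	IDXGISwapChain	*mSwapChain;
};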

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me


This is my RenderTask:

public final class RenderTask {
    public final byte type;
    public final int meshId;
    public final int instanceCount;
    public final int vertexCount;
    public final int baseVertex;
    public final int indicesOffset;
    public final int primitive;
    // Encoded texture params. id, slot, etc.
    public final int[] texParams;
    // Arbitrary data.
    private final Object[] resources;
    // Bit set with the resource types present in this task.
    private final int resourceBits;
 
    public final <T> T resource(final byte type, final int index) {
        return (T) resources[someBitTrickery(type, index)];
    }

    // And a lot of irrelevant helper stuff to build these tasks.
}

Now imagine you have a void** instead of that "resources Object[]". The important part is that it's an array of pointers to stuff. Say that single task is set up to draw 100 cubes. The resources array would then hold 100 pointers to instances of Transform, for example, and 100 instances of whatever other resource is needed for drawing it. It's just stuff you pull out and upload to cbuffers at draw time.
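In C++ terms, a minimal sketch of that idea (hypothetical names, not actual code from my engine):

#include <cstdint>

// Hypothetical C++ counterpart of the Java RenderTask above: plain data plus
// an untyped pointer array that the backend interprets at draw time.
struct RenderTask
{
	uint8_t		type;			// decides which bucket the task lands in
	int			meshId;			// backend-agnostic mesh handle
	int			instanceCount;	// e.g. 100 for the 100-cubes case
	int			primitive;
	void		**resources;	// per resource type, instanceCount pointers (Transform*, etc.)
	uint32_t	resourceBits;	// bit set of the resource types present
};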

The 'type' byte defines which bucket that task ends up in. RenderPasses draw tasks from a series of fixed buckets. Those buckets can be lights, statics, animated, etc. I add new ones as I need them.

As you can see, while I still have to iron out the texturing stuff, there is no actual "glThing" there. It's all very generic. You've got a meshId, which might be a VAO index in GL, or just an internal mapping in a Vulkan backend. One thing that's still missing, and which you'll probably want to support, is camera-related data, i.e. which camera this task is drawn with.

So in the end the renderer interface becomes something like:

public interface RenderDevice {
    void render ();
    void reloadPipeline ();
    void addToQueue ( RenderTask task );
}

Very high level. You add stuff to the renderer, call render, all queues get drawn and cleared, then repeat. Which render passes get created, and which task bucket they draw from, is all specified in a configuration file that the concrete GL renderer reads when I create it. It's all transparent to the code that creates tasks and dispatches them. I call them submitters, and submitters know nothing about, say, whether the backend is using one cbuffer for all objects or a separate cbuffer for each of them. It's just stuff they don't care about. On the other hand, a submitter is capable of coalescing instances into a single task, or of doing frustum culling on them, and that is transparent to whatever API you're using underneath to draw the tasks.
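As a rough sketch, reusing the RenderTask idea from above (all names hypothetical):

#include <array>
#include <vector>

class GLRenderDevice	// one concrete backend behind the RenderDevice interface
{
public:
	void addToQueue(const RenderTask &task)
	{
		mBuckets[task.type].push_back(task);	// 'type' picks the bucket
	}

	void render()
	{
		for (auto &bucket : mBuckets)			// passes draw from fixed buckets
		{
			for (const RenderTask &task : bucket)
				drawTask(task);					// the only place GL calls live
			bucket.clear();						// drawn and cleared, then repeat
		}
	}

private:
	void drawTask(const RenderTask &task);		// defined by the GL backend

	std::array<std::vector<RenderTask>, 16> mBuckets;	// bucket count is arbitrary here
};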

Now, my example is **very** incomplete, but the point is, I think, that this is what you should aim for: a high-level interface where you build tasks out of the stuff the engine wants to get to the screen, and hand them to an abstract renderer that internally knows what to do. The renderer interface itself doesn't comprise a lot of code, so in your example you'd be abstracting at a level that doesn't net you that much gain.

Something like your IDeviceContext wouldn't translate well to a Vulkan or an OpenGL backend, I believe.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Hey again! Ha, well, I'm glad that you are still on the right track and learning. The good thing about abstracting your lower-level API is that you don't have to have an abstracted renderer class; anything that has to do with calling a D3D/OpenGL class should be in the lower-level API abstraction.

So you should have a "Viewport" class, responsible for creating the swap chain and presenting it. Because this class is a graphics resource, your IDevice should have a function called "CreateViewport", where you pass in certain information (like the window handle, width, height, etc.) and it returns an abstracted viewport. Your viewport should then have a function called "Present", and so on.

The easiest way to do this is, like I said prior, to look at the D3D11 API. ID3D11DeviceContext has a function called "VSSetConstantBuffers", so you can either create a function in your "IDeviceContext" class for each permutation of XXSetConstantBuffers, or create a single function called "SetConstantBuffer" and pass in an enum, i.e. ST_VertexShader, etc. Just like in the D3D11 API, this function should take the constant buffer as a parameter, and the slot that it should go in.

You also should really change the way you pass in parameters: rather than passing a reference to a base interface class, pass in pointers. I don't know exactly what your purpose for the bool return values is, but your interfaces should be pure abstract classes, so all pure virtuals. For certain things, i.e. textures and vertex/index/constant buffers, the base class can implement the "GetWidth"/"GetHeight" functions and such, but besides that, don't make it harder than it needs to be. Look at the D3D11 API, specifically the Device/Context classes and all of the resource classes, and make a version of it for your engine.
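A minimal sketch of that single-function variant (ST_VertexShader as mentioned above; the other names are placeholders):

// One entry point instead of one function per XXSetConstantBuffers permutation;
// the D3D11 implementation switches on the stage and forwards to
// VSSetConstantBuffers, PSSetConstantBuffers, etc.
enum ShaderType { ST_VertexShader, ST_PixelShader, ST_GeometryShader };

class ICBuffer;

class IDeviceContext
{
public:
	virtual ~IDeviceContext() = default;

	// the constant buffer as a pointer, plus the slot it should go in
	virtual bool SetConstantBuffer(ShaderType pStage, ICBuffer *pCBuffer,
	                               unsigned int pSlot) = 0;
};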

Thanks both.

What I understand from this is that it's better to keep away from 'how' exactly D3D11 achieves a task and, in the I-classes, focus on what the task should do, because e.g. Vulkan or D3D12 might handle a task quite differently than D3D11.

@AThompson: clear on the viewport; I can nicely combine these things and will do so (IViewport, with setting it up, resizing etc., and also the prepare-render and present functions). IViewport will be a member of IRenderer, so it can access the IContext easily for the underlying functions.

@Chubu/ both: on the device vs device context, I'm not sure yet how to handle that.

When I add another API in the future, I can always keep to the idea that a device handles capabilities and resources, and the context handles the rest. In this case the context is not per se a technical type within the API, but a functional context, which for Vulkan or any other API can have a different implementation in an inherited VulkanDeviceContext class, with its own implementations of setting a vertex buffer, index buffer, shader or whatever.
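In code, I picture something like this (VkCommandBuffer and the vkCmd* calls are Vulkan's actual API; the rest is a hypothetical sketch on top of the IDeviceContext from my first post):

#include <vulkan/vulkan.h>

using uint = unsigned int;	// assuming the engine's typedef

// The same functional context, backed by Vulkan: identical interface, very
// different internals (it records commands instead of setting state directly).
class VulkanDeviceContext : public IDeviceContext
{
public:
	bool SetVertexBuffer(const IBuffer &pBuffer) override;	// vkCmdBindVertexBuffers
	bool SetIndexBuffer(const IBuffer &pBuffer) override;	// vkCmdBindIndexBuffer
	bool DrawIndexed(const uint pNrIndices, const uint pStartIndex,
	                 const uint pStartVtx) override;		// vkCmdDrawIndexed
	// (remaining IDeviceContext overrides omitted in this sketch)

private:
	VkCommandBuffer mCmdBuffer;	// Vulkan-only state stays in the backend
};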

Would you agree to that?


While it is important to prepare for Vulkan and D3D12, many of the basic tenets, when it comes to resources, are the same. A lot of the benefit of these new APIs is reducing driver overhead and putting all of the resource management in the hands of the developer. Your engine and the API still have a concept of Textures / RenderTargetViews / ShaderResourceViews / DepthStencilViews... so while the backend may create them in a different manner, it should still have classes to support them. The same can be said for many of the other features, like vertex buffers, etc. One of the main features that you will need to support is resource transitions, and command lists, but until you have a better grasp of abstraction, you probably shouldn't aim for an API that even many industry vets are having complications with.

- am I on the right track with 'abstraction' in my renderer?

this is your chance to design your dream API.

make your abstraction behave the way YOU want, not like dx11, 12, Vulkan, etc. then once you've defined your API, use dx 11, 12, etc to implement it.

might make working with your abstraction layer much simpler in the long run. most APIs are more complex than the needs of a given project or series of similar projects.

when i start using a new API, writing a wrapper is usually the first thing i do - to abstract away all that needless complexity.

my wrapper api for xaudio2 is only like 4 or 6 calls.

for d3d9 it can be as simple as:

Zinit_all // create window, start dx9 fullscreen, load meshes, textures, etc

Zclearscreen

Zclear_zbuf

Zclear_drawlist // clear render queue

Zbeginscene

Zd // draw a mesh, rigid body model, 2d billboard, or 3d billboard

Zdraw_drawlist // render the render queue

Zshowscene

Zshutdown

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Do you have an actual valid reason for wanting to abstract your renderer? Do you really need different D3D, Vulkan, OpenGL, etc. renderers? What is having those different renderers going to get you? If it is to allow cross-platform support, then just go OpenGL from the start.

Having an abstract renderer that is implemented differently on the back end still means each concrete implementation needs to produce the same visual results. So why not just do it once, with OpenGL? Or D3D, if you only care about Windows. I just never really understood the need for the abstract renderer, and generally most people who are interested in doing it don't really have a good reason for it. I used to be one of them.

Maybe you have a great reason and if so go for it, but you should really think if it is necessary.

Firstly; this is C++ - kill the defines.
Scope your values - be it via old-style enums, consts, or C++11 'enum class' - so things are sanely scoped.
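For example, the buffer-type and topology defines from the first post could become something like this (assuming the functions that take a uint are changed to take the enum):

// Scoped replacements for the VERTEX_BUFFER / TRIANGLE_LIST defines; no raw
// 0/1 constants leak into unrelated code, and the compiler checks the type.
enum class BufferType { Vertex, Index };
enum class Topology   { TriangleList, TriangleStrip };	// etc.

// usage, after changing CreateMeshBuffer to take the enum instead of a uint:
// IBuffer *buf = device->CreateMeshBuffer(BufferType::Vertex, data, size, dynamic, gpuWrite);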

Secondly; before you try to abstract ANYTHING, write some code using the plain APIs - you need to understand how they work before you can even think about designing an API at a higher level which will work with them.

Thirdly; no.. really, do the above first. You'll quickly see where things differ, where you'll have to fudge things for one API vs another and how their various memory models and concepts work.

YOU WILL NOT BE ABLE TO DESIGN A GOOD ABSTRACTION WITHOUT UNDERSTANDING THE ABOVE.

Fourth; design in terms of D3D12, Metal and Vulkan - OpenGL and D3D11 are much easier to implement in terms of them than the other way around.

Fifth; focus on what your classes need to do and only have them do one thing. Example; 'device' has a 'loadTexture' by name... no. no no no. 'device' shouldn't be loading shit. If you want a texture, ask your resource system; your resource system can then use its loader to load in the data, and the resource system then gets a texture from the device and provides it with data to fill it.
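A rough sketch of that flow (all names hypothetical):

// The resource system owns the loading; the device only creates the GPU
// object and is handed the data to fill it with.
ITexture* ResourceSystem::GetTexture(const std::string &pName)
{
	TextureData data = mLoader.LoadFromDisk(pName);			// file I/O stays out of 'device'
	ITexture *texture = mDevice->CreateTexture(data.desc);	// device just creates
	texture->Fill(data.pixels);								// and fills with the provided data
	return texture;
}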

Also nix things like 'create depthstencil' and other specialised functions; if you need to create a depth-stencil surface then treat it like any other surface/texture and create it via the same code paths. It'll catch bugs.
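The depth-stencil then goes through that very same path, e.g. (hypothetical names again):

TextureDesc desc;
desc.width  = 1280;
desc.height = 720;
desc.format = TextureFormat::D24S8;			// a depth-stencil format
desc.usage  = TextureUsage::DepthStencil;
ITexture *depthBuffer = device->CreateTexture(desc);	// no special-case function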

The whole IRenderer thing also doesn't really make sense; your renderer should be a high level construct which drives a lower level API in some way - a renderer deals with objects that need to be rendered via some abstract concept.

The renderer is the top level of what the game sees; you don't want an interface which deals with things like depthstencil buffers, swap chains and other rendering specific details - when you initialise your renderer you give it the place it draws during initialisation and let it, via a config setup, figure out what back buffers it needs to create etc.


var win = new Window(....);
render.initialise(win);  // output to the supplied window

'win' could be your own abstraction, an HWND or any other thing you want which the renderer knows about.

Indeed, with the correct design you could have a single renderer drive multiple windows with different cameras bound to different output targets.

The point being, a 'renderer' has no requirement to be a monolithic single class; instead it can be considered as layers, where the top layer is an interface which lets you add things to be rendered, and the lower levels are hidden and cover various workloads.

[Renderer]
[D3D12/Vulkan/Metal]--[D3D11/OpenGL]
[D3D12][VK][Metal]-----[D3D11][OpenGL]

could be one such layering.

Sixth; keep in mind your visibility levels - frankly, most graphics-related things don't need to leak outside of the graphics stack. Those that do don't always need to expose their full interface to the outside world.

Seventh; 'C' prefixes on classes are ugly bullshit and gain you nothing. Stop it.
On a side note; as asked above, why are you doing this?

Also; 'virtual' in your rendering hot path is terrible.
Just don't.

Either pick an API per supported platform and just compile the code to use it, or bundle up your backends into dynamic libraries and, when the user picks one, load up that library to use.
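For example (hypothetical macro and class names):

// Pick the backend per build; no virtual dispatch left in the rendering hot path.
#if defined(RENDERER_D3D11)
	using Device        = D3D11Device;
	using DeviceContext = D3D11DeviceContext;
#elif defined(RENDERER_GL)
	using Device        = GLDevice;
	using DeviceContext = GLDeviceContext;
#endif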

Virtual At The Rendering Level.
Not even once.

Nice to see that the opinions on the matter differ in lots of areas:

- do or don't abstract

- prepare for every API up front, or just start and keep the top layer 'functional' (user-bound, not API-bound)

I personally believe that style should not and will never be standardized; some like braces on new lines, some like C prefixes in class names, or different conventions for member variables vs parameter variables.

Thanks for all the opinions and input, which I'll use to make my decisions on how to continue.

My primary driver is learning and having fun, to which abstracting a renderer can contribute. I don't have much experience with APIs other than D3D (9 and 11), so I'm aware that I'll have to make some assumptions in the way I abstract (when I choose to do so).

For now I'll give it a first attempt and see where it brings me.

I believe that by keeping the front-end/user calls as 'functional' as possible, I reduce the risk on the backend, because the API-specific implementations can differ.


