Abstract away the API?


13 replies to this topic

#1 Batzer   Members   -  Reputation: 259


Posted 27 August 2014 - 11:28 AM

Hey guys,

 

With the upcoming Direct3D 12 I want to get ready to integrate the new API into my framework, so I thought of adding a new layer of abstraction on top of the existing code. But is this really the "right" way of doing things? The other problem is that some parts of my code can't really be abstracted that well, for example my Material and Texture classes:

class Texture {
    Texture(ID3D11ShaderResourceView* theTexture);
    ...
};

class Material {
    void AttachDiffuseMap(const Texture& map);
    ...
    void Bind(ID3D11DeviceContext* ctx, ...);
};

How would I make Texture::Texture and Material::Bind more abstract when they need something so API-specific? Are there any patterns for this kind of design problem?


Edited by Batzer, 27 August 2014 - 11:29 AM.



#2 Norman Barrows   Crossbones+   -  Reputation: 2369


Posted 27 August 2014 - 02:08 PM

if you want your high level code (your framework) to work with different low level libraries (DX11 or DX12), the typical approach is to define your own generic low level graphics API and write an abstraction layer that translates generic calls into library-specific calls. you want the generic API to be generic enough to work with all libraries of a given type if possible, i.e. generic enough that you could add OpenGL support, etc. then you write your high level code against the generic API, and the translation layer does the rest.
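A minimal sketch of what such a translation layer can look like; every name here (GfxBackend, MakeBackend, the backend classes) is invented for illustration rather than taken from any real library:

```cpp
// Hypothetical generic graphics API; high-level code only ever sees GfxBackend.
#include <memory>
#include <string>

struct GfxBackend {
    virtual ~GfxBackend() = default;
    virtual void Clear(float r, float g, float b) = 0;  // generic call
    virtual std::string Name() const = 0;
};

// One translation layer per library, mapping generic calls to API-specific ones.
struct D3D11Backend : GfxBackend {
    void Clear(float, float, float) override { /* would call ClearRenderTargetView() */ }
    std::string Name() const override { return "d3d11"; }
};

struct D3D12Backend : GfxBackend {
    void Clear(float, float, float) override { /* would record into a command list */ }
    std::string Name() const override { return "d3d12"; }
};

// The framework picks a backend once, then codes purely against GfxBackend.
std::unique_ptr<GfxBackend> MakeBackend(const std::string& which) {
    if (which == "d3d12") return std::make_unique<D3D12Backend>();
    return std::make_unique<D3D11Backend>();
}
```

The point is that adding another library later only means writing another backend class; no high-level code changes.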

 

the wrapper routines in my game development library sort of do this. parts of the generic API date back to the late '80s and remain unchanged to this day, despite the fact that it's been implemented in 4 versions of Borland Pascal, Watcom C, MS C++ versions from 4.0 through v2012, DirectX 8, 9, and 11, DOS 5, DOS 6, and Windows XP, 2000 Pro, Vista, and Win7. while the underlying OS, language, compiler version, and graphics library may change, the game library's API stays pretty much the same. getmouse(&x,&y,&b) gets the mouse position and button state; whether it's using DOS interrupts (as it originally did) or Windows messages (as it does now) is irrelevant to the game code calling it.


Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

 

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

 

 


#3 iedoc   Members   -  Reputation: 1061


Posted 27 August 2014 - 02:50 PM

What I do is have an IGraphics interface. For each API I implement, I create a class which implements the IGraphics interface, such as D3D11Graphics or D3D12Graphics.

 

When I initialize my app, I set a static global IGraphics object to an instance of one of the classes which implements it (I read from an XML file, so the graphics API can be changed through an options screen).

 

The Texture class does not load the texture itself; it calls a method on the graphics interface to load it. The texture only stores a string which identifies it. So instead of passing an ID3D11ShaderResourceView to the constructor of the texture, I pass in a string, which is the name (or location) of the texture. The texture then tells the graphics interface to load the texture, which in turn tells the resource manager to load it. The resource manager needs separate methods for each type of resource, so there would be a load method for D3D11 texture resources and another for D3D12 ones. The resource manager keeps a hash table of the textures, so you just pass in the name of the texture to get a pointer to the actual data. The graphics interface knows which one it needs to call to load the resource.

 

You could do this without the resource manager, though: just load and store the data inside the graphics interface. A resource manager is nice to have, though.

 

Since we want to keep the graphics API separate from the rest of the app, only the graphics implementation (and the resource manager, if there is one) needs to know about ID3D11ShaderResourceView. When you are binding a texture, for example, you pass the texture object containing the texture's string (and any other information) to a method of the graphics interface class, which calls a method of the resource manager with that string to get a pointer to the actual texture resource.

// global
IGraphics* graphics;

int main()
{
    graphics = new D3D11Graphics();
}

class IGraphics
{
public:
    virtual void LoadTexture(const std::string& path) = 0;
    virtual void BindTexture(const Texture& texture) = 0;
};

class D3D11Graphics : public IGraphics
{
public:
    void LoadTexture(const std::string& path) override
    {
        resourceManager.D3D11LoadTexture(path); // loads the ID3D11ShaderResourceView and stores it in a hash table
    }

    void BindTexture(const Texture& texture) override
    {
        deviceContext.BindTexture(resourceManager.GetD3D11Texture(texture.name)); // however you bind the texture using the device context, all off the top of my head right now
    }
};

class D3D12Graphics : public IGraphics
{
public:
    void LoadTexture(const std::string& path) override
    {
        resourceManager.D3D12LoadTexture(path); // loads the D3D12 texture resource and stores it in a hash table
    }

    void BindTexture(const Texture& texture) override
    {
        deviceContext.BindTexture(resourceManager.GetD3D12Texture(texture.name));
    }
};

class Texture
{
public:
    std::string name;

    Texture(const std::string& path)
    {
        graphics->LoadTexture(path);
        this->name = path;
    }
};

Edited by iedoc, 27 August 2014 - 02:55 PM.

Braynzar Soft - DirectX Lessons & Game Programming Resources!

#4 L. Spiro   Crossbones+   -  Reputation: 14434


Posted 27 August 2014 - 03:09 PM

As mentioned, abstraction is the only sane thing to do in the face of multiple graphics API’s.
 

How would I make Texture::Texture and Material::Bind more abstract, when they need something very specific, any patterns for this kind of design problem?

Your “abstraction” looks to be merely a partial RAII wrapper around textures, and it is definitely not useful for porting to any other API, because you rely on external code to create the actual resource for you and it is too low-level.

Abstract wrappers do more than just hold a pre-created resource—they create and destroy the resource themselves, and the whole wrapper system should carry more related data than just that. They are somewhat high-level.

The best solution I have found is a small tree of classes:
CTexture2dBase // Base class for all textures.  Holds the number of mipmaps, array slices, multi-sample count/quality, width, height, and unique ID.

//*********
//*********

// An API-specific class inherits from the base class and adds API-specific code and members.
CDirectX11Texture2d : public CTexture2dBase // ID3D11Texture2D *, ID3D11RenderTargetView *, ID3D11ShaderResourceView *
CDirectX9Texture2d : public CTexture2dBase // IDirect3DTexture9 *
COpenGlTexture2d : public CTexture2dBase // GLuint
COpenGlEsTexture2d : public CTexture2dBase // GLuint
// Each API class implements a protected set of methods such as CreateApiTexture( … ) and ActivateApi( ui32Slot ).

//*********
//*********

// The top-level class inherits from one of the API classes.
CTexture2d : public
#if defined( LSG_DX11 )
    CDirectX11Texture2d
#elif defined( LSG_DX9 )
    CDirectX9Texture2d
#elif defined( LSG_OGL )
    COpenGlTexture2d
#elif defined( LSG_OGLES )
    COpenGlEsTexture2d
#endif

// You only create CTexture2d classes.  They automatically generate, manage, and destroy their own API resources.
// You should never have to create a resource externally and pass it in to a texture etc.  It’s the texture’s job to handle its own resources.
Here is an example of Direct3D 11’s CreateApiTexture().
Spoiler

This is called by CTexture2d::CreateTexture( const CImage * _piImages, LSUINT32 _ui32Total ), after it does common texture-creation routines such as setting the width and height, and performing common checks to ensure at least 1 array slice, all array slices are the same size, etc.
 
 
Of note is that there is no virtual inheritance and no overhead related to virtual calls.  Only one API class can exist in a given build, so there is no need.
If a base API class doesn’t implement a necessary method, there will be compiler errors, so there’s no risk that any required method will be overlooked.
 
 
The same method should be used on index buffers, vertex buffers, and shaders.  You never create or access CDirectX11IndexBuffer directly, only CIndexBuffer.
 
 
L. Spiro

Edited by L. Spiro, 27 August 2014 - 07:48 PM.

It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#5 imoogiBG   Members   -  Reputation: 1247


Posted 27 August 2014 - 04:15 PM

In practice it is not that simple.

 

I'm wrapping D3D11 and GLES. And it is not that easy because:

 

A texture in GLES means:

texture data + a sampler state, which is directly bindable to the pipeline.

 

A texture in D3D11 means:

the texture resource itself, without any sampler state or any information about how you bind the resource to the pipeline.

 

Another difference is the way you bind it:

in OpenGL, textures are bound as if they were regular uniforms (keep the sampler in mind). In D3D11, textures are bound to a specific slot for a specific shading stage, and you have to manually create a sampler and bind it to the proper location.

 

My advice is :

Do not blindly try to implement an abstraction; you need to know all the APIs that you want to use very well.


Edited by imoogiBG, 27 August 2014 - 04:17 PM.


#6 Ravyne   GDNet+   -  Reputation: 8187


Posted 27 August 2014 - 04:48 PM

Abstraction is a sane approach, but a complicating factor here is that D3D12 is quite a lot different from D3D11 -- more different from D3D11 than OpenGL is, in many ways. The threading model and the more manual resource management are very different from previous graphics APIs. In practice, this means it will probably be impractical to unify the two styles under a single low-level or mid-level abstraction; there's just not enough wiggle-room there to hammer out the differences -- I mean, you could almost certainly implement a D3D11-style low-level API on D3D12, but you'd lose many, if not most, of the benefits of D3D12 in doing so.

 

A better approach would probably be to have a higher-level abstraction that's possibly based on multiple lower-level abstractions itself -- think of one low/mid-level graphics abstraction over D3D10/11/OpenGL, and another low/mid-level abstraction over D3D12/Mantle/Console APIs, and both of those abstracted together under a higher-level rendering API.



#7 L. Spiro   Crossbones+   -  Reputation: 14434


Posted 27 August 2014 - 07:41 PM

A texture in GLES means:
texture data + a sampler state, which is directly bindable to the pipeline.

You’re looking at it wrong.
In OpenGL ES you need to “emulate” samplers by allowing the user to set sampler states on the device (via your own sampler structures you provide through your library) which, like in any other API, can be set at any time. All it does is set a pointer or make a copy, the same as when you set a texture. Nothing happens until the moment you are ready to render.

When it comes time to perform a render, first all the textures that were set are “officially” set in OpenGL ES.
Then your graphics device goes over all the samplers that have been set (remembering that this is sampler data inside your custom structure).
You basically create a structure for OpenGL ES that exactly matches D3D11_SAMPLER_DESC and call ::glTexParameteri() and friends each frame.

Direct3D 11:
Spoiler


OpenGL ES 2.0:
Spoiler



You have effectively added samplers to OpenGL ES.
The overhead of always setting sampler-state information can be minimized by keeping track, per texture, of which sampler states have been set and not re-applying them, but I have omitted that here for simplicity’s sake.
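A rough sketch of that per-texture tracking, assuming a hypothetical SamplerDesc struct shaped loosely after D3D11_SAMPLER_DESC; the applyFn callback stands in for the real ::glTexParameteri() calls:

```cpp
#include <functional>

// Reduced stand-in shaped after D3D11_SAMPLER_DESC (a few fields only).
struct SamplerDesc {
    int minFilter = 0, magFilter = 0, wrapS = 0, wrapT = 0;
    bool operator==(const SamplerDesc& o) const {
        return minFilter == o.minFilter && magFilter == o.magFilter &&
               wrapS == o.wrapS && wrapT == o.wrapT;
    }
};

// Remembers the last sampler state applied to one GLES texture and skips
// redundant re-application.
class TextureSamplerCache {
    SamplerDesc last_;
    bool valid_ = false;
public:
    // Returns true if applyFn ran, i.e. the state actually had to change.
    bool Apply(const SamplerDesc& d,
               const std::function<void(const SamplerDesc&)>& applyFn) {
        if (valid_ && d == last_) return false;  // same state: issue no GL calls
        applyFn(d);                              // real code: ::glTexParameteri() etc.
        last_ = d;
        valid_ = true;
        return true;
    }
};
```

With one such cache per texture object, repeated renders with an unchanged sampler cost only a comparison.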

Additionally, you should not shy away from emulating proper samplers in OpenGL ES, because even Khronos admits that strongly associating them with textures was a mistake, and as of OpenGL 3.3 proper sampler objects are provided.


L. Spiro

Edited by L. Spiro, 27 August 2014 - 07:47 PM.

It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#8 imoogiBG   Members   -  Reputation: 1247


Posted 28 August 2014 - 12:31 AM

 

 

You’re looking at it wrong.
In OpenGL ES you need to “emulate” samplers by allowing the user to set sampler states on the device (via your own sampler structures you provide through your library) which, like in any other API, can be set at any time.

 

I totally agree with you Spiro. I was just trying to point at some non-trivial parts of the code.

Actually I'm doing the opposite thing, and I'm not sure if this is the correct way: in D3D I'm emulating OpenGL-style textures (i.e. embedded sampler state). When you define a uniform texture in the shader, I implicitly create a sampler uniform. If I use a different sampler, the HLSL compiler will cut out the generated sampler state and there will be no overhead.



#9 Hodgman   Moderators   -  Reputation: 32000


Posted 28 August 2014 - 01:31 AM

Actually I'm doing the opposite thing, and I'm not sure if this is the correct way: in D3D I'm emulating OpenGL-style textures (i.e. embedded sampler state). When you define a uniform texture in the shader, I implicitly create a sampler uniform. If I use a different sampler, the HLSL compiler will cut out the generated sampler state and there will be no overhead.

On modern GPUs, the D3D10+/GL3+ way of using samplers is slightly more optimal than the D3D9/GL2/GLES way.
When submitting a draw-call, the driver has to allocate a block of memory and copy into it all of the 'information' that the shaders used by that draw-call will use. This contains the header structures for buffers & textures (including the pointer to the buffer/texture data), and the structures that define samplers.
With the D3D10/GL3 model, you can greatly reduce the number of sampler structures that are used by the shader by sharing one sampler between many textures. If you emulate the D3D9/GL2 model, you're forced to have one sampler for every one texture, which results in each shader having a larger 'information' packet, which means slightly more work for the CPU as the driver produces these packets, and more work on the GPU as the pixels/vertices all download/read the larger packets from memory.
 
So if you're going to pick one abstraction, I would emulate D3D11 samplers on GL, instead of emulating GL samplers on D3D11 :)

Abstraction is a sane approach, but a complicating factor here is that D3D12 is quite a lot different from D3D11 -- more different from D3D11 than OpenGL is, in many ways. The threading model and the more manual resource management are very different from previous graphics APIs. In practice, this means it will probably be impractical to unify the two styles under a single low-level or mid-level abstraction; there's just not enough wiggle-room there to hammer out the differences -- I mean, you could almost certainly implement a D3D11-style low-level API on D3D12, but you'd lose many, if not most, of the benefits of D3D12 in doing so.

The big difference is slot-based APIs vs bindless APIs.
In my abstract API, it's mostly slot-based, but with a few compromises. Instead of having "texture slots" in the API, I instead expose "resource-list slots". The user can create a resource-list object, which contains an array of textures/buffers, and then they can bind that entire list to an API slot.
This maps fairly naturally to both bindless and slot-based APIs (a shader-program in a slot-based API will reserve a range of contiguous slots for each resource-list used by that shader, and, a shader-program in a bindless API will just define that structure).
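The resource-list idea above can be sketched roughly as follows; all types and the slot count are hypothetical, and real handles would be backend-defined descriptors rather than plain integers:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using ResourceHandle = std::uint32_t;  // texture or buffer; backend-defined in reality

// An ordered array of textures/buffers that is bound to the pipeline as one unit.
struct ResourceList {
    std::vector<ResourceHandle> resources;
};

// The device exposes a few resource-list slots instead of many texture slots.
class Device {
    static constexpr int kListSlots = 4;
    const ResourceList* bound_[kListSlots] = {};
public:
    void BindResourceList(int slot, const ResourceList* list) { bound_[slot] = list; }

    // On a slot-based API this would expand into a contiguous range of texture
    // slots reserved by the shader; on a bindless API it would just reference
    // the list's descriptor table.
    std::size_t BoundCount(int slot) const {
        return bound_[slot] ? bound_[slot]->resources.size() : 0;
    }
};
```

The user-facing API stays slot-based (a handful of list slots), while each backend decides how a bound list is actually realized.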

For things like samplers, you hardly ever use a large number of them, so a slot-based API works fine (and can be implemented on bindless APIs fairly efficiently).

As long as you're aware of all the underlying APIs when designing your abstraction, you can come up with something that's still very low-level but also fairly efficiently implemented across the board.

Personally, I have "techniques" (shader-program objects with permutations), resource-lists, samplers, cbuffers, state-groups, passes (render-target/viewports) and draw-calls using the same abstraction across PS3/4, Xbox 360/One, D3D9/11, GL and Mantle :D


Edited by Hodgman, 28 August 2014 - 01:50 AM.


#10 imoogiBG   Members   -  Reputation: 1247


Posted 28 August 2014 - 02:10 AM

Thanks Hodgman, 

But I just cannot imagine implementing things the D3D11 way for GLES2 and OGL3+.

 

in D3D you bind samplers and use them explicitly in HLSL

 

hlsl:

Texture2D tex0, tex1;
SamplerState ss;
// in the shader body, uv being the texcoord:
tex0.Sample(ss, uv) + tex1.Sample(ss, uv);

in OGL3+ you bind a sampler *per texture slot*.

The sampler objects are *used implicitly*.

If I want to achieve the behaviour above in GLSL and OGL3+ I have to:

glBindSampler(tex0Location, ss);
glBindSampler(tex1Location, ss);

glsl:

texture(tex0, uv) + texture(tex1, uv) // both sampled with ss

Probably there is something that I'm missing here...

 

PS:
Batzer, excuse me for stealing your topic.


Edited by imoogiBG, 28 August 2014 - 02:19 AM.


#11 Hodgman   Moderators   -  Reputation: 32000


Posted 28 August 2014 - 03:03 AM

D3D9 also works the same way as the GL code above.
To deal with this transparently on the shader side, you basically need to generate some of your shader code... I use macros to sample from textures, which are #defined differently on each platform, and I generate the definitions of buffers/textures/samplers.
e.g. Instead of "SamplerState ss : register(s0);", I write "/*[FX] Sampler(0, "ss") */".
The "FX" comments are Lua code, which is executed and used to generate the actual definitions for each platform before the code is compiled/shipped.

When defining a texture, the shader author needs to declare all of the sampler names that they wish to use with that texture.
On D3D11 this information isn't used, but on the older APIs like D3D9/GLES, you can generate one "texture" declaration for each texture+sampler combination declared by the shader author.
i.e. If the shader author says that they want to use the "Linear" sampler and the "Point" sampler with texture "Tex0", then on GLES you can emit code for a Tex0_Linear variable and a Tex0_Point variable.

Your abstract API can look just like D3D11 where you just bind "Tex0" once, but internally, using the shader's meta-data, the engine can actually be binding two Texture+Sampler objects.
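The meta-data-driven declaration generation described above might be sketched like this; the TextureDecl type and EmitDeclarations function are invented for illustration:

```cpp
#include <string>
#include <vector>

// One texture plus the sampler names the shader author declared for it.
struct TextureDecl {
    std::string name;                   // e.g. "Tex0"
    std::vector<std::string> samplers;  // e.g. {"Linear", "Point"}
};

// D3D11-style targets get one plain texture declaration; GLES-style targets
// get one combined texture+sampler variable per declared pair.
std::string EmitDeclarations(const TextureDecl& t, bool combinedSamplers) {
    std::string out;
    if (!combinedSamplers) {
        out += "Texture2D " + t.name + ";\n";
    } else {
        for (const std::string& s : t.samplers)
            out += "uniform sampler2D " + t.name + "_" + s + ";\n";
    }
    return out;
}
```

The same per-shader meta-data then tells the engine which combined variables a single "bind Tex0" call must actually update on the older APIs.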

This use-case is rare though - it's much more common to bind one sampler and many textures (which is supported the same way - internally the engine actually binds many combined Texture+Sampler slots) :D

#12 Batzer   Members   -  Reputation: 259


Posted 28 August 2014 - 08:41 AM

Damn, this seems to be much more work than I initially thought. Thanks for your input guys. Maybe I will just stick with D3D11 for now :D

The one problem that I find with all these solutions is that the little optimizations you can do with the specific APIs can't really be applied when you abstract them all into one unified API. My current idea is to implement the high-level classes directly in the different APIs. So I would provide a Mesh class which is implemented either in D3D11 or in D3D12. This way, I imagine, the little optimizations can still be used.

class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual std::unique_ptr<IMesh> CreateMesh(...) = 0;
};
class Renderer11 : public IRenderer {
public:
    std::unique_ptr<IMesh> CreateMesh(...) override;
};
class Renderer12 : public IRenderer {
public:
    std::unique_ptr<IMesh> CreateMesh(...) override;
};

class IMesh {
public:
    virtual ~IMesh() = default; // needed so deleting through IMesh* is well-defined
};
class Mesh11 : public IMesh {};
class Mesh12 : public IMesh {};

// and so on

The only problem here is how to abstract away the ID3DxxDeviceContext, which would be needed for drawing the mesh, without inventing a new API for it.


Edited by Batzer, 28 August 2014 - 09:34 AM.


#13 haegarr   Crossbones+   -  Reputation: 4604


Posted 28 August 2014 - 09:47 AM

I have decoupled the engine-side resources from the graphics-API-side resources as follows:

 

The engine manages a GraphicServices object that has a couple of indexable containers, one for each type of resource, like so:

    ResourceBin<Texture> binTextures;

 

Whenever the rendering process generates a graphics rendering command that refers to a resource, not a pointer but the index is stored within the command. When the low-level, graphics-API-dependent graphics device processes the command and hits the index, it looks into its local records of equivalent objects using exactly the same index. E.g., assuming that we deal with a texture object in the OpenGL-wrapping graphics device:

    OGLTexture* texture = objTextures[ cmd->idxTexture ];

If the texture object is null, then the graphic device accesses the original resource, like so

    Texture* origTexture = services->binTextures[ cmd->idxTexture ];

and prepares a new dependent OGLTexture object using the original texture data and the graphic API routines.

 

This way there is no direct referencing from the independent to the dependent resources, especially no inheritance, but there is still an indirect one-to-one relation at the object level. It allows class sub-typing on the engine side to be independent of class sub-typing on the graphics-API-wrapping side.
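A compressed sketch of this lazy, index-matched pairing; all class and member names are invented here, and the real version would upload the texture through the graphics API instead of computing a dummy handle:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Texture { int width = 0, height = 0; };  // engine-side, API-independent
struct OGLTexture { int apiHandle = 0; };       // API-side twin

template <class T>
struct ResourceBin {                            // indexable container per resource type
    std::vector<T> items;
};

class OGLDevice {
    std::vector<std::unique_ptr<OGLTexture>> objTextures_;  // same indices as the bin
public:
    // Resolve a command's texture index, creating the API object on first use.
    OGLTexture* Resolve(std::size_t idx, const ResourceBin<Texture>& bin) {
        if (objTextures_.size() <= idx) objTextures_.resize(idx + 1);
        if (!objTextures_[idx]) {
            const Texture& orig = bin.items[idx];    // original engine data
            auto t = std::make_unique<OGLTexture>();
            t->apiHandle = orig.width * orig.height; // stand-in for a real GL upload
            objTextures_[idx] = std::move(t);
        }
        return objTextures_[idx].get();
    }
};
```

Both sides grow independently; the shared index is the only coupling between them.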



#14 phantom   Moderators   -  Reputation: 7593


Posted 28 August 2014 - 10:19 AM

The only problem here is how to abstract away the ID3DxxDeviceContext, which would be needed for drawing the mesh, without inventing a new API for it.


Do not have the mesh draw itself; instead have the renderer subsystem ask the meshes for their data (buffer IDs, material IDs, draw counts, etc.) and have the renderer draw the object.

A centralised list also brings the benefit of being able to centralise your state sorting (and use smaller objects for it) AND makes collapsing draws for instancing much easier.
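A small sketch of that arrangement, with every type and field hypothetical: meshes only describe themselves, and the renderer keeps the centralised, sortable list and issues the draws:

```cpp
#include <algorithm>
#include <vector>

// The small, sortable record the renderer keeps per submitted mesh.
struct DrawItem {
    int materialId = 0;
    int vertexBufferId = 0;
    int indexCount = 0;
};

// A mesh never draws itself; it only hands out its data.
struct Mesh {
    int materialId, vertexBufferId, indexCount;
    DrawItem Describe() const { return {materialId, vertexBufferId, indexCount}; }
};

class Renderer {
public:
    std::vector<DrawItem> items;

    void Submit(const Mesh& m) { items.push_back(m.Describe()); }

    // Centralised list: sort by material to minimise state changes, then draw.
    int Flush() {
        std::sort(items.begin(), items.end(),
                  [](const DrawItem& a, const DrawItem& b) {
                      return a.materialId < b.materialId;
                  });
        int drawCalls = 0;
        for (const DrawItem& it : items) {
            (void)it;  // real code: set state for it.materialId, then draw
            ++drawCalls;
        }
        items.clear();
        return drawCalls;
    }
};
```

Because the renderer sees every item before drawing, adjacent items with identical state can also be collapsed into instanced draws here.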



