Abstract away the API?


Hey guys,

with the upcoming Direct3D 12 I want to get ready to integrate the new API into my framework, so I thought of adding a new layer of abstraction on top of the existing code. But is this really the "right" way of doing things? Another problem is that some things in my code can't really be abstracted that well, for example my Material & Texture classes:


class Texture {
    Texture(ID3D11ShaderResourceView* theTexture);
    ...
};

class Material {
    void AttachDiffuseMap(const Texture& map);
    ...
    void Bind(ID3D11DeviceContext* ctx, ...);
};

How would I make Texture::Texture and Material::Bind more abstract when they need something very specific? Are there any patterns for this kind of design problem?


If you want your high-level code (your framework) to work with different low-level libraries (D3D11 or D3D12), the typical approach is to define your own generic low-level graphics API and write an abstraction layer that translates generic calls into library-specific calls. You want the generic API to be generic enough to work with all libraries of a given type if possible, i.e. generic enough that you could add OpenGL support, etc. Then you write your high-level code against the generic API, and the translation layer does the rest.

The wrapper routines in my game development library sort of do this. Parts of the generic API date back to the late '80s and remain unchanged to this day, despite the fact that it's been implemented in four versions of Borland Pascal, Watcom C, MS C++ versions from 4.0 through 2012, DirectX 8, 9, and 11, DOS 5, DOS 6, and Windows XP, 2000 Pro, Vista, and Win7. While the underlying OS, language, compiler version, and graphics library may change, the game library's API stays pretty much the same: getmouse(&x,&y,&b) gets the mouse position and button state, and whether it uses DOS interrupts (as it originally did) or Windows messages (as it does now) is irrelevant to the game code calling it.
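A minimal sketch of the idea, assuming a Win32 build target (the platform macro, g_hWnd, and the DOS fallback here are illustrative assumptions, not the actual library internals):

// Sketch only: the game always calls getmouse(); the translation layer
// decides what that means on the current platform.
#if defined(PLATFORM_WIN32)                 // assumed macro name
#include <windows.h>
extern HWND g_hWnd;                         // assumed: the game window
#endif

void getmouse( int * x, int * y, int * b )
{
#if defined(PLATFORM_WIN32)
    POINT pt;
    GetCursorPos( &pt );                    // real Win32 calls
    ScreenToClient( g_hWnd, &pt );
    *x = (int)pt.x;
    *y = (int)pt.y;
    *b = (GetAsyncKeyState( VK_LBUTTON ) & 0x8000) ? 1 : 0;
#else
    // Under DOS this was mouse interrupt 33h; stubbed out here.
    *x = *y = *b = 0;
#endif
}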

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

What I do is have an IGraphics interface. For each API I implement, I create a class which implements the IGraphics interface, e.g. D3D11Graphics or D3D12Graphics.

When I initialize my app, I set a static global IGraphics object to an instance of one of the classes which implements it (I read from an XML file, so the graphics API can be changed through an options screen).

The texture does not load itself; it calls a method on the graphics interface to load it. The texture only stores a string which identifies it. So instead of passing an ID3D11ShaderResourceView to the constructor of the texture, I pass in a string, which is the name (or location) of the texture. The texture then tells the graphics interface to load it, which in turn tells the resource manager to load it. The resource manager needs separate methods for each type of resource, so there would be one load method for an ID3D11ShaderResourceView and one for the D3D12 equivalent. The resource manager keeps a hash table of the textures, so you just pass in the name of a texture to get a pointer to the actual data. The graphics interface knows which method it needs to call to load the resource.

You could do this without the resource manager, though; just load and store the data inside the graphics interface. A resource manager is nice to have, though.

Since we want to keep the graphics API separate from the rest of the app, only the graphics implementation needs to know about ID3D11ShaderResourceView (and the resource manager, if there is one). When you are binding a texture, for example, you pass the texture object containing the texture's string (and any other information) to a method on the graphics interface class, which calls a method on the resource manager with that string to get a pointer to the actual texture resource.


#include <string>

class Texture;

// The abstract interface the rest of the app codes against.
class IGraphics
{
public:
    virtual ~IGraphics() {}
    virtual void LoadTexture(const std::string& path) = 0;
    virtual void BindTexture(const Texture& texture) = 0;
};

// Global, set once at startup.
IGraphics* graphics = nullptr;

class Texture
{
public:
    std::string name;

    explicit Texture(const std::string& path)
        : name(path)
    {
        graphics->LoadTexture(path);
    }
};

class D3D11Graphics : public IGraphics
{
public:
    void LoadTexture(const std::string& path) override
    {
        // Loads the ID3D11ShaderResourceView and stores it in a hash table.
        resourceManager.D3D11LoadTexture(path);
    }

    void BindTexture(const Texture& texture) override
    {
        // However you bind the texture using the device context, all off the top of my head right now.
        deviceContext.BindTexture(resourceManager.GetD3D11Texture(texture.name));
    }
};

class D3D12Graphics : public IGraphics
{
public:
    void LoadTexture(const std::string& path) override
    {
        // Loads the D3D12 equivalent and stores it in a hash table.
        resourceManager.D3D12LoadTexture(path);
    }

    void BindTexture(const Texture& texture) override
    {
        deviceContext.BindTexture(resourceManager.GetD3D12Texture(texture.name));
    }
};

int main()
{
    graphics = new D3D11Graphics();
}
As mentioned, abstraction is the only sane thing to do in the face of multiple graphics APIs.

How would I make Texture::Texture and Material::Bind more abstract when they need something very specific? Are there any patterns for this kind of design problem?

Your “abstraction” looks to be merely a partial RAII wrapper around textures, and it is definitely not useful for porting to any other API, because you rely on external code to create the actual resource for you and it is too low-level.

Abstract wrappers do more than just hold a pre-created resource; they create and destroy the resource themselves, and the whole wrapper system should carry more associated data than just that. They are somewhat high-level.

The best solution I have found is a small tree of classes:
CTexture2dBase // Base class for all textures.  Holds the number of mipmaps, array slices, multi-sample count/quality, width, height, and unique ID.

//*********
//*********

// An API-specific class inherits from the base class and adds API-specific code and members.
CDirectX11Texture2d : public CTexture2dBase // ID3D11Texture2D *, ID3D11RenderTargetView *, ID3D11ShaderResourceView *
CDirectX9Texture2d : public CTexture2dBase // IDirect3DTexture9 *
COpenGlTexture2d : public CTexture2dBase // GLuint
COpenGlEsTexture2d : public CTexture2dBase // GLuint
// Each API class implements a protected set of methods such as CreateApiTexture( … ) and ActivateApi( ui32Slot ).

//*********
//*********

// The top-level class inherits from one of the API classes.
CTexture2d : public
#if defined( LSG_DX11 )
    CDirectX11Texture2d
#elif defined( LSG_DX9 )
    CDirectX9Texture2d
#elif defined( LSG_OGL )
    COpenGlTexture2d
#elif defined( LSG_OGLES )
    COpenGlEsTexture2d
#endif

// You only create CTexture2d classes.  They automatically generate, manage, and destroy their own API resources.
// You should never have to create a resource externally and pass it in to a texture etc.  It’s the texture’s job to handle its own resources.
Here is an example of Direct3D 11’s CreateApiTexture().
[spoiler]
	LSBOOL LSE_CALL CDirectX11Texture2d::CreateApiTexture( const CImage * _piImages, LSUINT32 _ui32Total ) {
		// Create the texture.
		D3D11_TEXTURE2D_DESC tdDec = { 0 };
		tdDec.Width = _piImages[0].GetWidth();
		tdDec.Height = _piImages[0].GetHeight();
		tdDec.MipLevels = _piImages[0].TotalMipLevels();
		tdDec.ArraySize = _ui32Total;
		tdDec.Format = Direct3DFormat( _piImages[0].GetFormat() );
		tdDec.SampleDesc.Count = 1;
		tdDec.Usage = static_cast<D3D11_USAGE>(m_ui32Usage & 0xFF);
		tdDec.BindFlags = D3D11_BIND_SHADER_RESOURCE | ((m_ui32Usage & LSG_TUT_RENDERTARGET) ? D3D11_BIND_RENDER_TARGET : 0);
		if ( FAILED( CDirectX11::GetDirectX11Device()->CreateTexture2D( &tdDec, NULL, &m_ptTexture ) ) ) { return false; }


		// Create the shader resource view.
		D3D11_SHADER_RESOURCE_VIEW_DESC srvdShaderViewDesc;
		srvdShaderViewDesc.Format = tdDec.Format;
		srvdShaderViewDesc.ViewDimension = tdDec.ArraySize == 1 ? D3D11_SRV_DIMENSION_TEXTURE2D : D3D11_SRV_DIMENSION_TEXTURE2DARRAY;
		srvdShaderViewDesc.Texture2D.MipLevels = tdDec.MipLevels;
		srvdShaderViewDesc.Texture2D.MostDetailedMip = 0;
		if ( FAILED( CDirectX11::GetDirectX11Device()->CreateShaderResourceView( m_ptTexture, &srvdShaderViewDesc, &m_psrvShaderView ) ) ) { return false; }

		// Create the render target view.
		if ( m_ui32Usage & LSG_TUT_RENDERTARGET ) {
			D3D11_RENDER_TARGET_VIEW_DESC rtvdRenderTargetViewDesc;
			rtvdRenderTargetViewDesc.Format = tdDec.Format;
			rtvdRenderTargetViewDesc.ViewDimension = tdDec.ArraySize == 1 ? D3D11_RTV_DIMENSION_TEXTURE2D : D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
			rtvdRenderTargetViewDesc.Texture2D.MipSlice = 0;
			if ( FAILED( CDirectX11::GetDirectX11Device()->CreateRenderTargetView( m_ptTexture, &rtvdRenderTargetViewDesc, &m_rtvRenderTargetView ) ) ) { return false; }
		}


		// Copy the image data into the texture.
		if ( tdDec.Usage == D3D11_USAGE_DEFAULT ) {	// D3D11_USAGE_DEFAULT is 0, so this must be an equality test, not a bitwise one.
			// Default-usage textures cannot be mapped directly; have to create a staging texture.
			ID3D11Texture2D * ptStaging = NULL;
			tdDec.Usage = D3D11_USAGE_STAGING;
			tdDec.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
			tdDec.BindFlags = 0;
			if ( FAILED( CDirectX11::GetDirectX11Device()->CreateTexture2D( &tdDec, NULL, &ptStaging ) ) ) { return false; }

			if ( !CreateMips( ptStaging, _piImages, _ui32Total ) ) {
				CDirectX11::SafeRelease( ptStaging );
				return false;
			}

			CDirectX11::GetDirectX11Context()->CopyResource( m_ptTexture, ptStaging );
			CDirectX11::SafeRelease( ptStaging );
			return true;
		}
		if ( !CreateMips( m_ptTexture, _piImages, _ui32Total ) ) { return false; }
		return true;
	}
[/spoiler]
This is called by CTexture2d::CreateTexture( const CImage * _piImages, LSUINT32 _ui32Total ) after it performs the common texture-creation work, such as setting the width and height and running common checks (at least 1 array slice, all array slices the same size, etc.).
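For illustration, that top-level flow might look roughly like this; everything except CreateApiTexture() and the CImage accessors is an assumption based on the description above, not the actual library code:

LSBOOL LSE_CALL CTexture2d::CreateTexture( const CImage * _piImages, LSUINT32 _ui32Total ) {
    if ( !_ui32Total ) { return false; }                // At least 1 array slice.
    for ( LSUINT32 I = 1; I < _ui32Total; ++I ) {       // All slices the same size.
        if ( _piImages[I].GetWidth() != _piImages[0].GetWidth() ||
            _piImages[I].GetHeight() != _piImages[0].GetHeight() ) { return false; }
    }
    m_ui32Width = _piImages[0].GetWidth();              // Assumed base-class members.
    m_ui32Height = _piImages[0].GetHeight();
    return CreateApiTexture( _piImages, _ui32Total );   // API-specific part, shown above for D3D11.
}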


Of note is that there is no virtual inheritance and no overhead related to virtual calls; only one API class can exist in a given build, so there is no need.
If a base API class doesn't implement a necessary method, there will be compiler errors, so there's no risk that any required method will be overlooked.


The same method should be used on index buffers, vertex buffers, and shaders. You never create or access CDirectX11IndexBuffer directly, only CIndexBuffer.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

In practice it is not that simple.

I'm wrapping D3D11 and GLES, and it is not that easy, because:

A texture in GLES means:

texture data plus sampler state, which is directly bindable to the pipeline.

A texture in D3D11 means:

the texture resource itself, without any sampler state or information about how to bind the resource to the pipeline.

Another difference is the way you bind it:

In OpenGL, textures are bound as if they were regular uniforms (keep the sampler in mind); in D3D11, textures are bound to a specific slot for a specific shading stage, and you have to manually create a sampler and bind it to the proper location.
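To make that concrete, here is a side-by-side sketch; texId, prog, pContext, pSrv, and pSampler are placeholders assumed to have been created elsewhere:

// OpenGL (ES 2.0): sampler state lives on the texture object itself, and the
// shader sees the texture as a uniform holding a texture-unit index.
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, texId );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR ); // part of the texture
glUniform1i( glGetUniformLocation( prog, "tex0" ), 0 );             // unit 0

// Direct3D 11: texture and sampler are separate objects, each bound to an
// explicit slot of an explicit shader stage.
pContext->PSSetShaderResources( 0, 1, &pSrv );      // ID3D11ShaderResourceView*
pContext->PSSetSamplers( 0, 1, &pSampler );         // ID3D11SamplerState*, created separately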

My advice is: do not blindly try to implement an abstraction; you need to know all of the APIs that you want to use very well.

Abstraction is a sane approach, but a complicating factor here is that D3D12 is quite a lot different from D3D11 -- more different from D3D11 than OpenGL is, in many ways. The threading model and the much more manual resource management are very different from previous graphics APIs. In practice, this means it will probably be impractical to unify the two styles under a single low-level or mid-level abstraction; there's just not enough wiggle-room to hammer out the differences. I mean, you could almost certainly implement a D3D11-style low-level API on top of D3D12, but you'd lose many, if not most, of the benefits of D3D12 in doing so.

A better approach would probably be to have a higher-level abstraction that's possibly based on multiple lower-level abstractions itself -- think of one low/mid-level graphics abstraction over D3D10/11/OpenGL, and another low/mid-level abstraction over D3D12/Mantle/Console APIs, and both of those abstracted together under a higher-level rendering API.
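As a rough sketch of that layering (every name below is hypothetical):

// Two backend families with very different contracts...
class ISlotBackend {        // over D3D10/11/OpenGL: bind-to-slot model,
public:                     // driver-managed memory and synchronization.
    virtual ~ISlotBackend() {}
};

class IExplicitBackend {    // over D3D12/Mantle/console APIs: command lists,
public:                     // fences, manual resource residency.
    virtual ~IExplicitBackend() {}
};

// ...unified only at a higher level, where the renderer can phrase work in
// terms each family can implement efficiently.
class HighLevelRenderer {
    // Built on whichever backend family the platform provides.
};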

throw table_exception("(╯°□°)╯︵ ┻━┻");

A texture in GLES means:
texture data plus sampler state, which is directly bindable to the pipeline.

You’re looking at it wrong.
In OpenGL ES you need to “emulate” samplers by allowing the user to set sampler states on the device (via your own sampler structures, provided through your library) which, like in any other API, can be set at any time. All it does is set a pointer or make a copy, the same as when you set a texture. Nothing happens until the moment you are ready to render.

When it comes time to perform a render, first all the textures that were told to be set are “officially” set in OpenGL ES.
Then your graphics device goes over all the samplers that have been set (remembering that this is sampler data inside your custom structure).
You basically create a structure for OpenGL ES that matches D3D11_SAMPLER_DESC exactly and call ::glTexParameteri() and friends each frame.

Direct3D 11:
[spoiler]
	LSVOID LSE_CALL CDirectX11::ApplyRenderStates( LSBOOL _bForce ) {
		const LSG_RENDER_STATE & rsCurState = m_rsCurRenderState;
		LSG_RENDER_STATE & rsLastState = m_rsLastRenderState;

#define LSG_CHANGED( MEMBER )	((rsCurState.MEMBER != rsLastState.MEMBER) || _bForce)
#define LSG_UPDATE( MEMBER )	rsLastState.MEMBER = rsCurState.MEMBER

		// Sampler states.
		LSUINT32 ui32Index = 0, ui32Total = 0;
		LSUINT32 ui32Max = LSG_MAX_SHADER_SAMPLERS;
		for ( LSUINT32 I = 0; I <= ui32Max; ++I ) {
			// Only update what has changed since the last render call.
			if ( I < ui32Max && LSG_CHANGED( pssSamplers[I] ) ) {
				++ui32Total;
				LSG_UPDATE( pssSamplers[I] );
			}
			else {
				if ( ui32Total ) {
					// Simulate OpenGL ES 2.0 by setting samplers on all shader systems.
					GetDirectX11Context()->CSSetSamplers( ui32Index, ui32Total, &rsCurState.pssSamplers[ui32Index] );
					GetDirectX11Context()->DSSetSamplers( ui32Index, ui32Total, &rsCurState.pssSamplers[ui32Index] );
					GetDirectX11Context()->HSSetSamplers( ui32Index, ui32Total, &rsCurState.pssSamplers[ui32Index] );
					GetDirectX11Context()->GSSetSamplers( ui32Index, ui32Total, &rsCurState.pssSamplers[ui32Index] );
					GetDirectX11Context()->PSSetSamplers( ui32Index, ui32Total, &rsCurState.pssSamplers[ui32Index] );
					GetDirectX11Context()->VSSetSamplers( ui32Index, ui32Total, &rsCurState.pssSamplers[ui32Index] );
				}
				ui32Index = I + 1;
				ui32Total = 0;
			}
		}

		…

		// Textures.
		ui32Index = 0;
		ui32Total = 0;
		ui32Max = CGfxBase::m_mMetrics.ui32MaxTexSlot;
		for ( LSUINT32 I = 0; I <= ui32Max; ++I ) {
			// Only update what has changed since the last render call.
			if ( I < ui32Max && LSG_CHANGED( psrShaderResources[I] ) ) {
				++ui32Total;
				LSG_UPDATE( psrShaderResources[I] );
			}
			else {
				if ( ui32Total ) {
					// Simulate OpenGL ES 2.0 by setting textures on all shader systems.
					GetDirectX11Context()->CSSetShaderResources( ui32Index, ui32Total, &rsCurState.psrShaderResources[ui32Index] );
					GetDirectX11Context()->DSSetShaderResources( ui32Index, ui32Total, &rsCurState.psrShaderResources[ui32Index] );
					GetDirectX11Context()->HSSetShaderResources( ui32Index, ui32Total, &rsCurState.psrShaderResources[ui32Index] );
					GetDirectX11Context()->GSSetShaderResources( ui32Index, ui32Total, &rsCurState.psrShaderResources[ui32Index] );
					GetDirectX11Context()->PSSetShaderResources( ui32Index, ui32Total, &rsCurState.psrShaderResources[ui32Index] );
					GetDirectX11Context()->VSSetShaderResources( ui32Index, ui32Total, &rsCurState.psrShaderResources[ui32Index] );
				}
				ui32Index = I + 1;
				ui32Total = 0;
			}
		}

#undef LSG_UPDATE
#undef LSG_CHANGED
	}
[/spoiler]

OpenGL ES 2.0:
[spoiler]
/** Sampler description. */
typedef struct LSG_SAMPLER_DESC {
	LSG_FILTER									Filter;											/**< Filtering method to use when sampling a texture. */
	LSG_TEXTURE_ADDRESS_MODE					AddressU;										/**< Method to use for resolving a u texture coordinate that is outside the 0 to 1 range. */
	LSG_TEXTURE_ADDRESS_MODE					AddressV;										/**< Method to use for resolving a v texture coordinate that is outside the 0 to 1 range. */
	LSG_TEXTURE_ADDRESS_MODE					AddressW;										/**< Method to use for resolving a w texture coordinate that is outside the 0 to 1 range. */
	LSGREAL										MipLODBias;										/**< Offset from the calculated mipmap level. */
	LSUINT32									MaxAnisotropy;									/**< Clamping value used if LSG_F_ANISOTROPIC is specified in Filter. */
	LSG_COMPARISON_FUNC							ComparisonFunc;									/**< A function that compares sampled data against existing sampled data. */
	LSGREAL										BorderColor[4];									/**< Border color to use if LSG_TAM_BORDER is specified for AddressU, AddressV, or AddressW. Range must be between 0.0 and 1.0 inclusive. */
	LSGREAL										MinLOD;											/**< Lower end of the mipmap range to clamp access to, where 0 is the largest and most detailed mipmap level and any level higher than that is less detailed. */
	LSGREAL										MaxLOD;											/**< Upper end of the mipmap range to clamp access to, where 0 is the largest and most detailed mipmap level and any level higher than that is less detailed. This value must be greater than or equal to MinLOD. To have no upper limit on LOD set this to a large value. */
} * LPLSG_SAMPLER_DESC, * const LPCLSG_SAMPLER_DESC;


	LSVOID LSE_CALL COpenGlEs::ApplyRenderStates( LSBOOL _bForce ) {
		const LSG_RENDER_STATE & rsCurState = m_rsCurRenderState;
		LSG_RENDER_STATE & rsLastState = m_rsLastRenderState;

#define LSG_CHANGED( MEMBER )	((rsCurState.MEMBER != rsLastState.MEMBER) || _bForce)
#define LSG_UPDATE( MEMBER )	rsLastState.MEMBER = rsCurState.MEMBER

		…

		// Textures.  Must set before samplers.
		for ( LSUINT32 I = LSE_ELEMENTS( rsCurState.psrShaderResources ); I--; ) {
			if ( LSG_CHANGED( psrShaderResources[I] ) ) {
				ActivateTexture( I );
				if ( !rsCurState.psrShaderResources[I] ) {
					::glBindTexture( GL_TEXTURE_2D, 0 );
				}
				else {
					::glBindTexture( GL_TEXTURE_2D, (*rsCurState.psrShaderResources[I]) );
				}
				LSG_UPDATE( psrShaderResources[I] );
			}
		}

		// Basic states.
		SetSamplerStates( _bForce );		// Must be done after textures are set.
#undef LSG_UPDATE
#undef LSG_CHANGED
	}
	
	
	LSE_INLINE LSVOID LSE_FCALL COpenGlEs::SetSamplerStates( LSBOOL _bForce ) {
		for ( LSUINT32 I = m_mMetrics.ui32MaxTexSlot; I--; ) {
			SetSamplerState( I, _bForce );
		}
	}
	
	LSE_INLINE LSVOID LSE_FCALL COpenGlEs::SetSamplerState( LSUINT32 _ui32Slot, LSBOOL _bForce ) {
		// If no texture set and not forced, do nothing.
		if ( !_bForce ) {
			if ( !m_rsCurRenderState.psrShaderResources[_ui32Slot] ) { return; }
		}
		glWarnError( "Uncaught" );
		if ( !m_rsCurRenderState.pssSamplers[_ui32Slot] ) {
			m_rsCurRenderState.pssSamplers[_ui32Slot] = m_pssDefaultSamplerState;
		}
		
		ActivateTexture( _ui32Slot );

		const LSG_SAMPLER_STATE & ssCurState = (*m_rsCurRenderState.pssSamplers[_ui32Slot]);
		//LSG_SAMPLER_STATE & ssLastState = m_ssLastSamplers[_ui32Slot];

/*#define LSG_CHANGED( MEMBER )	((ssCurState.MEMBER != ssLastState.MEMBER) || _bForce)
#define LSG_UPDATE( MEMBER )	ssLastState.MEMBER = ssCurState.MEMBER*/


		static const GLint iMinFilter[] = {
			GL_NEAREST,							// Point, no mipmaps.
			GL_NEAREST_MIPMAP_NEAREST,			// Point, point mipmaps.
			GL_NEAREST_MIPMAP_LINEAR,			// Point, linear mipmaps.
			GL_LINEAR,							// Linear, no mipmaps.
			GL_LINEAR_MIPMAP_NEAREST,			// Linear, point mipmaps.
			GL_LINEAR_MIPMAP_LINEAR,			// Linear, linear mipmaps.
		};
		static const GLint iMagFilter[] = {
			GL_NEAREST,							// Point.
			GL_LINEAR,							// Linear.
		};

		// Always set filters because they are a property of textures.
		// Min filter.
		LSUINT32 ui32CurMinFilter = LSG_DECODE_MIN_FILTER( ssCurState.Filter );
		// Mag filter.
		LSUINT32 ui32CurMagFilter = LSG_DECODE_MAG_FILTER( ssCurState.Filter );
		// Mip filter.
		//LSUINT32 ui32CurMipFilter = LSG_DECODE_MIP_FILTER( ssCurState.Filter );


		LSUINT32 ui32OglMinFilter = ui32CurMinFilter * 3;	// TODO: Add mipmap filter.
		LSUINT32 ui32OglMagFilter = ui32CurMagFilter;
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, iMinFilter[ui32OglMinFilter] );
		glWarnError( "GL_TEXTURE_MIN_FILTER" );
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, iMagFilter[ui32OglMagFilter] );
		glWarnError( "GL_TEXTURE_MAG_FILTER" );

		// Anisotropy.
		LSBOOL bCurIsAniso = LSG_DECODE_IS_ANISOTROPIC_FILTER( ssCurState.Filter );
		if ( !bCurIsAniso ) {
			::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 1 );
		}
		else {
			::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, CStd::Min<LSUINT32>( ssCurState.MaxAnisotropy, CGfxBase::m_mMetrics.ui32MaxAniso ) );
		}
		glWarnError( "GL_TEXTURE_MAX_ANISOTROPY_EXT" );

		// Address modes.
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, ssCurState.AddressU );
		glWarnError( "GL_TEXTURE_WRAP_S" );
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, ssCurState.AddressV );
		glWarnError( "GL_TEXTURE_WRAP_T" );
		/*::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, ssCurState.AddressW );
		glWarnError( "GL_TEXTURE_WRAP_R" );

		// LOD bias.
		::glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, ssCurState.MipLODBias );
		glWarnError( "GL_TEXTURE_LOD_BIAS" );

		// Border color.
		::glTexParameterfv( GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, ssCurState.BorderColor );
		glWarnError( "GL_TEXTURE_BORDER_COLOR" );

		// Minimum/maximum mip-map level.
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, static_cast<GLint>(ssCurState.MinLOD) );
		glWarnError( "GL_TEXTURE_BASE_LEVEL" );
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, static_cast<GLint>(ssCurState.MaxLOD) );
		glWarnError( "GL_TEXTURE_MAX_LEVEL" );*/

/*#undef LSG_UPDATE
#undef LSG_CHANGED*/
	}
[/spoiler]


You have effectively added samplers to OpenGL ES.
The overhead of always setting sampler-state information can be minimized by keeping track, per texture, of which sampler states have already been applied and not re-applying them, but I have omitted that here for simplicity's sake; a sketch of the idea follows.
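A minimal sketch of that omitted filter, assuming a per-texture state struct (the names here are editorial placeholders, not the library's):

// Cache the last sampler description applied to each GL texture and skip
// the ::glTexParameter*() calls when nothing has changed.
struct GL_TEX_STATE {
    LSG_SAMPLER_DESC    sdApplied;  // Last state actually applied to this texture.
    LSBOOL              bValid;     // false until the first application.
};

LSVOID SetSamplerStateCached( GL_TEX_STATE &tsState, const LSG_SAMPLER_DESC &sdNew ) {
    // memcmp() can report a false difference due to struct padding, which only
    // costs a harmless redundant re-apply; a member-wise compare avoids that.
    if ( tsState.bValid && 0 == ::memcmp( &tsState.sdApplied, &sdNew, sizeof( sdNew ) ) ) {
        return;     // Nothing changed on this texture; skip all GL calls.
    }
    // ... apply via ::glTexParameteri()/::glTexParameterf() exactly as above ...
    tsState.sdApplied = sdNew;
    tsState.bValid = true;
}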

Additionally, you should not shy away from emulating proper samplers in OpenGL ES, because even Khronos admits that strongly associating them with textures was a mistake, and as of OpenGL 3.3 they provide proper sampler objects.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

You’re looking at it wrong.
In OpenGL ES you need to “emulate” samplers by allowing the user to set sampler states on the device (via your own sampler structures, provided through your library) which, like in any other API, can be set at any time.

I totally agree with you, Spiro. I was just trying to point out some non-trivial parts of the code.

I'm actually doing the opposite thing, and I'm not sure if this is the correct way: in D3D I'm emulating OpenGL-style textures (i.e. embedded sampler state). When you define a texture uniform in the shader, I implicitly create a sampler uniform. If a different sampler is used, the HLSL compiler will strip out the generated sampler state, so there is no overhead.
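A tiny sketch of how that implicit declaration could work during shader generation (hypothetical helper, not the poster's actual code):

#include <string>

// For every texture uniform the user declares, also emit a matching HLSL
// sampler so GL-style "texture owns its sampler" code keeps working.
std::string EmitTextureDecl( const std::string &name )
{
    return "Texture2D " + name + ";\n"
           "SamplerState " + name + "_sampler;\n";  // implicit per-texture sampler
}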

I'm actually doing the opposite thing, and I'm not sure if this is the correct way: in D3D I'm emulating OpenGL-style textures (i.e. embedded sampler state). When you define a texture uniform in the shader, I implicitly create a sampler uniform. If a different sampler is used, the HLSL compiler will strip out the generated sampler state, so there is no overhead.

On modern GPUs, the D3D10+/GL3+ way of using samplers is slightly more optimal than the D3D9/GL2/GLES way.
When submitting a draw call, the driver has to allocate a block of memory and copy into it all of the 'information' that the shaders used by that draw call will use. This contains the header structures for buffers & textures (including the pointer to the buffer/texture data), and the structures that define samplers.
With the D3D10/GL3 model, you can greatly reduce the number of sampler structures used by a shader by sharing one sampler between many textures. If you emulate the D3D9/GL2 model, you're forced into one sampler for every texture, which gives each shader a larger 'information' packet: slightly more work for the CPU as the driver produces these packets, and more work on the GPU as the pixels/vertices all read the larger packets from memory.
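For example (illustrative only; the context pointer and resources are assumed to have been created elsewhere):

// D3D10+ model: eight textures can share a single sampler structure.
ID3D11ShaderResourceView * pSrvs[8];            // eight different textures, created elsewhere
ID3D11SamplerState *       pLinear;             // ONE trilinear sampler, created once

pContext->PSSetShaderResources( 0, 8, pSrvs );  // eight texture headers in the packet
pContext->PSSetSamplers( 0, 1, &pLinear );      // but only one sampler structure

// An emulated D3D9/GL2 model would be forced into eight sampler bindings here
// as well -- one per texture -- inflating every draw-call's packet.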

So if you're going to pick one abstraction, I would emulate D3D11 samplers on GL, instead of emulating GL samplers on D3D11. :)

Abstraction is a sane approach, but a complicating factor here is that D3D12 is quite a lot different from D3D11 -- more different from D3D11 than OpenGL is, in many ways. The threading model and the much more manual resource management are very different from previous graphics APIs. In practice, this means it will probably be impractical to unify the two styles under a single low-level or mid-level abstraction; there's just not enough wiggle-room to hammer out the differences. I mean, you could almost certainly implement a D3D11-style low-level API on top of D3D12, but you'd lose many, if not most, of the benefits of D3D12 in doing so.

The big difference is slot-based APIs vs bindless APIs.
In my abstract API, it's mostly slot-based, but with a few compromises. Instead of having "texture slots" in the API, I instead expose "resource-list slots". The user can create a resource-list object, which contains an array of textures/buffers, and then they can bind that entire list to an API slot.
This maps fairly naturally to both bindless and slot-based APIs (a shader-program in a slot-based API will reserve a range of contiguous slots for each resource-list used by that shader, and, a shader-program in a bindless API will just define that structure).
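A rough sketch of the idea (the names are editorial placeholders, not Hodgman's actual API):

#include <vector>

typedef unsigned TextureHandle;   // opaque engine handles, for illustration
typedef unsigned BufferHandle;

// A resource-list groups textures/buffers; the list itself is what gets
// bound to one of a small number of API-visible slots.
struct ResourceList {
    std::vector<TextureHandle> textures;
    std::vector<BufferHandle>  buffers;
};

class IGfxDevice {
public:
    // Slot-based backends map the list onto a contiguous range of texture/
    // buffer slots; bindless backends just write a struct of descriptors.
    virtual void BindResourceList( unsigned slot, const ResourceList &list ) = 0;
    virtual ~IGfxDevice() {}
};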

For things like samplers, you hardly ever use a large number of them, so a slot-based API works fine (and can be implemented on bindless APIs fairly efficiently).

As long as you're aware of all the underlying APIs when designing your abstraction, you can come up with something that's still very low-level but also fairly efficiently implemented across the board.

Personally, I have "techniques" (shader-program objects with permutations), resource-lists, samplers, cbuffers, state-groups, passes (render-targets/viewports) and draw-calls using the same abstraction across PS3/4, Xbox 360/One, D3D9/11, GL and Mantle. :D

Thanks Hodgman,

But I just cannot imagine implementing things the D3D11 way for GLES2 and OGL3+.

In D3D you bind samplers and use them explicitly:

HLSL:


Texture2D tex0, tex1;
SamplerState ss;
float4 c = tex0.Sample(ss, uv) + tex1.Sample(ss, uv);

In OGL3+ you bind a sampler *per texture unit*; the sampler objects are *used implicitly*.

If I want to achieve the behaviour above in GLSL and OGL3+, I have to:


glBindSampler(tex0Unit, ss); // glBindSampler takes the texture unit, not the uniform location
glBindSampler(tex1Unit, ss);

GLSL:


texture(tex0, uv) + texture(tex1, uv); // both sampled with ss

Probably there is something that I'm missing here...

PS:
Batzer, excuse me for stealing your topic.

