Performance question - texture vs surface
Without having to write a full DX9/C++ application to test with, I was wondering if there is a quick answer to what would perform better framerate-wise?
Assuming the dimensions are the same, which would be faster, a surface or a texture which is then drawn to a screen aligned quad?
Or would they essentially be the same?
Thanks in advance
In simple terms, they're just two different views of the same pixel data allocation.
In D3D11, you have a texture resource (the actual memory allocation containing pixel data), a shader resource view (used to bind as a shader input), and a render target view (used to bind as a shader output).
Generally:
* D3D9 surface == a texture resource + a render target view,
* D3D9 texture == a texture resource + a shader resource view.
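To make the D3D11 analogy concrete, here's a rough sketch of creating one texture resource with both views. This assumes a valid `ID3D11Device* device`; it won't run outside a Windows/D3D11 context and is purely illustrative:

```cpp
#include <d3d11.h>

// One pixel-data allocation, bindable as both input and output
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 256;
desc.Height = 256;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* tex = nullptr;
device->CreateTexture2D(&desc, nullptr, &tex);

// "D3D9 surface" role: render target view (shader output)
ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(tex, nullptr, &rtv);

// "D3D9 texture" role: shader resource view (shader input)
ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(tex, nullptr, &srv);
```

Both views refer to the same allocation; only the way the pipeline is allowed to access it differs.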
It's possible to have a D3D9 surface and a D3D9 texture that actually share the same texture resource (memory allocation containing pixel data) under the hood!
One difference is that a texture may be composed of multiple surfaces; e.g. the faces in a cubemap or the mip levels in a mipmapped texture: each of these is a surface.
Much of this is legacy from the old (pre-D3D8) days before Direct3D and DirectDraw were unified. A surface was the DirectDraw object representing pixel data, whereas a texture was the Direct3D representation of that object. Some of that carried over to D3D8 and 9 which is why you'll see that some API calls use surfaces whereas others use textures. Think of it as legacy cruft that was finally cleaned out in D3D10 and up.
There is, incidentally, no API call in D3D9 for drawing a surface to the backbuffer. The closest you'll probably get is StretchRect to a render target, but you'll be giving up a lot of functionality such as arbitrary transforms, shaders, etc., and neither the API nor drivers make any promises about performance.
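For completeness, the StretchRect route mentioned above looks roughly like this (assuming a valid device and a render-target-compatible `sourceSurface`; illustrative only, no error handling):

```cpp
// Copy/stretch one surface onto the backbuffer -- no shaders, no transforms
IDirect3DSurface9* backBuffer = nullptr;
device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);

// NULL rects = full source rect onto full destination rect
device->StretchRect(sourceSurface, NULL, backBuffer, NULL, D3DTEXF_LINEAR);

backBuffer->Release();
```

Note that StretchRect has a long list of restrictions on formats, pools, and multisampling, so even this limited path isn't guaranteed to work for every surface combination.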
You might look at D3DXLoadSurfaceFromSurface, but that's just a software wrapper around other D3D calls and might do evil things (such as readbacks from the GPU) under the hood: performance is not part of its specification.
So, in other words, just use regular drawing with textures and don't worry about potential micro-optimizations.
What are you actually doing? Some parts of the API only work with textures, and some parts only with surfaces. There aren't many parts that accept either/or.

So, I'd imagine performance would be pretty similar then, right?
Just wondering if it would be more performant to draw to a surface and then draw that surface to the screen, or whether it would be faster to render to a dynamic texture and render that to a screen-aligned quad.
In order to render to a texture, you have to call something like:
IDirect3DSurface9* renderTarget = nullptr;
// Grab the surface for mip level 0 of the texture
texture->GetSurfaceLevel(0, &renderTarget);
device->SetRenderTarget(0, renderTarget);
renderTarget->Release(); // SetRenderTarget holds its own reference
which retrieves the renderable surface corresponding to the texture resource and binds it as the render target.
So, you are rendering to a surface in both cases.
In this case, texture/renderTarget are both just different 'views' of the same bit of memory -- renderTarget is a writable/destination data view, and texture is a readable/source data view.
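Putting both views together, a typical render-to-texture frame in D3D9 looks roughly like this (a sketch with error handling omitted; it assumes `texture` was created with D3DUSAGE_RENDERTARGET):

```cpp
// Pass 1: write into the texture via its surface (destination) view
IDirect3DSurface9* rtSurface = nullptr;
texture->GetSurfaceLevel(0, &rtSurface);

IDirect3DSurface9* oldRT = nullptr;
device->GetRenderTarget(0, &oldRT);   // remember the backbuffer
device->SetRenderTarget(0, rtSurface);
// ... draw the scene into the texture here ...

// Pass 2: read from the texture (source) view while drawing a quad
device->SetRenderTarget(0, oldRT);    // back to the backbuffer
device->SetTexture(0, texture);       // bind as shader input
// ... draw the screen-aligned quad here ...

rtSurface->Release();
oldRT->Release();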