2 questions regarding surfaces


Hi, I played around with surfaces and stumbled upon two problems. What I want to do is, in short, the following:

1) Put a picture on surface 1
2) Put a picture on surface 2
3) Put a part of the picture on surface 1 onto surface 2
4) Create a texture and use surface 2 to fill it
5) Paint this texture on a sprite
6) Paint this sprite on the screen

Some steps may seem strange, but there is a bit of learning involved for me here, and I think these are things I just have to know. Now my source. Preparation, executed only once:
  //1.) Put a picture on Surface1
    Device.CreateOffscreenPlainSurface(500, 500, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, Surface1, nil);
    D3dXLoadSurfaceFromFile(Surface1, nil, nil, 'd:\fish.png', nil, D3DX_DEFAULT, 0, nil);

  //2.) Put a picture on Surface2
    Device.CreateOffscreenPlainSurface(500, 500, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, Surface2, nil);
    D3dXLoadSurfaceFromFile(Surface2, nil, nil, 'd:\house.bmp', nil, D3DX_DEFAULT, 0, nil);

  //Create Picture 3 which I need later for drawing to the screen (I think)
    Device.CreateOffscreenPlainSurface(500, 500, D3DFMT_X8R8G8B8, D3DPOOL_SYSTEMMEM, Surface3, nil);

Render Loop:
    Clear(D3DCOLOR_XRGB(0, 0, 0));
    BeginScene;

  //3) Put a part of the picture on surface 1 onto surface 2
    rect.Left := 0;
    rect.Top := 0;
    rect.Right := 100;
    rect.Bottom := 100;
    Device.StretchRect(Surface1, @rect, Surface2, @rect, D3DTEXF_NONE);

  //4) Create a texture and use surface 2 to fill it
    D3DXCreateTexture(Device, 500, 500, D3DX_DEFAULT, D3DUSAGE_RENDERTARGET,  D3DFMT_X8R8G8B8,D3DPOOL_DEFAULT, Texture1);
    Texture1.GetSurfaceLevel(0, Surface3);

  //5) Paint this texture on a sprite

    D3DXLoadSurfaceFromSurface(Surface3, nil, nil, Surface2, nil, nil, D3DX_DEFAULT, 0);
  D3DXCreateSprite(Device, Sprite1);

  //6) Paint this sprite on the screen
      Sprite1.Draw(Texture1, nil, nil, nil, 0, nil, 0);
  //Well, this doesn't work.

    EndScene;
    Present(nil, nil, 0, nil);

My two problems are:

1) Step 5 is awfully slow, so slow I can't believe it's hardware-accelerated in any way. Am I doing something wrong?
2) How do I paint this sprite to the screen now?

Well, I hope these questions aren't too hard to understand. Maybe someone can help me. Pleeeeaaase. Thanks in advance.

Edited by Coder: Use source tags. Check GDNet Forums FAQ
[Edited by - Coder on August 20, 2004 6:38:13 AM]

Summed up, that's right. But I want to do some things to the image before rendering it; that's why I used the offscreen surfaces.

In fact, I want to use the surfaces like I used TBitmap in ancient times ;). As a kind of canvas in memory to which I draw, on which I can compose a picture out of several others, and in the end I want to draw the result to the screen.

Alright. Why is step 5 slow? You are loading a surface, and that is very slow. Why are you copying one surface to another (2 -> 3)?

I'm not so into the whole d3dxsprite thing so for that you need someone else sorry.

GBas

Thanks so far.

Quote:
Original post by GraphicsBas
Alright. Why is step 5 slow? You are loading a surface, and that is very slow. Why are you copying one surface to another (2 -> 3)?


Well, I realized that it's slow. I'm open to any ideas to make it faster. :)
With the copying itself I'm trying to accomplish the following:

1) Surface 1 loads a picture
2) Surface 2 loads a picture
3) Draw some parts of the picture on surface 1 to surface 2
4) Draw the altered surface 2 to the screen

If there is another, better, faster way to do this I would be glad to hear it. :)

Quote:
Original post by GraphicsBas
I'm not so into the whole d3dxsprite thing so for that you need someone else sorry.


No problem at all :)
I'm just learning it myself.

First you should create a texture, retrieve its level-0 surface, and call it surface 3. This is done in an initialization function.

In the render loop you just draw surface 2 to surface 3, then draw parts of surface 1 onto surface 3. The texture now contains exactly what you want to render.

I believe this should work... (edit: and fast :))
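In C++ terms (the thread's code is Delphi, but the calls map one to one onto the same interfaces), the suggested flow might look roughly like this; the variable names are illustrative and error checking is omitted:

```cpp
// Initialization (once): create a render-target texture and grab its level-0 surface.
IDirect3DTexture9* pTex = NULL;
IDirect3DSurface9* pSurface3 = NULL;
pDevice->CreateTexture(500, 500, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &pTex, NULL);
pTex->GetSurfaceLevel(0, &pSurface3);

// Render loop (per frame): compose into surface 3 instead of re-creating anything.
pDevice->StretchRect(pSurface2, NULL, pSurface3, NULL, D3DTEXF_NONE);   // all of 2 -> 3
RECT part = { 0, 0, 100, 100 };
pDevice->StretchRect(pSurface1, &part, pSurface3, &part, D3DTEXF_NONE); // part of 1 -> 3
// pTex now contains the composed image and can be drawn with the sprite.
```

The key point is that the texture and its surface are created once at startup; the per-frame work is just two StretchRect calls, which stay in video memory.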

GBas

Generally, to access a D3D surface's internal color data, you need to lock it to get a pointer to the raw pixel data, and unlock it when you are done so that the graphics device can access the data again.

However, there are many helper functions built into D3D for the most common blitting operations:


  • IDirect3DDevice9::StretchRect - copies between surfaces in video memory. Can stretch data while copying, depending on the relevant hardware capabilities.
  • IDirect3DDevice9::UpdateSurface - uploads a system memory surface to video memory. No stretching; this is meant as a high-performance function. You can still specify the rectangle to copy, though.
  • IDirect3DDevice9::UpdateTexture - better suited for texture uploading, because it handles the mipmap levels if applicable, and also takes care of cube and volume textures.

D3DX contains more robust (read: more convenient) functions for these purposes, including but not limited to D3DXFillTexture, D3DXFilterTexture, D3DXLoadSurfaceFromSurface etc.

-Nik

EDIT: By the way, in D3D it is much more efficient to draw two triangles with the source image as a texture than to copy the pixel data by manually locking the surfaces. 3D accelerator hardware is specifically designed to draw textured geometry, not to move large chunks of data across the graphics bus.

@GraphicsBas

Thanks a lot. It seems to be very fast now. :) Yippeah!

But my drawn texture is totally garbled. Changing the format also seems to change the garbledness (is that even a word? :D ). Are there only specific formats to use with textures?

Edit: Seems it was my hardware; my card doesn't like anisotropic filtering. :D

Quote:
Original post by Nik02

  • IDirect3DDevice9::StretchRect - copies between surfaces in video memory. Can stretch data while copying, depending on the relevant hardware capabilities.
  • IDirect3DDevice9::UpdateSurface - uploads a system memory surface to video memory. No stretching; this is meant as a high-performance function. You can still specify the rectangle to copy, though.
  • IDirect3DDevice9::UpdateTexture - this function suits better for texture uploading, because it handles the mipmap levels if applicable, and also takes care of cube and volume textures.



Thanks for the list. Something like that is always good for newbies (like me). :)

Quote:
Original post by Nik02
D3DX contains more robust (read: more convenient) functions for these purposes, including but not limited to D3DXFillTexture, D3DXFilterTexture, D3DXLoadSurfaceFromSurface etc.


Is more convenient == slow in any case when I use these (my try with LoadSurfaceFromSurface was damn slow)? :)

Quote:
Original post by Nik02
EDIT: By the way, in d3d it is much more efficient to draw two triangles with a source image as the texture, than it is to copy the pixel data by manually locking the surfaces. 3d accelerator hardware is specifically designed to draw textured geometry, not moving large chunks of data from the graphics bus.


I always thought locking was only required when changing vertices' diffuse colors and the like (I did some pure 3D programming before I took up the ID3DXSprites). Is it necessary as long as I use these functions?

Thanks, you two. I think slowly but surely things are becoming clearer.

Edit:

Something more:

I tried it as GraphicsBas suggested, but the alpha channel of the part of surface 1 that I drew onto surface 3 seems to clear the underlying pixels (from surface 2).

Do you know of any problems with alpha channels and this kind of "surface compositing"?

Thanks in advance

[Edited by - MannyCalavera on August 20, 2004 5:44:13 AM]

Quote:
Original post by MannyCalavera
Is more convenient == slow in any case when I use these (my try with LoadSurfaceFromSurface was damn slow)? :)


LoadSurfaceFromSurface is relatively slow if it has to filter the data or if a format conversion is required. If both the source and destination are of exactly the same size and format, then, as far as I know, the function performs a direct blit, and that is fast. Remember: if you wanted to stretch the data with your own function, it would have to do the same steps anyway.
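For example, a call shaped like this should hit the fast path, assuming both surfaces were created with identical dimensions and formats (the surface names are illustrative):

```cpp
// NULL source/dest rects mean "whole surface"; with matching sizes and
// formats plus D3DX_FILTER_NONE, no stretching or conversion is needed,
// so the copy can degenerate to a straight blit.
D3DXLoadSurfaceFromSurface(pDst, NULL, NULL,   // dest surface, palette, rect
                           pSrc, NULL, NULL,   // source surface, palette, rect
                           D3DX_FILTER_NONE, 0 /* no color key */);
```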

Quote:

I always thought locking was only required when I change vertices diffuse colors and so (I did some pure 3d-programming before I took the ID3DXSprites). Is it necessary as long as I use this functions?


Any function that moves data between D3D resources has to lock said resources, be it manually or internally by D3D. The functions that I listed do lock the resources, but this is transparent to the programmer. Nevertheless, any type of lock should be avoided unless necessary, because it may stall the rendering pipe: the hardware cannot do anything with the locked data until it is unlocked.

Quote:

I tried it as GraphicsBas suggested, but the alphachannel of the part of surface 1 which I drew on surface 3 seems to clear the underlying pixels (from surface 2).

Do you know of any problems with alphachannels and this kind of "surface-compositing"?


The surface transfer functions do not perform alpha blending; instead, they simply overwrite the destination alpha values with the source alpha values.

If you want to do alpha blending, you need to render some geometry (a quad, for example) with the texture containing the alpha values to the destination surface (which must be designated as a render target). Of course, you could also manually lock the surfaces, and do software blending on them, but that is usually orders of magnitude slower than the equivalent hardware procedure.
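As a rough illustration of the renderstates involved (this is the standard "source over" setup, not something specific to this thread; pDevice is illustrative):

```cpp
// Enable alpha blending: result = src * srcAlpha + dst * (1 - srcAlpha).
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
// ...then draw the textured quad (or sprite) into the render target;
// the hardware blends using the texture's alpha channel.
```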

Please ask for more info if needed.

Kind regards,
-Nik

Quote:
Original post by Nik02
Quote:
Original post by MannyCalavera
Is more convenient == slow in any cas when I use these (my try with LoadSurfaceFromSurface was damn slow)? :)


LoadSurfaceFromSurface is relatively slow if it has to filter the data or if a format conversion is required. If both the source and destination are of exactly the same size and format, then, as far as I know, the function performs a direct blit, and that is fast. Remember: if you wanted to stretch the data with your own function, it would have to do the same steps anyway.


Woah, this information will come in handy. :) Thanks.

Quote:
Original post by Nik02
Quote:

I always thought locking was only required when I change vertices diffuse colors and so (I did some pure 3d-programming before I took the ID3DXSprites). Is it necessary as long as I use this functions?


Any function that moves data between D3D resources has to lock said resources, be it manually or internally by D3D. The functions that I listed do lock the resources, but this is transparent to the programmer. Nevertheless, any type of lock should be avoided unless necessary, because it may stall the rendering pipe: the hardware cannot do anything with the locked data until it is unlocked.


Ok, so I'll have to watch out and avoid unnecessary locking.

Quote:
Original post by Nik02
Quote:

I tried it as GraphicsBas suggested, but the alphachannel of the part of surface 1 which I drew on surface 3 seems to clear the underlying pixels (from surface 2).

Do you know of any problems with alphachannels and this kind of "surface-compositing"?


The surface transfer functions do not perform alpha-blending; instead, they just overwrite the destination alpha values with the source alpha values.

If you want to do alpha blending, you need to render some geometry (a quad, for example) with the texture containing the alpha values to the destination surface (which must be designated as a render target). Of course, you could also manually lock the surfaces, and do software blending on them, but that is usually orders of magnitude slower than the equivalent hardware procedure.


Well, I'm not entirely sure how to do this. I can create a rectangle and texture it, and I also know how to show it on the screen. What I haven't understood yet is how to use surfaces as render targets (all my attempts with Device.CreateRenderTarget didn't work out like I hoped). If you could give me a quick push in the right direction, I would be very happy. :)

Quote:
Original post by Nik02
Please ask for more info if needed.


Thanks for the kind offer. I just did. :)

And thanks again for the great help and all the information.

Sorry for the late reply; gdnet was down for a moment. Anyway, a quick, though by no means comprehensive, course on render targets:

When you create a texture using IDirect3DDevice9::CreateTexture, you can specify a usage flag, D3DUSAGE_RENDERTARGET, to create a render target texture (preferably in the DEFAULT pool, i.e. video memory; I don't remember if this was mandatory or not). Then, at render time, you do the following:


  • Save the pointer to the original back buffer, for example's sake let's name it pOrigBackBuffer.
  • Set the previously created texture's level 0 surface as the current rt by calling pDev->SetRenderTarget(0,pRt->GetSurfaceLevel(0)).
  • Optionally, set a new depth stencil surface here, using pDev->SetDepthStencilSurface(...), passing a previously created depth stencil surface pointer to the function. Note that if you do this, you need to save the previous depth stencil as well, as we did with the original back buffer in step 1.
  • pDev->Clear (as necessary); pDev->BeginScene(); -Render whatever you want to the texture-; pDev->EndScene();
  • Restore the original render target using the back buffer pointer saved in step 1. Optionally, do this for the depth stencil also, if you changed it.
  • pDev->Clear (as necessary); pDev->BeginScene(); -Render whatever you want to the frame buffer a.k.a. back buffer this time around-; pDev->EndScene();
  • pDev->Present()
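The steps above can be sketched as follows. Here GetRenderTarget is used to save the current target, which amounts to saving the back buffer pointer in step 1; pRtTex is assumed to be the render-target texture created at initialization, and error handling is omitted:

```cpp
IDirect3DSurface9 *pOrigBackBuffer = NULL, *pRtSurface = NULL;
pDevice->GetRenderTarget(0, &pOrigBackBuffer);   // step 1: save the current target
pRtTex->GetSurfaceLevel(0, &pRtSurface);
pDevice->SetRenderTarget(0, pRtSurface);         // step 2: render into the texture

pDevice->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);
pDevice->BeginScene();
// ... draw whatever should end up in the texture ...
pDevice->EndScene();

pDevice->SetRenderTarget(0, pOrigBackBuffer);    // step 5: restore the back buffer
pDevice->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);
pDevice->BeginScene();
// ... draw the scene, with pRtTex set as a texture ...
pDevice->EndScene();
pDevice->Present(NULL, NULL, NULL, NULL);

pRtSurface->Release();       // GetSurfaceLevel AddRefs the surface
pOrigBackBuffer->Release();  // GetRenderTarget AddRefs the surface
```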


Do note that the default back buffer is, by definition, a render target, so you can draw two triangles to it simply by using the DrawPrimitive* functions. Now, if you set a texture with an alpha channel on the device, that texture will subsequently be used in all drawing operations until it is changed again. Finally, you need to set the alpha-blending-related renderstates to actually instruct the hardware to blend the source and destination; finding out which ones, I leave as an exercise to the reader (hint: this is a very common topic here).

-Nik

EDIT: I should mention that "drawing two textured triangles (to form a quad)" is exactly what the sprite interface encapsulates.

Quote:
Original post by Nik02
Sorry for the late reply - gdnet was down for a moment.


No problem at all. I have to excuse myself for the late answer now, too. I'm moving house at the moment (is "changing residence" proper English?) and usually don't have much time for the internet on the weekend.

Thanks for the very complete rundown on how to do these things. I'm a bit embarrassed, but I already have problems with the first step.

Quote:
Original post by Nik02

  • Save the pointer to the original back buffer, for example's sake let's name it pOrigBackBuffer.




Everything else seems clear to me, but I don't know how to get the pointer to the back buffer. I couldn't find a function that provides me with it.

But I have already experimented a bit, and I created a textured quad to paint a transparent picture to the back buffer, with renderstates and all. So the only problem left is drawing this quad to the offscreen surface, and there my only problem will be the pointer to the back buffer (I hope ;) ).

Thanks (again)
Without your help I would be really stuck. :)

IDirect3DDevice9::GetBackBuffer() will retrieve a pointer to the currently set back buffer. Note that when you do change it, the function subsequently returns the new back buffer, not (necessarily) the original one; that's the reason you must save the original back buffer pointer manually before changing the render target to your own RT surface.
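A minimal sketch of that call (swap chain 0, back buffer 0; pDevice is illustrative):

```cpp
IDirect3DSurface9* pBackBuffer = NULL;
pDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer);
// ... save the pointer, switch render targets, render, then restore it ...
pBackBuffer->Release();  // GetBackBuffer increments the surface's ref count
```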
