

Preventing the loss of offscreen surfaces


Hello everyone. I'm somewhat new to DirectX and need some help. Currently, I'm designing a "windowed" DirectX application (not a game). The application will store several offscreen surfaces in video memory. The catch is that the application CANNOT RECREATE/RERENDER these offscreen surfaces if they are lost, and from what I understand, whenever the video mode changes, DirectX may deallocate offscreen surfaces stored in video memory. This is bad.

Is there a way to prevent the loss of offscreen surfaces (without storing them in system memory)? Or is there a way to prevent the video mode from being changed?

Note that I'll be developing two versions of my app: one for NT, which uses DirectX 3.0, and another which uses DirectX 7.0 or 8.0. The NT version does not need to be perfect, however... Any ideas? Thanks in advance!

Edited by - poozer on February 4, 2002 11:16:39 PM

Well, you could respond to the WM_ACTIVATEAPP message (or whatever it's called; it's been a couple of months, bear with me) and re-create the surfaces when your app becomes activated again. It sounds like you can't be sure when DirectX loses your surfaces, so that's all I can think of off the top of my head.
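Something like this, as a minimal sketch (a standard Win32 window procedure is assumed; RecreateOffscreenSurfaces() is a hypothetical helper, not an existing API):

```cpp
#include <windows.h>

// Hypothetical helper: rebuild and refill any video-memory surfaces that were lost.
void RecreateOffscreenSurfaces();

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ACTIVATEAPP:
        if (wParam)                      // TRUE: the application is being activated
            RecreateOffscreenSurfaces(); // surfaces may have been lost while we were inactive
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}
```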

MOV ax, bx
asm programming
written by Jello

I appreciate the suggestion Jello.

Any other ideas? I'd prefer not to have to store/restore the offscreen surfaces every time the application is activated or deactivated. This will occur a lot. Plus, the app will write to the offscreen surfaces while the application is deactivated...

It would probably help if I elaborated more. I wrote a GDI-based X Server for Windows (like Hummingbird Exceed). Now I'm incorporating DirectX to hopefully improve performance. Like other X Servers, mine can run in a "multiple window mode", so it has to coexist and play nicely with other Win32 applications. This means it should still work (but not necessarily perfectly) if the user or another application changes the display mode. The problem is that X clients can create offscreen buffers on the server called Pixmaps. I want to store the Pixmaps in video memory, but there isn't a way to inform X clients that their Pixmaps have been lost. I would not mind preventing the display mode from being changed, but I don't think there is a way.

Thanks!



Edited by - poozer on February 4, 2002 12:32:57 AM

I guess there is no simple way to ensure offscreen surfaces are retained in video memory when the display mode changes. This kind of sucks for me.

The best you'll get out of D3D is to let it manage your texture surfaces for you (IIRC you pass D3DPOOL_MANAGED, or DDSCAPS2_TEXTUREMANAGE in DirectDraw 7, as one of the flags when creating the surface). That way you don't have to keep a copy of the surface; D3D will do that for you and maintain consistency between the backup and the actual surface in VRAM.
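For example, a sketch in Direct3D 8 terms (an already initialised IDirect3DDevice8* is assumed; the format and size are placeholders):

```cpp
#include <d3d8.h>

// With D3DPOOL_MANAGED the runtime keeps its own system-memory copy of the
// texture and re-uploads it after a mode change or lost device, so the
// application never has to refill the surface by hand.
IDirect3DTexture8* CreateManagedTexture(IDirect3DDevice8* device,
                                        UINT width, UINT height)
{
    IDirect3DTexture8* tex = NULL;
    HRESULT hr = device->CreateTexture(width, height,
                                       1,                // one mip level
                                       0,                // no special usage flags
                                       D3DFMT_X8R8G8B8,  // assumed display format
                                       D3DPOOL_MANAGED,  // D3D maintains the backup
                                       &tex);
    return SUCCEEDED(hr) ? tex : NULL;
}
```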

HTH


Iain Hutchison
Programmer, Silicon Dreams
The views expressed here are my own, not those of my employer.

Edited by - pieman on February 5, 2002 2:31:09 PM

Hello,

You could dump them to regular areas of memory. When you receive the message that the screen mode is changing, just create buffers (NOT DX surfaces) in system memory to hold the data, then copy it back in when you want the surfaces again.

Oh, I just saw that you don't want to store them aside when you lose focus... hmmm... then I don't know, sorry. About changing them during that time: that may be dangerous, because you don't know what is going to be using that memory. If you ended up going with what I suggested, you could edit the temporary buffers in memory.
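A rough sketch of that copy step, assuming DirectDraw 7 interfaces and 32-bpp surfaces (error handling trimmed):

```cpp
#include <windows.h>
#include <ddraw.h>
#include <cstring>
#include <cstdlib>

// Lock a surface and copy its pixels into a plain malloc'd buffer that
// survives any mode change; blit the buffer back once the surface is restored.
void* DumpSurfaceToSystemMemory(IDirectDrawSurface7* surf,
                                DWORD width, DWORD height)
{
    DDSURFACEDESC2 desc = { sizeof(desc) };
    if (FAILED(surf->Lock(NULL, &desc, DDLOCK_READONLY | DDLOCK_WAIT, NULL)))
        return NULL;

    const DWORD bytesPerRow = width * 4;                 // assuming 32 bpp
    BYTE* copy = (BYTE*)malloc(bytesPerRow * height);
    for (DWORD y = 0; y < height; ++y)
        memcpy(copy + y * bytesPerRow,
               (BYTE*)desc.lpSurface + y * desc.lPitch,  // respect the surface pitch
               bytesPerRow);

    surf->Unlock(NULL);
    return copy;
}
```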

BrianH

Edited by - BrianH on February 5, 2002 4:32:03 PM

You CANNOT keep the data in video memory when your app is minimized. The ONLY way is to keep a backup in system memory.
When your app is minimized, the device window loses focus and DX cleans everything up. There is a reason for this apparent madness: if another D3D app is brought into focus after your app has lost focus, it can re-create its surfaces and use the video memory. If every application didn't free video memory, you'd run out very quickly; the Windows desktop, for example, would be in video memory, as would the surfaces for any minimized windows.

Steve

Plus, since you're playing nice, a fullscreen program that grabs exclusive control will kick you out of DX for a while, so you won't have access to anything. Like was said before, keep a backup in system memory, recreate surfaces when they're lost AND your app is active, and copy from the system-memory backups you have. And since you're not using DX for anything except the actual drawing, you should be fine when something gets exclusive mode (I'm not sure how DX deals with system-memory buffers, so you may be better off managing your own). I don't see why you would need DD7 instead of DD3 for everything. Also, don't make the NT version less than perfect; either make one or don't.
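A minimal sketch of the lost-and-active check, assuming DirectDraw 7 (CopyBackupToSurface() is a hypothetical helper around the system-memory backup):

```cpp
#include <ddraw.h>

// Hypothetical helper: re-upload the system-memory backup into the surface.
void CopyBackupToSurface(IDirectDrawSurface7* surf);

void RestoreIfLost(IDirectDrawSurface7* surf)
{
    if (surf->IsLost() == DDERR_SURFACELOST)
    {
        // Restore() re-allocates the video memory but not its contents,
        // so the pixels must be refilled from the backup afterwards.
        if (SUCCEEDED(surf->Restore()))
            CopyBackupToSurface(surf);
    }
}
```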

Ok. Thanks for the input everyone. As I said, I'm somewhat new to DirectX.

I'll write some small programs to try out your suggestions. And I'll run some tests to measure the potential performance gains offered by DirectX (given the requirements of the X protocol).

Thanks for the help.

Just create a very small SYSTEM-MEMORY surface (4x4 pixels or so).
Then allocate your own buffer, the size you want the surface to be.
Then call GetSurfaceDesc() into a description "foo".
Then modify lpSurface, lPitch, dwWidth and dwHeight in "foo" and call SetSurfaceDesc() with it.

This way you have your DD surface in system memory (which is somewhat slower, but who cares) and it will never get lost (!).

BTW: DirectDraw doesn't guarantee it, but it will not throw away system-memory surfaces even if you didn't allocate the memory yourself. Then again, it's only guaranteed for self-allocated surface memory.

You could (if you don't change the contents very much) create a video-memory copy of your surface that you update every time the original surface is lost or changed. That way you could use the (fast) video-memory surface for blitting.

And: if you change the surface very frequently, then it's probably better to have it in system memory anyway.
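A sketch of that trick, assuming DirectDraw 7 interfaces and 32-bpp pixels (sizes and error handling are placeholders):

```cpp
#include <windows.h>
#include <ddraw.h>
#include <cstdlib>

// Create a tiny system-memory surface, then repoint it at a buffer we
// allocated ourselves via SetSurfaceDesc(); client-allocated memory is never
// discarded by DirectDraw, so the surface contents can't be lost.
IDirectDrawSurface7* CreateClientMemorySurface(IDirectDraw7* dd,
                                               DWORD width, DWORD height)
{
    // 1) a small placeholder surface in system memory
    DDSURFACEDESC2 desc = { sizeof(desc) };
    desc.dwFlags = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT;
    desc.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_SYSTEMMEMORY;
    desc.dwWidth  = 4;
    desc.dwHeight = 4;

    IDirectDrawSurface7* surf = NULL;
    if (FAILED(dd->CreateSurface(&desc, &surf, NULL)))
        return NULL;

    // 2) our own pixel buffer, which we control for the surface's lifetime
    const LONG pitch = (LONG)(width * 4);                // assuming 32 bpp
    void* pixels = malloc(pitch * height);

    // 3) fetch the description, point it at our buffer, and apply it
    DDSURFACEDESC2 foo = { sizeof(foo) };
    surf->GetSurfaceDesc(&foo);
    foo.dwFlags   = DDSD_LPSURFACE | DDSD_PITCH | DDSD_WIDTH | DDSD_HEIGHT;
    foo.lpSurface = pixels;
    foo.lPitch    = pitch;
    foo.dwWidth   = width;
    foo.dwHeight  = height;
    if (FAILED(surf->SetSurfaceDesc(&foo, 0)))
    {
        surf->Release();
        free(pixels);
        return NULL;
    }
    return surf;    // caller owns both the surface and the pixel buffer
}
```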

bye,

--- foobar
We push more polygons before breakfast than most people do in a day

Again, I appreciate everyone's suggestions. However, I probably did not state my problem clearly, because there seem to be a few misunderstandings. Let me dispel them.

I already have a GDI version of my X Server, and there are several other GDI-based PC X Servers on the market. The whole point of using DirectX is to improve the speed.

I am creating a DirectX 3.0 version, because my product must run on NT. However, from what I've read, DirectX 3.0 is somewhat slow and limited compared to later versions of DirectX. So I am willing to create a DirectX 7.0 or 8.0 version to further improve the speed and functionality for Windows 2000/XP/98/ME users.

In the X Window System, an X client can perform the exact same operations on a Pixmap (offscreen surface) that can be performed on a window. Just like with a window, you can draw lines and polygons on it, blit to/from it, etc. Graphics cards often support hardware acceleration for these common 2D operations. So (forgetting about DirectX for a second) it is more appropriate to store the Pixmaps in VRAM.

However, the more I learn about DirectX, the less I think it is suited to my purposes. For instance, DirectX does not provide an interface for the 2D operations, like line drawing, that video cards commonly hardware accelerate. You have to emulate them with 3D operations or manipulate the bytes yourself (and lose the hardware acceleration). Anyway, I'm not saying DirectX is bad. (Don't flame me.) It may simply be inappropriate for my needs. I may just optimize my GDI version or produce a GDI+ version, but I'll run some performance tests on a variety of hardware before I decide to drop DirectX.
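To illustrate, drawing even a single 2D line ends up going through the 3D pipeline. A sketch in Direct3D 8 terms (the device pointer and an enclosing BeginScene()/EndScene() pair are assumed):

```cpp
#include <d3d8.h>

// A screen-space line drawn as a line-list primitive with pre-transformed
// (XYZRHW) vertices: 2D emulated through the 3D pipeline.
struct ScreenVertex
{
    float x, y, z, rhw;   // already in screen coordinates
    DWORD color;
};
#define SCREEN_FVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)

void DrawLine2D(IDirect3DDevice8* device,
                float x0, float y0, float x1, float y1, DWORD color)
{
    ScreenVertex v[2] =
    {
        { x0, y0, 0.0f, 1.0f, color },
        { x1, y1, 0.0f, 1.0f, color },
    };
    device->SetVertexShader(SCREEN_FVF);   // in D3D8 an FVF code doubles as the shader handle
    device->DrawPrimitiveUP(D3DPT_LINELIST, 1, v, sizeof(ScreenVertex));
}
```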

Once again, thanks for the help.



Edited by - poozer on February 6, 2002 3:21:17 PM


As far as I can tell, you're not going to be able to "optimize" your X server by using DirectX instead of GDI.

Why?

You must understand where the "bottleneck" is. In an X server, the bottleneck is the data coming in from the network.

Most of the time, your graphics updates are limited by how fast the data moves across the network. 95% of the time, your server actually has to wait for enough data to fill up the screen.

Optimizing where it doesn't matter is worse than not optimizing at all, because you're wasting your time.

Premature optimizations can only slow down your project even more.

Yes, if an X client is constantly sending bitmaps over the network, the network will become the bottleneck for an X server. However, X was designed to be networked from day one, so the protocol and client applications are built to minimize network communication.

I do see your point, though. If I try to make my product perfect, it might be a year before I finish it. And for all practical purposes, it's already fast enough for most users (people who are just launching xterms or emacs). But much like people buy the fastest computer even though they don't need the speed, I'm hoping people will buy my X Server because it is snappier. I mean, there are two dozen PC X servers out there. I'm looking for ways to differentiate mine.


Edited by - poozer on February 6, 2002 6:59:34 PM

BTW, most video card drivers accelerate some or all GDI functions (line drawing, pattern fills, blits, etc.); you just don't get as much control over where a video buffer will end up. You could also use GDI functions on DX surfaces, provided you're careful about batching, since abusing GetDC()/ReleaseDC() can cost you performance. I know for a fact that updating the screen with DX or with GDI gives similar results (assuming per-pixel modifications and that the results are uploaded from system memory to VRAM).
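For reference, the GetDC()/ReleaseDC() pattern looks roughly like this (DirectDraw 7 assumed; batch as much GDI work as possible between the two calls):

```cpp
#include <windows.h>
#include <ddraw.h>

// GDI drawing onto a DirectDraw surface.  The surface is effectively locked
// between GetDC() and ReleaseDC(), so keep the batch short but do all the
// pending GDI calls in one go rather than one GetDC/ReleaseDC pair per primitive.
void DrawWithGdi(IDirectDrawSurface7* surf)
{
    HDC hdc = NULL;
    if (SUCCEEDED(surf->GetDC(&hdc)))
    {
        MoveToEx(hdc, 0, 0, NULL);
        LineTo(hdc, 100, 100);
        Rectangle(hdc, 10, 10, 50, 50);
        surf->ReleaseDC(hdc);
    }
}
```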

Most people use GDI wrong, which is why it got such a bad reputation for being slow. However, it should be noted that ANY color-depth conversion WILL slow everything down. This is especially true of blits, and it's the main reason people think GDI is slow. Also, using DIBs (even though most video cards accelerate them) can be costly compared to compatible bitmaps, since DIBs can be created at a different color depth than the screen, which kills your speed.

NEVER try to sell a product on a gimmick. Saying "I am using DirectX, so it MUST be faster" is silly. You'd best do a lot of benchmarks before going the DX route (especially since you are not even going to be switching the screen format). I also doubt DD7 will give you anything extra over DD3; make sure you check the docs before going that route. DX8, however, is a completely different beast to code for, so be careful.

If you really want to differentiate your X server from other X servers, add support for features the others don't handle, such as an integrated SSH client for remote commands. Add features, NOT gimmicks, if you really want to stand out. Face it: if people benchmark your X server and find that it's slower than a competitor (which probably doesn't use DX), it will make your claims of using DX fruitless, especially if you don't work hard on the NT4 version (many large companies still use NT4 workstations, which is exactly where the X servers will be running). Make sure you support the entire protocol as well.

Thanks for the advice. Don't worry, though. I am not relying on gimmicks. I intend to pursue the DirectX route only if it provides clear performance benefits. I would never falsely advertise. That's why I have been running performance tests.

My X server will support all of the popular extensions. I would not have devoted hundreds, if not thousands, of hours to this without scoping out the competition, and ensuring my feature set is competitive. I still have a lot of work to do, but I plan to support the major extensions, ssh, telnet, rsh, etc. Plus I have a few new ideas of my own. Actually, most of that stuff is not hard to do. By far, the hardest part is the display engine. That's where 90% of the real technical challenges are.

And yes, I am painfully aware of the inefficiencies of GDI's "virtual color handling". X features like plane masks and colormaps are hard to implement correctly, much less efficiently, without direct access to the pixel bits. That's one reason I decided to investigate the DirectX route.

I am not a used car salesman. I take enormous pride in my code. I have devoted a ton of my time carefully designing, implementing, and testing my application. And before I release it, I will spend a ton more. If anything, I've always been labeled as too much of a perfectionist.


Edited by - poozer on February 7, 2002 11:23:24 PM

Hm, that makes some things clearer to me *g*.
I'd not use DirectX in that case, because DX3 on NT is emulated via GDI and can therefore never be faster. And as for implementing DX8 for Win2000: well, maybe if you could do everything you wanted with DX8, but for line, circle and box drawing and/or text output it would slow things down considerably.

Maybe see if you can hand-code some inner loops in assembler with MMX...

If you're searching for a way to enhance your product, I'd go for ease of use: intuitive installation and use, clear and easy-to-understand error messages, stability, portability (maybe X servers for other OSes that use the same protocol and can connect to the same clients?).
Stuff like that.

have a nice day, bye,

--- foobar
We push more polygons before breakfast than most people do in a day

Direct access to a screen buffer slow? You must be on something, or misinformed, heh. DIBs are extremely fast with direct access; in fact, they're much faster to touch than ANY video-memory surface, since they live in system memory. You should look into using DIBs. Heck, YOU can even allocate the memory instead of letting Windows do it, and draw to the screen with SetDIBitsToDevice(). I used this method, and it's just as fast as DirectDraw in windowed mode (since that's what you're running anyway). Many cards also accelerate DIBs (i.e. blitting with them and drawing lines and other things to them). You should never have to rely on GDI doing color conversion, or worry about it, though I'm not sure how many cards will accelerate drawing lines to DIBs (I'm pretty sure the 4MB ATI Rage Pro did). In fact, the ONLY time you have to worry about color conversion is the final blit to the physical screen buffer, and that can either be done by GDI or you can make a separate buffer and do it yourself.
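A sketch of that approach, assuming a self-allocated 32-bpp buffer in the same format as the screen (so no conversion happens on the blit):

```cpp
#include <windows.h>

// Push a top-down 32-bpp pixel buffer to a window DC with SetDIBitsToDevice().
void PresentBuffer(HDC hdcWindow, const void* pixels, int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height: top-down row order
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;        // match the screen depth to avoid conversion
    bmi.bmiHeader.biCompression = BI_RGB;

    SetDIBitsToDevice(hdcWindow,
                      0, 0, width, height,   // destination rectangle
                      0, 0,                  // source origin
                      0, height,             // first scan line, number of lines
                      pixels, &bmi, DIB_RGB_COLORS);
}
```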

I am very glad to hear you take pride in your work. I really hope you release a cheap (price-wise) competitive product, because currently all the good X servers on Win32 are too expensive.

Actually, I use DIBs and SetDIBitsToDevice now. It's kind of a long story, but I decided not to use them in the future.

Anyway, this is what I've been implementing, in case anyone is interested:

1. Drawing is done to the frame buffer and pixmaps with both DirectX and GDI/GDI+ (to take advantage of hardware-accelerated 2D functions).
2. During installation, performance tuning is run to automatically determine which operations are faster with DirectX and which are faster with GDI/GDI+.
3. Store pixmaps in VRAM and keep a backup in system memory.
4. When drawing to a pixmap, draw directly to the VRAM version, hopefully with hardware acceleration. Don't modify the version in system memory; just mark it as dirty and record which regions have changed.
5. Have a very, very low priority thread that syncs the pixmaps between VRAM and system memory when the system is not busy (a rough sketch of this bookkeeping follows the list). This should result in very little performance degradation and work 99% of the time.
6. When the video mode changes, restore the VRAM pixmaps from the ones in system memory. Warn the user that things may be displayed funny (as Hummingbird Exceed does).
7. Under the "Advanced" configuration screen, provide an option to store pixmaps in system memory.
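A rough sketch of the bookkeeping behind steps 4 and 5 (the names, the critical-section locking and the idle-priority thread are my assumptions, not the finished design):

```cpp
#include <windows.h>
#include <vector>

struct PixmapBackup
{
    CRITICAL_SECTION  lock;      // InitializeCriticalSection() at startup
    std::vector<RECT> dirty;     // regions changed in VRAM since the last sync
};

void MarkDirty(PixmapBackup& p, const RECT& r)
{
    EnterCriticalSection(&p.lock);
    p.dirty.push_back(r);        // step 4: record the region, don't touch sysmem yet
    LeaveCriticalSection(&p.lock);
}

DWORD WINAPI SyncThread(LPVOID param)    // step 5: runs at idle priority
{
    PixmapBackup* p = (PixmapBackup*)param;
    for (;;)
    {
        Sleep(100);                      // wake occasionally, when the system is quiet
        EnterCriticalSection(&p->lock);
        std::vector<RECT> work;
        work.swap(p->dirty);             // grab the pending regions
        LeaveCriticalSection(&p->lock);
        // ...copy each rect from the VRAM pixmap into the system-memory backup...
    }
}

// The thread would be created along these lines:
//   HANDLE h = CreateThread(NULL, 0, SyncThread, &backup, 0, NULL);
//   SetThreadPriority(h, THREAD_PRIORITY_IDLE);
```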


Thanks.

Edited by - poozer on February 9, 2002 5:35:54 AM

BTW, if/when I finish this, I plan to give free licenses to people who helped out a lot. But that's a big "if/when". And I plan to keep prices in the $50 - $70 range with free upgrades for 2 years.

Thanks for the help.

