Split-screen and multiple monitors


Multiplayer modes are a must in my game (apart from anything else it means I can put off the AI a bit longer!) and I'm really into having more than one player on a single PC.

Firstly, could anyone give some tips/links about using split-screen modes? I presume I just have 4 viewports and render from 4 cameras? (As an aside, in windowed mode D3D renders to a full-screen-sized surface, then scales this to the window you use, right?)

Secondly, I now see that 3-head adapters are available as well as 2-head. A couple of questions on that:
1) Do I just get a separate D3D8 device for each monitor and treat them entirely separately? Would this be very slow, or would the adapters all render in parallel?
2) On a triple-head card I could conceivably have up to 12 players on one PC. Where do I stand with trying to plug in 12 input devices at once, i.e. 12 PS2-style joypads? Is this possible, and if so is it a performance hit by itself?

Many thanks for your advice.

Split screen
------------
Yes, SetViewport(..) is the way to go: call SetViewport(..) and set the view/projection matrices for each screen region before drawing it. SetScissorRect(..) may come in handy for restricting rendering within a viewport too.

Do remember that 2-way splits will have a different aspect ratio and therefore (usually) a different field of view - this works nicely for some types of games (e.g. racing) and is a real pain for others (e.g. a shooter). There is also the question of what you do for 3 simultaneous players: customisable splits, a 4-way split with one quadrant in "add coins to play" mode, etc.
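To make the viewport arithmetic concrete, here is a minimal sketch in plain C++ (`ViewportRect`, `fourWaySplit` and `twoWaySplit` are hypothetical names for illustration, with `ViewportRect` standing in for D3DVIEWPORT8 - this is not the real API):

```cpp
#include <cassert>
#include <vector>

// Hypothetical viewport rectangle, standing in for D3DVIEWPORT8.
struct ViewportRect {
    int x, y, width, height;
};

// Viewports for a 4-way split of a w x h screen: pass each rect to
// SetViewport(..) and set that player's view/projection matrices
// before drawing their copy of the scene.
std::vector<ViewportRect> fourWaySplit(int w, int h) {
    const int hw = w / 2, hh = h / 2;
    return {
        {0,  0,  hw, hh},   // player 1: top-left
        {hw, 0,  hw, hh},   // player 2: top-right
        {0,  hh, hw, hh},   // player 3: bottom-left
        {hw, hh, hw, hh},   // player 4: bottom-right
    };
}

// A 2-way split into top/bottom halves: each half's aspect ratio is
// double the full screen's, so the projection matrix must change.
std::vector<ViewportRect> twoWaySplit(int w, int h) {
    const int hh = h / 2;
    return { {0, 0, w, hh}, {0, hh, w, hh} };
}

double aspect(const ViewportRect& r) {
    return static_cast<double>(r.width) / r.height;
}
```

Note that the 4-way quadrants keep the full screen's aspect ratio, while the 2-way halves double it - which is exactly why the 2-way case needs the field-of-view care described above.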


D3D in windowed mode
--------------------
Almost: the *whole* desktop is still in video memory - the "primary" surface in <=DX7 terms. The device renders to a backbuffer surface of the size specified in your D3DPRESENT_PARAMETERS; at Present(..) time this is copied to a *region* of the primary desktop surface (via an optional intermediate buffer depending on which swap effect you've set).


Multiple monitors
-----------------
This is where the fun begins

First off, there are two main types of hardware you could find in a multimonitor configuration:

1) multiple physically separate cards/chips (e.g. 1xAGP + 2xPCI), potentially from different manufacturers and with very different performance/capabilities (e.g. an AGP ATI Radeon 9800 and a PCI nVidia TNT2). The video memory for these devices is of course also very separate, even if the devices are on the same bus type and from the same manufacturer.

2) multi-head cards, i.e. single boards with two or more output connectors (such as 1xVGA and 1xDVI). Their on board video memory is (usually) shared between each output. Whether there are multiple render cores or a single one is shared is dependent on the actual hardware itself. How one of these devices will show up in Windows and to Direct3D is dependent on the driver for the card.

A real-world PC could have any combination of the above devices - you could feasibly find one PC with multihead graphics on the motherboard, a different multihead card in the AGP slot, and something else entirely in a PCI slot. If you're going to support this kind of thing, your engine architecture had better be scalable!


Next up, how D3D will see those devices:

1) Under Direct3D 8 and earlier, each head of a multihead card will be seen as a totally different adapter; any other cards will also be seen as totally different adapters. In other words, GetAdapterCount(..) will tell you the theoretical maximum number of output monitors you can have.

Important: Even if two of the D3D8 devices are on the same multihead card (with the same video memory chips), and even if they are rendering the same scene etc, you *cannot* share device resources. So you need to duplicate textures etc. Also make sure that the pointers passed to things like SetTexture(..) are the ones that belong with that D3D device.
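The per-device bookkeeping this forces can be sketched like this (`TextureRegistry` and friends are hypothetical illustration names, not D3D calls): each device loads and owns its own copy of every texture, and a handle is only ever usable on the device that created it.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical stand-ins: an integer per D3D8 device, and a texture
// that records which device loaded it.
using DeviceId = int;
struct Texture { DeviceId owner; std::string name; };

// One copy of each texture per device, keyed by (device, asset name).
// Under DX8-style multimon resources cannot be shared between devices,
// so the first use on each device loads a fresh copy.
class TextureRegistry {
    std::map<std::pair<DeviceId, std::string>, Texture> cache_;
public:
    const Texture& acquire(DeviceId dev, const std::string& name) {
        auto key = std::make_pair(dev, name);
        auto it = cache_.find(key);
        if (it == cache_.end())  // first use on this device: "load" a copy
            it = cache_.emplace(key, Texture{dev, name}).first;
        return it->second;
    }
    // Guard for SetTexture(..)-style calls: the handle must belong to
    // the device that is about to render with it.
    static bool usableOn(const Texture& t, DeviceId dev) {
        return t.owner == dev;
    }
};
```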


2) Direct3D 9 and above add proper support for multihead devices. When you create one of these you pass an array of D3DPRESENT_PARAMETERS (one for each monitor) to CreateDevice(..) instead of a single one. Each physical monitor then has its own swap chain and backbuffer within the single D3D device, so you can independently control how each is rendered to and presented (or alternatively call Present(..) for the whole device). You can get more info in the "Multihead" topic of the DirectX 9 SDK docs.
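Abstractly, the DX9 multihead shape looks something like this (a toy model with made-up names, not the real interfaces): one device, shared resources, and one swap chain per monitor that can be presented independently or all together.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Toy model (made-up names, not the real interfaces) of the DX9
// multihead shape: ONE device owns the shared resources plus one swap
// chain per monitor, each with its own backbuffer size.
struct SwapChain {
    int width;
    int height;
    int presented;   // how many times this head has been presented
};

struct MultiheadDevice {
    std::vector<std::string> sharedTextures;  // visible to every head
    std::vector<SwapChain> chains;            // one per monitor

    explicit MultiheadDevice(std::vector<SwapChain> perHead)
        : chains(std::move(perHead)) {}

    void present(std::size_t head) { chains.at(head).presented++; }  // one head
    void presentAll() { for (auto& c : chains) c.presented++; }      // whole device
};
```

Contrast this with the DX8 case above, where each head would be a whole separate device and `sharedTextures` would have to be duplicated into each one.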

While the above makes multiple monitors nice and rosy, as always there are issues:
- It requires DX9 drivers for the card which support it properly (remember your market includes things like the Matrox G400 and your customers may never have upgraded their drivers past the DirectX 7/8 ones that came with the card [yes, it happens a hell of a lot])

- All of the heads on a single device *must* be in fullscreen mode to use D3D9 multihead.

- There are still people out there with separate cards running their multiple monitors (some for serious use, such as a high-end workstation card in one slot and a plain one in the other, some as a novelty...) - unless you're going to market your product as "multimon - only with D3D9 multihead", your engine probably needs to go the whole hog and support everything.


Caveats:

1) Even if no monitor is connected to a particular head or card, it may still be reported as an available adapter by D3D and Windows. You should allow as much user configuration as possible here (use the configuration of multiple monitors in the Windows display settings as a starting point example). Also try to do as much as you can to determine whether a monitor is actually connected (query the GDI monitor functions such as GetMonitorInfo(..) with the HMONITOR you get from GetAdapterMonitor(..), etc.).

2a) Different monitors connected to a system may require different output refresh rates and screen modes (e.g. "little Johnny gets given his dad's ancient 15-inch monitor as a hand-me-down and connects it to the spare output on his new multihead card; the other output goes to a supercool 21-inch pro monitor").

2b) Additionally there may be other separate graphics cards in the system that require different settings due to things like low performance or low video memory.

2c) Consider what happens if your game/engine waits for VSYNC (end/start of vertical refresh) in its Present(..) call(s). Your presentation stalls until the slowest of all the devices is ready; the rates not being the same means the amount of stall changes per frame too - not nice!! Solutions: 1) avoid sync and suffer tearing/choppiness; 2) use D3DPRESENT_DONOTWAIT and loop round each of the Present(..) calls until they're all done; 3) use multiple threads with one per swap chain; etc...
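Option 2 can be sketched with a toy non-blocking Present (made-up names; in real D3D9 the flag is D3DPRESENT_DONOTWAIT and the "not ready" result is D3DERR_WASSTILLDRAWING): retry only the heads that haven't presented yet, rather than stalling the whole frame on the slowest device.

```cpp
#include <cassert>
#include <vector>

// Toy simulation of a Present that fails instead of blocking until the
// head's vertical refresh arrives.
struct Head {
    int readyAtTick;   // simulated time at which this head's vblank arrives
    bool presented;

    bool tryPresent(int tick) {                // non-blocking Present(..)
        if (tick < readyAtTick) return false;  // "was still drawing"
        presented = true;
        return true;
    }
};

// Keep retrying only the heads that haven't presented yet. Returns the
// tick at which every head had presented.
int presentAllNonBlocking(std::vector<Head>& heads, int tick) {
    for (;;) {
        bool allDone = true;
        for (auto& h : heads)
            if (!h.presented && !h.tryPresent(tick)) allDone = false;
        if (allDone) return tick;
        ++tick;   // in a real loop: do other useful work, then retry
    }
}
```

Fast heads present as soon as they are ready and drop out of the retry set, so the total wait is bounded by the slowest head alone instead of compounding across all of them.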

2d) On a related note, does your game use "a frame" as the end marker of your game loop, and therefore as the time limiter? If so, consider what "a frame" is in a multimon situation. It may be time to investigate separating gameplay from rendering, and multithreading, in that case. (A similar problem occurs when you want to run physics at a fixed rate that may be higher than the "frame rate" - if the logic is running at a different rate to the render, making sure the snapshots the renderer takes of the game are at the right time and are complete needs work.)
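The usual fix is the fixed-timestep accumulator pattern - a generic sketch (the standard technique, not code from this thread):

```cpp
#include <cassert>

// Fixed-timestep accumulator: logic advances in fixed dt steps however
// long the render/Present of each "frame" took, so a stalling multimon
// Present changes only the render rate, not the simulation rate.
struct GameLoop {
    explicit GameLoop(double dt) : fixedDt(dt) {}

    // Call once per rendered frame with the real elapsed time.
    void frame(double elapsed) {
        accumulator += elapsed;
        while (accumulator >= fixedDt) {   // catch up in fixed steps
            accumulator -= fixedDt;
            ++updatesRun;                  // i.e. updateGameLogic(fixedDt)
        }
        // render here, optionally interpolating by accumulator / fixedDt
    }

    double fixedDt;
    double accumulator = 0.0;
    int updatesRun = 0;
};
```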

3) Mixing windowed and fullscreen D3D modes in multimonitor scenarios can cause some headaches - ideally just stick to fullscreen for everything.

4) In windowed mode, if you need decent performance, make sure none of your windows straddles two monitors - even by a tiny amount. When a window straddles two monitors, only *ONE* device/head is used to render the output of that whole window; the buffer is (effectively) Lock()ed, and the rendered output is copied by the CPU, pixel by pixel, to the portion of the window that's on the other monitor! [Roll on LDDM!]


Multiple input devices
----------------------
IIRC Intel once publicly demoed USB 1 with something like 96 input devices connected, all working just fine.

AFAIK DirectInput should be able to handle 12 USB joypads just fine.

Of course each of those devices does require a) electrical power (implying powered USB hubs and the like) and b) CPU power: 12x more input to process, 12x more polling...

I think you're likely to have more gameplay issues than anything else - while 12 simultaneous players "sounds" cool, and is fairly cool from a *technical* perspective, getting it right from a gameplay (+ game management, + social) perspective is very far from trivial IMO!


Summary
-------
1) If you can assume all heads/cards are of equal power, have identical monitor settings, etc. (such as in an LBE (location-based entertainment) situation), then the technical side of things is pretty easy.

2) If you're making a product that has "multi monitor support", then doing it *properly* isn't as trivial as people first assume. [If you say it has multimon and don't do it properly, you'll likely spend a lot of your time on tech support calls, returns and releasing patches.]

3) While it's a nice techy bullet point to have, you have a lot of the gameplay issues faced by a fully networked game to resolve (e.g. what happens if player #9 pauses the game to go to the toilet? Does the game pause for the other 11 players? Does it allow him/her to cheat by pausing while the others tackle a hard bit? Does it kick him/her out of the game? Does an AI character with the same playing style and ability take over while paused? etc.).


Good Luck


Simon O'Connor
Game Programmer &
Microsoft DirectX MVP

...I think I may want to bookmark this one

quote:
Intel once publically demoed USB1 with something like 96 input devices connected, all working just fine
Wow, that's amazing. Just imagine all the crazy stuff you could have connected...

While it may be a cool feature to add multi-monitor, multi-device support, it really sounds like it's not worth it, especially in an age when networked games are the social norm. Also, I believe the tech support that would be required after the game's release would be astounding.

It would be really cool, though, if a large population of the computing world had multi-monitor systems. You could then make games that would fully utilize both/all the monitors, like the Nintendo DS does. For example, you could separate the actual gameplay from the user interface.


Dustin Franklin
Microsoft DirectX MVP

Firstly, thanks Simon for that great explanation.
I really wish multimon was more common; it would be really great in RTS-style games for having one monitor as the window into the world and the other for resource information, menus, etc.

I think I want to demand that all adapters be equal, i.e. a dual/triple-head card, primarily because to gamers it would seem very weird if the game looked better on one monitor than the other.

From the point of view of timing render updates against physics I already have separate physics/render timings although not multithreaded. I think I'd render in parallel then Present() at the same time.

And regarding it having many issues from networked games, are they not equally true for a standard split-screen mode?

Could you advise what performance penalty one might expect rendering to a dual-head device, assuming that there is very little CPU work involved in the render?

Thanks again.

[edited by - d000hg on May 20, 2004 7:21:36 AM]

I have had multiple monitors since they were supported by win98. I have always told myself every game that I make will support multimon in one way or another.

Eve-Online supports multimon, but in the same way an FPS supports it: it just stretches the screen across the 2 monitors, making it pretty useless because your ship is now in the center, broken in half by the monitor bezels.

Putting things like maps, readouts, radar, text chat, and individual views of specific targets/objectives on the second screen would be more useful. Imagine in an MMO you have one monitor with your regular view, and the other has as many buttons as you want, all the different chat channels, your inventory, etc...

IMO they are really an untapped resource. Even if the second monitor is a crappy one on a crappy video card, you can scale all those "non-3D" type things (chat, maps, etc.) to fit and work with it no problem.

I have many video cards dating all the way back to the GeForce2 MX days and will be trying to see what works best.

quote:
Original post by d000hg
And regarding it having many issues from networked games, are they not equally true for a standard split-screen mode?


Yes, although once you get beyond say 2-4 players, the issues are magnified. For example, when there are 2 people playing a racing game and one needs to go to the toilet, it's no problem to pause the game for both players - when you get to 12 players that doesn't scale at all well.


quote:
Could you advise what performance penalty one might expect rendering to a dual-head device, assuming that there is very little CPU work involved in the render?


This is one of those "it depends" things. In particular it depends on where the bottlenecks are within the pipeline of your particular app as well as whether the graphics chip itself has multiple cores or shares a single one.

It should be a non-issue though if your app is supporting split screen. A 4-way single screen split for example implies rendering the scene four times from potentially different camera angles anyway, so if your performance doesn''t suffer from that, then there should be no (practical) performance difference using a single chip core to render 4 monitors worth.

To be honest I've never properly profiled whether there is any overhead associated with multihead, but I've never "noticed" any myself (except in cases where the output window straddled monitors by some amount).


Simon O'Connor
Game Programmer &
Microsoft DirectX MVP

I'm not planning to make a multi-mon game myself -- I'm a big fan of split-screen, or better yet, fitting all four players onto the same screen without splitting it up (easy to do with 2D action/puzzle games) -- but I am curious: does DX9 share texture/vertex resources between monitors on the same card? Do separate cards share resources pooled in system memory?

quote:
Original post by Tom
does DX9 share texture/vertex resources between monitors on the same card?


If the driver for the card exposes it as a DX9 Multihead-capable device AND the application creates its IDirect3DDevice* interface with Multihead specified, then yes, texture/vertex/index resources can be shared between monitors.

In all other cases (<= DX8-style device setup or drivers which aren't Multihead-aware), no: resources must be duplicated for each monitor.

quote:
Do separate cards share resources pooled in system memory?


No. Each IDirect3DDevice* interface is the parent of all sub-interfaces (such as textures and buffers); there isn't any sharing between different devices.

The system memory copies of managed resources are one of the main annoyances with multi-device multimon - however this duplication can be desirable in cases where the two cards are mismatched in capabilities/power.


BTW: if anyone needs a "reference" app, the Moire screensaver sample in the SDK demonstrates proper handling of multiple monitors.


Simon O'Connor
Game Programmer &
Microsoft DirectX MVP

WRT issues like one player leaving the game, that should be fine. With 4-player Xbox games it's fine to pause for someone, and since all players are in the same location the same should be true here. And since each race should only take a few minutes, they can just miss the next race while they pee!

If I want to restrict my support to multi-head systems, is DX9 the only way to detect if both adapters are the same, or is comparing their names adequate?
Currently I use only D3D8.1 from DX8.1 with only the fixed-function pipeline; how much work should it be to port to D3D9 at this point, other than changing 8 to 9 everywhere? My D3D stuff should be localised since I'm actually trying to code in an organised way - just one class and a few other methods here and there. Is this a 1-day/1-week/1-month type change? And will my GF2MX still run with DX9?

quote:
Original post by d000hg
WRT issues like one player leaving the game that should be fine. When you have 4-player XBOX games it''s fine to pause for someone. Since all players are in the same location the same should be true. And since each race should take a few minutes they can just miss the next race while they pee!


My point was: what is "adequate" for 4 players doesn't always work so well when you scale to 12... It depends on the type of game, of course.

quote:
If I want to restrict my support for multi-head systems, is DX9 the only way to detect if both adapters are the same, or is comparing their name adequate?


Hmm, interesting question.

DX9 is the only way to actually specifically take advantage of Multihead (i.e. being able to share resources such as textures, shaders, buffers etc between two monitors).

Personally I'd stay away from comparing the name.

If you want to check for identical hardware (rather than similar capabilities), it's better to use IDirect3D8/9::GetAdapterIdentifier(..) and then compare, say, the VendorId and DeviceId.
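A sketch of that comparison (`AdapterId` here is a made-up stand-in for the relevant D3DADAPTER_IDENTIFIER8/9 fields, not the real struct):

```cpp
#include <cassert>

// Stand-in for the VendorId/DeviceId fields of D3DADAPTER_IDENTIFIER8/9
// (illustration only, not the real struct).
struct AdapterId {
    unsigned vendorId;   // PCI vendor, e.g. 0x10DE for NVIDIA, 0x1002 for ATI
    unsigned deviceId;   // identifies the specific chip
};

// "Identical hardware" check for gating multihead-only support: compare
// the PCI IDs, not the human-readable Description string, which drivers
// are free to word differently between heads or driver versions.
bool sameHardware(const AdapterId& a, const AdapterId& b) {
    return a.vendorId == b.vendorId && a.deviceId == b.deviceId;
}
```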

quote:
Currently I use only D3D8.1 from DX8.1 with only the Fixed function pipeline; how much work should it be to port to D3D9 at this point other than changing 8 to 9 everywhere? My D3D stuff should be localised since I''m actually trying to code in an organised way - just one class and a few other methods here and there. Is this a 1day/week/month type change?


"It depends" - when I was at Creative Asylum (RIP), it took me roughly a day to convert the "CAPER" engine from DX8.0 to DX9.0, and that included reading up on changes to things like enumeration in the docs. However, how long it will take you depends on things such as the exact structure of your code (to give you an idea, that engine only had 1 call to DrawIndexedPrimitive(..) in the whole thing).


quote:
And will my GF2MX still run with DX9?


I'm typing this reply on a machine with a GeForce2 Ultra that's running a preview build of Longhorn... so... quite likely


Simon O'Connor
Game Programmer &
Microsoft DirectX MVP
