Mouse input without Windows messages?


I found it interesting to note that Oblivion doesn't support Alt-Tab.

I'm just curious why you want to "take over" someone's PC? What is it about Alt-Tabbing that you just hate so much that you feel you have to take over complete control? What if someone is playing your game at work, and the boss comes in, and they can't emergency Alt-Tab out of your game? Boom, they get fired, their kids starve to death, they end up on the street in 30-below weather with snow blowing up their ragged pants leg, a dog comes out of an alley and steals their moldy old hot dog bun that they were going to have for dinner. That's what happens. All because you just had to "take over" their PC.

I mean, just sound it out. "Take over your PC." That just sounds like a hostile, unfriendly act to me. Let the users Alt-Tab, for Pete's sake. Do it for the children.


Boom, they get fired, their kids starve to death, they end up on the street in 30-below weather with snow blowing up their ragged pants leg, a dog comes out of an alley and steals their moldy old hot dog bun that they were going to have for dinner. That's what happens. All because you just had to "take over" their PC.

LOL! Oh I love that one!

But seriously: in the beginning, a lot of PC game building was about getting the operating system out of your way. Finally MS got a clue and made ways for gamedevs to cut through the OS and get at the low-level stuff needed for performance. That's where the "take over the PC" concept comes from. Like I said, I'm really just being lazy. Lost device wouldn't be an issue if I didn't have to use Windows to talk to the vidcard, but then I wouldn't get the benefits of using Windows to talk to the vidcard. And yes, back in the day, my games had boss keys!

now, for the other side of the coin:

While being a well-behaved PE executable is all fine and good, there are some Windows hotkeys that can get in the way of a game. I have problems with the Windows key and Oblivion, for example. Alt-Tab might interfere with gameplay in Oblivion too: Tab is your inventory, and you use it a LOT, even in the middle of combat to drink healing potions and select spells. I don't know if Alt-Tab is a problem, since Bethesda disabled it.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

...and just because Oblivion doesn't support it doesn't mean that not supporting it is the right thing to do. That's (in the absence of any personal knowledge of the decisions made for Oblivion on my part) most likely a bug in Oblivion, and - yes - a quick Google for "oblivion alt tab" shows it to be something users are complaining about in droves.

Also consider that Alt-Tab is just one of the potential Lost Device scenarios. There are many others, which Microsoft don't document because they can't document them all. A Lost Device can happen for any number of reasons, and it's still hugely rude not to support them. Just picture the scene: the player has whittled the Boss down to 1 hp, is ready to land the killing blow, then - YANK! Device lost. If you were the player, how would you like the program to behave? Especially considering that this is something for which there is a documented recovery process, which just requires a small amount of extra work on the part of the programmer?
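For reference, the documented recovery flow boils down to something like this sketch (a plain D3D9 device is assumed; OnDeviceLost/OnDeviceReset, PresentFrame and d3dpp are placeholder names for whatever your engine actually uses to tear down and rebuild its D3DPOOL_DEFAULT resources):

```cpp
#include <windows.h>
#include <d3d9.h>

// Placeholder hooks for whatever your engine does with D3DPOOL_DEFAULT resources.
void OnDeviceLost();    // release default-pool resources
void OnDeviceReset();   // recreate / refill them

// Called once per frame; d3dpp is the same structure used to create the device.
void PresentFrame(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS& d3dpp)
{
    if (device->Present(NULL, NULL, NULL, NULL) != D3DERR_DEVICELOST)
        return;                                   // normal frame, nothing to do

    // Device lost (Alt-Tab, screensaver, driver reset, ...): poll until it can
    // be reset, then rebuild everything that lived in D3DPOOL_DEFAULT.
    switch (device->TestCooperativeLevel())
    {
    case D3DERR_DEVICELOST:                       // still lost; skip rendering
        Sleep(50);                                // and try again next frame
        break;
    case D3DERR_DEVICENOTRESET:
        OnDeviceLost();
        if (SUCCEEDED(device->Reset(&d3dpp)))
            OnDeviceReset();
        break;
    }
}
```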

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

As a general rule - unless the docs explicitly say that you need to use D3DPOOL_DEFAULT (e.g. depending on your D3DUSAGE) - you should be using D3DPOOL_MANAGED.

"Use the D3DPOOL_DEFAULT flag at creation time to specify that a resource

are placed in the memory pool most appropriate for the set of usages requested
for the given resource. This is usually video memory, including both local video
memory and accelerated graphics port (AGP) memory."

the "most appropriate" part led me to believe this would be the optimal method.

D3DPOOL_MANAGED has the added convenience that static meshes and textures are not lost on lost device.
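For illustration, here's roughly what that choice looks like at creation time. This is only a sketch: the device pointer, sizes and usage flags below are placeholders, not anything from a particular codebase.

```cpp
#include <d3d9.h>

void CreateExamples(IDirect3DDevice9* device, UINT vbSizeInBytes)
{
    // Managed pool: the runtime keeps a system-memory copy and re-uploads it
    // after a device reset, so this texture survives a lost device with no
    // extra work from you.
    IDirect3DTexture9* tex = NULL;
    device->CreateTexture(256, 256, 0, 0, D3DFMT_A8R8G8B8,
                          D3DPOOL_MANAGED, &tex, NULL);

    // Default pool: required for things like D3DUSAGE_DYNAMIC buffers and
    // render targets, but it's your job to release it before Reset() and
    // recreate it afterwards.
    IDirect3DVertexBuffer9* vb = NULL;
    device->CreateVertexBuffer(vbSizeInBytes,
                               D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                               0, D3DPOOL_DEFAULT, &vb, NULL);
}
```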

Both run at the same speed, obviously (just tested it). Whether there's a backup copy of the static meshes and textures in system memory has no effect on the speed of the vidcard. As long as nothing tries to take control of the vidcard away from DirectX, it's happy.

However, unless the "restore" from system mem to vidram is triggered explicitly by the developer (and I believe it's automatic), that would seem to require a check before using a resource in vidram, to make sure it was in sync with the system mem copy first. And a check is overhead - apparently negligible in this case - so from a clock-cycle point of view, it may be slower.

I know, don't even start! Who cares about a clock cycle or two?

Well, until DirectX can do fixed-function real-time ray tracing and still leave 30 milliseconds per frame at 30 fps for simulation, we probably all should.

I do. But then, I'm ambitious! <g>

Put a Cray on every user's desk, and I'll build you a REAL game!

Until then, my life is an exercise in trying to shoehorn too much game into not enough computer.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Seriously - and I'll start being blunt now - this is not a problem.

Games have been using the managed pool since the managed pool first existed. That's - oooh I dunno - a clear case of production software covering thousands (or tens of thousands) of titles across millions of users.

The managed pool works. It's proven to work in the field. So it's decision time - (a) use something that's been proven to work time and time again, versus (b) adopt some user-hostile wacked-out scheme to explicitly avoid having to use the solution that works. Which do you choose?

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

There are some Windows hotkeys that can get in the way of a game. I have problems with the Windows key and Oblivion, for example. Alt-Tab might interfere with gameplay in Oblivion too: Tab is your inventory, and you use it a LOT, even in the middle of combat to drink healing potions and select spells.

So add an option to your configuration screen where people can opt in to disabling these keys.
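One common way to implement that opt-in on Windows is a low-level keyboard hook that swallows the Windows keys while the option is enabled. This is only a sketch: g_disableWinKey is a hypothetical flag tied to the config-screen checkbox, and you'd want to call UnhookWindowsHookEx on shutdown.

```cpp
#include <windows.h>

static bool  g_disableWinKey = false;   // hypothetical config-screen option
static HHOOK g_keyboardHook  = NULL;

static LRESULT CALLBACK LowLevelKeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && g_disableWinKey)
    {
        const KBDLLHOOKSTRUCT* kb = (const KBDLLHOOKSTRUCT*)lParam;
        if (kb->vkCode == VK_LWIN || kb->vkCode == VK_RWIN)
            return 1;   // eat the keystroke so Windows never sees it
    }
    return CallNextHookEx(g_keyboardHook, code, wParam, lParam);
}

void InstallHotkeyFilter()
{
    g_keyboardHook = SetWindowsHookEx(WH_KEYBOARD_LL, LowLevelKeyboardProc,
                                      GetModuleHandle(NULL), 0);
}
```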

However, unless the "restore" from system mem to vidram is triggered explicitly by the developer (and I believe it's automatic), that would seem to require a check before using a resource in vidram, to make sure it was in sync with the system mem copy first.

No, it can use the same mechanism that you'd use on the application side to deal with vidram losses: it only does work when the lost-device flag is raised (checked once per frame when you Present), at which point it iterates the list of managed resources and restores them from the sysram copy.
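For comparison, here's a rough sketch of what that same mechanism looks like when you have to do it yourself for D3DPOOL_DEFAULT resources (the managed pool does the equivalent internally). The Entry struct and function names are made up for illustration.

```cpp
#include <d3d9.h>
#include <vector>
#include <cstring>

struct Entry
{
    IDirect3DVertexBuffer9* vb;          // vidram object, NULL while device is lost
    std::vector<BYTE>       sysramCopy;  // backup kept in system memory
};

static std::vector<Entry> g_entries;

// Called from the lost-device handler, after Reset() has succeeded.
void RestoreDefaultPoolBuffers(IDirect3DDevice9* device)
{
    for (size_t i = 0; i < g_entries.size(); ++i)
    {
        Entry& e = g_entries[i];
        device->CreateVertexBuffer((UINT)e.sysramCopy.size(),
                                   D3DUSAGE_WRITEONLY, 0,
                                   D3DPOOL_DEFAULT, &e.vb, NULL);
        void* dst = NULL;
        if (e.vb && SUCCEEDED(e.vb->Lock(0, 0, &dst, 0)))
        {
            memcpy(dst, &e.sysramCopy[0], e.sysramCopy.size());
            e.vb->Unlock();
        }
    }
}
```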

Lost device wouldn't be an issue if I didn't have to use Windows to talk to the vidcard, but then I wouldn't get the benefits of using Windows to talk to the vidcard.

Exactly. On the PS3 I had the luxury of not having to go through the OS to talk to the GPU... and it was a nightmare. I had the option of using higher level APIs, but I was writing an engine, so I may as well go as close to the metal as I could, right?

* Packing bits and bytes manually into structures in order to construct command packets, instead of just calling SetBlahState() -- not fun. Yeah, slightly fewer clock cycles, but not enough to matter. Profiling the code showed it wasn't a hot-spot, so time-consuming micro-optimisations were a waste. I'm talking about boosting the framerate from 30FPS up to 30.09FPS, at a huge development cost. I could've spent that time optimising an actual bottleneck. Also, any malformed command packets would simply crash the GPU, without any nice OS notification of the device failure, or device restarts, or debug logs... The amount of time required to debug these systems was phenomenal, which again means less time that I could use to optimise parts that actually mattered.

* Dealing with vidram resource management myself -- not fun. Did you know that any of your GPU resources, such as textures, may exist as multiple allocations? In order to achieve a good level of parallelism without stalls, the driver programmer (or poor console programmer) often intentionally introduces a large amount of latency between the CPU and GPU. When you ask the GPU to draw something, the driver puts this command into a queue that might not be read for upwards of 30ms. This means that if you want to CPU-update a resource that's in use by the GPU, you can either stall for 30ms (no thanks), or allocate a 2nd block of memory for it. Then you need to do all of the juggling that makes n different vidram objects appear to be a single object to the application developer (a rough sketch of how that juggling surfaces on the D3D9 side follows this list). The guys that write drivers for your PC are really good at this stuff and know how to do it efficiently; everyone else's strategies tend to be sub-optimal even when they seem like a good idea (i.e. your GPU driver probably solves these issues more efficiently than you would anyway).

* Then there's porting. Repeat the above work for every single GPU that you want to support...
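To make the resource-juggling point above concrete: this is roughly what the DISCARD/NOOVERWRITE locking pattern on a D3D9 dynamic vertex buffer hands you for free. The function and parameter names below are illustrative only.

```cpp
#include <d3d9.h>
#include <cstring>

// Appends vertex data to a D3DUSAGE_DYNAMIC buffer without ever waiting on the
// GPU: NOOVERWRITE promises not to touch data still in flight, and DISCARD lets
// the driver hand back a fresh allocation ("renaming" the buffer) when it's full.
void AppendVertices(IDirect3DVertexBuffer9* dynamicVB, UINT& writeOffset,
                    UINT bufferSize, const void* verts, UINT bytes)
{
    DWORD flags = D3DLOCK_NOOVERWRITE;
    if (writeOffset + bytes > bufferSize)
    {
        writeOffset = 0;
        flags = D3DLOCK_DISCARD;       // buffer full: driver gives us a new block
    }

    void* dst = NULL;
    if (SUCCEEDED(dynamicVB->Lock(writeOffset, bytes, &dst, flags)))
    {
        memcpy(dst, verts, bytes);
        dynamicVB->Unlock();
        writeOffset += bytes;
    }
}
```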

Giving up a few clock cycles to the API has turned out to be a necessary evil. The alternative just isn't feasible any more.

Profile your code and optimize the bits that matter. Also, your obsession with clock cycles as a measure of performance is a bit outdated. Fetching a variable from RAM into a register can stall the CPU for hundreds of clock cycles if your memory organization and access patterns aren't optimized -- on a CPU that I used recently, reading a variable from RAM could potentially be as costly as 800 float multiplications if you had sub-optimal memory access patterns!

The number one optimisation target these days is memory bandwidth, not ALU operations.
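A toy illustration of that point: both loops below do identical arithmetic, but the first streams through memory sequentially while the second strides across it, so the first is typically far faster on a modern CPU even though the "clock cycle" count of the ALU work is the same.

```cpp
const int N = 1024;
static float grid[N][N];

float SumRowMajor()                 // cache-friendly: walks memory in order
{
    float sum = 0.0f;
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
            sum += grid[y][x];
    return sum;
}

float SumColumnMajor()              // cache-hostile: strides N floats per access
{
    float sum = 0.0f;
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            sum += grid[y][x];
    return sum;
}
```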

Put a Cray on every user's desk, and I'll build you a REAL game!

I think the point of "real" game development is to figure out how to build a game...without a Cray.

I know, don't even start! Who cares about a clock cycle or two?

This guy does:


Like most things in game development, there's the easy way and the fast way to do it. If you code each line with clock cycles in mind from the get-go, you'll have very little optimization to do at the end.

Also, your obsession with clock cycles as a measure of performance is a bit outdated. Fetching a variable from RAM into a register can stall the CPU for hundreds of clock cycles if your memory organization and access patterns aren't optimized -- on a CPU that I used recently, reading a variable from RAM could potentially be as costly as 800 float multiplications if you had sub-optimal memory access patterns!
The number one optimisation target these days is memory bandwidth, not ALU operations.

This.
1000x this.

A fixation on clock cycles when coding on a highly pipelined, out-of-order CPU with massive latency to main memory is the sign of an 'old skool' programmer who has failed to keep up with the realities of a modern development environment.

Seriously - and I'll start being blunt now - this [sync overhead] is not a problem.

Yes, I know. Just being technical, from a theoretical point of view.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

The managed pool works

At the moment, I only have 20 dynamic meshes that would have to be restored. They're actually 20 copies of the same ground quad.

Loading a copy into system mem and then restoring from there would avoid a disk hit on restore. But then again, it's only 20 quads. Try it the easy way, go for the fast way if needed. I was hoping to pull this title off with no paging, but it's just too big and complex.
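If the system-mem route is worth it, a hypothetical sketch of what that could look like for the ground quads (all names here are made up; since they're copies of the same quad, one sysram template refills every buffer after Reset() and the disk never gets touched):

```cpp
#include <d3d9.h>
#include <cstring>

void RestoreGroundQuads(IDirect3DDevice9* device,
                        IDirect3DVertexBuffer9* groundVB[20],
                        const void* quadVerts, UINT quadBytes)
{
    for (int i = 0; i < 20; ++i)
    {
        // Recreate each default-pool buffer and refill it from the one template.
        device->CreateVertexBuffer(quadBytes,
                                   D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                                   0, D3DPOOL_DEFAULT, &groundVB[i], NULL);
        void* dst = NULL;
        if (groundVB[i] && SUCCEEDED(groundVB[i]->Lock(0, 0, &dst, D3DLOCK_DISCARD)))
        {
            memcpy(dst, quadVerts, quadBytes);
            groundVB[i]->Unlock();
        }
    }
}
```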

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

