SSDs are supposedly much faster at seeks and (maybe?) at data transfer bandwidth. Has that actually shown up in how games perform?
Yes; most games load noticeably faster.
Too many times I see games hesitate or just freeze while heavy disk operations are in progress (blocking interface interactions that could/should have stayed responsive). I had thought that threading/multiple processes SHOULD alleviate that kind of behavior.
It depends on the game. The typical places where a game can do heavy disk access are: Loading assets, saving progress, and writing logs.
Loading assets may or may not be done asynchronously, depending on the type of asset. It's not exactly trivial to do, but some assets lend themselves to it more than others. The most notable example I've seen is the Unreal engine loading low-resolution textures first, letting gameplay start, then loading higher resolution textures as the game is actually being played (Borderlands is one example that I know for sure uses this, but other games using Unreal might as well).
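The low-res-first approach can be sketched roughly like this. This is a minimal illustration, not how Unreal actually implements it; all of the names (load_file, load_level, upgrade_textures) are hypothetical, and real engines would decode and upload to the GPU rather than juggle strings:

```python
# Sketch: load tiny placeholder textures synchronously so gameplay can start,
# then swap in high-res versions from a background thread as they stream in.
import threading

textures = {}                 # name -> currently usable texture data
textures_lock = threading.Lock()

def load_file(path):
    # Stand-in for real disk I/O and decoding.
    return f"decoded:{path}"

def upgrade_textures(asset_names):
    for name in asset_names:
        hi = load_file(f"{name}.hires")   # slow read happens off the main thread
        with textures_lock:
            textures[name] = hi           # swap; renderer picks it up next frame

def load_level(asset_names):
    # Blocking, but fast: placeholder versions only.
    for name in asset_names:
        with textures_lock:
            textures[name] = load_file(f"{name}.lowres")
    # High-res versions stream in while the game runs.
    t = threading.Thread(target=upgrade_textures, args=(asset_names,), daemon=True)
    t.start()
    return t

worker = load_level(["rock", "grass"])
worker.join()                 # a real game wouldn't join; it would just keep rendering
print(textures["rock"])       # -> decoded:rock.hires
```

The key point is that the main thread never blocks on the big reads; it always has *something* renderable in the texture table.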
Saving progress is trickier. You want to save a consistent snapshot of the game environment, but saving takes time. You may need to compress the data as well. You have multiple options:
- Suspend the game so that nothing is changing, then write everything to disk while playing a minimal "saving..." animation.
- Make a copy of game state in RAM (plain-old-copy, copy-on-write, transactions, etc) and allow the game to continue while you write the copy to disk in the background.
- Write transactions to disk on the fly and periodically collate them.
- ...And other options, depending on the priorities the developers have.
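The second option (copy the state, write in the background) might look something like this sketch. It assumes a game state small enough to deep-copy in one frame, and uses a write-to-temp-then-rename pattern so a crash mid-save never corrupts the existing file; everything here is illustrative:

```python
# Sketch: snapshot the game state in RAM, then write the snapshot to disk
# on a background thread while play continues.
import copy
import json
import os
import tempfile
import threading

game_state = {"level": 3, "hp": 72, "inventory": ["sword", "potion"]}

def save_async(state, path):
    snapshot = copy.deepcopy(state)      # consistent snapshot, taken on the main thread
    def write():
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(snapshot, f)       # slow write happens off the main thread
        os.replace(tmp, path)            # atomic rename: no half-written save files
    t = threading.Thread(target=write, daemon=True)
    t.start()
    return t

path = os.path.join(tempfile.gettempdir(), "savegame.json")
t = save_async(game_state, path)
game_state["hp"] = 10                    # later mutations don't leak into the save
t.join()
with open(path) as f:
    print(json.load(f)["hp"])            # -> 72
```

Copy-on-write or transactional variants exist to avoid paying for the full deep copy when the state is large.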
Logging to disk is (usually) trivial unless poorly implemented. Only a few games I've heard of have performance problems due to logging, but I can't remember them off the top of my head.
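"Poorly implemented" usually means logging synchronously on the game loop. The standard fix is to hand log records to a background writer thread through a queue, which Python's standard library supports directly (QueueHandler/QueueListener); the file name here is just an example:

```python
# Sketch: keep disk logging off the game loop by enqueueing records and
# letting a background listener thread do the actual file writes.
import logging
import logging.handlers
import os
import queue
import tempfile

log_path = os.path.join(tempfile.gettempdir(), "game.log")
q = queue.Queue(-1)                                # unbounded record queue

file_handler = logging.FileHandler(log_path, mode="w")
listener = logging.handlers.QueueListener(q, file_handler)
listener.start()                                   # writer runs on its own thread

log = logging.getLogger("game")
log.setLevel(logging.INFO)
log.addHandler(logging.handlers.QueueHandler(q))   # game thread only enqueues

log.info("frame 1: spawned 12 enemies")            # returns almost immediately
listener.stop()                                    # flushes the queue on shutdown
file_handler.close()

print(open(log_path).read().strip())
```

The game thread's cost per log call is roughly a queue push, regardless of how slow the disk is at that moment.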
Most modern, high quality games I've played on desktops are very good at all of these, regardless of SSD or platter technology. SSDs just do it faster.
I've been looking at future game designs that would have more disk activity going on behind the user display/processing activities in the 'foreground' (much more continuous file access than typical 'level preloading' schemes).
You can do that now, you just have to budget how much disk access you perform based on what you expect your users' hardware to be able to handle. If you're talking about real-time strong AI using several terabytes of hard drive space as a knowledge base, then consumer hardware isn't quite ready yet.
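One way to frame that budgeting: cap how many bytes of streaming I/O you issue per frame, so background file access never starves the frame. A toy sketch (the numbers and in-memory "file" are stand-ins for a real asset stream):

```python
# Sketch: per-frame I/O budget. Each "frame" reads at most `budget` bytes
# from the stream, spreading a large load across many frames.
import io

def stream_with_budget(f, budget_bytes_per_frame):
    """Yield one chunk per frame, never exceeding the per-frame budget."""
    while True:
        chunk = f.read(budget_bytes_per_frame)
        if not chunk:
            return
        yield chunk

data = io.BytesIO(b"x" * 1000)       # stands in for a large asset file
frames = list(stream_with_budget(data, 256))
print([len(c) for c in frames])      # -> [256, 256, 256, 232]
```

In a real engine the budget would be tuned per target hardware (and could be raised dynamically when an SSD is detected).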
Another general issue was degradation of the SSD over time (their rewrite-cycle limits have actually dropped significantly, and some of the wear-leveling processing that spreads writes across data blocks has been less than sterling).
Depends on the hardware and your access patterns, but those are definitely things to consider when designing software that uses SSDs. I personally have about 8 years of total usage time across my SSDs and haven't had one fail yet. I mainly install games to the SSD and use a platter drive for write-heavy operations. I anticipate that at some point the SSDs will fail, and when that happens I'll just replace them like I do with normal hard drives that fail.
Also, has there been talk of changes to programming methods to take advantage (or to minimize shortcomings) when game machines have SSDs/SSD-HD hybrids (which more and more will have as time goes on)?
Most of the special programming methods I'm aware of to minimize SSD shortcomings are implemented by the firmware and the operating system, and the application software doesn't really need to do anything special itself.
Edited by Nypyren, 11 August 2014 - 09:57 PM.