35GB for audio, are you kidding me?

Started by
70 comments, last by cr88192 10 years, 1 month ago

Total of 6TB of hard disk space.
Number of fucks given about uncompressed audio: 0.


QFT.



I think it's nice that they've considered lower-spec PCs, but these days dual-core PCs are rare amongst gamers. If anyone was serious about gaming, I'd imagine they'd have upgraded to at least a quad-core a long time ago.


You'd be wrong.

From the best source we have, there are still marginally more PC gamers on dual-core than quad-core. Even if they are not "serious gamers", that's still a massive share of the market to ignore.
OTOH, >75% of PC gamers have at least 100GB of free hard-drive space (over 50% have at least 250GB free). Adding more HDD space is cheap and easy on a desktop. Changing the CPU is not.

So, all things considered, it seems like a reasonable optimisation when you're pushing to have as wide an audience as possible, coupled with Respawn's upfront admission that framerate is their top priority.

While I'm on this soapbox, this should be required reading for every software developer.

It is a simple fact that business issues dominate technical issues, and any definition of "good" that fails to acknowledge that fact is bad.

if you think programming is like sex, you probably haven't done much of either. - capn_midnight

Total of 6TB of hard disk space.
Number of fucks given about uncompressed audio: 0.

I have 5TB internal, but still care some about compression...

ironically, it has fairly little to do with how much HDD space is used, and a lot more to do with RAM use and IO speeds.

if a person has a 32-bit process, they are limited to about 3GB of stuff in RAM. this means either shipping less data and assets, forcing the move to 64 bits, or keeping lots of things compressed in RAM (favoring fast decompressors over compression ratio).

it also requires being conservative with disk IO to keep things like loading times and similar relatively fast (don't want bulky data or lots of small files scattered all over, ...).

so, in my case, for the distributed version of the engine, generally most of the audio is compressed into a custom format and put into PAK files which are read in all at once (and sound effects can be loaded by creating a sampler object and pointing it at the correct spot in said PAK). (the PAK format is nearly identical to the Id PAK format, just with a hack/extension to allow files with names longer than 56 chars).
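for reference, the classic Id PAK layout is about as simple as archive formats get (sketch in C; the 56-char name field is the limit the hack/extension above works around, and I won't show the extension itself here):

```c
/* classic Id PAK layout (as used by Quake); all fields little-endian. */
#include <stdint.h>

typedef struct {
    char    magic[4];  /* "PACK" */
    int32_t dirofs;    /* byte offset of the directory */
    int32_t dirlen;    /* directory size in bytes; dirlen/64 = file count */
} pak_header_t;

typedef struct {
    char    name[56];  /* NUL-padded path, e.g. "sound/foo.wav" */
    int32_t offset;    /* file data offset within the PAK */
    int32_t size;      /* file data size in bytes */
} pak_entry_t;

/* a sampler object then only needs (offset, size) into the already-loaded
   PAK image; no per-sound file opens or seeks. */
```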

though, oddly, PAK is not used for textures (which use ZIP files with a PK extension, which are in turn mounted into the VFS, never mind that most of the contents just end up stored), and my script VM largely uses a WAD variant (sort of loosely related to WAD2, serving a similar role to a JAR).

never mind the seeming silliness of 3 subsystems each using a different format and tools for bundling asset data (or for "compiling" script code).

ADD: also, with some of the more advanced features turned off, ... my stuff still works sort-of passably on an 11-year-old laptop.

I wouldn't consider myself a "serious" gamer, since I usually wait a year or two before picking up games and don't put in many hours a week, but I stay up on the latest gaming news, and I do play games every so often (and play them heavily when I do).

However, I have a dual-core processor. I also have 800 GB of hard drive space, and (just recently) 1TB of RAID1 networked storage.

I incrementally upgrade some pieces every so often (the video card was updated in 2009, for example, and the RAM was upgraded (to only 3GB!) in 2013).

It's an aging computer (bought in 2007), but it currently serves my purposes (though it's nearing its end of life), and I intend to get as much real-world value as I can out of the real-world money I invested when I purchased the machine before I replace it. I'm getting very good mileage (which is very real money saved) out of it.

This is just common sense, unless you are a computer hardware enthusiast.

Not everyone who is a gamer is also a computer technician or hardware hobbyist.

I would consider it silly to think that everyone who'd be likely to want to play the game would also have a high-end machine, and I'd think it bad for business for EA to limit their market to only those people.

There are more people buying modern games than there are people who are techy about hardware. I occasionally get calls from otherwise intelligent family members asking me if new game X will run on their machine, or what kind of machine they should buy to play new game Y.

Even when they do, they always have a reasonable budget, and it's always sub-$1000. There is a limit to how much they feel comfortable investing just so they can play a game. They need a computer anyway, but if the difference is between a $400 or $500 laptop, and a $900 laptop capable of playing the latest $60 game, then the actual cost of that game (in their mind) is $460. That's not a number EA wants them to be thinking about.

They just haven't learned hardware (and frankly, I don't know much about it either - I just happen to be the only above-average techy person on their contact list) - and learning hardware shouldn't be a prerequisite for playing games. Even modern games.

Further, how many people hyped up on the latest game are kids or young teens?

"Mom, can I buy Game X? It's $60!" -> "Sounds expensive, but maybe for your birthday."

"Mom, can I buy a new $1400 gaming machine?" -> "Uh, no."

Also, they want to sell the game globally, not just to Americans. What kind of hardware does the average gamer have worldwide? The minimum spec will start there, and increase as EA decides to cut out one area or another.

They also want to sell to the aging gamer demographic, including people working 9-to-5 jobs or in college, who don't have time (or find it a gross waste of money) to constantly upgrade their machines on a biannual basis.

What is the market audience EA is going for?

- A) Only people willing to pay $1400 for a high-end machine every two years?

- B) Only people knowledgeable enough in hardware to build (or incrementally upgrade) their own machine for less than $1400?

- C) Only people living in the United States?

- D) Only people over 18 y/o?

Or perhaps they want to be able to sell the game to anybody and everybody willing to hand over $60, and don't want to field millions of support calls from people crying that they didn't read the minimum specs, and don't even know what "a vram" or "a core quad" is, or whether a GeForce 8800 GT is better or worse than a GeForce GTX 560. I mean, surely the larger number means it's better, right?

(For the record, I also find the 35GB of uncompressed audio a ridiculous thing, and bet they could've come up with a much better solution if they had more time. I'm also not too interested in Titanfall, though I might buy it for $10 on Steam in two years or so when it is on sale.)

This is important information to keep in mind if you are developing games for market. You are not the average consumer. I am not the average consumer, and even I have a machine that many people in this thread seem to think is "rare".


This is important information to keep in mind if you are developing games for market. You are not the average consumer. I am not the average consumer, and even I have a machine that many people in this thread seem to think is "rare".

That is where the different viewpoints are interesting.

I think it is fun to contrast the industry veterans against the less experienced people.

Industry vets: "do not care", "time/money constraints", "business decision", and "the game sucks, who cares."

Others: "Could have been smaller!" "optimization!", "insanity", and my favorite "low spec is rare among high spec players"

Put me firmly in the "do not care" bucket. Modern games are big. Titanfall is 48GB. WoW is a decade old at 25GB. Ghost Recon was 25GB. Force Unleashed was 30GB. Max Payne 3 was 35GB.

The two biggest sources of size are all the pre-recorded audio (which this thread discusses) and the high definition textures.

Gamers keep crying for more unique lines: don't make everyone say the same thing. Give us complex dialog trees with three branches each, for each gender, and based on previous gameplay choices. The end result is that a single plot point might involve 300 or more potential lines of recorded dialog.

Taking this to an extreme, SW:TOR had over 200,000 lines of dialog. Some of them were rather long speeches, all recorded at very high quality. They account for the vast majority of its 30GB space requirement.

Titanfall seems to have followed this same vein. All the reviews talk about the extensive recorded dialog. It seems every grunt has its own collection of voice dialogue based on a bunch of conditions. Nobody likes to hear the same lines of dialogue repeatedly. So it takes a lot of space, not a big deal. I don't want to go back to the days where all 50 people in the town are limited to the words "Please talk to the king!"
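For scale, a quick back-of-the-envelope number (my arithmetic, assuming plain CD-quality PCM): 44,100 samples/sec × 2 bytes × 2 channels is 176,400 bytes/sec, so 35GB divided by 176,400 B/s comes out to roughly 213,000 seconds, or about 59 hours of raw audio.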

This game must be doing some hardcore signal processing for audio decoding and playback for it to actually be a bottleneck to such an extent that the raw waveform data needs to be precomputed and stored on disk across many dozens of gigabytes. I know I would like each of my games to take 10% of my hard drive just so I can hear audio whose quality is probably beyond my hearing range anyway... not.

Throwing infinite money at endless racks of HDDs to compensate for the failure of software to take even the most rudimentary steps towards making sensible use of the hardware it is running on might be an option for some people. It isn't for most, and even though HDD space is cheaper than CPU clocks, as both an occasional gamer and a developer I expect software (including games) to at least try to make sane tradeoffs between performance and accessibility. An extra 4-5GB (out of the 20-30GB that comprises the totality of the game) to store some uncompressed audio? Sure, why not. But 35GB? Sorry, but that is ridiculous, and a clear sign that something went wrong during development. Whether it was due to time or financial constraints or simply poor programming does not change the end result: a colossal waste of space and bandwidth which could have been avoided given more resources.


Adding more HDD space is cheap and easy on a desktop. Changing the CPU is not.

Implementing the audio decompression in software on the PC (instead of relying on the console's audio coprocessor), rather than streaming raw PCM data, would have cost the user nothing, at the expense of some additional development time. It would have been worth the tradeoff if reaching as large a target audience as possible was the main goal. The point is, they did not, and this is the result. Whether you have the hardware to handle the consequences of that choice is irrelevant, and I feel really bad that some programmers on this forum think this was a good idea from a technical perspective. The hard truth is that the development team just ran out of time and could not delay any longer, and so hacked together this terrible solution so they could ship on time. Please remember that "meeting requirements" does not mean writing the worst code possible just because it's easier; that should only be done in emergencies (which unfortunately do happen).

I don't care that games take several dozen gigabytes to store a bunch of assets. Variety is good; more dialogue lines are good. But I do care that a PC version of a game contains several dozen gigabytes of data which are effectively useless, since the same result could be achieved by properly encoding audio for the PC version. If I wanted to waste disk space, I could do it myself by creating a large file and writing random data in it.
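Software audio decode on the PC is not exotic, either; something as small as the public-domain stb_vorbis decoder handles it in a few lines (a sketch only, obviously not whatever codec Respawn actually uses; the filename is a placeholder):

```c
/* sketch: decode an Ogg Vorbis file to 16-bit PCM entirely in software.
   stb_vorbis.c is the single-file public-domain decoder from the stb libs;
   the filename below is a placeholder. */
#include <stdio.h>
#include <stdlib.h>
#include "stb_vorbis.c"

int main(void)
{
    int channels, sample_rate;
    short *pcm;   /* interleaved 16-bit samples, allocated by the decoder */
    int frames = stb_vorbis_decode_filename("dialog_line.ogg",
                                            &channels, &sample_rate, &pcm);
    if (frames < 0) {
        fprintf(stderr, "decode failed\n");
        return 1;
    }
    printf("%d sample frames, %d channels, %d Hz\n",
           frames, channels, sample_rate);
    free(pcm);    /* caller owns the returned buffer */
    return 0;
}
```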

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”


But I do care that a PC version of a game contains several dozen gigabytes of data which are effectively useless, since the same result could be achieved by properly encoding audio for the PC version. If I wanted to waste disk space, I could do it myself by creating a large file and writing random data in it.

Your disk is likely NTFS formatted.

Right click the folder. Click "Properties". Click "Advanced...". Click "Compress". Wait 3 hours while the disk churns. Or if you plan ahead, mark the folder as compressed before the install takes place. Several gigabytes recovered. Done.

If you are on a min-spec machine, expect poor performance during audio playback and other loading as decompression must take place. If you are on a high spec machine, it doesn't matter because you have the cycles to burn on decompression.
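If you would rather not click through dialogs, the same attribute can be set programmatically; a minimal sketch (Windows-only, the path is a placeholder, error handling kept minimal):

```c
/* minimal sketch: set the NTFS compression attribute on a directory so
   files created in it afterwards are compressed automatically. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA(
        "C:\\Games\\SomeGame\\audio",          /* placeholder path */
        GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL, OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS,            /* required to open a directory */
        NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "open failed: %lu\n", GetLastError());
        return 1;
    }

    USHORT fmt = COMPRESSION_FORMAT_DEFAULT;   /* standard NTFS (LZNT1) */
    DWORD  ret;
    if (!DeviceIoControl(h, FSCTL_SET_COMPRESSION,
                         &fmt, sizeof(fmt), NULL, 0, &ret, NULL))
        fprintf(stderr, "FSCTL_SET_COMPRESSION failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}
```

Note that setting the flag on a directory only affects files created in it afterwards; for existing files, apply the same ioctl per file, or just run the built-in `compact /c /s` command on the folder.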


Right click the folder. Click "Properties". Click "Advanced...". Click "Compress". Wait 3 hours while the disk churns. Or if you plan ahead, mark the folder as compressed before the install takes place. Several gigabytes recovered. Done.

If you are on a min-spec machine, expect poor performance during audio playback and other loading as decompression must take place. If you are on a high spec machine, it doesn't matter because you have the cycles to burn on decompression.

You completely missed the point.

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

You completely missed the point

Perhaps I did. Or perhaps I understood your point and intentionally avoided it.

Let's consider this as a business decision.


The facts we know about it are: (1) there was a serious bug, ostensibly related to min-spec machines but potentially related to anything in the audio system, and (2) the solution that shipped was to use uncompressed audio.

The game is really big: a big investment, a big marketing push, and a big risk.

The game has firm deadlines.

The game was probably running very late and over budget.

Quite likely the game had issues getting through first-party certification, so dev and QA focus was redirected away from the PC launch.

Quite likely, once the console builds were finalized, focus returned to polishing the PC version.

The game requires fairly high-spec equipment, but business demands relatively low-end machines for min-spec, so QA focuses on older boxes.

QA discovers a nasty problem on min-spec machines, or a crash in the audio codec, or some other serious bug.

Leads and management discuss options for the bug:
A: Delay the PC version of the game to fix a min-spec problem, costing many millions of dollars.
B: Increase the min-spec, issue pre-release refunds, and change marketing mid-campaign, all costing a few million dollars.
C: Use uncompressed audio and, if necessary, fix it in a patch. Irritates some customers, but does not cost millions.
D, E, F, G: Additional options that were obviously dismissed.

Management decides C is the least risky and least costly option. It also has the side effect of making owners of min-spec machines less likely to buy the game, which is good, because performance on min-spec is usually pretty terrible. This option solves many issues, reduces the overall number of complaints (at the risk of a small number of vocal but easily dismissed complaints), and can be remedied cheaply if there is backlash.

The short version of the thought process is what they gave in the press release: There was a bug on min-spec and the business decision is to use uncompressed audio.



If you have bought Titanfall for PC and you are honestly concerned about hard drive space (which is very unlikely) then compress the folder with all the audio in it, or get a bigger drive.

I see this as a non-issue. If enough players complain, it will likely be corrected in one of the first patches, which usually follow a week or two after the initial launch. By the time it becomes available on Steam it will likely be addressed, and the size reduced by around 30GB.

Yes, the decision is perfectly logical from a business point of view, I agree. I was mainly responding to Phantom, who's acting as if wasting resources should be the default thing to do regardless of the circumstances (and who believes that uncompressed audio and quality are somehow correlated; that one surprised me).

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”


But I do care that a PC version of a game contains several dozen gigabytes of data which are effectively useless, since the same result could be achieved by properly encoding audio for the PC version. If I wanted to waste disk space, I could do it myself by creating a large file and writing random data in it.

Your disk is likely NTFS formatted.

Right click the folder. Click "Properties". Click "Advanced...". Click "Compress". Wait 3 hours while the disk churns. Or if you plan ahead, mark the folder as compressed before the install takes place. Several gigabytes recovered. Done.

If you are on a min-spec machine, expect poor performance during audio playback and other loading as decompression must take place. If you are on a high spec machine, it doesn't matter because you have the cycles to burn on decompression.

IME, depending on the drive and the types of files involved, NTFS file and folder compression often actually makes things considerably faster.

granted, it isn't always effective, and it would be nice if Windows could learn which types of files it works well on (say, realizing it is good to compress TXT and C files and similar, but not worth bothering with AVI or MP4 files, ...).

the issue is, a conventional HDD will typically only go about 100 MB/sec (the exact speed is drive-specific; for example, one could get 75 MB/sec from a 5400 RPM drive and 130 MB/sec from a 7200 RPM drive, ...).

but, often, the decompression code can be sufficiently fast relative to the HDD read speeds as to make the CPU cost of the decompression fairly negligible (like, on modernish HW, pushing around 1GB/second for an inflater in benchmarks *, and around 400MB/sec on 11-year-old laptop hardware with an HDD that seems to read at around 30-45 MB/sec). (ADD: the result being that, even reading and decompressing, the HDD read times dominate over the decompression times.)

*: though, yes, a design more like LZ4 or RLEW/RLEB can be a little faster than Deflate or similar, but not drastically so IME; per-core speeds seem to cap out at about 1.3 GB/sec or so, though experimentally, running all 4 cores at once caps out at around 5.2 GB/sec.

also, invoking speed-magic tends to involve doing everything in a single pass where possible, and keeping pretty much all the working tables and data within about 64kB or so (most of the speed magic goes away if one steps much outside 64kB...).

this basically means things like using 8-bit index tables for Huffman decoding, as 12 or 16-bit lookup tables are too big and kill performance (and, by nature, most common Huffman codes end up under 8 bits anyway).

(or, at least, it all works this way on my HW...).
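roughly what the 8-bit table trick looks like (a generic sketch with illustrative names, not code from my actual decoder): peek the next 8 bits, and in the common case the table gives the symbol and code length directly; longer codes fall back to a slow path.

```c
/* generic sketch of the 8-bit Huffman fast path: a 256-entry table
   indexed by the next 8 bits of the stream, so it stays small and
   cache-friendly. bounds checks omitted for brevity. */
#include <stdint.h>

typedef struct {
    uint8_t sym;   /* decoded symbol, valid when len != 0 */
    uint8_t len;   /* code length in bits; 0 means "code longer than 8 bits" */
} huff_fast_t;

typedef struct {
    const uint8_t *src;      /* compressed input */
    uint32_t       bitbuf;   /* bit accumulator, LSB-first */
    int            bitcnt;   /* valid bits currently in bitbuf */
    huff_fast_t    fast[256];
} huff_dec_t;

/* slow path for the rare codes longer than 8 bits: a real decoder would
   walk the canonical code-length tables here; stubbed in this sketch */
static int huff_decode_slow(huff_dec_t *d) { (void)d; return -1; }

static int huff_decode_sym(huff_dec_t *d)
{
    while (d->bitcnt < 16) {                   /* keep the accumulator topped up */
        d->bitbuf |= (uint32_t)(*d->src++) << d->bitcnt;
        d->bitcnt += 8;
    }
    huff_fast_t e = d->fast[d->bitbuf & 0xFF]; /* peek the next 8 bits */
    if (e.len) {                               /* common case: code fits in 8 bits */
        d->bitbuf >>= e.len;
        d->bitcnt  -= e.len;
        return e.sym;
    }
    return huff_decode_slow(d);                /* rare case */
}
```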

This topic is closed to new replies.
