svetpet

SVN vs Perforce


Hello, I have recently been reading about Perforce and I am wondering whether it is worth the price. This is for a big team - more than 20 people - that currently uses SVN.

Using SVN is fine, but it can sometimes be annoying (missing status icons in Explorer) and hard to work with (merging branches).

Can you tell me if it is worth migrating to Perforce? What would be the annual price for around 150 people? Would Git be a good alternative?

 



If all you are storing is source code, then Git can work. Subversion can work as well for simple stuff, but it maintains multiple complete copies plus deltas, so it requires far more disk space. Subversion handles big files and big resources better, but most SVN clients cannot handle files over 2GB, and the default repository has a 16TB limit. If you are making a long-running game, there is a high chance you will eventually bump into one or both of those limits.

 

Multiple complete copies? On the client side it only stores one "pristine" copy plus the working copy itself. On the server, the database is compressed. Git keeps the whole history on each client, so if anything Git requires far more disk space. If you are talking about the server side, "multiple copies" shouldn't be happening, as they are compressed (unless you are talking about pre-compressed binary formats; those are an issue - albeit only a disk-space issue - for every VCS I know of).

 

Which SVN clients can't handle files over 2GB? TortoiseSVN had an issue a year or two ago, but that should be fixed by now.

The repository limit of 16TB is specific to the BDB database repository backend, which has been deprecated for several years now.
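
 

If you want to check which backend an existing repository uses, something along these lines should work (the repository path here is just a placeholder):

```
# FSFS has been the default backend since Subversion 1.2; BDB had to be chosen explicitly
svnadmin create /srv/svn/myrepo       # creates an FSFS repository by default
cat /srv/svn/myrepo/db/fs-type        # prints "fsfs" (or "bdb" for old repositories)
```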


One of the reasons Perforce can scale to huge projects is that you can define rules to automatically "forget" the content of older versions of certain types of files. For instance, you can have it store only the last 4 versions of PNG files.
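
 

As a rough sketch, that rule would be expressed in the typemap like this (the depot path is just a placeholder; `+S4` keeps only the four most recent revisions):

```
# excerpt from "p4 typemap"
TypeMap:
    binary+S4 //depot/....png
```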

 

This makes it possible to store absolutely all of the assets of even a huge game, such as a large open-world game, at the expense of the history getting truncated. But in those cases keeping the entire history is not feasible anyway.

 

I don't know if Subversion has this feature nowadays.

Edited by Zlodo


If I were going to buy a tool for a large team, I would involve people on the team and compare Perforce's solution to other systems - at the very least also to [AlienBrain](http://www.alienbrain.com/).

 

When I evaluated Perforce in the past I concluded that other tools such as FogBugz/Kiln or Atlassian's Bitbucket/Stash/JIRA were more cost effective solutions.


If you are storing all the other game assets and their source files, Git will quickly implode under its own weight and your development team will require enormous hard drives for their clones of the repository.

 

While some people store their assets and data files outside the repository, in practice this tends to fail in games. If you need to recover what was in place before, that means also getting copies of the data and assets as they were at the time. If the data doesn't match the then-current version of the data structures, or if the data references data types that don't exist in that version of the code, at best you have a build that performs differently than before; at worst you've got an unusable build.

 

A recent advance that is meant to mitigate this is Git-LFS (Large File Storage) -- it's a bit like the Git-Annex plugin you might've heard of. Basically, you have a separate "server" where the "large" files you specify are stored (files that you specify, to be clear; there's no imposed criteria, file-size or otherwise). That server keeps historic versions of the files (kind of like Dropbox) -- I'm not sure if it handles files that have built-in histories in a smart way -- but all that exists inside your Git repo is a small text file that points to the server location and the hash of the intended file version. Those text files are versioned alongside your source code and other project artifacts as per usual, so old versions of your project reference the then-contemporary versions of the large files stored on the LFS server. Some use cases would be working versions of media assets, library dependencies, build systems, or even build output if you like -- though I think it's better to just have your integration/build server archive its output, and make sure the hash of the project version gets written out as part of that.
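
 

For what it's worth, the setup is basically just this (the extensions here are examples; pick whatever large formats you actually use):

```
git lfs install                    # one-time hook setup per machine
git lfs track "*.psd" "*.wav"      # adds patterns to .gitattributes
git add .gitattributes             # the tracking rules are versioned too
# .gitattributes now contains lines like:
#   *.psd filter=lfs diff=lfs merge=lfs -text
```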

 

As a result of how it works, git doesn't bog down re-computing hashes of huge files, and repositories don't bloat -- only the correct version of large files that you currently need will be pulled down to your machine; I imagine that there's some amount of caching that goes on, but not even a single version of the large files will ever be committed to your repo in the usual way.

 

GitHub offers Git-LFS (and, as per usual, you get 1GB of file storage and 1GB of file transfer free per month; beyond that they charge for it, and it's got an asinine cost structure), and I believe they are the ones who developed and/or sponsored LFS. GitLab also supports it (and, as per usual, they don't charge for the privilege, or you can run GitLab yourself for free). Or you can run it on lots of other hosts, or locally on your own box if you like.

 

It hasn't been around for very long; I haven't looked for testimonials, but I imagine you could find some by now.

 

I do know that GitHub has a filesize limitation of 2GB per file for large file storage, but AFAIK that's a policy thing, not a limitation of LFS itself.

Edited by Ravyne


In addition to Frob's advice, one major benefit that Perforce and SVN have over Git is the ability to lock binary files so only one user can modify a file at any one time. I've used Git for games development on large teams, and one of our workflow bugbears was getting merge conflicts on a binary file because two people had either worked on or touched (intentionally or not) the same file.
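
 

For reference, the locking workflows look roughly like this in both systems (the asset path is made up):

```
# Subversion: mark the asset needs-lock so it stays read-only until locked
svn propset svn:needs-lock '*' Assets/Hero.psd
svn lock -m "editing hero portrait" Assets/Hero.psd
# ...edit, then commit (which releases the lock unless you pass --no-unlock)
svn commit -m "Update hero portrait" Assets/Hero.psd

# Perforce: the +l (exclusive open) filetype makes checkout itself the lock
p4 edit -t binary+l //depot/Assets/Hero.psd
```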

 

For games projects, or any project that makes heavy use of binary files, my preference is Perforce over SVN and Git.

 

Perforce does offer a free tier for up to 5 users, which is worth trying out before you shell out for the full licences.




GitHub (not Git itself) now allows you to put role/user-based access controls on specific directories -- though you probably shouldn't be checking binaries in anyway, at least not without Git-LFS or Git-Annex (in which case their metadata files can be put in protected directories). This might be a feature reserved for 'Organization' accounts though -- not sure.

More generally, though, this sounds like the sort of thing that should be prevented by policy and backed by tooling and/or code reviews if you have a good reason to be checking binaries in. The problem doesn't sound like a lack of protections, but one of carelessness, or of build processes that change/produce binary artifacts willy-nilly.
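
As a rough sketch of the kind of tooling I mean, here is a hypothetical pre-commit hook that rejects staged binaries that aren't routed through Git-LFS. The extension list and the LFS check are assumptions; adjust them for your own project.

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook sketch: reject staged binary files that are
# not tracked by Git LFS. Extensions below are assumed asset types.
import subprocess
import sys

BINARY_EXTS = {".psd", ".png", ".fbx", ".wav"}  # assumption: your "binary" formats


def staged_files():
    """List files staged for the current commit."""
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line]


def lfs_tracked(path):
    """'git check-attr filter <path>' reports 'filter: lfs' for LFS-tracked files."""
    out = subprocess.run(["git", "check-attr", "filter", "--", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip().endswith("lfs")


def main():
    offenders = [p for p in staged_files()
                 if any(p.lower().endswith(ext) for ext in BINARY_EXTS)
                 and not lfs_tracked(p)]
    if offenders:
        print("Commit rejected: binary files not tracked by Git LFS:")
        for p in offenders:
            print("  " + p)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```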


I would respectfully disagree - if you have a large team (over 5 people) then it's as impractical to verbally coordinate access to editable binaries as it is to coordinate access to editable source code files. Obviously you don't need build artifacts in version control, although some would argue that it makes sense to keep them there, for example to ensure that the toolchain remains synced with the assets that rely on that version of the toolchain. But beyond that, there are plenty of binary assets that more than one developer can work on. The relevant difference between a binary and a source file is the inability to perform meaningful merges on the binaries, and that's why locking becomes necessary - just as it would if it weren't practical to merge a source file easily.


Granted, it would be nice to be able to lock some files down when concurrent edits would present a merge problem, but is it really that frequent that developers who aren't already closely collaborating need to mess with files like that? It just seems to me that better practices ought to side-step this issue to a very large degree, and that moving to a centralized system like Perforce from a decentralized one like Git to remedy that one issue isn't worth the cost of admission (though, if you prefer P4 to begin with, it's a non-issue). I guess, ultimately, it's a thing you want if you want to organize and use your source control in that way, but IMO there are alternatives that are just as good and don't need it -- I'm not against checking in certain binaries and versioning them alongside the code as needed, I just see a lock as a coarse solution to some kind of process smell.
