Thread: RE: [GD-General] Re: asset & document management
From: Nathan R. <Nat...@te...> - 2003-05-15 22:05:29
> How does it handle an artist that creates 182 versions of an 80 megabyte
> binary file in the course of 3 weeks? I suppose CVS would end up with 14 GB
> of archives, by which time it has probably long croaked :-)
>
> I've looked at Perforce in the past, it's a really nice product, similar to
> what we already know. But does it handle really, really large amounts of
> data?

Have you looked at Subversion at all? (subversion.tigris.org)

Subversion uses compressed binary diffs, so it does a very impressive job of keeping a file's history small. It's not terribly mature just yet, unfortunately, so you're lacking all of the nifty tools that CVS/Perforce/etc. have going for them. I've never used it outside a screwing-around-at-home capacity, so I can't vouch for its reliability in a production environment, but it has worked great for me so far.

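Subversion's real storage uses proper binary deltas; the following is only a toy sketch of the general idea (fixed-size block deltas plus zlib), with made-up sizes, to show why 182 revisions of an 80 MB file need not cost 182 x 80 MB:

    # Toy illustration of compressed binary deltas -- NOT Subversion's real
    # storage format, just a sketch of why per-revision cost can stay small
    # when most of a large binary file is unchanged between check-ins.
    import zlib
    import pickle

    BLOCK = 64 * 1024  # 64 KB blocks; arbitrary choice for the sketch

    def make_delta(old: bytes, new: bytes) -> bytes:
        """Record only the blocks of `new` that differ from `old`."""
        changed = []
        for i in range(0, len(new), BLOCK):
            if old[i:i + BLOCK] != new[i:i + BLOCK]:
                changed.append((i, new[i:i + BLOCK]))
        return zlib.compress(pickle.dumps((len(new), changed)))

    def apply_delta(old: bytes, delta: bytes) -> bytes:
        """Rebuild the new revision from the old one plus the delta."""
        new_len, changed = pickle.loads(zlib.decompress(delta))
        out = bytearray(old[:new_len].ljust(new_len, b"\0"))
        for offset, data in changed:
            out[offset:offset + len(data)] = data
        return bytes(out)

    # Example: an 80 MB file where the artist touched about 1 MB of it.
    rev1 = bytes(80 * 1024 * 1024)
    rev2 = bytearray(rev1)
    rev2[10 * 1024 * 1024:11 * 1024 * 1024] = b"\xff" * (1024 * 1024)
    delta = make_delta(rev1, bytes(rev2))
    assert apply_delta(rev1, delta) == bytes(rev2)
    print(f"delta is {len(delta)} bytes vs {len(rev2)} for a full copy")
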
From: Gareth L. <GL...@cl...> - 2003-05-16 09:34:19
> From: Enno Rehling [mailto:en...@de...]
>
> How does it handle an artist that creates 182 versions of an 80 megabyte
> binary file in the course of 3 weeks? I suppose CVS would end up with 14 GB
> of archives, by which time it has probably long croaked :-)

There is a free two-user license you can download, so just grab it and stress-test it :)

From: Tom F. <to...@mu...> - 2003-05-16 14:18:49
The fox and the chicken go first, bring the chicken back, then take the chicken and the dog :-)

Tom Forsyth - Muckyfoot bloke and Microsoft MVP.

This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Ivan-Assen Ivanov [mailto:as...@ha...]
> Sent: 16 May 2003 14:47
> To: gam...@li...
> Subject: RE: [GD-General] Re: asset & document management
>
> > ... on the other end of a 768Kb/s DSL line....
> > ... rsync-like tricks, which would be nice...
>
> On a semi-related note, what can you recommend for a folder sync between
> two Windows machines, both of which are behind firewalls with
> uncooperative BOFHs? Use of an FTP server on a third machine is permitted.
>
> Any ideas?

From: Tom F. <to...@mu...> - 2003-05-16 14:26:18
Argh. I actually meant to add a useful comment as well:

- A Python/etc. script that handles the FTPing to and from the third machine (according to dates, etc.) -- a minimal sketch follows below.
- Groove. www.groove.net. Slightly wacky, but very cool. Note this doesn't so much synchronise existing folders as provide a folder that is synchronised, if you get what I mean.

Tom Forsyth - Muckyfoot bloke and Microsoft MVP.

This email is the product of your deranged imagination, and does not in any way imply existence of the author.

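For what it's worth, a minimal sketch of the first option, assuming a plain FTP server as the relay; the host, credentials and directory names are placeholders, and only the one-way push half is shown (the other machine would run a matching pull with retrbinary):

    # Minimal one-way folder push over FTP -- a sketch of the "script that
    # handles the FTPing to and from the third machine" idea above.
    # HOST, USER, PASSWORD and REMOTE_DIR are placeholders; only files whose
    # modification time changed since the last run are uploaded.
    import os
    import json
    from ftplib import FTP

    HOST, USER, PASSWORD = "ftp.example.com", "sync", "secret"  # placeholders
    LOCAL_DIR = r"C:\work\shared"
    REMOTE_DIR = "/shared"
    STATE_FILE = os.path.join(LOCAL_DIR, ".sync_state.json")

    def load_state():
        try:
            with open(STATE_FILE) as f:
                return json.load(f)
        except OSError:
            return {}

    def push_changes():
        state = load_state()
        ftp = FTP(HOST)
        ftp.login(USER, PASSWORD)
        ftp.cwd(REMOTE_DIR)
        for name in os.listdir(LOCAL_DIR):
            path = os.path.join(LOCAL_DIR, name)
            if not os.path.isfile(path) or name == ".sync_state.json":
                continue
            mtime = os.path.getmtime(path)
            if state.get(name) == mtime:
                continue  # unchanged since the last run
            with open(path, "rb") as f:
                ftp.storbinary(f"STOR {name}", f)
            state[name] = mtime
        ftp.quit()
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)

    if __name__ == "__main__":
        push_changes()
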
From: Paul B. <pa...@mi...> - 2003-05-16 15:32:45
> -----Original Message-----
> From: Mickael Pointier [mailto:mpo...@ed...]
> To: gam...@li...
>
> Would be interesting to have some hard numbers here about the
> size of assets you are all managing on your projects.

We just finished our game and here are some numbers from my development client on my machine. I'm a dev, so art and design enlistments would be slightly different, but close to this, because I tended to sync everything plus some.

The following includes source code, source art assets and source game/level data, but no compiled artifacts, for a single codeline of the game -- all told, we made 59 branches during development, 5 of those "full" branches.

62341 total files and 14.5 Gb in total size.

On the server side, the depot consumes a total of 61 Gb of disk space (including deleted revisions).

The time it takes to sync depends on how much stuff changed. Most times it takes less than a couple of seconds, and my syncing behavior was driven more by the risk-reward payoff of picking up other people's changes.

> When I see these numbers, I wonder how it's possible to get
> fast versioning programs that perform CRC, compression, archiving...

We started using our current version control system in May of 2001 and it has served us perfectly well (in fact it improved the quality and stability of the game/toolset considerably). The cases where we had performance issues with the depot were always related to too little physical memory (which is easily remedied).

I think if you look into who is using commercial packages like Perforce and other such tools, you'll see that the amount and size of artifacts they have under control is pretty large. I seem to recall nvidia using Perforce and having extremely large depot sizes (in the 80-90 Gb range IIRC).

Paul

From: Gareth L. <GL...@cl...> - 2003-05-16 15:46:29
So you use your own tool? Can you elaborate on it? :)

> -----Original Message-----
> From: Paul Bleisch [mailto:pa...@mi...]
> Sent: 16 May 2003 16:33
> To: gam...@li...
> Subject: RE: [GD-General] Re: asset & document management

From: Stefan B. <ste...@te...> - 2003-05-16 16:17:44
> > -----Original Message-----
> > From: Paul Bleisch [mailto:pa...@mi...]
>                               ^^^^^^^^^^^^^^^^^
>
> So you use your own tool ? Can you elaborate on it ? :)

Well, due to the e-mail address, I can only assume that they use SourceDepot, which AFAIK is basically Perforce by another name ;)

Cheers,
Stef! :)
--
Stefan Boberg, R&D Manager - Team17 Software Ltd.
bo...@te...

From: Gareth L. <GL...@cl...> - 2003-05-16 16:44:50
Oh, I missed that (which is stupid of me, because Paul used the term "depot").

Strange, because Paul also said that their system was better than P4, which doesn't make sense.

Sadly we use SourceSafe, which is so bad even Microsoft don't use it :)

> -----Original Message-----
> From: Stefan Boberg [mailto:ste...@te...]
> Sent: 16 May 2003 17:15
> To: gam...@li...
> Subject: RE: [GD-General] Re: asset & document management

From: brian s. <pud...@po...> - 2003-05-16 17:02:29
I can easily believe SourceDepot is better than P4. It's not just a copy of Perforce; they bought a code license for internal use and then set a team of very smart people to work updating it. So it's got bugfixes and performance improvements that Perforce doesn't have (the NT codebase is a very good stress-tester for any revision control system). For a game, all we had to do was layer a GUI on top for the artists and SD worked great.

SourceDepot is crazy good. I wish they would sell it, but I expect whatever deal they struck with the makers of Perforce precludes that. Too bad :(

--brian

Gareth Lewin wrote:
> Strange because Paul also said that their system was better than p4 which
> doesn't make sense.
>
> Sadly we use SourceSafe which is so bad even Microsoft dont use it :)

From: Paul B. <pa...@mi...> - 2003-05-16 16:54:44
> -----Original Message-----
> From: Gareth Lewin [mailto:GL...@cl...]
> To: gam...@li...
>
> Oh, I missed that (Which is stupid of me, because Paul used the term
> "depot")
>
> Strange because Paul also said that their system was better
> than p4 which doesn't make sense.

Hmm, that wasn't my intent. I like Perforce a lot. I used it for my personal development before joining MS and would use it again immediately if I were not at MS.

The tool we use internally was developed to help the Windows and Office teams deal with version control / SCM issues. Given the size of those teams, scalability and large depots are important. Oddly enough, those aspects are important to games too. :)

> Sadly we use SourceSafe which is so bad even Microsoft dont use it :)

Actually, many teams within MS use SourceSafe. It is all a matter of scale.

Paul

From: Paul B. <pa...@mi...> - 2003-05-16 17:00:18
> -----Original Message-----
> From: Stefan Boberg [mailto:ste...@te...]
> To: gam...@li...
>
> Well, due to the e-mail address, I can only assume that they use
> SourceDepot, which AFAIK is basically Perforce by another name ;)

That isn't quite true. The relationship between Perforce and Source Depot is best left at "relatives". :) There have been a couple of public talks about the featureset of the SCM tools MS uses internally and what kind of issues MS teams face in development.

Paul

From: Tom F. <to...@mu...> - 2003-05-19 11:27:11
> One of the things they seem to pride themselves in is Perforce's ability
> to reliably roll back to any changelist or label very quickly,

I'd think the time required to grab huge files from a server is comparable to the time required to get the much smaller diffs and apply them to the existing file. We've found that even complex stuff like LZ-like compression makes disk accesses faster, not slower, because it's so fast to decompress and still gives good space savings. So applying a diff is a doddle, surely?

Tom Forsyth - Muckyfoot bloke and Microsoft MVP.

This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Neil Stewart [mailto:ne...@r0...]
> Sent: 18 May 2003 00:10
> To: gam...@li...
> Subject: Re: [GD-General] Re: asset & document management
>
> > Why aren't they storing the most recent file and then doing the binary
> > diffs backwards? So they store the diff of how to get from the newest
> > file to the next newest file. That way the newest file would be fast to
> > use, you can save full files every time you branch etc. Might be slower
> > to get old versions but that's mostly for backup/safety anyway.
>
> Well, they aren't storing any binary diffs at the moment, never mind
> backwards. ;)
>
> I get your point, but I don't think they were willing to compromise
> performance on _any_ revision of a file, including the most common usage
> of getting the latest version.
>
> One of the things they seem to pride themselves in is Perforce's ability
> to reliably roll back to any changelist or label very quickly,
> specifically as a bug-finding tool, i.e. you can roll back to a version
> where a nasty bug does not exist and then roll forward to see what broke.
> This would not be possible if they only optimised for the latest version
> of a file, so they chose not to store binary diffs at all.
>
> What I was suggesting was a halfway-house, where you could trade off
> overall performance (on all files) against disk usage.
>
> - Neil.

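For what it's worth, a quick way to poke at the compression claim on your own data; this is a sketch only, since OS caching and the particular disk dominate the result:

    # Quick-and-dirty check of the "compressed reads are faster" claim:
    # write the same data once raw and once zlib-compressed, then time
    # reading (and decompressing) each. Treat this as a sketch, not a
    # benchmark -- run it on real asset files and a cold cache to learn much.
    import os, time, zlib

    data = (b"some moderately redundant game asset data " * 4096) * 64  # ~10 MB
    with open("raw.bin", "wb") as f:
        f.write(data)
    with open("packed.bin", "wb") as f:
        f.write(zlib.compress(data, 6))

    def timed_read(path, decompress=False):
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            blob = f.read()
        if decompress:
            blob = zlib.decompress(blob)
        return time.perf_counter() - t0, len(blob)

    print("raw   :", timed_read("raw.bin"))
    print("packed:", timed_read("packed.bin", decompress=True))
    os.remove("raw.bin"); os.remove("packed.bin")
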
From: Mickael P. <mpo...@ed...> - 2003-05-19 11:58:06
> > One of the things they seem to pride themselves in is Perforce's
> > ability to reliably roll back to any changelist or label very quickly,
>
> I'd think the time required to grab huge files from a server is
> comparable to the time required to get the much smaller diffs and
> apply them to the existing file. We've found that even complex stuff
> like LZ-like compression makes disk accesses faster, not slower,
> because it's so fast to decompress and still gives good space
> savings. So applying a diff is a doddle, surely?

Hmm, of course the whole get operation will eventually be faster, but the PC's resources will totally go down and the machine will crawl.

If you don't have any particular treatment to perform on the data coming from the server, you can afford to use functions like "CopyFileEx", and basically the whole operation will not even be noticeable (except for disk usage) by the user.

Mickael Pointier

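For reference, the CopyFileEx mentioned above can be reached from a script as well; a minimal ctypes sketch (Windows only; the paths in the usage comment are placeholders):

    # Minimal ctypes wrapper around the Win32 CopyFileExW call. Passing NULL
    # for the progress routine and cancel flag keeps it a plain buffered copy
    # handled entirely by the OS.
    import ctypes
    import sys

    def copy_file_ex(src: str, dst: str) -> None:
        if not sys.platform.startswith("win"):
            raise OSError("CopyFileEx is a Win32 API")
        ok = ctypes.windll.kernel32.CopyFileExW(src, dst, None, None, None, 0)
        if not ok:
            raise ctypes.WinError()

    # copy_file_ex(r"\\server\depot\big_asset.bin", r"C:\work\big_asset.bin")
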
From: Stefan B. <ste...@te...> - 2003-05-19 12:44:58
> > One of the things they seem to pride themselves in is Perforce's
> > ability to reliably roll back to any changelist or label very quickly,
>
> I'd think the time required to grab huge files from a server is
> comparable to the time required to get the much smaller diffs and apply
> them to the existing file. We've found that even complex stuff like
> LZ-like compression makes disk accesses faster, not slower, because it's
> so fast to decompress and still gives good space savings. So applying a
> diff is a doddle, surely?

Hmm... I don't think I would really like to use a revision control system where rollback took place locally, using local data. The diffs are always applied on the server and the resulting file is sent to the client. Therefore grabbing a big "key" file (i.e. a standalone copy, much like a key-frame in video files) and applying a set of deltas would have to be performed on the server and then sent to the client. Unless you can come up with a very clever on-the-fly delta application algorithm, this could consume a fair amount of server resources, which is something P4 tries very hard not to do (and that's why it scales to a very large number of users without requiring a hefty server).

Grabbing the latest version is the most common action anyway, so this might not be such a big issue in practice, but in the presence of branches there's not always just one "latest" version, which complicates things ;)

I believe the Subversion team originally stated that binaries would be stored using (xdelta? rsync?) deltas, but that's not been implemented yet, so I guess it's not as easy as they anticipated.

Cheers,
Stef!
--
Stefan Boberg, R&D Manager - Team17 Software Ltd.
bo...@te...

From: J C L. <cl...@ka...> - 2003-05-20 07:17:14
On Mon, 19 May 2003 13:42:39 +0100 Stefan Boberg <ste...@te...> wrote:

> Hmm... I don't think I would really like to use a revision control
> system where rollback took place locally, using local data.

Why? My preferred approach (which uses some properties of BitKeeper and similar systems) is to have a bit-image copy of the master repository on the local system. I then perform operations locally until I'm happy with the results before sending the changesets up to the master, to the staging box, departmental master, or whatever.

> Grabbing the latest version is the most common action anyway so this
> might not be such a big issue in practice but in the presence of
> branches there's not always just one "latest" version which
> complicates things ;)

Having the local repository be a first-class node (i.e. other repositories can be checked out from it, etc.) removes most of these complications, though at the cost of adding some human/organisational requirements.

--
J C Lawrence ---------(*) Satan, oscillate my metallic sonatas.
cl...@ka...                  He lived as a devil, eh?
http://www.kanga.nu/~claw/        Evil is a name of a foeman, as I live.

From: Neil S. <ne...@r0...> - 2003-05-19 16:13:40
> I'd think the time required to grab huge files from a server is
> comparable to the time required to get the much smaller diffs and apply
> them to the existing file. We've found that even complex stuff like
> LZ-like compression makes disk accesses faster, not slower, because it's
> so fast to decompress and still gives good space savings. So applying a
> diff is a doddle, surely?

Loading compressed files is faster because the hard disk access is the limiting factor, not the cost of decompressing the data. Writing compressed files can also be faster, depending on the cost of the dictionary search and the resultant compression factor. Perforce does compress its files, suggesting that they found both reading and writing to be fast enough.

When applying diffs, you are actually reading more from disk than the size of the ultimate file, not less, and the amount of extra disk access depends on how many diffs you have to apply to get the required version of the file, so going further back will take more and more hard disk access and therefore be slower.

Applying a few diffs onto an almost complete file should be pretty quick though, which is why I was suggesting having intermittent, complete copies of each file. It should be easy to find a tradeoff where both space savings and performance are good.

- Neil.

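To put rough numbers on that tradeoff, a back-of-the-envelope sketch; the per-delta size is invented and only the arithmetic matters:

    # Back-of-the-envelope model of "intermittent complete copies": a full
    # copy is stored every K revisions and forward deltas fill the gaps.
    # The figures below (file and delta sizes) are invented for illustration.

    FULL_MB = 80       # size of one complete copy of the asset
    DELTA_MB = 0.5     # assumed average size of one stored delta
    REVISIONS = 182    # the artist's 182 check-ins from earlier in the thread

    def storage_mb(keyframe_every: int) -> float:
        """Total depot space for all revisions with this keyframe interval."""
        fulls = (REVISIONS + keyframe_every - 1) // keyframe_every
        deltas = REVISIONS - fulls
        return fulls * FULL_MB + deltas * DELTA_MB

    def worst_read_mb(keyframe_every: int) -> float:
        """Most data the server must read to rebuild a single revision."""
        return FULL_MB + (keyframe_every - 1) * DELTA_MB

    for k in (1, 8, 32, 182):
        print(f"keyframe every {k:3d}: store {storage_mb(k):8.1f} MB, "
              f"worst-case read {worst_read_mb(k):6.1f} MB")
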