gamedevlists-general Mailing List for gamedev (Page 48)
From: Neil S. <ne...@r0...> - 2003-05-17 22:10:12
> While P4 doesn't do binary diffs, they do support compression (gzip),
> which works reasonably well on most data (exception: huge WAV files).

Good point. I was forgetting about that. Although, it's not much consolation when you make a 20 byte change to a 20 meg file and it has to use another 10 megs of disk space. One solution is to try and avoid having monolithic file formats, so no single file is going to waste large chunks of space for every check-in, but you can't do much about formats that you don't create yourself.

> It would be nice if they supported some sort of xdelta-like diff storage
> instead, but it seems like it's not a massive issue nowadays when huge
> drives are cheap.

I asked Perforce about this a while ago (when we first started using it, in fact) and IIRC they said that although storing binary deltas would save disk space, it would hurt performance quite badly, as they would have to construct enormous files from lots of tiny changes. They went for the performance option, which is fair enough.

One idea I had was to use deltas but, in a manner similar to video formats (like MPEG), store complete images every now and then, reducing the maximum reconstruction to some - possibly user-specified - amount. I don't know if we'll ever see this though, certainly not if storage costs keep going the way they are.

- Neil.
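Neil's MPEG-style keyframe idea can be sketched in a few lines. This is purely illustrative, not how Perforce (or any shipping product) stores revisions: keep a full copy every `interval` revisions and a delta otherwise, so reconstructing any revision applies at most `interval - 1` deltas. The delta format here is a naive opcode list built with Python's difflib.

```python
import difflib

def make_delta(old: bytes, new: bytes):
    """Naive delta: difflib opcodes, storing literal bytes only where files differ."""
    ops = []
    sm = difflib.SequenceMatcher(None, old, new, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))      # reuse a slice of the old revision
        else:
            ops.append(("data", new[j1:j2]))  # literal bytes from the new revision
    return ops

def apply_delta(old: bytes, ops) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            out += old[op[1]:op[2]]
        else:
            out += op[1]
    return bytes(out)

class KeyframeStore:
    """Revision history with a full snapshot ('keyframe') every `interval` revisions."""
    def __init__(self, interval=4):
        self.interval = interval
        self.revs = []  # each entry: ("full", bytes) or ("delta", ops)

    def add(self, content: bytes):
        n = len(self.revs)
        if n % self.interval == 0:
            self.revs.append(("full", content))
        else:
            prev = self.get(n - 1)
            self.revs.append(("delta", make_delta(prev, content)))

    def get(self, rev: int) -> bytes:
        key = (rev // self.interval) * self.interval  # nearest keyframe at or before rev
        content = self.revs[key][1]
        for r in range(key + 1, rev + 1):  # at most interval-1 delta applications
            content = apply_delta(content, self.revs[r][1])
        return content
```

The trade-off Neil describes falls out directly: a smaller `interval` caps reconstruction cost (the performance concern) at the price of more full copies on disk.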
From: Stefan B. <ste...@te...> - 2003-05-17 21:10:09
> I just realised I wasn't very clear here. When I say it doesn't do binary
> diffs, I mean it doesn't store just the differences, but the entire file.
> It can perform a binary diff and show you the results, but that isn't very
> helpful from a storage point of view.

While P4 doesn't do binary diffs, they do support compression (gzip), which works reasonably well on most data (exception: huge WAV files).

It would be nice if they supported some sort of xdelta-like diff storage instead, but it seems like it's not a massive issue nowadays when huge drives are cheap.

Cheers,
Stef! :)
--
Stefan Boberg, R&D Manager - Team17 Software Ltd.
bo...@te...
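Stefan's WAV-file caveat is easy to demonstrate with Python's zlib (the same DEFLATE algorithm gzip uses): repetitive text-like data compresses heavily, while high-entropy data — audio samples, already-compressed files — barely shrinks. The sample data below is invented for illustration.

```python
import os
import zlib

# Repetitive exported-asset text vs. random bytes standing in for audio samples.
text_like = b"position=(1.0, 2.0, 3.0); normal=(0.0, 1.0, 0.0);\n" * 2000
noisy = os.urandom(len(text_like))

for label, data in (("text-like", text_like), ("high-entropy", noisy)):
    packed = zlib.compress(data, 9)
    print(f"{label}: {len(data)} -> {len(packed)} bytes "
          f"({100 * len(packed) // len(data)}%)")
```

The text-like input drops to a small fraction of its size; the random input stays essentially the same size, which is why depot-side compression does little for WAV-heavy projects.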
From: Neil S. <ne...@r0...> - 2003-05-17 17:06:19
> THAT is one heck of a compliment! Maybe their marketing
> department will check with their lawyers and eventually
> get the go-ahead to put a bullet-point on the box:
>
> * Supernaturally Reliable

It's kind of hard to disprove, so they might just get away with it.

> But this will raise questions, like WHICH deity is behind
> the reliability, and whether or not the supernatural force
> will eventually demand souls (better read that EULA!).
>
> <snip>
>
> Maybe they'll have a new anime series like Yu Yu Hakusho or
> Inu-Yasha, or new Dungeons & Dragons expansion packs, or
> a new set of Magic: The Gathering cards, featuring the
> battle for cyberspace.

Yikes, I'll have some of what you've been smoking.

- Neil.
From: Colin F. <cp...@ea...> - 2003-05-17 16:49:15
>>> On top of this, Perforce is extremely reliable,
>>> almost supernaturally so, [...]

THAT is one heck of a compliment! Maybe their marketing department will check with their lawyers and eventually get the go-ahead to put a bullet-point on the box:

* Supernaturally Reliable

But this will raise questions, like WHICH deity is behind the reliability, and whether or not the supernatural force will eventually demand souls (better read that EULA!).

Also, a new kind of potential incompatibility is introduced: your new "Aligned Good" application might not work on your "Chaotic Evil" operating system. Tech support becomes a prayer line, computers become shrines, user manuals become scriptures, uninstall becomes an XOR-cism, programmers become clerics, users become disciples, and the Internet becomes the Astral plane!

Maybe they'll have a new anime series like Yu Yu Hakusho or Inu-Yasha, or new Dungeons & Dragons expansion packs, or a new set of Magic: The Gathering cards, featuring the battle for cyberspace.

Okay, I got in trouble the last time I extrapolated an analogy, so I'll just stop now.

--- Colin
From: Neil S. <ne...@r0...> - 2003-05-17 16:23:00
> Well, it doesn't do binary diffs, which means it would have to store a
> complete copy of every version of the file (i.e. 14GB, same as CVS). It can,

I just realised I wasn't very clear here. When I say it doesn't do binary diffs, I mean it doesn't store just the differences, but the entire file. It can perform a binary diff and show you the results, but that isn't very helpful from a storage point of view.

- Neil.
From: Neil S. <ne...@r0...> - 2003-05-17 15:56:00
> How does it handle an artist that creates 182 versions of an 80 megabyte
> binary file in the course of 3 weeks? I suppose CVS would end up with 14 GB
> of archives, by which time it has probably long croaked :-)

Well, it doesn't do binary diffs, which means it would have to store a complete copy of every version of the file (i.e. 14GB, same as CVS). It can, however, do a basic binary compare so you can avoid storing multiple copies of the same file. In my experience, artists tend to check out an entire directory, change one or two files, and then check them all in again, so this simple compare can save a lot of space. The only downside is that it only seems to do the compare when you ask it to "revert unchanged files", not when simply checking in the files, so it's a bit of a pain if people forget to do that (also quite common).

One ray of light, though, is that it will use a user-provided diff utility, so you could try giving it a binary-aware diff utility. What I'm not sure about is whether it will handle the output from a binary diff or not. I was planning to look into this at some point, so if I ever get round to it, I'll let you know what I find out.

> I've looked at Perforce in the past; it's a really nice product, similar to
> what we already know. But does it handle really, really large amounts of data?

Contrary to what I've said above, it actually does handle a lot of data rather well, albeit in a brute-force manner and using a lot of hard disk space. With a huge amount of data, it does start to slow a little, but not nearly as badly as you would expect. We split our code and data into separate depots, just to maintain a certain level of slickness in the code depot, but we do have a lot of data, and this wouldn't be necessary for all projects.

On top of this, Perforce is extremely reliable, almost supernaturally so, which is a major factor when looking after your most important assets. ;)

- Neil.
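The "revert unchanged files" behaviour Neil describes boils down to comparing each checked-out file against the digest of its last stored revision, and only keeping the ones that really changed. A hypothetical sketch (the function and dictionary names are invented for illustration):

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Content hash of a file, read in chunks so large assets don't need to fit in RAM."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def revert_unchanged(checked_out, last_known):
    """Return only the files whose contents differ from the previously stored
    revision. last_known maps path string -> digest of the last revision."""
    changed = []
    for path in checked_out:
        if last_known.get(str(path)) != digest(path):
            changed.append(path)
    return changed
```

Run server-side before storing new revisions, this drops the "checked out a whole directory, touched two files" submissions down to just the two real changes.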
From: Brian H. <ho...@py...> - 2003-05-16 19:40:44
> My preference would be rsync via ssh, plus whatever scripts you
> need. cygwin includes all the necessary tools. If the firewalls are
> effective, then this would involve rsync'ing to/from the third
> machine as an intermediary.

If you need one-way propagation, then rsync works fine, but if you need to reconcile in multiple directions, I highly recommend unison (which is free) over ssh.

Brian
From: Thatcher U. <tu...@tu...> - 2003-05-16 19:27:56
On Fri, 16 May 2003, Ivan-Assen Ivanov wrote:

> > ... on the other end of a 768Kb/s DSL line....
> > ... rsync-like tricks, which would be nice...
>
> On a semirelated note, what can you recommend for a
> folder sync between two Windows machines, both of which
> are behind firewalls with uncooperative BOFHs?
> Use of an FTP server on a third machine is permitted.
>
> Any ideas?

My preference would be rsync via ssh, plus whatever scripts you need. cygwin includes all the necessary tools. If the firewalls are effective, then this would involve rsync'ing to/from the third machine as an intermediary. YMMV

-Thatcher
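The intermediary setup Thatcher describes is just two one-way transfers: machine A mirrors up to the relay, machine B mirrors down. A hypothetical Python wrapper (the hostname and paths are invented; it assumes rsync and ssh are on the PATH, e.g. from cygwin):

```python
import subprocess

RELAY = "user@relay.example.com"  # hypothetical third machine both sides can reach

def push_cmd(local_dir: str, remote_dir: str) -> list:
    """Build the rsync command that mirrors local_dir up to the relay over ssh."""
    return ["rsync", "-az", "--delete", "-e", "ssh",
            local_dir.rstrip("/") + "/", RELAY + ":" + remote_dir + "/"]

def pull_cmd(remote_dir: str, local_dir: str) -> list:
    """Build the rsync command that mirrors the relay copy down to local_dir."""
    return ["rsync", "-az", "--delete", "-e", "ssh",
            RELAY + ":" + remote_dir + "/", local_dir.rstrip("/") + "/"]

if __name__ == "__main__":
    # Machine A would run: subprocess.run(push_cmd("work/assets", "sync/assets"), check=True)
    # Machine B would run: subprocess.run(pull_cmd("sync/assets", "work/assets"), check=True)
    print(" ".join(push_cmd("work/assets", "sync/assets")))
```

The trailing slashes matter to rsync ("contents of the directory", not the directory itself), and `--delete` makes each leg a true mirror rather than an accumulating union — which is also why this only works one-way, as Brian's unison suggestion addresses.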
From: brian s. <pud...@po...> - 2003-05-16 17:02:29
I can easily believe SourceDepot is better than P4. It's not just a copy of Perforce; they bought a code license for internal use and then set a team of very smart people to work updating it. So it's got bugfixes and performance improvements that Perforce doesn't have (the NT codebase is a very good stress-tester for any revision control system). For a game, all we had to do was layer a GUI on top for the artists and SD worked great.

SourceDepot is crazy good. I wish they would sell it, but I expect whatever deal they struck with the makers of Perforce precludes that. Too bad :(

--brian

Gareth Lewin wrote:

> Oh, I missed that (Which is stupid of me, because Paul used the term
> "depot")
>
> Strange because Paul also said that their system was better than p4 which
> doesn't make sense.
>
> Sadly we use SourceSafe which is so bad even Microsoft don't use it :)
From: Paul B. <pa...@mi...> - 2003-05-16 17:00:18
> -----Original Message-----
> From: Stefan Boberg [mailto:ste...@te...]
> To: gam...@li...
>
> Well, due to the e-mail address, I can only assume that they use
> SourceDepot, which AFAIK is basically Perforce by another name ;)

That isn't quite true. The relationship between Perforce and Source Depot is best left at "relatives". :) There have been a couple of public talks about the featureset of the SCM tools MS uses internally and what kind of issues MS teams face in development.

Paul
From: Paul B. <pa...@mi...> - 2003-05-16 16:54:44
> -----Original Message-----
> From: Gareth Lewin [mailto:GL...@cl...]
> To: gam...@li...
>
> Oh, I missed that (Which is stupid of me, because Paul used the term
> "depot")
>
> Strange because Paul also said that their system was better
> than p4 which doesn't make sense.

Hmm, that wasn't my intent. I like Perforce a lot. I used it for my personal development before joining MS and would use it again immediately if I were not at MS. The tool we use internally was developed to help the Windows and Office teams deal with version control / SCM issues. Given the size of those teams, scalability and large depots are important. Oddly enough, those aspects are important to games too. :)

> Sadly we use SourceSafe which is so bad even Microsoft don't use it :)

Actually, many teams within MS use SourceSafe. It is all a matter of scale.

Paul
From: Gareth L. <GL...@cl...> - 2003-05-16 16:44:50
Oh, I missed that (which is stupid of me, because Paul used the term "depot").

Strange, because Paul also said that their system was better than p4, which doesn't make sense.

Sadly we use SourceSafe, which is so bad even Microsoft don't use it :)

> -----Original Message-----
> From: Stefan Boberg [mailto:ste...@te...]
> Sent: 16 May 2003 17:15
> To: gam...@li...
> Subject: RE: [GD-General] Re: asset & document management
>
> > > -----Original Message-----
> > > From: Paul Bleisch [mailto:pa...@mi...]
>       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> > So you use your own tool ? Can you elaborate on it ? :)
>
> Well, due to the e-mail address, I can only assume that they use
> SourceDepot, which AFAIK is basically Perforce by another name ;)
>
> Cheers,
> Stef! :)
> --
> Stefan Boberg, R&D Manager - Team17 Software Ltd.
> bo...@te...
From: Stefan B. <ste...@te...> - 2003-05-16 16:17:44
> > -----Original Message-----
> > From: Paul Bleisch [mailto:pa...@mi...]
>     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> So you use your own tool ? Can you elaborate on it ? :)

Well, due to the e-mail address, I can only assume that they use SourceDepot, which AFAIK is basically Perforce by another name ;)

Cheers,
Stef! :)
--
Stefan Boberg, R&D Manager - Team17 Software Ltd.
bo...@te...
From: Gareth L. <GL...@cl...> - 2003-05-16 15:46:29
So you use your own tool ? Can you elaborate on it ? :)

> -----Original Message-----
> From: Paul Bleisch [mailto:pa...@mi...]
> Sent: 16 May 2003 16:33
> To: gam...@li...
> Subject: RE: [GD-General] Re: asset & document management
>
> We just finished our game and here are some numbers from my
> development client on my machine. [...]
>
> We started using our current version control system in May of 2001
> and it has served us perfectly well (in fact it improved the quality
> and stability of the game/toolset considerably). [...]
>
> I think if you look into who is using commercial packages like
> Perforce and other such tools, you'll see that the amount and size
> of artifacts they have under control is pretty large. I seem to
> recall nvidia using Perforce and having extremely large depot sizes
> (in the 80-90 Gb range IIRC).
>
> Paul
From: Paul B. <pa...@mi...> - 2003-05-16 15:32:45
> -----Original Message-----
> From: Mickael Pointier [mailto:mpo...@ed...]
> To: gam...@li...
>
> Would be interesting to have some hard numbers here about the
> size of assets you are all managing on your projects.

We just finished our game and here are some numbers from my development client on my machine. I'm a dev, so art and design enlistments would be slightly different, but close to this because I tended to sync everything plus some.

The following includes source code, source art assets and source game/level data, but no compiled artifacts, for a single codeline of the game -- all told, we made 59 branches during development -- 5 of those "full" branches.

62341 total files and 14.5 Gb in total size

On the server side, the depot consumes a total of 61 Gb of diskspace and contains (including deleted revisions)

The time it takes to sync depends on how much stuff changed. Most times it takes less than a couple of seconds, and my syncing behavior was driven more by the risk-reward payoff of picking up other people's changes.

> When I see these numbers, I wonder how it's possible to get
> fast versioning programs that perform CRC, compression, archiving...

We started using our current version control system in May of 2001 and it has served us perfectly well (in fact it improved the quality and stability of the game/toolset considerably). The cases where we had performance issues with the depot were always related to too little physical memory (which is easily remedied).

I think if you look into who is using commercial packages like Perforce and other such tools, you'll see that the amount and size of artifacts they have under control is pretty large. I seem to recall nvidia using Perforce and having extremely large depot sizes (in the 80-90 Gb range IIRC).

Paul
From: Tom F. <to...@mu...> - 2003-05-16 14:26:18
Argh. I actually meant to add a useful comment as well:

- Python/etc script that handles the FTPing to and from the third machine (according to dates, etc).

- Groove. www.groove.net Slightly wacky, but very cool. Note this doesn't so much synchronise existing folders as provide a folder that is synchronised. If you get what I mean.

Tom Forsyth - Muckyfoot bloke and Microsoft MVP.

This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Tom Forsyth
> Sent: 16 May 2003 15:16
> To: 'gam...@li...'
> Subject: RE: [GD-General] Re: asset & document management
>
> The fox and the chicken go first, bring the chicken back,
> then take the chicken and the dog :-)
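Tom's first suggestion — a Python script that handles the FTPing to and from the third machine — might look something like the sketch below. All server details are invented; change detection here is by file size only, which is crude (a manifest of content hashes uploaded alongside the data would be more reliable), and it assumes the FTP server supports the MLSD listing command.

```python
import ftplib
from pathlib import Path

def files_to_upload(local_sizes, remote_sizes):
    """Pure decision step: upload anything missing or size-mismatched on the relay."""
    return sorted(name for name, size in local_sizes.items()
                  if remote_sizes.get(name) != size)

def push_changed(host, user, password, local_root, remote_dir):
    """Upload changed files from local_root to the FTP relay (hypothetical details)."""
    ftp = ftplib.FTP(host)
    ftp.login(user, password)
    ftp.cwd(remote_dir)
    # Sizes of what the relay already holds, via MLSD facts.
    remote = {name: int(facts.get("size", -1))
              for name, facts in ftp.mlsd() if facts.get("type") == "file"}
    local = {p.name: p.stat().st_size
             for p in Path(local_root).iterdir() if p.is_file()}
    for name in files_to_upload(local, remote):
        with open(Path(local_root) / name, "rb") as f:
            ftp.storbinary("STOR " + name, f)
    ftp.quit()
```

The other machine runs the mirror-image pull; keeping the decision step as a pure function makes it easy to swap size comparison for dates or hashes later.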
From: Tom F. <to...@mu...> - 2003-05-16 14:18:49
The fox and the chicken go first, bring the chicken back, then take the chicken and the dog :-)

Tom Forsyth - Muckyfoot bloke and Microsoft MVP.

This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Ivan-Assen Ivanov [mailto:as...@ha...]
> Sent: 16 May 2003 14:47
> To: gam...@li...
> Subject: RE: [GD-General] Re: asset & document management
>
> > ... on the other end of a 768Kb/s DSL line....
> > ... rsync-like tricks, which would be nice...
>
> On a semirelated note, what can you recommend for a
> folder sync between two Windows machines, both of which
> are behind firewalls with uncooperative BOFHs?
> Use of an FTP server on a third machine is permitted.
>
> Any ideas?
From: Ivan-Assen I. <as...@ha...> - 2003-05-16 13:47:56
> ... on the other end of a 768Kb/s DSL line....
> ... rsync-like tricks, which would be nice...

On a semirelated note, what can you recommend for a folder sync between two Windows machines, both of which are behind firewalls with uncooperative BOFHs? Use of an FTP server on a third machine is permitted.

Any ideas?
From: Thatcher U. <tu...@tu...> - 2003-05-16 13:29:12
On May 16, 2003 at 09:38 +0200, Mickael Pointier wrote:

> Would be interesting to have some hard numbers here about the size
> of assets you are all managing on your projects.
>
> I've been on holiday for 2 weeks, and I just made a synchronization of my
> local repository for the project I'm working on; here are the numbers:
> * 13736 files
> * 3438 folders
> * 1650 files were modified, and getting them from the network represented
>   a total of 1.29 gigabytes.
> The whole synchronization operation took 7 minutes and 26 seconds, for an
> average transfer speed of 2.95 megabytes/second.
>
> That's for the main game-ready asset folder.
>
> For what we call "rawdata" (where artists are doing experimentation), we
> have a total of 70036 files (39.2 gigabytes) in 5525 folders.

I'm almost completely sync'd, so I don't have handy transfer numbers. Here are some numbers for the full content tree:

7.6 GB
28050 files

I exclude some branches of the content tree because I'm usually on the other end of a 768Kb/s DSL line. An empty sync using Perforce takes about 1.3 seconds over DSL. When files need to be transferred, the transfer rate is limited to the DSL speed (Perforce doesn't appear to use any rsync-like tricks, which would be nice). But generally I sync to the full repository whenever I feel like I need to (several times a day); the delay is not a consideration.

On a LAN, syncs are obviously much faster, but I don't have figures for that.

--
Thatcher Ulrich
http://tulrich.com
From: Jamie F. <ja...@qu...> - 2003-05-16 10:24:30
We use CVS for binary assets as well as code. Generally, we commit the source asset files to CVS (e.g. max, maya files, etc.), and maintain a build process that turns those files into the final binary asset. Has worked for one project and many demos, and continues to work (touch wood) for two ongoing projects....

Jamie

-----Original Message-----
From: gam...@li... [mailto:gam...@li...] On Behalf Of Thatcher Ulrich
Sent: 16 May 2003 05:30
To: gam...@li...
Subject: Re: [GD-General] Re: asset & document management

> On May 15, 2003 at 11:00 +0200, Enno Rehling wrote:
> > How does it handle an artist that creates 182 versions of an 80 megabyte
> > binary file in the course of 3 weeks? I suppose CVS would end up with
> > 14 GB of archives, by which time it has probably long croaked :-)
>
> No problem so far...
> [...]
>
> --
> Thatcher Ulrich
> http://tulrich.com
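The build process Jamie describes — committed source assets in, final binary assets out — hinges on a make-style dependency rule: rebuild an output when it is missing or older than its source. A minimal sketch (the `convert` callback is a stand-in for whatever Max/Maya batch export a real pipeline would invoke):

```python
from pathlib import Path

def needs_rebuild(source: Path, output: Path) -> bool:
    """Make-style rule: rebuild when the output is missing or older
    than the checked-out source asset."""
    return (not output.exists()
            or output.stat().st_mtime < source.stat().st_mtime)

def build_assets(pairs, convert):
    """pairs: iterable of (source, output) paths; convert: exporter function
    invoked only for stale outputs. Returns the outputs actually rebuilt."""
    built = []
    for src, out in pairs:
        if needs_rebuild(src, out):
            convert(src, out)
            built.append(out)
    return built
```

Because only stale outputs are rebuilt, a fresh CVS update followed by this step touches just the assets whose sources actually changed — the property that keeps an asset build tolerable for a large project.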
From: Gareth L. <GL...@cl...> - 2003-05-16 09:34:19
> From: Enno Rehling [mailto:en...@de...]
>
> How does it handle an artist that creates 182 versions of an
> 80 megabyte binary file in the course of 3 weeks? I suppose
> CVS would end up with 14 GB of archives, by which time it has
> probably long croaked :-)

There is a free 2-user license you can download, so just grab it and stress-test it :)
From: Mickael P. <mpo...@ed...> - 2003-05-16 07:36:23
Enno Rehling wrote:

> Thatcher Ulrich wrote:
>
> > On May 15, 2003 at 01:54 +0200, Enno Rehling wrote:
> >
> > We're using Perforce; it handles binary assets just fine. We have
> > done a bunch of scripting to automate some content processes, e.g. so
> > that artists can hit a button in Maya to do the appropriate
> > edit/checkout. This seems to be working pretty smoothly for us.
>
> How does it handle an artist that creates 182 versions of an 80
> megabyte binary file in the course of 3 weeks? I suppose CVS would
> end up with 14 GB of archives, by which time it has probably long
> croaked :-)
>
> I've looked at Perforce in the past; it's a really nice product,
> similar to what we already know. But does it handle really, really
> large amounts of data?

Would be interesting to have some hard numbers here about the size of assets you are all managing on your projects.

I've been on holiday for 2 weeks, and I just made a synchronization of my local repository for the project I'm working on; here are the numbers:

* 13736 files
* 3438 folders
* 1650 files were modified, and getting them from the network represented a total of 1.29 gigabytes.

The whole synchronization operation took 7 minutes and 26 seconds, for an average transfer speed of 2.95 megabytes/second.

That's for the main game-ready asset folder.

For what we call "rawdata" (where artists are doing experimentation), we have a total of 70036 files (39.2 gigabytes) in 5525 folders.

When I see these numbers, I wonder how it's possible to get fast versioning programs that perform CRC, compression, archiving...

Mickael Pointier
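As a sanity check on those transfer figures, the quoted average rate follows directly from the totals (using 1024-based units):

```python
total_bytes = 1.29 * 1024**3  # 1.29 gigabytes of modified files pulled over the network
seconds = 7 * 60 + 26         # 7 minutes 26 seconds
rate_mb_s = total_bytes / seconds / 1024**2
print(round(rate_mb_s, 2))    # roughly the 2.95 MB/s figure quoted
```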
From: Mickael P. <mpo...@ed...> - 2003-05-16 07:18:27
Enno Rehling wrote:

> Mickael Pointier wrote:
>
> > So back to our own product. It's minimalistic in the sense that it
> > does not perform archiving and cannot merge. Basically it's an
> > "Exclusive Checkout" based system that simply allows people to
> > get/put/add/check out/check in files from a common network repository.
>
> I've also had a look at unison, because someone on the list mentioned
> it. I don't like the idea of not being in control over the archiving.
> On the one hand, I clearly don't want to keep all the old versions.

Well, we have the chance to have a special server that has an automatic "snapshotting" function. Basically, we can get back, in a parallel copy of the tree, any file in the state it was 12 hours ago, last day, 2 days... up to one week ago.

So yes, there is no physical archiving of files, but there is still a history of modifications made by each person. We have never lost anything so far, since we also have the real daily/weekly/monthly/yearly backups :)

Mickael Pointier
From: Thatcher U. <tu...@tu...> - 2003-05-16 04:33:44
On May 15, 2003 at 11:00 +0200, Enno Rehling wrote:

> Thatcher Ulrich wrote:
>
> > On May 15, 2003 at 01:54 +0200, Enno Rehling wrote:
> >
> > We're using Perforce; it handles binary assets just fine. We have
> > done a bunch of scripting to automate some content processes, e.g. so
> > that artists can hit a button in Maya to do the appropriate
> > edit/checkout. This seems to be working pretty smoothly for us.
>
> How does it handle an artist that creates 182 versions of an 80 megabyte
> binary file in the course of 3 weeks? I suppose CVS would end up with 14 GB
> of archives, by which time it has probably long croaked :-)

No problem so far...

> I've looked at Perforce in the past; it's a really nice product, similar to
> what we already know. But does it handle really, really large amounts of
> data?

I don't know the actual size of our repository w/ history, but we throw everything into Perforce, including tons of automatically built assets, and the systems people assure me we're in no danger of running out of disk space. The performance continues to be very good as well. Knock on wood...

I've personally used CVS for binary assets on much smaller projects, but haven't stress-tested it to nearly the same extent.

--
Thatcher Ulrich
http://tulrich.com
From: Nathan R. <Nat...@te...> - 2003-05-15 22:05:29
> How does it handle an artist that creates 182 versions of an 80 megabyte
> binary file in the course of 3 weeks? I suppose CVS would end up with 14 GB
> of archives, by which time it has probably long croaked :-)
>
> I've looked at Perforce in the past; it's a really nice product, similar
> to what we already know. But does it handle really, really large amounts
> of data?

Have you looked at Subversion at all? (subversion.tigris.org)

Subversion uses compressed binary diffs, so it does a very impressive job of keeping a file's history small. It's not terribly mature just yet, unfortunately, so you're lacking all of the nifty tools that CVS/Perforce/etc have going for them. I've never used it outside a screwing-around-at-home capacity, so I can't vouch for its reliability in a production environment, but it has worked great for me so far.