From: Alexander A. <akh...@ri...> - 2010-09-24 07:02:45
Hi all! I wonder if you have plans to implement data deduplication in MooseFS? I mean searching for equal chunks and storing them only once. In my humble opinion it should not be so hard, given that each chunk already has a checksum. Thanks! wbr Alexander Akhobadze
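For illustration only, here is a toy sketch in C of the idea behind the proposal (not MooseFS source; all names are invented): keep an index keyed by chunk checksum, and when a new chunk's checksum is already present, reference the existing chunk instead of storing a second copy. A real implementation would still need to compare chunk contents byte for byte, since equal checksums do not guarantee equal data.

#include <stdint.h>
#include <stddef.h>

#define DEDUP_BUCKETS 65536

typedef struct dedup_entry {
    uint32_t checksum;             /* per-chunk checksum, as already kept by the chunkserver */
    uint64_t chunkid;              /* chunk that already stores this data */
    struct dedup_entry *next;
} dedup_entry;

static dedup_entry *dedup_index[DEDUP_BUCKETS];

/* Return the id of an existing chunk with this checksum, or 0 if none is known.
 * The caller must still verify that the data really matches before reusing it. */
uint64_t dedup_lookup(uint32_t checksum) {
    dedup_entry *e;
    for (e = dedup_index[checksum % DEDUP_BUCKETS]; e != NULL; e = e->next) {
        if (e->checksum == checksum) {
            return e->chunkid;
        }
    }
    return 0;
}

/* Record a newly stored chunk so that later identical chunks can point at it. */
void dedup_insert(dedup_entry *slot, uint32_t checksum, uint64_t chunkid) {
    slot->checksum = checksum;
    slot->chunkid = chunkid;
    slot->next = dedup_index[checksum % DEDUP_BUCKETS];
    dedup_index[checksum % DEDUP_BUCKETS] = slot;
}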
From: Alexander A. <akh...@ri...> - 2010-09-23 12:22:31
Hi all! I have a problem like the one described in the post quoted below. My config:

mfsexports.cfg on Master:
192.168.206.0/24 / rw,alldirs,ignoregid,maproot=0

On client:
mfsmount /moosefs_mnt -H 192.168.206.1 -o mfscachefiles,suid
mkdir /moosefs_mnt/test
chmod g+s /moosefs_mnt/test

The MFS client is not used as an MFS master, metalogger or chunk server. The MFS client is also a Win 2003 domain member Samba server, and the directory /moosefs_mnt/test is shared by Samba.

chown "DOMAIN\some_group" /moosefs_mnt/test
ls -ld /moosefs_mnt/test
drwxrws--- 2 root some_group 0 2010..... /moosefs_mnt/test

A domain user is a member of "DOMAIN\some_group". When he creates a file or folder in /moosefs_mnt/test (via the Samba share), the newly created child folder does not inherit the group ID from its parent, which would be expected because the setgid flag is set:

ls -ld /moosefs_mnt/test/subdir
drwxrws--- 2 DOMAIN\domain-user NEWRIDAN\Domain users 0 2010..... /moosefs_mnt/test/subdir

As you can see, the newly created child folder has the "Domain users" group assigned. I think it is because "Domain users" is the primary group for domain-user. At the same time, when the test folder is shared from an EXT3 file system, the setgid flag works perfectly. Any idea? Thanks! wbr Alexander Akhobadze

======================================================
On Wed, 22 Sep 2010 11:41:00 +0200 Michał Borychowski <mic...@ge...> wrote:
> Please check this FAQ entry:
> http://www.moosefs.org/moosefs-faq.html#supplementary_groups

Oops, I missed the FAQ ... The entry says that MFS will test privileges only of main groups because it is much safer. I love "safer" so I would use the permission 755 instead of 750 Thanks Michał for you helps,

> -----Original Message-----
> From: Anh K. Huynh [mailto:ky...@vi...]
> Sent: Wednesday, September 22, 2010 10:52 AM
> To: moo...@li...
> Subject: [Moosefs-users] problem: cannot access group directory
>
> Hello,
>
> I have a user `bbt_landing` who belongs to the group `builder`, and
> a directory name `test` which has the permission 750. The directory
> is located on a MFS volume. The problem is that user `bbt_landing`
> cannot access to the directory.
>
> More details can be found at
> http://metakyanh.sarovar.org/moosefs.report/report3.html
>
> Any idea?
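As a side note, the expected behaviour can also be checked with Samba out of the picture. The small standalone test below is an illustration added here (it is not part of the original report, and the paths are assumptions taken from it): it creates a subdirectory and compares its group with the parent's. With the setgid bit set on the parent, POSIX semantics say the new directory should inherit the parent's group rather than the creator's primary group, so running it once on an ext3 directory and once on the MFS mount shows whether the difference comes from the filesystem or from Samba.

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void) {
    const char *parent = "/moosefs_mnt/test";         /* assumed: the setgid directory from the report */
    const char *child  = "/moosefs_mnt/test/subdir2"; /* assumed: created by this test */
    struct stat ps, cs;

    if (mkdir(child, 0770) != 0) {
        perror("mkdir");
        return 1;
    }
    if (stat(parent, &ps) != 0 || stat(child, &cs) != 0) {
        perror("stat");
        return 1;
    }
    /* With g+s on the parent, the child's gid should equal the parent's gid. */
    printf("parent gid=%u child gid=%u -> %s\n",
           (unsigned)ps.st_gid, (unsigned)cs.st_gid,
           ps.st_gid == cs.st_gid ? "group inherited (setgid honoured)"
                                  : "group NOT inherited");
    return 0;
}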
From: Anh K. H. <ky...@vi...> - 2010-09-22 10:01:39
On Wed, 22 Sep 2010 11:41:00 +0200 Michał Borychowski <mic...@ge...> wrote:
> Please check this FAQ entry:
> http://www.moosefs.org/moosefs-faq.html#supplementary_groups

Oops, I missed the FAQ ... The entry says that MFS will test privileges only of the main group because it is much safer. I love "safer", so I would use the permission 755 instead of 750 :P

Thanks Michał for your help,

> -----Original Message-----
> From: Anh K. Huynh [mailto:ky...@vi...]
> Sent: Wednesday, September 22, 2010 10:52 AM
> To: moo...@li...
> Subject: [Moosefs-users] problem: cannot access group directory
>
> Hello,
>
> I have a user `bbt_landing` who belongs to the group `builder`, and
> a directory name `test` which has the permission 750. The directory
> is located on a MFS volume. The problem is that user `bbt_landing`
> cannot access to the directory.
>
> More details can be found at
> http://metakyanh.sarovar.org/moosefs.report/report3.html
>
> Any idea?

-- Anh Ky Huynh
From: Michał B. <mic...@ge...> - 2010-09-22 09:41:17
Please check this FAQ entry: http://www.moosefs.org/moosefs-faq.html#supplementary_groups

Regards
Michał

-----Original Message-----
From: Anh K. Huynh [mailto:ky...@vi...]
Sent: Wednesday, September 22, 2010 10:52 AM
To: moo...@li...
Subject: [Moosefs-users] problem: cannot access group directory

Hello,

I have a user `bbt_landing` who belongs to the group `builder`, and a directory name `test` which has the permission 750. The directory is located on a MFS volume. The problem is that user `bbt_landing` cannot access to the directory.

More details can be found at http://metakyanh.sarovar.org/moosefs.report/report3.html

Any idea?

-- Anh Ky Huynh
From: Anh K. H. <ky...@vi...> - 2010-09-22 08:52:46
Hello,

I have a user `bbt_landing` who belongs to the group `builder`, and a directory named `test` which has the permission 750. The directory is located on an MFS volume. The problem is that user `bbt_landing` cannot access the directory.

More details can be found at http://metakyanh.sarovar.org/moosefs.report/report3.html

Any idea?

-- Anh Ky Huynh
From: Anh K. H. <ky...@vi...> - 2010-09-21 15:03:55
On Tue, 21 Sep 2010 16:37:59 +0200 Michał Borychowski <mic...@ge...> wrote:
> Check if you don't have any old "mfsmount" process?

Thanks so much. Actually there was an old "mfsmount" process on the system. That process was killed with the -9 flag, and now `mfs.cgi` reports correctly. :)

> -----Original Message-----
> From: Anh K. Huynh [mailto:ky...@vi...]
> Sent: Monday, September 20, 2010 4:25 AM
> To: MFS
> Subject: [Moosefs-users] duplicate entries in mounts information
>
> Hello,
>
> In my system, `mfs.cgi` reports two mount entries which are the
> same:
>
> 13 db1.internal a.b.c.d /0.MFS 1.6.17 /db1 rw...
> 42 db1.internal a.b.c.d /0.MFS 1.6.17 /db1 rw...
>
> I tried to restart mfsmaster, mfschunk, and then remount /0.MFS, but
> `mfs.cgi` still reported a same result. This is really strange, as
> other mount points are reported correctly.
>
> Any idea?

-- Anh Ky Huynh
From: Michał B. <mic...@ge...> - 2010-09-21 14:38:20
Check if you don't have any old "mfsmount" process?

Regards
Michał

-----Original Message-----
From: Anh K. Huynh [mailto:ky...@vi...]
Sent: Monday, September 20, 2010 4:25 AM
To: MFS
Subject: [Moosefs-users] duplicate entries in mounts information

Hello,

In my system, `mfs.cgi` reports two mount entries which are the same:

13 db1.internal a.b.c.d /0.MFS 1.6.17 /db1 rw...
42 db1.internal a.b.c.d /0.MFS 1.6.17 /db1 rw...

I tried to restart mfsmaster, mfschunk, and then remount /0.MFS, but `mfs.cgi` still reported a same result. This is really strange, as other mount points are reported correctly.

Any idea?

-- Anh Ky Huynh
From: Michał B. <mic...@ge...> - 2010-09-21 08:49:50
Thank you for your remark. Now in the development branch we’ve made automatic data flushing every second. Please also have a look at this thread: http://sourceforge.net/mailarchive/forum.php?thread_name=00ba01cb5336%24e5765cf0%24b06316d0%24%40borychowski%40gemius.pl&forum_name=moosefs-users

Kind regards
Michał

From: 郑海洪 [mailto:zh...@uc...]
Sent: Tuesday, September 21, 2010 3:58 AM
To: moosefs-users
Subject: [Moosefs-users] changelog on Metalogger not flush to disk instantly

Dear Michał,

I found the changelog on Metalogger machine not flush to disk instantly. I dig into the code, and found there miss a fflush in mfs-1.6.17/mfsmetalogger/masterconn.c:185.

142 void masterconn_metachanges_log(masterconn *eptr,const uint8_t *data,uint32_t length) {
143     char logname1[100],logname2[100];
.....
182     data++;
183     version = get64bit(&data);
184     if (eptr->logfd) {
185         fprintf(eptr->logfd,"%"PRIu64": %s\n",version,data); fflush(eptr->logfd); //missed
186     } else {
187         syslog(LOG_NOTICE,"lost MFS change %"PRIu64": %s",version,data);
188     }
189 }

I think this maybe a tiny bug, because the changelog is so important to the metalogger, we need to flush it instantly.

Best regards!
Haihong Zheng from China
From: 郑. <zh...@uc...> - 2010-09-21 02:16:57
Dear Michał,

I found that the changelog on the Metalogger machine is not flushed to disk instantly. I dug into the code and found that an fflush is missing in mfs-1.6.17/mfsmetalogger/masterconn.c:185.

142 void masterconn_metachanges_log(masterconn *eptr,const uint8_t *data,uint32_t length) {
143     char logname1[100],logname2[100];
.....
182     data++;
183     version = get64bit(&data);
184     if (eptr->logfd) {
185         fprintf(eptr->logfd,"%"PRIu64": %s\n",version,data); fflush(eptr->logfd); //missed
186     } else {
187         syslog(LOG_NOTICE,"lost MFS change %"PRIu64": %s",version,data);
188     }
189 }

I think this may be a tiny bug, but because the changelog is so important to the metalogger, we need to flush it instantly.

Best regards!
Haihong Zheng from China
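A minimal sketch of the two possible fixes discussed in this thread, simplified from the quoted snippet rather than taken from the real MooseFS tree: flushing right after every change line, as proposed above, or flushing once per second from the main loop, which is what the reply says was added in the development branch.

#include <stdio.h>
#include <inttypes.h>
#include <syslog.h>

typedef struct masterconn { FILE *logfd; } masterconn;   /* simplified stand-in for the real struct */

void metachanges_log_line(masterconn *eptr, uint64_t version, const char *data) {
    if (eptr->logfd) {
        fprintf(eptr->logfd, "%" PRIu64 ": %s\n", version, data);
        fflush(eptr->logfd);   /* the missing flush: push the change out immediately */
    } else {
        syslog(LOG_NOTICE, "lost MFS change %" PRIu64 ": %s", version, data);
    }
}

/* Alternative: call this once per second from the event loop instead of
 * flushing on every line, trading a little durability for fewer flushes. */
void metachanges_flush_tick(masterconn *eptr) {
    if (eptr->logfd) {
        fflush(eptr->logfd);
    }
}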
From: Anh K. H. <ky...@vi...> - 2010-09-20 14:19:31
On Mon, 20 Sep 2010 21:11:43 +0700 Cuong Hoang Bui <bhc...@gm...> wrote:
> I'm wondering about the storage backend for chunk servers. I have 2
> options 1. Using RAID for underly storage backend for chunk servers.
> 2. Not use RAID, Just separate disks attached directly to OS.
>
> With RAID (maybe RAID 6) option, the solution is very good, but
> take money for RAID card.
>
> With no RAID, I wonder about tolerance (disk failure). If I have 8
> disks attached directly to OS, for example I mount
> as /mnt/sdb, /mnt/sdc,.., .mnt/sdh. If one of them failed, what
> happens to data, also operations that's performing on this disk.
> Please give me some advice.

IMHO things depend on your "goal". If your goal is 2 or more, RAID isn't necessary (this is where the term "distributed" comes in). As recommended in the MFS docs, you should use RAID for the mfsmaster/metalogger.

-- Anh Ky Huynh
From: Cuong H. B. <bhc...@gm...> - 2010-09-20 14:11:55
Hi,

I'm wondering about the storage backend for chunk servers. I have 2 options:
1. Using RAID as the underlying storage backend for chunk servers.
2. Not using RAID, just separate disks attached directly to the OS.

With the RAID (maybe RAID 6) option, the solution is very good, but it costs money for the RAID card.

With no RAID, I wonder about tolerance (disk failure). If I have 8 disks attached directly to the OS, for example mounted as /mnt/sdb, /mnt/sdc, ..., /mnt/sdh, and one of them fails, what happens to the data, and also to operations being performed on this disk? Please give me some advice.

--
**********************
Regards,
Cuong Hoang Bui
ct...@ct...
bhc...@gm...
**********************
From: Anh K. H. <ky...@vi...> - 2010-09-20 02:38:36
Hi,

What's the correct configuration in `mfsexports.cfg` so that the metalogger can access the `mfsmaster` server? The "meta" option in `mfsexports.cfg` allows a client to use meta information (trash information), but it doesn't mention anything related to the logger.

Thank you for your replies,

-- Anh Ky Huynh
From: Anh K. H. <ky...@vi...> - 2010-09-20 02:25:15
Hello,

In my system, `mfs.cgi` reports two mount entries which are the same:

13 db1.internal a.b.c.d /0.MFS 1.6.17 /db1 rw...
42 db1.internal a.b.c.d /0.MFS 1.6.17 /db1 rw...

I tried to restart mfsmaster, mfschunk, and then remount /0.MFS, but `mfs.cgi` still reported the same result. This is really strange, as other mount points are reported correctly.

Any idea?

-- Anh Ky Huynh
From: Kristofer P. <kri...@cy...> - 2010-09-17 00:01:37
Just curious if anyone is using Hypertable on MFS, or has tested it? If so, I would like to ask some questions about the implementation of it. I realize that Hypertable questions should be asked on the Hypertable list, but since this is more specific to MooseFS, I thought there may be some people on this list that may have experience with it.

Thanks,
Kris
From: Anh K. H. <ky...@vi...> - 2010-09-16 14:43:27
On Wed, 15 Sep 2010 13:50:47 +0200 Ioannis Aslanidis <ias...@fl...> wrote:
> I am wondering if there is any list of companies that use MooseFS
> for their infrastructure. I need to know how widely spread MooseFS
> is before being able to implement it. Can anyone provide info on
> that?

We're Skunkworks in Vietnam (skunkworks.vn); we have a small MFS setup with 1 master, 1 metalogger and 2 chunk servers (these chunk servers are also the clients). The primary purpose is to replicate and share data between machines (which are provided by Amazon, cf. http://aws.amazon.com/ec2). We found that NFS was too complex, so we chose MFS ;)

Our current storage is only 186 GB, but it will be expanded soon. As our servers are running in a cloud/virtual environment, the speed varies and depends on the global load: for writing, it varies from 3 MiB/s to 154 MiB/s (the maximum value that I experienced).

Have fun,

-- Anh Ky Huynh
From: Fabien G. <fab...@gm...> - 2010-09-16 08:10:04
On Thu, Sep 16, 2010 at 10:00 AM, Laurent Wandrebeck <lw...@hy...> wrote:
> > Nice you're talking about the trash bin :) : I was asking myself how to get
> > access to its content. The documentation only says "Removed files may be
> > accessed through a separately mounted MFSMETA file system". What does it
> > mean ?
> http://www.moosefs.org/reference-guide.html#using-moosefs
> « By starting the mfsmount process with the -m (or -o mfsmeta) option
> one can mount the auxiliary file system MFSMETA (which may be useful to
> restore a file accidentally deleted from the MooseFS volume or to free
> some space by removing a file before elapsing the quarantine time), for
> example:
>
> mfsmount -m /mnt/mfsmeta »
> add -H mfsmaster_host and it works (just tested:)

Yes, that's exactly what I was looking for: "-o mfsmeta". It works, I can now reach the reserved+trash directories, thank you!

Fabien
From: Michał B. <mic...@ge...> - 2010-09-16 08:00:59
Yes, this is the right part of the documentation. At http://www.moosefs.org/reference-guide.html find: "Removed files may be accessed through a separately mounted MFSMETA file system" and keep on reading. Below there is: "Moving this file to the trash/undel subdirectory causes restoring of the original file in a proper MooseFS file system".

Regards
Michał

From: Fabien Germain [mailto:fab...@gm...]
Sent: Thursday, September 16, 2010 9:52 AM
To: Laurent Wandrebeck
Cc: moo...@li...; 夏亮
Subject: Re: [Moosefs-users] rm the size of chunkserver is not changed

Hello,

On Thu, Sep 16, 2010 at 8:35 AM, Laurent Wandrebeck <lw...@hy...> wrote:
> > When I rm a big file of 80G. I found that the disk size of
> > chunkserver didn’t changed
> >
> > Why?
> Because of the trash :)
> See http://www.moosefs.org/reference-guide.html#using-moosefs , basic
> operations part.

Nice you're talking about the trash bin :) : I was asking myself how to get access to its content. The documentation only says "Removed files may be accessed through a separately mounted MFSMETA file system". What does it mean?

Fabien
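To make the quoted documentation concrete: once MFSMETA is mounted (for example with mfsmount -m /mnt/mfsmeta -H mfsmaster, as shown elsewhere in this thread), restoring a deleted file is just a rename of its trash entry into the trash/undel subdirectory. The sketch below illustrates that single step; the mount point is an assumption, and the real entry names should be taken from a listing of the trash directory.

#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(int argc, char **argv) {
    const char *meta = "/mnt/mfsmeta";   /* assumed MFSMETA mount point */
    char from[4096], to[4096];

    if (argc < 2) {
        fprintf(stderr, "usage: %s <trash-entry-name>\n", argv[0]);
        return 1;
    }
    snprintf(from, sizeof(from), "%s/trash/%s", meta, argv[1]);
    snprintf(to,   sizeof(to),   "%s/trash/undel/%s", meta, argv[1]);

    /* Moving the entry into trash/undel tells MooseFS to restore the file
     * to its original place on the regular volume. */
    if (rename(from, to) != 0) {
        fprintf(stderr, "rename failed: %s\n", strerror(errno));
        return 1;
    }
    printf("restored: %s\n", argv[1]);
    return 0;
}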
From: Laurent W. <lw...@hy...> - 2010-09-16 08:00:52
On Thu, 16 Sep 2010 09:51:40 +0200 Fabien Germain <fab...@gm...> wrote:
> Hello,
>
> On Thu, Sep 16, 2010 at 8:35 AM, Laurent Wandrebeck <lw...@hy...> wrote:
> > > When I rm a big file of 80G. I found that the disk size of
> > > chunkserver didn’t changed
> > >
> > > Why?
> > Because of the trash :)
> > See http://www.moosefs.org/reference-guide.html#using-moosefs , basic
> > operations part.
>
> Nice you're talking about the trash bin :) : I was asking myself how to get
> access to its content. The documentation only says "Removed files may be
> accessed through a separately mounted MFSMETA file system". What does it
> mean ?

http://www.moosefs.org/reference-guide.html#using-moosefs

« By starting the mfsmount process with the -m (or -o mfsmeta) option one can mount the auxiliary file system MFSMETA (which may be useful to restore a file accidentally deleted from the MooseFS volume or to free some space by removing a file before elapsing the quarantine time), for example:

mfsmount -m /mnt/mfsmeta »

add -H mfsmaster_host and it works (just tested:)

Hope it helps,

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Fabien G. <fab...@gm...> - 2010-09-16 07:52:08
Hello,

On Thu, Sep 16, 2010 at 8:35 AM, Laurent Wandrebeck <lw...@hy...> wrote:
> > When I rm a big file of 80G. I found that the disk size of
> > chunkserver didn’t changed
> >
> > Why?
> Because of the trash :)
> See http://www.moosefs.org/reference-guide.html#using-moosefs , basic
> operations part.

Nice you're talking about the trash bin :) : I was asking myself how to get access to its content. The documentation only says "Removed files may be accessed through a separately mounted MFSMETA file system". What does it mean?

Fabien
From: Michał B. <mic...@ge...> - 2010-09-16 07:51:31
Please have a look here: http://www.moosefs.org/moosefs-faq.html#delete

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: 夏亮 [mailto:xia...@zh...]
Sent: Thursday, September 16, 2010 5:48 AM
To: moo...@li...
Subject: [Moosefs-users] rm the size of chunkserver is not changed

Hi :

When I rm a big file of 80G. I found that the disk size of chunkserver didn’t changed

Why?
From: Michał B. <mic...@ge...> - 2010-09-16 07:45:22
And there is Gemius (http://www.gemius.com) with four deployments; the biggest has almost 30 million files distributed over 70 chunk servers with a total space of 570 TiB. The chunkserver machines are at the same time used for other calculations. You can download some screenshots from our CGI monitor here: http://www.moosefs.org/tl_files/monitor_screens_200912.tar

Other Polish companies which use MooseFS for data storage are Redefine (http://www.redefine.pl/) and AdOcean (http://www.adocean-global.com/).

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: jose maria [mailto:let...@us...]
Sent: Thursday, September 16, 2010 2:14 AM
To: moo...@li...
Subject: Re: [Moosefs-users] What companies use MooseFS

>
>> Hello,
>>
>> I am wondering if there is any list of companies that use MooseFS for
>> their infrastructure. I need to know how widely spread MooseFS is
>> before being able to implement it. Can anyone provide info on that?
>>
>> Thank you.
>>

* www.seycob.es , www.copiadeseguridad.cl , remote backup company's

* Four mfs clusters , mfsclients on root vsftpd virtual-servers, example on spain, primary cluster http://control.seycob.es:9425 , rsync to secondary cluster, http://control.seycob.es:9426 are on the same subnet, but in different locations, connected trough fiber optics.

* Absolute stability in the processes 24/7 continuous read/write for ten months in operation, seven discs damaged, ext3 filesystem, problems on ext4 and btrfs. I'm trying again in ext4 ....

* Six sata disks +- per chunkserver, comodity hardware HP/DELL +- 280 € + discs per server, except mfsmaster's, 2 Gigabit Ethernet with Link Agregation in mode 5.

Moosefs has fulfilled our expectations provided stability, performance, scalability, consistency, fault tolerance, content costs, communication with applications, management simple..

* At first I was evaluating the deployment of cluster Cassandra, but stability and access for applications was painful, I am not a fan of java ...., but is now a viable option for file-based nosql/bigtable cluster. Or glusterfs for FTND raid1 filesystem.

* Depending on the goal, fortunately there are several open source options. MooseFS is the most obvious chunk-based cluster.

* Pending features in the future, secondary mfsmaster, define Rac's on mfsmaster/mfschunserver, goal per rac, mfstools for massive manage errors in chunks, inodes, files, reserverd files, mfssnapshot is unusable for massive or continuous backup purposes.

* Regards.
From: Laurent W. <lw...@hy...> - 2010-09-16 07:02:44
On Thu, 16 Sep 2010 11:47:40 +0800 夏亮 <xia...@zh...> wrote:
> Hi :
>
> When I rm a big file of 80G. I found that the disk size of
> chunkserver didn’t changed
>
> Why?

Because of the trash :)

See http://www.moosefs.org/reference-guide.html#using-moosefs , basic operations part.

Regards,

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: 夏亮 <xia...@zh...> - 2010-09-16 04:17:23
Hi,

When I rm a big file of 80G, I found that the disk size of the chunkserver didn't change. Why?
From: jose m. <let...@us...> - 2010-09-16 00:40:52
>
>> Hello,
>>
>> I am wondering if there is any list of companies that use MooseFS for
>> their infrastructure. I need to know how widely spread MooseFS is
>> before being able to implement it. Can anyone provide info on that?
>>
>> Thank you.
>>

* www.seycob.es , www.copiadeseguridad.cl , remote backup companies.

* Four MFS clusters, mfsclients on root vsftpd virtual servers. Example in Spain: the primary cluster http://control.seycob.es:9425 and the secondary cluster (rsync from the primary) http://control.seycob.es:9426 are on the same subnet, but in different locations, connected through fiber optics.

* Absolute stability in the processes: 24/7 continuous read/write for ten months in operation, seven disks damaged, ext3 filesystem; problems on ext4 and btrfs. I'm trying ext4 again ....

* Six SATA disks (more or less) per chunkserver, commodity HP/DELL hardware at about 280 € plus disks per server, except the mfsmasters; 2 Gigabit Ethernet with link aggregation in mode 5.

MooseFS has fulfilled our expectations, providing stability, performance, scalability, consistency, fault tolerance, contained costs, communication with applications and simple management.

* At first I was evaluating the deployment of a Cassandra cluster, but stability and access for applications was painful, and I am not a fan of Java ...; it is now a viable option for a file-based nosql/bigtable cluster. Or glusterfs for a FTND raid1 filesystem.

* Depending on the goal, fortunately there are several open source options. MooseFS is the most obvious chunk-based cluster.

* Features pending for the future: a secondary mfsmaster, defining racks on mfsmaster/mfschunkserver, goal per rack, mfstools for massively managing errors in chunks, inodes, files and reserved files; mfssnapshot is unusable for massive or continuous backup purposes.

* Regards.
From: Laurent W. <lw...@hy...> - 2010-09-15 13:01:42
On Wed, 15 Sep 2010 14:22:37 +0200 Bán Miklós <ba...@vo...> wrote:
> Hi, we recently started to use it in University of Debrecen, Hungary on
> 7 machines ~ 3Tb data.

Hi,

I'm right now deploying it after a testing phase. Moving 70TB takes a bit of time ;)

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C