From: Josef <pe...@p-...> - 2011-01-20 19:04:06
fuse is alive...

postak:/mnt# lsmod | grep fuse
fuse                   40596  1

On 1/20/11 7:55 PM, Ricardo J. Barberis wrote:
> On Thu, 20 January 2011, Josef wrote:
>> Hello,
>> I'm having problems with mfsmount on my Debian system. If I do
>> mfsmount /mnt/mfs/ -H 10.0.0.1
>> I get:
>> mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
>> fuse: device not found, try 'modprobe fuse' first
>> error in fuse_mount
>>
>> But fuse seems to be installed:
>> fusermount --version
>> fusermount version: 2.7.4
>>
>> Any suggestions?
>
> First check if it really is loaded:
>
> lsmod | grep fuse
>
> If not:
>
> modprobe -v fuse
>
>> Thanks!
>>
>> Josef
>
> Cheers,
From: Ricardo J. B. <ric...@da...> - 2011-01-20 18:55:33
On Thu, 20 January 2011, Josef wrote:
> Hello,
> I'm having problems with mfsmount on my Debian system. If I do
> mfsmount /mnt/mfs/ -H 10.0.0.1
> I get:
> mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
> fuse: device not found, try 'modprobe fuse' first
> error in fuse_mount
>
> But fuse seems to be installed:
> fusermount --version
> fusermount version: 2.7.4
>
> Any suggestions?

First check if it really is loaded:

lsmod | grep fuse

If not:

modprobe -v fuse

> Thanks!
>
> Josef

Cheers,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
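[A minimal sketch pulling together the fix described in this thread, runnable as root; the mount point and master address are the ones from Josef's report:]

    #!/bin/sh
    # Load the fuse kernel module if it is missing, then retry the mount.
    if ! lsmod | grep -q '^fuse'; then
        modprobe -v fuse || exit 1
    fi
    mfsmount /mnt/mfs/ -H 10.0.0.1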
From: Josef <pe...@p-...> - 2011-01-20 18:27:09
Hello,

I'm having problems with mfsmount on my Debian system. If I do

mfsmount /mnt/mfs/ -H 10.0.0.1

I get:

mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
fuse: device not found, try 'modprobe fuse' first
error in fuse_mount

But fuse seems to be installed:

fusermount --version
fusermount version: 2.7.4

Any suggestions?

Thanks!

Josef
From: Ólafur Ó. <osv...@ne...> - 2011-01-20 15:39:05
Hi,

Attached is a patch with the changes I made to the deb packaging for use on Ubuntu. Summary:

- Added mfscgiserv (although stopping it with the service command doesn't work yet)
- Added ufw as a suggested package
- Added a ufw application config to mfs-common
- Made minor changes to the copyright file
- Changed user/group handling so that it uses mfs/mfs in all places
- Removed the compile-time chown, since it happens after install with the package
- Changed the config directory to /etc/mfs

Use if you like... it should apply cleanly to mfs-1.6.20

/Oli

--
Ólafur Osvaldsson
System Administrator
Nethonnun ehf.
e-mail: osv...@ne...
phone: +354 517 3400
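[A sketch of how a packaging patch like this would typically be applied and built; the patch filename is hypothetical, since the attachment itself is not part of the archive:]

    # Apply the packaging patch to the unpacked 1.6.20 tree and rebuild the debs.
    cd mfs-1.6.20
    patch -p1 < ../mfs-deb-packaging.patch   # hypothetical filename
    dpkg-buildpackage -us -uc                # build unsigned packages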
From: 王浩 <wan...@gm...> - 2011-01-20 11:01:30
hi,

Today I changed my MFS server's IP address, but I didn't change the IP in config files such as mfschunkserver.cfg. When I rebooted the machine and started the MFS applications:

/opt/mfs/sbin/mfsmetarestore -a
/opt/mfs/sbin/mfsmaster
/opt/mfs/sbin/mfsmetalogger
/opt/mfs/sbin/mfschunkserver

I found that /mnt/b and /mnt/c (defined in the mfshdd.cfg file) have no data any more.

Below are my disks:

mount /dev/sdb1 /mnt/b
mount /dev/sdc1 /mnt/c
mount /dev/sdd1 /mnt/d
mount /dev/sde1 /mnt/e
mount /dev/sdf1 /mnt/f

Below is the share folder:

/opt/mfs/bin/mfsmount /mnt/mfs -H 192.168.*.*

In the /mnt/mfs folder I can see my data, but I can't use it. Checking the space:

[root@store-temp mfs]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             888G   14G  828G   2% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sdb1             917G  201M  871G   1% /mnt/b
/dev/sdc1             917G  201M  871G   1% /mnt/c
/dev/sdd1             917G  201M  871G   1% /mnt/d
/dev/sde1             917G  201M  871G   1% /mnt/e
/dev/sdf1             917G  217M  871G   1% /mnt/f

The error log is:

Jan 20 16:13:41 store-2-4 mfsmaster[2753]: currently unavailable chunk 000000000002150F (inode: 93 ; index: 0)
Jan 20 16:13:41 store-2-4 mfsmaster[2753]: currently unavailable chunk 0000000000021515 (inode: 93 ; index: 1)
Jan 20 16:13:41 store-2-4 mfsmaster[2753]: currently unavailable chunk 000000000002151B (inode: 93 ; index: 2)
Jan 20 16:13:41 store-2-4 mfsmaster[2753]: currently unavailable chunk 0000000000021521 (inode: 93 ; index: 3)
Jan 20 16:13:41 store-2-4 mfsmaster[2753]: * currently unavailable file 93: store/20101125/19876/transcode.flv
Jan 20 16:13:41 store-2-4 mfsmaster[2753]: currently unavailable chunk 000000000002B772 (inode: 95 ; index: 0)
Jan 20 16:13:41 store-2-4 mfsmaster[2753]: * currently unavailable file 95: store/20110102/22518/trans2_realtime_process2.125091_144574_167863_1.flv

Please help me find the data! Thank you!

wanghao
20110120
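[The symptoms above (mount works, files visible, chunks unavailable) are consistent with chunkservers no longer reaching the master at its old address. A hedged sketch of the usual fix: MASTER_HOST is a real mfschunkserver.cfg option, but 192.168.0.1 is a placeholder (the real address is elided above) and the config path is an assumption based on the /opt/mfs prefix used in this message:]

    # On every chunkserver: point MASTER_HOST at the master's new address,
    # then restart so the chunkserver re-registers its chunks.
    sed -i 's/^[# ]*MASTER_HOST.*/MASTER_HOST = 192.168.0.1/' /opt/mfs/etc/mfschunkserver.cfg
    /opt/mfs/sbin/mfschunkserver restart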
From: 妖狐 <myt...@gm...> - 2011-01-19 15:41:41
hi all:

CHUNKS_LOOP_TIME (chunks loop frequency in seconds): what does "the chunks loop" mean?
From: jose m. <let...@us...> - 2011-01-18 21:27:35
On Tue, 18-01-2011 at 21:52 +0100, jose maria wrote:
> * in upgrade to 1.2.19

* Pardon, 1.6.19
From: Thomas S H. <tha...@gm...> - 2011-01-18 21:02:05
Don't worry about the English jose, this data is great; you have a few more kernel tuning changes that I had not thought of. Just like my environment, hardly any load on the CPU. Your swappiness is lower than I have made mine (10), but 2 may yet be a good way to go.

Thanks jose! This is good stuff!

-Tom

On Tue, Jan 18, 2011 at 1:52 PM, jose maria <let...@us...> wrote:
> On Tue, 18-01-2011 at 11:58 -0700, Thomas S Hatch wrote:
> > Thanks jose, yes, we are on 10G networks, it sounds like the primary
> > cap to worry about might be the ram usage on the mfsmaster.
>
> [...]
From: jose m. <let...@us...> - 2011-01-18 20:52:48
On Tue, 18-01-2011 at 11:58 -0700, Thomas S Hatch wrote:
> Thanks jose, yes, we are on 10G networks, it sounds like the primary
> cap to worry about might be the ram usage on the mfsmaster.

* Standard kernel openSUSE 11.3, x86_64, 2 processors Intel(R) Xeon(R) CPU E5504 @ 2.00GHz

top - 20:56:26 up 18 days, 10 min, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 141 total, 1 running, 140 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.3%us, 0.3%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.3%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

Mem: 16 GiB total, 22.8% memory used by the mfsmaster process
+-20,000,000 chunks, +-6,000,000 files, small files, goal 3

MTU 9000, NICs 2x1GE, bonding mode 5.

20 chunkservers, 2x1GE, bonding mode 6.

A little sysctl tuning:

# increase TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
# for 10 GigE, use this
# net.core.netdev_max_backlog = 30000
# cubic on my system
net.ipv4.tcp_congestion_control = cubic
# probe swappiness 0
#
vm.swappiness=2
vm.vfs_cache_pressure = 10000

* Another: 83 chunkservers 1x10GE, 2x10GE in the mfsmaster + 1x1GE

openSUSE 11.3 recompiled, NUMA, high memory.

64 GiB RAM, +-40% memory used by the mfsmaster process
+-100,000,000 chunks, +-30,000,000 small files, goal 3

* For test purposes:
http://control.seycob.es:9425
http://control.seycob.es:9426

* in upgrade to 1.2.19
* problems with CPU consumption on mfs-1.6.19: solved, fine in 1.6.20
* problems with files at goal 0, valid copies 1? 4 days, not solved.
* problems with files at goal 1, valid copies 1: applied goal 2 to the whole filesystem and it is not reflected. 4 days, not solved; in the test cluster:

http://control.seycob.es:9426/mfs.cgi

* poor english, sorry
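[A sketch of how the tuning above would be applied, assuming the settings are saved verbatim to a sysctl file; the filename is illustrative:]

    # Load the settings without rebooting; -p reads the given file.
    sysctl -p /etc/sysctl.d/90-mfs-tuning.conf
    # Or set a single value immediately:
    sysctl -w vm.swappiness=2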
From: Thomas S H. <tha...@gm...> - 2011-01-18 19:51:19
Thanks Reinis, that is good data to have. I think I am going to move towards upping our total mfsmaster RAM as a higher priority.

On Tue, Jan 18, 2011 at 12:28 PM, Reinis Rozitis <r...@ro...> wrote:
> > This deployment will also require well over 100 million chunks.
>
> For that number, one thing you should prepare is enough RAM for the master
> server (and, depending on the chunkserver count and file goal, also on the
> storage node servers).
>
> In our MFS setup we also plan on having 100+ million chunks, but now, at 40
> million files/chunks, the master eats about 16G and a chunkserver (one of 6
> in total, with goal 3) about 6G - so we will probably end up with a 40-50G
> requirement on the master (and about 10G+ on the chunkservers) for 100m.
> Since the master is a single point and in no way distributed (except, of
> course, that you can make separate filesystems), it can get tricky (in the
> sense of obtaining the right hardware for the job) at some point, in case
> the file count doubles (200m) for example.
>
> rr
From: Reinis R. <r...@ro...> - 2011-01-18 19:47:07
> This deployment will also require well over 100 million chunks.

For that number, one thing you should prepare is enough RAM for the master server (and, depending on the chunkserver count and file goal, also on the storage node servers).

In our MFS setup we also plan on having 100+ million chunks, but now, at 40 million files/chunks, the master eats about 16G and a chunkserver (one of 6 in total, with goal 3) about 6G - so we will probably end up with a 40-50G requirement on the master (and about 10G+ on the chunkservers) for 100m.

Since the master is a single point and in no way distributed (except, of course, that you can make separate filesystems), it can get tricky (in the sense of obtaining the right hardware for the job) at some point, in case the file count doubles (200m) for example.

rr
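[The extrapolation above can be checked with simple arithmetic; a sketch, assuming roughly linear scaling (an assumption, not something MooseFS guarantees):]

    # 16 GiB of master RAM at 40M chunks works out to ~430 bytes per chunk;
    # at that rate, 100M chunks need ~40 GiB, matching the 40-50G estimate.
    awk 'BEGIN {
        per_chunk = 16 * 2^30 / 40e6;                        # ~429 bytes/chunk
        printf "per chunk:   ~%.0f B\n", per_chunk;
        printf "100M chunks: ~%.1f GiB\n", per_chunk * 100e6 / 2^30;
    }'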
From: jose m. <let...@us...> - 2011-01-18 18:24:35
On Tue, 18-01-2011 at 10:06 -0700, Thomas S Hatch wrote:
> I am architecting a potential 3P MooseFS install.... 3 Peta-bytes...
> and maybe even 4P (assuming we will be moving to 4T disks when they
> come out).
>
> [...]

* Metadata is cached in memory; the network is, IMO, the determining factor. 10GE NICs and stackable switches linked by optical fibre; commodity hardware in the chunkservers and metaloggers; another low-cost 100/1GE network for administration or other jobs; the mfsmaster in bonding mode 5/6 for backup; and in mfsmount, tune the mfscacheto options.
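[The "mfscacheto" options referred to above are presumably mfsmount's cache-timeout mount options from the 1.6 series (mfsattrcacheto, mfsentrycacheto); a hedged sketch, with purely illustrative values in seconds:]

    # Mount with explicit attribute and entry cache timeouts.
    mfsmount /mnt/mfs -H mfsmaster \
        -o mfsattrcacheto=1.0 -o mfsentrycacheto=0.5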
From: Thomas S H. <tha...@gm...> - 2011-01-18 17:56:15
FusionIO in all the chunkservers? That's a little too rich for my blood :) One of the problems we are seeing is that we need the failover to work faster, and the bottleneck for mfsmetarestore looks like it is IO; the mfsmetarestore step in the failover process is what takes up the most time. That's why we want the FusionIO drives.

As for the number of files, we have about 13 million right now, but we have only imported a small percentage of the total number of files we are contracted to get (we are making a music "cloud", and we are getting all the music in the world; a lot is still coming). By the end of the year we should have about 100-150 million files.

Right now the idea is to have two types of storage: the moose for large-scale storage where write speed is not a major issue, and then a "cluster" of SSD PCI cards to handle high-speed storage needs, like databases, and the master and metalogger, to speed up restores and to make sure the changelogs can be written fast enough when activity is so high.

Paint a better picture?

-Tom

2011/1/18 Michal Borychowski <mic...@ge...>
> WOW!!!
>
> And what about FusionIO in all the chunkservers? ;) But as you are talking
> about FusionIO in the mfsmaster, you are probably also going to use them in
> the metaloggers? [...]
From: Michal B. <mic...@ge...> - 2011-01-18 17:39:17
WOW!!!

And what about FusionIO in all the chunkservers? ;) But as you are talking about FusionIO in the mfsmaster, you are probably also going to use them in the metaloggers? I just wonder whether it is necessary... Metadata is cached in RAM in the mfsmaster, but there are changelogs... If the system is busy (and for sure it will be) there will be lots of operations logged to the files and transmitted to the metaloggers...

Please tell us: how many files do you plan to store?

Regards
Michal

From: Thomas S Hatch [mailto:tha...@gm...]
Sent: Tuesday, January 18, 2011 6:06 PM
To: moosefs-users
Subject: [Moosefs-users] How big can MooseFS really go?

I am architecting a potential 3P MooseFS install.... 3 Peta-bytes... and maybe even 4P (assuming we will be moving to 4T disks when they come out). [...]
From: Thomas S H. <tha...@gm...> - 2011-01-18 17:06:36
I am architecting a potential 3P MooseFS install.... 3 Peta-bytes... and maybe even 4P (assuming we will be moving to 4T disks when they come out).

My question is this: can MooseFS handle that kind of load? Are there any additional considerations that I will need to take as I approach such a high volume?

As I see it, I will have over 100 chunkservers attached to one master. I am going to change out the mfsmaster metadata store with FusionIO (http://www.fusionio.com/) drives to maintain the disk speed that metadata operations will need.

This deployment will also require well over 100 million chunks.

So my question again is: what, if any, special considerations should I take as I roll this out?

-Thomas S Hatch
From: Michal B. <mic...@ge...> - 2011-01-18 10:57:57
Hi!

Probably this reply will be somewhat helpful to you:
https://sourceforge.net/mailarchive/message.php?msg_id=26792119

Regards
Michal

From: Piotr Skurczak [mailto:pio...@gm...]
Sent: Tuesday, January 18, 2011 11:25 AM
To: moosefs-users
Subject: [Moosefs-users] Client wait time when uploading files

Hello guys,

I know that MooseFS is rather more dedicated to LAN and extra-high-speed solutions, although I use it over the internet. [...]
From: Piotr S. <pio...@gm...> - 2011-01-18 10:25:12
Hello guys,

I know that MooseFS is rather more dedicated to LAN and extra-high-speed solutions, although I use it over the internet. We store files below 5 megs there, like office documents and other potentially useful stuff.

Right now I'm experiencing client problems when uploading data. Say I use WinSCP to copy to the storage. Once a file is uploaded, it is distributed onto other nodes. My question is why the client waits until all files are redistributed (it actually looks like the upload hangs at 100% for some time, and when the file is on the other nodes it releases 'the lock'). I figured out that once the file is uploaded and I stop the wait time (simply by killing the WinSCP process), the file is there anyway. So why wait?

Peter
From: randall <ra...@so...> - 2011-01-18 09:29:00
I built MooseFS for the first time last week, 1.6.19 on Debian squeeze, following
http://www.moosefs.org/tl_files/manpageszip/moosefs-step-by-step-tutorial-v.1.1.pdf
and recall that mfscgi would not build. It took me some time fiddling around to get it to build (not 100% sure what I did anymore), but retracing my steps I tend to believe it was an unmet dependency on python; I could not find anything documented on this.

Not sure if it is an identical problem or whether something changed in MooseFS (I'm not that familiar with Moose), but you might want to check it if all else fails.

Randall

P.S. Is this list "top" or "bottom" post?

On 01/17/2011 03:40 PM, Giovanni Toraldo wrote:
> Hi,
>
> I've just built .deb packages using the tarball, but it seems that
> mfscgiserv isn't included in any of the destination packages.
>
> Am I missing something or has it not yet been packaged?
>
> Thanks.
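[A hedged sketch of the check randall suggests, assuming Debian squeeze era package names; the CGI components are Python scripts, so the interpreter needs to be present before the build:]

    # Install the interpreter first, then reconfigure and rebuild.
    apt-get install python
    ./configure --prefix=/usr && make
    # mfscgiserv and mfs.cgi should now appear in the build tree.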
From: Michal B. <mic...@ge...> - 2011-01-18 07:06:54
Please check whether "mfs.cgi" has the same rights as before?

Regards
-Michal

From: Scoleri, Steven [mailto:Sco...@gs...]
Sent: Monday, January 17, 2011 6:47 PM
To: moo...@li...
Subject: [Moosefs-users] mfscgiserv 403 error?

I upgraded everything on my master to 1.6.20 from 1.6.17 and it all seems to work fine, except my CGI server is now giving an access denied. Anyone have any ideas?
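[A sketch of that check; the path below is a placeholder, since the actual location of mfs.cgi depends on the configure prefix used at build time:]

    # A 403 from mfscgiserv is consistent with the script losing its execute bit
    # during the upgrade; inspect the permissions and restore them if needed.
    ls -l /usr/share/mfscgi/mfs.cgi        # placeholder path
    chmod 0755 /usr/share/mfscgi/mfs.cgi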
From: Scoleri, S. <Sco...@gs...> - 2011-01-17 17:47:35
I upgraded everything on my master to 1.6.20 from 1.6.17 and it all seems to work fine, except my CGI server is now giving an access denied. Anyone have any ideas?
From: Thomas S H. <tha...@gm...> - 2011-01-17 16:24:49
Heh, it is always the middle of the night when these come out for me :)

Arch Linux packages are ready: https://aur.archlinux.org/packages.php?ID=23742

Good job MooseFS team! This release should prevent fools like me from setting the config values too low :)

-Thomas S Hatch

2011/1/17 Stas Oskin <sta...@gm...>
> Good news.
>
> Any eta for RPMs for this? ;)
>
> [...]
From: Giovanni T. <gt...@li...> - 2011-01-17 14:57:33
Hi,

I've just built .deb packages using the tarball, but it seems that mfscgiserv isn't included in any of the destination packages.

Am I missing something or has it not yet been packaged?

Thanks.

--
Giovanni Toraldo
http://www.libersoft.it/
From: Stas O. <sta...@gm...> - 2011-01-17 10:00:09
Good news.

Any eta for RPMs for this? ;)

---------- Forwarded message ----------
From: MooseFS <co...@mo...>
Date: Mon, Jan 17, 2011 at 11:27 AM
Subject: Latest stable release of MooseFS 1.6.20
To: co...@mo...

Hi!

We just released a new stable version of MooseFS introducing improved chunkserver packet registration (one big packet is divided into several smaller ones) - helpful in big installations. We also set the minimum socket timeout to ten seconds so that the system has time to register and the master doesn't hang up.

You can download the latest version from the http://www.moosefs.org/download.html webpage.

More information about this release is available here:
http://moosefs.org/news-reader/items/moose-file-system-v-1620-released.html

VERY IMPORTANT: Please first update the master server to 1.6.20 and only after that update the chunkservers.

If you need any further assistance please let us know.

Kind regards
Michal Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01
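[A sketch of the upgrade order the announcement stresses, using the daemon control commands that ship with 1.6.x; the actual package installation step is left abstract because it depends on the distribution:]

    # 1) Upgrade and restart the master first...
    mfsmaster stop
    # ...install the 1.6.20 master package here...
    mfsmaster start
    # 2) ...then each chunkserver, one at a time.
    mfschunkserver stop
    # ...install the 1.6.20 chunkserver package here...
    mfschunkserver start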
From: 颜. <rw...@12...> - 2011-01-15 07:21:41
You are right, I agree with you. MFS can provide some basic functions, but not all! Thank you very much! I expect to use MFS v1.7 soon; in any case, MFS is a successful DFS. What MFS needs is more industry applications.

2011-01-15

From: Thomas S Hatch
Sent: 2011-01-15 15:02:06
To: 颜秉珩
Cc: Michal Borychowski; moosefs-users
Subject: Re: Re: [Moosefs-users] How many mfs clients can be supported by the same master at the same time!!

Heh, I was right, something outside the scope of what I was considering. This gets tricky: what you are suggesting is to layer storage paradigms, to create, on top of MooseFS, an environment which MooseFS cannot serve. But on the other hand, finding a storage platform that gives you all of this at the platform level is hard.... maybe impossible. Also, this approach creates tiers of single-redundancy services; it is very vulnerable.

So to sum up, what you want does not exist; this I think you already know - the problem is how to create it. I am afraid that I don't know how to create what you need, and while having a massive single block device on top of MooseFS could be used in a cloud environment, it would also not be an optimal use of MooseFS.

But I do agree with one thing: quotas will be great.

-Tom

2011/1/14 颜秉珩 <rw...@12...>
> Because there is no quota function in MFS, how can we provide a multitenancy
> function based on MFS?

The last thing I want to do is start a flame war, but I don't understand how you are setting up your cloud infrastructure such that a 2T VM image is a good idea. My initial impression is that there is something about the deployment that would justify that kind of VM image.

If you need to access large amounts of data storage from a VM, then the VM should mount the MooseFS share; maintaining a virtual machine disk image of that size is bad for performance (every write translates into large-scale chunk changes).

If you want to use a distributed file system for a cloud infrastructure (and MooseFS is an excellent choice), then I am going to recommend that you limit the writes to the virtual machine image and offload as much as you can to mounts of the master MooseFS filesystem. In my deployments I have all storage-class file access on MooseFS mounts and it has greatly improved the performance of the filesystem and the virtual machines.

-Tom Hatch

On Fri, Jan 14, 2011 at 10:19 PM, 颜秉珩 <rw...@12...> wrote:
> We use MFS to create virtual block devices. A block device larger than 2T is
> very popular nowadays! So I think the 2T limitation impacts the MFS
> application in this area (cloud storage environment).

From: Michal Borychowski
Sent: 2010-12-30 15:25:39
To: 'yanbh'
Cc: 'moosefs-users'
Subject: Re: [Moosefs-users] How many mfs clients can be supported by the same master at the same time!!

Hi!

From: yanbh [mailto:ya...@in...]
Sent: Thursday, December 30, 2010 7:49 AM
To: moosefs-users
Subject: [Moosefs-users] How many mfs clients can be supported by the same master at the same time!!

Dear all! I have some questions about MFS, as follows:

1. How many mfsclients can be supported by the same master server at the same time? What about 100 clients mounting the same mfs master at the same time? What about the performance?

[MB] 100 clients would not be a problem. Performance should also not be affected (metadata in the mfs master is kept in RAM for speed). Please have a look here:
http://80.48.16.122/mfs.cgi?masterport=9421&mastername=bellona_main&sections=MS
for our installation.

2. We know that MFS can only support files whose size is less than 2T; this is a bad limitation. When will it be removed? Any plan?

[MB] Is it really a big problem for you? Can't you divide the files to be no bigger than 2TB? For the moment, removing this limitation doesn't have a big priority.

Whatever - thanks to the developers for giving us an excellent DFS!!

[MB] :)

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

Best Regards!
2010-12-30
yanbh
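[A sketch of the pattern Tom recommends in this thread: keep the VM image small and mount MooseFS inside the guest for bulk data. The master hostname and mount point are placeholders:]

    # Inside each guest: mount the shared MooseFS tree for bulk storage
    # instead of growing the VM's own disk image.
    mkdir -p /srv/data
    mfsmount /srv/data -H mfsmaster.example.com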
any plan? [MB] Is it really a big problem for you? Can’t you divide the files to be not bigger than 2TB? For the moment removing this limiation doesn’t have big priority. whatever, Thx for the developers giving us a execlent DFS!! [MB] :) If you need any further assistance please let us know. Kind regards Micha?Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Woska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 Best Regards! 2010-12-30 yanbh ------------------------------------------------------------------------------ Protect Your Site and Customers from Malware Attacks Learn about various malware tactics and how to avoid them. Understand malware threats, the impact they can have on your business, and how you can protect your company and customers by using code signing. http://p.sf.net/sfu/oracle-sfdevnl _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |