From: Steve <st...@bo...> - 2011-04-30 14:57:37

1. Yes.
2. Don't mess about with it. No need.

-------Original Message-------
From: Fyodor Ustinov
Date: 04/30/11 14:47:37
To: moo...@li...
Subject: [Moosefs-users] missed chunks

Hi.

1. I've marked the hdd on the chunk server as offline and restarted the chunkserver.
2. Stopped the chunkserver.
3. Worked with the cluster (created files, deleted files, and so on). Deleted all files; as a result, 0 chunks on the cluster.
4. Unmarked the hdd and started the chunkserver.

Now I see in mfs.cgi 5 chunks with "valid copies == 1" and "goal == 0" from the re-added chunkserver. 10 hours have passed.

Two questions:
1. Should these chunks be purged automatically (after a while)?
2. How can I delete these chunks by hand?

WBR,
Fyodor.

From: Fyodor U. <uf...@uf...> - 2011-04-30 13:46:45

Hi.

1. I've marked the hdd on the chunk server as offline and restarted the chunkserver.
2. Stopped the chunkserver.
3. Worked with the cluster (created files, deleted files, and so on). Deleted all files; as a result, 0 chunks on the cluster.
4. Unmarked the hdd and started the chunkserver.

Now I see in mfs.cgi 5 chunks with "valid copies == 1" and "goal == 0" from the re-added chunkserver. 10 hours have passed.

Two questions:
1. Should these chunks be purged automatically (after a while)?
2. How can I delete these chunks by hand?

WBR,
Fyodor.

From: Boyko Y. <b.y...@ex...> - 2011-04-28 15:55:27

Hi!

As far as I'm aware, no binaries are available. I compiled from source and it works fine. I guess you'll have to install Xcode and compile it yourself.

Regards,
Boyko

On Apr 28, 2011, at 6:51 PM, Razvan Dumitrescu wrote:

> Hello again guys!
>
> I've seen in the reference guide that you have tried the MooseFS client on
> Mac OS X 10.5 and 10.6. Are there any binaries available for MooseFS for
> these two versions of Mac? The path I tried to go down today involves
> compiling the MacFUSE core and then MooseFS, and I stopped since I didn't
> have the Xcode components installed.
> Is there anyone that went down this path already and has the MooseFS client
> for Mac OS X 10.5 as a binary package?
>
> Razvan

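A minimal sketch of the compile-from-source route Boyko describes, run inside an unpacked MooseFS source tree on a Mac with MacFUSE and Xcode already installed. The --disable-* switches are an assumption about the 1.6.x configure script (they simply skip the server components); check ./configure --help for your release.

```sh
# Client-only build sketch; assumes MacFUSE and Xcode are installed.
./configure --disable-mfsmaster --disable-mfschunkserver   # assumed flags: build mfsmount only
make
sudo make install

# Mount against the master (mount point and host name are illustrative):
sudo mfsmount /mnt/mfs -H mfsmaster
```
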
From: Razvan D. <ra...@re...> - 2011-04-28 15:51:54

Hello again guys!

I've seen in the reference guide that you have tried the MooseFS client on Mac OS X 10.5 and 10.6. Are there any binaries available for MooseFS for these two versions of Mac? The path I tried to go down today involves compiling the MacFUSE core and then MooseFS, and I stopped since I didn't have the Xcode components installed.

Is there anyone that went down this path already and has the MooseFS client for Mac OS X 10.5 as a binary package?

Razvan

From: Samuel H. O. N. <sam...@ol...> - 2011-04-28 08:46:35

Hi,

Any idea?

Thanks.

Regards,
Samuel

On 27/04/2011 15:44, Samuel Hassine, Olympe Network wrote:

> Hi all,
>
> I think we found a bug in MooseFS in the POSIX permissions process when
> trying to write/access with a user who is a member of multiple groups.
>
> Here is the problem:
>
> On the disk (without MooseFS):
>
> root@as-001:/home# ls -l
> total 0
> drwxrwsr-x 5 perso-10431 perso-10431 4096 Apr 27 15:33 perso-10431
> root@as-001:/home# su sam
> sam@as-001:/home$ id
> uid=11458(sam) gid=11458(Samuel Hassine) groups=11458(Samuel Hassine),10431(perso-10431),10433(perso-10433)
> sam@as-001:/home$ touch perso-10431/test
>
> This works well.
>
> On the MooseFS partition:
>
> root@as-001:/dns/com/anotherservice/dev/Apps$ ls -l
> total 0
> drwxrwsr-x 5 perso-10431 perso-10431 0 Apr 27 15:22 perso-10431
> sam@as-001:/dns/com/anotherservice/dev/Apps$ id
> uid=11458(sam) gid=11458(Samuel Hassine) groups=11458(Samuel Hassine),10431(perso-10431),10433(perso-10433)
> sam@as-001:/dns/com/anotherservice/dev/Apps$ touch perso-10431/test
> touch: cannot touch `perso-10431/test': Permission denied
>
> This does not work (permission denied).
>
> Is this an already known bug? If yes, can I fix it easily?
>
> Thanks for your help.
>
> Best regards.
> --
> Samuel HASSINE
> Président
> Olympe Network
> 31 avenue Sainte Victoire, 13100 Aix-en-Pce
> Tel. : +33(0)6.26.81.01.87
> Site : www.olympe-network.com

From: Bán M. <ba...@vo...> - 2011-04-27 21:08:23

Hi,

I don't know if it helps or not, but I used the master server's external IP address, i.e. the address on the same network as the chunkservers, e.g.:

127.0.0.1    localhost sokar
192.168.1.2  mfsmaster

Miklos

On Wed, 27 Apr 2011 21:56:02 +0200 Papp Tamas <to...@ma...> wrote:

> hi!
>
> I'm quite new to mfs, sorry if I'm doing something wrong.
>
> I've installed a small "cluster" on an Ubuntu 10.04 LTS from the ppa.
>
> Everything seemed fine: the mount was OK, I could read and write it.
>
> Unfortunately I had never rebooted the server until I upgraded it to 11.04,
> so the root of the problem could be the reboot (restart of services) or the
> upgrade, I don't know :)
>
> So, I upgraded the server, rebooted it, and am now trying to start the mfs
> services:
>
> mfsmaster starts fine, but the chunkserver:
>
> $ /etc/init.d/mfs-chunkserver start
> Starting mfs-chunkserver:
> working directory: /var/lib/mfs
> lockfile created and locked
> initializing mfschunkserver modules ...
> hdd space manager: scanning folder /data/ ...
> hdd space manager: scanning... 34%
> [.....it's counting percentages here for a long time.......]
> hdd space manager: scanning complete
> hdd space manager: /data/: 5122835 chunks found
> hdd space manager: scanning complete
> main server module: listen on *:9422
> master connection module: localhost (127.0.0.1) can't be used for
> connecting with master (use ip address of network controller)
> init: master connection module failed !!!
> error occured during initialization - exiting
>
> $ cat /etc/hosts
> 127.0.0.1 localhost mfsmaster
>
> Config files are the defaults.
>
> While the chunkserver is starting, these lines appear in syslog:
>
> Apr 27 21:42:34 backup1 mfsmaster[7935]: unavailable chunks: 474000 ...
> Apr 27 21:42:35 backup1 mfsmaster[7935]: unavailable files: 463000 ...
> Apr 27 21:42:36 backup1 mfsmaster[7935]: unavailable chunks: 475000 ...
> Apr 27 21:42:37 backup1 mfsmaster[7935]: unavailable files: 464000 ...
> Apr 27 21:42:38 backup1 mfsmaster[7935]: unavailable chunks: 476000 ...
> Apr 27 21:42:39 backup1 mfsmaster[7935]: unavailable files: 465000 ...
> Apr 27 21:42:39 backup1 mfsmaster[7935]: unavailable chunks: 477000 ...
> Apr 27 21:42:40 backup1 mfsmaster[7935]: unavailable files: 466000 ...
> Apr 27 21:42:41 backup1 mfsmaster[7935]: unavailable chunks: 478000 ...
> [....and so on...]
>
> I don't understand the error message. What's the problem with the host name,
> and where and how should I use an IP address?
>
> Thank you,
>
> tamas

From: Papp T. <to...@ma...> - 2011-04-27 21:01:46

On 04/27/2011 09:56 PM, Papp Tamas wrote:
> I don't understand the error message. What's the problem with the host name,
> and where and how should I use an IP address?

Well, I moved the mfsmaster entry in the hosts file from 127.0.0.1 to the external IP address, and now it works:

$ /etc/init.d/mfs-chunkserver start
Starting mfs-chunkserver:
working directory: /var/lib/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: scanning folder /data/ ...
hdd space manager: scanning complete
hdd space manager: /data/: 5122835 chunks found
hdd space manager: scanning complete
main server module: listen on *:9422
stats file has been loaded
mfschunkserver daemon initialized properly
mfschunkserver.

But, if that's even possible, it now starts far more slowly while scanning. It's really, really slow. Why? Is there any way to speed it up?

Thank you,

tamas

From: Papp T. <to...@ma...> - 2011-04-27 19:56:13

hi!

I'm quite new to mfs, sorry if I'm doing something wrong.

I've installed a small "cluster" on an Ubuntu 10.04 LTS from the ppa.

Everything seemed fine: the mount was OK, I could read and write it.

Unfortunately I had never rebooted the server until I upgraded it to 11.04, so the root of the problem could be the reboot (restart of services) or the upgrade, I don't know :)

So, I upgraded the server, rebooted it, and am now trying to start the mfs services:

mfsmaster starts fine, but the chunkserver:

$ /etc/init.d/mfs-chunkserver start
Starting mfs-chunkserver:
working directory: /var/lib/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: scanning folder /data/ ...
hdd space manager: scanning... 34%
[.....it's counting percentages here for a long time.......]
hdd space manager: scanning complete
hdd space manager: /data/: 5122835 chunks found
hdd space manager: scanning complete
main server module: listen on *:9422
master connection module: localhost (127.0.0.1) can't be used for connecting with master (use ip address of network controller)
init: master connection module failed !!!
error occured during initialization - exiting

$ cat /etc/hosts
127.0.0.1 localhost mfsmaster

Config files are the defaults.

While the chunkserver is starting, these lines appear in syslog:

Apr 27 21:42:34 backup1 mfsmaster[7935]: unavailable chunks: 474000 ...
Apr 27 21:42:35 backup1 mfsmaster[7935]: unavailable files: 463000 ...
Apr 27 21:42:36 backup1 mfsmaster[7935]: unavailable chunks: 475000 ...
Apr 27 21:42:37 backup1 mfsmaster[7935]: unavailable files: 464000 ...
Apr 27 21:42:38 backup1 mfsmaster[7935]: unavailable chunks: 476000 ...
Apr 27 21:42:39 backup1 mfsmaster[7935]: unavailable files: 465000 ...
Apr 27 21:42:39 backup1 mfsmaster[7935]: unavailable chunks: 477000 ...
Apr 27 21:42:40 backup1 mfsmaster[7935]: unavailable files: 466000 ...
Apr 27 21:42:41 backup1 mfsmaster[7935]: unavailable chunks: 478000 ...
[....and so on...]

I don't understand the error message. What's the problem with the host name, and where and how should I use an IP address?

Thank you,

tamas

From: Samuel H. O. N. <sam...@ol...> - 2011-04-27 14:00:01

Hi all,

I think we found a bug in MooseFS in the POSIX permissions process when trying to write/access with a user who is a member of multiple groups.

Here is the problem:

On the disk (without MooseFS):

root@as-001:/home# ls -l
total 0
drwxrwsr-x 5 perso-10431 perso-10431 4096 Apr 27 15:33 perso-10431
root@as-001:/home# su sam
sam@as-001:/home$ id
uid=11458(sam) gid=11458(Samuel Hassine) groups=11458(Samuel Hassine),10431(perso-10431),10433(perso-10433)
sam@as-001:/home$ touch perso-10431/test

This works well.

On the MooseFS partition:

root@as-001:/dns/com/anotherservice/dev/Apps$ ls -l
total 0
drwxrwsr-x 5 perso-10431 perso-10431 0 Apr 27 15:22 perso-10431
sam@as-001:/dns/com/anotherservice/dev/Apps$ id
uid=11458(sam) gid=11458(Samuel Hassine) groups=11458(Samuel Hassine),10431(perso-10431),10433(perso-10433)
sam@as-001:/dns/com/anotherservice/dev/Apps$ touch perso-10431/test
touch: cannot touch `perso-10431/test': Permission denied

This does not work (permission denied).

Is this an already known bug? If yes, can I fix it easily?

Thanks for your help.

Best regards.
--
Samuel HASSINE
Président
Olympe Network
31 avenue Sainte Victoire, 13100 Aix-en-Pce
Tel. : +33(0)6.26.81.01.87
Site : www.olympe-network.com

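A minimal way to reproduce the setup Samuel describes on any mount point, assuming the problem is specific to supplementary (non-primary) group membership on a group-writable, setgid directory. The user, group, and path names below are illustrative, not the ones from his system.

```sh
# Run as root; 'sam' and 'perso-10431' are illustrative, /mnt/mfs is the MooseFS mount.
groupadd perso-10431
usermod -aG perso-10431 sam              # perso-10431 becomes a supplementary group for sam
mkdir /mnt/mfs/perso-10431
chgrp perso-10431 /mnt/mfs/perso-10431
chmod 2775 /mnt/mfs/perso-10431          # drwxrwsr-x, as in the listing above
su - sam -c 'id; touch /mnt/mfs/perso-10431/test'
# On a local filesystem the touch succeeds; on the affected MooseFS mount it
# reportedly fails with "Permission denied".
```
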
From: Michal B. <mic...@ge...> - 2011-04-27 08:31:55

Hi Andrey!

Unfortunately it seems that fuse4bsd is no longer developed, and that may be the cause of your problems. Have you used this patch: http://mercurial.creo.hu/repos/fuse4bsd-hg/index.cgi/rev/6e862286739e ?

Kind regards
Michal Borychowski
MooseFS Support Manager

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Andrey Belashkov [mailto:ma...@ca...]
Sent: Tuesday, April 26, 2011 10:50 PM
To: moo...@li...
Subject: [Moosefs-users] dbench on moosefs on freebsd

Hello

We use moosefs 1.6.20 from ports on FreeBSD 8.2-RELEASE amd64 on ufs. When we try 'dbench 500' on a moosefs share, after some time we get this in dmesg:

fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8
pid 3519 (mfsmount), uid 0: exited on signal 6 (core dumped)

and in the shell running dbench:

 500  762  18.22 MB/sec  execute 373 sec  latency 12651.732 ms
 500  763  18.17 MB/sec  execute 374 sec  latency 13687.228 ms
 500  764  18.12 MB/sec  execute 375 sec  latency 14727.718 ms
[765] open ./clients/client309/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10032 (Device not configured)
[768] write failed on handle 10032 (Socket is not connected)
(766) ERROR: handle 10032 was not found
[768] write failed on handle 10032 (Device not configured)
[738] open ./clients/client272/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10026 (Socket is not connected)
(739) ERROR: handle 10026 was not found
[773] unlink ./clients/client204/~dmtmp/PWRPNT/PPTC112.TMP failed (Socket is not connected) - expected NT_STATUS_OK
ERROR: child 204 failed at line 773
[768] write failed on handle 10032 (Socket is not connected)
Child failed with status 1
[718] open ./clients/client399/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10024 (Socket is not connected)
[710] rename ./clients/client136/~dmtmp/PWRPNT/NEWPCB.PPT ./clients/client136/~dmtmp/PWRPNT/PPTB1E4.TMP failed (Socket is not connected) - expected NT_STATUS_OK
ERROR: child 136 failed at line 710
(719) ERROR: handle 10024 was not found
[710] rename ./clients/client455/~dmtmp/PWRPNT/NEWPCB.PPT ./clients/client455/~dmtmp/PWRPNT/PPTB1E4.TMP failed (Socket is not connected) - expected NT_STATUS_OK
ERROR: child 455 failed at line 710
[808] open ./clients/client482/~dmtmp/PWRPNT/PPTC112.TMP failed for handle 10042 (Socket is not connected)
[811] unlink ./clients/client161/~dmtmp/PWRPNT/PPTC112.TMP failed (Socket is not connected) - expected NT_STATUS_OK
ERROR: child 161 failed at line 811
(809) ERROR: handle 10042 was not found

then trying `ls /mnt/mfs`:

# ls /mnt/mfs
ls: .: Socket is not connected

Is this normal behaviour, or can it be tuned?

Thanks.

From: Andrey B. <ma...@ca...> - 2011-04-26 21:15:54

Hello

We use moosefs 1.6.20 from ports on FreeBSD 8.2-RELEASE amd64 on ufs. When we try 'dbench 500' on a moosefs share, after some time we get this in dmesg:

fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8
pid 3519 (mfsmount), uid 0: exited on signal 6 (core dumped)

and in the shell running dbench:

 500  762  18.22 MB/sec  execute 373 sec  latency 12651.732 ms
 500  763  18.17 MB/sec  execute 374 sec  latency 13687.228 ms
 500  764  18.12 MB/sec  execute 375 sec  latency 14727.718 ms
[765] open ./clients/client309/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10032 (Device not configured)
[768] write failed on handle 10032 (Socket is not connected)
(766) ERROR: handle 10032 was not found
[768] write failed on handle 10032 (Device not configured)
[738] open ./clients/client272/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10026 (Socket is not connected)
(739) ERROR: handle 10026 was not found
[773] unlink ./clients/client204/~dmtmp/PWRPNT/PPTC112.TMP failed (Socket is not connected) - expected NT_STATUS_OK
ERROR: child 204 failed at line 773
[768] write failed on handle 10032 (Socket is not connected)
Child failed with status 1
[718] open ./clients/client399/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10024 (Socket is not connected)
[710] rename ./clients/client136/~dmtmp/PWRPNT/NEWPCB.PPT ./clients/client136/~dmtmp/PWRPNT/PPTB1E4.TMP failed (Socket is not connected) - expected NT_STATUS_OK
ERROR: child 136 failed at line 710
(719) ERROR: handle 10024 was not found
[710] rename ./clients/client455/~dmtmp/PWRPNT/NEWPCB.PPT ./clients/client455/~dmtmp/PWRPNT/PPTB1E4.TMP failed (Socket is not connected) - expected NT_STATUS_OK
ERROR: child 455 failed at line 710
[808] open ./clients/client482/~dmtmp/PWRPNT/PPTC112.TMP failed for handle 10042 (Socket is not connected)
[811] unlink ./clients/client161/~dmtmp/PWRPNT/PPTC112.TMP failed (Socket is not connected) - expected NT_STATUS_OK
ERROR: child 161 failed at line 811
(809) ERROR: handle 10042 was not found

then trying `ls /mnt/mfs`:

# ls /mnt/mfs
ls: .: Socket is not connected

Is this normal behaviour, or can it be tuned?

Thanks.

From: Michal B. <mic...@ge...> - 2011-04-26 08:39:59

Hi!

Unfortunately, for the moment we do not plan to develop MooseFS in a way that splits metadata between RAM and HDD. Though there already are computers supporting 96 GB of RAM. With that size of metadata you would also need fast SSD disk(s) for the dumps made every hour from RAM.

And maybe you are simply able to reduce the number of files in the system? By creating some "containers", just like .tar archives, and keeping files inside them?

Best regards
Michal

From: ha...@si... [mailto:ha...@si...]
Sent: Friday, April 22, 2011 11:15 AM
To: moo...@li...
Subject: [Moosefs-users] hi, i ask one question now, look forward to your reply!

Hi,

Please continue to pay attention to us and help me. First, thanks for your early reply. Now I ask some questions, as follows:

As we all know, our master server can support 64 GB of RAM at most, and you say, "For 300 million files we'll need: 300 * 300 MB = 87.8 GB of RAM at least". I want to ask:

Problem one: If we change the management of the metadata namespace, for example from a hash to a B-tree, and read data from the HDD, do you think that is feasible and reasonable? Do you think we could reduce the amount of RAM?

Problem two: To support 500 TB of storage and 300 million files, handle 500G operations a day, and reduce the amount of RAM, do you have any good suggestions?

That's all, thanks a lot!

Sincerely looking forward to your reply!

Best regards!
Hanyw

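A minimal sketch of the "container" idea Michal mentions: pack trees of many small files into tar archives stored on MooseFS, so the master keeps one metadata entry per archive instead of one per file. Paths are illustrative, and whether this is workable depends entirely on how the files are accessed.

```sh
# /mnt/mfs is the MooseFS mount point; paths are illustrative.
tar -cf /mnt/mfs/containers/smallfiles-2011-04.tar -C /data/smallfiles .

# Read one member back without unpacking the whole archive:
tar -xf /mnt/mfs/containers/smallfiles-2011-04.tar ./some/file -O > /tmp/some_file
```
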
From: b. <ai...@qq...> - 2011-04-25 14:09:03

Dear guys:

As we know, we can use the "mfsfileinfo" command to find out which chunkserver a given file resides on. But how can I find out which files reside on a given chunkserver without scanning the whole filesystem?

Best Regards!

bob

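For reference, the brute-force approach bob wants to avoid looks roughly like the sketch below: walk the mount and filter mfsfileinfo output for the chunkserver's address. The mount point and IP are illustrative, and the exact output format of mfsfileinfo can differ between MooseFS versions, so treat the awk pattern as an assumption.

```sh
# List files that have at least one chunk copy on chunkserver 192.168.1.10 (illustrative IP).
find /mnt/mfs -type f -print0 |
  xargs -0 mfsfileinfo 2>/dev/null |
  awk '/^\//{file=$0} /192\.168\.1\.10:/{print file}' |
  sort -u
```
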
From: b. <ai...@qq...> - 2011-04-24 15:41:15

Hi, Michal:

1. Can I use several different filesystems (XFS/EXT3/ReiserFS) in a single MFS system?
2. How deeply does the chunkservers' filesystem affect performance?
3. Which filesystem is most suitable for MFS? Someone told me that XFS is better; is that true?
4. When the master fails over to another "new IP", the client still tries the old one even when I modify the /etc/hosts file, while the metalogger and chunkservers automatically switch to the new IP. Why can't the client just do that?

Best Regards

Bob

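On question 4, a common workaround (an assumption on my part, not an answer from this thread) is simply to remount the client so that mfsmount resolves the master name again after the failover. The -H option is the standard way to point mfsmount at the master; the mount point is illustrative.

```sh
# After /etc/hosts (or DNS) has been updated to point mfsmaster at the new IP:
umount /mnt/mfs                  # or: fusermount -u /mnt/mfs
mfsmount /mnt/mfs -H mfsmaster   # the master name is resolved afresh at mount time
```
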
From: Alexander A. <akh...@ri...> - 2011-04-22 12:37:12

Thanks for the link! It is interesting! But I found this on that page:

"Failover must be kicked whenever a failure in the primary node is detected. Currently, we don't have automatic failover mechanism ..."

So, for now, in my opinion, this is not an option :--(

wbr
Alexander

======================================================

Hello,

2011/4/20 Alexander Akhobadze <akh...@ri...>:
> I had no performance issues because of virtualization.
> But ... ESX is for money and MooseFS is for free ;--)

... (http://www.osrg.net/kemari/, http://wiki.qemu.org/Features/FaultTolerance), and get the same kind of fault tolerance infrastructure... for free.

If someone has ever tried, I'd be really happy to read about his experience...

Fabien

From: <ha...@si...> - 2011-04-22 09:15:27

Hi,

Please continue to pay attention to us and help me. First, thanks for your early reply. Now I ask some questions, as follows:

As we all know, our master server can support 64 GB of RAM at most, and you say, "For 300 million files we'll need: 300 * 300 MB = 87.8 GB of RAM at least". I want to ask:

Problem one: If we change the management of the metadata namespace, for example from a hash to a B-tree, and read data from the HDD, do you think that is feasible and reasonable? Do you think we could reduce the amount of RAM?

Problem two: To support 500 TB of storage and 300 million files, handle 500G operations a day, and reduce the amount of RAM, do you have any good suggestions?

That's all, thanks a lot!

Sincerely looking forward to your reply!

Best regards!
Hanyw

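For context, the RAM figure quoted above follows from a simple per-object estimate: roughly 300 bytes of master RAM per file, the rule of thumb used in this thread (the real per-object cost depends on the MooseFS version and the directory layout). A quick sanity check:

```sh
# 300 million files x ~300 bytes of master RAM per file (the thread's rule of thumb)
echo "$(( 300000000 * 300 / 1024 / 1024 / 1024 )) GiB"   # -> 83 GiB, the same order as the 87.8 GB quoted above
```
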
From: Davies L. <dav...@gm...> - 2011-04-22 02:21:30

Hi, Youngcow:

You could try this (see the routing sketch after this message):

1. Make the MFS accessible from network A.
2. Create routes in network B that send traffic for network A through the network-B NICs.
3. Clients in network B can then see the MFS in network A.

BTW, are you the "youngcow" from SMTH? I also come from there; I'm a roommate of rogerz.

Davies

On 2011-02-25, at 00:13, youngcow wrote:

> Hi
>
> I have built a MooseFS system. Now I have a new requirement.
> I have two networks (network A and network B). Network A and network B
> can't communicate with each other directly. There are some MooseFS clients
> in network A and in network B. So I will install two NICs (connected to
> network A and network B) in each chunk server and in the master server.
> But I don't know whether this solution can work correctly (because I don't
> know whether the master server can return the correct chunk server IP
> address to the client).
>
> Thanks.

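A sketch of step 2 above, assuming network A is 192.168.1.0/24, network B is 10.0.0.0/24, and a dual-homed machine sits in both networks with the network-B address 10.0.0.1; all addresses are illustrative.

```sh
# On a client in network B: send traffic for network A via the dual-homed gateway.
ip route add 192.168.1.0/24 via 10.0.0.1

# On the dual-homed gateway, forwarding between the two networks must be enabled:
sysctl -w net.ipv4.ip_forward=1
```
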
From: Davies L. <dav...@gm...> - 2011-04-22 02:05:51

MFS takes a chunk lock when writing; maybe that's the reason. You could use an NFS server instead of iSCSI, then export it to ESXi4.

On 2011-02-14, at 15:22, youngcow wrote:

> I have tested the iSCSI solution.
> I use iscsitarget-1.4.20.2, installed on RHEL 5.5 x64. I made a file (size 80 GB,
> on MFS) and shared it with 2 ESXi4 nodes, but only one ESXi4 can use it (VMFS);
> the other ESXi4 can see the LUN but can't use it.
>
> Then I made another file (size 4 GB) on a normal filesystem (local ext3) and
> shared it with the same machines, and both ESXi4 nodes can use it.
>
> I can't understand this. Does anyone know the reason?
>
> Thanks.
>
> The config file is:
>
> cat /etc/iet/ietd.conf
> User userid superpassword
> Target iqn.2011-02.cn.thcic.storage:storage.vmiso
>     Lun 0 Path=/data/iscsi/vmiso,Type=fileio
>     Lun 1 Path=/home/data/myiso,Type=fileio
>
> vmiso (Lun 0) is on MFS and myiso (Lun 1) is on ext3.
>
>> Hi,
>>
>> You can try the iSCSI software.
>> dd a big file in the MFS-mounted dir, and share the file as a block device via
>> iSCSI (iscsi-target can do the work).
>> Use the ESXi management client (vSphere? I forget its name) to mount the
>> iscsi disk.

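A minimal sketch of the NFS alternative, assuming a directory on the MooseFS mount is exported from a Linux host and then added as an NFS datastore in ESXi. The export path, network, and options are illustrative, and the fsid option is an assumption based on FUSE filesystems lacking a stable UUID for the kernel NFS server.

```sh
# /etc/exports on the host that has MooseFS mounted at /mnt/mfs (illustrative line):
# /mnt/mfs/vmstore  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=1)

exportfs -ra   # re-read /etc/exports and publish the export
```
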
From: Davies L. <dav...@gm...> - 2011-04-22 00:34:47

MooseFS was designed for large-file storage; if you have so many small files, you should choose another solution, such as beansdb [1] or Riak [2].

[1] http://code.google.com/p/beansdb/
[2] http://wiki.basho.com/

On 2011-04-21, at 20:03, ha...@si... wrote:

> I have some doubts now, please answer me:
>
> Problem one: About MooseFS, I had asked the question: what size does it support
> for a single file? And what about the total number of files? Your answer was:
> "Limit of 2TiB is for a single file. There is no limit for the total size of all
> files in the system."
>
> And today your reply is "if it supports 3000 million files? We would need about
> 3000 * 300 MB = 878 GB of RAM which would be rather impossible", and today's
> answer conflicts with that statement (e.g. "no limit for the total size of all
> files in the system").
> Please tell me: for 300 million files, what amount of RAM do we need in the
> system? As you say, does it need 87 GB of RAM, or is there "no limit"?
>
> That's all, thanks a lot!
>
> Sincerely looking forward to your reply!
>
> Best regards!
>
> Hanyw

From: Fyodor U. <uf...@uf...> - 2011-04-21 23:13:34

Hi!

How can I reduce the delay before the system starts restoring the goal (number of copies) after a chunkserver is destroyed?

WBR,
Fyodor.

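One setting commonly pointed at for this is the replication-delay option in mfsmaster.cfg, which controls how long the master waits after a chunkserver disconnects before it starts re-replicating under-goal chunks. The option names and defaults below are assumptions about the 1.6.x configuration file, not something confirmed in this thread; check mfsmaster.cfg(5) for your version.

```sh
# mfsmaster.cfg excerpt (assumed option names; values are seconds)
REPLICATIONS_DELAY_INIT = 300
REPLICATIONS_DELAY_DISCONNECT = 3600   # lower to start re-replication sooner after a chunkserver is lost
```
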
From: Ricardo J. B. <ric...@da...> - 2011-04-21 20:06:25

On Thu, 21 April 2011, Thomas S Hatch wrote:
> For what it is worth, we also use XFS for all of our chunk servers

Nice to know. Today we installed our 10th chunkserver, but this time with ext4 (the rest use ext3), and it seems good so far :) Will report if

> > Thank you, Davies.
> >
> > I've read the XFS Wikipedia entry, and it is really impressive. I will
> > do my examinations and post the result later.

If reserved space is your concern, for ext3 you can reduce it to 50 blocks with:

# tune2fs -r 50 /dev/sdb1

And for ext4:

# tune4fs -r 50 /dev/sdb1

Of course, replace sdb1 with your real partition name.

HTH,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting

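As an aside to Ricardo's tip: the reserved space is more commonly adjusted as a percentage of the filesystem. The flag below is standard tune2fs; the partition name is again just an example.

```sh
tune2fs -m 0 /dev/sdb1   # set reserved blocks to 0% instead of an absolute block count
```
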
From: Steve <st...@bo...> - 2011-04-21 19:29:07

Errors and problems are also logged by the daemons.

-------Original Message-------
From: Léon Keijser
Date: 21/04/2011 20:17:51
To: moo...@li...
Subject: Re: [Moosefs-users] filesystem check

On Thu, 2011-04-21 at 16:55 +0000, Davies Liu wrote:
> It DOES have, you can see the result on the web interface:
>
> Filesystem check info
> check loop start time: Thu Apr 21 18:20:11 2011
> check loop end time:   Thu Apr 21 22:20:27 2011
> files:                 1382506
> under-goal files:      0
> missing files:         0
> chunks:                2099965
> under-goal chunks:     0
> missing chunks:        0

Thanks. Any more information on how this works? When the check loop occurs, and what exactly it does. That sort of information I couldn't find on the MooseFS homepage.

Kind regards,

Léon

From: Michal B. <mic...@ge...> - 2011-04-21 19:27:33

Hi Giovanni!

Thank you very much for the article and integration with OpenNebula!

Regards
Michal

-----Original Message-----
From: Giovanni Toraldo [mailto:me...@gi...]
Sent: Thursday, April 21, 2011 7:31 PM
To: moosefs-users
Subject: [Moosefs-users] OpenNebula shared storage with MooseFS

Hi,

a small article has been published on the OpenNebula blog about the integration with MooseFS, maybe from now more people will look at MooseFS for virtual machine image storage :)

http://blog.opennebula.org/?p=1512

Bye.
--
Giovanni Toraldo
http://gionn.net/

From: Léon K. <ke...@st...> - 2011-04-21 19:17:16

On Thu, 2011-04-21 at 16:55 +0000, Davies Liu wrote:
> It DOES have, you can see the result on the web interface:
>
> Filesystem check info
> check loop start time: Thu Apr 21 18:20:11 2011
> check loop end time:   Thu Apr 21 22:20:27 2011
> files:                 1382506
> under-goal files:      0
> missing files:         0
> chunks:                2099965
> under-goal chunks:     0
> missing chunks:        0

Thanks. Any more information on how this works? When the check loop occurs, and what exactly it does. That sort of information I couldn't find on the MooseFS homepage.

Kind regards,

Léon

From: Giovanni T. <me...@gi...> - 2011-04-21 17:32:16

Hi,

a small article has been published on the OpenNebula blog about the integration with MooseFS, maybe from now more people will look at MooseFS for virtual machine image storage :)

http://blog.opennebula.org/?p=1512

Bye.
--
Giovanni Toraldo
http://gionn.net/