From: Alexander A. <ba...@ya...> - 2018-08-27 06:36:24
Hi! Dear developers! Could you briefly describe what happens when I make a snapshot? Why I'm asking: on top of MooseFS I use an iSCSI target (SCST) to share space to Windows. The total size of the files in the directory used by SCST is about 10 TB:

# ls -l /moosefs/iscsi
total 12231639044
-rw-r--r-- 1 root root  751619276801 Aug 27 09:33 CSV1
-rw-r--r-- 1 root root  751619276801 Aug 27 09:33 CSV2
-rw-r--r-- 1 root root 1099511627777 Aug 27 09:33 HOME
-rw-r--r-- 1 root root 8796093022209 Aug 27 09:33 PROJECTS
-rw-r--r-- 1 root root    5368709121 Aug 27 09:17 quorum
-rw-r--r-- 1 root root   21474836481 Aug 26 14:44 WOF

When I launch mfsmakesnapshot, the performance Windows gets on the iSCSI drives drops dramatically for 20-30 minutes.

Thanks for the answer!

WBR
Alexander
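For context, a MooseFS snapshot is a metadata-only operation: the target shares the source's chunks, and real copies are made only when a chunk is later modified (copy-on-write), so the master has to duplicate the chunk references for every chunk of the snapshotted files in one go. A minimal invocation (the destination path here is hypothetical):

  # copy-on-write snapshot of one iSCSI backing file
  mfsmakesnapshot /moosefs/iscsi/PROJECTS /moosefs/iscsi/snap/PROJECTS.bak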
From: Marco M. <mar...@gm...> - 2018-08-21 19:17:00
You have double the network IO on the Linux client (because it is both a chunkserver and a client). That is why your Linux client gets about half the bandwidth.

It is easy to test my theory. Boot the FreeBSD machine temporarily with Linux, install the MooseFS client on it; my guess is that you will get about the same speed as the FreeBSD client. The other option for this test is to temporarily shut down the chunkserver on that Linux client (make sure it doesn't start moving chunks around for re-balancing when you do shut it down) and repeat the test; the FreeBSD client and Linux client speeds should be the same in this case.

-- Marco

On 08/21/2018 02:33 PM, Alexander AKHOBADZE wrote:
> OK. I've made the test file twice as big and there's no change:
> Linux:   155857747968 bytes (156 GB, 145 GiB) copied, 595.506 s, 262 MB/s
> FreeBSD: 155857747968 bytes transferred in 296.450686 secs (525745951 bytes/sec)
> The cluster consists of 3 chunkservers, each with 64 GB RAM, and the Linux client is one of them.
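Marco's second test can be scripted directly; a sketch assuming the stock MooseFS daemon commands:

  # on the Linux client that also runs a chunkserver:
  mfschunkserver stop
  dd if=/moosefs/mnt/random_test_data of=/dev/null bs=1M
  mfschunkserver start   # restart promptly, before replication of undergoal chunks kicks in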
From: Alexander A. <ba...@ya...> - 2018-08-21 18:33:22
Hi again!

OK. I've made the test file twice as big and there's no change. I've launched dd several times and get stable results like these:

Linux:
# dd if=/moosefs/mnt/random_test_data of=/dev/null bs=1G
155857747968 bytes (156 GB, 145 GiB) copied, 595.506 s, 262 MB/s

FreeBSD:
# dd if=/moosefs/mnt/random_test_data of=/dev/null bs=1G
155857747968 bytes transferred in 296.450686 secs (525745951 bytes/sec)

The cluster consists of 3 chunkservers, each with 64 GB RAM, and the Linux client is one of them. The FreeBSD host has 48 GB RAM. All hosts have MTU 9000.

WBR
Alexander
From: Marco M. <mar...@gm...> - 2018-08-21 17:00:24
Also, can you transfer about 50 GB between the Linux client and the FreeBSD client just to test the network speed? (No disk, just /dev/zero and /dev/null, and no other substantial network activity during the test.)

On 08/21/2018 12:52 PM, Marco Milano wrote:
> Can you repeat your test with bs=1M? Are you getting the same results with bs=1M?
From: Marco M. <mar...@gm...> - 2018-08-21 16:52:51
Alexander AKHOBADZE,

Can you repeat your test with bs=1M? Are you getting the same results with bs=1M?

-- Marco

On 08/21/2018 02:45 AM, Alexander AKHOBADZE wrote:
> As far as I can see the MooseFS client on FreeBSD works faster than on Linux:
> FreeBSD: 77928873984 bytes transferred in 143.082525 secs (544642847 bytes/sec)
> Linux:   77928873984 bytes (78 GB, 73 GiB) copied, 258.288 s, 302 MB/s
From: Aleksander W. <ale...@mo...> - 2018-08-21 10:35:48
On 21.08.2018 08:45, Alexander AKHOBADZE wrote:
> As far as I can see the MooseFS client on FreeBSD works faster than on Linux.

Anything is possible, but try executing the test 5 times, then remove the highest and lowest results and take the average of the 3 results that are left :) It is also a good idea to read a file twice as big as your RAM.

#LINUX
dd if=/moosefs/mnt/random_test_data of=/dev/null bs=1M count=1024

#FreeBSD
dd if=/moosefs/mnt/random_test_data of=/dev/null bs=1m count=1024
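That procedure can be done in one loop; a sketch assuming GNU dd on Linux (its summary line ends in "<rate> MB/s"; the FreeBSD dd prints a different format):

  for i in 1 2 3 4 5; do
    dd if=/moosefs/mnt/random_test_data of=/dev/null bs=1M 2>&1 |
      awk '/copied/ {print $(NF-1)}'   # pull out the MB/s figure
  done | sort -n | sed '1d;$d' \
       | awk '{s+=$1} END {printf "%.1f MB/s average of middle 3\n", s/NR}'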
From: Alexander A. <ba...@ya...> - 2018-08-21 06:46:02
Hi dear All!

As far as I can see, the MooseFS client on FreeBSD works faster than on Linux:

# uname -a
FreeBSD 11.1-RELEASE-p13 FreeBSD 11.1-RELEASE-p13 #0: Tue Aug 14 19:34:21 UTC 2018 ro...@am...:/usr/obj/usr/src/sys/GENERIC amd64
# dd if=/moosefs/mnt/random_test_data of=/dev/null bs=1G
77928873984 bytes transferred in 143.082525 secs (544642847 bytes/sec)

vs

# uname -a
Linux 4.9.0-7-amd64 #1 SMP Debian 4.9.110-3+deb9u2 (2018-08-13) x86_64 GNU/Linux
# dd if=/moosefs/mnt/random_test_data of=/dev/null bs=1G
77928873984 bytes (78 GB, 73 GiB) copied, 258.288 s, 302 MB/s

All hosts involved in MooseFS are connected via 10 Gigabit Ethernet.

Am I right? Why!?

WBR
Alexander
From: Piotr R. K. <pio...@ge...> - 2018-08-10 16:24:21
> To make sure you are getting the full gigabit bandwidth on your servers, also use a tool to measure the connectivity between them, perhaps a tool like iperf can help.

The simplest thing you can run is dd + netcat:

First machine - listener:
nc -v -l 2222 > /dev/null

Second machine - sender:
dd if=/dev/zero bs=1M count=1024 | nc -v IP_OF_FIRST_MACHINE 2222

or iperf, as Diego suggested:

Listener:
iperf -s

Sender:
iperf -c LISTENER_IP -t 30

Best regards,
Peter

Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo...
Business & Technical Support Manager, MooseFS Client Support Team

> On 9 Aug 2018, at 2:10 PM, Remolina, Diego J <dij...@ae...> wrote:
> When you say 100Mbps, do you mean megabits per second or megabytes per second? At gigabit interconnect, your network limitation will be ~112MB/s.
From: Remolina, D. J <dij...@ae...> - 2018-08-09 19:47:49
When you say 100Mbps, do you mean megabits per second or megabytes per second? 100 Mbps is much different from 100 MB/s.

At gigabit interconnect, your network limitation will be ~112 MB/s (megabytes per second) for reads/writes, which is the maximum bandwidth of your gigabit network cards.

Have you tried mounting the file system on one of the servers itself? This should rule out the network. If the reads/writes are slow there too, it may help you isolate the issue to the backend storage.

To make sure you are getting the full gigabit bandwidth on your servers, also use a tool to measure the connectivity between them; perhaps a tool like iperf can help.

HTH,
Diego

From: Roman <int...@gm...>
> I have a 1 Gbit network. Initially, the write and read speed in the MooseFS cluster was ~100Mbps (3 copies of the data). At the moment it is about 10Mbps. There are no problems with the network. Where to dig?
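Diego's ~112 MB/s figure follows from quick arithmetic: 1 Gbit/s / 8 = 125 MB/s of raw line rate, and Ethernet/IP/TCP framing takes roughly 5-10% of that, leaving about 112-118 MB/s of payload. By the same arithmetic, a link renegotiated down to 100 Mbit/s tops out near 11 MB/s, which matches the symptom described in this thread.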
From: Wilson, S. M <st...@pu...> - 2018-08-09 14:18:31
Hi Roman,

I've had that happen to me on a couple of different servers over the past few months. You might find it worthwhile to monitor network interface speed using a tool like Monit:

https://mmonit.com/monit/documentation/monit.html#NETWORK-INTERFACE-TESTS

I haven't done it yet, but your email reminded me to implement this kind of monitoring on my workstations and servers. Thanks for the reminder! :-)

Regards,
Steve

From: Roman <int...@gm...>
> Problem fixed! There was a problem with the network card. I did not notice that it changed from 1 Gbit/s to 100 Mb/s after the power failure.
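A minimal sketch of such a check in monitrc, using Monit's documented network tests (the interface name is an assumption):

  check network lan with interface eth0
    if failed link then alert
    if changed link capacity then alert   # catches a 1 Gb/s -> 100 Mb/s renegotiation
    if saturation > 90% then alert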
From: Aleksander W. <ale...@mo...> - 2018-08-09 12:19:21
Hi,

Would you be so kind as to tell us something more about this case? What has changed?

Maybe a broken hard disk?

Best regards
Alex

On 09.08.2018 13:39, Roman wrote:
> I have problems with a drop in read and write speed. Initially the speed in the MooseFS cluster was ~100Mbps (3 copies of the data); at the moment it is about 10Mbps. Where to dig?
From: Roman <int...@gm...> - 2018-08-09 12:16:40
Hi,

Problem fixed! There was a problem with the network card. I did not notice that it had changed from 1 Gbit/s to 100 Mb/s after a power failure.

Thanks,
Roman
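A renegotiated link speed like that can be spotted in seconds; assuming a Linux host with interface eth0 (FreeBSD shown for comparison):

  ethtool eth0 | grep -i speed      # e.g. "Speed: 100Mb/s" instead of 1000Mb/s
  # FreeBSD equivalent:
  ifconfig em0 | grep media         # the media line shows the negotiated speed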
From: Roman <int...@gm...> - 2018-08-09 11:39:36
Hello!

I have a problem with a drop in read and write speed. I have a 1 Gbit network. Initially, the write and read speed in the MooseFS cluster was ~100Mbps (3 copies of the data). At the moment it is about 10Mbps. There are no problems with the network; the switches and network cards work as they should.

Where to dig?

Thanks,
Roman
From: Gandalf C. <gan...@gm...> - 2018-06-27 10:32:05
On Wed, 27 Jun 2018 at 12:27, Alexander AKHOBADZE <ba...@ya...> wrote:
> As far as I understand, Solution 1 affects not only the master. In some form the chunkservers and clients also have to be informed about MASTER_DNS_SERVER.

You are right.

> As for Solution 3, you can avoid updating the local zone file on every server by setting up the DNS servers in primary/secondary mode. In this case you only have to update the zone file on the primary; all secondaries will receive the changes automatically.

It's not that easy. If the master is down, you have to spin up another master and so on, only to update DNS.
From: Alexander A. <ba...@ya...> - 2018-06-27 10:27:22
As far as I understand, Solution 1 affects not only the master: in some form the chunkservers and clients also have to be informed about MASTER_DNS_SERVER.

As for Solution 3, you can avoid updating the local zone file on every server by setting up the DNS servers in primary/secondary mode. In this case you only have to update the zone file on the primary; all secondaries will receive the changes automatically.

And finally, to avoid slowing down DNS resolution, CARP (or something similar) can be used among the DNS servers. DNS clients will use the HA CARP IP. In case of the primary DNS server's failure, the CARP IP will be activated on the secondary DNS server and clients will not see any difference.

On 27.06.2018 13:02, Gandalf Corvotempesta wrote:
> Let's assume an isolated network used as a SAN. In this network you don't have internet access (or it is very, very limited). Relying on an external DNS server is prone to DNS attacks.
From: Gandalf C. <gan...@gm...> - 2018-06-27 10:03:15
On Wed, 27 Jun 2018 at 11:11, Alexander AKHOBADZE <ba...@ya...> wrote:
> I can't see what prevents you from setting up as many DNS servers as needed to achieve HA of the DNS service. They can be isolated from all the rest of the network (i.e. by a firewall) to be accessible only by the MFS cluster's parts.

Let's assume an isolated network used as a SAN. In this network you don't have internet access (or it is very, very limited). Relying on an external DNS server is prone to DNS attacks (even advanced DNS infrastructures like Dyn.com and Amazon were down due to an attack last year).

If I have to add a local DNS server only for MooseFS HA, I also need to keep an additional public DNS server for the rest of the operating system. (It doesn't matter if a public DNS is down for some hours on a SAN; in those hours I don't run software updates or similar, but the storage will still be up and running.)

Adding an additional resolver to /etc/resolv.conf not connected to anything will slow down DNS resolution, as all queries for anything except the MooseFS master domain will fail and a new query (to another resolver) must be made.

So, 3 solutions:

1) An additional parameter in mfsmaster.cfg:
MASTER_DNS_SERVER=127.0.0.1
to be used for HA. MooseFS would then fetch the master servers from that DNS server and not from the standard OS resolver.

2) Adding a sort of scale-out DNS service like Consul, so that each master registers itself with Consul.

3) Adding a local DNS server with forwarding capability (for example, PowerDNS). A local zone like "moosefs.master.lan" would be resolved locally; everything else would be forwarded to a public DNS server.

Solution 2 is almost start-and-forget. Solution 1 needs some adjustments every time a new master node is added (we have to set the same value on all local DNS servers).

Solution 3 is similar to solution 1, but doesn't require any extra flag in MooseFS, because we can set the local resolver in /etc/resolv.conf; it still requires some adjustments every time we add or remove a master server (we have to update the local zone file on every server).
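A sketch of solution 2, using Consul's standard service registration and DNS interface (the health check, file path, and service name are assumptions; 9421 is the master's client port):

  # /etc/consul.d/mfsmaster.json, registered on each master candidate:
  {
    "service": {
      "name": "mfsmaster",
      "port": 9421,
      "check": { "tcp": "localhost:9421", "interval": "10s" }
    }
  }

  # clients then resolve every healthy master through Consul's DNS endpoint:
  dig @127.0.0.1 -p 8600 mfsmaster.service.consul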
From: Alexander A. <ba...@ya...> - 2018-06-27 09:11:01
Hi Gandalf!

Could you describe your question more exactly? I can't see what prevents you from setting up as many DNS servers as needed to achieve HA of the DNS service.

They can be isolated from all the rest of the network (i.e. by a firewall) to be accessible only by the MFS cluster's parts. They can even be colocated with the master servers on the same hardware and host some fake DNS zone which is used only by the MFS cluster.

Also, by the way, I have to share my humble experience in making the MooseFS master highly available. I've achieved HA using CARP+devd on FreeBSD with the MooseFS community edition. In short, when the CARP-controlled IP moves from one host to another (the CARP interface changes role from BACKUP to MASTER), this event is caught by devd. devd executes my script, which stops the metalogger and starts the master. They both use the same DATA_PATH, so the metalogger, which has up-to-date metadata, becomes the new master. And vice versa: on a CARP MASTER -> BACKUP event, the script stops the master and starts the metalogger. All chunkservers and clients are configured to use the CARP IP as the master's address. So no DNS at all ;--)

Yes, I know this is not an ideal solution, but it works! :--)

wbr
Alexander

On 27.06.2018 11:20, Gandalf Corvotempesta wrote:
> In v4, HA depends on DNS. We have to add one A record for every master server. As some users could deploy MooseFS as a SAN, with limited (if any) internet access, using a DNS server is not a simple task.
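A sketch of the devd half of that setup, using FreeBSD's CARP devd events; the script path and its contents are illustrative, not the poster's actual files:

  # /etc/devd/carp.conf
  notify 0 {
      match "system" "CARP";
      match "type"   "MASTER";
      action "/usr/local/sbin/mfs-failover.sh master";
  };
  notify 0 {
      match "system" "CARP";
      match "type"   "BACKUP";
      action "/usr/local/sbin/mfs-failover.sh backup";
  };

  # /usr/local/sbin/mfs-failover.sh
  #!/bin/sh
  case "$1" in
    master) mfsmetalogger stop
            mfsmaster -a ;;        # -a replays the changelogs before starting
    backup) mfsmaster stop
            mfsmetalogger start ;;
  esac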
From: Gandalf C. <gan...@gm...> - 2018-06-27 08:20:38
In v4, HA depends on DNS: we have to add one A record for every master server. This is good only in some environments, where you have access to a DNS server. Since some users may deploy MooseFS as a SAN, with limited (if any) internet access, using a DNS server is not a simple task.

Could I suggest something similar to Consul? It provides a DNS interface for queries and also exposes a very simple API that could be used by the master server to register itself (and automatically update the DNS record).

Doing this doesn't require any internet access in the SAN, nor does it expose MooseFS to DNS attacks. Using a locally configured DNS server is still not trivial: you have to manually update all local DNS server instances (obviously, you won't run with just one server) and so on.

Any workaround?
From: Jakub Kruszona-Z. <jak...@ge...> - 2018-06-19 06:51:33
> On 15 Jun, 2018, at 15:48, Евгений Дятлов <jen...@gm...> wrote:
> I have a storage with default goal "1" for all files and folders.

OK.

> For some new files I would like to set goal 2 or 3. I set the minimal goal for the mount to "2". But what I saw: files are created with goal 1.

That's correct. When there is a minimal goal in mfsmount, users can't set the goal to a lower value, but if such a goal is already set then files will use it.

> Why does this happen? Shouldn't the minimum value be taken as the default file goal?

No. Set goal 2 for all your files and folders (mfssetgoal/mfssetsclass -r 2 <mfsmountpoint>) and all new files will have their goal set to 2 (see below).

> If I set goal "2" for some folder, what does it mean?

This means that all new files created in such a folder will initially have their goal set to "2". This is how it works in MFS: set the desired goal/sclass for a given folder, and all new files and folders inside will inherit this property. The minimal goal in the mount only means that a user can't change the goal back to "1".

--
Regards,
Jakub Kruszona-Zawadzki
Segmentation fault (core dumped)
Phone: +48 602 212 039
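A short illustration of that inheritance, assuming an MFS mount at /mnt/mfs:

  mfssetgoal -r 2 /mnt/mfs/projects      # set goal 2 on the folder and its contents
  touch /mnt/mfs/projects/newfile
  mfsgetgoal /mnt/mfs/projects/newfile   # reports 2, inherited from the parent folder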
From: Alexander A. <ba...@ya...> - 2018-06-18 18:47:44
Hi guys!

By the way, I have to share my small experience: MooseFS works badly in a VM :--( I've checked it many times with all MooseFS components; on hardware it works much better. The master is a critical part of the cluster, and everything really depends on its response time, so I suggest not running the master in a VM.

WBR
Alexander

On 18.06.2018 14:43, Remolina, Diego J wrote:
> You may want to also run the metalogger process on the second chunkserver. You can then manually convert that server to a master if your master goes down.
From: Remolina, D. J <dij...@ae...> - 2018-06-18 14:15:39
Hi Nicolas,

I assume you are trying 3.x.

You may want to also run the metalogger process on the second chunkserver. You can then manually convert that server to a master if your master goes down, since you are using it as a metalogger. On MooseFS 3.x *Pro* (the paid version), you get multiple masters, and HA is easily achieved with a multi-entry DNS A record.

Do note that MooseFS 4.x is close to being released and will have the multiple-master feature in the free community version, so you will no longer have to pay to get it.

MooseFS requires the master to work: if the master is down and there are no other metadata masters available, or you do not promote a metalogger to master, then your filesystem will not work.

Some tips on masters: you can run them on a separate machine; you need a good amount of RAM (look at the docs on the website for exact formulas); and since the master is a single-threaded process, you want the fastest cores you can get, not necessarily the most cores, for your metadata server.

https://moosefs.com/support/#documentation

HTH,
Diego

From: Nicolas Embriz <nb...@te...>
> I am testing MooseFS on FreeBSD 11, so far using only 2 instances. While testing I noticed that if the master goes down, all the chunkservers become unresponsive.
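A sketch of that manual promotion on the metalogger host; paths assume the default /var/lib/mfs, and depending on the MooseFS version the metalogger's files may need renaming first, so check the documentation before relying on this:

  mfsmetalogger stop
  cd /var/lib/mfs
  cp metadata_ml.back metadata.mfs.back   # if mfsmaster -a does not pick up the *_ml files
  mfsmaster -a                            # -a rebuilds metadata from the newest backup + changelogs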
From: Grouchy S. <sys...@gr...> - 2018-06-16 16:49:14
On 06/16/2018 05:32 AM, Nicolas Embriz wrote:
> Now regarding the master: if it is down, I can't access or read the files on the chunkservers. Is there a way to configure the chunkservers to keep working with what they have until the master comes back, and then just re-sync?

The master server is a single point of failure in the MooseFS 3 community edition. I believe that is changing in the upcoming MooseFS 4. The master server should be on the same network as the chunkservers and the clients; running it remotely is unlikely to work well.

Since you're on FreeBSD, be aware of https://github.com/moosefs/moosefs/issues/89. The fuse implementation on FreeBSD 11 can cause several issues, including corruption.
From: Nicolas E. <nb...@te...> - 2018-06-16 10:55:25
Hi, I am testing MooseFS on FreeBSD 11, so far using only 2 instances: one behaving as both master and chunkserver, and the other as a chunkserver only. So far all pretty good, but while testing I noticed that if the master goes down, all the chunkservers become unresponsive. In my attempt to make a more redundant setup, I am thinking of using a tiny VM (1 GB RAM, 20 GB disk) only to keep the master up and allow the 2 chunkservers to stay operable, but I have some doubts.

If I understand properly, the metadata is stored on the master and the chunkservers just store the raw data; therefore the master doesn't need to be huge in storage compared to the chunkservers. Also, when writing data to one chunkserver, only the metadata is sent to the master and the raw data is synced only between the chunkservers. If I am understanding correctly, the more intense use of network bandwidth is between chunkservers and not with the master. Is this correct? (I'm thinking of a basic setup where I could have 2 in-home servers on the same network, but only the master on a remote tiny VM that is "always online".)

Now regarding the master: if it is down, I can't access or read the files on the chunkservers. Is there a way to configure the chunkservers to keep working with what they have until the master comes back, and then just re-sync?

Regards.
From: Евгений Д. <jen...@gm...> - 2018-06-15 13:48:19
Hi,

I'm seeing a strange thing with the mount goal. For example, I have a storage with a default goal of "1" for all files and folders. For some new files I would like to set goal 2 or 3, so I set the minimal goal for the mount to "2". But what I saw: files are created with goal 1. Why does this happen? Shouldn't the minimum value be taken as the default file goal?

Second question: if I set goal "2" for some folder, what does it mean?

--
Eugene