From: Gandalf C. <gan...@gm...> - 2018-05-30 17:55:18
I've attached an image. Pretty strange: the metadata versions mismatch, but the "metadata id" values are identical and the follower is in the "DESYNC" state.

On Wed, 30 May 2018 at 18:48, Gandalf Corvotempesta <gan...@gm...> wrote:
> On Wed, 30 May 2018 at 18:45, Gandalf Corvotempesta <gan...@gm...> wrote:
> > So, the only configuration is to create a DNS record pointing to all master
> > IPs, then set "MASTER_HOST" on all services pointing to this DNS record.
> Another issue: after rebooting one of the follower servers, the master
> process is now respawning and stuck at "FOLLOWER (DESYNC)".
> Should I manually run something after the reboot? Is it possible to automate
> the resync?
From: Gandalf C. <gan...@gm...> - 2018-05-30 16:57:24
/cc the list

---------- Forwarded message ---------
From: Gandalf Corvotempesta <gan...@gm...>
Date: Wed, 30 May 2018, 18:48
Subject: Re: [MooseFS-Users] Master IP in v4
To: <pio...@ge...>

On Wed, 30 May 2018 at 18:45, Gandalf Corvotempesta <gan...@gm...> wrote:
> So, the only configuration is to create a DNS record pointing to all master
> IPs, then set "MASTER_HOST" on all services pointing to this DNS record.

Another issue: after rebooting one of the follower servers, the master
process is now respawning and stuck at "FOLLOWER (DESYNC)".
Should I manually run something after the reboot? Is it possible to automate
the resync?
From: Gandalf C. <gan...@gm...> - 2018-05-30 16:49:16
On Wed, 30 May 2018 at 18:48, Piotr Robert Konopelko <pio...@mo...> wrote:
> Use "vim /usr/share/mfscgi/index.html" on every host where MFS CGI is installed :)
> Then clear browser's cache

I thought there was a configuration file somewhere :)
From: Piotr R. K. <pio...@mo...> - 2018-05-30 16:48:19
> On 30 May 2018, at 6:45 PM, Gandalf Corvotempesta <gan...@gm...> wrote:
>
> On Wed, 30 May 2018 at 18:40, Piotr Robert Konopelko <pio...@ge...> wrote:
>> Yes
>
> So, the only configuration is to create a DNS record pointing to all master
> IPs, then set "MASTER_HOST" on all services pointing to this DNS record.
>
> How can I set the master host in the CGI interface? I've seen that it is
> possible to use "masterhost" in the query string, but how can I set that as
> the default?

Use "vim /usr/share/mfscgi/index.html" on every host where MFS CGI is installed :)
Then clear the browser's cache.

Best regards,
Peter

--
Piotr Robert Konopelko | mobile: +48 601 476 440
MooseFS Client Support Team | moosefs.com
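In practice, the two options discussed here look roughly as follows. This is a hedged sketch: the host names are placeholders, and the port and mfs.cgi path are the usual MooseFS CGI defaults rather than values confirmed in this thread, so adjust them to the local installation.

```
# One-off: pass the master host in the query string of the CGI monitor
http://cgihost.example.com:9425/mfs.cgi?masterhost=mfsmaster.example.com

# Persistent: change the default master host baked into the bundled page
# on every host where the CGI is installed, then clear the browser cache
vim /usr/share/mfscgi/index.html
```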
From: Gandalf C. <gan...@gm...> - 2018-05-30 16:45:43
On Wed, 30 May 2018 at 18:40, Piotr Robert Konopelko <pio...@ge...> wrote:
> Yes

So, the only configuration is to create a DNS record pointing to all master
IPs, then set "MASTER_HOST" on all services pointing to this DNS record.

How can I set the master host in the CGI interface? I've seen that it is
possible to use "masterhost" in the query string, but how can I set that as
the default?
From: Piotr R. K. <pio...@mo...> - 2018-05-30 16:41:11
> On 30 May 2018, at 6:36 PM, Gandalf Corvotempesta <gan...@gm...> wrote:
>
> On Wed, 30 May 2018 at 18:29, Piotr Robert Konopelko <pio...@ge...> wrote:
>> We recommend to use DNS name and three "A" entries in this case.
>
> Yes, I've moved to DNS for the "mfsmaster" entry, pointing to all master
> servers.
>
>> MooseFS components are "aware" that there are a few entries, so e.g. if a CS connects
>> to a Master which is not the Leader, it is redirected by this Master to the Leader one, e.g.:
>>
>> root@ts01:~# mfsmount -H mfsmaster-10g.ct.lan /mnt/mfs
>> mfsmaster 10.10.10.1 - found leader: 10.10.10.2
>> mfsmaster accepted connection with parameters: read-write,restricted_ip,admin ; root mapped to root:root
>> root@ts01:~#
>
> OK, so there is a sort of "redirect".
> But what if the first host used for the connection is down? Will MooseFS
> reconnect to another IP automatically until it finds a working one?

Yes

Best,
Pete

--
Piotr Robert Konopelko | mobile: +48 601 476 440
MooseFS Client Support Team | moosefs.com
From: Gandalf C. <gan...@gm...> - 2018-05-30 16:36:35
On Wed, 30 May 2018 at 18:29, Piotr Robert Konopelko <pio...@ge...> wrote:
> We recommend to use DNS name and three "A" entries in this case.

Yes, I've moved to DNS for the "mfsmaster" entry, pointing to all master
servers.

> MooseFS components are "aware" that there are a few entries, so e.g. if a CS connects
> to a Master which is not the Leader, it is redirected by this Master to the Leader one, e.g.:
>
> root@ts01:~# mfsmount -H mfsmaster-10g.ct.lan /mnt/mfs
> mfsmaster 10.10.10.1 - found leader: 10.10.10.2
> mfsmaster accepted connection with parameters: read-write,restricted_ip,admin ; root mapped to root:root
> root@ts01:~#

OK, so there is a sort of "redirect".
But what if the first host used for the connection is down? Will MooseFS
reconnect to another IP automatically until it finds a working one?
From: Gandalf C. <gan...@gm...> - 2018-05-30 16:20:31
Hi all,

One simple question: with v4 and multi-master mode, is there any floating IP pointing to the current leader? In other words, I've configured the following hosts file:

10.200.1.11 cs01.mfs-a.san cs01 mfsmaster
10.200.1.12 cs02.mfs-a.san cs02 mfsmaster
10.200.1.13 cs03.mfs-a.san cs03 mfsmaster

All chunkservers are also masters, but I don't have any floating IP set. So, how does it work? Are mfsmount/mfscgi/etc. able to retry the connection until one succeeds, i.e. try the first mfsmaster IP, then fall back to the second, and so on?
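The replies shown earlier on this page boil down to: publish one DNS name with an A record per master candidate, and point every component at that name; components then follow the master's redirect to the current leader. A minimal sketch, reusing the placeholder addresses from the hosts file above (not a verified configuration):

```
; DNS zone fragment: one name, one A record per master candidate
mfsmaster   IN  A   10.200.1.11
mfsmaster   IN  A   10.200.1.12
mfsmaster   IN  A   10.200.1.13
```

```
# mfschunkserver.cfg / mfsmetalogger.cfg on every node
MASTER_HOST = mfsmaster.mfs-a.san

# client mount: mfsmount is redirected to the current leader
mfsmount -H mfsmaster.mfs-a.san /mnt/mfs
```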
From: Gandalf C. <gan...@gm...> - 2018-05-28 18:21:46
Keep in mind that I've not said that docs *must* be written. I've said that, IMHO, docs about internals *should* be written. It's not an imperative, it's advice.

On Mon, 28 May 2018 at 20:16, Gandalf Corvotempesta <gan...@gm...> wrote:
> On Mon, 28 May 2018 at 20:03, Casper Langemeijer <cas...@pr...> wrote:
> > Then you state that there should be documentation. You are very wrong
> > there. Unless you pay for something, you cannot make any demands. The guys
> > at MooseFS are very talented programmers building very stable software.
> > Because of the business model they chose, it is for you to use at
> > absolutely no cost. I would be humble instead of making demands.
>
> I'm sorry but I strongly disagree.
> Documentation is different from support.
> If you use OSS, you don't have any support. You'll use it as-is.
> But proper documentation is needed for any kind of product.
> Look at any OSS project. Even on Linux there are man pages. A man page is
> documentation about a product.
>
> > Then there is the question itself. There is no documentation. This
> > probably is because it is an *internal* thing of the system. I've had to
> > change my Nagios monitoring script multiple times, because sometimes the
> > protocol changes. Just because you *can* talk to a listening socket,
> > doesn't mean that you are supposed to. It simply is not a documented API.
>
> That's not true.
> What you are saying is correct for closed-source software, where you are
> forced to use what is provided.
> In Windows, you don't have access to internal APIs, because you are not
> supposed to do that kind of thing.
> But AFAIK this is an open project, and anyone willing to contribute
> some code should know how things work.
> Are you saying that to add a very simple feature (just an example), a
> developer has to reverse-engineer the whole project?
> This is very time consuming and more prone to errors, and as a result it will
> keep developers away.
> A simple example: https://github.com/gluster/glusterfs-specs
From: Gandalf C. <gan...@gm...> - 2018-05-28 18:16:31
On Mon, 28 May 2018 at 20:03, Casper Langemeijer <cas...@pr...> wrote:
> Then you state that there should be documentation. You are very wrong there. Unless you pay for something, you cannot make any demands. The guys at MooseFS are very talented programmers building very stable software. Because of the business model they chose, it is for you to use at absolutely no cost. I would be humble instead of making demands.

I'm sorry but I strongly disagree.
Documentation is different from support.
If you use OSS, you don't have any support. You'll use it as-is.
But proper documentation is needed for any kind of product.
Look at any OSS project. Even on Linux there are man pages. A man page is documentation about a product.

> Then there is the question itself. There is no documentation. This probably is because it is an *internal* thing of the system. I've had to change my Nagios monitoring script multiple times, because sometimes the protocol changes. Just because you *can* talk to a listening socket, doesn't mean that you are supposed to. It simply is not a documented API.

That's not true.
What you are saying is correct for closed-source software, where you are forced to use what is provided.
In Windows, you don't have access to internal APIs, because you are not supposed to do that kind of thing.
But AFAIK this is an open project, and anyone willing to contribute some code should know how things work.
Are you saying that to add a very simple feature (just an example), a developer has to reverse-engineer the whole project?
This is very time consuming and more prone to errors, and as a result it will keep developers away.

A simple example: https://github.com/gluster/glusterfs-specs
From: Casper L. <cas...@pr...> - 2018-05-28 18:04:08
Hi Gandalf,

To me, the tone of this conversation has not been very friendly so far. I realize my answer was a bit short and could be perceived as brusque (discourteously blunt). I apologize. I want to point out that if you need any assistance or want to pick other users' brains for knowledge, this is the place. There are many people here who are very happy to help.

Your first question is very broad, and there is no simple answer. To help us help you, you could describe the background of your questions.

Then you state that there should be documentation. You are very wrong there. Unless you pay for something, you cannot make any demands. The guys at MooseFS are very talented programmers building very stable software. Because of the business model they chose, it is for you to use at absolutely no cost. I would be humble instead of making demands.

Then there is the question itself. There is no documentation. This probably is because it is an *internal* thing of the system. I've had to change my Nagios monitoring script multiple times, because sometimes the protocol changes. Just because you *can* talk to a listening socket doesn't mean that you are supposed to. It simply is not a documented API. This also means that even if I had the time to document some of MooseFS's internals, I wouldn't do it. It will change.

I will help you any way I can to pass on as much as I've learnt about MooseFS, though. This requires some effort from your side too. If you are willing to put in your time, using open-source software is a precious gift. If you want to be a user of a product, the guys at MooseFS deliver a very professional solution. Talk to them and see if your requirements are met.

Greetings,
Casper

On Sun, 27 May 2018 at 18:10, Gandalf Corvotempesta <gan...@gm...> wrote:
> Contributing how? I don't know how it works, so I can't write about it...
>
> On Sun, 27 May 2018 at 18:06, Tom Ivar Helbekkmo <ti...@ha...> wrote:
>> Gandalf Corvotempesta <gan...@gm...> writes:
>>
>> > I think that for an open source project, writing proper docs about the
>> > main protocol should be considered.
>>
>> I am sure your contribution would be appreciated.
>>
>> -tih
>> --
>> Most people who graduate with CS degrees don't understand the significance
>> of Lisp. Lisp is the most important idea in computer science.  --Alan Kay
From: Tom I. H. <ti...@ha...> - 2018-05-27 16:23:02
Gandalf Corvotempesta <gan...@gm...> writes:

> I think that for an open source project, writing proper docs about the
> main protocol should be considered.

I am sure your contribution would be appreciated.

-tih
--
Most people who graduate with CS degrees don't understand the significance
of Lisp. Lisp is the most important idea in computer science.  --Alan Kay
From: Gandalf C. <gan...@gm...> - 2018-05-27 16:10:25
Contributing how? I don't know how it works, so I can't write about it...

On Sun, 27 May 2018 at 18:06, Tom Ivar Helbekkmo <ti...@ha...> wrote:
> Gandalf Corvotempesta <gan...@gm...> writes:
>
> > I think that for an open source project, writing proper docs about the
> > main protocol should be considered.
>
> I am sure your contribution would be appreciated.
>
> -tih
> --
> Most people who graduate with CS degrees don't understand the significance
> of Lisp. Lisp is the most important idea in computer science.  --Alan Kay
From: Gandalf C. <gan...@gm...> - 2018-05-27 12:44:06
On Sun, 27 May 2018 at 14:42, Casper Langemeijer <cas...@pr...> wrote:
> For me, the code of the Python web interface was understandable enough to build an extensive Nagios plugin to monitor my MooseFS cluster.

That's for sure, but having to look at the sources is not easy or smart.
I think that for an open source project, writing proper docs about the main protocol should be considered.
From: Casper L. <cas...@pr...> - 2018-05-27 12:42:24
For me, the code of the Python web interface was understandable enough to build an extensive Nagios plugin to monitor my MooseFS cluster.

On Sun, 27 May 2018 at 13:59, Gandalf Corvotempesta <gan...@gm...> wrote:
> Let's assume someone wants to create a custom client or web interface.
> It should talk to mfsmaster, but the communication protocol is not
> described anywhere.
>
> Any docs about that?
>
> Other than that, do you have any official VCS or code-review system to
> publish what's being worked on, like LizardFS does?
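As a concrete illustration of the kind of monitoring being discussed: because the master's wire protocol is undocumented, a robust check does not need to speak it at all. The sketch below is a minimal, hypothetical Nagios-style probe (the host name is a placeholder); it only verifies that the master's client port accepts TCP connections and deliberately does not implement the MooseFS protocol, so it keeps working across protocol changes.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style liveness probe for a MooseFS master (sketch)."""
import socket
import sys

MASTER_HOST = "mfsmaster.example.com"   # placeholder
MASTER_PORT = 9421                      # default client port on the master
TIMEOUT_S = 5.0

def main() -> int:
    try:
        # Only checks TCP reachability; it does not speak the MooseFS protocol.
        with socket.create_connection((MASTER_HOST, MASTER_PORT), timeout=TIMEOUT_S):
            print(f"OK - {MASTER_HOST}:{MASTER_PORT} is accepting connections")
            return 0  # Nagios: OK
    except OSError as exc:
        print(f"CRITICAL - cannot connect to {MASTER_HOST}:{MASTER_PORT}: {exc}")
        return 2      # Nagios: CRITICAL

if __name__ == "__main__":
    sys.exit(main())
```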
From: Gandalf C. <gan...@gm...> - 2018-05-27 11:58:59
Let's assume someone wants to create a custom client or web interface. It would have to talk to mfsmaster, but the communication protocol is not described anywhere.

Any docs about that?

Other than that, do you have any official VCS or code-review system where work in progress is published, like LizardFS does?
From: Wilson, S. M <st...@pu...> - 2018-05-25 20:04:54
________________________________________
From: Gandalf Corvotempesta <gan...@gm...>
Sent: Friday, May 25, 2018 3:59 PM
To: Wilson, Steven M
Cc: dij...@ae...; moo...@li...
Subject: Re: [MooseFS-Users] New to MooseFS, question about performance of bonnie++ test

On Fri, 25 May 2018 at 20:49, Wilson, Steven M <st...@pu...> wrote:
> Good luck with your testing. We have been using MooseFS in support of our research labs for several years now (since 2011) and have been extremely pleased with its reliability, features, ease of administration, and competitive performance.

This is something I'm very interested in.
Have you been using MooseFS since 2011? Can you please share your experience?

==================================================================

Yes, we began using MooseFS in 2011 and I'd be happy to answer any specific questions you have about our experience here with it.

Steve
From: Gandalf C. <gan...@gm...> - 2018-05-25 19:59:20
On Fri, 25 May 2018 at 20:49, Wilson, Steven M <st...@pu...> wrote:
> Good luck with your testing. We have been using MooseFS in support of our research labs for several years now (since 2011) and have been extremely pleased with its reliability, features, ease of administration, and competitive performance.

This is something I'm very interested in.
Have you been using MooseFS since 2011? Can you please share your experience?
From: Wilson, S. M <st...@pu...> - 2018-05-25 18:49:10
|
Hi Diego, This is completely unrelated to your question but hopefully it's a helpful hint nonetheless. If you haven't done so already, you should set your zpool failmode property to "continue" when using single disk pools: zpool set failmode=continue {dataset} Without that, any ZFS-detected error in your pool will cause ZFS commands to hang and a server reboot is about the only way to recover. I learned this trick the hard way. Good luck with your testing. We have been using MooseFS in support of our research labs for several years now (since 2011) and have been extremely pleased with its reliability, features, ease of administration, and competitive performance. Regards, Steve ________________________________ From: Remolina, Diego J <dij...@ae...> Sent: Friday, May 25, 2018 1:54 PM To: moo...@li... Subject: [MooseFS-Users] New to MooseFS, question about performance of bonnie++ test Hi everyone, I have been playing with MooseFS 3.x for the past week and a half. It has been pretty nice to work with it and looking forward to the 4.x release with HA. As part of the testing I am doing (comparing drbd, glusterfs and MooseFS), I have discovered one issue when running bonnie++ tests. The rewrite portion of the bonnie++ test, just takes forever. I am not sure that I will hit this particular use in real life since this will be mostly a file server, likely exporting files via native MooseFS to other *nix machines and likely using a samba server (which mounts moosefs as a client) to export to windows machines. >From the bonnie++ docs: https://www.coker.com.au/bonnie++/readme.html 3. Rewrite. Each BUFSIZ of the file is read with read(2), dirtied, and rewritten with write(2), requiring an lseek(2). Since no space allocation is done, and the I/O is well-localized, this should test the effectiveness of the filesystem cache and the speed of data transfer. MooseFS Setup: 3 Similar DELL R730xv on 3 separate datacenters located in separate buildings, but on same campus. Servers have Dual Xeon 2680v4 (1 and 2) and 2640v4 (server 3) and 256GB RAM each. Servers have 24 10K RPM 2.5" SAS drives. These were formatted as individual ZFS pools and a file system created on each drive: zfs create -o logbias=trohougput -o atime=off -o xattr=sa mfspoolXX/mfsfs (XX: 00 to 23) compression=lz4 was set on the pool, so it is inherited already by any file system created under each pool. All servers connected to network via 10Gbps (dual links LACP). The datacenters are interconnected at least at 40Gbps to the campus network Server 3 is the mfsmaster, chunkserver and runs cgi server (datacenter with the best redundancies and lower risk of failure). Servers 2 and 1 are meta loggers and chunk servers. My test client is an Ubuntu machine running 16.04 and it is on a separate subnet which has to traverse a subnet firewall and each server's host based firewall. Servers have opened 9419-9422 to the subnet where my client lives. Client machine is connected via 1 gbit to the network and the building switches have at least 20gbps connections to the main campus network. Highest ping RTT from client to servers is: Server1 -> rtt min/avg/max/mdev = 1.220/1.318/1.409/0.081 ms Server2 -> rtt min/avg/max/mdev = 1.214/1.357/1.420/0.064 ms Server3 -> rtt min/avg/max/mdev = 1.399/1.458/1.606/0.072 ms I have configured MooseFS with goal=3. 
My bonnie++ test looks like (just one result, will be happy to post the other 2 on request): Version 1.97 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP ae410001_ubuntu 32G 48 40 113920 19 1346 0 2665 99 112490 3 2531 28 Latency 197ms 29525us 1606ms 8509us 1550ms 108ms ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP ae410001_ubuntu_ 16 954 5 +++++ +++ 1886 6 934 5 1933 4 1000 4 Latency 17502us 1771us 40616us 24583us 10711us 8372us Running the 3 tests using this command: time bonnie++ -d /mnt/mfsserver/home/dijuremo -r 16G -s 32G -m ae410001_ubuntu_cliene_vlan -q -x3 -u dijuremo Took: real 1283m6.528s user 0m39.649s sys 9m29.547s Any other real time use scenarios, copying files with cp, scp and rsync and duplicating files, have been just fine. I pretty much hit the gigabit network bandwidth limitation from the client computer. I am wondering why the rewrite tests on bonnie++ in my setup are so bad (two orders of magnitude lower). Is there something that I may be overlooking in my configuration? I already tried one run with another server connected to the same subnet as the MooseFS servers to skip the subnet firewalls, but results are pretty similar. That other server has a gigabit network connection and the read and writes are around the maximum (~112MB/s) allowed for a gigabit network card, but the rewrite piece of the test is just around the 1MB/s mark. Sorry for the long e-mail, but I hope I have provided enough information to describe the setup. Thanks, Diego |
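Putting the failmode tip and the per-disk ZFS layout from this exchange together, the per-drive preparation would look roughly like the commands below. This is a sketch, not a verified procedure: the pool names follow the mfspoolXX convention from the post, and note that the logbias value is spelled "throughput" (the post contains a typo).

```sh
# For each single-disk pool (XX = 00 .. 23):
zpool set failmode=continue mfspoolXX
zfs set compression=lz4 mfspoolXX
zfs create -o logbias=throughput -o atime=off -o xattr=sa mfspoolXX/mfsfs
```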
From: Remolina, D. J <dij...@ae...> - 2018-05-25 18:26:58
Hi everyone,

I have been playing with MooseFS 3.x for the past week and a half. It has been pretty nice to work with, and I am looking forward to the 4.x release with HA.

As part of the testing I am doing (comparing DRBD, GlusterFS and MooseFS), I have discovered one issue when running bonnie++ tests: the rewrite portion of the bonnie++ test just takes forever. I am not sure that I will hit this particular use in real life, since this will mostly be a file server, likely exporting files via native MooseFS to other *nix machines and likely using a Samba server (which mounts MooseFS as a client) to export to Windows machines.

From the bonnie++ docs: https://www.coker.com.au/bonnie++/readme.html

  3. Rewrite. Each BUFSIZ of the file is read with read(2), dirtied, and rewritten with write(2), requiring an lseek(2). Since no space allocation is done, and the I/O is well-localized, this should test the effectiveness of the filesystem cache and the speed of data transfer.

MooseFS setup:

3 similar Dell R730xv in 3 separate datacenters located in separate buildings, but on the same campus. The servers have dual Xeon 2680v4 (servers 1 and 2) or 2640v4 (server 3) and 256 GB RAM each. Each server has 24 x 10K RPM 2.5" SAS drives. These were formatted as individual ZFS pools and a file system was created on each drive:

zfs create -o logbias=throughput -o atime=off -o xattr=sa mfspoolXX/mfsfs   (XX: 00 to 23)

compression=lz4 was set on each pool, so it is already inherited by any file system created under it.

All servers are connected to the network via 10 Gbps (dual links, LACP). The datacenters are interconnected at least at 40 Gbps to the campus network. Server 3 is the mfsmaster, a chunkserver, and runs the CGI server (it is in the datacenter with the best redundancies and lowest risk of failure). Servers 1 and 2 are metaloggers and chunkservers.

My test client is an Ubuntu 16.04 machine on a separate subnet, which has to traverse a subnet firewall and each server's host-based firewall. The servers have opened ports 9419-9422 to the subnet where my client lives. The client machine is connected to the network via 1 Gbit, and the building switches have at least 20 Gbps connections to the main campus network.

Highest ping RTT from client to servers:

Server1 -> rtt min/avg/max/mdev = 1.220/1.318/1.409/0.081 ms
Server2 -> rtt min/avg/max/mdev = 1.214/1.357/1.420/0.064 ms
Server3 -> rtt min/avg/max/mdev = 1.399/1.458/1.606/0.072 ms

I have configured MooseFS with goal=3. My bonnie++ results look like this (just one run; I'll be happy to post the other two on request):

Version 1.97            ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine            Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ae410001_ubuntu     32G    48  40 113920  19  1346   0  2665  99 112490   3  2531  28
Latency                  197ms    29525us    1606ms    8509us    1550ms     108ms

                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min            /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
ae410001_ubuntu_    16    954   5 +++++ +++  1886   6   934   5  1933   4  1000   4
Latency                 17502us    1771us   40616us   24583us   10711us    8372us

The 3 runs were done with this command:

time bonnie++ -d /mnt/mfsserver/home/dijuremo -r 16G -s 32G -m ae410001_ubuntu_cliene_vlan -q -x3 -u dijuremo

which took:

real    1283m6.528s
user    0m39.649s
sys     9m29.547s

Any other real-life scenarios (copying files with cp, scp and rsync, duplicating files) have been just fine: I pretty much hit the gigabit network bandwidth limitation of the client computer.

I am wondering why the rewrite tests in bonnie++ are so bad in my setup (two orders of magnitude lower). Is there something that I may be overlooking in my configuration? I already tried one run with another server connected to the same subnet as the MooseFS servers to skip the subnet firewalls, but the results are pretty similar. That other server has a gigabit network connection and the reads and writes are around the maximum (~112 MB/s) allowed by a gigabit network card, but the rewrite part of the test is still around the 1 MB/s mark.

Sorry for the long e-mail, but I hope I have provided enough information to describe the setup.

Thanks,
Diego
From: Marco M. <mar...@gm...> - 2018-05-25 12:18:29
|
On 05/25/2018 03:10 AM, Alexander AKHOBADZE wrote: > > Hi All! > > Is it true? I'm asking about "MooseFS Pro also includes a Windows client." > > If yes where can I read more about it? MooseFS Pro is not free. I have to pay to get it. -- Marco > > > > On 20.05.2018 16:08, Marin Bernard wrote: >>>> This is very interesting >>>> Any official and detailed docs about the HA feature? >>>> >>>> Other than this, without making any flame, which are the >>>> differences >>>> between MFS4 and LizardFS? >>>> >>> Hi again, >>> >>> I've been testing both MooseFS 3.0.x and LizardFS 3.1x in parallel >>> for >>> a few weeks now. Here are the main differences I found while using >>> them. I think most of them will still be relevant with MooseFS 4.0. >>> >>> * High availability >>> In theory, LizardFS provides master high-availability with >>> _shadow_ instances. The reality is less glorious, as the piece of >>> software actually implementing master autopromotion (based on uraft) >>> is >>> still proprietary. It is expected to be GPL'd, yet nobody knows when. >>> So as of now, if you need HA with LizardFS, you have to write your >>> own >>> set of scripts and use a 3rd party cluster manager such as corosync. >>> >>> * POSIX ACLs >>> Using POSIX ACLs with LizardFS requires a recent Linux Kernel (4.9+), >>> because a version of FUSE with ACL support is needed. This means ACLs >>> are unusable with most LTS distros, whose kernels are too old. >>> >>> With MooseFS, ACLs do work even with older kernels; maybe because >>> they >>> are implemented at the master level and the client does not even try >>> to >>> enforce them? >>> >>> * FreeBSD support >>> According to the LizardFS team, all components do compile on FreeBSD. >>> They do not provide a package repository, though, nor did they >>> succeed >>> in submitting LizardFS to the FreeBSD ports tree (bug #225489 is >>> still >>> open on phabricator). >>> >>> * Storage classes >>> Erasure coding is supported in LizardFS, and I had no special issue >>> with it. So far, it works as expected. >>> >>> The equivalent of MooseFS storage classes in LizardFS are _custom >>> goals_. While MooseFS storage classes may be dealt with >>> interactively, >>> LizardFS goals are statically defined in a dedicated config file. >>> MooseFS storage classes allow the use of different label expressions >>> at >>> each step of a chunk lifecycle (different labels for new, kept and >>> archived chunks). LizardFS has no equivalent. >>> >>> One application of MooseFS storage classes is to transparently delay >>> the geo-replication of a chunk for a given amount of time, to lower >>> the >>> latency of client I/O operations. As far as I know, it is not >>> possible >>> to do the same with LizardFS. >>> >>> * NFS support >>> LizardFS supports NFSv4 ACL. It may also be used with the NFS Ganesha >>> server to export directories directly through user-space NFS. I did >>> not >>> test this feature myself. According to several people, the feature, >>> which is rather young, does work but performs poorly. Ganesha on top >>> of >>> LizardFS is a multi-tier setup with a lot of moving parts. I think it >>> will take some time for it to reach production quality, if ever. >>> >>> In theory, Ganesha is compatible with kerberized NFS, which would be >>> far more secure a solution than the current mfsmount client, enabling >>> its use in public/hostile environments. I don't know if MooseFS 4.0 >>> has >>> improved on this matter. 
>>> >>> * Tape server >>> LizardFS includes a tape server daemon for tape archiving. That's >>> another way to implement some kind of chunk lifecycle without storage >>> classes. >>> >>> * IO limits >>> Lizardfs includes a new config file dedicated to IO limits. It allows >>> to assign IO limits to cgroups. The LFS client negotiates its >>> bandwidth >>> limit with the master is leased a reserved bandwidth for a given >>> amount >>> of time. The big limitation of this feature is that the reserved >>> bandwidth may not be shared with another client while the original >>> one >>> is not using it. In that case, the reserved bandwidth is simply lost. >>> >>> * Windows client >>> The paid version of LizardFS includes a native Windows client. I >>> think >>> it is built upon some kind of fsal à la Dokan. The client allows to >>> map >>> a LizardFS export to a drive letter. The client supports Windows ACL >>> (probably stored as NFSv4 ACL). >>> >>> * Removed features >>> LizardFS removed chunkserver maintenance mode and authentication code >>> (AUTH_CODE). Several tabs from the Web UI are also gone, including >>> the >>> one showing quotas. The original CLI tools were replaced by their own >>> versions, which I find harder to use (no more tables, and very >>> verbose >>> output). >>> >>> >>> >>> I've been using MooseFS for several years and never had any problem >>> with it, even in very awkward situations. My feeling is that it is >>> really a rock-solid, battle-tested product. >>> >>> I gave LizardFS a try, mainly for erasure coding and high- >>> availability. >>> While the former worked as expected, the latter turned out to be a >>> myth: the free version of LizardFS does not provide more HA than >>> MooseFS CE: in both cases, building a HA solution requires writing >>> custom scripts and relying on a cluster managed such as corosync. I >>> see >>> no added value in using LizardFS for HA. >>> >>> On all other aspects, LizardFS does the same or worse than MooseFS. I >>> found performance to be roughly equivalent between the two (provided >>> you disable fsync on LizardFS chunkservers, where it is enabled by >>> default). Both solutions are still similar in many aspects, yet >>> LizardFS is clouded by a few negative points: ACLs are hardly usable, >>> custom goals are less powerful than storage classes and less >>> convenient >>> for geo-replication, FreeBSD support is inexistent, CLI tools are >>> less >>> efficient, and native NFS support is too young to be really usable. >>> >>> After a few months, I came to the conclusion than migrating to >>> LizardFS >>> was not worth the single erasure coding feature, especially now that >>> MooseFS 4.0 CE with EC is officially announced. I'd rather buy a few >>> more drives and cope with standard copies for a while than ditching >>> MooseFS reliability for LizardFS. >>> >>> Hope it helps, >>> >>> Marin >> A few corrections: >> >> 1. MooseFS Pro also includes a Windows client. >> >> 2. LizardFS did not "remove" tabs from the web UI: these tabs were >> added by MooseFS after LizardFS had forked the code base. >> > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
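For readers unfamiliar with the storage classes mentioned in this comparison: in MooseFS 3.0+ they are managed with mfsscadmin and applied to directories with mfssetsclass. The sketch below is illustrative only: the class name, labels and delay are invented, and the exact flag spelling and label syntax should be checked against the mfsscadmin man page.

```sh
# Hypothetical class: new and kept chunks on servers labelled A and B,
# copies moved to servers labelled C after a 7-day archive delay.
mfsscadmin create -C A,B -K A,B -A C,C -d 7 geo_delayed
mfssetsclass geo_delayed /mnt/mfs/projects
```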
From: <li...@ol...> - 2018-05-25 07:26:16
|
It seems so : https://moosefs.com/blog/moosefs-and-high-performance-computing-isc-2017-exhibition/ Marin. De : Alexander AKHOBADZE Envoyé le :vendredi 25 mai 2018 09:10 À : MooseFS-Users Objet :Re: [MooseFS-Users] MooseFS 4.0 and new features Hi All! Is it true? I'm asking about "MooseFS Pro also includes a Windows client." If yes where can I read more about it? On 20.05.2018 16:08, Marin Bernard wrote: This is very interesting Any official and detailed docs about the HA feature? Other than this, without making any flame, which are the differences between MFS4 and LizardFS? Hi again, I've been testing both MooseFS 3.0.x and LizardFS 3.1x in parallel for a few weeks now. Here are the main differences I found while using them. I think most of them will still be relevant with MooseFS 4.0. * High availability In theory, LizardFS provides master high-availability with _shadow_ instances. The reality is less glorious, as the piece of software actually implementing master autopromotion (based on uraft) is still proprietary. It is expected to be GPL'd, yet nobody knows when. So as of now, if you need HA with LizardFS, you have to write your own set of scripts and use a 3rd party cluster manager such as corosync. * POSIX ACLs Using POSIX ACLs with LizardFS requires a recent Linux Kernel (4.9+), because a version of FUSE with ACL support is needed. This means ACLs are unusable with most LTS distros, whose kernels are too old. With MooseFS, ACLs do work even with older kernels; maybe because they are implemented at the master level and the client does not even try to enforce them? * FreeBSD support According to the LizardFS team, all components do compile on FreeBSD. They do not provide a package repository, though, nor did they succeed in submitting LizardFS to the FreeBSD ports tree (bug #225489 is still open on phabricator). * Storage classes Erasure coding is supported in LizardFS, and I had no special issue with it. So far, it works as expected. The equivalent of MooseFS storage classes in LizardFS are _custom goals_. While MooseFS storage classes may be dealt with interactively, LizardFS goals are statically defined in a dedicated config file. MooseFS storage classes allow the use of different label expressions at each step of a chunk lifecycle (different labels for new, kept and archived chunks). LizardFS has no equivalent. One application of MooseFS storage classes is to transparently delay the geo-replication of a chunk for a given amount of time, to lower the latency of client I/O operations. As far as I know, it is not possible to do the same with LizardFS. * NFS support LizardFS supports NFSv4 ACL. It may also be used with the NFS Ganesha server to export directories directly through user-space NFS. I did not test this feature myself. According to several people, the feature, which is rather young, does work but performs poorly. Ganesha on top of LizardFS is a multi-tier setup with a lot of moving parts. I think it will take some time for it to reach production quality, if ever. In theory, Ganesha is compatible with kerberized NFS, which would be far more secure a solution than the current mfsmount client, enabling its use in public/hostile environments. I don't know if MooseFS 4.0 has improved on this matter. * Tape server LizardFS includes a tape server daemon for tape archiving. That's another way to implement some kind of chunk lifecycle without storage classes. * IO limits Lizardfs includes a new config file dedicated to IO limits. It allows to assign IO limits to cgroups. 
The LFS client negotiates its bandwidth limit with the master is leased a reserved bandwidth for a given amount of time. The big limitation of this feature is that the reserved bandwidth may not be shared with another client while the original one is not using it. In that case, the reserved bandwidth is simply lost. * Windows client The paid version of LizardFS includes a native Windows client. I think it is built upon some kind of fsal à la Dokan. The client allows to map a LizardFS export to a drive letter. The client supports Windows ACL (probably stored as NFSv4 ACL). * Removed features LizardFS removed chunkserver maintenance mode and authentication code (AUTH_CODE). Several tabs from the Web UI are also gone, including the one showing quotas. The original CLI tools were replaced by their own versions, which I find harder to use (no more tables, and very verbose output). I've been using MooseFS for several years and never had any problem with it, even in very awkward situations. My feeling is that it is really a rock-solid, battle-tested product. I gave LizardFS a try, mainly for erasure coding and high- availability. While the former worked as expected, the latter turned out to be a myth: the free version of LizardFS does not provide more HA than MooseFS CE: in both cases, building a HA solution requires writing custom scripts and relying on a cluster managed such as corosync. I see no added value in using LizardFS for HA. On all other aspects, LizardFS does the same or worse than MooseFS. I found performance to be roughly equivalent between the two (provided you disable fsync on LizardFS chunkservers, where it is enabled by default). Both solutions are still similar in many aspects, yet LizardFS is clouded by a few negative points: ACLs are hardly usable, custom goals are less powerful than storage classes and less convenient for geo-replication, FreeBSD support is inexistent, CLI tools are less efficient, and native NFS support is too young to be really usable. After a few months, I came to the conclusion than migrating to LizardFS was not worth the single erasure coding feature, especially now that MooseFS 4.0 CE with EC is officially announced. I'd rather buy a few more drives and cope with standard copies for a while than ditching MooseFS reliability for LizardFS. Hope it helps, Marin A few corrections: 1. MooseFS Pro also includes a Windows client. 2. LizardFS did not "remove" tabs from the web UI: these tabs were added by MooseFS after LizardFS had forked the code base. |
From: Alexander A. <ba...@ya...> - 2018-05-25 07:10:44
|
Hi All! Is it true? I'm asking about "MooseFS Pro also includes a Windows client." If yes where can I read more about it? On 20.05.2018 16:08, Marin Bernard wrote: >>> This is very interesting >>> Any official and detailed docs about the HA feature? >>> >>> Other than this, without making any flame, which are the >>> differences >>> between MFS4 and LizardFS? >>> >> Hi again, >> >> I've been testing both MooseFS 3.0.x and LizardFS 3.1x in parallel >> for >> a few weeks now. Here are the main differences I found while using >> them. I think most of them will still be relevant with MooseFS 4.0. >> >> * High availability >> In theory, LizardFS provides master high-availability with >> _shadow_ instances. The reality is less glorious, as the piece of >> software actually implementing master autopromotion (based on uraft) >> is >> still proprietary. It is expected to be GPL'd, yet nobody knows when. >> So as of now, if you need HA with LizardFS, you have to write your >> own >> set of scripts and use a 3rd party cluster manager such as corosync. >> >> * POSIX ACLs >> Using POSIX ACLs with LizardFS requires a recent Linux Kernel (4.9+), >> because a version of FUSE with ACL support is needed. This means ACLs >> are unusable with most LTS distros, whose kernels are too old. >> >> With MooseFS, ACLs do work even with older kernels; maybe because >> they >> are implemented at the master level and the client does not even try >> to >> enforce them? >> >> * FreeBSD support >> According to the LizardFS team, all components do compile on FreeBSD. >> They do not provide a package repository, though, nor did they >> succeed >> in submitting LizardFS to the FreeBSD ports tree (bug #225489 is >> still >> open on phabricator). >> >> * Storage classes >> Erasure coding is supported in LizardFS, and I had no special issue >> with it. So far, it works as expected. >> >> The equivalent of MooseFS storage classes in LizardFS are _custom >> goals_. While MooseFS storage classes may be dealt with >> interactively, >> LizardFS goals are statically defined in a dedicated config file. >> MooseFS storage classes allow the use of different label expressions >> at >> each step of a chunk lifecycle (different labels for new, kept and >> archived chunks). LizardFS has no equivalent. >> >> One application of MooseFS storage classes is to transparently delay >> the geo-replication of a chunk for a given amount of time, to lower >> the >> latency of client I/O operations. As far as I know, it is not >> possible >> to do the same with LizardFS. >> >> * NFS support >> LizardFS supports NFSv4 ACL. It may also be used with the NFS Ganesha >> server to export directories directly through user-space NFS. I did >> not >> test this feature myself. According to several people, the feature, >> which is rather young, does work but performs poorly. Ganesha on top >> of >> LizardFS is a multi-tier setup with a lot of moving parts. I think it >> will take some time for it to reach production quality, if ever. >> >> In theory, Ganesha is compatible with kerberized NFS, which would be >> far more secure a solution than the current mfsmount client, enabling >> its use in public/hostile environments. I don't know if MooseFS 4.0 >> has >> improved on this matter. >> >> * Tape server >> LizardFS includes a tape server daemon for tape archiving. That's >> another way to implement some kind of chunk lifecycle without storage >> classes. >> >> * IO limits >> Lizardfs includes a new config file dedicated to IO limits. 
It allows >> to assign IO limits to cgroups. The LFS client negotiates its >> bandwidth >> limit with the master is leased a reserved bandwidth for a given >> amount >> of time. The big limitation of this feature is that the reserved >> bandwidth may not be shared with another client while the original >> one >> is not using it. In that case, the reserved bandwidth is simply lost. >> >> * Windows client >> The paid version of LizardFS includes a native Windows client. I >> think >> it is built upon some kind of fsal à la Dokan. The client allows to >> map >> a LizardFS export to a drive letter. The client supports Windows ACL >> (probably stored as NFSv4 ACL). >> >> * Removed features >> LizardFS removed chunkserver maintenance mode and authentication code >> (AUTH_CODE). Several tabs from the Web UI are also gone, including >> the >> one showing quotas. The original CLI tools were replaced by their own >> versions, which I find harder to use (no more tables, and very >> verbose >> output). >> >> >> >> I've been using MooseFS for several years and never had any problem >> with it, even in very awkward situations. My feeling is that it is >> really a rock-solid, battle-tested product. >> >> I gave LizardFS a try, mainly for erasure coding and high- >> availability. >> While the former worked as expected, the latter turned out to be a >> myth: the free version of LizardFS does not provide more HA than >> MooseFS CE: in both cases, building a HA solution requires writing >> custom scripts and relying on a cluster managed such as corosync. I >> see >> no added value in using LizardFS for HA. >> >> On all other aspects, LizardFS does the same or worse than MooseFS. I >> found performance to be roughly equivalent between the two (provided >> you disable fsync on LizardFS chunkservers, where it is enabled by >> default). Both solutions are still similar in many aspects, yet >> LizardFS is clouded by a few negative points: ACLs are hardly usable, >> custom goals are less powerful than storage classes and less >> convenient >> for geo-replication, FreeBSD support is inexistent, CLI tools are >> less >> efficient, and native NFS support is too young to be really usable. >> >> After a few months, I came to the conclusion than migrating to >> LizardFS >> was not worth the single erasure coding feature, especially now that >> MooseFS 4.0 CE with EC is officially announced. I'd rather buy a few >> more drives and cope with standard copies for a while than ditching >> MooseFS reliability for LizardFS. >> >> Hope it helps, >> >> Marin > A few corrections: > > 1. MooseFS Pro also includes a Windows client. > > 2. LizardFS did not "remove" tabs from the web UI: these tabs were > added by MooseFS after LizardFS had forked the code base. > |
From: WK <wk...@bn...> - 2018-05-22 22:27:23
On 5/21/2018 10:46 PM, Jakub Kruszona-Zawadzki wrote:
>
> As I remember, something wrong had happened when somebody used MFS 1.6 - not MFS 3.x. If anybody knows any scenario (other than intentionally deleting data) leading to data corruption in MFS (3.x or 4.x - not 1.x), then let me know.
>

Yes, I was the one who reported the issue where a tech re-used the IP and we lost data. It was years ago and we were still using 1.6.x.

We have not seen the issue on 3.x, but then again we are now more careful about numbering as well <grin>

-wk
From: Marin B. <li...@ol...> - 2018-05-22 19:56:34
> On 05/22/2018 02:36 PM, Gandalf Corvotempesta wrote:
> > On Tue, 22 May 2018 at 19:28, Marin Bernard <lists@olivarim.com> wrote:
> > > So does Proxmox VE.
> >
> > Not all servers are using Proxmox.
> > Proxmox repackages ZFS on every release, because they support it.
> > If you have to maintain multiple different systems, using DKMS is more prone
> > to error than without. A small kernel upgrade could break everything.
>
> Yes. You may use Proxmox, Ubuntu, FreeBSD or even build your own kernel.
>
> > > That's a myth. ZFS never required ECC RAM, and I run it on boxes with
> > > as little as 1GB RAM. Every bit of it can be tuned, including the size
> > > of the ARC.
> >
> > It's not a myth, it's the truth. ECC RAM is not required to run ZFS, but you
> > won't be sure that what you are writing to disk (and checksumming) is exactly
> > the same as what you received.
> >
> > In other words, without ECC RAM you could experience in-memory data
> > corruption and then write corrupted data (with a proper checksum), so ZFS
> > will reply with corrupted data.
> >
> > ECC is not mandatory, but highly suggested.
> > Without ECC you'll fix the bit rot, but you are still subject to in-memory
> > corruption, so the original issue (data corruption) is still unfixed, and ZFS
> > can't do anything if data is corrupted before it reaches ZFS.

Yes, I know that. However, you seemed to imply that ECC was a requirement. I'm sorry if I misunderstood. Of course, ECC memory is a must-have; I see no reason for not using it.

> > > Checksumming and duplication (ditto blocks) of pool metadata are NOT
> > > provided by the master. This is a much appreciated feature when you
> > > come from an XFS background, where a single unrecoverable read can crash
> > > an entire filesystem. I've been there before; never again!
> >
> > Which pool metadata are you referring to?

All. ZFS stores double or triple copies of each metadata block (it depends on the type of the metadata). Corrupted metadata blocks *will* be corrected, even in single-disk setups.

> > Anyway, I hate XFS :-) I had multiple failures...
>
> > > MooseFS background verification may take months to check the whole
> > > dataset.
> >
> > True.
>
> > > ZFS does scrub a whole chunkserver within a few hours, with
> > > adaptive, tunable throughput to minimize the impact on the cluster.
> >
> > It's not the same.
> > When ZFS detects a corruption, it does nothing without a RAID; it simply
> > discards the data during a read. But if you are reading a file, MooseFS will
> > check the checksum automatically and do the same.

Actually, ZFS keeps a list of damaged files. So in case of damaged blocks, you may:

* Stop the chunkserver
* List and remove the damaged chunk files
* Restart the chunkserver

The mfschunkserver daemon will rescan the chunk files, and the master will soon be aware that a chunk is missing and trigger a replication. This is easy to automate with a simple script.

> Assuming that you have a minimum of 2 copies in MooseFS, it will read, detect,
> read from the second copy, and heal the first copy.
> So, I don't know what you mean exactly by "does the same", but
> it is not the *same*.
>
> > Anyway, even if you scrub the whole ZFS pool, you won't get any advantage:
> > ZFS is unable to recover by itself (without RAID) and MooseFS is still
> > unaware of the corruption.
>
> MooseFS will be *aware* of the corruption during the read and will self-heal,
> as I explained above. (Or during the checksum checking (native scrub) loop,
> whichever comes first.)
>
> > OK, chunk1 is corrupted, ZFS detected it during a scrub. And now?
> > ZFS doesn't have any replica to rebuild from.
> > MooseFS is unaware of this because its native scrub takes months and
> > no one is reading that file from a client (forcing the checksum verification).
>
> You seem to be making these constant claims about "native scrub taking months",
> but I believe it was explained in earlier emails that this will depend on your
> hardware configuration.

AFAIK, you can't scrub faster than 1 chunk/sec per chunkserver. If you own 12 servers, they'll scan 12 chunks/sec = 720 chunks/min = 43,200 chunks/hour = 1,036,800 chunks/day. If you have 50,000,000 chunks, it would take roughly 50 days to have them all checked at this rate, which would probably put the cluster on its knees. If you scan at a more reasonable rate of one chunk every 3 seconds per chunkserver, it rises to about 150 days. So that's not a claim; that's a fact.

> I believe there was another email which basically said
> this "native scrub speed" was much improved in version 4.
> So I think it is fair to say that you should stop repeating this
> "native scrub takes months" claim,
> or if you are not going to stop repeating it, at least put some
> qualifiers around it.
> Or download v4, and see if the speed improved...

I do know that v4 improves on this point, but it is not yet production ready. I won't be mentioning it until it is released.