From: Neddy, N. N. <na...@nd...> - 2015-04-07 15:54:26

Hi,

I got this error message when copying data from a Novell network drive to an SMB drive (which is a MooseFS client). The topology:

    PC ----(LAN)---- SAMBA ---- MooseFS cluster
     |
    Novell Netware

The PC maps a few network drives from both the Novell and the Samba servers; it runs Windows 7 SP1 x64. I opened a Novell drive, selected everything, and used "Send To" to copy it to the Samba drive; a moment later this message appeared: "0x8007003b: An unexpected network error occurred." Google led to the Microsoft page https://support.microsoft.com/en-us/kb/2649905, but the applied hotfix didn't resolve the error.

It doesn't look like MooseFS's fault, but I'd appreciate your comments on this case.

Thanks,
~Nedd
From: Eduardo K. <edu...@do...> - 2015-04-06 17:43:48

Krzysztof Kielak, thank you!!

Best Regards,
Eduardo Kellenberger

On Monday, 6 April 2015 19:33:17, Krzysztof Kielak wrote:
> Dear Eduardo,
>
> We'll definitely look at this issue tomorrow. It should be possible to
> change those parameters in MooseFS version 2.0.
>
> Best Regards,
> Krzysztof Kielak
> Director of Operations and Customer Support
> Mobile: +48 601 476 440
From: Krzysztof K. <krz...@mo...> - 2015-04-06 17:33:22

Dear Eduardo,

We'll definitely look at this issue tomorrow. It should be possible to change those parameters in MooseFS version 2.0.

Best Regards,
Krzysztof Kielak
Director of Operations and Customer Support
Mobile: +48 601 476 440

-----Original Message-----
From: Eduardo Kellenberger [mailto:edu...@do...]
Sent: Monday, April 6, 2015 2:02 PM
To: Aleksander Wieliczko
Cc: moo...@li...
Subject: Re: [MooseFS-Users] CHUNKS_SOFT_DEL_LIMIT and CHUNKS_HARD_DEL_LIMIT on moosefs ce

> Hi, thanks for your answer.
>
> The number of deletions is variable. What I need is for no files to be
> deleted between 8am and 11pm (or for as few as possible to be removed, to
> prevent overloading the chunkservers), and for deletion to ramp up
> between 11pm and 8am.
>
> I could control this with version 1.6, but with 2.0 CE I can't.
>
> Best regards,
> Eduardo Kellenberger
From: Steve W. <st...@pu...> - 2015-04-06 13:24:09

Oh, okay... sorry, I missed that you were already doing that.

Steve

On 04/06/2015 09:20 AM, Eduardo Kellenberger wrote:
> Steve,
>
> That's exactly what I do, but I see no difference between the two configs.
>
> As indicated earlier, this worked correctly in version 1.6.
>
> Best regards,
> Eduardo
From: Eduardo K. <edu...@do...> - 2015-04-06 13:17:27

Steve,

That's exactly what I do, but I see no difference between the two configs.

As indicated earlier, this worked correctly in version 1.6.

Best regards,
Eduardo

On Monday, 6 April 2015 12:21:34, moo...@li... wrote:
>> Perhaps you could have a cron job that runs every day at 8 a.m. and
>> another one at 11 p.m. The 8 a.m. cron job would create a symbolic link
>> from /etc/mfs/mfsmaster.cfg to /etc/mfs/mfsmaster-day.cfg and then send
>> a signal to the mfsmaster to reload its configuration. Similarly, the
>> 11 p.m. cron job would create a symbolic link from
>> /etc/mfs/mfsmaster.cfg to /etc/mfs/mfsmaster-night.cfg and send a
>> reload signal to mfsmaster. The two config files would, of course,
>> contain the two different settings that you want: one for the daytime
>> hours and another for the nighttime hours.
>>
>> Steve
From: Steve W. <st...@pu...> - 2015-04-06 12:21:33

Perhaps you could have a cron job that runs every day at 8 a.m. and another one at 11 p.m. The 8 a.m. cron job would create a symbolic link from /etc/mfs/mfsmaster.cfg to /etc/mfs/mfsmaster-day.cfg and then send a signal to the mfsmaster to reload its configuration. Similarly, the 11 p.m. cron job would create a symbolic link from /etc/mfs/mfsmaster.cfg to /etc/mfs/mfsmaster-night.cfg and send a reload signal to mfsmaster. The two config files would, of course, contain the two different settings that you want: one for the daytime hours and another for the nighttime hours.

Steve

On 04/06/2015 08:01 AM, Eduardo Kellenberger wrote:
> Hi, thanks for your answer.
>
> The number of deletions is variable. What I need is for no files to be
> deleted between 8am and 11pm (or for as few as possible to be removed, to
> prevent overloading the chunkservers), and for deletion to ramp up
> between 11pm and 8am.
>
> I could control this with version 1.6, but with 2.0 CE I can't.
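A minimal sketch of that approach as system crontab entries. The config file names follow Steve's description, while the "mfsmaster reload" action is an assumption about how the reload signal is delivered (sending SIGHUP to the running mfsmaster would be an alternative):

    # /etc/crontab -- switch the mfsmaster config between day and night
    # profiles (sketch; assumes mfsmaster-day.cfg and mfsmaster-night.cfg
    # exist and differ only in the CHUNKS_*_DEL_LIMIT settings).

    # 8 a.m.: point mfsmaster.cfg at the daytime settings and reload.
    0 8  * * * root ln -sf /etc/mfs/mfsmaster-day.cfg /etc/mfs/mfsmaster.cfg && mfsmaster reload

    # 11 p.m.: point mfsmaster.cfg at the nighttime settings and reload.
    0 23 * * * root ln -sf /etc/mfs/mfsmaster-night.cfg /etc/mfs/mfsmaster.cfg && mfsmaster reload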
From: Eduardo K. <edu...@do...> - 2015-04-06 11:58:12

Hi, thanks for your answer.

The number of deletions is variable. What I need is for no files to be deleted between 8am and 11pm (or for as few as possible to be removed, to prevent overloading the chunkservers), and for deletion to ramp up between 11pm and 8am.

I could control this with version 1.6, but with 2.0 CE I can't.

Best regards,
Eduardo Kellenberger

On Thursday, 2 April 2015 10:33:34, Aleksander Wieliczko wrote:
> Hello,
> First of all I would like to explain what these parameters mean and how
> they work:
>
> CHUNKS_SOFT_DEL_LIMIT - soft maximum number of chunks being
> simultaneously deleted on one chunkserver (default is 10)
> CHUNKS_HARD_DEL_LIMIT - hard maximum number of chunks being
> simultaneously deleted on one chunkserver (default is 25)
>
> MooseFS mainly uses the soft delete limit. If the system is able to
> remove all "to delete" chunks in one loop run, the deletion limit is not
> changed. But if the number of chunks to delete increases after a loop
> run, the deletion limit is increased by a factor of 1.5, though never to
> a value higher than CHUNKS_HARD_DEL_LIMIT.
>
> This mechanism was implemented to prevent the MooseFS cluster from
> freezing because of too many delete operations. So if you have a small
> number of delete operations in your system, the hard limit will never
> be reached.
>
> How many files are you deleting in your system?
From: Aleksander W. <ale...@mo...> - 2015-04-03 12:36:09

Hi,

Yes. Of course, everything depends on your system's characteristics, but if you have a lot of replications, other operations can be starved by the internal chunkserver rebalancing process. If one chunkserver is overloaded by the internal replication process, the whole cluster will sustain a lower number of I/O operations.

We recommend using the default setting for the HDD_REBALANCE_UTILIZATION option.

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 04/03/2015 10:39 AM, Neddy, NH. Nam wrote:
> I increased the HDD_REBALANCE_UTILIZATION value and the performance
> improved significantly. So, is there any drawback if this value is too
> high?
>
> Thanks,
From: Krzysztof K. <krz...@mo...> - 2015-04-03 11:10:45

Dear MooseFS Community,

A new version of MooseFS is coming to you this Easter, with some exciting new features and specially built packages for "MooseFS 3.0 Easter Edition for Raspberry Pi 2" running on Raspbian Wheezy :)

Among the many updates and fixes that are laboriously listed in the source packages' change log, here is a quick look at the most important changes and new features:

- New functionality for storage tiering: the possibility to assign labels to chunkservers and use those labels in mfssetgoal expressions when defining policies for storing, keeping, and archiving data.
- A performance improvement for small-file random I/O due to new semantics for the fsync() system call, which now by default behaves the same as on any local Linux/FreeBSD filesystem.
- Support for global locks compatible with POSIX locks and the flock() advisory lock mechanism when using fuse 2.9+.

A simple test of unpacking the complete source tree of the latest Linux kernel (more than 50k objects with a cumulative size of more than 630 MB when unpacked) shows a 100% speed improvement when going from MooseFS version 2.0.x to version 3.0.x.

Be sure to check the documentation on the new storage tiering feature and the installation instructions at http://moosefs.com.

Best Regards,
Krzysztof Kielak
Director of Operations and Customer Support
Mobile: +48 601 476 440
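As a sketch of the chunkserver side of the tiering feature: in MooseFS 3.0 each chunkserver can advertise labels through its config file. The option below is believed to be the 3.0 mfschunkserver.cfg convention, but treat it as an assumption, and check the official documentation for the exact label-expression syntax accepted by mfssetgoal:

    # /etc/mfs/mfschunkserver.cfg (MooseFS 3.0+, assumed syntax)
    # Advertise this chunkserver with label "S" (e.g. SSD-backed), so that
    # goal expressions can place selected copies on it.
    LABELS = S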
From: Neddy, N. N. <na...@nd...> - 2015-04-03 08:39:42

I increased the HDD_REBALANCE_UTILIZATION value and the performance improved significantly. So, is there any drawback if this value is too high?

Thanks,

On Fri, Apr 3, 2015 at 3:16 PM, Aleksander Wieliczko <ale...@mo...> wrote:
> Hello,
>
> Starting from MooseFS version 2.0.x there is a new mechanism for internal
> replication within a chunkserver that can be controlled using the
> HDD_REBALANCE_UTILIZATION parameter in the mfschunkserver.cfg file.
>
> So you have two options:
> 1. You can set it to the minimal value of 1 to decrease the impact of
> internal rebalancing on the I/O performance of the chunkserver:
> HDD_REBALANCE_UTILIZATION = 1
>
> 2. You can set it to a higher number during off-peak hours so the
> rebalancing between the chunkserver's disks finishes faster in some
> maintenance window.
>
> Best regards,
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com
From: Aleksander W. <ale...@mo...> - 2015-04-03 08:16:46

Hello,

Starting from MooseFS version 2.0.x there is a new mechanism for internal replication within a chunkserver that can be controlled using the HDD_REBALANCE_UTILIZATION parameter in the mfschunkserver.cfg file.

So you have two options:

1. You can set it to the minimal value of 1 to decrease the impact of internal rebalancing on the I/O performance of the chunkserver:
HDD_REBALANCE_UTILIZATION = 1

2. You can set it to a higher number during off-peak hours so the rebalancing between the chunkserver's disks finishes faster in some maintenance window.

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 04/03/2015 09:45 AM, Davies Liu wrote:
> You can have a script that copies some files to the new disk and creates
> a symlink for each copied file.
>
> Once the disks are balanced, you can stop the chunkserver, remove all
> the symlinks, then start the chunkserver.
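A sketch of the two settings described above, as they would appear in mfschunkserver.cfg; the parameter name comes from the thread, while the off-peak value is purely illustrative:

    # /etc/mfs/mfschunkserver.cfg
    # Option 1: minimise the impact of internal rebalancing on client I/O.
    HDD_REBALANCE_UTILIZATION = 1

    # Option 2: during an off-peak maintenance window, let rebalancing use
    # more disk bandwidth so it finishes sooner (value is illustrative).
    # HDD_REBALANCE_UTILIZATION = 50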
From: Davies L. <dav...@gm...> - 2015-04-03 07:45:51

You can have a script that copies some files to the new disk and creates a symlink for each copied file.

Once the disks are balanced, you can stop the chunkserver, remove all the symlinks, then start the chunkserver.

On Thu, Apr 2, 2015 at 11:30 PM, Neddy, NH. Nam <na...@nd...> wrote:
> Hi,
>
> One disk was recently added to a chunkserver (2.0.60 CE), but the
> performance seems poor compared to adding a completely new chunkserver.
>
> All the disks have the same capacity but are different brands. The
> previously added chunkserver performed at the write-speed limit of SATA2
> (avg 60 MB/s), but the current server with the new disk added has only
> been writing at avg 25 MB/s (as seen in the server graphs).
>
> Does anybody have any idea about this?
> Thanks,
> ~Neddy

--
- Davies
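A rough outline of the migration script Davies describes. The mount points and chunk paths below are hypothetical (actual paths come from mfshdd.cfg, with chunks stored in hex-named subdirectories), and moving chunks under a live chunkserver carries risk, so treat this as a sketch of the idea only:

    #!/bin/sh
    # Sketch: move some chunk files from a full disk to the new disk,
    # leaving symlinks behind so the chunkserver still finds them at the
    # old paths. OLD and NEW are hypothetical entries from mfshdd.cfg.
    OLD=/mnt/disk1
    NEW=/mnt/disk5

    for f in "$OLD"/00/chunk_*.mfs; do
        cp -p "$f" "$NEW/00/" && ln -sf "$NEW/00/$(basename "$f")" "$f"
    done

    # Later, once the disks are balanced:
    #   mfschunkserver stop
    #   find "$OLD" -type l -delete    # remove all the symlinks
    #   mfschunkserver start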
From: Neddy, N. N. <na...@nd...> - 2015-04-03 06:30:08

Hi,

One disk was recently added to a chunkserver (2.0.60 CE), but the performance seems poor compared to adding a completely new chunkserver.

All the disks have the same capacity but are different brands. The previously added chunkserver performed at the write-speed limit of SATA2 (avg 60 MB/s), but the current server with the new disk added has only been writing at avg 25 MB/s (as seen in the server graphs).

Does anybody have any idea about this?

Thanks,
~Neddy
From: Steve W. <st...@pu...> - 2015-04-02 12:50:58

On 04/02/2015 02:54 AM, Aleksander Wieliczko wrote:
> Hello everyone.
> We had a similar problem with listing folders containing more than one
> million files. This problem was resolved in MooseFS 2.0.60.
> Which version of MooseFS are you using?

We're using version 2.0.50. I mentioned earlier that I thought the Ubuntu clients could list the contents okay while the RedHat clients couldn't. After some more testing today, it looks like the only computer that can actually list the contents (that I've found so far) is the one that is also running mfsmaster. I'll need to try MooseFS 2.0.60 at some point to see if it helps get around the problem.

Thanks,
Steve
From: Aleksander W. <ale...@mo...> - 2015-04-02 08:33:43

Hello,

First of all I would like to explain what these parameters mean and how they work:

CHUNKS_SOFT_DEL_LIMIT - soft maximum number of chunks being simultaneously deleted on one chunkserver (default is 10)
CHUNKS_HARD_DEL_LIMIT - hard maximum number of chunks being simultaneously deleted on one chunkserver (default is 25)

MooseFS mainly uses the soft delete limit. If the system is able to remove all "to delete" chunks in one loop run, the deletion limit is not changed. But if the number of chunks to delete increases after a loop run, the deletion limit is increased by a factor of 1.5, though never to a value higher than CHUNKS_HARD_DEL_LIMIT.

This mechanism was implemented to prevent the MooseFS cluster from freezing because of too many delete operations. So if you have a small number of delete operations in your system, the hard limit will never be reached.

How many files are you deleting in your system?

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 04/01/2015 09:48 PM, Eduardo Kellenberger wrote:
> I made changes to the CHUNKS_SOFT_DEL_LIMIT and CHUNKS_HARD_DEL_LIMIT
> parameters in the mfsmaster.cfg configuration file, but they are not
> reflected in the graphs.
>
> Does anyone have any idea about this?
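For reference, a sketch of how the two options would look in the master's config file; the parameter names and defaults are taken from the explanation above, and the path is the conventional location:

    # /etc/mfs/mfsmaster.cfg
    # Soft maximum number of chunks deleted simultaneously on one chunkserver.
    CHUNKS_SOFT_DEL_LIMIT = 10
    # Hard cap: when deletions back up, the effective limit grows by a factor
    # of 1.5 per loop, but never above this value.
    CHUNKS_HARD_DEL_LIMIT = 25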
From: Aleksander W. <ale...@mo...> - 2015-04-02 06:54:49

Hello everyone,

We had a similar problem with listing folders containing more than one million files. This problem was resolved in MooseFS 2.0.60.

Which version of MooseFS are you using?

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 04/01/2015 06:06 PM, Steve Wilson wrote:
> It turns out that my RHEL 6 clients are unable to deal with the contents
> of this directory while my Ubuntu 14.04 clients were able to access it.
> The users who created this directory are on Ubuntu workstations, but my
> backup servers run RHEL or CentOS. I'm fairly certain that I'm hitting
> some OS limit, since this directory has over 2 million entries in it.
>
> Thanks,
> Steve
From: Aleksander W. <ale...@mo...> - 2015-04-02 06:40:36

Hi,

MooseFS's "Disks" tab shows the transfer speed reported by the system, not the raw HDD speed. The OS usually uses a write-back cache to speed up writes. As a result, if there are not many writes, the MFS stats show a "fake" write speed, because it is really the speed of copying data into the kernel's write-back cache.

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 04/01/2015 12:33 PM, Neddy, NH. Nam wrote:
> My "servers" are old Intel Pentium Dual-Core boxes with SATA2 HDDs,
> but the transfer rates are enormous:
>
> https://i.imgur.com/ouN8oFs.png
>
> How did MooseFS calculate those rates?
>
> Note that all servers are connected to a 1G switch, and usage is about
> 90% of 1G when there's traffic.
From: Eduardo K. <edu...@do...> - 2015-04-01 19:45:39

I made changes to the CHUNKS_SOFT_DEL_LIMIT and CHUNKS_HARD_DEL_LIMIT parameters in the mfsmaster.cfg configuration file, but they are not reflected in the graphs.

Does anyone have any idea about this?

--
Eduardo Kellenberger
Departamento de Infraestructura Tecnológica DonWeb
Donweb.com
From: Steve W. <st...@pu...> - 2015-04-01 16:06:35

On 04/01/2015 11:12 AM, Ricardo J. Barberis wrote:
> I'm not sure, but I'd check connectivity from the clients to the
> chunkservers (i.e. firewall, routing), because creating an empty file
> only contacts mfsmaster, and maybe ls'ing an empty file does the same.
>
> Does a simple '/bin/ls' inside database work?

It turns out that my RHEL 6 clients are unable to deal with the contents of this directory while my Ubuntu 14.04 clients were able to access it. The users who created this directory are on Ubuntu workstations, but my backup servers run RHEL or CentOS. I'm fairly certain that I'm hitting some OS limit, since this directory has over 2 million entries in it.

Thanks,
Steve
From: Ricardo J. B. <ric...@do...> - 2015-04-01 15:34:40

On Wednesday, 01/04/2015, Steve Wilson wrote:
> Hi,
>
> One of our MooseFS installations has a problem with one of its
> directories. mfsdirinfo shows:
>
> [root@mckinley IPAB]# mfsdirinfo database
> database:
>  inodes: 2381944
>   directories: 1
>   files: 2381943
>  chunks: 2381942
>  length: 4863750852
>  size: 168298493952
>  realsize: 336596987904
>
> When going into the directory and attempting to list the contents, I get
> the following error from ls:
>
> ls: reading directory .: Input/output error
>
> I can create new entries in this directory, list them, and remove them:
>
> [root@mckinley x]# touch a
> [root@mckinley x]# ls -l a
> -rw-r--r-- 1 root root 0 Apr 1 10:45 a
> [root@mckinley x]# rm a
> rm: remove regular empty file `a'? y
> [root@mckinley x]#
>
> I don't see any MooseFS error messages in the logs, and the MooseFS
> filesystem check information is clean. Any suggestions?

I'm not sure, but I'd check connectivity from the clients to the chunkservers (i.e. firewall, routing), because creating an empty file only contacts mfsmaster, and maybe ls'ing an empty file does the same.

Does a simple '/bin/ls' inside database work?

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
From: Steve W. <st...@pu...> - 2015-04-01 14:48:08

Hi,

One of our MooseFS installations has a problem with one of its directories. mfsdirinfo shows:

[root@mckinley IPAB]# mfsdirinfo database
database:
 inodes: 2381944
  directories: 1
  files: 2381943
 chunks: 2381942
 length: 4863750852
 size: 168298493952
 realsize: 336596987904

When going into the directory and attempting to list the contents, I get the following error from ls:

ls: reading directory .: Input/output error

I can create new entries in this directory, list them, and remove them:

[root@mckinley x]# touch a
[root@mckinley x]# ls -l a
-rw-r--r-- 1 root root 0 Apr 1 10:45 a
[root@mckinley x]# rm a
rm: remove regular empty file `a'? y
[root@mckinley x]#

I don't see any MooseFS error messages in the logs, and the MooseFS filesystem check information is clean. Any suggestions?

Thanks!

Steve
From: Neddy, N. N. <na...@nd...> - 2015-04-01 10:33:45

My "servers" are old Intel Pentium Dual-Core boxes with SATA2 HDDs, but the transfer rates are enormous:

https://i.imgur.com/ouN8oFs.png

How did MooseFS calculate those rates?

Note that all servers are connected to a 1G switch, and usage is about 90% of 1G when there's traffic.

Best regards,
~Nam
From: Alexey S. <al...@si...> - 2015-03-31 17:09:02

How do I avoid this restart?

--
With best regards, Aleksey Silk
+7 (981) 849-12-36
skype - rootiks

-----Original Message-----
From: "Neddy, NH. Nam" <na...@nd...>
Sent: 31.03.2015 17:56
To: "Alexey Silk" <al...@si...>
Cc: "moo...@li..." <moo...@li...>
Subject: Re: [MooseFS-Users] >>: Native windows client

> Hi, I'm running a similar scenario: MooseFS ---- SAMBA ---- LAN.
>
> My procedure looks like:
> 1. Install the MooseFS cluster and get it up and running (currently on
>    Debian Wheezy).
> 2. Configure mfsexports (plan the client mount points).
> 3. Set up a Samba server, Samba users, and share folders matching the
>    previous step.
> 4. mfsmount and start the Samba service.
>
> It works, but with any small change I have to restart the Samba service
> and unmount/remount on the Windows clients. This could be awful when
> there are many users, not to mention the possible bottleneck if the
> filesystem grows and there are too many small files, as in my case.
From: Neddy, N. N. <na...@nd...> - 2015-03-31 15:25:05

Hi,

I'm running a similar scenario: MooseFS ---- SAMBA ---- LAN.

My procedure looks like:
1. Install the MooseFS cluster and get it up and running (currently on Debian Wheezy).
2. Configure mfsexports (plan the client mount points).
3. Set up a Samba server, Samba users, and share folders matching the previous step.
4. mfsmount and start the Samba service (see the sketch after this message).

It works, but with any small change I have to restart the Samba service and unmount/remount on the Windows clients. This could be awful when there are many users, not to mention the possible bottleneck if the filesystem grows and there are too many small files, as in my case.

Best regards,
~Nedd

On Tue, Mar 31, 2015 at 12:31 PM, Alexey Silk <al...@si...> wrote:
> It will be a great example of how to use Samba as a gateway.
> Please do it.
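A minimal sketch of steps 2-4 of that procedure; the mount point, share name, and smb.conf details are assumptions for illustration, not Neddy's actual configuration:

    # Mount MooseFS on the gateway host (mount point is an assumption):
    mfsmount /mnt/mfs -H mfsmaster

    # Export it over SMB -- hypothetical share in /etc/samba/smb.conf:
    #   [mfs]
    #      path = /mnt/mfs
    #      writable = yes
    #      valid users = @mfsusers

    # Start Samba (Debian Wheezy init script):
    service samba start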
From: Alexey S. <al...@si...> - 2015-03-31 06:27:24

It will be a great example of how to use Samba as a gateway. Please do it.

--
With best regards, Aleksey Silk
+7 (981) 849-12-36
skype - rootiks

-----Original Message-----
From: "Krzysztof Kielak" <krz...@mo...>
Sent: 31.03.2015 8:06
To: "web user" <web...@gm...>
Cc: "moo...@li..." <moo...@li...>
Subject: Re: [MooseFS-Users] Native windows client

> Dear Vineet,
>
> One option for using MooseFS on Windows is to set up a gateway through
> Samba servers, possibly running on a few of the chunkservers to spread
> the load from the clients. If you need any assistance with such a setup,
> we would be happy to help you.
>
> LizardFS's native Windows client seems to be only "youtube-ware".
>
>> On 30 Mar 2015, at 19:55, web user <web...@gm...> wrote:
>>
>> Hi Guys,
>>
>> I got the client working on Mac and Linux. Both work great. Any chance
>> of getting a client working on Windows? I see that LizardFS has a
>> native Windows client, but about 15 minutes of googling did not get me
>> anything.
>>
>> Any pointers would be much appreciated.
>>
>> Regards,
>> Vineet
>
> Best Regards,
> Krzysztof Kielak
> Director of Operations and Customer Support
> Mobile: +48 601 476 440