From: Jakub M. <ja...@31...> - 2012-01-06 12:26:11
Hello,

I'm trying to create sparse files on MFS, but it isn't working.

On MFS:

local:/mfs# dd if=/dev/zero of=jakub.test bs=4096 count=0 seek=1000
0 bytes (0 B) copied, 0.000205242 s, 0.0 kB/s
local:/mfs# du -h jakub.test
4.0M    jakub.test

On the local disk:

local:~# dd if=/dev/zero of=jakub.test bs=4096 count=0 seek=1000
0 bytes (0 B) copied, 1.0481e-05 s, 0.0 kB/s
local:~# du -h jakub.test
0       jakub.test

As you can see, on my local HDD everything is fine: the file occupies 0 bytes. On MFS it occupies 4 MB. What about cp?

local:~# cp --sparse=always jakub.test /mfs/
local:/mfs# du -h /mfs/jakub.test
4.0M    jakub.test
local:/mfs# cp --sparse=always jakub.test /root/jakub.test2
local:~# du -h /root/jakub.test2
0       jakub.test2

After cp from MFS to the local HDD the sparse file has the right size again. What am I doing wrong? Why do all sparse files on MFS occupy more than 0 bytes?

--
Jakub Mroziński
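Jakub's `du` comparison can be made precise by comparing a file's logical size with the blocks the filesystem actually allocated for it, which is what distinguishes a sparse file from a fully backed one. A minimal sketch (GNU `stat` on Linux assumed; the temp path is an example, not from the thread):

```shell
# Create a 4 MiB sparse file by seeking past EOF without writing data,
# then compare its logical size to the blocks actually allocated.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4096 count=0 seek=1000 2>/dev/null

size=$(stat -c %s "$f")      # logical size in bytes (4096 * 1000)
blocks=$(stat -c %b "$f")    # number of 512-byte blocks allocated

echo "size=$size allocated=$((blocks * 512))"
if [ "$((blocks * 512))" -lt "$size" ]; then
    echo "sparse"            # allocation smaller than logical size
else
    echo "not sparse"        # filesystem backed every block
fi
rm -f "$f"
```

On a local ext4 or tmpfs this reports "sparse"; run against a MooseFS mount it would show the behaviour Jakub describes, since the allocated size comes back as the full chunk size.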
From: Ólafur Ó. <osv...@ne...> - 2012-01-06 11:36:55
Hi,

We run a customized version of Ubuntu and, to be honest, I've not set it up in a distributable form. I'll check if I can gather it up next week and send the info to the list.

/Oli

On 6.1.2012, at 11:20, Steve wrote:
> That's really cool, are you able to share the scripts, image and/or how the
> usb image was made ?
>
> Steve

--
Ólafur Osvaldsson
System Administrator
Nethonnun ehf.
e-mail: osv...@ne...
phone: +354 517 3400

_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users
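Ólafur's boot-time disk initialization could look roughly like the sketch below. Everything here is hypothetical: the device list, the choice of XFS, and the `/etc/mfshdd.cfg` path are assumptions, not details from his post, and the script only *prints* the commands it would run (a dry run) rather than touching any disk:

```shell
# Hypothetical sketch of a chunkserver boot script that prepares blank
# disks for MooseFS. Dry run: it only echoes the commands it would run.
DISKS="sdb sdc"               # would normally be discovered, e.g. via lsblk
MNT_BASE="/mnt"

for d in $DISKS; do
    dev="/dev/$d"
    mnt="$MNT_BASE/$d"
    # A real script would first skip disks that already carry a
    # filesystem, e.g.:  blkid "$dev" >/dev/null && continue
    echo "mkfs.xfs -f $dev"
    echo "mkdir -p $mnt"
    echo "mount $dev $mnt"
    echo "echo $mnt/ >> /etc/mfshdd.cfg"
done
```

With the echoes removed (and a safety check on `blkid`), such a script run at boot gives the behaviour he describes: rack the server with the USB stick in, and any blank disks are formatted and appended to the chunkserver's disk list.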
From: Steve <st...@bo...> - 2012-01-06 11:32:24
Michal,

That's really great! Did the patch submitted on 9/11 to the list by dav...@gm... get worked in ?

Steve

-------Original Message-------
From: Michał Borychowski
Date: 06/01/2012 10:45:52
To: 'Steve'
Cc: moo...@li...; 'Andrey Karadjov'
Subject: RE: [Moosefs-users] moosefs status

Hi Steve!

Coming a lot... Soon... Now apart from developing the code itself we focus on marketing & PR :) and we need to put everything together.

Teaser: No more 2TB size limit per file. Quotas are really close :) But not in the next release yet.

Regards
Michal
From: Steve <st...@bo...> - 2012-01-06 11:21:56
That's really cool, are you able to share the scripts, image and/or how the usb image was made ?

Steve

-------Original Message-------
From: Ólafur Ósvaldsson
Date: 06/01/2012 10:20:28
To: moo...@li...
Subject: Re: [Moosefs-users] moosefs distro

Hi,

We run the chunkservers completely from memory: they boot from USB and only use a small partition of the USB drive for the graph data, so that the history isn't lost between reboots.

Every chunkserver has 6x1TB disks and 3GB of RAM, and there are startup scripts that initialize new disks on boot if required, so a new server can just be put into the rack with a USB stick plugged in and it will clear all the disks and set them up for MFS if they are not like that already.

/Oli
From: Michał B. <mic...@ge...> - 2012-01-06 10:45:55
Hi Steve!

Coming a lot... Soon... Now apart from developing the code itself we focus on marketing & PR :) and we need to put everything together.

Teaser: No more 2TB size limit per file. Quotas are really close :) But not in the next release yet.

Regards
Michal

-----Original Message-----
From: Steve [mailto:st...@bo...]
Sent: Friday, January 06, 2012 10:50 AM
To: Andrey Karadjov; Michał Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] moosefs status

Michal,

Patiently waiting. Any update or teaser of what's coming ?

Steve
From: Ólafur Ó. <osv...@ne...> - 2012-01-06 10:19:36
Hi,

We run the chunkservers completely from memory: they boot from USB and only use a small partition of the USB drive for the graph data, so that the history isn't lost between reboots.

Every chunkserver has 6x1TB disks and 3GB of RAM, and there are startup scripts that initialize new disks on boot if required, so a new server can just be put into the rack with a USB stick plugged in and it will clear all the disks and set them up for MFS if they are not like that already.

/Oli

On 5.1.2012, at 16:11, Travis Hein wrote:
> The chunk server daemons are very low footprint for system resource
> requirements. Enough so they are suitable to coexist with other system
> services. [...]

--
Ólafur Osvaldsson
System Administrator
Nethonnun ehf.
e-mail: osv...@ne...
phone: +354 517 3400
From: Steve <st...@bo...> - 2012-01-06 10:11:13
Travis,

Yes, I guess the world is moving on with VMs; it's something I've only recently dabbled in, first ESXi and then Proxmox, where I currently have a VM mounted that is stored on MooseFS.

As only a hobbyist/home user, most of my moose boxes are built from various recycled PC parts, though most have SSD boot. I do have one Atom-based chunkserver, although in a regular case for cheapness. Some years back I tried 1U cases with the intention of having a rack, but found them very noisy and the hardware expensive for home use; I now have a custom homemade walk-in cupboard with ample power points and LAN points for each shelf, with all cabling hidden behind, including a patch panel.

Yes, a standalone unit was in my mind when I dabbled with the distro some years ago, with the idea of inserting the CD and answering little more than one question to determine the box's role. Again, hardware has moved on, with USB sticks more the norm for booting, and SSD drives.

Steve

-------Original Message-------
From: Travis Hein
Date: 05/01/2012 16:13:03
To: moo...@li...
Subject: Re: [Moosefs-users] moosefs distro
From: Steve <st...@bo...> - 2012-01-06 09:49:20
Michal,

Patiently waiting. Any update or teaser of what's coming ?

Steve

-------Original Message-------
From: Michał Borychowski
Date: 22/12/2011 10:03:22
To: 'Andrey Karadjov'
Cc: moo...@li...
Subject: Re: [Moosefs-users] moosefs status

Hi!

The project is not at all dead. You can soon expect more information on our blog. In the first days of January we plan to publish the next release.

Kind regards

Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Andrey Karadjov [mailto:and...@gm...]
Sent: Thursday, December 22, 2011 10:48 AM
To: moo...@li...
Subject: [Moosefs-users] moosefs status

Hello,

Can you tell me what's the current status of the MooseFS project? It's now almost a year since the last release. It's a very good FS, though not perfect yet, so I guess a lot of us are waiting for the next big release. But when is that going to be? Or can we consider this project dead? I hope the developers will shed some light.

Best Regards.
From: Travis H. <th...@tr...> - 2012-01-05 16:11:31
The chunk server daemons are very low footprint for system resource requirements, enough so that they are suitable to coexist with other system services: if you have a cluster of physical machines, each with a local disk, you can just make every compute node also a chunk server for aggregated file system capacity.

Most of the time lately, though, we put everything in virtual machines on virtual machine hosting platforms. I kind of feel it is not as efficient to have the chunk servers spread out everywhere: all VMs are backed by the same SAN anyway, so the performance benefit of spread-out disks goes away.

So lately I create a virtual machine just for running the chunk server process. We have a "standard" of using CentOS for our VMs, which is arguably kind of wasteful just for a chunk server process, but it is pretty much set-and-forget and appliance-ized.

I have often thought about creating a standalone MooseFS appliance: an embedded nano-ITX board in a 1U rackmount chassis, solid-state boot, a minimal Linux distribution, and large SATA drives. Both low-power and efficient, outside our virtualized platform. At the very least, probably cheaper to grow capacity than buying more iSCSI RAID SAN products :P But this is still in my to-do-some-day pile.

On 12-01-04 10:28 AM, Steve wrote:
> Do people use moose boxes for other roles ?
>
> Sometime ago I made a moosefs linux ISO cd (not a respin) however the
> installer wasn't insert cd and job done. Taking it any further was beyond my
> capabilities.
>
> Is such a thing needed or desired ? Any collaborators
>
> Steve

--
Travis
From: Steve <st...@bo...> - 2012-01-04 15:28:43
Do people use moose boxes for other roles ?

Some time ago I made a MooseFS Linux ISO CD (not a respin); however, the installer wasn't "insert CD and job done", and taking it any further was beyond my capabilities.

Is such a thing needed or desired ? Any collaborators?

Steve
From: Michał B. <mic...@ge...> - 2011-12-29 10:37:35
Hi

You just use a regular "rm -rf". You can delete the "original" and/or the "copy". You should treat the outcome of "mfsmakesnapshot" exactly as you would the outcome of "cp -Rp".

Kind regards

Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Anh K. Huynh [mailto:ky...@vi...]
Sent: Thursday, December 29, 2011 11:01 AM
To: moo...@li...
Subject: [Moosefs-users] How to delete snapshots?

Hi,

Is there any way to delete snapshots created by the command mfsmakesnapshot?

Thank you.

Regards,
--
Anh K. Huynh
System administrator
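Michał's analogy can be demonstrated with plain directories: a copy made with `cp -Rp` (like a MooseFS snapshot target) is an independent tree, so deleting either side leaves the other intact. The local paths below are examples, not from the thread:

```shell
# Make a small directory tree, copy it preserving attributes,
# then delete the copy and confirm the original is untouched --
# the same semantics Michal describes for mfsmakesnapshot targets.
src=$(mktemp -d)
echo "hello" > "$src/file.txt"

cp -Rp "$src" "$src.copy"    # analogous to: mfsmakesnapshot src src.copy
rm -rf "$src.copy"           # deleting the "snapshot"

cat "$src/file.txt"          # prints "hello": original still readable
rm -rf "$src"
```

On MooseFS the snapshot is cheap (chunks are shared copy-on-write until modified), but from the namespace's point of view it is just another directory tree, which is why plain `rm -rf` is the whole answer.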
From: Anh K. H. <ky...@vi...> - 2011-12-29 10:09:22
On Thu, 29 Dec 2011 11:59:45 +0200 Chris Picton <ch...@ec...> wrote:
> > Is there anyway to delete snapshots created by the command
> > mfsmakesnapshot?
>
> They are treated as standard files/directories, so you can 'rm' them
> as normal

Oh, that's so simple :) Thank you so much

--
Anh K. Huynh
System administrator
From: Chris P. <ch...@ec...> - 2011-12-29 10:00:05
On Thu, 2011-12-29 at 17:01 +0700, Anh K. Huynh wrote:
> Hi,
>
> Is there anyway to delete snapshots created by the command
> mfsmakesnapshot?

Hi Anh

They are treated as standard files/directories, so you can 'rm' them as normal

Chris
From: Anh K. H. <ky...@vi...> - 2011-12-29 09:58:02
Hi,

Is there any way to delete snapshots created by the command mfsmakesnapshot?

Thank you.

Regards,
--
Anh K. Huynh
System administrator
From: youngcow <you...@gm...> - 2011-12-22 23:21:28
That's good news.

> Hi!
>
> The project is not at all dead. You can soon expect more information on
> our blog. In the first days of January we plan to publish the next release.
>
> Kind regards
>
> Michał Borychowski
> MooseFS Support Manager
From: Ricardo J. B. <ric...@da...> - 2011-12-22 20:10:00
On Thursday 22 December 2011, you wrote:
> The chunks were zeroed out.

Ah, then I guess that in this case the corruption was in fact caused by those zeroed-out chunks, and (my hope is that) if those chunks had instead been recovered from an older version, corruption wouldn't have been a problem. I also had zeroed chunks one time, but as noted, I recovered the affected files from a local copy.

> >> On Thu, Dec 22, 2011 at 3:11 AM, Chris Picton <ch...@ec...> wrote:
> >> > In my case, I had subsequently done a mfscheckfile recursively on my
> >> > entire filesystem, and noted the files with 0 chunks
> >> >
> >> > I ran a mfsfilerepair on those (not checking my messages file too
> >> > closely, and expecting the blocks to be zeroed out). My vms booted
> >> > correctly, and after running a forced fsck and rpm -Va, no errors were
> >> > detected.
> >> >
> >> > I am assuming I had the same scenario as you, where the previous chunk
> >> > version was restored, instead of being zeroed out (or by a large
> >> > coincidence, all failed chunks had not yet been allocated in the VM
> >> > disks yet)
> >>
> >> I went through the EXACT same scenario. All VM images had one missing
> >> block. I did a mfsfilerepair on all of them. They all made it
> >> through a fsck with flying colors. They all started and ran. Over a
> >> month later, when the OS (FreeBSD) actually tried to access that
> >> block, things went berserk. Long story short, I ended up recovering
> >> that VM from backups.
> >>
> >> Elliot
> >
> > That's my fear also: possible silent corruption *inside* the VM.
> >
> > In my case the files were not VM images but simple files from one of our
> > websites, easily restored from local copies if necessary.
> >
> > I'm curious if you had any erased chunks or all of them were restored to
> > a previous version?

--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
From: Chris P. <ch...@ec...> - 2011-12-22 18:40:05
|
On 2011/12/22 5:47 PM, Elliot Finley wrote: > I went through the EXACT same scenario. All VM images had one missing > block. I did a mfsfilerepair on all of them. They all made it through > a fsck with flying colors. They all started and ran. Over a month > later, when the OS (FreeBSD) actually tried to access that block, > things went berserk. Long story short, I ended up recovering that VM > from backups. Hi Elliot Thanks for the info. I have just taken one of my VMs which is not too important (and had 3 lost blocks before I ran mfsfilerepair), and filled up its partitions by dd'ing to temp files. So far I have seen no problems on that machine, but I will keep an eye on all of them. Regards Chris |
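Chris's dd-based smoke test can be sketched as a small script. The mount point, file sizes, and the three-file cap below are illustrative only; a real run inside the VM would keep writing until the disk is full so that every block mfsfilerepair may have zeroed actually gets touched:

```shell
#!/bin/sh
# Force block allocation across a filesystem by filling it with temp files,
# then clean up.  Any chunk that mfsfilerepair silently zeroed gets written
# and read back here, surfacing I/O errors now rather than months later.
mnt=/tmp/fill-demo          # in the real case: a partition inside the VM
mkdir -p "$mnt"
i=0
while dd if=/dev/zero of="$mnt/fill.$i" bs=1M count=4 2>/dev/null; do
    i=$((i + 1))
    [ "$i" -ge 3 ] && break # demo cap; in a real run, loop until dd fails (disk full)
done
sync
rm -f "$mnt"/fill.*         # release the space once every block has been exercised
echo "wrote $i temp files"
```

Following up with a forced fsck and (on RPM systems) `rpm -Va`, as Chris did, covers the blocks already occupied by existing files.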
From: Ricardo J. B. <ric...@da...> - 2011-12-22 18:23:54
|
Resending to the list, which was my original intention... ---------- Forwarded message ---------- Subject: Re: [Moosefs-users] moosefs losing chunks Date: Thu, 22 December 2011 From: "Ricardo J. Barberis" <ric...@da...> To: Elliot Finley <efi...@gm...> On Thu, 22 December 2011, you wrote: > On Thu, Dec 22, 2011 at 3:11 AM, Chris Picton <ch...@ec...> wrote: > > In my case, I had subsequently done a mfscheckfile recursively on my > > entire filesystem, and noted the files with 0 chunks > > > > I ran a mfsfilerepair on those (not checking my messages file too > > closely, and expecting the blocks to be zeroed out). My vms booted > > correctly, and after running a forced fsck and rpm -Va, no errors were > > detected. > > > > I am assuming I had the same scenario as you, where the previous chunk > > version was restored, instead of being zeroed out (or by a large > > coincidence, all failed chunks had not yet been allocated in the VM > > disks yet) > > I went through the EXACT same scenario. All VM images had one missing > block. I did a mfsfilerepair on all of them. They all made it > through a fsck with flying colors. They all started and ran. Over a > month later, when the OS (FreeBSD) actually tried to access that > block, things went berserk. Long story short, I ended up recovering > that VM from backups. > > Elliot That's my fear also: possible silent corruption *inside* the VM. In my case the files were not VM images but simple files from one of our websites, easily restored from local copies if necessary. I'm curious if you had any erased chunks or all of them were restored to a previous version? Regards, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! |
From: Elliot F. <efi...@gm...> - 2011-12-22 15:47:34
|
On Thu, Dec 22, 2011 at 3:11 AM, Chris Picton <ch...@ec...> wrote: > > In my case, I had subsequently done a mfscheckfile recursively on my > entire filesystem, and noted the files with 0 chunks > > I ran a mfsfilerepair on those (not checking my messages file too > closely, and expecting the blocks to be zeroed out). My vms booted > correctly, and after running a forced fsck and rpm -Va, no errors were > detected. > > I am assuming I had the same scenario as you, where the previous chunk > version was restored, instead of being zeroed out (or by a large > coincidence, all failed chunks had not yet been allocated in the VM > disks yet) I went through the EXACT same scenario. All VM images had one missing block. I did a mfsfilerepair on all of them. They all made it through a fsck with flying colors. They all started and ran. Over a month later, when the OS (FreeBSD) actually tried to access that block, things went berserk. Long story short, I ended up recovering that VM from backups. Elliot |
From: Michał B. <mic...@ge...> - 2011-12-22 14:40:12
|
Quotas should be ready in Q1 2012, so - quite soon :) We'd also like to wish all of you Merry Xmas :) Thanks for your support, suggestions, tests and patches! :) Kind regards Michal From: Andrey Karadjov [mailto:and...@gm...] Sent: Thursday, December 22, 2011 3:34 PM To: moo...@li... Subject: Re: [Moosefs-users] moosefs status OK that's really great news, thanks for all answers! I didn't want to offend anyone, MooseFS is one of the best network file systems around and I really like it. One of the things I'm waiting for has been in the roadmap for a very long time: . Setting quota to folders (would be introduced in 1.7) So it would be great to see it implemented :) Merry Xmas to all! 2011/12/22 Steve <st...@bo...> It's not dead. We have been asked for tools suggestions. I have recently personally been asked for suggestions, improvements, comments. It's expected, given the length of time, that it's a major revision. I hope so and it's worth the wait! Shame it wasn't an Xmas present. If it's not perfect then tell the devs why. Steve -------Original Message------- From: Andrey Karadjov <mailto:and...@gm...> Date: 22/12/2011 09:50:15 To: moo...@li... Subject: [Moosefs-users] moosefs status Hello, Can you tell me what's the current status of the MooseFS project? Now it's almost a year since the last release. It's a very good FS but it's not perfect yet, so I guess that a lot of us are waiting for the next big release, but when's that going to be? Or can we consider this project dead? I hope the developers will shed some light. Best Regards. _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Sébastien M. <seb...@u-...> - 2011-12-22 10:39:39
|
Yes, thanks GEMIUS for providing this software FREE and with the help of the MooseFS community. Thanks Michał Borychowski for your help also. Sébastien On 22/12/2011 11:24, Steve wrote: > Great news and look forward to upgrading. > Merry Christmas to you and all the dev team. Thanks for providing this > software FREE > Steve > /-------Original Message-------/ > /*From:*/ Michał Borychowski <mailto:mic...@ge...> > /*Date:*/ 22/12/2011 10:03:22 > /*To:*/ 'Andrey Karadjov' <mailto:and...@gm...> > /*Cc:*/ moo...@li... > <mailto:moo...@li...> > /*Subject:*/ Re: [Moosefs-users] moosefs status > > Hi! > > The project is not at all dead. You can soon expect more information > on our blog. In the first days of January we plan to publish the next > release. > > Kind regards > > Michał Borychowski > > MooseFS Support Manager > > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > > Gemius S.A. > > ul. Wołoska 7, 02-672 Warszawa > > Budynek MARS, klatka D > > Tel.: +4822 874-41-00 > > Fax : +4822 874-41-01 > > *From:*Andrey Karadjov [mailto:and...@gm...] > *Sent:* Thursday, December 22, 2011 10:48 AM > *To:* moo...@li... > *Subject:* [Moosefs-users] moosefs status > > Hello, > > Can you tell me what's the current status of the MooseFS project? Now > it's almost a year since the last release. It's a very good FS but it's > not perfect yet, so I guess that a lot of us are waiting for the next > big release, but when's that going to be? Or can we consider this > project dead? I hope the developers will shed some light. > > Best Regards. |
From: Chris P. <ch...@ec...> - 2011-12-22 10:11:37
|
On Wed, 2011-12-21 at 18:32 -0300, Ricardo J. Barberis wrote: > On Tuesday, 20 December 2011, Chris Picton wrote: > > Hi List > > Hi Chris, > I grep'ed /var/log/messages for "currently unavailable file" and ran > mfsfileinfo on all the files with missing chunks (i.e.: chunks marked as "no > valid copies"). > > Then I ran mfsfilerepair on those files, expecting to get those missing > chunks zeroed out but their version got downgraded instead. > In my case, I had subsequently done a mfscheckfile recursively on my entire filesystem, and noted the files with 0 chunks I ran a mfsfilerepair on those (not checking my messages file too closely, and expecting the blocks to be zeroed out). My vms booted correctly, and after running a forced fsck and rpm -Va, no errors were detected. I am assuming I had the same scenario as you, where the previous chunk version was restored, instead of being zeroed out (or by a large coincidence, all failed chunks had not yet been allocated in the VM disks yet) The odd thing is that I had done a search on the chunkservers for some of the chunk names (ignoring version - find . -iname "*010B7E*" for example), and didn't find the physical chunks on the chunkservers at all. But all is running fine now - thanks Chris > I think this is because the master might have received an update operation but > the chunkservers didn't get a chance to complete the update, so the > master "restored" the last good version of the chunk. > > Since (AFAIK) there is no way to know beforehand what mfsfilerepair is going > to do with the missing chunks, I'd recommend you to be careful. > > Regards, |
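Chris's on-disk chunk hunt can be wrapped in a small helper. This is a sketch under assumptions: that the chunkserver stores chunks as `chunk_<16-hex-id>_<8-hex-version>.mfs` under its data directories, and that `/mnt/chunkdata` is one of them (use the paths from your own mfshdd.cfg). The chunk id and directory layout in the demo are fabricated:

```shell
#!/bin/sh
# find_chunk CHUNK_ID [DATADIR]: look for any on-disk copy of a chunk,
# ignoring the version suffix, much like Chris's `find . -iname "*010B7E*"`.
# DATADIR defaults to /mnt/chunkdata (an assumption; take the real paths
# from mfshdd.cfg and run this once per listed directory).
find_chunk() {
    id_hex=$(printf '%016X' "$1")   # chunk id, zero-padded to 16 hex digits
    find "${2:-/mnt/chunkdata}" -type f -name "chunk_${id_hex}_*.mfs" 2>/dev/null
}

# Demo with a fabricated chunk id and a throwaway directory layout:
mkdir -p /tmp/cs-demo/AB
touch /tmp/cs-demo/AB/chunk_00000000010B7E2A_00000001.mfs
find_chunk "$((0x010B7E2A))" /tmp/cs-demo
# → /tmp/cs-demo/AB/chunk_00000000010B7E2A_00000001.mfs
```

An empty result on every chunkserver means no copy survived on disk, which matches what Chris observed before the master resynced.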
From: Michał B. <mic...@ge...> - 2011-12-22 10:02:37
|
Hi! The project is not at all dead. You can soon expect more information on our blog. In the first days of January we plan to publish the next release. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Andrey Karadjov [mailto:and...@gm...] Sent: Thursday, December 22, 2011 10:48 AM To: moo...@li... Subject: [Moosefs-users] moosefs status Hello, Can you tell me what's the current status of the MooseFS project? Now it's almost a year since the last release. It's a very good FS but it's not perfect yet, so I guess that a lot of us are waiting for the next big release, but when's that going to be? Or can we consider this project dead? I hope the developers will shed some light. Best Regards. |
From: Andrey K. <and...@gm...> - 2011-12-22 09:48:34
|
Hello, Can you tell me what's the current status of the MooseFS project? Now it's almost a year since the last release. It's a very good FS but it's not perfect yet, so I guess that a lot of us are waiting for the next big release, but when's that going to be? Or can we consider this project dead? I hope the developers will shed some light. Best Regards. |
From: Ricardo J. B. <ric...@da...> - 2011-12-21 21:33:05
|
On Tuesday, 20 December 2011, Chris Picton wrote: > Hi List Hi Chris, > I have a moosefs installation in my test environment consisting of 4 > pcs, each with 2x80 Gb and 2x 1TB drives. They are running a > corosync/pacemaker cluster with any one of the 4 machines acting as the > master, and all 4 running metaloggers (I am happy to share the ocf > script if anyone would like to look at it) > > They are running chunk servers (and I have 4 extra chunk servers as well) > > We are hosting kvm images on the cluster (goal=3) > > However, we had a problem where the temperature in the lab went too > high, and some of the HDDs shut down. > > No new files were being created at the time, but there was read/write > access to most of the existing files. > > The current master kernel panicked, and a backup metalogger was promoted > to master (using mfsmetarestore). Two of the other chunk servers had a 1Tb > drive fail in each of them. > > So all-in-all, a fairly bad problem, where we lost 3 copies of some of > the data (and goal was 3). This was overnight, and I only saw the > problem this morning. > > > However. The missing data should have still been on the drives (kernel > panicked master, and failed hdds on other machines) > > So I have now done: > Power off all machines > Reseat all drives > fsck all drives (no filesystem errors found) > restart the master, metalogger and chunkservers > > The CGI is showing 44 chunks which have zero copies. > (It also shows some chunks with 4 copies, and some chunks with 5 copies > - which implies that the undergoal chunks were being replicated after > the problem happened.) > > My question is - why would there be any chunks with zero copies? No new > files were being added or deleted - the metalogger/masters would all > have had the same data. The failed drives have started again, with no > filesystem errors. Where are my missing chunks?? > > Any help would be appreciated > > Chris I had a similar problem last week in a small MFS cluster (1 master, 1 metalogger, 2 chunkservers) and had several chunks with zero copies. I grep'ed /var/log/messages for "currently unavailable file" and ran mfsfileinfo on all the files with missing chunks (i.e.: chunks marked as "no valid copies"). Then I ran mfsfilerepair on those files, expecting to get those missing chunks zeroed out but their version got downgraded instead. I think this is because the master might have received an update operation but the chunkservers didn't get a chance to complete the update, so the master "restored" the last good version of the chunk. Since (AFAIK) there is no way to know beforehand what mfsfilerepair is going to do with the missing chunks, I'd recommend you to be careful. Regards, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! |
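The triage Ricardo describes (grep the syslog for "currently unavailable file", then inspect each file with mfsfileinfo before deciding on mfsfilerepair) is easy to script. This sketch assumes a hypothetical mfsmount log wording ending in `file: /path`; the sample log lines and paths are fabricated for illustration, so adjust the sed pattern to what your mfsmount actually logs:

```shell
#!/bin/sh
# list_damaged LOGFILE: print the unique file paths reported as unavailable,
# so each can be checked with `mfsfileinfo PATH` (looking for chunks with
# "no valid copies") and only then, carefully, passed to `mfsfilerepair`.
list_damaged() {
    grep 'currently unavailable file' "$1" | sed 's/.*file: *//' | sort -u
}

# Demo on a fabricated log excerpt:
cat > /tmp/mfs-demo.log <<'EOF'
Dec 22 03:11:01 node1 mfsmount[123]: currently unavailable file: /mnt/mfs/vm/disk0.img
Dec 22 03:11:02 node1 mfsmount[123]: currently unavailable file: /mnt/mfs/vm/disk0.img
Dec 22 03:11:05 node1 mfsmount[123]: currently unavailable file: /mnt/mfs/web/site.tar
EOF
list_damaged /tmp/mfs-demo.log
# → /mnt/mfs/vm/disk0.img
# → /mnt/mfs/web/site.tar
```

Reviewing the mfsfileinfo output first matters because, as this thread shows, mfsfilerepair may either zero a missing chunk or roll it back to an older version, and there is no way to know which beforehand.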
> > Any help would be appreciated > > Chris I had a similar problem last week in a small MFS cluster (1 master, 1 metalogger, 2 chunkservers) and had several chunks with zero copies. I grep'ed /var/log/messages for "currently unavailable file" and ran mfsfileinfo on all the files with missing chunks (i.e.: chunks marked as "no valid copies"). Then I tan mfsfilrepair on those files, expecting to get those missing chunks zeroed out but their version got downgraded instead. I think this is because the master might have received an update operation but the chunkservers didn't get a chance to complete the update, so the master "restored" the last good version of the chunk. Since (AFAIK) there is no way to know beforehand what mfsfilerepair is going to do with the missing chunks, I'd recommend you to be careful. Regards, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ |