From: Marc S. <mar...@mc...> - 2012-04-03 15:47:50
Hi,

We've been using SCST in production at our institution for a little over a year now, and we have been very pleased with the performance and functionality it provides. We are now planning to implement additional SCST-based storage arrays/servers this summer.

For our first round of SCST storage servers, we used Gentoo as the Linux distribution and developed/tested a good working base, then cloned that OS image to the other production storage servers. We found this a little cumbersome for updates/patches, and it was missing a few desired features. We looked at using OpenFiler as a platform for our storage servers, but it seemed to be a bit much for what we needed (unified storage -> CIFS, NFS, etc.).

We decided to create our own Linux distribution (sort of) based on SCST that we could use on our storage arrays/servers. We initially started developing this as an internal project, but then decided to turn it into an open source project that hopefully others might find useful.

It's called Enterprise Storage OS (ESOS) and is hosted on Google Code: http://code.google.com/p/enterprise-storage-os/

A few brief feature highlights:
- Boots off a USB flash drive and is loaded into a tmpfs filesystem, making it fault-tolerant in case of USB flash drive failure
- Two kernels installed: one "production" kernel with SCST modules built for performance (SCST "make 2perf"), and one "debug" kernel with modules built for debugging/troubleshooting (SCST "make 2debug") -- this lets you reboot into the debug kernel/modules when you are having a problem and need increased debug information
- Kernel crash dump capture via kexec -- on a kernel panic, it boots the crash dump kernel, grabs and compresses the vmcore, and boots back into production (fully automated)
- Popular, mainstream RAID controller CLI tools (e.g., MegaCLI) included for configuring volumes from "inside" the OS
- Coming in the near future: a full text-based user interface (TUI) for storage provisioning and system configuration

** ESOS is very early in the development process and is not recommended for production use yet. That said, we have been testing ESOS extensively with our SAN setup and expect to have it in production by early summer.

--Marc
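[For readers unfamiliar with the SCST build profiles mentioned above, here is a minimal sketch of how the two kernel flavors might be produced from one SCST source tree. Only the 2perf/2debug Makefile targets come from Marc's message; the paths and install steps are assumptions about how a setup like ESOS could be wired up.]

  # Minimal sketch: building SCST twice from one source tree, once per profile.
  # Run from the top of an unpacked SCST source tree; the path below and the
  # idea of pairing each build with its own kernel are assumptions -- only the
  # 2perf/2debug targets come from the message above.

  cd /usr/src/scst-trunk            # path is an assumption

  # "Production" flavor: debug checks and tracing compiled out for speed.
  make 2perf                        # switch the build flags to the performance profile
  make && make install              # build and install the core, target drivers, scstadmin

  # "Debug" flavor: a second build kept around for troubleshooting; ESOS
  # presumably installs it under a separate kernel's /lib/modules tree and
  # boots it from its own boot entry. Two plain installs like this would
  # otherwise overwrite each other's modules, so treat this as illustrative.
  make 2debug
  make && make install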
From: Riccardo B. <r.b...@gm...> - 2012-04-03 20:12:54
Hey! I am developing a similar project, but focused on iSCSI and HA. I'm using a stripped-down Gentoo x64 booting off of a USB key.

For now, in production I have a cluster of two nodes. In development I'm testing data deduplication and adding some features, like a text menu interface for common operations (written in bash).

Once dedup is ready, I plan to give my storage appliance two operating modes:
- SAN cluster (iSCSI)
- Standalone backup server, with dedup and some facilities for backing up XenServer and even ESXi.

Cheers!
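[Riccardo's menu script isn't posted, but a bash text menu for common operations usually comes down to a select/case loop. The sketch below is purely hypothetical: the menu entries, sysfs paths, and the scstadmin call are illustrative assumptions, not taken from his appliance.]

  #!/bin/bash
  # Hypothetical sketch of a bash text menu for common storage-appliance tasks.
  # The entries and the commands behind them are illustrative assumptions.

  PS3="Select an operation: "
  options=("Show SCST sessions" "List SCST devices" "Reload SCST config" "Quit")

  select opt in "${options[@]}"; do
      case $opt in
          "Show SCST sessions")
              # SCST 2.x sysfs layout; adjust if your tree differs
              ls /sys/kernel/scst_tgt/targets/*/*/sessions/ 2>/dev/null
              ;;
          "List SCST devices")
              ls /sys/kernel/scst_tgt/devices
              ;;
          "Reload SCST config")
              scstadmin -config /etc/scst.conf
              ;;
          "Quit")
              break
              ;;
          *)
              echo "Invalid option"
              ;;
      esac
  done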
From: Vladislav B. <vs...@vl...> - 2012-04-05 02:26:07
Riccardo Bicelli, on 04/03/2012 04:12 PM wrote:
> Hey! I am developing a similar project, but focused on iSCSI and HA.

If you put it up on some page and send a link to it, I'd be glad to add the link to the SCST "Downloads" page as well!

Thanks,
Vlad
From: Riccardo <r.b...@gm...> - 2012-04-06 10:49:35
For now I've put up a page on my blog:
http://think-brick.blogspot.it/p/my-own-storage-linux-os.html

Soon I'll add links for downloading a test virtual appliance.

Thanks.

On Wed, 04/04/2012 at 22:25 -0400, Vladislav Bolkhovitin wrote:
> If you put it up on some page and send a link to it, I'd be glad to add the link to the SCST "Downloads" page as well!
From: Vladislav B. <vs...@vl...> - 2012-04-13 20:48:12
Riccardo, on 04/06/2012 06:49 AM wrote:
> For now I've put up a page on my blog:
> http://think-brick.blogspot.it/p/my-own-storage-linux-os.html
>
> Soon I'll add links for downloading a test virtual appliance.

Added to the download page, thanks.

Vlad
From: Chris W. <cw...@gm...> - 2012-04-03 20:16:47
On Tue, Apr 3, 2012 at 3:12 PM, Riccardo Bicelli <r.b...@gm...> wrote:
> In development I'm testing data deduplication and adding some features, like a text menu interface for common operations (written in bash).
> Once dedup is ready, I plan to give my storage appliance two operating modes:

What are you using for dedup?
From: Riccardo B. <r.b...@gm...> - 2012-04-03 20:42:25
Currently I'm testing dedup with lessfs 1.5.9.

Riccardo

On 03/Apr/2012, at 22:16, Chris Weiss <cw...@gm...> wrote:
> What are you using for dedup?
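[For context on how a lessfs-backed SCST export might be wired together, here is a rough sketch. The paths, device name, and IQN are invented; the lessfs commands assume its documented mklessfs/lessfs usage, and the scstadmin options assume the SCST 2.x sysfs-based scstadmin -- verify both against the versions you actually run.]

  # Rough sketch: exporting a file that lives on a lessfs mount through SCST's
  # vdisk_fileio handler over iSCSI. All names/paths below are examples only.

  # 1. Initialize and mount the deduplicating filesystem (FUSE-based).
  mklessfs -c /etc/lessfs.cfg          # one-time: create the lessfs databases
  lessfs /etc/lessfs.cfg /mnt/lessfs   # mount it

  # 2. Create a sparse backing file on the dedup filesystem.
  truncate -s 500G /mnt/lessfs/backup01.img

  # 3. Register it with SCST and export it over iSCSI.
  scstadmin -open_dev backup01 -handler vdisk_fileio \
            -attributes filename=/mnt/lessfs/backup01.img
  scstadmin -add_target iqn.2012-04.local.san:backup01 -driver iscsi
  scstadmin -add_lun 0 -driver iscsi \
            -target iqn.2012-04.local.san:backup01 -device backup01
  scstadmin -enable_target iqn.2012-04.local.san:backup01 -driver iscsi
  scstadmin -set_drv_attr iscsi -attributes enabled=1   # turn the iSCSI driver on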
From: Chris W. <cw...@gm...> - 2012-04-03 20:47:22
On Tue, Apr 3, 2012 at 3:42 PM, Riccardo Bicelli <r.b...@gm...> wrote:
> Currently I'm testing dedup with lessfs 1.5.9.

I've played with that some; not sure why, but I've had some serious stability problems with it. Maybe I'm not giving it enough RAM or something.
From: Vladislav B. <vs...@vl...> - 2012-04-05 02:26:01
Hi,

Interesting! I added a link to it from the "Downloads" page.

Thanks,
Vlad