From: Kevin C. <kev...@us...> - 2003-05-02 21:12:15
The EVMS team is announcing the first bug-fix release in the Enterprise Volume Management System 2.0 series. Package 2.0.1 is now available for download at the project web site: http://www.sourceforge.net/projects/evms

This release is for the new EVMS design, which is based on user-space volume discovery and communication with existing kernel drivers, such as MD/Software-RAID and Device-Mapper.

Please see the INSTALL file in the 2.0.1 package for information about installing and getting started with the new EVMS. Since this new design is based on user-space discovery, there is no longer any boot-time volume activation. Please see the INSTALL file for information about how to activate your volumes using the EVMS user-interfaces and utilities. The INSTALL file also contains instructions for how users may easily "upgrade" an existing 2.4 kernel with the Device-Mapper and EVMS 2.0.0 patches.

Please see the README file in the 2.0.1 package for more detailed notes concerning this release.

EVMS 2.0.1 is supported on 2.4.20, 2.4.19, and 2.5.68 kernels.

If you have any questions, find any bugs, or simply want to report success stories, please send email to the EVMS mailing list (evm...@li...) or visit the EVMS IRC channel (irc.freenode.net, #evms).

--
Kevin Corry
kev...@us...
http://evms.sourceforge.net/

EVMS ChangeLog
==============

2.0.1 (2003-05-02):
- First bug-fix release for 2.0
- Core Engine
  - Attempt to load the Device-Mapper kernel module if it is not present.
  - Prevent error messages from modprobe when checking for the EVMS 1.2 kernel driver by checking for the /proc/evms directory first.
- Clustering
  - Several minor fixes in the cluster-segment-manager.
- GUI and Text-Mode
  - Add a command to mount volumes from the EVMS UIs.
- Disk Manager
  - Accept "*" and "?" wildcards and bracket notation ("[...]") when specifying block devices in evms.conf.
  - Combine the 2.4-kernel and 2.5-kernel versions into a single plugin.
- LVM
  - New "Move-PV" interface to complement the "Move-Extent" interface.
  - Fix a bug with migrating some setups from LVM1 directly to EVMS 2.0.
  - Fix a bug with activation of certain striped LVM regions.
- MD
  - Fix a bug with RAID-5 on BBR segments.
- Kernel
  - Fix a bug with loading the BBR kernel component as a module.
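The 2.0.1 changelog notes that the Disk Manager now accepts "*" and "?" wildcards and "[...]" bracket notation when specifying block devices in evms.conf. Assuming these follow ordinary shell glob semantics (an assumption; the comments in the shipped evms.conf are authoritative), the matching behavior can be sketched with bash's `case` patterns:

```shell
#!/bin/bash
# Sketch of shell-style glob matching, as evms.conf device patterns
# presumably use it (assumption: standard glob semantics).
# matches_pattern DEVICE PATTERN -> prints "yes" or "no"
matches_pattern() {
  local device="$1" pattern="$2"
  case "$device" in
    $pattern) echo "yes" ;;   # unquoted, so it is treated as a glob pattern
    *)        echo "no"  ;;
  esac
}

matches_pattern "sda1" "sd*"      # "*" matches any run of characters -> yes
matches_pattern "hda"  "sd?"      # wrong prefix -> no
matches_pattern "sdb"  "sd[a-c]"  # bracket notation matches one of a-c -> yes
matches_pattern "sdd"  "sd[a-c]"  # d is outside the bracket range -> no
```

Note that the pattern variable is deliberately left unquoted in the `case` arm; quoting it would force a literal comparison and defeat the glob.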
From: Kevin C. <co...@us...> - 2003-03-29 17:43:36
The EVMS team is announcing the first full release of the new Enterprise Volume Management System. Package 2.0.0 is now available for download at the project web site: http://www.sourceforge.net/projects/evms

This release is for the new EVMS design, which is based on user-space volume discovery and communication with existing kernel drivers, such as MD/Software-RAID and Device-Mapper.

Please see the INSTALL file in the 2.0.0 package for information about installing and getting started with the new EVMS. Since this new design is based on user-space discovery, there is no longer any boot-time volume activation. Please see the INSTALL file for information about how to activate your volumes using the EVMS user-interfaces and utilities.

Please see the README file in the 2.0.0 package for more detailed notes concerning this release.

EVMS 2.0.0 is supported on 2.4.20, 2.4.19, and 2.5.66 kernels.

If you have any questions, find any bugs, or simply want to report success stories, please send email to the EVMS mailing list (evm...@li...) or visit the EVMS IRC channel (irc.freenode.net, #evms).

--
Kevin Corry
co...@us...
http://evms.sourceforge.net/

EVMS ChangeLog
==============

2.0.0 (2003-03-29):
- First full release of the new EVMS design
- Completed new text-mode UI
- Completed clustering failover testing
- Updated Users Guide
  - Clustering information
  - Plug-in appendices
- Various testing and bug fixes

1.9.2 (2003-03-17):
- Third beta release
- Clustering / HA Plug-in
  - Improved failover script.
  - Testing various failover and loss-of-quorum scenarios.
- Core Engine
  - Allow the engine to open if Device-Mapper isn't loaded.
- Move Support (Offline)
  - Add move support to the MD plug-in.
  - Add move support to the GPT plug-in.
- Text-Mode UI
  - Support "Create Feature" from the context popup menu.
  - Support "Modify Properties".
  - Support volume conversions.
- Snapshotting
  - Lots of testing and bug-fixes.
- DOS Segment Manager
  - Resolve differences between active DM devices and partition-table metadata.
- Disk Manager
  - Support for devfs without requiring devfsd.
- Configuration File
  - New entries affecting the engine log.
  - Better comments explaining disk-manager entries wrt devfs.
- Documentation
  - Instructions for creating init-ramdisks for use with EVMS, along with a sample /linuxrc script.

1.9.1 (2003-03-04):
- Second beta release
- Clustering / HA Plug-in
  - Finished remote administration capabilities.
  - Add necessary reassign support to the cluster-segment-manager.
  - Adding support for failover and loss-of-quorum recovery.
- Move Support (Offline)
  - Add move support to Drive-Linking.
- Text-Mode UI
  - Most functionality complete.
  - Still working on:
    - Convert volume
    - Modify attributes
    - Display details "extra" information
    - Tree views
- Snapshotting
  - Initial support for snapshotting using Device-Mapper.
  - Currently only supported on 2.4 kernels.
- Bad-Block-Relocation
  - Reworked BBR Device-Mapper module.
- Various bug reports and feedback from users
  - Steve Centrone, Erik Tews, Matt Zimmerman

1.9.0 (2003-02-14):
- First beta release
- Clustering / HA Plug-in
  - Improved handling for large messages.
  - Add installation instructions for setting up EVMS on Linux-HA clusters.
  - Add failover script for reassigning a cluster-container after a node failure.
- Engine Core
  - Updates to allow remote administration within a cluster.
- Move Support (Offline)
  - New plugin and services to assist in copying data from source to destination.
- DOS Segment Manager
  - Enable moving existing segments into available freespace on the same disk.
- LVM Plugin
  - Enable moving logical extents to a new physical extent location.
- S/390 Segment Manager
  - Enable DASD formatting.
- New Text-Mode UI
  - Supports most common operations
    - Create, delete
- Installation
  - New default installation directories. See INSTALL for details.

1.9.0-pre3 (2003-02-04):
- Third alpha release
- Clustering / HA Plug-in
  - Add evmsd, a small command-line UI that simply launches the engine in daemon mode.
  - First successful tests of launching the engine and locking all remote daemons.
  - Lots of testing and bug fixes.
- Disk Manager
  - Fixes to properly handle symlinks to device files.
- MD
  - Add option to restore the original major/minor numbers of children in MD objects. This allows full backwards-compatibility with MD raidtools.
- Text-Mode UI
  - Initial re-write of the ncurses-based UI.
  - Still working towards full functionality.
  - Old ncurses UI is still available, but does not build by default.
- AIX
  - Add initial support for activating AIX regions using Device-Mapper.
- BSD
  - New plugin for managing disks partitioned by the BSD operating systems.
- New engine configuration and build scripts.

1.9.0-pre2 (2003-01-21):
- Second alpha release
- Clustering
  - Initial code for the HA plugin, for interfacing with the Linux High-Availability clustering package. This plugin will help support basic fail-over capabilities for EVMS.
  - Add cluster-segment-manager
    - Provides support for defining disk groups when a cluster has access to shared storage.
- D-List
  - Lots of updates to simplify several APIs and remove a lot of internal code.
- OS/2
  - Add initial support for activating OS/2 regions with Device-Mapper.

1.9.0-pre1 (2002-12-31):
- First alpha release of EVMS running with Device-Mapper and MD/Software-RAID.
- All plug-ins ported to new design except:
  - Snapshot
  - OS/2
  - AIX
- Issue with backwards-compatibility with MD.
From: Kevin C. <co...@us...> - 2003-03-17 23:10:45
The EVMS team is announcing the third beta-level release of the new Enterprise Volume Management System. Package 1.9.2 is now available for download at the project web site: http://www.sf.net/projects/evms

This beta release is for the new EVMS design, which is based on user-space volume discovery and communication with existing kernel drivers, such as MD/Software-RAID and Device-Mapper. As of this release, much of EVMS has stabilized and has undergone significant testing. However, certain parts of the project are still undergoing testing or development (see the list of outstanding issues below). Please continue to use caution when using this release. We greatly appreciate any testing, feedback, and comments from our users about your experiences with the new design.

Please see the INSTALL file in the 1.9.2 package for information about installing and getting started with the new EVMS. Since this new design is based on user-space discovery, there is no longer any boot-time volume activation. Please see the INSTALL file for information about how to activate your volumes using the EVMS user-interfaces.

EVMS 1.9.2 is supported on the 2.4.20 and 2.5.64 kernels. Users have reported successfully running on earlier 2.4 kernels as well.

Remaining issues:

- Clustering
  The HA clustering components are undergoing testing of various failover and remote-administration scenarios, and documentation about using the clustering components is still being added to the Users Guide. Also, a new clustering plugin has been included, for use with RSCT clusters. This plugin is still very experimental, and must be built separately in the plugins/rsct/ directory in the 1.9.2 package.

- Snapshot
  As with the 1.9.1 release, snapshots created using the EVMS 1.x releases will not work with the new EVMS design, due to a metadata format change. Please delete your snapshots from 1.x before using 1.9.x. Snapshot rollback is still disabled. Device-Mapper support for snapshots is only included for the 2.4 kernels. Snapshots will not work on 2.5 kernels.

- Root on EVMS
  Instructions are now available for creating an init-ramdisk image for use with EVMS. Please see the INSTALL.initrd file in the 1.9.2 package. These instructions have been tested on 2.4 kernels. They have not yet been tested on 2.5 kernels. However, init-ramdisks should work similarly on 2.4 and 2.5.

Please send any questions, problem reports, or bugs to the EVMS mailing list: evm...@li...

--
Kevin Corry
co...@us...
http://evms.sourceforge.net/

EVMS ChangeLog
==============

1.9.2 (2003-03-17):
- Third beta release
- Clustering / HA Plug-in
  - Improved failover script.
  - Testing various failover and loss-of-quorum scenarios.
- Core Engine
  - Allow the engine to open if Device-Mapper isn't loaded.
- Move Support (Offline)
  - Add move support to the MD plug-in.
  - Add move support to the GPT plug-in.
- Text-Mode UI
  - Support "Create Feature" from the context popup menu.
  - Support "Modify Properties".
  - Support volume conversions.
- Snapshotting
  - Lots of testing and bug-fixes.
- DOS Segment Manager
  - Resolve differences between active DM devices and partition-table metadata.
- Disk Manager
  - Support for devfs without requiring devfsd.
- Configuration File
  - New entries affecting the engine log.
  - Better comments explaining disk-manager entries wrt devfs.
- Documentation
  - Instructions for creating init-ramdisks for use with EVMS, along with a sample /linuxrc script.

1.9.1 (2003-03-04):
- Second beta release
- Clustering / HA Plug-in
  - Finished remote administration capabilities.
  - Add necessary reassign support to the cluster-segment-manager.
  - Adding support for failover and loss-of-quorum recovery.
- Move Support (Offline)
  - Add move support to Drive-Linking.
- Text-Mode UI
  - Most functionality complete.
  - Still working on:
    - Convert volume
    - Modify attributes
    - Display details "extra" information
    - Tree views
- Snapshotting
  - Initial support for snapshotting using Device-Mapper.
  - Currently only supported on 2.4 kernels.
- Bad-Block-Relocation
  - Reworked BBR Device-Mapper module.
- Various bug reports and feedback from users
  - Steve Centrone, Erik Tews, Matt Zimmerman

1.9.0 (2003-02-14):
- First beta release
- Clustering / HA Plug-in
  - Improved handling for large messages.
  - Add installation instructions for setting up EVMS on Linux-HA clusters.
  - Add failover script for reassigning a cluster-container after a node failure.
- Engine Core
  - Updates to allow remote administration within a cluster.
- Move Support (Offline)
  - New plugin and services to assist in copying data from source to destination.
- DOS Segment Manager
  - Enable moving existing segments into available freespace on the same disk.
- LVM Plugin
  - Enable moving logical extents to a new physical extent location.
- S/390 Segment Manager
  - Enable DASD formatting.
- New Text-Mode UI
  - Supports most common operations
    - Create, delete
- Installation
  - New default installation directories. See INSTALL for details.

1.9.0-pre3 (2003-02-04):
- Third alpha release
- Clustering / HA Plug-in
  - Add evmsd, a small command-line UI that simply launches the engine in daemon mode.
  - First successful tests of launching the engine and locking all remote daemons.
  - Lots of testing and bug fixes.
- Disk Manager
  - Fixes to properly handle symlinks to device files.
- MD
  - Add option to restore the original major/minor numbers of children in MD objects. This allows full backwards-compatibility with MD raidtools.
- Text-Mode UI
  - Initial re-write of the ncurses-based UI.
  - Still working towards full functionality.
  - Old ncurses UI is still available, but does not build by default.
- AIX
  - Add initial support for activating AIX regions using Device-Mapper.
- BSD
  - New plugin for managing disks partitioned by the BSD operating systems.
- New engine configuration and build scripts.

1.9.0-pre2 (2003-01-21):
- Second alpha release
- Clustering
  - Initial code for the HA plugin, for interfacing with the Linux High-Availability clustering package. This plugin will help support basic fail-over capabilities for EVMS.
  - Add cluster-segment-manager
    - Provides support for defining disk groups when a cluster has access to shared storage.
- D-List
  - Lots of updates to simplify several APIs and remove a lot of internal code.
- OS/2
  - Add initial support for activating OS/2 regions with Device-Mapper.

1.9.0-pre1 (2002-12-31):
- First alpha release of EVMS running with Device-Mapper and MD/Software-RAID.
- All plug-ins ported to new design except:
  - Snapshot
  - OS/2
  - AIX
- Issue with backwards-compatibility with MD.
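The 1.9.2 release above introduces instructions for creating init-ramdisks (INSTALL.initrd) along with a sample /linuxrc script, so that discovery can run from the ramdisk before the root filesystem is mounted. As an illustrative sketch only (the step names and the /dev/evms/root volume name are assumptions; the shipped INSTALL.initrd is authoritative), such a script boils down to a few steps. The DRY_RUN guard makes the sketch safe to inspect without root privileges:

```shell
#!/bin/sh
# Hypothetical /linuxrc sketch for an EVMS init-ramdisk.
# Assumptions (not from the announcement): the activation tool is named
# evms_activate and the root volume appears as /dev/evms/root.
# DRY_RUN=1 (the default here) prints each step instead of executing it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run mount -t proc proc /proc           # kernel interfaces needed for discovery
run evms_activate                      # user-space discovery builds the volumes
run mount -o ro /dev/evms/root /mnt    # mount the discovered root volume
run umount /proc                       # clean up before handing off to init
```

With DRY_RUN unset, the script only echoes the four steps, which makes the control flow easy to verify before adapting it to a real ramdisk.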
From: Kevin C. <co...@us...> - 2003-03-04 23:05:08
The EVMS team is announcing the second beta-level release of the new Enterprise Volume Management System. Package 1.9.1 is now available for download at the project web site: http://www.sf.net/projects/evms

This beta release is for the new EVMS design, which is based on user-space volume discovery and communication with existing kernel drivers, such as MD/Software-RAID and Device-Mapper. As of this release, much of EVMS has stabilized and has undergone significant testing. However, certain parts of the project are still undergoing testing or development (see the list of outstanding issues below). Please continue to use caution when using this release. We greatly appreciate any testing, feedback, and comments from our users about your experiences with the new design.

Please see the INSTALL file in the 1.9.1 package for information about installing and getting started with the new EVMS. Since this new design is based on user-space discovery, there is no longer any boot-time volume activation. Please see the INSTALL file for information about how to activate your volumes using the EVMS user-interfaces.

EVMS 1.9.1 is supported on the 2.4.20 and 2.5.63 kernels. Users have reported successfully running on earlier 2.4 kernels as well.

Remaining issues:

- Clustering
  The clustering capabilities are still undergoing integration and testing. Initial instructions for installing EVMS in an HA cluster environment have been included in this release, but the full details of failover and remote administration are still being worked out.

- Snapshot
  The EVMS snapshot plugin has now been ported to use Device-Mapper. It has only undergone limited testing thus far, so be gentle. Since the Device-Mapper snapshot support is only included for the 2.4 kernels, snapshots will not work under 2.5. Also, the metadata format has changed in order to be compatible with Device-Mapper, so existing EVMS snapshots from 1.2.1 or earlier will not be recognized by 1.9.1. Please delete your old snapshots before running the new EVMS. Also, snapshot rollback has been temporarily disabled. This functionality was implemented in the EVMS kernel driver in 1.2.x, and will need to be reimplemented in user-space.

- Move
  Support for "moving/replacing" objects in DriveLinking has been added. Move support for the MD/Software-RAID plugin will be added soon. As with 1.9.0, all move operations must be performed offline. Online move will be added after the 2.0 release.

- New Text-Mode UI
  The new ncurses UI is very close to complete. The vast majority of operations now work in the new ncurses UI, and only a small subset of the GUI functionality still needs to be added. The previous text-mode UI is still included, but is not built by default.

- Root on EVMS
  We are still working on instructions and/or scripts to enable users to have their root filesystem on an EVMS volume, using an init ramdisk or initramfs. At this time, if you have your root filesystem on an EVMS volume, please continue to use the 1.2.1 release. However, there have been reports from users who have successfully created initrds that work with EVMS 1.9.0 and allow them to mount their root filesystem through EVMS.

Please send any questions, problem reports, or bugs to the EVMS mailing list: evm...@li...

--
Kevin Corry
co...@us...
http://evms.sourceforge.net/

EVMS ChangeLog
==============

1.9.1 (2003-03-04):
- Second beta release
- Clustering / HA Plug-in
  - Finished remote administration capabilities.
  - Add necessary reassign support to the cluster-segment-manager.
  - Adding support for failover and loss-of-quorum recovery.
- Move Support (Offline)
  - Add move support to Drive-Linking.
- Text-Mode UI
  - Most functionality complete.
  - Still working on:
    - Convert volume
    - Modify attributes
    - Display details "extra" information
    - Tree views
- Snapshotting
  - Initial support for snapshotting using Device-Mapper.
  - Currently only supported on 2.4 kernels.
- Bad-Block-Relocation
  - Reworked BBR Device-Mapper module.
- Various bug reports and feedback from users
  - Steve Centrone, Erik Tews, Matt Zimmerman

1.9.0 (2003-02-14):
- First beta release
- Clustering / HA Plug-in
  - Improved handling for large messages.
  - Add installation instructions for setting up EVMS on Linux-HA clusters.
  - Add failover script for reassigning a cluster-container after a node failure.
- Engine Core
  - Updates to allow remote administration within a cluster.
- Move Support (Offline)
  - New plugin and services to assist in copying data from source to destination.
- DOS Segment Manager
  - Enable moving existing segments into available freespace on the same disk.
- LVM Plugin
  - Enable moving logical extents to a new physical extent location.
- S/390 Segment Manager
  - Enable DASD formatting.
- New Text-Mode UI
  - Supports most common operations
    - Create, delete
- Installation
  - New default installation directories. See INSTALL for details.

1.9.0-pre3 (2003-02-04):
- Third alpha release
- Clustering / HA Plug-in
  - Add evmsd, a small command-line UI that simply launches the engine in daemon mode.
  - First successful tests of launching the engine and locking all remote daemons.
  - Lots of testing and bug fixes.
- Disk Manager
  - Fixes to properly handle symlinks to device files.
- MD
  - Add option to restore the original major/minor numbers of children in MD objects. This allows full backwards-compatibility with MD raidtools.
- Text-Mode UI
  - Initial re-write of the ncurses-based UI.
  - Still working towards full functionality.
  - Old ncurses UI is still available, but does not build by default.
- AIX
  - Add initial support for activating AIX regions using Device-Mapper.
- BSD
  - New plugin for managing disks partitioned by the BSD operating systems.
- New engine configuration and build scripts.

1.9.0-pre2 (2003-01-21):
- Second alpha release
- Clustering
  - Initial code for the HA plugin, for interfacing with the Linux High-Availability clustering package. This plugin will help support basic fail-over capabilities for EVMS.
  - Add cluster-segment-manager
    - Provides support for defining disk groups when a cluster has access to shared storage.
- D-List
  - Lots of updates to simplify several APIs and remove a lot of internal code.
- OS/2
  - Add initial support for activating OS/2 regions with Device-Mapper.

1.9.0-pre1 (2002-12-31):
- First alpha release of EVMS running with Device-Mapper and MD/Software-RAID.
- All plug-ins ported to new design except:
  - Snapshot
  - OS/2
  - AIX
- Issue with backwards-compatibility with MD.
From: Kevin C. <kev...@sb...> - 2003-02-15 16:57:14
On Saturday 15 February 2003 10:22, Kevin Corry wrote:
> The EVMS team is announcing the first beta-level release of the new
> Enterprise Volume Management System. Package 1.9.0 is now available for
> download at the project web site: http://www.sf.net/projects/evms

Along with the 1.9.0 package, an expanded and improved Users Guide has been posted on the EVMS web site. You can find it at: http://evms.sf.net/users_guide/

--
Kevin Corry
co...@us...
http://evms.sourceforge.net/
From: Kevin C. <kev...@sb...> - 2003-02-15 16:20:30
The EVMS team is announcing the first beta-level release of the new Enterprise Volume Management System. Package 1.9.0 is now available for download at the project web site: http://www.sf.net/projects/evms

This beta release is for the new EVMS design, which is based on user-space volume discovery and communication with existing kernel drivers, such as MD/Software-RAID and Device-Mapper. As of this release, much of EVMS has stabilized and has undergone significant testing. However, certain parts of the project are still undergoing testing or development (see the list of outstanding issues below). Please continue to use caution when using this release. We greatly appreciate any testing, feedback, and comments from our users about your experiences with the new design.

Please see the INSTALL file in the 1.9.0 package for information about installing and getting started with the new EVMS. Since this new design is based on user-space discovery, there is no longer any boot-time volume activation. Please see the INSTALL file for information about how to activate your volumes using the EVMS user-interfaces.

EVMS 1.9.0 is supported on the 2.4.20 and 2.5.60 kernels. Users have reported successfully running on earlier 2.4 kernels as well.

Remaining issues:

- Clustering
  The clustering capabilities are still undergoing integration and testing. Initial instructions for installing EVMS in an HA cluster environment have been included in this release, but the full details of failover and remote administration are still being worked out.

- Snapshot
  We are still working on porting the snapshot plugin to the Device-Mapper framework. For this release, the snapshot code has been removed from the package.

- Move
  The initial support for "moving" volumes has been included in this release. Currently, the DOS segment manager and the LVM plugin support move. Support will soon be added for DriveLinking and MD/Software-RAID. All move operations must be performed offline. Online move will be added after the 2.0 release.

- New Text-Mode UI
  The new ncurses UI is still under development, but already supports many common operations. It has a much closer look-and-feel to the GUI than did the previous text-mode UI. The previous text-mode UI is still included, but is not built by default.

- Root on EVMS
  We are still working on instructions and scripts to enable users to have their root filesystem on an EVMS volume, using an init ramdisk or initramfs. At this time, if you have your root filesystem on an EVMS volume, please continue to use the 1.2.1 release.

Please send any questions, problem reports, or bugs to the EVMS mailing list: evm...@li...

--
Kevin Corry
co...@us...
http://evms.sourceforge.net/

EVMS ChangeLog
==============

1.9.0 (2003-02-14):
- First beta release
- Clustering / HA Plug-in
  - Improved handling for large messages.
  - Add installation instructions for setting up EVMS on Linux-HA clusters.
  - Add failover script for reassigning a cluster-container after a node failure.
- Engine Core
  - Updates to allow remote administration within a cluster.
- Move Support (Offline)
  - New plugin and services to assist in copying data from source to destination.
- DOS Segment Manager
  - Enable moving existing segments into available freespace on the same disk.
- LVM Plugin
  - Enable moving logical extents to a new physical extent location.
- S/390 Segment Manager
  - Enable DASD formatting.
- New Text-Mode UI
  - Supports most common operations
    - Create, delete
- Installation
  - New default installation directories. See INSTALL for details.

1.9.0-pre3 (2003-02-04):
- Third alpha release
- Clustering / HA Plug-in
  - Add evmsd, a small command-line UI that simply launches the engine in daemon mode.
  - First successful tests of launching the engine and locking all remote daemons.
  - Lots of testing and bug fixes.
- Disk Manager
  - Fixes to properly handle symlinks to device files.
- MD
  - Add option to restore the original major/minor numbers of children in MD objects. This allows full backwards-compatibility with MD raidtools.
- Text-Mode UI
  - Initial re-write of the ncurses-based UI.
  - Still working towards full functionality.
  - Old ncurses UI is still available, but does not build by default.
- AIX
  - Add initial support for activating AIX regions using Device-Mapper.
- BSD
  - New plugin for managing disks partitioned by the BSD operating systems.
- New engine configuration and build scripts.

1.9.0-pre2 (2003-01-21):
- Second alpha release
- Clustering
  - Initial code for the HA plugin, for interfacing with the Linux High-Availability clustering package. This plugin will help support basic fail-over capabilities for EVMS.
  - Add cluster-segment-manager
    - Provides support for defining disk groups when a cluster has access to shared storage.
- D-List
  - Lots of updates to simplify several APIs and remove a lot of internal code.
- OS/2
  - Add initial support for activating OS/2 regions with Device-Mapper.

1.9.0-pre1 (2002-12-31):
- First alpha release of EVMS running with Device-Mapper and MD/Software-RAID.
- All plug-ins ported to new design except:
  - Snapshot
  - OS/2
  - AIX
- Issue with backwards-compatibility with MD.
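Because the new design drops boot-time activation, the announcement directs users to activate volumes from user space with the EVMS interfaces. As an illustration only (the utility name evms_activate is an assumption taken from later EVMS documentation; the INSTALL file in the package is authoritative), an admin or boot script might guard the activation call like this:

```shell
#!/bin/bash
# Hypothetical wrapper: run EVMS volume activation if the tool is installed.
# "evms_activate" is an assumed utility name -- verify against your INSTALL file.
activate_evms_volumes() {
  if command -v evms_activate >/dev/null 2>&1; then
    evms_activate            # discover and activate all EVMS volumes
  else
    echo "evms_activate not found; see the INSTALL file for activation steps" >&2
    return 1
  fi
}

activate_evms_volumes || true   # tolerate absence on non-EVMS systems
```

The `command -v` guard keeps the script usable in init scripts on machines where the EVMS tools are not installed, failing with a pointer to the documentation instead of a cryptic "command not found".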
From: Kevin C. <co...@us...> - 2003-02-04 21:16:15
The EVMS team is announcing the third alpha-level release of the new Enterprise Volume Management System. Package 1.9.0-pre3 is now available for download at the project web site: http://www.sf.net/projects/evms

This alpha-level release is for the new EVMS design, which is based on user-space volume discovery and communication with existing kernel drivers, such as MD/Software-RAID and Device-Mapper. Please note that as of this release, EVMS is undergoing very active development. You should use the appropriate level of caution before using this release! With that said, we will greatly appreciate any testing, feedback, and comments from our users about your experiences with this new design.

EVMS 1.9.0-pre3 is supported on the 2.4.20 and 2.5.59 kernels.

The plugin for Snapshotting has not yet been ported to the new design, so this plugin has been removed from the 1.9.0-pre3 package. The AIX and OS/2 plugins have now been ported to work with Device-Mapper. However, they are still undergoing testing, and may not activate their devices correctly.

Please see the INSTALL file in the 1.9.0-pre3 package for information about installing and getting started with the new EVMS. Since this new design is based on user-space discovery, there is no longer any boot-time volume activation. Please see the INSTALL file for information about how to activate your volumes using the EVMS user-interfaces.

Also, root filesystems on EVMS volumes are not currently supported with 1.9.0-pre3. We are still working on the details of activating volumes with an init ramdisk or initramfs, and will add those details to the INSTALL instructions in a future release. If you currently have your root filesystem on an EVMS volume, please continue using the 1.2.1 release for the time being.

Please send any questions, problem reports, or bugs to the EVMS mailing list: evm...@li...

ChangeLog:

1.9.0-pre3 (2003-02-04):
- Third alpha release
- Clustering / HA Plug-in
  - Add evmsd, a small command-line UI that simply launches the engine in daemon mode.
  - First successful tests of launching the engine and locking all remote daemons.
  - Lots of testing and bug fixes.
- Disk Manager
  - Fixes to properly handle symlinks to device files.
- MD
  - Add option to restore the original major/minor numbers of children in MD objects. This allows full backwards-compatibility with MD raidtools.
- Text-Mode UI
  - Initial re-write of the ncurses-based UI.
  - Still working towards full functionality.
  - Old ncurses UI is still available, but does not build by default.
- AIX
  - Add initial support for activating AIX regions using Device-Mapper.
- BSD
  - New plugin for managing disks partitioned by the BSD operating systems.
- New engine configuration and build scripts.

--
Kevin Corry
co...@us...
http://evms.sourceforge.net/
From: Kevin C. <co...@us...> - 2003-01-21 18:12:46
The EVMS team is announcing the second alpha-level release of the new Enterprise Volume Management System. Package 1.9.0-pre2 is now available for download at the project web site: http://www.sf.net/projects/evms

This alpha-level release is for the new EVMS design, which is based on user-space volume discovery and communication with existing kernel drivers, such as MD/Software-RAID and Device-Mapper. Please note that as of this release, EVMS is undergoing very active development. You should use the appropriate level of caution before using this release! With that said, we will greatly appreciate any testing, feedback, and comments from our users about your experiences with this new design.

EVMS 1.9.0-pre2 is supported on the 2.4.20 and 2.5.59 kernels.

The plugins for Snapshotting, AIX, and OS/2 have not yet been ported to the new design, so these plugins have been removed from the 1.9.0-pre2 package. As they are updated for the new design, they will be added to upcoming releases.

Please see the INSTALL file in the 1.9.0-pre2 package for information about installing and getting started with the new EVMS. Since this new design is based on user-space discovery, there is no longer any boot-time volume activation. Please see the INSTALL file for information about how to activate your volumes using the EVMS user-interfaces.

Also, root filesystems on EVMS volumes are not currently supported with 1.9.0-pre2. We are still working on the details of activating volumes with an init ramdisk or initramfs, and will add those details to the INSTALL instructions in a future release. If you currently have your root filesystem on an EVMS volume, please continue using the 1.2.1 release for the time being.

The 1.9.0-pre2 package now contains most of the code required to run EVMS in an HA-clustering environment. We are still working on integrating all of the pieces, which include an HA plugin, a clustering-segment-manager, and an engine daemon to be run on each machine in the cluster. We are also still writing the installation and usage instructions, but brave users may wish to preview the code in engine/engine/, engine/plugins/csm/, and engine/plugins/ha/.

Please send any questions, problem reports, or bugs to the EVMS mailing list: evm...@li...

--
Kevin Corry
co...@us...
http://evms.sourceforge.net/
From: BODENES M. <kfr...@ya...> - 2003-01-14 04:43:47
|
SUBJECT: UNCLAIMED DIVIDEND Greetings. I sourced your details from the Trade Promotion Council. I am a serious officer with one of our main branches of the bank, consumer Banking Department. Although we didn't have previous correspondence till date. However, based on the standard value place on everything about you, I believed we could discuss, analyze and execute this transaction. The transaction is thus: I've a wealthy client, an Australian Citizen who had been residing in West Africa for decades now, he operates a fix deposit account with us which I am his account officer, and also receives standing orders for his chains of dividends share. However, this client Mr. David Brown died as an outcome of a fatal accident, and the funds in his current account has been claimed by his family, who had gone back finally to Sydney, Australia. At the end of Fiscal year, March 2001, an accumulated share of US$12.360M was transferred into his account from the Stock Exchange. This was alone the instructions we issued to the Stock House to dispose all his stocks. Now the funds has arrived, I needed an associate whom I would present as the inheritor of this fund i.e. Associate partner of Mr. David Brown so as to receive this fund. Please note, his family has left since, and never know about this stocks in the exchange and the funds has subsequently matured in his Fix Deposit Account, so I as the account officer has prepared all documents for easy claims of this fund. Immediately, you reach me, I will furnish you with further details and then negotiate on the sharing ratio once you show your sincere involvement to go along with me. It will be of need for you to furnish me also your personal phone and fax numbers for urgent messages. I am anxiously waiting to hear your consent. Be guided. Thanking you. Yours sincerely, Bodenes Martins |
From: Kevin C. <co...@us...> - 2003-01-07 15:50:11
|
On Tuesday 07 January 2003 08:15, Stephen Arden wrote: > All, > > Please excuse me while I get this out of my system. > > I am relatively new on this list as we test and contemplate using EVMS on > Linux on xSeries and zSeries systems and all > this information and mis-information about EVMS being scrapped isn't > helping anyone make a decision. > It is also starting to show up on other Linux forums I am on. > From what I have read I think I understand that all we're talking about is > a move to user-level code and out of the kernel, or at least that a > standard > kernel feature will be used as an interface. That's fine, but it doesn't > seem to be stopping rumors and mis-information from spreading. > > If EVMS is to continue on and live to a ripe old age, it would be nice if > someone (someone high enough up in the food chain so that their statements > carry sufficient weight) would issue a "definitive" statement about EVMS > plans so we can all put this issue to bed. Ok, once more, for the record: EVMS is *NOT* going away. There are several developers here whose sole job this year is to develop EVMS. We are continuing to improve the code and add new features. Just last week we released the first package of the new EVMS design. For those who missed the announcement, please take a look at: http://marc.theaimsgroup.com/?l=evms-devel&m=104136990521938&w=2 New features that will be coming this month include HA clustering support and volume "move" capabilities. The article mentioned on this list the other day (http://news.com.com/2100-1001-979142.html?tag=fd_top) is certainly misleading. The main distinction that many people don't seem to understand is the difference between the kernel device driver and the user-space volume management tools. EVMS is a set of user-space administration tools. LVM2 is also a set of user-space administration tools. Device-mapper is a kernel device driver which is used by both EVMS and LVM2. 
It was not LVM2, but Device-mapper, that was accepted into the 2.5 kernel. Previously, EVMS had its own kernel device driver. When Device-mapper was accepted into the 2.5 kernel, we decided it would be far simpler in the long run to adjust our user-space tools to work with that driver instead of the driver we had been using. The main effect of that decision is that volume discovery is now performed in user-space instead of directly in the kernel. This is an effect that most users will barely notice. Users who look at the old EVMS and the new EVMS will see almost no difference in how it operates and how various volume management tasks are performed. As for the article's statement that we "would scrap much of [the] project", this is also misleading. Since we switched kernel drivers, we did drop *some* of the code we had been developing. This dropped code does not account for any more than about 20% of the total code we had developed. And this number is also somewhat incorrect, since we have been able to take some of the work from our old kernel driver and use it to write some additional plugins for Device-mapper. So in reality, a minor amount of code has been "scrapped". This is the normal course of evolution for *any* software project. As new ideas are formed and new directions taken, old code is discarded and new code is written. As I said above, we are continuing to actively develop EVMS. We are also working with the LVM developers to shake out any remaining issues with Device-mapper. In my opinion, it is looking to be a very solid, reliable kernel driver which both projects will be able to build on. Stephen, feel free to pass this note along to the other forums/lists you mentioned above. Or let me know the lists in question and I will respond there as well. If anyone has any other concerns or questions about EVMS, please email this list, email me directly, or drop by our IRC channel (irc.freenode.net, #evms). 
I will be happy to discuss any issues you think have not been addressed. -- Kevin Corry co...@us... http://evms.sourceforge.net/ |
From: Eff N. <eno...@ef...> - 2003-01-04 02:05:29
|
What's this all about? "...Enterprise Volume Management System (EVMS) announced they would scrap much of their project" http://news.com.com/2100-1001-979142.html?tag=fd_top Sounds inaccurate. Thanks, Eff Norwood |
From: Kevin C. <co...@us...> - 2002-12-31 21:21:26
|
The EVMS team is announcing the first alpha-level release of the new Enterprise Volume Management System. Package 1.9.0-pre1 is now available for download at the project web site: http://www.sf.net/projects/evms

This alpha-level release is the first for the new EVMS design, which is based on user-space volume discovery and communication with existing kernel drivers, such as MD/Software-RAID and Device-Mapper. Please note that as of this release, EVMS is undergoing very active development. You should use the appropriate level of caution before using this release! With that said, we will greatly appreciate any testing, feedback and comments from our users about your experiences with this new design.

EVMS 1.9.0-pre1 is supported on the 2.4.20 and 2.5.53 kernels. The plugins for Snapshotting, AIX, and OS/2 have not yet been ported to the new design, so these plugins have been removed from the 1.9.0-pre1 package. As they are updated for the new design, they will be added to upcoming releases.

Please see the INSTALL file in the 1.9.0-pre1 package for information about installing and getting started with the new EVMS. Since this new design is based on user-space discovery, there will no longer be boot-time volume activation. Please see the INSTALL file for information about how to activate your volumes using the EVMS user interfaces.

Also, root filesystems on EVMS volumes are not currently supported with 1.9.0-pre1. We are still working on the details of activating volumes with an init ramdisk or initramfs, and will add those details to the INSTALL instructions in a future release. If you currently have your root filesystem on an EVMS volume, please continue using the 1.2.1 release for the time being.

Please send any questions, problem reports or bugs to the EVMS mailing list: evm...@li.... -- Kevin Corry co...@us... http://evms.sourceforge.net/ |
From: Kevin C. <co...@us...> - 2002-12-18 21:56:52
|
Hi, In addition to the normal packages, I have also created a kernel patch that will upgrade an existing kernel tree from EVMS 1.2.0 to 1.2.1. This patch can be found with the regular package files on the project page, or at: http://evms.sourceforge.net/patches/1.2.1/evms-1.2.0-to-1.2.1-runtime.patch -- Kevin Corry co...@us... http://evms.sourceforge.net/ |
From: Kevin C. <co...@us...> - 2002-12-17 22:37:59
|
The EVMS team is announcing the next stable release of the Enterprise Volume Management System. Package 1.2.1 is now available for download at the project web site: http://www.sf.net/projects/evms

This release consists primarily of bug fixes and minor enhancements for the 1.2.0 release, based on user feedback. EVMS 1.2.1 has full support for the 2.4 kernel, and includes patches for most kernels up to 2.4.20. 2.5 kernels are no longer supported for 1.2.1. Please send any questions, problem reports or bugs to the EVMS mailing list: evm...@li....

v1.2.1 - 12/17/02

Kernel Core
- Change several ioctl packet definitions so all fields are fixed-size and all fields are properly aligned for 64-bit architectures.
- Always use per-volume request queues.
- Fix a bug when deciding whether to hard-delete or soft-delete volumes at shutdown time. This fixes the problem of RAID-1 root volumes always resyncing on reboot.

GUI
- Allow starting a "Create Feature Object" task from the context popup.
- Set keyboard focus on the first entry field in an options dialog that needs input.
- Show the list of plugins that are actually able to do a particular task.
- "Pre-select" acceptable objects for certain tasks when there is only one acceptable object.
- Dynamically display only those views that actually have objects in them.
- Follow the GNOME Human Interface Guidelines for message popup alert dialogs.
- Fix a bug in message window handling that could cause a segfault.
- Increase the floating-point precision of the spinbutton widget used in the options dialog.
- Clean up views (consistent column ordering; remove superfluous info).
- Fix some locale settings.

Command Line
- Add a "-b" flag to indicate batch mode. In this mode, all messages requiring user interaction will automatically use the default selection.

LVM Plugin
- Some LVM LVs created with older versions (0.8) of the Sistina tools do not have sector-aligned LV structures. Add code to detect this misalignment and read the LV structures from the correct place.

Snapshot Plugin
- The VFS lock patch only allows one filesystem at a time to be locked and flushed. This means that only one snapshot volume can be created during a single commit/rediscover, or the VFS locking code will deadlock. Add a flag to the snapshot engine plugin to prevent more than one snapshot from being activated at a time. In order to activate multiple snapshots, the user must now force a commit between each one.

BBR Plugin
- The ext3 filesystem accesses fields in the buffer-heads after they have been submitted but before they have completed. This breaks due to the way BBR hooks each I/O to watch for write failures. Change BBR to allocate a whole new buffer-head for each I/O that it has to track, and save the original buffer-head in private data.

MD Plugin
- Fix a segfault in the RAID-1 sync daemon. Patch suggested by Rory Bolt.
- Clean up duplicated info in kernel log messages.
- Use a separate kill-sectors list for each raid5 object in the engine.

Filesystem Interface Modules
- If a filesystem's utilities are missing or out-of-date, still load the FSIM plugin, but disallow actions such as mkfs, fsck, and resize. This prevents the FSIMs from complaining each time the engine is opened.
- Reiser FSIM: Don't try to use the "label" option with older versions of the ReiserFS utilities.

Disk Manager Engine Plugin
- Added a read cache, which cuts engine discovery time by 66%.

AIX Engine Plugin
- Several bug fixes and cleanups.

Dlist
- Several plugins were fixed to pass the correct tags to Dlist when inserting objects into lists.

-- Kevin Corry co...@us... http://evms.sourceforge.net/ |
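The first Kernel Core item above, making every ioctl packet field fixed-size and properly aligned for 64-bit architectures, reflects a general ABI pitfall: compilers insert invisible padding before misaligned fields, so 32-bit user space and a 64-bit kernel can disagree about a structure's layout. The following is only an illustrative sketch of that general problem using Python's `struct` module (which mimics C layout rules), not the actual EVMS ioctl definitions:

```python
import struct

# Hypothetical ioctl packet: a 32-bit count followed by a 64-bit size.

# Native alignment ('@'): on most 64-bit ABIs the 64-bit field must start
# on an 8-byte boundary, so hidden padding appears after the 32-bit field,
# and a 32-bit process may compute a different offset for the same field.
native = struct.calcsize('@IQ')      # typically 16 on x86-64

# Standard layout ('='): no padding at all, always 4 + 8 = 12 bytes.
packed = struct.calcsize('=IQ')

# A common ioctl ABI fix: order fields largest-first (or pad explicitly)
# so every field is naturally aligned with no hidden gaps.  (Unlike a C
# compiler, Python's struct adds no trailing padding, so this is 12.)
reordered = struct.calcsize('@QI')

print(native, packed, reordered)
```

The same reasoning is why fixed-width types (u32/u64) are preferred over pointers and longs in ioctl packets: their sizes do not change between 32-bit and 64-bit callers.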
From: Mahdi H. <mh...@ye...> - 2002-11-16 12:42:56
|
Can I use that extra space to make another raid5(with other 7gb disk)? in general,is it possible to use 2 partition of one disk to make raid5(not only one prtn)? ----- Original Message ----- From: "Mahdi Hajimoradi" <mh...@ye...> To: <evm...@li...> Sent: Saturday, November 16, 2002 11:04 AM Subject: Re: [Evms-announce] RAID & EVMS > > ----- Original Message ----- > From: "Mahdi Hajimoradi" <mh...@ye...> > To: "Steve Pratt" <sl...@us...> > Sent: Saturday, November 16, 2002 11:01 AM > Subject: Re: [Evms-announce] RAID & EVMS > > > > > Ok, let me take a shot at this: > > > > > > 1. Make a raid5 with the 8*4gb. This will give you 28gb usable. > > > 2. Partition 17gb ide into 10gb and 7gb partitions > > > 3. Make a 2nd raid5 with the 2*10gb scsi + 3*10gb ide + 1*10gb partition > = > > > 50gb usable > > > > > > Then if you want 1 big volume combine the 2 raid5 array with Drive > Linking > > > to give 78GB volume. If you want multiple volumes, than add both raid5 > > > arrays to an LVM container and create regions of any size you want. > > > > > > As a note, you will have 7gb left over on the one disk that is not > raid5, > > > you can use this for whatever. > > > > Can I use that extra space to make another raid5(with other 7gb disk)? > > in general,is it possible to use 2 partition of one disk to make raid5(not > > only one prtn)? > > > > > > > > This is the best setup I can think of. If if someone has a better idea > > > speak up, but except for the 7gb left out, this is pretty efficient use > of > > > disk space. 
> > > > > > Steve > > > > > > > ----- Original Message ----- > > From: "Steve Pratt" <sl...@us...> > > To: "Mahdi Hajimoradi" <mh...@ye...> > > Sent: Thursday, November 14, 2002 7:06 PM > > Subject: Re: [Evms-announce] RAID & EVMS > > > > > > > > > > Mahdi Hajimoradi wrote: > > > >> Let me make sure I understand what you want to do: You have 5 disks, > > > >> and want to combine three of them into a single object, and use that > > > >> object along with the remaining 2 disks to create a RAID-5 object? Is > > > >> this correct? > > > > > > >almost yes, but exactly 13 Disks!!:"8*4GB SCSI HDD + 2*10GB SCSI + > 3*10GB > > > >IDE +17GB IDE HDD" and I need almost 80GB RAID5 > > > > > > Ok, let me take a shot at this: > > > > > > 1. Make a raid5 with the 8*4gb. This will give you 28gb usable. > > > 2. Partition 17gb ide into 10gb and 7gb partitions > > > 3. Make a 2nd raid5 with the 2*10gb scsi + 3*10gb ide + 1*10gb partition > = > > > 50gb usable > > > > > > Then if you want 1 big volume combine the 2 raid5 array with Drive > Linking > > > to give 78GB volume. If you want multiple volumes, than add both raid5 > > > arrays to an LVM container and create regions of any size you want. > > > > > > As a note, you will have 7gb left over on the one disk that is not > raid5, > > > you can use this for whatever. > > > > > > This is the best setup I can think of. If if someone has a better idea > > > speak up, but except for the 7gb left out, this is pretty efficient use > of > > > disk space. > > > > > > Steve > > > > > > > > > > > ------------------------------------------------------- > This sf.net email is sponsored by: To learn the basics of securing > your web site with SSL, click here to get a FREE TRIAL of a Thawte > Server Certificate: http://www.gothawte.com/rd524.html > _______________________________________________ > Evms-announce mailing list > Evm...@li... > To subscribe/unsubscribe, please visit: > https://lists.sourceforge.net/lists/listinfo/evms-announce |
From: Mahdi H. <mh...@ye...> - 2002-11-16 07:34:34
|
----- Original Message ----- From: "Mahdi Hajimoradi" <mh...@ye...> To: "Steve Pratt" <sl...@us...> Sent: Saturday, November 16, 2002 11:01 AM Subject: Re: [Evms-announce] RAID & EVMS > > Ok, let me take a shot at this: > > > > 1. Make a raid5 with the 8*4gb. This will give you 28gb usable. > > 2. Partition 17gb ide into 10gb and 7gb partitions > > 3. Make a 2nd raid5 with the 2*10gb scsi + 3*10gb ide + 1*10gb partition = > > 50gb usable > > > > Then if you want 1 big volume combine the 2 raid5 array with Drive Linking > > to give 78GB volume. If you want multiple volumes, than add both raid5 > > arrays to an LVM container and create regions of any size you want. > > > > As a note, you will have 7gb left over on the one disk that is not raid5, > > you can use this for whatever. > > Can I use that extra space to make another raid5(with other 7gb disk)? > in general,is it possible to use 2 partition of one disk to make raid5(not > only one prtn)? > > > > > This is the best setup I can think of. If if someone has a better idea > > speak up, but except for the 7gb left out, this is pretty efficient use of > > disk space. > > > > Steve > > > > ----- Original Message ----- > From: "Steve Pratt" <sl...@us...> > To: "Mahdi Hajimoradi" <mh...@ye...> > Sent: Thursday, November 14, 2002 7:06 PM > Subject: Re: [Evms-announce] RAID & EVMS > > > > > > Mahdi Hajimoradi wrote: > > >> Let me make sure I understand what you want to do: You have 5 disks, > > >> and want to combine three of them into a single object, and use that > > >> object along with the remaining 2 disks to create a RAID-5 object? Is > > >> this correct? > > > > >almost yes, but exactly 13 Disks!!:"8*4GB SCSI HDD + 2*10GB SCSI + 3*10GB > > >IDE +17GB IDE HDD" and I need almost 80GB RAID5 > > > > Ok, let me take a shot at this: > > > > 1. Make a raid5 with the 8*4gb. This will give you 28gb usable. > > 2. Partition 17gb ide into 10gb and 7gb partitions > > 3. 
Make a 2nd raid5 with the 2*10gb scsi + 3*10gb ide + 1*10gb partition = > > 50gb usable > > > > Then if you want 1 big volume combine the 2 raid5 array with Drive Linking > > to give 78GB volume. If you want multiple volumes, than add both raid5 > > arrays to an LVM container and create regions of any size you want. > > > > As a note, you will have 7gb left over on the one disk that is not raid5, > > you can use this for whatever. > > > > This is the best setup I can think of. If if someone has a better idea > > speak up, but except for the 7gb left out, this is pretty efficient use of > > disk space. > > > > Steve > > > > |
From: Mahdi H. <mh...@ye...> - 2002-11-14 05:25:53
|
> Let me make sure I understand what you want to do: You have 5 disks, > and want to combine three of them into a single object, and use that > object along with the remaining 2 disks to create a RAID-5 object? Is > this correct? almost yes, but exactly 13 Disks!!:"8*4GB SCSI HDD + 2*10GB SCSI + 3*10GB IDE +17GB IDE HDD" and I need almost 80GB RAID5 > Why not just use the five disks directly to make a RAID-5 object? because as you see the size of my disks are not the same. > What are the sizes of each of your disks? If the disks have > dramatically different sizes, there are other tricks to play to make > sure all of the available space is used efficiently. and also protecting my data from disk failure. ----- Original Message ----- From: "Kevin M Corry" <co...@us...> To: "Mahdi Hajimoradi" <mh...@ye...> Cc: <evm...@li...> Sent: Wednesday, November 13, 2002 5:47 PM Subject: Re: [Evms-announce] RAID & EVMS > > > > TNX for your answer but I think you didn't get what I mean.Let me > > explain more. I have 5 HDD with deferent size as a source object > > and wan to concatenate them to make 3 storage object in purpose > > of making RAID 5. Could you tell me if it's possible to do. > > TNX in Advance > > Let me make sure I understand what you want to do: You have 5 disks, > and want to combine three of them into a single object, and use that > object along with the remaining 2 disks to create a RAID-5 object? Is > this correct? > > Why not just use the five disks directly to make a RAID-5 object? > > What are the sizes of each of your disks? If the disks have > dramatically different sizes, there are other tricks to play to make > sure all of the available space is used efficiently. > > Kevin Corry > > > > ----- Original Message ----- > From: "Kylene J Smith" <ky...@us...> > To: "Mahdi Hajimoradi" <ha...@al...> > Cc: <evm...@li...> > Sent: Tuesday, November 12, 2002 6:28 PM > Subject: Re: [Evms-announce] RAID & EVMS > > > > > > Yes you can use EVMS to create a raid 5. 
However, if the objects are not > > of the same size the excess space on the bigger of the objects will just > be > > wasted. So you might want to create segments of the same size first and > > put the raid on top of these. > > > > > > Kylie > > > > > > > > > > > > > > > > Hi > > First of all, pardon me, this is (nearly) a kind of cross posting. > > Is it possible to use EVMS volumes to make raid 5.?! > > according to software-RAID HOWTO, LinuxRAID > > can work on most block devices. It doesn't matter whether > > you use IDE or SCSI devices, or a mixture. So is it possible > > to use EVMS to use multi hdd (with different size)together and > > use them to create Raid. > > TNX > > --M. Hajimoradi > > > > > > > > > > > > > > ------------------------------------------------------- > > This sf.net email is sponsored by:ThinkGeek > > Welcome to geek heaven. > > http://thinkgeek.com/sf > > _______________________________________________ > > Evms-announce mailing list > > Evm...@li... > > To subscribe/unsubscribe, please visit: > > https://lists.sourceforge.net/lists/listinfo/evms-announce > > > > ------------------------------------------------------- > This sf.net email is sponsored by: > To learn the basics of securing your web site with SSL, > click here to get a FREE TRIAL of a Thawte Server Certificate: > http://www.gothawte.com/rd522.html > _______________________________________________ > Evms-announce mailing list > Evm...@li... > To subscribe/unsubscribe, please visit: > https://lists.sourceforge.net/lists/listinfo/evms-announce > > > |
From: Mahdi H. <mh...@ye...> - 2002-11-13 05:08:52
|
TNX for your answer, but I think you didn't get what I mean. Let me explain more. I have 5 HDDs with different sizes as source objects and want to concatenate them to make three storage objects for the purpose of making RAID 5. Could you tell me if it's possible to do? TNX in advance ----- Original Message ----- From: "Kylene J Smith" <ky...@us...> To: "Mahdi Hajimoradi" <ha...@al...> Cc: <evm...@li...> Sent: Tuesday, November 12, 2002 6:28 PM Subject: Re: [Evms-announce] RAID & EVMS > > Yes you can use EVMS to create a raid 5. However, if the objects are not > of the same size the excess space on the bigger of the objects will just be > wasted. So you might want to create segments of the same size first and > put the raid on top of these. > > > Kylie > > > > > > > > Hi > First of all, pardon me, this is (nearly) a kind of cross posting. > Is it possible to use EVMS volumes to make raid 5.?! > according to software-RAID HOWTO, LinuxRAID > can work on most block devices. It doesn't matter whether > you use IDE or SCSI devices, or a mixture. So is it possible > to use EVMS to use multi hdd (with different size)together and > use them to create Raid. > TNX > --M. Hajimoradi > > > > > > > ------------------------------------------------------- > This sf.net email is sponsored by:ThinkGeek > Welcome to geek heaven. > http://thinkgeek.com/sf > _______________________________________________ > Evms-announce mailing list > Evm...@li... > To subscribe/unsubscribe, please visit: > https://lists.sourceforge.net/lists/listinfo/evms-announce |
From: Kylene J S. <ky...@us...> - 2002-11-12 14:58:28
|
Yes you can use EVMS to create a raid 5. However, if the objects are not of the same size the excess space on the bigger of the objects will just be wasted. So you might want to create segments of the same size first and put the raid on top of these. Kylie Hi First of all, pardon me, this is (nearly) a kind of cross posting. Is it possible to use EVMS volumes to make raid 5.?! according to software-RAID HOWTO, LinuxRAID can work on most block devices. It doesn't matter whether you use IDE or SCSI devices, or a mixture. So is it possible to use EVMS to use multi hdd (with different size)together and use them to create Raid. TNX --M. Hajimoradi |
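Kylene's sizing rule can be stated as a formula: a software RAID5 array yields (n - 1) × min(member sizes) of usable space, so with mixed-size members everything above the smallest is wasted. A quick sketch of that arithmetic (an illustrative model, not an EVMS tool; the 8 x 4 GB and 6 x 10 GB figures are the two arrays discussed elsewhere in this thread):

```python
def raid5_usable(member_sizes_gb):
    """Usable space of a RAID5 array (simplified model): each member
    contributes only as much as the smallest member, and one member's
    worth of space is consumed by parity."""
    if len(member_sizes_gb) < 3:
        raise ValueError("RAID5 conventionally needs at least 3 members")
    return (len(member_sizes_gb) - 1) * min(member_sizes_gb)

# Mixed sizes: only the smallest size counts per member.
print(raid5_usable([4, 4, 10]))  # 8 GB usable; 6 GB of the 10 GB disk wasted

# The two equal-size arrays proposed in this thread:
print(raid5_usable([4] * 8))     # 8 x 4 GB  -> 28 GB usable
print(raid5_usable([10] * 6))    # 6 x 10 GB -> 50 GB usable (78 GB combined)
```

This is exactly why the advice is to carve equal-size segments first: splitting the 17 GB disk into a 10 GB partition lets it join the 10 GB members at full efficiency.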
From: Mahdi H. <ha...@al...> - 2002-11-12 08:02:51
|
Hi First of all, pardon me, this is (nearly) a kind of cross posting. Is it possible to use EVMS volumes to make raid 5?! According to the software-RAID HOWTO, Linux RAID can work on most block devices. It doesn't matter whether you use IDE or SCSI devices, or a mixture. So is it possible to use EVMS to use multiple HDDs (with different sizes) together and use them to create RAID. TNX --M. Hajimoradi |
From: Christoph H. <hc...@in...> - 2002-11-07 20:24:50
|
On Tue, Nov 05, 2002 at 04:19:10PM -0600, Kevin Corry wrote: > Greetings EVMS users, > > On behalf of the EVMS team, we would like to announce a significant change > in direction for the Enterprise Volume Management System project. > > As many of you may know by now, the 2.5 kernel feature freeze has come > and gone, and it seems clear that the EVMS kernel driver is not going > to be included. With this in mind, we have decided to rework the EVMS > user-space administration tools (the Engine) to work with existing > drivers currently in the kernel, including (but not necessarily limited > to) device mapper and MD. Hi Kevin, I think that's a very good move for EVMS in the long term. You will be able to provide the users what they want (an easy-to-use and integrated volume management solution) without having the pain of maintaining a large base of kernel-level code. Of course there will be some hassle for you now, like adding DM plugins for higher RAID levels, etc. But in the end I guess it will help both EVMS and Linux: EVMS by reducing the scope of the project without reducing its functionality, and Linux by having a modular and lightweight in-kernel volume-management solution with many eyes looking at it, using it and improving it. I guess you will find a bunch of suboptimal things in DM very soon, and I hope you will help the kernel community in fixing them. This of course also means that DM will hopefully become an integral part of the kernel, not a Sistina project like LVM1. I (and I guess many other kernel developers interested in storage handling) will look forward to comments on whether the current kernel-level storage management facilities are usable by a unified userland engine, and I guess there will be many obvious improvements. I'm also looking forward to IBM's open-source cluster volume-management integration as competition to Sistina's proprietary add-ons. 
I wish you all luck with your new direction and expect me and other kernel developers in that area to help you wherever we can. Christoph |
From: Alexander V. <vi...@ma...> - 2002-11-06 02:50:07
|
On Tue, 5 Nov 2002, Eff Norwood wrote: > > So, you're volunteering to maintain the EVMS subsystem for 2.5 ? > > > > If not, I propose you let Kevin and the other EVMS developers > > make the decision. > > So, having EVMS not included in the kernel was the decision they wanted to > make? > > If not, then I propose you be a little more reasonable and think about what > this decision does to all the work thus far put into EVMS. This decision (to move the bulk of EVMS code to userland and isolate the changes needed in the kernel) *definitely* means less work in the long run - for EVMS people in the first place. Userland code is easier to write. There one has a full runtime environment, and that alone means a lot. There one has no 8Kb limit on the stack size. There one has memory protection. And there code doesn't have to do anything about the changes of kernel internals. It's also easier to debug - for very obvious reasons. The goal is to provide functionality, not to put it in the kernel - the latter always means a harder life. It is a last-resort measure ("we have no way to do that in userland with acceptable performance and correctness, damn, time to deal with the kernel side") and finding a way to make do with a more compact kernel part (ideally - already maintained by somebody else ;-) is always good news. And I seriously doubt that work thus far put into EVMS goes down the drain from the move to userland - they would have to be absolutely incompetent for that to be the case, and I don't see what allows you to accuse them of that. What that decision does mean is a serious one-time effort that makes life easier once it's done. And that had taken real courage - my applause to them (and not only mine, while we are at it). What they had done was pretty amazing, and my respect to the team that had chosen to do the right thing, had been able to defend that decision, and to their management that had allowed that has just gone _way_ up. Bravo, folks. And best luck - seriously. 
I respect very few people. These I _do_ respect. A lot. |
From: Eff N. <eno...@ef...> - 2002-11-06 01:41:13
|
> So, you're volunteering to maintain the EVMS subsystem for 2.5 ? > > If not, I propose you let Kevin and the other EVMS developers > make the decision. So, having EVMS not included in the kernel was the decision they wanted to make? If not, then I propose you be a little more reasonable and think about what this decision does to all the work thus far put into EVMS. Eff |
From: Andrew C. <cl...@gn...> - 2002-11-06 00:19:14
|
On Tue, Nov 05, 2002 at 04:00:10PM -0500, Mike Diehl wrote: > Well, I'm a bit disappointed. My experience with LVM has been nothing short > of disastrous; I think you'll find LVM2 much more pleasant than LVM1. It's a reimplementation with a very different (minimalist :) architecture. Cheers, Andrew |
From: Michael N. <mic...@co...> - 2002-11-06 00:18:55
|
This is one sad :( email to read, and Im sure it's even more difficult to write. There can't be any winner when public domain refuses a given work. I commend your=20 past and your continuing development effort. Near term: 1. How long will EVMS1.2.0 & kernel2.4 be supported? Looking further out: 1. Is EVMS runtime a throw away? 2. Is EVMS engine to modify for LVM2 support? 3. What will happen to the modular (plugins)? - AIX LVM - OS2 LVM - Device manager (local/san) - etc.. Thanks, Michael. > -----Original Message----- > From: Kevin Corry [mailto:co...@us...]=20 > Sent: Tuesday, November 05, 2002 2:19 PM > To: evm...@li...;=20 > evm...@li... > Cc: lin...@vg... > Subject: [Evms-devel] EVMS announcement >=20 >=20 > Greetings EVMS users, >=20 > On behalf of the EVMS team, we would like to announce a=20 > significant change in direction for the Enterprise Volume=20 > Management System project. >=20 > As many of you may know by now, the 2.5 kernel feature freeze=20 > has come and gone, and it seems clear that the EVMS kernel=20 > driver is not going to be included. With this in mind, we=20 > have decided to rework the EVMS user-space administration=20 > tools (the Engine) to work with existing drivers currently in=20 > the kernel, including (but not necessarily limited > to) device mapper and MD. >=20 > Why make this change? With EVMS being passed over for=20 > inclusion in 2.5, the future of the EVMS kernel driver=20 > becomes very uncertain. We could obviously continue working=20 > on it and keep it up-to-date as a patch against the latest=20 > kernels. Numerous helpful comments and changes were suggested=20 > during the review of the code last month on the kernel=20 > mailing list. We could spend the time to make many of the=20 > desired fixes, including some architectural and interface=20 > changes. However, the one issue that has not been addressed=20 > at length is EVMS's in-kernel volume discovery mechanism. 
> We believe that even if the other changes are made, this will eventually become an issue at a later time. Moving discovery to user-space is certainly a possibility. However, at that point, it would become difficult to differentiate the EVMS driver from the device mapper driver, since they would be performing very similar tasks.
>
> In addition, there would be no need to maintain duplicate MD kernel code in order to provide compatibility with existing software RAID devices. Obviously this duplication has been a significant issue, but it was an unfortunate necessity in order for MD devices to be discovered within the current EVMS kernel framework. With discovery moving to user-space, the EVMS tools can simply be rewritten to communicate with the existing MD driver in the kernel. This approach allows MD to be used directly, without requiring it to be immediately ported to device mapper. However, if the decision is made in the future to make that port, then the EVMS tools should only become simpler.
>
> We will also emphasize that this change has not been made suddenly or without a great deal of thought. We have been contemplating this possibility since shortly after the Ottawa Linux Symposium in July. However, we continued to develop the EVMS kernel driver because of input from our users. We wanted to go ahead and submit the driver and get the opinion of the full community before making this decision. In the last few weeks it has become clear that the current EVMS approach is not what the kernel community was looking for, so we have spent that time determining the feasibility and consequences of making this switch. We have come up with a good initial plan, and everyone involved now agrees that this is the best course of action.
>
> So how will this switch affect the EVMS users?
> Ideally, we want the users' experience with EVMS to remain completely unchanged. Based on our current plans, the user interfaces will not have to change at all, since we don't see any major changes to the Engine's external application interface. The plan is to provide the same, single, coherent method for performing all volume management tasks. This change will be almost transparent for most users. The same features, plugins, and capabilities will be supported.
>
> There will, of course, be some minor changes. Specifically, installing EVMS will be slightly different. It will involve different kernel options than you are used to with the current version. In the 2.5 kernel, all of the major components are already present, so little, if any, kernel patching should be necessary. Since device mapper has not yet been included in the main 2.4 kernel, 2.4 users will still require kernel patches. In addition, some functionality still does not exist in any of the available drivers. Specifically, we may provide extra device mapper modules for features like bad block relocation. The installation of the EVMS Engine tools, on the other hand, should not change significantly from the current method.
>
> The other major difference will be due to the move to user-space discovery. First of all, why make this switch? The most obvious reason is that the kernel drivers become much simpler, since the only things they need to provide are I/O handling and a method for activating the volumes. While disk partitioning and software RAID still perform discovery in the kernel, the trend seems to be to move these tasks to user-space. It is likely that at some point in the future partitioning and MD will be moved out of the kernel as well. However, the drawback to making this switch is losing automatic boot-time volume discovery.
> Activating EVMS volumes will now require a call to a user-space utility, which will need to be added to the system's init scripts in order to activate the volumes on each boot.
>
> In addition, this switch complicates having the root filesystem on an EVMS volume. Currently there is a lot of work being done on adding initramfs to the 2.5 kernel, which will provide a pre-root-fs user-space. This new system should provide a simple method for adding tasks to run during this early user-space, and those who wish to use root-on-EVMS will just need to add the EVMS tools to their initramfs. For 2.4 users, this means using an initial ramdisk (initrd) to provide this same pre-root user-space. Initrd setup is certainly awkward and often distribution-specific, but we will do our best to provide adequate instructions and assistance to those who need help in that situation.
>
> Looking ahead, we *will* continue to *fully* support the 1.2.0 version of EVMS on 2.4 kernels, and possibly release a 1.2.1 version with some recent bug fixes. We will also make a reasonable effort to maintain the current EVMS kernel driver on 2.5. It will not go through any other major changes, but we will try to keep it up-to-date and working with the latest 2.5 releases until the new EVMS tools are complete. At that point, the 2.5 EVMS driver will be dropped. Also, the new enhancements we have been working on recently, such as clustering and volume move, will only be developed under the new Engine model and will not be available for the current 1.2.x code base.
>
> So how long will this take? Currently, we are estimating that we can have the user-space volume activation framework working, along with initial support for most of the plugins, by early 2003.
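The boot-time activation change described in the announcement can be sketched as a small rc-script fragment. This is a minimal illustration, not EVMS documentation: the utility name `evms_activate` and the `/sbin` path are assumptions that may differ per distribution and release.

```shell
#!/bin/sh
# Sketch of boot-time EVMS volume activation (assumed utility name and
# path; adjust for your distribution). An init script would run this
# before local filesystems are mounted.

EVMS_ACTIVATE=/sbin/evms_activate

activate_evms() {
    # Succeed only if the activation utility is installed and runs cleanly.
    if [ -x "$EVMS_ACTIVATE" ]; then
        "$EVMS_ACTIVATE"
    else
        echo "evms_activate not found; EVMS volumes not activated" >&2
        return 1
    fi
}
```

In an init script, a call to `activate_evms` would be followed by the usual `mount -a` once activation succeeds; root-on-EVMS users would run the same step from their initrd or initramfs instead.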
> Certain features, such as BBR and snapshotting, may take longer while the details of their operation are worked out. We will soon open a new CVS tree to hold the new Engine code, leaving the old trees as a repository for bug fixes to the 1.2.x version.
>
> In summary, we feel that this decision is the best way to support our users for the long term. We want to provide EVMS on current and future kernels, and we feel this change provides the best method for achieving that. At the same time, it addresses all of the concerns voiced by the kernel community. If anyone has any questions or concerns about this decision, please email us or the EVMS mailing list at evm...@li.... We will be happy to answer any questions or discuss these changes in more detail.
>
> Thank you,
>
> The EVMS Team
> http://evms.sourceforge.net/
> evm...@li...
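Since 2.4 users still need a device-mapper kernel patch, a small pre-flight check can tell whether the running kernel already has support before installing the new tools. A sketch, assuming device mapper registers itself under the name "device-mapper" in /proc/misc (true for the patches of that era, but verify on your own kernel):

```shell
#!/bin/sh
# Sketch: detect device-mapper support by scanning a /proc listing.
# The file is passed as an argument so the check is easy to exercise;
# at boot one would call: has_device_mapper /proc/misc

has_device_mapper() {
    grep -q 'device-mapper' "$1" 2>/dev/null
}

if has_device_mapper /proc/misc; then
    echo "device-mapper support detected"
else
    echo "no device-mapper support; a 2.4 kernel patch may be required" >&2
fi
```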