
Tony Asleson Gris Ge

LibStorageMgmt User Guide

1. Introduction

The LibStorageMgmt (LSM) package is a storage array independent Application
Programming Interface (API). It provides a stable and consistent API that
allows developers to programmatically manage different storage arrays and
leverage the hardware accelerated features that they provide.

The library is intended to be used as a building block for other, higher-level
management tools and applications. It is also intended as a tool that system
administrators can use directly to manually manage storage and to automate
storage management tasks with scripts.

libStorageMgmt is intended to manage storage devices ranging from hardware RAID cards to SAN and NAS arrays (see the Terminology section below).

2. Features

  • List storage pools/volumes (LUNs)/access groups/file systems.
  • Create & delete pools/volumes/access groups/file systems/NFS exports.
  • Grant & remove access to volumes for access groups/initiators.
  • Replicate volumes via snapshots, clones, and copies.
  • Create & delete access groups and edit members of a group.
  • Resize volumes.

Please refer to Storage Array Support Status for details.

3. Provides

  • Stable C and Python API for client application and plug-in developers.
  • A command line interface which utilizes the library (lsmcli).
  • A daemon which executes the plug-in (lsmd).
  • Simulator plug-in which allows testing of client applications (sim).
  • Plug-in architecture for interfacing with arrays.

4. License

The LibStorageMgmt package is available as free software under the
GNU Lesser General Public License (LGPL).

5. Warning

This library and its associated tools have the ability to destroy any and all
data located on the arrays they manage. It is highly recommended to develop
and test applications and scripts against the storage simulator plug-in to
remove any logic errors before working with production systems. Testing
applications and scripts on actual non-production hardware before deploying to
production is also strongly encouraged where possible.

6. Architecture

The design of the library provides process separation between the client
and the plug-in by means of inter-process communication (IPC). This prevents
bugs in the plug-in from crashing the client application. It also allows
plug-in writers to license their plug-ins as they choose. When a client opens
the library and passes a URI, the client library inspects the URI to determine
which plug-in should be used. The plug-ins are technically stand-alone
applications, but they are designed to have a file descriptor passed to them
on the command line. The client library opens the appropriate Unix domain
socket, which causes the daemon to fork and exec the plug-in. At that point
the client library has a point-to-point communication channel with the
plug-in. The daemon can be restarted without affecting existing clients. While
the client has the library open for that plug-in, the plug-in process is
running. After the client sends one or more commands and closes the
connection, the plug-in process cleans up and exits. For a sequence diagram
see here
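
A rough way to see this separation in action is to look at the per-plug-in
Unix domain sockets the daemon creates and watch a plug-in process appear
during a call. The socket directory below is an assumption (it can vary by
distribution and lsmd configuration), so treat this as a sketch only:

    # List the per-plug-in Unix domain sockets created by lsmd
    # (directory is an assumption; check your lsmd configuration)
    ls -l /var/run/lsm/ipc/

    # While a client call is in flight, the matching plug-in process runs,
    # e.g. the simulator plug-in during:
    lsmcli -u sim:// list --type POOLS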

7. Terminology

Storage arrays use a wide variety of terminology to describe similar functionality. Whatever definitions are chosen, they will be confusing to someone. libStorageMgmt uses the following terminology.

Term: System
Meaning: Represents a storage array or direct attached storage RAID. Examples include:
  • A hardware RAID card, e.g. LSI MegaRAID
  • A storage area network (SAN), e.g. EMC VNX, NetApp Filer
  • A software solution running on commodity hardware, e.g. targetd, Nexenta
Synonym: None

Term: Pool
Meaning: A group of storage space; file systems or volumes are typically created from a pool.
Synonym: StoragePool (SNIA terminology)

Term: Volume
Meaning: Storage Area Network (SAN) storage arrays can expose a volume to the Host Bus Adapter (HBA) over different transports (FC/iSCSI/FCoE/etc.). The host OS treats it as a block device (one volume can be exposed as many disks if multipath[2] is enabled).
Synonym: LUN (Logical Unit Number)[1], StorageVolume (SNIA terminology), virtual disk

Term: Filesystem
Meaning: A Network Attached Storage (NAS) storage array can expose a filesystem to the host OS via an IP network using the NFS or CIFS protocol. The host OS treats it as a mount point or a folder containing files, depending on the client operating system.
Synonym: None

Term: Disk
Meaning: A physical disk holding the data. Pools typically consist of one or more disks.
Synonym: DiskDrive (SNIA terminology)

Term: Initiator
Meaning: In Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE), the initiator is the WWPN (World Wide Port Name)[3] and/or WWNN (World Wide Node Name). In iSCSI, the initiator is the IQN (iSCSI Qualified Name)[4]. In NFS or CIFS, the initiator is the host name or IP address of the host.
Synonym: None

Term: Access group
Meaning: A collection of iSCSI/FC/FCoE initiators which are granted access to one or more storage volumes. This ensures that storage volumes are accessible only by the specified initiator(s).
Synonym: Initiator group (igroup), Host group

Term: Volume mask
Meaning: Exposing a volume to a specified access group. The libStorageMgmt library currently does not support logical unit masking with the ability to choose a specific logical unit number (LUN); it lets the storage array select the next available LUN for assignment. Make sure you read the OS, storage array, and HBA documentation if you are configuring boot from SAN or masking 256 or more volumes. Volumes are masked to all target ports; future versions of libStorageMgmt may allow you to specify a specific target port.
Synonym: LUN mapping, LUN masking

Term: Volume unmask
Meaning: The reverse of volume mask.
Synonym: LUN unmap, LUN unmask

Term: Clone
Meaning: A point in time, read-writeable, space-efficient copy of data.
Synonym: Read-writeable snapshot

Term: Copy
Meaning: A full bitwise copy of the data (occupies full space).
Synonym: None

Term: Mirror SYNC
Meaning: I/O is blocked until it has reached both the source and target storage systems. There is no data difference between the source and target storage systems.
Synonym: None

Term: Mirror ASYNC
Meaning: I/O is blocked only until it has reached the source storage system. The source storage system copies the changed data to the target system at a predefined interval, so there can be a small data difference between source and target.
Synonym: None

Term: Snapshot
Meaning: A point in time (PIT), read-only, space-efficient copy of a file system.
Synonym: Read-only snapshot

Term: Child dependency
Meaning: Some arrays have an implicit relationship between the origin (parent volume or file system) and the child (e.g. snapshot, clone). For example, you cannot delete the parent if it has one or more dependent children. The API provides methods to determine whether any such relationship exists and a method to remove the dependency by replicating the required blocks.
Synonym: None

[1] http://en.wikipedia.org/wiki/Logical_unit_number
[2] http://en.wikipedia.org/wiki/Multipath_I/O
[3] https://en.wikipedia.org/wiki/World_Wide_Port_Name
[4] https://en.wikipedia.org/wiki/ISCSI

8. Installation

Currently, we only provide compiled RPMs for libStorageMgmt.

The library is packaged into separate RPMs:

  • libstoragemgmt -- Daemon, CLI util and basic files.

  • libstoragemgmt-python -- Python client libraries.

  • libstoragemgmt-XXX-plugin -- Plug-in for a specific type of array.

  • libstoragemgmt-devel -- Development files for C language.

The lsmd daemon must be started before using libStorageMgmt.

8.1 Stable Release

  • On Fedora >= 17 and RHEL 6/CentOS 6 (with the EPEL repo), libStorageMgmt
    is available in the distribution repositories. These commands will install it:

    yum search libstoragemgmt
    yum install libstoragemgmt-<desired package>
    
  • If you are on an RPM based distribution that isn't supported, please use
    these commands to build the RPMs yourself:

    tar -xzf libstoragemgmt-<version>.tar.gz 
    cd libstoragemgmt-<version>
    ./configure && make rpm
    

8.2 Weekly Snapshot Build

9. Configuration

All LibStorageMgmt needs is:

  • The 'lsmd' daemon is running.

    service lsmd start
    
  • URI (Uniform Resource Identifier)
    The URI identifies which plug-in LibStorageMgmt should use, which IP
    address to connect to, and which parameters to pass. The syntax of the
    URI is:

    plugin://<username>@host:<port>/?<query string parameters>
    plugin+ssl://<username>@host:<port>/?<query string parameters>
    
  • Username/Password
    A valid username and password with sufficient privileges (see the example
    after this list).
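
For example, with lsmcli the URI can be given on the command line and the
password supplied at a prompt, or both can be set through environment
variables. The variable names below (LSMCLI_URI, LSMCLI_PASSWORD) and the
'-P' prompt flag are assumptions about the lsmcli version in use; verify
with 'lsmcli --help':

    # Give the URI on the command line and prompt for the password
    lsmcli -u 'ontap://admin@filer.example.com' -P list --type SYSTEMS

    # Or export them once for the shell session
    # (variable names are assumptions; see 'lsmcli --help')
    export LSMCLI_URI='ontap://admin@filer.example.com'
    export LSMCLI_PASSWORD='secret'
    lsmcli list --type SYSTEMS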

9.1 Simulator Configuration

LibStorageMgmt provides these storage array simulators for development purposes:

  • Python coded simulator
    The URI is:

    sim://
    
  • C coded simulator.
    The URI is:

    simc://
    

No additional configuration is needed for the simulators.
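
As a quick sanity check that the lsmd daemon and the simulator plug-in are
working, you can run lsmcli against the Python simulator; exact flags may
differ slightly between lsmcli versions:

    # Requires the lsmd daemon to be running
    lsmcli -u sim:// list --type SYSTEMS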

9.2 EMC CX/VNX/VMAX Configuration

EMC provides a proxy SMI-S provider for managing CX/VNX/VMAX storage
arrays.

Please contact EMC Support to install and configure a SMI-S provider service
on your server for your storage arrays.

Once that is done, you can use one of these URIs with LibStorageMgmt:

    smispy://username@host
    smispy+ssl://username@host
    # The 'username' is the username used in EMC SMI-S provider.
    # The default 'username' is 'admin' and the default password
    # is '#1Password'

You can also add a 'system=<system_id>' query string parameter to filter out
unneeded storage arrays, like this:

    smispy://username@host?system=<system_id>
    # The 'system_id' could be found via systems() call or 
    # 'lsmcli list --type SYSTEMS' command.

An 'SSL error: certificate verify failed' error means LSM failed to verify
the EMC SMI-S provider's SSL certificate against the system-wide PKI/x509
store. Please contact EMC Support to properly set up the self-signed certificate.

If you want to ignore the SSL verification error, which is not recommended,
add 'no_ssl_verify=yes' to your URI, like this:

    smispy+ssl://username@host?no_ssl_verify=yes

The following are the steps we used to install and configure the EMC SMI-S
provider. They are provided to save you time, with NO WARRANTY.
Please contact EMC Support instead if you are working on a production system.

  1. Download SMI-S provider from support.emc.com

  2. Execute these commands to install required rpms (if on x86_64)

    yum install glibc.i686 libgcc.i686
    
  3. Install SMI-S provider (follow the EMC documentation if available):

    ./se76012_install.sh -install -smi
    
  4. This command will start EMC SMI-S provider if not started:

    /opt/emc/ECIM/ECOM/bin/ECOM -d
    
  5. Add Storage Array login information to EMC SMI-S provider:

    /opt/emc/ECIM/ECOM/bin/TestSmiProvider
    addsys
    # Follow the instructions of 'addsys'.
    # You need to input both SPA's and SPB's management IP addresses.
    # During our testing, 'addsys' failed if only one IP was defined.
    
  6. If you want to remove an array from the EMC SMI-S provider but do not
    know the path requested by 'rmsys', try the 'tv' command.

The above steps were tested on:

  • RHEL 6.4 x86_64
  • SMI-S Provider 4.6.0.3 for SMI-S 1.5 for Linux
    • se76012-Linux-i386-SMI.tar.gz
    • symcli-smi64-7.6.0.1707-12.3.x86_64
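
Once the provider is running and the array has been registered via 'addsys',
one hedged way to confirm that LibStorageMgmt can reach it is to list the
systems through the smispy plug-in; 'smis-host' below is a placeholder for
the server running the EMC SMI-S provider:

    # '-P' prompts for the SMI-S provider password
    lsmcli -u 'smispy+ssl://admin@smis-host?no_ssl_verify=yes' -P list --type SYSTEMS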

9.3 NetApp ONTAP Configuration

LibStorageMgmt can manage a NetApp ONTAP storage array in two ways:

  • Native NetApp ONTAP SDK (preferred).
    You need to install the libstoragemgmt-ontap-plugin.noarch package if you
    did not install from the distribution tarball.

Please contact NetApp support to enable the http/https management interface, or
use these commands at your own risk:

    options httpd.enable on
    options httpd.admin.enable on

The URI would be:

    ontap://username@host
    ontap+ssl://username@host
    # The 'username' is the management username of NetApp ONTAP.
    # The 'host' is the IP/hostname of the ONTAP filer management interface.
  • NetApp ONTAP SMI-S provider.
    NetApp provides a proxy SMI-S provider. You need to install the
    libstoragemgmt-smis-plugin.noarch package if you did not install from the
    distribution tarball.
    Please contact NetApp support to install and configure a SMI-S provider
    service on your server. The URI would be:

    smispy://username@host:5988?namespace=root/ontap
    smispy+ssl://username@host:5989?namespace=root/ontap
    # The 'username' is the username of the server you installed the SMI-S provider on.
    # The 'host' is the IP/hostname of the server you installed the SMI-S provider on.
    

The following are the steps we used to install and configure the NetApp SMI-S
provider. They are provided to save you time, with NO WARRANTY.
Please contact NetApp Support instead if you are working on a production system.

  1. Download SMI-S provider from NetApp website

  2. Execute these commands to install required rpms (if on x86_64)

    sudo yum install -y ncompress glibc.i686 tog-pegasus-libs.i686 libzip.i686
    
  3. Install SMI-S provider (follow the NetApp documentation if available):

    tar xf smisagent-5-0.tar
    sudo ./install_smisproxy
    
  4. Configure the SMI-S provider:

    su -    # root login is required
    export PEGASUS_ROOT=/usr/ontap/smis/pegasus
    export PEGASUS_HOME=/usr/ontap/smis/pegasus
    export PEGASUS_PLATFORM=LINUX_IX86_GNU
    export PATH=/usr/bin:$PATH:/usr/ontap/smis/pegasus/bin
    export LD_LIBRARY_PATH=/usr/ontap/smis/pegasus/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
    smis cimserver start
    smis addsecure <netapp_filer_ip> <user> <passwd>
    cimconfig -p -s enableAuthentication=true
    cimuser -a -u pegasus -w pegasus   
    # Change the 'pegasus' to your favorite SMI-S user/pass
    
  5. To start the SMI-S provider automatically at boot, add these lines to your /etc/rc.local:

    export PEGASUS_ROOT=/usr/ontap/smis/pegasus
    export PEGASUS_HOME=/usr/ontap/smis/pegasus
    export PEGASUS_PLATFORM=LINUX_IX86_GNU
    export PATH=/usr/bin:$PATH:/usr/ontap/smis/pegasus/bin
    export LD_LIBRARY_PATH=/usr/ontap/smis/pegasus/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
    smis cimserver start
    

The above steps were tested on:

  • RHEL 6.5 x86_64
  • NetApp SMI-S Provider 5.0 (smisagent-5-0.tar)
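
With the native ONTAP plug-in, a similar hedged sanity check is to list the
pools directly from the filer; the host name and user below are placeholders:

    lsmcli -u 'ontap://root@filer.example.com' -P list --type POOLS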

9.4 LSI MegaRAID Configuration

LSI provides an SMI-S provider which is installed on the server containing the
MegaRAID cards. Please contact LSI support to install and configure a SMI-S
provider on your MegaRAID equipped server.

For libStorageMgmt you will need to install the libstoragemgmt-smis-plugin.noarch package if you did not install from the distribution tarball.

Once done, the URI will be:

    smispy+ssl://username@host?namespace=root/LsiMr13
    # The 'username' is either root, pegasus, or another account that
    # has privileges to access the open-pegasus (tog-pegasus) daemon.
    # The 'host' is ip/hostname of your MegaRAID equipped server.

An 'SSL error: certificate verify failed' error means LSM failed to verify
the LSI SMI-S provider's SSL certificate against the system-wide PKI/x509
store. Please contact LSI support to properly set up the self-signed certificate.

If you want to ignore the SSL verification error, which is not recommended,
add 'no_ssl_verify=yes' to your URI, like this:

    smispy+ssl://username@host?namespace=root/LsiMr13&no_ssl_verify=yes

The following are the steps we used to install and configure the LSI SMI-S
provider. They are provided to save you time, with NO WARRANTY.
Please contact LSI Support instead if you are working on a production system.

  • Download SMI-S provider from LSI website.
  • Execute these commands to install required rpms:

    yum install tog-pegasus
    
  • Change SELinux to permissive mode.
    Follow the Red Hat documentation if you want to disable SELinux permanently:

    setenforce 0
    # For security concerns about disabling SELinux,
    # please contact Red Hat support or LSI support.
    
  • Install the SMI-S provider (follow the LSI documentation if available):

    yum install Lib_Utils-1.00-09.noarch.rpm 
    yum install lsi_mr_hhr-00.38.0003-rhel6.x86_64.rpm
    
  • Recompile the MOF files:

    /opt/lsi/mof/compile_mofs.sh
    
  • Restart tog-pegasus:

    service tog-pegasus restart
    # The tog-pegasus daemon provides http/https
    # access to the LSI SMI-S provider.
    
  • If you want the LSI SMI-S provider CIM server to start on boot, make sure
    SELinux is set to permissive or disabled mode and configure the
    'tog-pegasus' daemon to start on boot as well, as shown below.
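
A minimal sketch of enabling tog-pegasus at boot on a SysV-init system such as
RHEL 6 (on systemd-based distributions 'systemctl enable tog-pegasus' would be
the equivalent):

    chkconfig tog-pegasus on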

The above steps were tested on:

  • RHEL 6.4 x86_64
  • Lib_Utils-1.00-09.noarch
  • lsi_mr_hhr-00.39.0003-rhel6.x86_64

9.5 IBM DS/SVC/XIV Configuration

IBM DS/XIV/SVC arrays use an embedded SMI-S provider which runs on the
storage array itself. Please contact IBM support to enable the SMI-S
feature.

Once done, the URI is:

    smispy://username@host:5988/?namespace=root/ibm
    smispy+ssl://username@host:5989/?namespace=root/ibm
    # The 'username' is the username of the management account of the
    # IBM array.
    # The 'host' is the IP/hostname of the management port of the IBM array.

An 'SSL error: certificate verify failed' error means LSM failed to verify
the IBM SMI-S provider's SSL certificate against the system-wide PKI/x509
store. Please contact IBM support to properly set up the self-signed certificate.

If you want to ignore the SSL verification error, which is not recommended,
add 'no_ssl_verify=yes' to your URI, like this:

    smispy+ssl://username@host:5989/?namespace=root/ibm&no_ssl_verify=yes

9.6 Huawei HVS Configuration

Huawei HVS uses an embedded SMI-S provider which runs on the storage
array itself. Please contact Huawei support to enable the SMI-S
feature.

Once done, the URI is:

    smispy://username@host:5988/?namespace=root/huawei
    smispy+ssl://username@host:5989/?namespace=root/huawei
    # The 'username' is the username of the management account of the
    # Huawei array.
    # The 'host' is the IP/hostname of the management port of the Huawei array.

An 'SSL error: certificate verify failed' error means LSM failed to verify
the Huawei SMI-S provider's SSL certificate against the system-wide PKI/x509
store. Please contact Huawei support to properly set up the self-signed certificate.

If you want to ignore the SSL verification error, which is not recommended,
add 'no_ssl_verify=yes' to your URI, like this:

    smispy+ssl://username@host:5989/?namespace=root/huawei&no_ssl_verify=yes

9.7 SMIS Enabled Device General Configuration

Please contact your storage vendor's support for enabling the SMI-S service.
Once done, the URI is:

    smispy://username@host:5988/?namespace=root/vendor
    smispy+ssl://username@host:5989/?namespace=root/vendor
    # The 'username' is the username of the management account of the
    # SMI-S service.
    # The 'host' is the IP/hostname of the management port of the
    # SMI-S service.
    # 5988 and 5989 are the default port numbers of the SMI-S service;
    # change them if your SMI-S service uses other ports.

An 'SSL error: certificate verify failed' error means LSM failed to verify
your SMI-S provider's SSL certificate against the system-wide PKI/x509 store.
Please contact the SMI-S vendor's support to properly set up the self-signed certificate.

If you want to ignore the SSL verification error, which is not recommended,
add 'no_ssl_verify=yes' to your URI, like this:

    smispy+ssl://username@host:5989/?namespace=root/vendor&no_ssl_verify=yes

10. Command Line Tool Usage

TODO: Provide quick sample of lsmcli.
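
In the meantime, here is a minimal sketch of lsmcli usage against the
simulator plug-in; the exact subcommands and flags may differ between lsmcli
versions, so check 'lsmcli --help' and [CLI_Usage]:

    # List systems, pools and volumes on the simulator
    lsmcli -u sim:// list --type SYSTEMS
    lsmcli -u sim:// list --type POOLS
    lsmcli -u sim:// list --type VOLUMES

    # Create a 1 GiB volume in one of the reported pools
    # ('POOL_ID' is a placeholder; flags may vary by version)
    lsmcli -u sim:// volume-create --name testvol --size 1GiB --pool POOL_ID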

Please refer to [CLI_Usage] for details.

11. Python API Usage

Please refer to [Python_API_Usage] for details.

Sample Python code:

#!/usr/bin/python2
import lsm

lsm_cli_obj = lsm.Client("sim://")      # Make connection.

pools = lsm_cli_obj.pools()             # Enumerate storage pools.

for p in pools:                         # Use pool information.
    print 'pool name:', p.name, 'freespace:', p.free_space

if lsm_cli_obj is not None:
    lsm_cli_obj.close()                 # Close the connection.
    print 'We closed'

12. C API Usage

Please refer to [C_API_Usage] for details.
TODO: Provide quick sample of C code.

13. Logging

All error messages are logged via syslog.

14. Security

The daemon utilizes local Unix domain sockets for inter-process communication.
No network accessible sockets are presented, and no special firewall rules
need to be configured. Plug-ins execute in separate processes with
non-administrator privileges. A non-administrator user who has the appropriate
credentials for the array can manage it.


Related

Wiki: CLI_Usage
Wiki: C_API_Usage
Wiki: Python_API_Usage