Re: [SSI-users] Debian Lenny OpenSSI with LVM and RAID 1
From: Cumberland, L. <lon...@ni...> - 2010-08-10 16:52:45
Hello John, and All,

I am looking at various solutions and implementations to test, and have come across mhddfs (which uses the FUSE libraries) as a stackable filesystem over mount points (a usage sketch follows below):

http://romanrm.ru/en/mhddfs
http://svn.uvw.ru/mhddfs/trunk/README

I tried to install it on the main server but got errors, and being afraid of crashing the cluster, I thought that I would ask the list if anyone has some ideas:

-------------------------------------------------
spartan:/mnt# apt-get install mhddfs
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  fuse-utils
The following NEW packages will be installed:
  fuse-utils mhddfs
0 upgraded, 2 newly installed, 0 to remove and 2 not upgraded.
Need to get 0B/38.2kB of archives.
After this operation, 213kB of additional disk space will be used.
Do you want to continue [Y/n]?
WARNING: The following packages cannot be authenticated!
  fuse-utils mhddfs
Authentication warning overridden.
Selecting previously deselected package fuse-utils.
(Reading database ... 102736 files and directories currently installed.)
Unpacking fuse-utils (from .../fuse-utils_2.7.4-1.1+lenny1_i386.deb) ...
Selecting previously deselected package mhddfs.
Unpacking mhddfs (from .../mhddfs_0.1.12-1_i386.deb) ...
Processing triggers for man-db ...
Setting up fuse-utils (2.7.4-1.1+lenny1) ...
creating fuse group...
udev active, skipping device node creation.
invoke-rc.d: WARNING: Service udev has no entry in rc.nodeinfo
invoke-rc.d: Starting only on initnode
Usage: /etc/init.d/udev {start|stop|restart|force-reload}
invoke-rc.d: initscript udev, action "reload" failed.
dpkg: error processing fuse-utils (--configure):
 subprocess post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of mhddfs:
 mhddfs depends on fuse-utils; however:
  Package fuse-utils is not configured yet.
dpkg: error processing mhddfs (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 fuse-utils
 mhddfs
E: Sub-process /usr/bin/dpkg returned an error code (1)
spartan:/mnt#
-------------------------------------------------

I would like to get the FUSE libraries loaded, unless the currently loaded OpenSSI stack can already handle this idea in some way, since it seems that I would otherwise have to install DRBD, and I do not yet know enough about CFS to work with it heavily.

Thanks and have a great day,
Lonnie Cumberland, Prof. Physicist
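For reference, once the package does configure, mhddfs pools its member directories at mount time. A minimal sketch, following the README linked above; the /mnt paths here are hypothetical:

-------------------------------------------------
# Make sure the FUSE kernel module is loaded first.
modprobe fuse

# Pool two underlying directories into one virtual mount point;
# new files are written to the first member with enough free space.
mhddfs /mnt/disk1,/mnt/disk2 /mnt/pool -o allow_other

# Roughly equivalent /etc/fstab entry:
# mhddfs#/mnt/disk1,/mnt/disk2 /mnt/pool fuse defaults,allow_other 0 0
-------------------------------------------------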
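As for the fuse-utils failure in the log above: the postinst aborts because invoke-rc.d cannot run "reload" on the udev init script (the usage line shows it only accepts start/stop/restart/force-reload), not because FUSE itself is broken. One blunt workaround, assuming the udev reload is the only failing step; inspect the script first and adapt to what is actually there:

-------------------------------------------------
spartan:/mnt# editor /var/lib/dpkg/info/fuse-utils.postinst
# ...find the 'invoke-rc.d udev reload' line and append '|| true'
# so a failed reload no longer aborts the script, then finish up:
spartan:/mnt# dpkg --configure -a
-------------------------------------------------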
> -----Original Message-----
> From: John Hughes [mailto:jo...@Ca...]
> Sent: Tuesday, August 10, 2010 12:31 PM
> To: Cumberland, Lonnie
> Cc: Scott Walters; Openssi users
> Subject: Re: [SSI-users] Debian Lenny OpenSSI with LVM and RAID 1
>
> Cumberland, Lonnie wrote:
> > Now I am looking at the disk drives that are located on each node
> > for mapping into a single filespace, so that all space appears to be
> > a single drive to the users. For this, I was reading that OpenSSI
> > appears to use DRBD (if I understand correctly), which allows it to
> > mount the filespace on each node. If this is so, then I am guessing
> > that perhaps I could use LVM to bring all of the drives together
> > into a volume.
> >
> > If that is not possible, then perhaps I can use a stackable
> > filesystem from FUSE like XtreemFS, GFS, or some other. Any ideas?
> >
> > I am also going to be adding RAID 1 to the system so that the main
> > drive has a complete mirror and failover: in case the main drive
> > goes down, the secondary will pick up.
>
> Your problem with Linux RAID and LVM will be making sure that things
> get failed over from node to node in the event of a node crash.
>
> This works:
>
> Use DRBD to mirror between the nodes.
> Use CFS to make the filesystem on the DRBD device available to all
> the nodes.
>
> This also works:
>
> Use shared disk hardware (SAS, SCSI, Fibre Channel, whatever) to make
> the disks available to all the nodes.
> Use CFS on the disks to make the filesystem available to all the
> nodes.
>
> This has worked in the past, though I have no practical experience
> with it:
>
> Use a cluster-aware filesystem (Lustre, even NFS) and mount the
> filesystem on all the nodes.
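For the "use DRBD to mirror between the nodes" step John describes, a minimal /etc/drbd.conf resource sketch in DRBD 8.x syntax; the second hostname, backing partitions, and addresses are hypothetical and must match each node's uname -n and actual disks:

-------------------------------------------------
resource r0 {
    protocol C;                  # synchronous replication
    on spartan {
        device    /dev/drbd0;    # replicated block device
        disk      /dev/sdb1;     # local backing partition
        address   192.168.1.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7788;
        meta-disk internal;
    }
}
-------------------------------------------------

Then, on both nodes, "drbdadm create-md r0" and "drbdadm up r0"; on one node only, "drbdadm -- --overwrite-data-of-peer primary r0". After that /dev/drbd0 can carry the filesystem that CFS makes available to all the nodes, as John describes.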
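The RAID 1 mirror of the main drive mentioned earlier is a separate, single-node matter: it protects against a disk failure, not the node-to-node failover John warns about. A sketch with mdadm, with LVM on top so volumes can be grown later; the device names and sizes are hypothetical:

-------------------------------------------------
# Mirror two partitions into one md device.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat                 # watch the initial resync

# Optional: layer LVM on the mirror for resizable volumes.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 50G -n data vg0
mkfs.ext3 /dev/vg0/data
-------------------------------------------------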