Well, maybe DLM could help in building a distributed lock server for GFS?
That certainly has to be part of the plan. Without distributed locking it's not clear that OpenGFS is much good for clustering, since it has potential single points of failure.
Mark L. Ferrell
DLM would be useful, but it will have a performance impact. If your cluster is performance-oriented, you need to invest in DMEP-capable hardware.
The basic idea of DMEP is that it exists at the hardware level. So if your RAID is dead, then... well... so what.
Silicon-Gear has working DMEP solutions in their Mercury line of RAID arrays. These toys are pretty kewl. They have multiple SCSI ports across the back and use a PCI Mezzanine Module as an upgrade port, so you can add either two FC ports or two more SCSI ports to the back of the unit. Each port is internally arbitrated, so you basically have a SAN in a box.
DotHill, and one other company (I don't remember who), are also working on DMEP support within their RAID units.
I have actively been using DMEP for a little over 2 months now on our GFS install.
Well, I'm not an expert, but maybe, instead of using IP for message passing (when attempting to get a lock), we could use Fibre Channel media for low-latency communication?
Also, would it be possible to use an FC/IP implementation for such a thing?
DMEP is probably nice. Some (all?) controllers from Infortrend (www.infortrend.com) support DMEP - however, it was still buggy in May of last year. Infortrend is affordable as far as these items go, and they do FCAL with IDE backends too.
That said, a GFS setup based on JBOD/software RAID may be interesting too from a price/performance point of view.
Also take note that to avoid failure points, one needs two RAID controllers. My experience with Active-Active Mylex FCAL RAID was not that good: a single LUN in use through both controllers had only 10% of the performance of the case where just one controller was in use. Maybe cache ping-pong over the backplane? This was with two slices on the same LUN. I can imagine that separate LUNs per controller would work fine, while accessing the same block through two controllers may be even more problematic.
Has anyone used DMEP instead of DLM?
The so-called atomic test-and-set and test-and-clear operations are anything but atomic.
Ladies and gentlemen,
I have 2 Red Hat 8 boxes with LP7000 Emulex
HBAs in them. They connect via SC fibre
to a Compaq hub and then to what Corpsys
says is a Eurologic enclosure. I want to use
GFS and test it out, but...
a) Is the Compaq hub a Vixel rebadge?
b) The documentation (in the man pages) makes
reference to manipulating Vixels;
does this mean I have to init the Vixel in
Currently I have the Linux Emulex drivers,
which work, as far as I can tell, under normal
Red Hat 7.3 and 8. But how do I init the
drives I have in the enclosure?
I need something akin to Sun's probe -a...
What kind of messages show up in your logs (dmesg) when you insert the Emulex drivers?
As for the Compaq FC hub, I have no idea.
Which documentation talks about manipulating Vixels?
Do a cat /proc/partitions before and after loading the Emulex drivers, and see if there are any differences.
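A minimal sketch of that check (the module load itself is commented out since it needs root, and the `lpfc` module name is an assumption, not from the thread):

```shell
# Snapshot the block-device list before loading the HBA driver.
cat /proc/partitions | sort > /tmp/parts.before

# modprobe lpfc   # assumed Emulex module name; run as root

# Snapshot again after the load.
cat /proc/partitions | sort > /tmp/parts.after

# Lines marked ">" are devices that appeared after the driver loaded.
diff /tmp/parts.before /tmp/parts.after || true
```

If the diff is empty, the driver never found any drives on the loop.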
Mar 5 22:10:27 oraclu01 kernel: Emulex LightPulse FC SCSI/IP 4.20p
Mar 5 22:10:27 oraclu01 kernel: PCI: Found IRQ 11 for device 02:06.0
Mar 5 22:10:27 oraclu01 kernel: PCI: Sharing IRQ 11 with 02:0e.0
Mar 5 22:10:27 oraclu01 kernel: scsi1: Emulex LPFC (LP7000) SCSI on PCI bus 02 device 30 irq 11
I have to use 'insmod -f' with Emulex's pre-compiled code, and
the kernel is considered tainted.
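The forced load and a taint check look roughly like this (the module object name is a placeholder, not the actual Emulex file):

```shell
# A forced insmod flips the kernel taint flag: 0 means clean,
# nonzero means a forced or non-GPL module has been loaded.
cat /proc/sys/kernel/tainted

# The forced load itself would be something like (path is an assumption):
# insmod -f lpfcdd.o
```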
My statement about Vixels comes from the gfs man page at /distribution/man/gfs.8.
/proc/partitions does not show any new partitions, and it
should not, since I cannot fdisk the partitions. I
cannot get the kernel used by Red Hat's anaconda
installer to take in the Emulex driver: unresolved
symbols...? Why? How does the anaconda installer
You might want to try an older RH release. When you insert the Emulex modules, they should automatically init the drives in the enclosure. There isn't anything you should have to do other than inserting the Emulex module.
The only thing you might have to do to the vixels is use them as a STOMITH device, to fence different nodes. I would start out with manual STOMITH first, then build from there.
I'm not at all familiar with Red Hat, so I can't help you with any distribution-specific problems. If you are using the RPMs, you might want to try the tar file and see if you have any more luck with that.
You might want to try sending an email to one of the mailing lists; that is where we handle most of the problems. I am probably the only one who checks the forum.