Thanks for the quick response... See below for more comments inline.
>From: Ed L Cashin [mailto:ecashin@...]
>Sent: Friday, November 11, 2005 7:49 AM
>"Brian Lanier" <brian@...> writes:
>> Is it possible to have a server export a device through aoe and then
>> mount that same device as an aoe device on the same host? (vblade y n
>> ethx /dev/sdb1 :: mount /dev/etherd/ey.n /somedir)
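For what it's worth, the two-host version of the commands in the question looks like this (a sketch only; the shelf/slot numbers, interface names, and paths are examples):

```shell
# On the exporting server: serve /dev/sdb1 as shelf 0, slot 1 on eth0
vblade 0 1 eth0 /dev/sdb1 &

# On a *second* host: load the aoe driver and find the exported device
modprobe aoe
aoe-discover                  # from aoetools; triggers AoE discovery
aoe-stat                      # should now list e0.1
mount /dev/etherd/e0.1 /somedir
```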
>Hi. I'm going to include below my response to a similar question,
>copied from the debian bug tracking system.
> Hi, David Martinez Moreno. Yes, there is such a limitation. The
> vblade and aoe driver do work over a network but do not work on a
> single host.
> Because using the vblade and aoe driver in conjunction was never a
> design goal, the lack of that feature is not a bug. But because the
> prevention of such conjoined functionality was not a goal either, I
> imagine that by clever juggling of network devices on the localhost
> (perhaps with pseudo devices and routing tables) it might be
> possible to access the vblade-exported storage with an aoe driver on
> the same host.
>I imagine that using UML would make it even easier, but I'm not sure.
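One untested way to attempt the "clever juggling of network devices" Ed describes would be a virtual ethernet pair, so that vblade's frames loop back into the aoe driver on the same box. This is purely a sketch under assumptions: veth-style pseudo devices must be available, the interface names are made up, and it may simply not work.

```shell
# Create a veth pair: frames sent on veth0 arrive on veth1 and vice versa
ip link add veth0 type veth peer name veth1
ip link set veth0 up
ip link set veth1 up

# Export on one end of the pair; the aoe driver listens on the other
vblade 0 1 veth0 /dev/sdb1 &
modprobe aoe
aoe-discover
aoe-stat                      # e0.1 might show up if the loopback works
```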
That's kind of what I figured. I just wanted to double check before I spent
a bunch of time banging my head against the wall trying to get it to work.
>> Here is more info in case you wonder why I would want this madness, and
>> someone can correct me if I am in error in a couple of assumptions I have made.
>> I want to create a cluster that is using aoe disks. I would like to use
>> EVMS/LVM/CLVM to create the volumes. Currently the plan is to use a pair
>> of servers that will hold 8 disks each and create a raid set. We would
>> raid 5 the drives on each server and then mirror the two servers as another
>> raid level presented to the LVM layer.
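As I understand the plan, the layering would look roughly like this (the device names and shelf numbers are examples, and this glosses over the md cluster-awareness question):

```shell
# On each storage server: RAID 5 across the 8 local disks
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]

# Export each server's array over AoE; give each server its own shelf
vblade 0 0 eth0 /dev/md0 &    # server A: shelf 0
vblade 1 0 eth0 /dev/md0 &    # server B: shelf 1

# On a host that sees both exports: mirror them, then layer LVM on top
modprobe aoe && aoe-discover
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      /dev/etherd/e0.0 /dev/etherd/e1.0
pvcreate /dev/md1
vgcreate vg_aoe /dev/md1
```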
>Cluster LVM (clvm) mirroring wasn't working yet last I heard. Is that
>what you were intending to use?
I think the plan was to use LVM simply to make it easier to expand the volumes
later. I was planning on using md to do all of the mirroring and setup.
>> My assumption is that to have this work
>> correctly, all servers in the cluster would need to see the drives the same
>> way. The drives that are local to one server would not be local to
>> the other server, etc., and shelf numbers would be different. Hence, I
>> figure I export all drives to the network and then mount them as aoe
>> devices before actually using them. Am I completely nuts and just confused
>> about the situation?
>I don't think so. Have you seen PVFS yet? That does something like
>what you're talking about, where multiple hosts share disks over the
>network to create a large, high-performance filesystem available to
>all nodes in the cluster.
How does PVFS compare to GFS with regard to setup and use on a network
using aoe drives? I will look at PVFS today.
>> I can export a disk and use it on a second system, but aoe-stat on the first
>> system never shows the exported disk. I would like to create an md device
>> out of two devices, but maybe there is a better way.
>Hmm. I probably don't understand your proposed system, because using
>md on each cluster node for mirroring two servers would be impractical
>due to the fact that md isn't cluster aware.
I was under the impression that EVMS made this possible by making sure the
configuration on each server was the same; sort of mimicking a cluster-aware
md by wrapping it up in some other tools. We are still in the planning/lab
stages and trying to get this all sorted out.
Ultimately, we are trying to get to a poor man's SAN. Our starting point was to
use GFS on some shared storage accessible to a host layer that handles the
applications. We intended to put GFS on top of some sort of LVM layer for
future growth, which was on top of md and hardware raid, on top of a set of
aoe disks. We are trying to flatten the layers some so that we don't have
to have a separate layer to aggregate all the aoe disks and then re-export
those to the network as another volume. We have looked at the Coraid
Blade solutions, but my boss doesn't like the idea of a dedicated appliance,
especially when you would need to buy two for fault tolerance.
Is there a better way? We like the Aoe idea for flexibility and ease of use,
but really need to have multiple servers read and write to the same data at
the same time.