Thread: [Aoetools-discuss] aoe target question
From: Brian L. <br...@qu...> - 2005-11-11 01:35:38
Is it possible to have a server export a device through aoe and then use
that same device as an aoe device? (vblade y n ethx /dev/sdb1 :: mount
/dev/etherd/ey.n /somedir)

Here is more info in case you wonder why I would want this madness, and
maybe someone can correct me if I am in error in a couple of assumptions
I make.

I want to create a cluster that is using aoe disks. I would like to use
EVMS/LVM/CLVM to create the volumes. Currently the plan is to use a
couple of servers that will hold 8 disks each and create a raid set.
Basically, raid 5 the drives on the server and then mirror the two
servers as another raid level to the LVM layer.

My assumption is that to have this work correctly, all servers in the
cluster would need to see the drives the same way. The drives that are
local to one server would not be local drives to the other server,
etc... and shelf numbers would be different. Hence, I figure I export
all drives to the network and then mount them under aoe to then actually
use them. Am I completely nuts and just confused the heck out of the
situation?

I can export a disk and use it on a second system, but aoe-stat on the
first system never shows the exported disk. I would like to create an md
volume out of two devices, but maybe there is a better way.

Any help or direction here would be great.

----------
Brian Lanier
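The layering described above might be sketched roughly as follows. All
device names, shelf/slot numbers, and interface names here are
hypothetical, and this assumes the local-export limitation discussed
later in the thread were somehow worked around:

```shell
# On storage server A: build a local RAID 5 from its eight data disks
# (disk names /dev/sd[b-i] and shelf/slot numbers are placeholders).
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]

# Export the RAID 5 set over the wire as AoE shelf 0, slot 0 on eth1.
vbladed 0 0 eth1 /dev/md0

# On storage server B: the same, but exported as shelf 1, slot 0.
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]
vbladed 1 0 eth1 /dev/md0

# On an initiator node: discover both shelves and mirror them.
aoe-discover
aoe-stat        # both e0.0 and e1.0 should be listed here
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      /dev/etherd/e0.0 /dev/etherd/e1.0
```

Note that the mirror in the last step is a per-node md array; as Ed
points out below, md is not cluster-aware, so only one node at a time
could safely assemble it.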
From: Ed L C. <ec...@co...> - 2005-11-11 16:06:05
"Brian Lanier" <br...@qu...> writes:

> Is it possible to have a server export a device through aoe and then use
> that same device as an aoe device? (vblade y n ethx /dev/sdb1 :: mount
> /dev/etherd/ey.n /somedir)

Hi. I'm going to include below my response to a similar question,
copied from the debian bug tracking system.

  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=336772

  Hi, David Martinez Moreno. Yes, there is such a limitation. The
  vblade and aoe driver do work over a network but do not work on a
  single host.

  Because using the vblade and aoe driver in conjunction was never a
  design goal, the lack of that feature is not a bug. But because the
  prevention of such conjoined functionality was not a goal either, I
  imagine that by clever juggling of network devices on the localhost
  (perhaps with pseudo devices and routing tables) it might be
  possible to access the vblade-exported storage with an aoe driver on
  the same host.

I imagine that using UML would make it even easier, but I'm not sure.

...

> Here is more info in case you wonder why I would want this madness and
> maybe someone can correct me if I am in error in a couple of assumptions
> I make.
>
> I want to create a cluster that is using aoe disks. I would like to use
> EVMS/LVM/CLVM to create the volumes. Currently the plan is to use a
> couple of servers that will hold 8 disks each and create a raid set.
> Basically, raid 5 the drives on the server and then mirror the two
> servers as another raid level to the LVM layer.

Cluster LVM (clvm) mirroring wasn't working yet last I heard. Is that
what you were intending to use?

> My assumptions is that to have this work correctly, all servers in the
> cluster would need to see the drives the same way. The drives that are
> local to one server would not be local drives to the other server,
> etc... and shelf numbers would be different. Hence, I figure I export
> all drives to the network and then mount them under aoe to then
> actually use them. Am I completely nuts and just confused the heck out
> of the situation?

I don't think so. Have you seen PVFS yet? That does something like
what you're talking about, where multiple hosts share disks over the
network to create a large, high-performance filesystem available to
all nodes in the cluster.

> I can export a disk and use it on a second system, but aoe-stat on the
> first system never shows the exported disk. I would like to create and
> md volume out of two devices, but maybe there is a better way.

Hmm. I probably don't understand your proposed system, because using
md on each cluster node for mirroring two servers would be impractical
due to the fact that md isn't cluster aware.

> Any help or direction here would be great.

You have lots of possibilities!

--
Ed L Cashin <ec...@co...>
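Ed's "clever juggling of network devices" idea might look something like
the following purely speculative sketch. It uses a veth pair, a kernel
feature that postdates this thread, and the interface names, shelf
number, and device path are all made up; the aoe driver's `aoe_iflist`
module parameter restricts which interfaces it listens on:

```shell
# Speculative: loop vblade traffic back into the local aoe driver by
# serving on one end of a veth pair and listening on the other end.
ip link add vb0 type veth peer name vb1
ip link set vb0 up
ip link set vb1 up

# Serve /dev/sdb1 as hypothetical shelf 9, slot 0 on one end...
vbladed 9 0 vb0 /dev/sdb1

# ...and restrict the aoe driver to the other end of the pair.
modprobe aoe aoe_iflist=vb1
aoe-discover
aoe-stat        # if the juggling works, e9.0 appears here

mount /dev/etherd/e9.0 /somedir
```

Whether this actually works was an open question in the thread; Ed only
says he imagines it might be possible.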
From: Brian L. <br...@qu...> - 2005-11-11 16:22:55
Thanks for the quick response... See below for more questions/intentions.

>-----Original Message-----
>From: Ed L Cashin [mailto:ec...@co...]
>Sent: Friday, November 11, 2005 7:49 AM
>
>"Brian Lanier" <br...@qu...> writes:
>
>> Is it possible to have a server export a device through aoe and then use
>> that same device as an aoe device? (vblade y n ethx /dev/sdb1 :: mount
>> /dev/etherd/ey.n /somedir)
>
>Hi. I'm going to include below my response to a similar question,
>copied from the debian bug tracking system.
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=336772
>
> Hi, David Martinez Moreno. Yes, there is such a limitation. The
> vblade and aoe driver do work over a network but do not work on a
> single host.
>
> Because using the vblade and aoe driver in conjunction was never a
> design goal, the lack of that feature is not a bug. But because the
> prevention of such conjoined functionality was not a goal either, I
> imagine that by clever juggling of network devices on the localhost
> (perhaps with pseudo devices and routing tables) it might be
> possible to access the vblade-exported storage with an aoe driver on
> the same host.
>
>I imagine that using UML would make it even easier, but I'm not sure.

That's kind of what I figured. I just wanted to double check before I
spent a bunch of time banging my head against the wall trying to get it
to work.

>...
>> Here is more info in case you wonder why I would want this madness and
>> maybe someone can correct me if I am in error in a couple of
>> assumptions I make.
>>
>> I want to create a cluster that is using aoe disks. I would like to
>> use EVMS/LVM/CLVM to create the volumes. Currently the plan is to use
>> a couple of servers that will hold 8 disks each and create a raid set.
>> Basically, raid 5 the drives on the server and then mirror the two
>> servers as another raid level to the LVM layer.
>
>Cluster LVM (clvm) mirroring wasn't working yet last I heard. Is that
>what you were intending to use?

I think the plan was to use LVM simply to make it easier to expand the
array later. I was planning on using md to do all of the mirroring and
setup.

>> My assumptions is that to have this work correctly, all servers in the
>> cluster would need to see the drives the same way. The drives that are
>> local to one server would not be local drives to the other server,
>> etc... and shelf numbers would be different. Hence, I figure I export
>> all drives to the network and then mount them under aoe to then
>> actually use them. Am I completely nuts and just confused the heck out
>> of the situation?
>
>I don't think so. Have you seen PVFS yet? That does something like
>what you're talking about, where multiple hosts share disks over the
>network to create a large, high-performance filesystem available to
>all nodes in the cluster.

How does PVFS compare to GFS with regards to setup and use on a network
using aoe drives? I will look at PVFS today.

>> I can export a disk and use it on a second system, but aoe-stat on the
>> first system never shows the exported disk. I would like to create an
>> md volume out of two devices, but maybe there is a better way.
>
>Hmm. I probably don't understand your proposed system, because using
>md on each cluster node for mirroring two servers would be impractical
>due to the fact that md isn't cluster aware.

I was under the impression that EVMS made this possible by making sure
the configuration on each server was the same; sort of mimicking a
cluster-aware md by wrapping it up in some other tools. We are still in
planning/lab stages and trying to get this all sorted out.

Ultimately, we are trying to get to a poor man's SAN. Our starting point
is to use GFS on some shared storage accessible to a host layer that
handles the applications. We intended to put GFS on top of some sort of
LVM layer for future growth, that was on top of md and hardware raid, on
top of a bunch of aoe disks. We are trying to flatten the layers some so
that we didn't have to have a separate layer to aggregate all the aoe
disks and then represent those to the network as another volume. We have
looked at the Coraid RAID Blade solutions, but my boss doesn't like the
idea of a dedicated appliance, especially when you would need to buy two
for fault tolerance.

Is there a better way? We like the aoe idea for flexibility and ease of
use, but really need to have multiple servers read and write to the same
data at the same time.
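The GFS-on-LVM-on-md stack described above might continue roughly like
this on one node, assuming a hypothetical /dev/md1 mirror of the two aoe
shelves already exists; the volume group, cluster, and filesystem names
are placeholders, and the gfs_mkfs invocation follows the GFS tooling of
this era:

```shell
# Put LVM on top of the mirror so the array can be grown later.
pvcreate /dev/md1
vgcreate sanvg /dev/md1
lvcreate -l 100%FREE -n gfslv sanvg

# Make a GFS filesystem with cluster locking and journals for two
# nodes (cluster name "mycluster" and fs name "gfs0" are placeholders).
gfs_mkfs -p lock_dlm -t mycluster:gfs0 -j 2 /dev/sanvg/gfslv
mount -t gfs /dev/sanvg/gfslv /shared
```

The catch Ed raises still applies at the md layer underneath: the mirror
itself is not cluster-aware, so GFS's shared access only helps above
that point.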
From: Ed L C. <ec...@co...> - 2005-11-11 16:57:53
"Brian Lanier" <br...@qu...> writes:

...

> I was under the impression that EVMS made this possible by making sure
> the configuration on each server was the same; sort of mimicking a
> cluster aware md by wrapping it up in some other tools. We are still in
> planning/lab stages and trying to get this all sorted out.

If that's the case I'd like to know more about it. Maybe you could send
me some links in email off the mailing list.

...

> Is there a better way? We like the Aoe idea for flexibility and ease of
> use, but really need to have multiple servers read and write to the same
> data at the same time.

The same *redundant* data at the same time. That's the kicker. If EVMS
can do what you say, then that sounds like a solution for you.

--
Ed L Cashin <ec...@co...>