* [linux-lvm] LVM + raid + san
From: Phillip Susi @ 2010-11-05 1:26 UTC
To: linux-lvm
I was wondering about using LVM to manage disks on a SAS SAN with one or
more multi-disk enclosures and multiple host servers. It looks like LVM
has the capability to manage all of the disks as PVs and coordinate
between the hosts to allow any given host to mount any given LV at any
time, but I could not figure out how raid would fit into the picture.
Ideally you want to group the disks into a few raid5 or raid6 arrays,
and then slice them up into logical volumes. As far as I know the
device-mapper raid5/6 support is still highly experimental, so most
people just use mdadm to create a raid array and use that as an LVM PV,
but in a SAN environment you wouldn't be able to activate the raid PV
on more than one host, would you?
Can this be done with mdadm? And how is the dm raid5/6 support coming
along, so that LVM can directly manage the individual disks as PVs while
still having fault tolerance?
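For reference, the conventional single-host layering I mean is roughly
this sketch (device names are hypothetical):

  # build a raid5 array from four disks, then layer LVM on top of it
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
  pvcreate /dev/md0              # the whole array becomes a single PV
  vgcreate vg_san /dev/md0       # volume group carved from the raid PV
  lvcreate -L 100G -n lv_data vg_san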
* Re: [linux-lvm] LVM + raid + san
From: Stuart D. Gathman @ 2010-11-05 4:39 UTC
To: LVM general discussion and development
On Thu, 4 Nov 2010, Phillip Susi wrote:
> I was wondering about using LVM to manage disks on a SAS SAN with one or more
> multi-disk enclosures and multiple host servers. It looks like LVM has the
> capability to manage all of the disks as PVs and coordinate between the hosts
> to allow any given host to mount any given LV at any time, but I could not
> figure out how raid would fit into the picture.
I would run LVM on the SAN server, exporting LVs as SAN units, and each host
would get a virtual SAN disk to do with as it pleased, including running
LVM on it. Then you don't have to deal with locking issues for a shared
volume group. If your SAN server is embedded, it must already have some sort
of management interface to parcel out disk space as virtual disks.
If you don't like its interface, then consider replacing it with a
general purpose host running LVM as described above. That said, many
do use shared volume groups with no problem.
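For concreteness, a sketch of that arrangement with lvm2 plus
scsi-target-utils (the target name, VG, and sizes are all made up):

  # on the SAN server: carve one LV per client host out of the big VG...
  lvcreate -L 500G -n host1_disk vg_san
  # ...and export it over iSCSI as that host's virtual disk (tgtd running)
  tgtadm --lld iscsi --mode target --op new --tid 1 \
      --targetname iqn.2010-11.example:host1
  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
      --backing-store /dev/vg_san/host1_disk
  tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL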
Generally, your SAN (whether embedded or a dedicated general-purpose host)
already has the raid built in. The exported virtual disks are
raid-reliable. If not, replace the SAN. The whole point of a SAN is to not
worry about physical disks anymore on the client systems. If you had multiple
SANs on separate physical LANs, you could stripe them for super speed, but
otherwise raid is already built in. And you can bond multiple 1000BT
interfaces with a gigabit switch to get really fast transfer from
the SAN anyway.
If the SAN server is a general-purpose host, I would run raid10, or the
Linux md extensions to it that get most of the benefits with fewer disks:
http://en.wikipedia.org/wiki/Non-standard_RAID_levels
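For instance, md's raid10 "far" layout gets mirror redundancy plus
near-raid0 sequential reads from only two disks (hypothetical devices):

  # md raid10 with the "far" (f2) layout needs just two disks
  mdadm --create /dev/md0 --level=10 --layout=f2 \
      --raid-devices=2 /dev/sdb /dev/sdc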
raid5 has the read-modify-write problem.
I would not use the device-mapper raid, as you note.
Caveat: I've never actually set up a SAN, just used them.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
* Re: [linux-lvm] LVM + raid + san
From: Phillip Susi @ 2010-11-07 0:51 UTC
To: LVM general discussion and development
My understanding of a SAN is that you take a few drive enclosures and a
few servers and plug them all into a SAS expander so all of the servers
can see all of the disks. You seem to be talking about having all of
the disks on one server that then serves them over Ethernet with iSCSI.
I wouldn't want to do that because it adds a good deal of overhead to
the disk access and introduces a single point of failure.
I'd rather just use LVM to manage all of the disks as part of a single
volume group so you can immediately transfer an LV from one server to
another, but I can't work out how to get raid in that arrangement
without having LVM do it with the dm-raid5 support.
On 11/05/2010 12:39 AM, Stuart D. Gathman wrote:
> I would run LVM on the SAN server, exporting LVs as SAN units, and each host
> would get a virtual SAN disk to do with as it pleased, including running
> LVM on it. Then you don't have to deal with locking issues for a shared
> volume group. If your SAN server is embedded, it must already have some sort
> of management interface to parcel out disk space as virtual disks.
> If you don't like its interface, then consider replacing it with a
> general purpose host running LVM as described above. That said, many
> do use shared volume groups with no problem.
> [...]
* Re: [linux-lvm] LVM + raid + san
From: Eugene Vilensky @ 2010-11-07 3:38 UTC
To: LVM general discussion and development
On Sat, Nov 6, 2010 at 7:51 PM, Phillip Susi <psusi@cfl.rr.com> wrote:
> My understanding of a SAN is that you take a few drive enclosures and a few
> servers and plug them all into a SAS expander so all of the servers can see
> all of the disks.
This is not the generally accepted definition of a Storage Area Network.
* Re: [linux-lvm] LVM + raid + san
From: allan @ 2010-11-07 4:03 UTC
To: LVM general discussion and development
Have you considered using mdadm for the RAID configuration and lvm to carve it up?
Phillip Susi wrote:
> My understanding of a SAN is that you take a few drive enclosures and a
> few servers and plug them all into a SAS expander so all of the servers
> can see all of the disks. You seem to be talking about having all of
> the disks on one server that then serves them over Ethernet with iSCSI.
> I wouldn't want to do that because it adds a good deal of overhead to
> the disk access and introduces a single point of failure.
>
> I'd rather just use LVM to manage all of the disks as part of a single
> volume group so you can immediately transfer an LV from one server to
> another, but I can't work out how to get raid in that arrangement
> without having LVM do it with the dm-raid5 support.
> [...]
* Re: [linux-lvm] LVM + raid + san
From: Phillip Susi @ 2010-11-07 19:55 UTC
To: allane, LVM general discussion and development
On 11/07/2010 12:03 AM, allan wrote:
> Have you considered using mdadm for the RAID configuration and lvm to
> carve it up?
As I said in my original message, I understand that is what most people
do, but multiple servers cannot have mdadm assemble the same array at the
same time, AFAIK.
* Re: [linux-lvm] LVM + raid + san
From: Stuart D. Gathman @ 2010-11-07 22:27 UTC
To: Phillip Susi; +Cc: LVM general discussion and development
On Sat, 6 Nov 2010, Phillip Susi wrote:
> My understanding of a SAN is that you take a few drive enclosures and a few
> servers and plug them all into a SAS expander so all of the servers can see
> all of the disks. You seem to be talking about having all of the disks on one
> server that then serves them over Ethernet with iSCSI. I wouldn't want to do
> that because it adds a good deal of overhead to the disk access and
> introduces a single point of failure.
Your idea is not typical of a SAN, where the point is to centralize dealing
with physical disks. The SAN server can be internally highly redundant,
addressing the single-point-of-failure issue.
However, thinking about how to do what you want: how about getting enclosures
with built-in RAID? For instance, there are low-cost NAS and USB enclosures
with built-in RAID-1, and perhaps some more expensive ones with RAID-10.
(I still wouldn't use RAID5.) Then you can just use them with LVM without
as much worry about disk failure. If an enclosure fails, however, that
PV will be offline (disrupting your service) until you replace it, moving
the disks to a new enclosure so you don't lose data, other than the chaos
from any LVs that are partially offline.
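The LVM side of that is then just plain pvcreate/vgcreate over the LUNs
the enclosures export, something like (device and VG names hypothetical):

  # each raid enclosure appears as one already-redundant block device
  pvcreate /dev/sdb /dev/sdc            # one LUN per enclosure
  vgcreate vg_shared /dev/sdb /dev/sdc
  lvcreate -L 200G -n web_data vg_shared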
> I'd rather just use LVM to manage all of the disks as part of a single volume
> group so you can immediately transfer an LV from one server to another, but I
> can't work out how to get raid in that arrangement without having LVM do it
> with the dm-raid5 support.
You'd have some serious locking issues. With RAID5, each server would have to
lock each chunk before writing to it, since a partial-stripe write involves a
read-modify-write cycle: read the old data and old parity, then compute and
write the new data and new parity. This would create serious overhead. And
you were complaining about SAN server overhead! :-) RAID5 *must* be
centralized. Your scheme might work with RAID10, but then you'd still have to
ensure that writes to mirror legs don't get out of order, with updates from
multiple servers flying over the wire.
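The parity update behind that read-modify-write cycle is just an XOR
identity; a toy illustration with made-up byte values:

  # raid5 small write: new parity = old parity XOR old data XOR new data,
  # so old data and old parity must be read before anything can be written
  old_data=0x5a; new_data=0x3c; old_parity=0x77
  new_parity=$(( old_parity ^ old_data ^ new_data ))
  printf 'new parity: 0x%x\n' "$new_parity"   # prints 0x11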
I still think you want a traditional SAN with an enterprise SAN server with
lots of built-in redundancy (or build your own). You'll help feed the family
of a hard-working salesman :-)
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
* Re: [linux-lvm] LVM + raid + san
From: Stuart D. Gathman @ 2010-11-09 22:15 UTC
To: LVM general discussion and development; +Cc: Phillip Susi
On Sun, 7 Nov 2010, Stuart D. Gathman wrote:
> You'd have some serious locking issues. With RAID5, each server would have to
> lock each chunk before writing to it, since a partial-stripe write involves a
> read-modify-write cycle: read the old data and old parity, then compute and
> write the new data and new parity. This would create serious overhead. And
> you were complaining about SAN server overhead! :-) RAID5 *must* be
> centralized. Your scheme might work with RAID10, but then you'd still have to
> ensure that writes to mirror legs don't get out of order, with updates from
> multiple servers flying over the wire.
Actually, this would be an interesting driver to develop. If each
server primarily works on its own LV, then there shouldn't be much
lock contention. You would need a lock manager, and a special network
RAID driver that uses the lock manager to coordinate updates.
Each server would hold the lock for a chunk until it is needed by another
server. With the assumed mostly independent access, this should be
rare, and the locking should be optimized with that in mind. I.e.,
if you already hold the lock, just go ahead an update. If not, then
notify the holder via the lock manager, and wait until you do hold it.
You could probably avoid the lock manager by using a broadcast-based
protocol ("Who has chunk 12345678?").
Oh wait, this is an LVM list...
Is anything like this contemplated for device-mapper? There is already
locking involved with shared VGs on a traditional SAN.
It is an interesting idea to avoid a traditional SAN as a single point of
failure (the switch connecting the hosts and disks would still be a single
point of failure, but a switch is simpler than a SAN server). All hosts would
have to be trusted.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
* Re: [linux-lvm] LVM + raid + san
From: Phillip Susi @ 2010-11-10 0:21 UTC
To: Stuart D. Gathman; +Cc: LVM general discussion and development
On 11/09/2010 05:15 PM, Stuart D. Gathman wrote:
> Actually, this would be an interesting driver to develop. If each
> server primarily works on its own LV, then there shouldn't be much
I think the existing dm raid driver would work for this; it just needs
to be finished and integrated with LVM. Unlike mdadm, the dm raid
driver is activated only on the physical extents of the physical volumes
that make up the logical volume, rather than on the whole disk or
partition, so each host should be able to use the cluster locking daemon
to make sure that only one host activates any given LV at a time. If a
server goes down, another can activate the LV and take over.
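With clustered LVM, that activation handshake looks something like the
following sketch (assumes clvmd is running on every host; names are
hypothetical):

  # mark the VG clustered so clvmd arbitrates activation across hosts
  vgchange -c y vg_shared
  # on the host that should own the LV: activate it exclusively
  lvchange -aey vg_shared/lv_data
  # if that host dies, a surviving node runs the same command to take over
  lvchange -aey vg_shared/lv_data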
> It is an interesting idea to avoid a traditional SAN as a single point of
> failure (the switch connecting the hosts and disks would still be a single
> point of failure, but a switch is simpler than a SAN server). All hosts would
> have to be trusted.
And switches can be redundant. I understand that at least some SAS
drives have two redundant ports that can each be connected to a
different expander, and the servers can be connected to both expanders.