From: Doug Ledford <dledford@redhat.com>
To: "Labun, Marcin" <Marcin.Labun@intel.com>
Cc: Neil Brown <neilb@suse.de>,
Linux RAID Mailing List <linux-raid@vger.kernel.org>,
"Williams, Dan J" <dan.j.williams@intel.com>,
"Ciechanowski, Ed" <ed.ciechanowski@intel.com>,
"Hawrylewicz Czarnowski,
Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@intel.com>
Subject: Re: More Hot Unplug/Plug work
Date: Wed, 28 Apr 2010 13:47:55 -0400
Message-ID: <4BD874CB.5040803@redhat.com>
In-Reply-To: <905EDD02F158D948B186911EB64DB3D11C954788@irsmsx503.ger.corp.intel.com>
On 04/28/2010 12:08 PM, Labun, Marcin wrote:
>
>
>> -----Original Message-----
>> From: Doug Ledford [mailto:dledford@redhat.com]
>> Sent: Tuesday, April 27, 2010 6:45 PM
>> To: Linux RAID Mailing List; Neil Brown; Labun, Marcin; Williams, Dan J
>> Subject: More Hot Unplug/Plug work
>>
>> So I pulled down Neil's git repo and started working from his hotunplug
>> branch, which was his version of my hotunplug patch. I had to do a
>> couple minor fixes to it to make it work. I then simply continued on
>> from there. I have a branch in my git repo that tracks his hotunplug
>> branch and is also called hotunplug. That's where my current work is
>> at.
>>
>> What I've done since then:
>>
>> 1) I've implemented a new config file line type: DOMAIN
>>    a) Each DOMAIN line must have at least one valid path= entry, but
>>       may have more than one path= entry. path= entries are file
>>       globs and must match something in /dev/disk/by-path
>
> DOMAIN is defined per container or raid volume for native metadata.
No, a DOMAIN can encompass more than a single volume, array, or container.
> Each DOMAIN can have more than one path, so it is actually the list of paths that defines whether a given disk belongs to the domain or not.
Correct.
> Do you plan to allow the same path to be assigned to different containers (so a path is shared between domains)?
I had planned that a single DOMAIN can encompass multiple containers.
So I didn't plan on a single path being in multiple DOMAINs, but I did
plan that a single domain could allow a device to be placed in multiple
different containers based upon need. I don't have checks in place to
make sure the same path isn't listed in more than one domain, although
that would be a next step.
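Such a check could be as simple as this rough sketch (the struct and
field names here are illustrative, not what's in the tree, and note it
would only catch byte-identical path= strings, not globs that merely
overlap, which is the harder problem you raise below):

    #include <string.h>

    /* Illustrative types only -- the real config structures differ. */
    struct path_ent   { char *path; struct path_ent *next; };
    struct domain_ent { struct path_ent *paths; struct domain_ent *next; };

    /* Return -1 if the same path= string appears under more than one
     * DOMAIN line, 0 otherwise. */
    static int check_domain_overlap(struct domain_ent *domains)
    {
        struct domain_ent *d, *d2;
        struct path_ent *p, *p2;

        for (d = domains; d; d = d->next)
            for (p = d->paths; p; p = p->next)
                for (d2 = d->next; d2; d2 = d2->next)
                    for (p2 = d2->paths; p2; p2 = p2->next)
                        if (strcmp(p->path, p2->path) == 0)
                            return -1;
        return 0;
    }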
> If so, the domains will have some or all paths overlapping, and some containers will share some paths.
> Going further, this means that a new disk can potentially be grabbed by more than one container (because of the shared paths).
> For example:
> DOMAIN1: path=a path=b path=c
> DOMAIN2: path=a path=d
> DOMAIN3: path=d path=c
> In this example, disks from path c can appear in DOMAIN 1 and DOMAIN 3, but not in DOMAIN 2.
What exactly is the use case for overlapping paths in different domains?
I'm happy to rework the code to support it if there's a valid use case,
but so far my design goal has been to have a path only appear in one
domain, and to then perform the appropriate action based upon that
domain. So if more than one container array was present in a single
DOMAIN entry (let's assume the domain entry path encompassed all 6
SATA ports on a motherboard and therefore covered the entire platform
capability of the IMSM motherboard BIOS), then we would add the new
drive as a spare to one of the IMSM arrays. It's not currently
deterministic which one we would add it to, but that would change as the
code matures and we would search for a degraded array that we could add
it to. Only if there are no degraded arrays would we add it as a spare
to one of the arrays (non-deterministic which one). If we add it as a
spare to one of the arrays, then monitor mode can move that spare around
as needed later based upon the spare-group settings. Currently, there
is no correlation between spare-group and DOMAIN entries, but that might
change.
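In code terms, the policy I'm describing is roughly this (a sketch
only, assuming mdadm's struct mdstat_ent list with its ->next link;
array_is_degraded() is a placeholder helper, not code in the tree):

    struct mdstat_ent *pick_target(struct mdstat_ent *candidates)
    {
        struct mdstat_ent *ent;

        /* First preference: a degraded array in the domain that can
         * put the new device to use immediately. */
        for (ent = candidates; ent; ent = ent->next)
            if (array_is_degraded(ent))
                return ent;

        /* Otherwise take the first candidate (effectively arbitrary
         * today); monitor mode can migrate the spare later based upon
         * the spare-group settings. */
        return candidates;
    }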
> So, in the case of Monitor, sharing a spare device will be on a per-path basis.
Currently, monitor mode still uses spare-group for controlling what
arrays can share spares. It does not yet check any DOMAIN information.
> The same goes for new disks in the hot-plug feature.
>
>
> In your repo domain_ent is a struct that contains domain paths.
> The function arrays_in_domain returns a list of mdstat entries that are in the same domain as the constituent device name.
> (so it requires devname and domain as input parameters).
> In which case two containers will share the same DOMAIN?
You get the list of containers, not just one. See above about searching
the list for a degraded container and adding to it before a healthy
container.
> It seems that this function should return a list of mdstat entries that share a path to which the devname device belongs.
> So, a given new device may be grabbed by any container (or native volume) in that list.
Yes. There can be more than one array/container that this device might
go to.
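In the hot-plug path that would be wired up roughly like this (again a
sketch, reusing the pick_target() idea from above; add_spare_to() is a
made-up name):

    struct mdstat_ent *ents = arrays_in_domain(devname, domain);
    struct mdstat_ent *target = pick_target(ents);

    if (target)
        add_spare_to(target, devname);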
> Can you send a config file example?
The first two entries are good; the third is a known bad line that I
leave in there to make sure I don't partition the wrong thing.
DOMAIN path=pci-0000:00:1f.2-scsi-[2345]:0:0:0 action=partition
       table=/etc/mdadm.table program=sfdisk
DOMAIN path=pci-0000:00:1f.2-scsi-[2345]:0:0:0-part? action=spare
DOMAIN path=pci-0000:00:1f.2-scsi-[2345]:0:0:0*
       path=pci-0000:00:1f.2-scsi-[2345]:0:0:0-part* action=partition
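To spell out what those globs cover: the first line matches the
whole-disk by-path links for SATA ports 2 through 5 (e.g.
/dev/disk/by-path/pci-0000:00:1f.2-scsi-2:0:0:0) and partitions any new
disk per /etc/mdadm.table using sfdisk; the second matches the
single-digit partition links (-part1 and so on) and adds them as
spares. The third is bad precisely because its globs match both the
whole disks and all of their partitions with action=partition, and
partitioning a partition is exactly the "wrong thing" mentioned above.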
--
Doug Ledford <dledford@redhat.com>
GPG KeyID: CFBFF194
http://people.redhat.com/dledford
Infiniband specific RPMs available at
http://people.redhat.com/dledford/Infiniband