From: NeilBrown <neilb@suse.de>
To: Jes.Sorensen@redhat.com
Cc: linux-raid@vger.kernel.org, lukasz.dorau@intel.com,
adam.kwolek@intel.com, dledford@redhat.com
Subject: Re: [PATCH 1/3] Remove race for starting container devices.
Date: Sat, 22 Oct 2011 11:49:06 +1100 [thread overview]
Message-ID: <20111022114906.2e26e66b@notabene.brown> (raw)
In-Reply-To: <1319204000-6661-2-git-send-email-Jes.Sorensen@redhat.com>
On Fri, 21 Oct 2011 15:33:18 +0200 Jes.Sorensen@redhat.com wrote:
> From: Jes Sorensen <Jes.Sorensen@redhat.com>
>
> This moves the lock handling out of Incremental_container() and relies
> on the caller holding the lock. This prevents a conflict with a
> follow-on mdadm command which may try to launch the device in
> parallel.
>
> Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Hi Jes,
I have applied the 7 patches you sent.
However I have a question mark over part of this one:
> @@ -451,9 +456,10 @@ int Incremental(char *devname, int verbose, int runstop,
> devnum = fd2devnum(mdfd);
> close(mdfd);
> sysfs_free(sra);
> - rv = Incremental(chosen_name, verbose, runstop,
> - NULL, homehost, require_homehost, autof,
> - freeze_reshape);
> + rv = Incremental_container(st, chosen_name, homehost,
> + verbose, runstop, autof,
> + freeze_reshape);
> + map_unlock(&map);
> if (rv == 1)
> /* Don't fail the whole -I if a subarray didn't
> * have enough devices to start yet
You have replaced a call to Incremental with a call to Incremental_container.
I feel that is significant enough that I would have liked to see it
discussed in the changelog comment.
You seem to have open-coded Incremental and then removed all the bits that
you don't need in this case - which is the vast majority of it.
But you didn't include the call to ->load_content().
So I have put it back in because I'm sure it must be harmless (because it was
there already), but it may not be needed.
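
To make the pattern concrete: the change boils down to "the caller holds the
map lock across the container start".  Below is a self-contained toy sketch of
that shape - map_lock/map_unlock/Incremental_container are stand-ins with stub
bodies, and the lock-file path and device name are made up, so treat it as an
illustration rather than the real mdadm code:

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

/* Stand-ins for mdadm's map_lock()/map_unlock().  The real functions manage
 * mdadm's map file; this toy just flocks a scratch file so that two
 * concurrent processes (e.g. udev-spawned "mdadm -I" runs) exclude each
 * other. */
static int lockfd = -1;

static int map_lock(void)
{
	lockfd = open("/tmp/toy-mdadm-map.lock", O_CREAT | O_RDWR, 0600);
	if (lockfd < 0)
		return -1;
	return flock(lockfd, LOCK_EX);
}

static void map_unlock(void)
{
	if (lockfd >= 0) {
		flock(lockfd, LOCK_UN);
		close(lockfd);
		lockfd = -1;
	}
}

/* After the patch: this routine no longer takes or drops the lock itself,
 * it relies on the caller already holding it. */
static int Incremental_container(const char *chosen_name)
{
	printf("starting container %s with the map lock held\n", chosen_name);
	return 0;
}

/* Caller, modelling the quoted hunk: lock, start the container, unlock. */
int main(void)
{
	int rv;

	if (map_lock() < 0)
		return 1;
	rv = Incremental_container("/dev/md/imsm0");
	map_unlock();
	return rv;
}

Running two copies of this at once shows the second blocking in map_lock()
until the first calls map_unlock(), which is the kind of serialisation the
patch relies on to keep a parallel "mdadm -I" out of the way.
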
If you have thoughts on this, please let me know.
Thanks,
NeilBrown