From: Neil Brown <neilb@suse.de>
To: Adam Kwolek <adam.kwolek@intel.com>
Cc: linux-raid@vger.kernel.org, dan.j.williams@intel.com,
ed.ciechanowski@intel.com, wojciech.neubauer@intel.com
Subject: Re: [PATCH 1/7] imsm: Update metadata for second array
Date: Fri, 28 Jan 2011 10:24:59 +1000
Message-ID: <20110128102459.6e3a3439@nbeee.brown>
In-Reply-To: <20110126150317.20454.62750.stgit@gklab-128-013.igk.intel.com>
On Wed, 26 Jan 2011 16:03:17 +0100
Adam Kwolek <adam.kwolek@intel.com> wrote:
> When the reshape of the second array is about to start, the external
> metadata should be updated by mdmon in imsm_set_array_state().
> For this purpose, imsm_progress_container_reshape() is reused.
We seem to be failing to communicate...
I told you that you didn't need extra arguments to
imsm_progress_container_reshape because IT ALREADY DOES THE RIGHT THING.
It finds the array that needs to be reshaped next, and it starts the
reshape.
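In outline, that existing loop does this (a minimal standalone sketch of
the selection logic, using toy stand-ins for the imsm structures -- this
is not the real super-intel.c code):

#include <stdio.h>

/* stand-in for imsm_dev/imsm_map: just the fields the logic needs */
struct toy_dev {
	int num_members;	/* disks in this array's current map */
	int migr_state;		/* non-zero while a migration runs */
};

/* simplified mirror of the walk in imsm_progress_container_reshape():
 * do nothing while any array is migrating; otherwise start a reshape
 * on the first array whose disk count lags the one seen before it */
static void progress_container_reshape(struct toy_dev *devs, int n)
{
	int prev_disks = -1;
	int i;

	for (i = 0; i < n; i++) {
		if (devs[i].migr_state)
			return;
		if (prev_disks == -1)
			prev_disks = devs[i].num_members;
		if (prev_disks == devs[i].num_members)
			continue;
		printf("start reshape of array %d: %d -> %d disks\n",
		       i, devs[i].num_members, prev_disks);
		return;
	}
}

int main(void)
{
	/* array 0 already grown to 5 disks, array 1 still at 4 */
	struct toy_dev devs[] = { { 5, 0 }, { 4, 0 } };

	progress_container_reshape(devs, 2);
	return 0;
}

Run against a container where the first array has finished growing, this
picks the second array and starts its reshape -- which is why no pre-scan
and no extra arguments are needed.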
You removed the extra arguments from your previous patch, but put extra
code in imsm_progress_container_reshape to do what it ALREADY DOES.
Have you read and understood the code?
If something isn't working the way you expect, please detail exactly
what.
It is quite possible that the code isn't quite right, but trying to
change it when you don't appear to understand it isn't going to
achieve anything useful.
NeilBrown
>
> Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
> ---
>
>  super-intel.c |   49 ++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 files changed, 48 insertions(+), 1 deletions(-)
>
> diff --git a/super-intel.c b/super-intel.c
> index 8d1f0ad..4e96196 100644
> --- a/super-intel.c
> +++ b/super-intel.c
> @@ -5052,6 +5052,46 @@ static void imsm_progress_container_reshape(struct intel_super *super)
>  	struct imsm_super *mpb = super->anchor;
>  	int prev_disks = -1;
>  	int i;
> +	int any_migration_in_progress = 0;
> +	int disks_count_max = 0;
> +	struct imsm_dev *dev_next = NULL;
> +
> +	/* find maximum number of disks used in any array
> +	 * and check if any migration is in progress
> +	 */
> +	for (i = 0; i < mpb->num_raid_devs; i++) {
> +		struct imsm_dev *dev = get_imsm_dev(super, i);
> +		struct imsm_map *map = get_imsm_map(dev, 0);
> +		struct imsm_map *migr_map = get_imsm_map(dev, 1);
> +		if (migr_map)
> +			any_migration_in_progress = 1;
> +		if (map->num_members > disks_count_max)
> +			disks_count_max = map->num_members;
> +	}
> +
> +	if (any_migration_in_progress == 0) {
> +		/* no migration in progress
> +		 * so we can check if next migration in container
> +		 * should be started
> +		 */
> +		int next_inst = -1;
> +
> +		for (i = 0; i < mpb->num_raid_devs; i++) {
> +			struct imsm_dev *dev = get_imsm_dev(super, i);
> +			struct imsm_map *map = get_imsm_map(dev, 0);
> +			if (map->num_members < disks_count_max) {
> +				next_inst = i;
> +				break;
> +			}
> +		}
> +		if (next_inst >= 0) {
> +			/* found array with smaller number of disks in array,
> +			 * this array should be expanded
> +			 */
> +			dev_next = get_imsm_dev(super, next_inst);
> +			prev_disks = disks_count_max;
> +		}
> +	}
>
>  	for (i = 0; i < mpb->num_raid_devs; i++) {
>  		struct imsm_dev *dev = get_imsm_dev(super, i);
> @@ -5063,6 +5103,9 @@ static void imsm_progress_container_reshape(struct intel_super *super)
>  		if (dev->vol.migr_state)
>  			return;
>
> +		if ((dev_next != NULL) && (dev_next != dev))
> +			continue;
> +
>  		if (prev_disks == -1)
>  			prev_disks = map->num_members;
>  		if (prev_disks == map->num_members)
> @@ -5244,13 +5287,17 @@ static int imsm_set_array_state(struct active_array *a, int consistent)
>  		super->updates_pending++;
>  	}
>
> -	/* finalize online capacity expansion/reshape */
> +	/* manage online capacity expansion/reshape */
>  	if ((a->curr_action != reshape) &&
>  	    (a->prev_action == reshape)) {
>  		struct mdinfo *mdi;
>
> +		/* finalize online capacity expansion/reshape */
>  		for (mdi = a->info.devs; mdi; mdi = mdi->next)
>  			imsm_set_disk(a, mdi->disk.raid_disk, mdi->curr_state);
> +
> +		/* check next volume reshape */
> +		imsm_progress_container_reshape(super);
>  	}
>
>  	return consistent;