* raid1 resync stuck
@ 2015-09-09 19:17 Jes Sorensen
  2015-09-09 19:48 ` Jes Sorensen
  2015-09-11 17:44 ` Jes Sorensen
  0 siblings, 2 replies; 5+ messages in thread
From: Jes Sorensen @ 2015-09-09 19:17 UTC (permalink / raw)
  To: NeilBrown; +Cc: majianpeng, linux-raid, nate.dailey

Neil,

We're chasing a case where the raid1 code gets stuck during resync. Nate
is able to reproduce it much more reliably than I can, so I'm attaching
his reproducer script. Basically, run it on an existing raid1 with an
internal bitmap on rotating disks.

Nate was able to bisect it to 79ef3a8aa1cb1523cc231c9a90a278333c21f761,
the original iobarrier rewrite patch, and it can be reproduced in
current Linus' top of trunk a794b4f3292160bb3fd0f1f90ec8df454e3b17b3.

In Nate's analysis it hangs in raise_barrier():

static void raise_barrier(struct r1conf *conf, sector_t sector_nr)
{
	spin_lock_irq(&conf->resync_lock);

	/* Wait until no block IO is waiting */
	wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,
			    conf->resync_lock);

	/* block any new IO from starting */
	conf->barrier++;
	conf->next_resync = sector_nr;

	/* For these conditions we must wait:
	 * A: while the array is in frozen state
	 * B: while barrier >= RESYNC_DEPTH, meaning resync reach
	 *    the max count which allowed.
	 * C: next_resync + RESYNC_SECTORS > start_next_window, meaning
	 *    next resync will reach to the window which normal bios are
	 *    handling.
	 * D: while there are any active requests in the current window.
	 */
	wait_event_lock_irq(conf->wait_barrier,
			    !conf->array_frozen &&
			    conf->barrier < RESYNC_DEPTH &&
			    conf->current_window_requests == 0 &&
			    (conf->start_next_window >=
			     conf->next_resync + RESYNC_SECTORS),
			    conf->resync_lock);

crash> r1conf 0xffff882028f3e600 | grep -e array_frozen -e barrier -e start_next_window -e next_resync
  barrier = 0x1,                      (conf->barrier < RESYNC_DEPTH)
  array_frozen = 0x0,                 (!conf->array_frozen)
  next_resync = 0x3000,
  start_next_window = 0x3000,

i.e. next_resync == start_next_window, so the wait can never finish,
since start_next_window is smaller than next_resync + RESYNC_SECTORS.
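
To make that concrete, here is the wait condition with the values from
the crash dump plugged in (RESYNC_SECTORS and RESYNC_DEPTH are assumed
to be 128 and 32 as defined in drivers/md/raid1.c;
current_window_requests isn't in the dump, so it is taken to be 0 here):

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long sector_t;

int main(void)
{
	const sector_t RESYNC_SECTORS = 128;	/* RESYNC_BLOCK_SIZE >> 9, assumed */
	const int RESYNC_DEPTH = 32;		/* assumed */

	bool array_frozen = false;		/* 0x0 in the dump */
	int barrier = 1;			/* 0x1 in the dump */
	int current_window_requests = 0;	/* not in the dump, assumed */
	sector_t next_resync = 0x3000;
	sector_t start_next_window = 0x3000;

	bool can_proceed = !array_frozen &&
			   barrier < RESYNC_DEPTH &&
			   current_window_requests == 0 &&
			   start_next_window >= next_resync + RESYNC_SECTORS;

	/* Prints 0: 0x3000 >= 0x3000 + 128 is false, so raise_barrier()
	 * sits in wait_event_lock_irq() forever. */
	printf("%d\n", can_proceed);
	return 0;
}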

Have you seen anything like this?

Cheers,
Jes


* Re: raid1 resync stuck
  2015-09-09 19:17 raid1 resync stuck Jes Sorensen
@ 2015-09-09 19:48 ` Jes Sorensen
  2015-09-11 17:44 ` Jes Sorensen
  1 sibling, 0 replies; 5+ messages in thread
From: Jes Sorensen @ 2015-09-09 19:48 UTC (permalink / raw)
  To: NeilBrown; +Cc: majianpeng, linux-raid, nate.dailey


Jes Sorensen <Jes.Sorensen@redhat.com> writes:
> Neil,
>
> We're chasing a case where the raid1 code gets stuck during resync. Nate
> is able to reproduce it much more reliably than me - so attaching his
> reproducing script. Basically run it on an existing raid1 with internal
> bitmap on rotating disk.
>
> Nate was able to bisect it to 79ef3a8aa1cb1523cc231c9a90a278333c21f761,
> the original iobarrier rewrite patch, and it can be reproduced in
> current Linus' top of trunk a794b4f3292160bb3fd0f1f90ec8df454e3b17b3.
>
> In Nate's analysis it hangs in raise_barrier():
>
> static void raise_barrier(struct r1conf *conf, sector_t sector_nr)
> {
> 	spin_lock_irq(&conf->resync_lock);
>
> 	/* Wait until no block IO is waiting */
> 	wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,
> 			    conf->resync_lock);
>
> 	/* block any new IO from starting */
> 	conf->barrier++;
> 	conf->next_resync = sector_nr;
>
> 	/* For these conditions we must wait:
> 	 * A: while the array is in frozen state
> 	 * B: while barrier >= RESYNC_DEPTH, meaning resync reach
> 	 *    the max count which allowed.
> 	 * C: next_resync + RESYNC_SECTORS > start_next_window, meaning
> 	 *    next resync will reach to the window which normal bios are
> 	 *    handling.
> 	 * D: while there are any active requests in the current window.
> 	 */
> 	wait_event_lock_irq(conf->wait_barrier,
> 			    !conf->array_frozen &&
> 			    conf->barrier < RESYNC_DEPTH &&
> 			    conf->current_window_requests == 0 &&
> 			    (conf->start_next_window >=
> 			     conf->next_resync + RESYNC_SECTORS),
> 			    conf->resync_lock);
>
> crash> r1conf 0xffff882028f3e600 | grep -e array_frozen -e barrier -e start_next_window -e next_resync
>   barrier = 0x1,                      (conf->barrier < RESYNC_DEPTH)
>   array_frozen = 0x0,                 (!conf->array_frozen)
>   next_resync = 0x3000,
>   start_next_window = 0x3000,
>
> ie. next_resync == start_next_window, which will never wake up since
> start_next_window is smaller than next_resync + RESYNC_SECTORS.
>
> Have you seen anything like this?
>
> Cheers,
> Jes

Grrr - doing too many things in parallel :( I knew I forgot something -
here is Nate's script.

Jes

[-- Attachment #2: md_stuck_resync.sh --]
[-- Type: text/plain, Size: 1201 bytes --]

#!/bin/bash
# Reproducer: repeatedly fail and re-add a raid1 member while writes are
# in flight, aborting the resulting resyncs, until the resync hangs.
# Run against an existing raid1 with an internal bitmap, e.g.
# "md_stuck_resync.sh md0".

# Wait for any running resync/recovery to finish.
wait_idle() {
    while [ "$(cat /sys/block/$sysmd/md/sync_action)" != "idle" ] ; do
        sleep 10s
    done
}

[ $# -ne 1 ] && echo "usage: $0 <md>" && exit 1
sysmd=$1
devmd="/dev/$(udevadm info -q name --path=/sys/block/$sysmd)"
[ ! -b "$devmd" ] && echo "failed to find md devnode" && exit 1

# Pick the first member disk of the array.
devsd=
for m in /sys/block/$sysmd/md/dev-*; do
    realpath=$(cd "$m/block" && pwd -P)
    devsd="/dev/$(udevadm info -q name --path=$realpath)"
    break
done
[ ! -b "$devsd" ] && echo "failed to find member disk devnode" && exit 1

iter() {
    echo
    echo "Remove $devsd"
    mdadm -f $devmd $devsd
    sleep 1s
    mdadm -r $devmd $devsd

    # Dirty the bitmap while the array is degraded.
    echo "Start 1GB IO"
    dd if=/dev/urandom of=$devmd bs=1M count=1024 &

    # Re-add and fail the member several times, aborting each resync by
    # writing "idle" to sync_action.
    for j in 1 2 3 4 5; do
        sleep 15s

        echo
        echo "$j: Add $devsd"
        mdadm -a $devmd $devsd
        sleep 1s

        echo
        echo "$j: Remove $devsd"
        mdadm -f $devmd $devsd
        echo "idle" > /sys/block/$sysmd/md/sync_action
        sleep 1s
        mdadm -r $devmd $devsd
    done

    echo
    echo "Add $devsd"
    mdadm -a $devmd $devsd

    echo
    echo "Wait for $sysmd recovery..."
    wait_idle
}

i=1
while true ; do
    echo "$(date) ********** Iteration $i **********"
    iter
    i=$(($i + 1))
done


* Re: raid1 resync stuck
  2015-09-09 19:17 raid1 resync stuck Jes Sorensen
  2015-09-09 19:48 ` Jes Sorensen
@ 2015-09-11 17:44 ` Jes Sorensen
  2015-09-15  8:05   ` Neil Brown
  1 sibling, 1 reply; 5+ messages in thread
From: Jes Sorensen @ 2015-09-11 17:44 UTC (permalink / raw)
  To: NeilBrown; +Cc: majianpeng, linux-raid, nate.dailey

Jes Sorensen <Jes.Sorensen@redhat.com> writes:
> Neil,
>
> We're chasing a case where the raid1 code gets stuck during resync. Nate
> is able to reproduce it much more reliably than me - so attaching his
> reproducing script. Basically run it on an existing raid1 with internal
> bitmap on rotating disk.
>
> Nate was able to bisect it to 79ef3a8aa1cb1523cc231c9a90a278333c21f761,
> the original iobarrier rewrite patch, and it can be reproduced in
> current Linus' top of trunk a794b4f3292160bb3fd0f1f90ec8df454e3b17b3.
>
> In Nate's analysis it hangs in raise_barrier():
>
> static void raise_barrier(struct r1conf *conf, sector_t sector_nr)
> {
> 	spin_lock_irq(&conf->resync_lock);
>
> 	/* Wait until no block IO is waiting */
> 	wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,
> 			    conf->resync_lock);
>
> 	/* block any new IO from starting */
> 	conf->barrier++;
> 	conf->next_resync = sector_nr;
>
> 	/* For these conditions we must wait:
> 	 * A: while the array is in frozen state
> 	 * B: while barrier >= RESYNC_DEPTH, meaning resync reach
> 	 *    the max count which allowed.
> 	 * C: next_resync + RESYNC_SECTORS > start_next_window, meaning
> 	 *    next resync will reach to the window which normal bios are
> 	 *    handling.
> 	 * D: while there are any active requests in the current window.
> 	 */
> 	wait_event_lock_irq(conf->wait_barrier,
> 			    !conf->array_frozen &&
> 			    conf->barrier < RESYNC_DEPTH &&
> 			    conf->current_window_requests == 0 &&
> 			    (conf->start_next_window >=
> 			     conf->next_resync + RESYNC_SECTORS),
> 			    conf->resync_lock);
>
> crash> r1conf 0xffff882028f3e600 | grep -e array_frozen -e barrier -e start_next_window -e next_resync
>   barrier = 0x1,                      (conf->barrier < RESYNC_DEPTH)
>   array_frozen = 0x0,                 (!conf->array_frozen)
>   next_resync = 0x3000,
>   start_next_window = 0x3000,
>
> ie. next_resync == start_next_window, which will never wake up since
> start_next_window is smaller than next_resync + RESYNC_SECTORS.
>
> Have you seen anything like this?

Nate and I have been looking further at this, and it looks like you had
a patch that resolved something similar:

commit 669cc7ba77864e7b1ac39c9f2b2afb8730f341f4
Author: NeilBrown <neilb@suse.de>
Date:   Thu Sep 4 16:30:38 2014 +1000

    md/raid1: clean up request counts properly in close_sync()
    
    If there are outstanding writes when close_sync is called,
    the change to ->start_next_window might cause them to
    decrement the wrong counter when they complete.  Fix this
    by merging the two counters into the one that will be decremented.
    
    Having an incorrect value in a counter can cause raise_barrier()
    to hangs, so this is suitable for -stable.
    
    Fixes: 79ef3a8aa1cb1523cc231c9a90a278333c21f761
    cc: stable@vger.kernel.org (v3.13+)
    Signed-off-by: NeilBrown <neilb@suse.de>

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index ad0468c..a31c92b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1545,8 +1545,13 @@ static void close_sync(struct r1conf *conf)
        mempool_destroy(conf->r1buf_pool);
        conf->r1buf_pool = NULL;
 
+       spin_lock_irq(&conf->resync_lock);
        conf->next_resync = 0;
        conf->start_next_window = MaxSector;
+       conf->current_window_requests +=
+               conf->next_window_requests;
+       conf->next_window_requests = 0;
+       spin_unlock_irq(&conf->resync_lock);
 }
 
It looks to us like close_sync()'s conf->start_next_window = MaxSector
results in wait_barrier() triggering this when the outstanding IO
completes:

	if (bio && bio_data_dir(bio) == WRITE) {
		if (bio->bi_sector >=
		    conf->mddev->curr_resync_completed) {
			if (conf->start_next_window == MaxSector)
				conf->start_next_window =
					conf->next_resync +
					NEXT_NORMALIO_DISTANCE;

putting us in a situation where raise_barrier()'s wait condition is never
satisfied:

	wait_event_lock_irq(conf->wait_barrier,
			    !conf->array_frozen &&
			    conf->barrier < RESYNC_DEPTH &&
			    conf->current_window_requests == 0 &&
			    (conf->start_next_window >=
			     conf->next_resync + RESYNC_SECTORS),
			    conf->resync_lock);

So the question is, is it wrong for close_sync() to be setting
conf->start_next_window = MaxSector in the first place, or should it
only be doing this once all outstanding I/O has completed?

Nate tested removing the MaxSector assignment from close_sync(), but are
there any side effects to doing that?

Cheers,
Jes


* Re: raid1 resync stuck
  2015-09-11 17:44 ` Jes Sorensen
@ 2015-09-15  8:05   ` Neil Brown
  2015-09-15 15:25     ` Jes Sorensen
  0 siblings, 1 reply; 5+ messages in thread
From: Neil Brown @ 2015-09-15  8:05 UTC (permalink / raw)
  To: Jes Sorensen; +Cc: majianpeng, linux-raid, nate.dailey


Jes Sorensen <Jes.Sorensen@redhat.com> writes:

> Jes Sorensen <Jes.Sorensen@redhat.com> writes:
>> Neil,
>>
>> We're chasing a case where the raid1 code gets stuck during resync. Nate
>> is able to reproduce it much more reliably than me - so attaching his
>> reproducing script. Basically run it on an existing raid1 with internal
>> bitmap on rotating disk.
>>
>> Nate was able to bisect it to 79ef3a8aa1cb1523cc231c9a90a278333c21f761,
>> the original iobarrier rewrite patch, and it can be reproduced in
>> current Linus' top of trunk a794b4f3292160bb3fd0f1f90ec8df454e3b17b3.
>>
>> In Nate's analysis it hangs in raise_barrier():
>>
>> static void raise_barrier(struct r1conf *conf, sector_t sector_nr)
>> {
>> 	spin_lock_irq(&conf->resync_lock);
>>
>> 	/* Wait until no block IO is waiting */
>> 	wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,
>> 			    conf->resync_lock);
>>
>> 	/* block any new IO from starting */
>> 	conf->barrier++;
>> 	conf->next_resync = sector_nr;
>>
>> 	/* For these conditions we must wait:
>> 	 * A: while the array is in frozen state
>> 	 * B: while barrier >= RESYNC_DEPTH, meaning resync reach
>> 	 *    the max count which allowed.
>> 	 * C: next_resync + RESYNC_SECTORS > start_next_window, meaning
>> 	 *    next resync will reach to the window which normal bios are
>> 	 *    handling.
>> 	 * D: while there are any active requests in the current window.
>> 	 */
>> 	wait_event_lock_irq(conf->wait_barrier,
>> 			    !conf->array_frozen &&
>> 			    conf->barrier < RESYNC_DEPTH &&
>> 			    conf->current_window_requests == 0 &&
>> 			    (conf->start_next_window >=
>> 			     conf->next_resync + RESYNC_SECTORS),
>> 			    conf->resync_lock);
>>
>> crash> r1conf 0xffff882028f3e600 | grep -e array_frozen -e barrier -e start_next_window -e next_resync
>>   barrier = 0x1,                      (conf->barrier < RESYNC_DEPTH)
>>   array_frozen = 0x0,                 (!conf->array_frozen)
>>   next_resync = 0x3000,
>>   start_next_window = 0x3000,
>>
>> ie. next_resync == start_next_window, which will never wake up since
>> start_next_window is smaller than next_resync + RESYNC_SECTORS.
>>
>> Have you seen anything like this?
>
> Looking further at this together with Nate. It looks like you had a
> patch resolving something similar:

I hope you realize that this is a confirming instance of my hypothesis that
if I just ignore questions, the asker will eventually solve it
themselves?  Maybe I should just wait a bit longer...

>
> commit 669cc7ba77864e7b1ac39c9f2b2afb8730f341f4
> Author: NeilBrown <neilb@suse.de>
> Date:   Thu Sep 4 16:30:38 2014 +1000
>
>     md/raid1: clean up request counts properly in close_sync()
>     
>     If there are outstanding writes when close_sync is called,
>     the change to ->start_next_window might cause them to
>     decrement the wrong counter when they complete.  Fix this
>     by merging the two counters into the one that will be decremented.
>     
>     Having an incorrect value in a counter can cause raise_barrier()
>     to hangs, so this is suitable for -stable.
>     
>     Fixes: 79ef3a8aa1cb1523cc231c9a90a278333c21f761
>     cc: stable@vger.kernel.org (v3.13+)
>     Signed-off-by: NeilBrown <neilb@suse.de>
>
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index ad0468c..a31c92b 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -1545,8 +1545,13 @@ static void close_sync(struct r1conf *conf)
>         mempool_destroy(conf->r1buf_pool);
>         conf->r1buf_pool = NULL;
>  
> +       spin_lock_irq(&conf->resync_lock);
>         conf->next_resync = 0;
>         conf->start_next_window = MaxSector;
> +       conf->current_window_requests +=
> +               conf->next_window_requests;
> +       conf->next_window_requests = 0;
> +       spin_unlock_irq(&conf->resync_lock);
>  }
>  
> It looks to us like close_sync()'s conf->start_next_window = MaxSector
> results in wait_barrier() triggering this when the outstanding IO
> completes:
>
> 	if (bio && bio_data_dir(bio) == WRITE) {
> 		if (bio->bi_sector >=
> 		    conf->mddev->curr_resync_completed) {
> 			if (conf->start_next_window == MaxSector)
> 				conf->start_next_window =
> 					conf->next_resync +
> 					NEXT_NORMALIO_DISTANCE;
>
> putting us into the situation where raise_barrier()'s condition never
> completes:
>
> 	wait_event_lock_irq(conf->wait_barrier,
> 			    !conf->array_frozen &&
> 			    conf->barrier < RESYNC_DEPTH &&
> 			    conf->current_window_requests == 0 &&
> 			    (conf->start_next_window >=
> 			     conf->next_resync + RESYNC_SECTORS),
> 			    conf->resync_lock);
>
> So the question is, is it wrong for close_sync() to be setting
> conf->start_next_window = MaxSector in the first place, or should it
> only be doing this once all outstanding I/O has completed?

I think it is right to set start_next_window = MaxSector, but I think it
is wrong to set ->next_resync = 0;

I think:
 close_sync() should set next_resync to some impossibly big number,
   but not quite MaxSector, as we sometimes add RESYNC_SECTORS or
   NEXT_NORMALIO_DISTANCE to it.
   Maybe mddev->resync_max_sectors would be sensible.  Then raid1_resize
   would need to update it though.
   Or maybe we should make MaxSector a bit smaller so it is safe to
   add to it. ((~(sector_t)0)>>1) ??

 wait_barrier() should include ->next_resync in its decision about
   setting start_next_window.  Maybe just replace
   "mddev->curr_resync_completed" with "next_resync".

Can you try that?  Does it make sense to you too?

 

>
> Nate tested a case where removing the MaxSector assignment from
> close_sync() but are there any side effects to doing that?

Probably.  Not sure off hand what they are though.  It certainly feels
untidy leaving start_next_window pointing in the middle of the array
when resync/recovery has stopped.

Thanks,
NeilBrown

>
> Cheers,
> Jes



* Re: raid1 resync stuck
  2015-09-15  8:05   ` Neil Brown
@ 2015-09-15 15:25     ` Jes Sorensen
  0 siblings, 0 replies; 5+ messages in thread
From: Jes Sorensen @ 2015-09-15 15:25 UTC (permalink / raw)
  To: Neil Brown; +Cc: majianpeng, linux-raid, nate.dailey

Neil Brown <neilb@suse.de> writes:
> Jes Sorensen <Jes.Sorensen@redhat.com> writes:
>
>>> crash> r1conf 0xffff882028f3e600 | grep -e array_frozen -e barrier -e start_next_window -e next_resync
>>>   barrier = 0x1,                      (conf->barrier < RESYNC_DEPTH)
>>>   array_frozen = 0x0,                 (!conf->array_frozen)
>>>   next_resync = 0x3000,
>>>   start_next_window = 0x3000,
>>>
>>> ie. next_resync == start_next_window, which will never wake up since
>>> start_next_window is smaller than next_resync + RESYNC_SECTORS.
>>>
>>> Have you seen anything like this?
>>
>> Looking further at this together with Nate. It looks like you had a
>> patch resolving something similar:
>
> I hope you realize that this a confirming-instance of my hypothesis that
> if I just ignore questions, the asker will eventually solve it
> themselves?  Maybe I should just wait a bit longer...

Argh I screwed up again! :)

>> It looks to us like close_sync()'s conf->start_next_window = MaxSector
>> results in wait_barrier() triggering this when the outstanding IO
>> completes:
>>
>> 	if (bio && bio_data_dir(bio) == WRITE) {
>> 		if (bio->bi_sector >=
>> 		    conf->mddev->curr_resync_completed) {
>> 			if (conf->start_next_window == MaxSector)
>> 				conf->start_next_window =
>> 					conf->next_resync +
>> 					NEXT_NORMALIO_DISTANCE;
>>
>> putting us into the situation where raise_barrier()'s condition never
>> completes:
>>
>> 	wait_event_lock_irq(conf->wait_barrier,
>> 			    !conf->array_frozen &&
>> 			    conf->barrier < RESYNC_DEPTH &&
>> 			    conf->current_window_requests == 0 &&
>> 			    (conf->start_next_window >=
>> 			     conf->next_resync + RESYNC_SECTORS),
>> 			    conf->resync_lock);
>>
>> So the question is, is it wrong for close_sync() to be setting
>> conf->start_next_window = MaxSector in the first place, or should it
>> only be doing this once all outstanding I/O has completed?
>
> I think it is right to set start_next_window = MaxSector, but I think it
> is wrong to set ->next_resync = 0;
>
> I think:
>  close_sync() should set next_sync to some impossibly big number,
>    but not quite MaxSector as we sometimes add RESYNC_SECTORS or
>    NEXT_NORMALIO_DISTANCE.
>    May mddev->resync_max_sectors would be sensible.  Then raid1_resize
>    would need to update it though.
>    Or maybe we should make MaxSector a bit smaller so it is safe to
>    add to it. ((~(sector_t)0)>>1) ??
>
>  wait_barrier() should include ->next_resync in its decision about
>    setting start_next_window.  May just replace
>    "mddev->curr_resync_completed" with "next_resync".
>
> Can you try that?  Does it make sense to you too?

I think this makes sense - I'll spin a patch for it and see how it
works out.
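
Just so we're talking about the same thing, the direction I have in mind
is roughly the following - an untested sketch showing only the lines
that would change in the excerpts quoted above; whether
mddev->resync_max_sectors is the right "impossibly big" value (and
whether raid1_resize then needs to refresh it) is exactly the open
question you raised:

/* close_sync(): park next_resync far past the array instead of at 0,
 * so that adding RESYNC_SECTORS or NEXT_NORMALIO_DISTANCE to it can
 * never land back inside the window that normal I/O is using. */
	spin_lock_irq(&conf->resync_lock);
	conf->next_resync = conf->mddev->resync_max_sectors;	/* was 0 */
	conf->start_next_window = MaxSector;
	conf->current_window_requests +=
		conf->next_window_requests;
	conf->next_window_requests = 0;
	spin_unlock_irq(&conf->resync_lock);

/* wait_barrier(): base the new window on ->next_resync instead of
 * mddev->curr_resync_completed. */
	if (bio && bio_data_dir(bio) == WRITE) {
		if (bio->bi_sector >= conf->next_resync) {
			if (conf->start_next_window == MaxSector)
				conf->start_next_window =
					conf->next_resync +
					NEXT_NORMALIO_DISTANCE;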

Cheers,
Jes

