* Hang in md-raid1 with 3.7-rcX
@ 2012-11-24 9:18 Torsten Kaiser
2012-11-27 1:05 ` NeilBrown
0 siblings, 1 reply; 6+ messages in thread
From: Torsten Kaiser @ 2012-11-24 9:18 UTC (permalink / raw)
To: linux-raid, linux-kernel
After my system got stuck with 3.7.0-rc2 as reported in
http://marc.info/?l=linux-kernel&m=135142236520624 LOCKDEP seemed to
blame XFS, because it found 2 possible deadlocks. But after these
locking issues were fixed, my system got stuck again with 3.7.0-rc6
as reported in http://marc.info/?l=linux-kernel&m=135344072325490
Dave Chinner thinks it's an issue within md (md gets stuck, which then
prevents any further XFS activity) and that I should report it to the
raid mailing list.
The issue seems to be that multiple processes (kswapd0, xfsaild/md4
and flush-9:4) get stuck in md_super_wait() like this:
[<ffffffff816b1224>] schedule+0x24/0x60
[<ffffffff814f9dad>] md_super_wait+0x4d/0x80
[<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
[<ffffffff81500753>] bitmap_unplug+0x173/0x180
[<ffffffff810b6acf>] ? write_cache_pages+0x12f/0x420
[<ffffffff810b6700>] ? set_page_dirty_lock+0x60/0x60
[<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
[<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
[<ffffffff81278c13>] blk_finish_plug+0x13/0x50
The full hung-tasks stack traces and the output from SysRq+W can be
found at http://marc.info/?l=linux-kernel&m=135344072325490 or in the
LKML thread 'Hang in XFS reclaim on 3.7.0-rc3'.
I tried to understand how this could happen, but I don't see anything
wrong. The only thing I noticed is that md_super_wait() looks like an
open-coded version of __wait_event() and could be replaced by it.
http://marc.info/?l=linux-raid&m=135283030027665 looks like the same
issue, but using ext4 instead of xfs.
My setup wrt. md is two normal sata disks on a normal ahci controller
(AMD SB850 southbridge).
Both disks are divided into 4 partitions and each one assembled into a
separate raid1.
One (md5) is used for swap, the others hold xfs filesystems for /boot/
(md4), / (md6) and /home/ (md7).
I will try to provide any information you ask for, but I can't reproduce
the hang on demand, so gathering more information about that state is
not easy. I will keep trying, though.
Thanks for looking into this,
Torsten
* Re: Hang in md-raid1 with 3.7-rcX
2012-11-24 9:18 Hang in md-raid1 with 3.7-rcX Torsten Kaiser
@ 2012-11-27 1:05 ` NeilBrown
2012-11-27 7:08 ` Torsten Kaiser
[not found] ` <32242311.8QXFMOUYz5@deuteros>
0 siblings, 2 replies; 6+ messages in thread
From: NeilBrown @ 2012-11-27 1:05 UTC (permalink / raw)
To: Torsten Kaiser; +Cc: linux-raid, linux-kernel
On Sat, 24 Nov 2012 10:18:44 +0100 Torsten Kaiser
<just.for.lkml@googlemail.com> wrote:
> After my system got stuck with 3.7.0-rc2 as reported in
> http://marc.info/?l=linux-kernel&m=135142236520624 LOCKDEP seemed to
> blame XFS, because it found 2 possible deadlocks. But after these
> locking issues were fixed, my system got stuck again with 3.7.0-rc6
> as reported in http://marc.info/?l=linux-kernel&m=135344072325490
> Dave Chinner thinks it's an issue within md (md gets stuck, which then
> prevents any further XFS activity) and that I should report it to the
> raid mailing list.
>
> The issue seems to be that multiple processes (kswapd0, xfsaild/md4
> and flush-9:4) get stuck in md_super_wait() like this:
> [<ffffffff816b1224>] schedule+0x24/0x60
> [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
> [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
> [<ffffffff81500753>] bitmap_unplug+0x173/0x180
> [<ffffffff810b6acf>] ? write_cache_pages+0x12f/0x420
> [<ffffffff810b6700>] ? set_page_dirty_lock+0x60/0x60
> [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
> [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
> [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
>
> The full hung-tasks stack traces and the output from SysRq+W can be
> found at http://marc.info/?l=linux-kernel&m=135344072325490 or in the
> LKML thread 'Hang in XFS reclaim on 3.7.0-rc3'.
Yes, it does look like an md bug....
Can you test to see if this fixes it?
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 636bae0..a0f7309 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -963,7 +963,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
struct r1conf *conf = mddev->private;
struct bio *bio;
- if (from_schedule) {
+ if (from_schedule || current->bio_list) {
spin_lock_irq(&conf->device_lock);
bio_list_merge(&conf->pending_bio_list, &plug->pending);
conf->pending_count += plug->pending_cnt;
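To show what that one-liner is for, here is a simplified sketch of
raid1_unplug() (illustrative only; details may differ from the real
drivers/md/raid1.c). If the callback runs while current->bio_list is
non-NULL we are inside generic_make_request(), and the direct write-out
path would block in bitmap_unplug() -> md_super_wait() waiting for bios
that are still parked on current->bio_list behind the caller, so they can
never complete. Handing the work to the raid1d thread breaks that cycle.
static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
{
	/* Sketch for illustration; not the exact kernel source. */
	struct raid1_plug_cb *plug = container_of(cb, struct raid1_plug_cb,
						  cb);
	struct mddev *mddev = plug->cb.data;
	struct r1conf *conf = mddev->private;
	struct bio *bio;

	if (from_schedule || current->bio_list) {
		/*
		 * Either we are being unplugged on a context switch, or we
		 * are already inside generic_make_request() and a direct
		 * write-out could deadlock as described above: defer the
		 * pending bios to the raid1d thread instead.
		 */
		spin_lock_irq(&conf->device_lock);
		bio_list_merge(&conf->pending_bio_list, &plug->pending);
		conf->pending_count += plug->pending_cnt;
		spin_unlock_irq(&conf->device_lock);
		md_wakeup_thread(mddev->thread);
		kfree(plug);
		return;
	}

	/* Safe context: flush the bitmap, then submit the writes directly. */
	bio = bio_list_get(&plug->pending);
	bitmap_unplug(mddev->bitmap);	/* may sleep in md_super_wait() */
	while (bio) {
		struct bio *next = bio->bi_next;
		bio->bi_next = NULL;
		generic_make_request(bio);
		bio = next;
	}
	kfree(plug);
}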
>
> I tried to understand how this could happen, but I don't see anything
> wrong. The only thing I noticed is that md_super_wait() looks like an
> open-coded version of __wait_event() and could be replaced by it.
yeah. md_super_wait() was much more complex back when we had to support
barrier operations. When they were removed it was simplified a lot, and as
you say it could be simplified further. Patches welcome.
>
> http://marc.info/?l=linux-raid&m=135283030027665 looks like the same
> issue, but using ext4 instead of xfs.
yes, sure does.
>
> My setup wrt. md is two normal sata disks on a normal ahci controller
> (AMD SB850 southbridge).
> Both disks are divided into 4 partitions and each one assembled into a
> separate raid1.
> One (md5) is used for swap, the others hold xfs filesystems for /boot/
> (md4), / (md6) and /home/ (md7).
>
> I will try to provide any information you ask for, but I can't reproduce
> the hang on demand, so gathering more information about that state is
> not easy. I will keep trying, though.
I'm fairly confident the above patch will fix it, and in any case it fixes
a real bug. So if you could just run with it and confirm in a week or so
that the problem hasn't recurred, that might have to do.
Thanks,
NeilBrown
* Re: Hang in md-raid1 with 3.7-rcX
2012-11-27 1:05 ` NeilBrown
@ 2012-11-27 7:08 ` Torsten Kaiser
2012-12-02 12:10 ` Torsten Kaiser
[not found] ` <32242311.8QXFMOUYz5@deuteros>
1 sibling, 1 reply; 6+ messages in thread
From: Torsten Kaiser @ 2012-11-27 7:08 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid, linux-kernel
On Tue, Nov 27, 2012 at 2:05 AM, NeilBrown <neilb@suse.de> wrote:
> On Sat, 24 Nov 2012 10:18:44 +0100 Torsten Kaiser
> <just.for.lkml@googlemail.com> wrote:
>
>> After my system got stuck with 3.7.0-rc2 as reported in
>> http://marc.info/?l=linux-kernel&m=135142236520624 LOCKDEP seemed to
>> blame XFS, because it found 2 possible deadlocks. But after these
>> locking issues were fixed, my system got stuck again with 3.7.0-rc6
>> as reported in http://marc.info/?l=linux-kernel&m=135344072325490
>> Dave Chinner thinks it's an issue within md (md gets stuck, which then
>> prevents any further XFS activity) and that I should report it to the
>> raid mailing list.
>>
>> The issue seems to be that multiple processes (kswapd0, xfsaild/md4
>> and flush-9:4) get stuck in md_super_wait() like this:
>> [<ffffffff816b1224>] schedule+0x24/0x60
>> [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
>> [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
>> [<ffffffff81500753>] bitmap_unplug+0x173/0x180
>> [<ffffffff810b6acf>] ? write_cache_pages+0x12f/0x420
>> [<ffffffff810b6700>] ? set_page_dirty_lock+0x60/0x60
>> [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
>> [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
>> [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
>>
>> The full hung-tasks stack traces and the output from SysRq+W can be
>> found at http://marc.info/?l=linux-kernel&m=135344072325490 or in the
>> LKML thread 'Hang in XFS reclaim on 3.7.0-rc3'.
>
> Yes, it does look like an md bug....
> Can you test to see if this fixes it?
Patch applied, I will try to get it stuck again.
I don't have a reliable reproducer, but if the problem persists I
will definitely report back here.
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 636bae0..a0f7309 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -963,7 +963,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
> struct r1conf *conf = mddev->private;
> struct bio *bio;
>
> - if (from_schedule) {
> + if (from_schedule || current->bio_list) {
> spin_lock_irq(&conf->device_lock);
> bio_list_merge(&conf->pending_bio_list, &plug->pending);
> conf->pending_count += plug->pending_cnt;
>
>>
>> I tried to understand how this could happen, but I don't see anything
>> wrong. The only thing I noticed is that md_super_wait() looks like an
>> open-coded version of __wait_event() and could be replaced by it.
>
> yeah. md_super_wait() was much more complex back when we had to support
> barrier operations. When they were removed it was simplified a lot, and as
> you say it could be simplified further. Patches welcome.
I guessed it predated that particular helper.
Since you are asking for a patch, I have one question:
md_super_wait() looks like __wait_event(), but there is also a
wait_event() helper.
Would it be better to switch to wait_event()? It would add an
additional check for atomic_read(&mddev->pending_writes)==0 before
"allocating" and initialising the wait_queue_t, which I think would be
a correct optimization.
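Concretely, the whole function would then shrink to something like this
(just a sketch; I am assuming the wait queue that md_super_wait() sleeps
on is mddev->sb_wait):
static void md_super_wait(struct mddev *mddev)
{
	/*
	 * Sketch of the wait_event() variant.  wait_event() tests the
	 * condition before touching the wait queue at all, which is the
	 * pending_writes==0 fast-path check mentioned above.
	 */
	wait_event(mddev->sb_wait,
		   atomic_read(&mddev->pending_writes) == 0);
}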
>> http://marc.info/?l=linux-raid&m=135283030027665 looks like the same
>> issue, but using ext4 instead of xfs.
>
> yes, sure does.
>
>>
>> My setup wrt. md is two normal sata disks on a normal ahci controller
>> (AMD SB850 southbridge).
>> Both disks are divided into 4 partitions and each one assembled into a
>> separate raid1.
>> One (md5) is used for swap, the others hold xfs filesystems for /boot/
>> (md4), / (md6) and /home/ (md7).
>>
>> I will try to provide any information you ask for, but I can't reproduce
>> the hang on demand, so gathering more information about that state is
>> not easy. I will keep trying, though.
>
> I'm fairly confident the above patch will fix it, and in any case it fixes
> a real bug. So if you could just run with it and confirm in a week or so
> that the problem hasn't recurred, that might have to do.
I have only had 2 or 3 hangs since 3.7-rc1, but I suspect that forcing the
system to swap (which lives on a raid1) plays a part in it.
As the system has 12GB of RAM it normally doesn't need to swap, and then I
see no problem. I will try these workloads again and hope that, if the
problem persists, I can trigger it again in the next few days...
Thanks for the patch,
Torsten
> Thanks,
> NeilBrown
>
* Re: Hang in md-raid1 with 3.7-rcX
[not found] ` <32242311.8QXFMOUYz5@deuteros>
@ 2012-11-28 20:26 ` NeilBrown
0 siblings, 0 replies; 6+ messages in thread
From: NeilBrown @ 2012-11-28 20:26 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: Torsten Kaiser, linux-raid, linux-kernel
On Wed, 28 Nov 2012 14:51:59 +0000 Tvrtko Ursulin
<tvrtko.ursulin@onelan.co.uk> wrote:
> On Tuesday 27 November 2012 12:05:28 NeilBrown wrote:
> > On Sat, 24 Nov 2012 10:18:44 +0100 Torsten Kaiser
> >
> > <just.for.lkml@googlemail.com> wrote:
> > > After my system got stuck with 3.7.0-rc2 as reported in
> > > http://marc.info/?l=linux-kernel&m=135142236520624 LOCKDEP seemed to
> > > blame XFS, because it found 2 possible deadlocks. But after these
> > > locking issues were fixed, my system got stuck again with 3.7.0-rc6
> > > as reported in http://marc.info/?l=linux-kernel&m=135344072325490
> > > Dave Chinner thinks it's an issue within md (md gets stuck, which then
> > > prevents any further XFS activity) and that I should report it to the
> > > raid mailing list.
> > >
> > > The issue seems to be that multiple processes (kswapd0, xfsaild/md4
> > > and flush-9:4) get stuck in md_super_wait() like this:
> > > [<ffffffff816b1224>] schedule+0x24/0x60
> > > [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
> > > [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
> > > [<ffffffff81500753>] bitmap_unplug+0x173/0x180
> > > [<ffffffff810b6acf>] ? write_cache_pages+0x12f/0x420
> > > [<ffffffff810b6700>] ? set_page_dirty_lock+0x60/0x60
> > > [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
> > > [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
> > > [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
> > >
> > > The full hung-tasks stack traces and the output from SysRq+W can be
> > > found at http://marc.info/?l=linux-kernel&m=135344072325490 or in the
> > > LKML thread 'Hang in XFS reclaim on 3.7.0-rc3'.
> >
> > Yes, it does look like an md bug....
> > Can you test to see if this fixes it?
>
> Hi,
>
> Would this bug be present in 3.6 as well? I am hitting something that
> looks pretty much like this with 3.6.x. In that case it should go to -stable;
> however, I am not able to test on the affected machine at the moment.
>
> Regards,
>
> Tvrtko
Yes it is in 3.6, and it will go to -stable.
Thanks,
NeilBrown
* Re: Hang in md-raid1 with 3.7-rcX
2012-11-27 7:08 ` Torsten Kaiser
@ 2012-12-02 12:10 ` Torsten Kaiser
2012-12-02 19:52 ` NeilBrown
0 siblings, 1 reply; 6+ messages in thread
From: Torsten Kaiser @ 2012-12-02 12:10 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid, linux-kernel
On Tue, Nov 27, 2012 at 8:08 AM, Torsten Kaiser
<just.for.lkml@googlemail.com> wrote:
> On Tue, Nov 27, 2012 at 2:05 AM, NeilBrown <neilb@suse.de> wrote:
>> Can you test to see if this fixes it?
>
> Patch applied, I will try to get it stuck again.
> I don't have a reliable reproducer, but if the problem persists I
> will definitely report back here.
With this patch I was not able to recreate the hang. Lacking a 100%
reliable way of recreating it, I can't be completely sure of the fix, but as
you understood from the code how this hang could happen, I'm quite
confident that the fix is working.
(As I do not use the raid10 personality, only patching raid1.c was
sufficient for me; I didn't test the version that also patches
raid10.c, as it's not even compiled in my kernel.)
Thanks for the fix!
Torsten
>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>> index 636bae0..a0f7309 100644
>> --- a/drivers/md/raid1.c
>> +++ b/drivers/md/raid1.c
>> @@ -963,7 +963,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
>> struct r1conf *conf = mddev->private;
>> struct bio *bio;
>>
>> - if (from_schedule) {
>> + if (from_schedule || current->bio_list) {
>> spin_lock_irq(&conf->device_lock);
>> bio_list_merge(&conf->pending_bio_list, &plug->pending);
>> conf->pending_count += plug->pending_cnt;
>>
* Re: Hang in md-raid1 with 3.7-rcX
2012-12-02 12:10 ` Torsten Kaiser
@ 2012-12-02 19:52 ` NeilBrown
0 siblings, 0 replies; 6+ messages in thread
From: NeilBrown @ 2012-12-02 19:52 UTC (permalink / raw)
To: Torsten Kaiser; +Cc: linux-raid, linux-kernel
On Sun, 2 Dec 2012 13:10:33 +0100 Torsten Kaiser
<just.for.lkml@googlemail.com> wrote:
> On Tue, Nov 27, 2012 at 8:08 AM, Torsten Kaiser
> <just.for.lkml@googlemail.com> wrote:
> > On Tue, Nov 27, 2012 at 2:05 AM, NeilBrown <neilb@suse.de> wrote:
> >> Can you test to see if this fixes it?
> >
> > Patch applied, I will try to get it stuck again.
> > I don't have a reliable reproducer, but if the problem persists I
> > will definitely report back here.
>
> With this patch I was not able to recreate the hang. Lacking a 100%
> reliable way of recreating it, I can't be completely sure of the fix, but as
> you understood from the code how this hang could happen, I'm quite
> confident that the fix is working.
>
> (As I do not use the raid10 personality, only patching raid1.c was
> sufficient for me; I didn't test the version that also patches
> raid10.c, as it's not even compiled in my kernel.)
>
> Thanks for the fix!
And thanks for testing!
Linus doesn't seem to have pulled in the fix yet, but hopefully it will be in
3.7.
NeilBrown
>
> Torsten
>
> >> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> >> index 636bae0..a0f7309 100644
> >> --- a/drivers/md/raid1.c
> >> +++ b/drivers/md/raid1.c
> >> @@ -963,7 +963,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
> >> struct r1conf *conf = mddev->private;
> >> struct bio *bio;
> >>
> >> - if (from_schedule) {
> >> + if (from_schedule || current->bio_list) {
> >> spin_lock_irq(&conf->device_lock);
> >> bio_list_merge(&conf->pending_bio_list, &plug->pending);
> >> conf->pending_count += plug->pending_cnt;
> >>