linux-raid.vger.kernel.org archive mirror
* md/raid10 deadlock at 'Failing raid device'
@ 2012-05-10  2:47 George Shuklin
  2012-05-10  3:03 ` NeilBrown
  0 siblings, 1 reply; 2+ messages in thread
From: George Shuklin @ 2012-05-10  2:47 UTC (permalink / raw)
  To: linux-raid, NeilBrown, Jonathan Nieder

As Jonathan Nieder suggested, I am writing here about a new deadlock bug I
recently hit with raid10.

Summary: under certain conditions, multiple simultaneously failing devices
can sometimes deadlock operations on the failed array.

Conditions:
3 Adaptec RAID controllers (Adaptec Device 028, aacraid). Each one has
8 directly attached SATA disks (without extenders or expanders). The disks
are configured as 'JBOD' and passed to Linux almost as-is. They are joined
into three raid10 arrays (using Linux md raid), and those three arrays are
joined into a raid0.

Configuration looks like this:

   3 x RAID10
md101 [UUUUUUUU] --\
md102 [UUUUUUUU] ------ md100 [UUU]  (raid0)
md103 [UUUUUUUU] --/

After that, all disks are deconfigured via the Adaptec utility. They
disappear from /dev/, but /proc/mdstat still shows the arrays as fine.
Then some I/O is performed on the raid0. That, of course, causes failures
on all the raid arrays and returns I/O errors to the calling software (in
my case the 'fio' disk performance test utility).

Two arrays failed gracefully, but one did not. It got stuck with one disk
(which was no longer in the system) and did not return anything to the
calling software, like the raid10 deadlock that was fixed in commit d9b42d.

Content of /proc/mdstat after the failure:

md100 : active raid0 md103[2] md102[1] md101[0]
  11714540544 blocks super 1.2 256k chunks

md101 : active raid10 sdv[7](W)(F) sdu[6](W)(F) sdo[5](W)(F) sdn[4](W)(F) sdm[3](W)(F) sdg[2](W)(F) sdf[1](W)(F) sde[0](W)(F)
  3904847872 blocks super 1.2 256K chunks 2 near-copies [8/0] [________]
  bitmap: 0/466 pages [0KB], 4096KB chunk, file: /var/mdadm/md101

md103 : active raid10 sdr[0](W)(F) sdab[7](W)(F) sdt[6](W)(F) sdl[5](W)(F) sdaa[4](W) sds[3](W)(F) sdk[2](W)(F) sdz[1](W)(F)
  3904847872 blocks super 1.2 256K chunks 2 near-copies [8/1] [____U___]
  bitmap: 1/466 pages [4KB], 4096KB chunk, file: /var/mdadm/md103

md102 : active raid10 sdw[0](W)(F) sdj[7](W)(F) sdy[6](W)(F) sdq[5](W)(F) sdi[4](W)(F) sdx[3](W)(F) sdp[2](W)(F) sdh[1](W)(F)
  3904847872 blocks super 1.2 256K chunks 2 near-copies [8/0] [________]

I rechecked: /dev/sdaa was no longer in the system, but raid10 still thought it was.

In dmesg this message repeats very fast:

[4474.074462] md/raid10:md103: sdaa: Failing raid device

It repeated so fast that logging to the ring buffer raced with syslog activity, and I got this in /var/log/messages:
May 5 21:20:04 server kernel: [ 4507.578517] md/raid10:md103: sdaa: Faid device
May 5 21:20:04 server kernel: [ 4507.578525] md/raid10:md103: sdaa: Faaid device
May 5 21:20:04 server kernel: [ 4507.578533] md/raid10:md103: sdaa: aid device
May 5 21:20:04 server kernel: [ 4507.578541] md/raid10:md103: sdaa: Faid devic
May 5 21:20:04 server kernel: [ 4507.578549] md/raid10:md103: sdaa: Faid device
May 5 21:20:04 server kernel: [ 4507.578557] md/raid10:md103: sdaa: Faid device
May 5 21:20:04 server kernel: [ 4507.578566] md/raid10:md103: sdaa: Failaid device


This was with Linux 3.2.0-2-amd64.


---
wBR, George Shuklin


* Re: md/raid10 deadlock at 'Failing raid device'
  2012-05-10  2:47 md/raid10 deadlock at 'Failing raid device' George Shuklin
@ 2012-05-10  3:03 ` NeilBrown
  0 siblings, 0 replies; 2+ messages in thread
From: NeilBrown @ 2012-05-10  3:03 UTC (permalink / raw)
  To: George Shuklin; +Cc: linux-raid, Jonathan Nieder


On Thu, 10 May 2012 06:47:27 +0400 George Shuklin <george.shuklin@gmail.com>
wrote:

> As Jonathan Nieder suggested, I am writing here about a new deadlock bug I
> recently hit with raid10.
> 
> Summary: under certain conditions, multiple simultaneously failing devices
> can sometimes deadlock operations on the failed array.
> 
> Conditions:
> 3 Adaptec RAID controllers (Adaptec Device 028, aacraid). Each one has
> 8 directly attached SATA disks (without extenders or expanders). The disks
> are configured as 'JBOD' and passed to Linux almost as-is. They are joined
> into three raid10 arrays (using Linux md raid), and those three arrays are
> joined into a raid0.
> 
> Configuration looks like this:
> 
>    3 x RAID10
> md101 [UUUUUUUU] --\
> md102 [UUUUUUUU] ------ md100 [UUU]  (raid0)
> md103 [UUUUUUUU] --/
> 
> After that, all disks are deconfigured via the Adaptec utility. They
> disappear from /dev/, but /proc/mdstat still shows the arrays as fine.
> Then some I/O is performed on the raid0. That, of course, causes failures
> on all the raid arrays and returns I/O errors to the calling software (in
> my case the 'fio' disk performance test utility).
> 
> Two arrays failed gracefully, but one did not. It got stuck with one disk
> (which was no longer in the system) and did not return anything to the
> calling software, like the raid10 deadlock that was fixed in commit d9b42d.
> 
> Content of /proc/mdstat after the failure:
> 
> md100 : active raid0 md103[2] md102[1] md101[0]
>   11714540544 blocks super 1.2 256k chunks
> 
> md101 : active raid10 sdv[7](W)(F) sdu[6](W)(F) sdo[5](W)(F) sdn[4](W)(F) sdm[3](W)(F) sdg[2](W)(F) sdf[1](W)(F) sde[0](W)(F)
>   3904847872 blocks super 1.2 256K chunks 2 near-copies [8/0] [________]
>   bitmap: 0/466 pages [0KB], 4096KB chunk, file: /var/mdadm/md101
> 
> md103 : active raid10 sdr[0](W)(F) sdab[7](W)(F) sdt[6](W)(F) sdl[5](W)(F) sdaa[4](W) sds[3](W)(F) sdk[2](W)(F) sdz[1](W)(F)
>   3904847872 blocks super 1.2 256K chunks 2 near-copies [8/1] [____U___]
>   bitmap: 1/466 pages [4KB], 4096KB chunk, file: /var/mdadm/md103
> 
> md102 : active raid10 sdw[0](W)(F) sdj[7](W)(F) sdy[6](W)(F) sdq[5](W)(F) sdi[4](W)(F) sdx[3](W)(F) sdp[2](W)(F) sdh[1](W)(F)
>   3904847872 blocks super 1.2 256K chunks 2 near-copies [8/0] [________]
> 
> I rechecked: /dev/sdaa was no longer in the system, but raid10 still thought it was.
> 
> In dmesg this message repeats very fast:
> 
> [4474.074462] md/raid10:md103: sdaa: Failing raid device
> 
> It repeated so fast that logging to the ring buffer raced with syslog activity, and I got this in /var/log/messages:
> May 5 21:20:04 server kernel: [ 4507.578517] md/raid10:md103: sdaa: Faid device
> May 5 21:20:04 server kernel: [ 4507.578525] md/raid10:md103: sdaa: Faaid device
> May 5 21:20:04 server kernel: [ 4507.578533] md/raid10:md103: sdaa: aid device
> May 5 21:20:04 server kernel: [ 4507.578541] md/raid10:md103: sdaa: Faid devic
> May 5 21:20:04 server kernel: [ 4507.578549] md/raid10:md103: sdaa: Faid device
> May 5 21:20:04 server kernel: [ 4507.578557] md/raid10:md103: sdaa: Faid device
> May 5 21:20:04 server kernel: [ 4507.578566] md/raid10:md103: sdaa: Failaid device
> 
> 
> This was with Linux 3.2.0-2-amd64.

Fixed by commit fae8cc5ed0714953b1ad7cf86 I believe.

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=fae8cc5ed

  From: NeilBrown <neilb@suse.de>
  Date: Tue, 14 Feb 2012 00:10:10 +0000 (+1100)
  Subject: md/raid10: fix handling of error on last working device in array.

  md/raid10: fix handling of error on last working device in array.

  If we get a read error on the last working device in a RAID10 which
  contains the target block, then we don't fail the device (which is
  good) but we don't abort retries, which is wrong.
  We end up in an infinite loop retrying the read on the one device.

NeilBrown


> 
> 
> ---
> wBR, George Shuklin


