From: Andreas Klauer <Andreas.Klauer@metamorpher.de>
To: mdraid.pkoch@dfgh.net
Cc: linux-raid@vger.kernel.org
Subject: Re: Growing RAID10 with active XFS filesystem
Date: Sun, 7 Jan 2018 21:16:22 +0100
Message-ID: <20180107201622.GA7977@metamorpher.de>
In-Reply-To: <4892a03d-018c-8281-13d3-4fcc7acd2bb8@gmail.com>

On Sat, Jan 06, 2018 at 04:44:12PM +0100, mdraid.pkoch@dfgh.net wrote:
> Now today I increased the RAID10 again from 20 to 21 disks with the
> following commands:
> 
> mdadm /dev/md5 --add /dev/sdo
> mdadm --grow /dev/md5 --raid-devices=21
> 
> Just one second after starting the reshape operation
> XFS failed with the following messages:
> 
> md: reshape of RAID array md5
> md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
> md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
> md: using 128k window, over a total of 19533829120k.
> XFS (md5): metadata I/O error: block 0x12c08f360 ("xfs_trans_read_buf_map") error 5 numblks 16

Ouch. No idea what happened there.

Use overlays to try to recover. Don't write to the disks any more.

https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file
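
The idea is that every write goes to a sparse overlay file instead of
the real disk, so you can experiment safely. Per member disk it looks
roughly like this (a sketch only; /dev/sdX and the sizes are
placeholders, the wiki page has the full script):

    # sparse file that will receive all writes; the real disk stays untouched
    truncate -s 4G /tmp/overlay-sdX
    loop=$(losetup -f --show /tmp/overlay-sdX)
    # dm snapshot target: reads come from /dev/sdX, writes land in the overlay
    # (N = non-persistent overlay, 8 = chunk size in sectors)
    size=$(blockdev --getsz /dev/sdX)
    echo "0 $size snapshot /dev/sdX $loop N 8" | dmsetup create overlay-sdX
    # then assemble the array from the /dev/mapper/overlay-* devices instead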

To try to reproduce your problem, I created a 20-drive RAID10 and ran
a while loop that grows it to 21 drives, then shrinks it back to 20.

    truncate -s 100M {001..021}     # create 21 sparse 100M image files
    losetup ...                     # attach them as loop devices (expanded below)
    mdadm --create /dev/md42 --level=10 --raid-devices=20 /dev/loop{1..20}
    mdadm --grow /dev/md42 --add /dev/loop21    # provide the 21st device for the grow

    while :
    do
        mdadm --wait /dev/md42
        mdadm --grow /dev/md42 --raid-devices=21
        mdadm --wait /dev/md42
        # the array size must be shrunk back to the 20-device capacity
        # before mdadm will reduce the number of devices again
        mdadm --grow /dev/md42 --array-size 1013760
        mdadm --wait /dev/md42
        mdadm --grow /dev/md42 --raid-devices=20
    done
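
The losetup line above is abbreviated: it just attaches one loop
device per image file, something like this (a sketch; exact device
numbering may differ on your system):

    # attach 001 to /dev/loop1, 002 to /dev/loop2, and so on
    for i in $(seq 1 21)
    do
        losetup /dev/loop$i $(printf '%03d' $i)
    done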

Then I put XFS on top and ran another while loop to extract a Linux tarball.

    while :
    do
        tar xf linux-4.13.4.tar.xz
        sync
        rm -rf linux-4.13.4
        sync
    done

Both running in parallel ad infinitum.

I couldn't get the XFS to corrupt.
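
To check for damage without changing anything, xfs_repair's no-modify
mode is one way (a sketch; the mountpoint is made up):

    umount /mnt/md42           # xfs_repair needs the filesystem unmounted
    xfs_repair -n /dev/md42    # -n: check only, report problems, write nothing
    mount /dev/md42 /mnt/md42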

mdadm itself eventually died, though.

It told me two drives had failed, although none actually did, and it
refused to continue the grow operation. Unless I'm missing something,
the degraded counter went out of whack. There was nothing in dmesg.

# cat /sys/block/md42/md/degraded 
2

# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md42 : active raid10 loop20[19] loop19[18] loop18[17] loop17[16] loop16[15] loop15[14] loop14[13] loop13[12] loop12[11] loop11[10] loop10[9] loop9[8] loop8[7] loop7[6] loop6[5] loop5[4] loop4[3] loop3[2] loop2[1] loop1[0]
      1013760 blocks super 1.2 512K chunks 2 near-copies [20/18] [UUUUUUUUUUUUUUUUUUUU]

After stopping and re-assembling, degraded went back to 0.
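
Nothing special, just the usual stop and assemble, roughly:

    mdadm --stop /dev/md42
    mdadm --assemble /dev/md42 /dev/loop{1..21}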

# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md42 : active raid10 loop1[0] loop20[19] loop19[18] loop18[17] loop17[16] loop16[15] loop15[14] loop14[13] loop13[12] loop12[11] loop11[10] loop10[9] loop9[8] loop8[7] loop7[6] loop6[5] loop5[4] loop4[3] loop3[2] loop2[1]
      1013760 blocks super 1.2 512K chunks 2 near-copies [20/20] [UUUUUUUUUUUUUUUUUUUU]

But this should be unrelated to your issue.
No idea what happened to you.
Sorry.

Regards
Andreas Klauer
