From: Andy Smith <andy@strugglers.net>
To: linux-raid@vger.kernel.org
Subject: Shrinking number of devices on a RAID-10 (near 2) array
Date: Sat, 23 Aug 2014 16:31:10 +0000
Message-ID: <20140823163110.GE11855@bitfolk.com>

Hi,

I am aware that for a very long time it was not possible to change
the number of devices in an mdadm RAID-10 array. Recently, though,
I'm fairly sure I saw threads here suggesting that this is now
possible, e.g.:

    http://marc.info/?l=linux-raid&m=140768923829685&w=2

I have a 6 device RAID-10 near=2 array that I would like to shrink
down to 4 devices.
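
To spell out the intent (the device count and backup path are just
what I'm using here, not a recommendation): the plan is a single
in-place reshape down to four devices, after first making sure
nothing is already resyncing. Roughly:

    $ cat /proc/mdstat     # check the array is clean, no resync/reshape running
    $ sudo ./mdadm --grow /dev/md2 --raid-devices=4 \
           --backup-file=/var/tmp/mdadm.backup

The exact invocation I tried, and the error it produced, are shown
further down.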

I have compiled mdadm 3.3.2 and am using kernel 3.16.0.
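
For anyone following along, those versions can be confirmed with:

    $ uname -r
    $ ./mdadm --version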

$ sudo mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Sun Jun  4 08:18:58 2006
     Raid Level : raid10
     Array Size : 471859200 (450.00 GiB 483.18 GB)
  Used Dev Size : 309363520 (295.03 GiB 316.79 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sat Aug 23 15:23:38 2014
          State : active 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K

           UUID : 3905b303:ca604b72:be5949c4:ab051b7a
         Events : 0.312149991

    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync   /dev/sdd3
       1       8       67        1      active sync   /dev/sde3
       2       8       83        2      active sync   /dev/sdf3
       3       8       19        3      active sync   /dev/sdb3
       4       8       35        4      active sync   /dev/sdc3
       5       8        3        5      active sync   /dev/sda3

$ sudo ./mdadm --grow -n4 /dev/md2 --backup-file /var/tmp/mdadm.backup
mdadm: Cannot set new_data_offset: RAID10 reshape not
       supported on this kernel
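
I don't know whether it's relevant that the array is still on 0.90
metadata (it dates from 2006), but for completeness that can be
double-checked with something like:

    $ cat /sys/block/md2/md/metadata_version
    $ sudo mdadm --examine /dev/sdd3 | grep -i version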

Is that not yet possible, then?

(Each device is 320 GB, so it should all fit with only four of them.)
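
In case my arithmetic is off: with the near=2 layout every block is
stored twice, so usable space is roughly (number of devices x device
size) / 2:

    6 x 320 GB / 2 = 960 GB possible now;   array size is ~483 GB
    4 x 320 GB / 2 = 640 GB after reshape;  still well above 483 GB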

Cheers,
Andy

Thread overview: 20+ messages
2014-08-23 16:31 Andy Smith [this message]
2014-08-24  3:09 ` Shrinking number of devices on a RAID-10 (near 2) array NeilBrown
2014-08-24  6:28   ` Craig Curtin
2014-08-24  6:45     ` NeilBrown
2014-08-24 13:19       ` Andy Smith
2014-08-24 14:39   ` Andy Smith
2014-08-25 10:32     ` Andy Smith
2014-08-25 11:26       ` NeilBrown
2014-08-25 11:34         ` Andy Smith
2014-08-28  9:53           ` Andy Smith
2014-08-29  3:53           ` NeilBrown
2014-08-29  4:02             ` Andy Smith
2014-08-29  4:18               ` NeilBrown
2014-08-29  4:26                 ` Andy Smith
2014-08-29  4:35                   ` NeilBrown
2014-08-29  4:42                     ` Andy Smith
2014-08-29  6:04                       ` NeilBrown
2014-08-29 20:45                         ` Andy Smith
2014-08-29 20:47   ` [PATCH 1/1] Grow: Report when grow needs metadata update Andy Smith
2014-09-03  3:28     ` NeilBrown
