linux-raid.vger.kernel.org archive mirror
* Raid10 to Raid0 conversion
@ 2014-03-22 11:07 Marcin Wanat
  2014-03-23 18:19 ` Mikael Abrahamsson
  2014-03-31  6:38 ` NeilBrown
  0 siblings, 2 replies; 4+ messages in thread
From: Marcin Wanat @ 2014-03-22 11:07 UTC (permalink / raw)
  To: linux-raid

Hi,

I have a 4-disk RAID10 on my server and I am trying to grow it to 6 devices.
As a direct grow of RAID10 is unavailable, I decided to do it this way:

RAID10 -> RAID0 -> grow RAID0 to 3 devices -> RAID0 (3 devices) -> RAID10 (6 devices)
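
In shell terms I expect the sequence to look roughly like this (device names
other than /dev/sdb1 and /dev/sdd1 are hypothetical, and the exact takeover
options may differ between mdadm and kernel versions):

# mdadm /dev/md1 --fail /dev/sda1 --remove /dev/sda1    (one disk of each mirror pair)
# mdadm /dev/md1 --fail /dev/sdc1 --remove /dev/sdc1
# mdadm /dev/md1 --grow --level=0                       (takeover to 2-device RAID0)
# mdadm /dev/md1 --grow --raid-devices=3 --add /dev/sde1
# mdadm /dev/md1 --grow --level=10                      (back to a degraded RAID10)
# mdadm /dev/md1 --add /dev/sda1 /dev/sdc1 /dev/sdf1    (add the missing mirrors)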

But I have a problem at the first step. I have degraded my RAID10 array:
# mdadm --detail /dev/md1
/dev/md1:
         Version : 1.1
   Creation Time : Mon Sep  2 12:09:53 2013
      Raid Level : raid10
      Array Size : 1023996928 (976.56 GiB 1048.57 GB)
   Used Dev Size : 511998464 (488.28 GiB 524.29 GB)
    Raid Devices : 4
   Total Devices : 2
     Persistence : Superblock is persistent

     Update Time : Sat Mar 22 13:00:25 2014
           State : clean, degraded
  Active Devices : 2
Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0

          Layout : near=2
      Chunk Size : 512K

     Number   Major   Minor   RaidDevice State
        0       0        0        0      removed
        1       8       17        1      active sync   /dev/sdb1
        2       0        0        2      removed
        4       8       49        3      active sync   /dev/sdd1


And I want to change it to RAID0:
# mdadm /dev/md1 --grow --level=0
or:
# mdadm /dev/md1 --grow --raid-devices=2 --level=0

but the result is always the same:
mdadm: /dev/md1: could not set level to raid0

dmesg shows:
md/raid0:md1: All mirrors must be already degraded!
md: md1: raid0 would not accept array

But the array is already degraded... What am I doing wrong?

I am using the CentOS 6.5 default versions of the kernel and mdadm.


PS: I know that it is possible to grow RAID10 by creating a new array with
3 drives and 3 missing, and then moving the data between the arrays, but I am
trying to grow a live system without any downtime.
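
For completeness, that alternative would look roughly like this (all device
names and the new array name /dev/md2 are hypothetical):

# mdadm --create /dev/md2 --level=10 --raid-devices=6 \
        /dev/sde1 missing /dev/sdf1 missing /dev/sdg1 missing
(copy the data over, e.g. with rsync or pvmove, then add the old disks)
# mdadm /dev/md2 --add /dev/sda1 /dev/sdb1 /dev/sdc1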


Regards,
Marcin Wanat


* Re: Raid10 to Raid0 conversion
  2014-03-22 11:07 Raid10 to Raid0 conversion Marcin Wanat
@ 2014-03-23 18:19 ` Mikael Abrahamsson
  2014-03-31  6:38 ` NeilBrown
  1 sibling, 0 replies; 4+ messages in thread
From: Mikael Abrahamsson @ 2014-03-23 18:19 UTC (permalink / raw)
  To: Marcin Wanat; +Cc: linux-raid

On Sat, 22 Mar 2014, Marcin Wanat wrote:

> I am using Centos 6.5 default version of kernel and mdadm.

People here don't have that information. You'll increase your chance of
getting help if you post the output of "uname -a" and "mdadm -V" instead of
referring to the OS version.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Raid10 to Raid0 conversion
  2014-03-22 11:07 Raid10 to Raid0 conversion Marcin Wanat
  2014-03-23 18:19 ` Mikael Abrahamsson
@ 2014-03-31  6:38 ` NeilBrown
  2014-03-31 12:18   ` Marcin Wanat
  1 sibling, 1 reply; 4+ messages in thread
From: NeilBrown @ 2014-03-31  6:38 UTC (permalink / raw)
  To: Marcin Wanat; +Cc: linux-raid


On Sat, 22 Mar 2014 12:07:50 +0100 Marcin Wanat <mwanat@forall.pl> wrote:

> Hi,
> 
> I have a 4-disk RAID10 on my server and I am trying to grow it to 6 devices.
> As a direct grow of RAID10 is unavailable, I decided to do it this way:

It is available with the latest kernel and mdadm...
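
With a sufficiently recent kernel and mdadm, the direct path would be roughly
(the new disk names below are hypothetical):

# mdadm /dev/md1 --add /dev/sde1 /dev/sdf1
# mdadm /dev/md1 --grow --raid-devices=6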

> 
> RAID10 -> RAID0 -> grow RAID0 to 3 devices -> RAID0 (3 devices) -> RAID10 (6 devices)
> 
> But I have a problem at the first step. I have degraded my RAID10 array:
> # mdadm --detail /dev/md1
> /dev/md1:
>          Version : 1.1
>    Creation Time : Mon Sep  2 12:09:53 2013
>       Raid Level : raid10
>       Array Size : 1023996928 (976.56 GiB 1048.57 GB)
>    Used Dev Size : 511998464 (488.28 GiB 524.29 GB)
>     Raid Devices : 4
>    Total Devices : 2
>      Persistence : Superblock is persistent
> 
>      Update Time : Sat Mar 22 13:00:25 2014
>            State : clean, degraded
>   Active Devices : 2
> Working Devices : 2
>   Failed Devices : 0
>    Spare Devices : 0
> 
>           Layout : near=2
>       Chunk Size : 512K
> 
>      Number   Major   Minor   RaidDevice State
>         0       0        0        0      removed
>         1       8       17        1      active sync   /dev/sdb1
>         2       0        0        2      removed
>         4       8       49        3      active sync   /dev/sdd1
> 
> 
> And I want to change it to RAID0:
> # mdadm /dev/md1 --grow --level=0
> or:
> # mdadm /dev/md1 --grow --raid-devices=2 --level=0
> 
> but the result is always the same:
> mdadm: /dev/md1: could not set level to raid0
> 
> dmesg shows:
> md/raid0:md1: All mirrors must be already degraded!
> md: md1: raid0 would not accept array
> 
> But the array is already degraded... What am I doing wrong?

I don't think it is you.
What does /sys/block/md1/md/degraded contain?
If it isn't '2', then that is the problem.
Maybe if you stop the array and assemble it again it could get that right.
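
For example (assuming the two remaining members are /dev/sdb1 and /dev/sdd1,
as in your --detail output):

# cat /sys/block/md1/md/degraded
# mdadm --stop /dev/md1
# mdadm --assemble /dev/md1 /dev/sdb1 /dev/sdd1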

> 
> I am using the CentOS 6.5 default versions of the kernel and mdadm.

uname -a ; mdadm -V

is more helpful.

NeilBrown


> 
> 
> PS: I know that it is possible to grow RAID10 by creating a new array with
> 3 drives and 3 missing, and then moving the data between the arrays, but I am
> trying to grow a live system without any downtime.
> 
> 
> Regards,
> Marcin Wanat




* Re: Raid10 to Raid0 conversion
  2014-03-31  6:38 ` NeilBrown
@ 2014-03-31 12:18   ` Marcin Wanat
  0 siblings, 0 replies; 4+ messages in thread
From: Marcin Wanat @ 2014-03-31 12:18 UTC (permalink / raw)
  Cc: linux-raid

On 2014-03-31 08:38, NeilBrown wrote:
> I don't think it is you. What does /sys/block/md1/md/degraded contain?
> If it isn't '2', then that is the problem. Maybe if you stop the array
> and assemble it again it could get that right.

In fact, I resolved this issue a day later.
I saw that, despite the output of mdadm --detail (which showed WD=RD=4),
dmesg reported RD: 5, WD:4, so I guessed that the RAID array had not been
degraded correctly.
I fixed it by just stopping and reassembling the array, as you said.

Regards,
Marcin Wanat

