* Raid 5 to 6 migration
@ 2013-02-01 17:17 Dominique
2013-02-01 17:50 ` Phil Turmel
0 siblings, 1 reply; 16+ messages in thread
From: Dominique @ 2013-02-01 17:17 UTC (permalink / raw)
To: linux-raid mailing list
Hi list,
I was wondering what would be the best way to convert a 6 hdd raid 5 to
raid 6. Ideally without having to reformat everything and lose all the
data in the process. I have backups of the data, but I don't look forward to
reinstalling everything on that server.
Thanks for the input.
Dominique
* Re: Raid 5 to 6 migration
2013-02-01 17:17 Raid 5 to 6 migration Dominique
@ 2013-02-01 17:50 ` Phil Turmel
2013-02-01 18:16 ` Roman Mamedov
[not found] ` <COL114-W137DD8871889508BBF087BC91C0@phx.gbl>
0 siblings, 2 replies; 16+ messages in thread
From: Phil Turmel @ 2013-02-01 17:50 UTC (permalink / raw)
To: Dominique; +Cc: linux-raid mailing list
On 02/01/2013 12:17 PM, Dominique wrote:
> Hi list,
>
> I was wondering what would be the best way to convert a 6 hdd raid 5 to
> raid 6. Ideally without having to reformat everything and lose all the
> data in the process. I have backups of the data, but I don't look forward to
> reinstalling everything on that server.
1) Add a seventh drive to the system.
You can do this on the run if you have hotplug-capable sata ports or sas
ports.
2) Add that drive to the array w/ "mdadm --add"
3) Convert array to raid6 w/ "mdadm --grow --level=raid6"
You can do 2 & 3 on the run.
4) Monitor /proc/mdstat to see when it finishes.
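As a minimal illustration of steps 2-4 (assuming the array is /dev/md0 and
the new disk appears as /dev/sdg; substitute your own device names):

    mdadm /dev/md0 --add /dev/sdg
    mdadm --grow /dev/md0 --level=raid6 --raid-devices=7
    cat /proc/mdstat    # re-run (or wrap in "watch") to follow the reshape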
HTH,
Phil
* Re: Raid 5 to 6 migration
2013-02-01 17:50 ` Phil Turmel
@ 2013-02-01 18:16 ` Roman Mamedov
2013-02-01 18:19 ` Phil Turmel
[not found] ` <COL114-W137DD8871889508BBF087BC91C0@phx.gbl>
1 sibling, 1 reply; 16+ messages in thread
From: Roman Mamedov @ 2013-02-01 18:16 UTC (permalink / raw)
To: Phil Turmel; +Cc: Dominique, linux-raid mailing list
On Fri, 01 Feb 2013 12:50:45 -0500
Phil Turmel <philip@turmel.org> wrote:
> On 02/01/2013 12:17 PM, Dominique wrote:
> > Hi list,
> >
> > I was wondering what would be the best way to convert a 6 hdd raid 5 to
> > raid 6. Ideally without having to reformat everything and lose all the
> > data in the process. I have backups of the data, but I don't look forward to
> > reinstalling everything on that server.
...
> 3) Convert array to raid6 w/ "mdadm --grow --level=raid6"
--layout=preserve will make it an order of magnitude faster. Read "man mdadm"
for more details. (And "man md" to learn about things in general.)
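For illustration only (array name and device count are placeholders), the
grow command with the preserved layout would look something like:

    mdadm --grow /dev/md0 --level=raid6 --raid-devices=7 --layout=preserve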
That's all while still assuming you *can* or want to add a 7th drive.
If not, things become more complex.
--
With respect,
Roman
* Re: Raid 5 to 6 migration
2013-02-01 18:16 ` Roman Mamedov
@ 2013-02-01 18:19 ` Phil Turmel
2013-02-01 18:49 ` Chris Murphy
0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2013-02-01 18:19 UTC (permalink / raw)
To: Roman Mamedov; +Cc: Dominique, linux-raid mailing list
On 02/01/2013 01:16 PM, Roman Mamedov wrote:
> On Fri, 01 Feb 2013 12:50:45 -0500
> Phil Turmel <philip@turmel.org> wrote:
>
>> On 02/01/2013 12:17 PM, Dominique wrote:
>>> Hi list,
>>>
>>> I was wondering what would be the best way to convert a 6 hdd raid 5 to
>>> raid 6. Ideally without having to reformat everything and lose all the
>>> data in the process. I have backups of the data, but I don't look forward to
>>> reinstalling everything on that server.
>
> ...
>
>> 3) Convert array to raid6 w/ "mdadm --grow --level=raid6"
>
> --layout=preserve will make it an order of magnitude faster. Read "man mdadm"
> for more details. (And "man md" to learn about things in general.)
But will leave the array "unbalanced"--the 7th drive will never be used
for reads in normal (non-degraded) operation--reducing read performance.
> That's all while still assuming you *can* or want to add a 7th drive.
> If not, things become more complex.
Indeed.
Phil
* Re: Raid 5 to 6 migration
[not found] ` <COL114-W137DD8871889508BBF087BC91C0@phx.gbl>
@ 2013-02-01 18:22 ` Phil Turmel
2013-02-03 10:00 ` Roy Sigurd Karlsbakk
2013-02-04 12:51 ` Dominique
0 siblings, 2 replies; 16+ messages in thread
From: Phil Turmel @ 2013-02-01 18:22 UTC (permalink / raw)
To: Dominique C.; +Cc: linux-raid
Hi Dominique,
[Top-posting repaired. Please don't]
On 02/01/2013 01:19 PM, Dominique C. wrote:
[trim /]
>>> I was wondering what would be the best way to convert a 6 hdd raid 5 to
>>> raid 6. Ideally without having to reformat everything and lose all the
>>> data in the process. I have backups of the data, but I don't look forward to
>>> reinstalling everything on that server.
>>
>> 1) Add a seventh drive to the system.
>>
>> You can do this on the run if you have hotplug-capable sata ports or sas
>> ports.
>>
>> 2) Add that drive to the array w/ "mdadm --add"
>> 3) Convert array to raid6 w/ "mdadm --grow --level=raid6"
>>
>> You can do 2 & 3 on the run.
>>
>> 4) Monitor /proc/mdstat to see when it finishes.
> Hi Phil,
>
> Thanks for your answer; however, I forgot to mention one thing: I used
> 6 HDDs out of 6 possible. No possibility to expand at this stage...
> The data only uses part of the space available.
> Any other ideas?
Shrinking is usually possible, but there are many more steps, depending
on what is in the array.
Please provide more detail. The output of "lsdrv"[1] might be a good start.
Phil
[1] http://github.com/pturmel/lsdrv
* Re: Raid 5 to 6 migration
2013-02-01 18:19 ` Phil Turmel
@ 2013-02-01 18:49 ` Chris Murphy
2013-02-01 22:04 ` Phil Turmel
0 siblings, 1 reply; 16+ messages in thread
From: Chris Murphy @ 2013-02-01 18:49 UTC (permalink / raw)
To: Phil Turmel; +Cc: Roman Mamedov, linux-raid mailing list
On Feb 1, 2013, at 11:19 AM, Phil Turmel <philip@turmel.org> wrote:
> On 02/01/2013 01:16 PM, Roman Mamedov wrote:
>>
>> --layout=preserve will make it an order of magnitude faster. Read "man mdadm"
>> for more details. (And "man md" to learn about things in general.)
>
> But will leave the array "unbalanced"--the 7th drive will never be used
> for reads in normal (non-degraded) operation--reducing read performance.
Does this option mean, in effect, the 7th disk is just a parity disk? So, sorta like a hybrid RAID 5/RAID4.
Chris Murphy
* Re: Raid 5 to 6 migration
2013-02-01 18:49 ` Chris Murphy
@ 2013-02-01 22:04 ` Phil Turmel
2013-02-01 22:38 ` Chris Murphy
0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2013-02-01 22:04 UTC (permalink / raw)
To: Chris Murphy; +Cc: Roman Mamedov, linux-raid mailing list
On 02/01/2013 01:49 PM, Chris Murphy wrote:
>
> On Feb 1, 2013, at 11:19 AM, Phil Turmel <philip@turmel.org> wrote:
>
>> On 02/01/2013 01:16 PM, Roman Mamedov wrote:
>>>
>>> --layout=preserve will make it an order of magnitude faster. Read "man mdadm"
>>> for more details. (And "man md" to learn about things in general.)
>>
>> But will leave the array "unbalanced"--the 7th drive will never be used
>> for reads in normal (non-degraded) operation--reducing read performance.
>
> Does this option mean, in effect, the 7th disk is just a parity disk? So, sorta like a hybrid RAID 5/RAID4.
Yes, with just the Q left on the 7th disk. The remainder of the array
is the normal raid5 stripe. See the manpage for the layout
"left-symmetric-6".
Phil
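P.S. A sketch, with a placeholder device name: the resulting layout shows
up in --detail output, and the man page also describes converting such a
parity-at-the-end layout to a regular raid6 layout later (a full restripe,
and only if your mdadm supports it):

    mdadm --detail /dev/md0 | grep Layout
    #        Layout : left-symmetric-6
    # later, if read balance matters (slow; see man mdadm for --layout=normalise):
    mdadm --grow /dev/md0 --layout=normalise --backup-file=/root/md0-layout.backup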
* Re: Raid 5 to 6 migration
2013-02-01 22:04 ` Phil Turmel
@ 2013-02-01 22:38 ` Chris Murphy
0 siblings, 0 replies; 16+ messages in thread
From: Chris Murphy @ 2013-02-01 22:38 UTC (permalink / raw)
To: Phil Turmel; +Cc: Roman Mamedov, linux-raid mailing list
On Feb 1, 2013, at 3:04 PM, Phil Turmel <philip@turmel.org> wrote:
> On 02/01/2013 01:49 PM, Chris Murphy wrote:
>>
>>
>> Does this option mean, in effect, the 7th disk is just a parity disk? So, sorta like a hybrid RAID 5/RAID4.
>
> Yes, with just the Q left on the 7th disk. The remainder of the array
> is the normal raid5 stripe. See the manpage for the layout
> "left-symmetric-6".
Just when I thought I had some lanterns illuminating most of the rabbit hole…
Chris Murphy
* Re: Raid 5 to 6 migration
2013-02-01 18:22 ` Phil Turmel
@ 2013-02-03 10:00 ` Roy Sigurd Karlsbakk
2013-02-03 14:30 ` Phil Turmel
2013-02-04 12:51 ` Dominique
1 sibling, 1 reply; 16+ messages in thread
From: Roy Sigurd Karlsbakk @ 2013-02-03 10:00 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid, Dominique C.
> Shrinking is usually possible, but there are many more steps,
> depending
> on what is in the array.
>
> Please provide more detail. The output of "lsdrv"[1] might be a good
> start.
lsdrv? I can't find any references to that…
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
roy@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of xenotypic etymology. In most cases, adequate and relevant synonyms exist in Norwegian.
* Re: Raid 5 to 6 migration
2013-02-03 10:00 ` Roy Sigurd Karlsbakk
@ 2013-02-03 14:30 ` Phil Turmel
2013-02-03 22:01 ` Robin Hill
0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2013-02-03 14:30 UTC (permalink / raw)
To: Roy Sigurd Karlsbakk; +Cc: linux-raid, Dominique C.
On 02/03/2013 05:00 AM, Roy Sigurd Karlsbakk wrote:
>> Shrinking is usually possible, but there are many more steps,
>> depending
>> on what is in the array.
>>
>> Please provide more detail. The output of "lsdrv"[1] might be a good
>> start.
>
> lsdrv? I can't find any references to that…
Whoops, forgot the reference. I created it just for this sort of
situation, and people on this list suggested I publish it and make it
more general. It's a work in progress, so bug reports welcome:
http://github.com/pturmel/lsdrv
Phil
* Re: Raid 5 to 6 migration
2013-02-03 14:30 ` Phil Turmel
@ 2013-02-03 22:01 ` Robin Hill
0 siblings, 0 replies; 16+ messages in thread
From: Robin Hill @ 2013-02-03 22:01 UTC (permalink / raw)
To: Phil Turmel; +Cc: Roy Sigurd Karlsbakk, linux-raid, Dominique C.
On Sun Feb 03, 2013 at 09:30:18 -0500, Phil Turmel wrote:
> On 02/03/2013 05:00 AM, Roy Sigurd Karlsbakk wrote:
> >> Shrinking is usually possible, but there are many more steps,
> >> depending
> >> on what is in the array.
> >>
> >> Please provide more detail. The output of "lsdrv"[1] might be a good
> >> start.
> >
> > lsdrv? I can't find any references to that…
>
> Whoops, forgot the reference. I created it just for this sort of
> situation, and people on this list suggested I publish it and make it
> more general. It's a work in progress, so bug reports welcome:
>
> http://github.com/pturmel/lsdrv
>
No, you'd included the reference. I guess Roy just forgot to check the
footnotes.
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: Raid 5 to 6 migration
2013-02-01 18:22 ` Phil Turmel
2013-02-03 10:00 ` Roy Sigurd Karlsbakk
@ 2013-02-04 12:51 ` Dominique
2013-02-04 18:39 ` Phil Turmel
1 sibling, 1 reply; 16+ messages in thread
From: Dominique @ 2013-02-04 12:51 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
On 01/02/2013 19:22, Phil Turmel wrote:
> Hi Dominique,
>
> [Top-posting repaired. Please don't]
>
> On 02/01/2013 01:19 PM, Dominique C. wrote:
> [trim /]
>
>>>> I was wondering what would be the best way to convert a 6 hdd raid 5 to
>>>> raid 6. Ideally without having to reformat everything and lose all the
>>>> data in the process. I have backups of the data, but I don't look forward to
>>>> reinstalling everything on that server.
>>> 1) Add a seventh drive to the system.
>>>
>>> You can do this on the run if you have hotplug-capable sata ports or sas
>>> ports.
>>>
>>> 2) Add that drive to the array w/ "mdadm --add"
>>> 3) Convert array to raid6 w/ "mdadm --grow --level=raid6"
>>>
>>> You can do 2 & 3 on the run.
>>>
>>> 4) Monitor /proc/mdstat to see when it finishes.
>> Hi Phil,
>>
>> Thanks for your answer; however, I forgot to mention one thing: I used
>> 6 HDDs out of 6 possible. No possibility to expand at this stage...
>> The data only uses part of the space available.
>> Any other ideas?
> Shrinking is usually possible, but there are many more steps, depending
> on what is in the array.
>
> Please provide more detail. The output of "lsdrv"[1] might be a good start.
>
> Phil
>
> [1] http://github.com/pturmel/lsdrv
>
>
>
Hi Phil,
Back from the weekend, and still with my problem. The output of lsdrv is as follows:
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 6 Series/C200
Series Chipset Family SATA AHCI Controller (rev 05)
├scsi 0:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1618149}
│└sda 2.73t [8:0] Partitioned (gpt)
│ ├sda1 95.37m [8:1] vfat {89EF-00F4}
│ │└Mounted as /dev/sda1 @ /boot/efi
│ ├sda2 29.80g [8:2] MD raid1 (0/6) (w/ sdd2,sde2,sdc2,sdb2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sda3 186.26g [8:3] MD raid1 (0/6) (w/ sdf3,sdd3,sde3,sdc3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ │ ├Mounted as
/dev/disk/by-uuid/ca4bff22-40d8-4b31-859e-bba063f01df1 @ /
│ │ └Mounted as
/dev/disk/by-uuid/ca4bff22-40d8-4b31-859e-bba063f01df1 @
/var/spool/hylafax/etc
│ └sda4 2.52t [8:4] MD raid5 (0/6) (w/ sdd4,sdb4,sde4,sdc4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ │ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
│ └Mounted as /dev/md2 @ /srv
├scsi 1:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1382990}
│└sdb 2.73t [8:16] Partitioned (gpt)
│ ├sdb1 95.37m [8:17] vfat {6EFD-1659}
│ ├sdb2 29.80g [8:18] MD raid1 (1/6) (w/ sda2,sdd2,sde2,sdc2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sdb3 186.26g [8:19] MD raid1 (1/6) (w/ sdf3,sdd3,sde3,sda3,sdc3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ └sdb4 2.52t [8:20] MD raid5 (1/6) (w/ sdd4,sda4,sde4,sdc4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
├scsi 2:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ0995502}
│└sdc 2.73t [8:32] Partitioned (gpt)
│ ├sdc1 95.37m [8:33] vfat {16BF-AABE}
│ ├sdc2 29.80g [8:34] MD raid1 (2/6) (w/ sda2,sdd2,sde2,sdb2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sdc3 186.26g [8:35] MD raid1 (2/6) (w/ sdf3,sdd3,sde3,sda3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ └sdc4 2.52t [8:36] MD raid5 (2/6) (w/ sdd4,sda4,sdb4,sde4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
├scsi 3:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1118226}
│└sdd 2.73t [8:48] Partitioned (gpt)
│ ├sdd1 95.37m [8:49] vfat {978F-21B2}
│ ├sdd2 29.80g [8:50] MD raid1 (3/6) (w/ sda2,sde2,sdc2,sdb2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sdd3 186.26g [8:51] MD raid1 (3/6) (w/ sdf3,sde3,sda3,sdc3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ └sdd4 2.52t [8:52] MD raid5 (3/6) (w/ sda4,sdb4,sde4,sdc4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
├scsi 4:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1649385}
│└sde 2.73t [8:64] Partitioned (gpt)
│ ├sde1 95.37m [8:65] vfat {8875-2D50}
│ ├sde2 29.80g [8:66] MD raid1 (4/6) (w/ sda2,sdd2,sdc2,sdb2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sde3 186.26g [8:67] MD raid1 (4/6) (w/ sdf3,sdd3,sda3,sdc3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ └sde4 2.52t [8:68] MD raid5 (4/6) (w/ sdd4,sda4,sdb4,sdc4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
â””scsi 5:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1383151}
â””sdf 2.73t [8:80] Partitioned (gpt)
├sdf1 95.37m [8:81] vfat {A89B-DC05}
├sdf2 29.80g [8:82] MD raid1 (5/6) (w/ sda2,sdd2,sde2,sdc2,sdb2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
├sdf3 186.26g [8:83] MD raid1 (5/6) (w/ sdd3,sde3,sda3,sdc3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
â””sdf4 2.52t [8:84] MD raid5 (5/6) (w/ sdd4,sda4,sdb4,sde4,sdc4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
â””md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
USB [usb-storage] Bus 002 Device 004: ID 059f:1010 LaCie, Ltd Desktop
Hard Drive {ST3500830A 9QG6RC54}
â””scsi 6:0:0:0 ST350083 0AS
â””sdg 465.76g [8:96] Partitioned (dos)
â””sdg1 465.76g [8:97] ext4 {4c2f6b92-829d-4e53-b553-c07e0f571e02}
â””Mounted as /dev/sdg1 @ /mnt
Other Block Devices
├loop0 0.00k [7:0] Empty/Unknown
├loop1 0.00k [7:1] Empty/Unknown
├loop2 0.00k [7:2] Empty/Unknown
├loop3 0.00k [7:3] Empty/Unknown
├loop4 0.00k [7:4] Empty/Unknown
├loop5 0.00k [7:5] Empty/Unknown
├loop6 0.00k [7:6] Empty/Unknown
├loop7 0.00k [7:7] Empty/Unknown
├ram0 64.00m [1:0] Empty/Unknown
├ram1 64.00m [1:1] Empty/Unknown
├ram2 64.00m [1:2] Empty/Unknown
├ram3 64.00m [1:3] Empty/Unknown
├ram4 64.00m [1:4] Empty/Unknown
├ram5 64.00m [1:5] Empty/Unknown
├ram6 64.00m [1:6] Empty/Unknown
├ram7 64.00m [1:7] Empty/Unknown
├ram8 64.00m [1:8] Empty/Unknown
├ram9 64.00m [1:9] Empty/Unknown
├ram10 64.00m [1:10] Empty/Unknown
├ram11 64.00m [1:11] Empty/Unknown
├ram12 64.00m [1:12] Empty/Unknown
├ram13 64.00m [1:13] Empty/Unknown
├ram14 64.00m [1:14] Empty/Unknown
â””ram15 64.00m [1:15] Empty/Unknown
The RAID 5 volume is md2, with 6 disks out of 6 in use. No spare. And no
space to add another one...
Dominique
* Re: Raid 5 to 6 migration
2013-02-04 12:51 ` Dominique
@ 2013-02-04 18:39 ` Phil Turmel
2013-02-05 0:58 ` Brad Campbell
2013-02-05 10:36 ` Dominique
0 siblings, 2 replies; 16+ messages in thread
From: Phil Turmel @ 2013-02-04 18:39 UTC (permalink / raw)
To: Dominique; +Cc: linux-raid
On 02/04/2013 07:51 AM, Dominique wrote:
> Back from the weekend, and still with my problem. The output of lsdrv is as follows:
Yuck, mangled utf-8. Does it look that way on your console?
[output w/ fixed tree characters pasted below]
First item, greatest importance: You are using Western Digital Green
drives. These are *unsafe* to use in raid arrays of any kind "out of
the box". They do *not* support SCTERC. You *must* use a boot-up
script to set the linux driver timeouts to two minutes or more or you
*will* crash this array.
I recommend a three minute timeout. Something like this:
> for x in /sys/block/sd[abcdef]/driver/timeout ; do
> echo 180 > $x
> done
in "rc.local" or wherever your distribution likes such things. And
don't wait for your next reboot--execute it now.
If you don't have that in place, it suggests that your array is very new
or you are not running any regular "scrub". It is important that you
not be vulnerable during the reshape, as the array will be heavily
exercised. If you aren't regularly scrubbing, execute:
> echo "check" >/sys/block/md2/md/sync_action
then monitor /proc/mdstat until it completes. (Several hours) Then
look at the mismatch count:
> cat /sys/block/md2/md/mismatch_cnt
It should be zero.
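To follow the check while it runs, one option (just a sketch) is:

    watch -n 60 'cat /proc/mdstat; cat /sys/block/md2/md/sync_completed'

where sync_completed reports sectors done out of the total.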
Otherwise, your array is a simple ext4 filesystem without any apparent
system dependencies. So you should be able to do everything needed in
your normal boot environment, just shutting down the services that are
using the data in "/srv". Once the filesystem itself is resized
(possibly quite quickly if your array has a great deal of free space),
the remainder of the work can occur online (with /srv mounted again and
your services restarted).
Here's your recipe:
1) Stop all services using /srv, then unmount with:
> umount /srv
2) Make sure the filesystem is consistent:
> fsck /dev/md2
3) Determine resizing options:
> resize2fs -P /dev/md2
This will report the number of blocks needed for the current contents
(usually in 4k blocks). I don't recommend resizing all the way to the
minimum, as it may take much longer. Just make sure you can shrink to
~10 terabytes.
4) Resize:
> resize2fs /dev/md2 10240G
5) Verify Ok:
> fsck -n /dev/md2
6) Instruct md to temporarily use less than the future size of md2 (but
more than the filesystem):
> mdadm --grow /dev/md2 --array-size=10241G
7) Verify Again:
> fsck -n /dev/md2
8) Instruct md to reshape to raid6 and view its progress:
> mdadm --grow /dev/md2 --level=raid6 --raid-devices=6
> cat /proc/mdstat
(The reshape will continue in the background.)
9) If you need the services as soon as possible, the filesystem can be
remounted at this point, and the services restarted. If /srv is in your
fstab, just use:
> mount /srv
10) Depending on the speed of your components, and how heavily you use
the array, the reshape can take several hours to days. I would expect
yours to take at least 7 hours, best case. Once complete, you can
resize once more to the maximum available (this can be done while
mounted and the services running):
> mdadm --grow /dev/md2 --array-size=max
> resize2fs /dev/md2
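As a back-of-the-envelope check on the sizes above (member size read off
the lsdrv output; exact numbers should come from mdadm --detail): going
from raid5 to raid6 on the same 6 devices drops the data members from 5 to
4, so the post-reshape capacity is roughly

    # 4 data members of ~2.52 TiB (~2580 GiB) each
    echo $(( 4 * 2580 ))    # prints 10320 (GiB)

which is why a 10240G filesystem and a 10241G temporary array size both sit
safely under the new ceiling.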
If you run into any problems (or questions), let us know.
Regards,
Phil
--
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 6 Series/C200
Series Chipset Family SATA AHCI Controller (rev 05)
├scsi 0:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1618149}
│└sda 2.73t [8:0] Partitioned (gpt)
│ ├sda1 95.37m [8:1] vfat {89EF-00F4}
│ │└Mounted as /dev/sda1 @ /boot/efi
│ ├sda2 29.80g [8:2] MD raid1 (0/6) (w/ sdd2,sde2,sdc2,sdb2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sda3 186.26g [8:3] MD raid1 (0/6) (w/ sdf3,sdd3,sde3,sdc3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ │ ├Mounted as /dev/disk/by-uuid/ca4bff22-40d8-4b31-859e-bba063f01df1 @ /
│ │ └Mounted as /dev/disk/by-uuid/ca4bff22-40d8-4b31-859e-bba063f01df1 @
/var/spool/hylafax/etc
│ └sda4 2.52t [8:4] MD raid5 (0/6) (w/ sdd4,sdb4,sde4,sdc4,sdf4) in_sync
'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ │ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
│ └Mounted as /dev/md2 @ /srv
├scsi 1:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1382990}
│└sdb 2.73t [8:16] Partitioned (gpt)
│ ├sdb1 95.37m [8:17] vfat {6EFD-1659}
│ ├sdb2 29.80g [8:18] MD raid1 (1/6) (w/ sda2,sdd2,sde2,sdc2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sdb3 186.26g [8:19] MD raid1 (1/6) (w/ sdf3,sdd3,sde3,sda3,sdc3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ └sdb4 2.52t [8:20] MD raid5 (1/6) (w/ sdd4,sda4,sde4,sdc4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
├scsi 2:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ0995502}
│└sdc 2.73t [8:32] Partitioned (gpt)
│ ├sdc1 95.37m [8:33] vfat {16BF-AABE}
│ ├sdc2 29.80g [8:34] MD raid1 (2/6) (w/ sda2,sdd2,sde2,sdb2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sdc3 186.26g [8:35] MD raid1 (2/6) (w/ sdf3,sdd3,sde3,sda3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ └sdc4 2.52t [8:36] MD raid5 (2/6) (w/ sdd4,sda4,sdb4,sde4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
├scsi 3:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1118226}
│└sdd 2.73t [8:48] Partitioned (gpt)
│ ├sdd1 95.37m [8:49] vfat {978F-21B2}
│ ├sdd2 29.80g [8:50] MD raid1 (3/6) (w/ sda2,sde2,sdc2,sdb2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sdd3 186.26g [8:51] MD raid1 (3/6) (w/ sdf3,sde3,sda3,sdc3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ └sdd4 2.52t [8:52] MD raid5 (3/6) (w/ sda4,sdb4,sde4,sdc4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
├scsi 4:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1649385}
│└sde 2.73t [8:64] Partitioned (gpt)
│ ├sde1 95.37m [8:65] vfat {8875-2D50}
│ ├sde2 29.80g [8:66] MD raid1 (4/6) (w/ sda2,sdd2,sdc2,sdb2,sdf2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│ │└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ │ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
│ ├sde3 186.26g [8:67] MD raid1 (4/6) (w/ sdf3,sdd3,sda3,sdc3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│ │└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ │ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
│ └sde4 2.52t [8:68] MD raid5 (4/6) (w/ sdd4,sda4,sdb4,sdc4,sdf4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
│ └md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
│ ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
└scsi 5:0:0:0 ATA WDC WD30EZRX-00M {WD-WCAWZ1383151}
└sdf 2.73t [8:80] Partitioned (gpt)
├sdf1 95.37m [8:81] vfat {A89B-DC05}
├sdf2 29.80g [8:82] MD raid1 (5/6) (w/ sda2,sdd2,sde2,sdc2,sdb2)
in_sync 'solipym:0' {d2e6885b-2d25-6c5a-5f3d-e9a9a5daa736}
│└md0 29.80g [9:0] MD v1.2 raid1 (6) clean
{d2e6885b:2d256c5a:5f3de9a9:a5daa736}
│ swap {8e565dfc-c5fa-4c76-abe9-f3a5a6e8fcff}
├sdf3 186.26g [8:83] MD raid1 (5/6) (w/ sdd3,sde3,sda3,sdc3,sdb3)
in_sync 'solipym:1' {89d69c6f-2ea2-23f5-aec8-b67c403488ad}
│└md1 186.26g [9:1] MD v1.2 raid1 (6) clean
{89d69c6f:2ea223f5:aec8b67c:403488ad}
│ ext4 {ca4bff22-40d8-4b31-859e-bba063f01df1}
└sdf4 2.52t [8:84] MD raid5 (5/6) (w/ sdd4,sda4,sdb4,sde4,sdc4)
in_sync 'solipym:2' {84672b4c-8fae-7f38-bb4c-c911aa9d7444}
└md2 12.59t [9:2] MD v1.2 raid5 (6) clean, 512k Chunk
{84672b4c:8fae7f38:bb4cc911:aa9d7444}
ext4 {a9cfe95c-1e62-4cf6-b2aa-65abbd0c77e4}
USB [usb-storage] Bus 002 Device 004: ID 059f:1010 LaCie, Ltd Desktop
Hard Drive {ST3500830A 9QG6RC54}
└scsi 6:0:0:0 ST350083 0AS
└sdg 465.76g [8:96] Partitioned (dos)
└sdg1 465.76g [8:97] ext4 {4c2f6b92-829d-4e53-b553-c07e0f571e02}
└Mounted as /dev/sdg1 @ /mnt
Other Block Devices
├loop0 0.00k [7:0] Empty/Unknown
├loop1 0.00k [7:1] Empty/Unknown
├loop2 0.00k [7:2] Empty/Unknown
├loop3 0.00k [7:3] Empty/Unknown
├loop4 0.00k [7:4] Empty/Unknown
├loop5 0.00k [7:5] Empty/Unknown
├loop6 0.00k [7:6] Empty/Unknown
├loop7 0.00k [7:7] Empty/Unknown
├ram0 64.00m [1:0] Empty/Unknown
├ram1 64.00m [1:1] Empty/Unknown
├ram2 64.00m [1:2] Empty/Unknown
├ram3 64.00m [1:3] Empty/Unknown
├ram4 64.00m [1:4] Empty/Unknown
├ram5 64.00m [1:5] Empty/Unknown
├ram6 64.00m [1:6] Empty/Unknown
├ram7 64.00m [1:7] Empty/Unknown
├ram8 64.00m [1:8] Empty/Unknown
├ram9 64.00m [1:9] Empty/Unknown
├ram10 64.00m [1:10] Empty/Unknown
├ram11 64.00m [1:11] Empty/Unknown
├ram12 64.00m [1:12] Empty/Unknown
├ram13 64.00m [1:13] Empty/Unknown
├ram14 64.00m [1:14] Empty/Unknown
└ram15 64.00m [1:15] Empty/Unknown
* Re: Raid 5 to 6 migration
2013-02-04 18:39 ` Phil Turmel
@ 2013-02-05 0:58 ` Brad Campbell
2013-02-05 10:36 ` Dominique
1 sibling, 0 replies; 16+ messages in thread
From: Brad Campbell @ 2013-02-05 0:58 UTC (permalink / raw)
To: Phil Turmel; +Cc: Dominique, linux-raid
On 05/02/13 02:39, Phil Turmel wrote:
> First item, greatest importance: You are using Western Digital Green
> drives. These are *unsafe* to use in raid arrays of any kind "out of
> the box". They do *not* support SCTERC. You *must* use a boot-up
Warning, digression: how's this for odd? I have 10 WD20EARS drives in my
machine and 9 of them support SCTERC. The firmware version is the same
across all drives, and I bought them at the same time.
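For anyone wanting to check their own drives, smartctl can query and set
the ERC timers (an illustrative sketch; values are in tenths of a second):

    smartctl -l scterc /dev/sda          # show current read/write ERC settings
    smartctl -l scterc,70,70 /dev/sda    # set both to 7.0 seconds, where supported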
* Re: Raid 5 to 6 migration
2013-02-04 18:39 ` Phil Turmel
2013-02-05 0:58 ` Brad Campbell
@ 2013-02-05 10:36 ` Dominique
2013-02-05 13:04 ` Phil Turmel
1 sibling, 1 reply; 16+ messages in thread
From: Dominique @ 2013-02-05 10:36 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
On 04/02/2013 19:39, Phil Turmel wrote:
> On 02/04/2013 07:51 AM, Dominique wrote:
>> Back from the weekend, and still with my problem. The output of lsdrv is as follows:
> Yuck, mangled utf-8. Does it look that way on your console?
> [output w/ fixed tree characters pasted below]
>
> First item, greatest importance: You are using Western Digital Green
> drives. These are *unsafe* to use in raid arrays of any kind "out of
> the box". They do *not* support SCTERC. You *must* use a boot-up
> script to set the linux driver timeouts to two minutes or more or you
> *will* crash this array.
>
> I recommend a three minute timeout. Something like this:
>> for x in /sys/block/sd[abcdef]/driver/timeout ; do
>> echo 180 > $x
>> done
> in "rc.local" or wherever your distribution likes such things. And
> don't wait for your next reboot--execute it now.
>
> If you don't have that in place, it suggests that your array is very new
> or you are not running any regular "scrub". It is important that you
> not be vulnerable during the reshape, as the array will be heavily
> exercised. If you aren't regularly scrubbing, execute:
>> echo "check" >/sys/block/md2/md/sync_action
> then monitor /proc/mdstat until it completes. (Several hours) Then
> look at the mismatch count:
>> cat /sys/block/md2/md/mismatch_cnt
> It should be zero.
>
> Otherwise, your array is a simple ext4 filesystem without any apparent
> system dependencies. So you should be able to do everything needed in
> your normal boot environment, just shutting down the services that are
> using the data in "/srv". Once the filesystem itself is resized
> (possibly quite quickly if your array has a great deal of free space),
> the remainder of the work can occur online (with /srv mounted again and
> your services restarted).
>
> Here's your recipe:
>
> 1) Stop all services using /srv, then unmount with:
>> umount /srv
> 2) Make sure the filesystem is consistent:
>> fsck /dev/md2
> 3) Determine resizing options:
>> resize2fs -P /dev/md2
> This will report the number of blocks needed for the current contents
> (usually in 4k blocks). I don't recommend resizing all the way to the
> minimum, as it may take much longer. Just make sure you can shrink to
> ~10 terabytes.
>
> 4) Resize:
>> resize2fs /dev/md2 10240G
> 5) Verify Ok:
>> fsck -n /dev/md2
> 6) Instruct md to temporarily use less than the future size of md2 (but
> more than the filesystem):
>> mdadm --grow /dev/md2 --array-size=10241G
> 7) Verify Again:
>> fsck -n /dev/md2
> 8) Instruct md to reshape to raid6 and view its progress:
>> mdadm --grow /dev/md2 --level=raid6 --raid-devices=6
>> cat /proc/mdstat
> (The reshape will continue in the background.)
>
> 9) If you need the services as soon as possible, the filesystem can be
> remounted at this point, and the services restarted. If /srv is in your
> fstab, just use:
>> mount /srv
> 10) Depending on the speed of your components, and how heavily you use
> the array, the reshape can take several hours to days. I would expect
> yours to take at least 7 hours, best case. Once complete, you can
> resize once more to the maximum available (this can be done while
> mounted and the services running):
>> mdadm --grow /dev/md2 --array-size=max
>> resize2fs /dev/md2
> If you run into any problems (or questions), let us know.
>
> Regards,
>
> Phil
>
> [trim /]
Thanks for the detailed answer. I read through all of it to identify
what I did not understand before starting. I have a few clarifications
for you and a few questions.
Yes, that's the way my console looks. I usually don't have problems
with UTF-8 output (i.e. not garbled), so I figured it was the result of
lsdrv... Nasty to read, but it's all there.
I am running Ubuntu 12.04.1 server on this relatively new raid5 setup.
I think I understood most of the reshaping instructions, but I need to
clarify your point about the timeout. I tried to execute it in a simple
bash script and got stopped at the beginning by the lack of a timeout file.
After looking for it, I realised that sd[abcdef] were all symlinks
pointing to another part of the system, where there is no driver
directory. After browsing, I figured out that /sys/block/sda/device did
lead to a timeout file. Can you please confirm it is the one we want?
One last point I need to clarify: is the reshaping (although long)
data-destructive? (A backup will be done in all cases.)
Thanks,
Dominique
* Re: Raid 5 to 6 migration
2013-02-05 10:36 ` Dominique
@ 2013-02-05 13:04 ` Phil Turmel
0 siblings, 0 replies; 16+ messages in thread
From: Phil Turmel @ 2013-02-05 13:04 UTC (permalink / raw)
To: Dominique; +Cc: linux-raid
On 02/05/2013 05:36 AM, Dominique wrote:
>
> Thanks for the detailed answer. I read through all of it to identify
> what I did not understand before starting. I have a few clarifications
> for you and a few questions.
>
> Yes, that's the way my console looks. I usually don't have problems
> with UTF-8 output (i.e. not garbled), so I figured it was the result of
> lsdrv... Nasty to read, but it's all there.
Thanks for the report. I'll have to set up a VM with your distro and
play a bit.
> I am running Ubuntu 12.04.1 server on this relatively new raid5 setup.
> I think I understood most of the reshaping instructions, but I need to
> clarify your point about the timeout. I tried to execute it in a simple
> bash script and got stopped at the beginning by the lack of a timeout file.
>
> After looking for it, I realised that sd[abcdef] were all symlinks
> pointing to another part of the system, where there is no driver
> directory. After browsing, I figured out that /sys/block/sda/device did
> lead to a timeout file. Can you please confirm it is the one we want?
Yes, sorry. Typo on my part.
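With that correction, the earlier boot-up snippet becomes:

    for x in /sys/block/sd[abcdef]/device/timeout ; do
        echo 180 > $x
    done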
> One last point I need to clarify: is the reshaping (although long)
> data-destructive? (A backup will be done in all cases.)
The reshape may need a small backup file for critical section(s). If it
does, it will refuse to proceed until you specify one with
"--backup-file=..." in the "--grow" operation. If your system crashes
during the reshape, be sure to specify the same backup file when
re-assembling.
But yes, redundancy is maintained throughout the reshape.
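For illustration only (the backup file path is a placeholder), the grow and
a subsequent re-assembly would look something like:

    mdadm --grow /dev/md2 --level=raid6 --raid-devices=6 \
          --backup-file=/root/md2-reshape.backup
    # if the reshape is interrupted, re-assemble with the same file:
    mdadm --assemble /dev/md2 --backup-file=/root/md2-reshape.backup /dev/sd[abcdef]4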
> Thanks,
You're welcome.
Phil
Thread overview: 16+ messages
2013-02-01 17:17 Raid 5 to 6 migration Dominique
2013-02-01 17:50 ` Phil Turmel
2013-02-01 18:16 ` Roman Mamedov
2013-02-01 18:19 ` Phil Turmel
2013-02-01 18:49 ` Chris Murphy
2013-02-01 22:04 ` Phil Turmel
2013-02-01 22:38 ` Chris Murphy
[not found] ` <COL114-W137DD8871889508BBF087BC91C0@phx.gbl>
2013-02-01 18:22 ` Phil Turmel
2013-02-03 10:00 ` Roy Sigurd Karlsbakk
2013-02-03 14:30 ` Phil Turmel
2013-02-03 22:01 ` Robin Hill
2013-02-04 12:51 ` Dominique
2013-02-04 18:39 ` Phil Turmel
2013-02-05 0:58 ` Brad Campbell
2013-02-05 10:36 ` Dominique
2013-02-05 13:04 ` Phil Turmel