linux-raid.vger.kernel.org archive mirror
* RAID6: Reducing to Grow
@ 2017-10-17  5:09 Liwei
  2017-10-17 14:39 ` Phil Turmel
  0 siblings, 1 reply; 6+ messages in thread
From: Liwei @ 2017-10-17  5:09 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

Hi list,
    tldr: Have a mixed-sized (2TB and 6TB) RAID6 array, realised there
is now more potential space (using only 6TB drives) than running the
array based on the 2TB "smallest-drive" size. Is there a way to
convert to using only the 6TB drives, without a rebuild?


   Got an odd situation here. We have a 10-drive array that started
out with 2TB drives. A few years back we started planning for an
upgrade but didn't find justification for acquiring 10 new drives. So
we decided to "gradually migrate" by expanding/replacing storage with
a new 6TB drive twice a year and replacing failed 2TB drives with 6TB
ones, the hope being that we would have replaced all 10x 2TB drives by
the time storage growth required it.

    Today we have a 13-drive array with 7x 2TB drives and 6x 6TB
drives. It just occurred to me that we actually have more potential
storage than what the array currently provides. So my question is, is
there a way to reduce the number of drives to just the 6x 6TB drives
and fully utilise all the storage, all in one step, without requiring
a backup and restore? We're using a cloud backup service, so a restore
would take months (or we could pay a few thousand for them to mail us
the data on physical drives, which would be more expensive than just
outright replacing the remaining 7 drives with 6TB ones).

Thanks!
Liwei

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID6: Reducing to Grow
  2017-10-17  5:09 RAID6: Reducing to Grow Liwei
@ 2017-10-17 14:39 ` Phil Turmel
  2017-10-18  5:20   ` Liwei
  0 siblings, 1 reply; 6+ messages in thread
From: Phil Turmel @ 2017-10-17 14:39 UTC (permalink / raw)
  To: Liwei, linux-raid@vger.kernel.org

Hi Liwei,

On 10/17/2017 01:09 AM, Liwei wrote:
> Hi list, tldr: Have a mixed-sized (2TB and 6TB) RAID6 array, realised
> there is now more potential space (using only 6TB drives) than
> running the array based on the 2TB "smallest-drive" size. Is there a
> way to convert to using only the 6TB drives, without a rebuild?

It depends. /-:

Certainly not in one step.  If you are running LVM on top of your array,
and have the space split up into various LVs, none extremely large,
there are some possibilities.

You might need one more 6TB drive to help with the juggling.

Please use lsdrv[1] to document your running setup so we can see what
might be possible.  Paste the output into your reply in *text* mode, with
line wrapping turned off.  (It would help if you composed your mails in
Thunderbird or some client that handles text mode properly -- TBird
supports gmail accounts.)
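
(For reference: lsdrv is a standalone script, so something along the
lines of "git clone https://github.com/pturmel/lsdrv" followed by
"sudo ./lsdrv > lsdrv.out" should do it -- see the repository README
for the exact invocation.)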

Phil

[1] https://github.com/pturmel/lsdrv

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID6: Reducing to Grow
  2017-10-17 14:39 ` Phil Turmel
@ 2017-10-18  5:20   ` Liwei
  2017-10-18 18:02     ` Phil Turmel
  0 siblings, 1 reply; 6+ messages in thread
From: Liwei @ 2017-10-18  5:20 UTC (permalink / raw)
  To: Phil Turmel, linux-raid@vger.kernel.org

[-- Attachment #1: Type: text/plain, Size: 8736 bytes --]

Hi Phil,

On 17/10/2017 10:39 PM, Phil Turmel wrote:
> Hi Liwei,
> 
> On 10/17/2017 01:09 AM, Liwei wrote:
>> Hi list, tldr: Have a mixed-sized (2TB and 6TB) RAID6 array, realised
>> there is now more potential space (using only 6TB drives) than
>> running the array based on the 2TB "smallest-drive" size. Is there a
>> way to convert to using only the 6TB drives, without a rebuild?
> 
> It depends. /-:
> 
> Certainly not in one step.  If you are running LVM on top of your array,
> and have the space split up into various LVs, none extremely large,
> there are some possibilities.
> 
> You might need one more 6TB drive to help with the juggling.
> 
> Please use lsdrv[1] to document your running setup so we can see what
> might be possible.  Paste the output into your reply in *text* mode, with
> line wrapping turned off.  (It would help if you composed your mails in
> Thunderbird or some client that handles text mode properly -- TBird
> supports gmail accounts.)
> 
> Phil
> 
> [1] https://github.com/pturmel/lsdrv

Whoops, we do run LVM on top, but currently it is one gigantic LV. (We
converted to btrfs and started using subvolumes instead.) My guess is
you're thinking of creating another array with the unused space, and
then migrating LVs over?

The lsdrv output follows, with a lot of irrelevant/empty nodes removed.
Also, I realised I misread the storage size of two of the 2TB drives, so
we're actually at 9x 2TB and 4x 6TB.

_However_, I do have one 6TB "hot spare" plugged in, and 3 proper 6TB
SAS drives arriving next week, so there may still be a way to do this,
perhaps?

Output of lsdrv is also attached, in case the formatting still gets
truncated because I didn't set things up properly.

PCI [mpt3sas] 02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS3224 PCI-Express Fusion-MPT SAS-3 (rev 01)
├scsi 0:0:0:0 ATA      WDC WD2001FASS-0 {WD-WMAY01416344}
│└sda 1.82t [8:0] Partitioned (gpt)
│ └sda1 1.82t [8:1] MD raid6 (7/13) (w/ sdo1,sdm1,sdl1,sdk1,sdj1,sdi1,sdh1,sdg1,sdf1,sde1,sdc1,sdb1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│   │                    PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
│   └VG Carroll 20.01t 0 free {9fcFdE-g501-tLfQ-Lg5k-zCYS-fhw3-vEJI1Y}
│    └dm-12 20.01t [253:12] LV Wonderland btrfs 'Wonderland' {f710ec85-b277-45a4-a526-e970f74e2781}
│     └Mounted as /dev/mapper/Carroll-Wonderland @ /mnt/Wonderland
├scsi 0:0:1:0 ATA      WDC WD20EARS-00J {WD-WCAWZ1869671}
│└sdb 1.82t [8:16] Partitioned (gpt)
│ └sdb1 1.82t [8:17] MD raid6 (2/13) (w/ sdo1,sdm1,sdl1,sdk1,sdj1,sdi1,sdh1,sdg1,sdf1,sde1,sdc1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:2:0 ATA      WDC WD20EFRX-68E {WD-WCC4M6VTHJRR}
│└sdc 1.82t [8:32] Partitioned (gpt)
│ └sdc1 1.82t [8:33] MD raid6 (5/13) (w/ sdo1,sdm1,sdl1,sdk1,sdj1,sdi1,sdh1,sdg1,sdf1,sde1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:3:0 ATA      HGST HDN726060AL {NCGGBM7S}
│└sdd 5.46t [8:48] Empty/Unknown
├scsi 0:0:4:0 ATA      WDC WD20EFRX-68E {WD-WCC4M6YJN0SD}
│└sde 1.82t [8:64] Partitioned (gpt)
│ └sde1 1.82t [8:65] MD raid6 (0/13) (w/ sdo1,sdm1,sdl1,sdk1,sdj1,sdi1,sdh1,sdg1,sdf1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:5:0 ATA      WDC WD20EFRX-68E {WD-WCC4M6YJNEP7}
│└sdf 1.82t [8:80] Partitioned (gpt)
│ └sdf1 1.82t [8:81] MD raid6 (3/13) (w/ sdo1,sdm1,sdl1,sdk1,sdj1,sdi1,sdh1,sdg1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:6:0 ATA      WDC WD6002FFWX-6 {K1GLBHMD}
│└sdg 5.46t [8:96] Partitioned (gpt)
│ └sdg1 5.46t [8:97] MD raid6 (12/13) (w/ sdo1,sdm1,sdl1,sdk1,sdj1,sdi1,sdh1,sdf1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:7:0 ATA      WDC WD20EZRX-00D {WD-WMC4N2764539}
│└sdh 1.82t [8:112] Partitioned (gpt)
│ └sdh1 1.82t [8:113] MD raid6 (6/13) (w/ sdo1,sdm1,sdl1,sdk1,sdj1,sdi1,sdg1,sdf1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:8:0 ATA      WDC WD20EZRX-00D {WD-WCC300583284}
│└sdi 1.82t [8:128] Partitioned (gpt)
│ └sdi1 1.82t [8:129] MD raid6 (8/13) (w/ sdo1,sdm1,sdl1,sdk1,sdj1,sdh1,sdg1,sdf1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:9:0 ATA      WDC WD20EARS-00M {WD-WCAZA5414083}
│└sdj 1.82t [8:144] Partitioned (gpt)
│ └sdj1 1.82t [8:145] MD raid6 (9/13) (w/ sdo1,sdm1,sdl1,sdk1,sdi1,sdh1,sdg1,sdf1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:10:0 ATA      HGST HDN726060AL {NCGSUEPS}
│└sdk 5.46t [8:160] Partitioned (gpt)
│ └sdk1 5.46t [8:161] MD raid6 (11/13) (w/ sdo1,sdm1,sdl1,sdj1,sdi1,sdh1,sdg1,sdf1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:11:0 ATA      WDC WD20EARS-00J {WD-WCAYY0201907}
│└sdl 1.82t [8:176] Partitioned (gpt)
│ └sdl1 1.82t [8:177] MD raid6 (1/13) (w/ sdo1,sdm1,sdk1,sdj1,sdi1,sdh1,sdg1,sdf1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:0:12:0 ATA      HGST HDN726060AL {NCGD6PVS}
│└sdm 5.46t [8:192] Partitioned (gpt)
│ └sdm1 5.46t [8:193] MD raid6 (4/13) (w/ sdo1,sdl1,sdk1,sdj1,sdi1,sdh1,sdg1,sdf1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
├scsi 0:x:x:x [Empty]
├scsi 0:0:14:0 ATA      HGST HDN726060AL {NAHUM8DY}
│└sdo 5.46t [8:224] Partitioned (gpt)
│ └sdo1 5.46t [8:225] MD raid6 (10/13) (w/ sdm1,sdl1,sdk1,sdj1,sdi1,sdh1,sdg1,sdf1,sde1,sdc1,sdb1,sda1) in_sync 'NLEStore:Alice' {394b9087-268a-a957-f7f1-d4c4ccefb191}
│  └md126 20.01t [9:126] MD v1.2 raid6 (13) clean, 512k Chunk {394b9087:268aa957:f7f1d4c4:ccefb191}
│                        PV LVM2_member 20.01t used, 0 free {8PhZ2b-wiZ1-ok9L-ClLw-OQUy-GrfE-eTo0Hw}
└scsi 0:x:x:x [Empty]


Liwei

(Also, sorry about the html crud in the previous email, turns out my antivirus just gained the ability to inject stuff into emails composed from Gmail within a browser. Very sketchy behaviour.)
(Resend because I did mess up the plain-text setting, sorry that you're receiving this twice!)

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID6: Reducing to Grow
  2017-10-18  5:20   ` Liwei
@ 2017-10-18 18:02     ` Phil Turmel
  2017-10-20  5:04       ` Liwei
  0 siblings, 1 reply; 6+ messages in thread
From: Phil Turmel @ 2017-10-18 18:02 UTC (permalink / raw)
  To: Liwei, linux-raid@vger.kernel.org

On 10/18/2017 01:20 AM, Liwei wrote:
> Hi Phil,


> Whoops, we do run LVM on top, but currently it is one gigantic LV. (We
> converted to btrfs and started using subvolumes instead) My guess is
> you're thinking of creating another array with the unused space, and
> then migrating LVs over?

Yes.  With temporary ~4T partitions.

> The lsdrv output follows, with a lot of irrelevant/empty nodes removed.
> Also, I realised I misread the storage size of two of the 2TB drives, so
> we're actually at 9x 2TB and 4x 6TB.

Very helpful.

> _However_, I do have one 6TB "hot spare" plugged in, and 3 proper 6TB
> SAS drives arriving next week, so there may still be a way to do this,
> perhaps?

Makes it easy, actually.

So, here's what I recommend:

1) Set long timeouts on your desktop drives, the WD*EARS and WD*FASS:

for x in a b j l ; do echo 180 > /sys/block/sd$x/device/timeout ; done

2) Scrub your array (and wait for it to finish):

echo check >/sys/block/md126/md/sync_action

3) After installing the new drives, fail one of your 6T drives out of
the array, then use it, the hot spare, the new drives, and one 'missing'
to set up a new degraded 6-drive raid6.  Consider using a smaller chunk
size unless you are storing very large media files.  (I use 16k or 32k
for my parity arrays.)
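
For illustration (device names are examples -- substitute your own;
assume the new array will be /dev/md1, sdo is the 6T drive being
failed out, sdd is the hot spare, and the three new SAS drives arrive
as sdp/sdq/sdr, each given a single gpt partition):

mdadm /dev/md126 --fail /dev/sdo1 --remove /dev/sdo1
mdadm --create /dev/md1 --level=6 --raid-devices=6 --chunk=512 \
      /dev/sdo1 /dev/sdd1 /dev/sdp1 /dev/sdq1 /dev/sdr1 missing

Use --chunk=16 or --chunk=32 instead if you go with the smaller size.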

4) Add the new array as a physical volume to your existing volume group.
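
For example, assuming the new array is /dev/md1 (the VG name comes
from your lsdrv output):

pvcreate /dev/md1
vgextend Carroll /dev/md1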

5) Use lvconvert to change LV Wonderland to a mirror.  You could just use
pvmove if you don't mind the reduced redundancy.  Let it get fully
mirrored or moved.  Skip to step #8 if you used pvmove.
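
Roughly, with the same assumed /dev/md1 (exact syntax may vary a bit
with your lvm2 version):

lvconvert -m1 Carroll/Wonderland /dev/md1    # mirror route
pvmove /dev/md126 /dev/md1                   # ...or the pvmove route

Either way, 'lvs -a -o+copy_percent' shows the progress.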

6) Fail another 6T drive from the existing array and add it to the new
array.  Let it rebuild.
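
For example, if sdm is the next 6T drive you pull:

mdadm /dev/md126 --fail /dev/sdm1 --remove /dev/sdm1
mdadm /dev/md1 --add /dev/sdm1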

7) Use lvconvert to change LV Wonderland back into an unmirrored LV on
the new array.  Be careful to specify the new array!
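
One possible form -- verify against lvconvert(8) first, since the PV
you name with -m0 is the one whose image gets dropped, which here
should be the old array so the surviving copy sits on /dev/md1:

lvconvert -m0 Carroll/Wonderland /dev/md126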

8) Remove the old array from your volume group with vgreduce.  Shut down
the old array and use mdadm --zero-superblock on its prior members.
Get rid of the desktop drives.
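
For example (sdX1 standing in for each former member partition):

vgreduce Carroll /dev/md126
pvremove /dev/md126
mdadm --stop /dev/md126
mdadm --zero-superblock /dev/sdX1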

9) Add all the newly available 6T drives to the new array.  Let it
rebuild if necessary.
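
For example, repeated for each freed 6T partition (placeholder names):

mdadm /dev/md1 --add /dev/sdg1
mdadm /dev/md1 --add /dev/sdk1

The first one fills the 'missing' slot and triggers the rebuild; the
rest sit as spares until the next step.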

10) Grow the new array to occupy all of the space on all of the devices,
or maybe leave one hot spare.  (I wouldn't, but it depends on your
application and your supply chain.)
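
For example, to take the assumed /dev/md1 from 6 to 8 active devices
and then let LVM see the new space (the device count is just an
example -- pick it based on how many drives you keep as spares):

mdadm --grow /dev/md1 --raid-devices=8 --backup-file=/root/md1.backup
pvresize /dev/md1

(Growing LV Wonderland itself would then be an lvextend plus a btrfs
resize, if you want the filesystem to use the space right away.)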

11) Enjoy all that space!

Note that all of the above can be executed while using your mounted LV.

Phil

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID6: Reducing to Grow
  2017-10-18 18:02     ` Phil Turmel
@ 2017-10-20  5:04       ` Liwei
  2017-10-20 23:26         ` Phil Turmel
  0 siblings, 1 reply; 6+ messages in thread
From: Liwei @ 2017-10-20  5:04 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid@vger.kernel.org

Hi Phil,

On 19 October 2017 at 02:02, Phil Turmel <philip@turmel.org> wrote:
> On 10/18/2017 01:20 AM, Liwei wrote:
>> Hi Phil,
>
>
>> Whoops, we do run LVM on top, but currently it is one gigantic LV. (We
>> converted to btrfs and started using subvolumes instead) My guess is
>> you're thinking of creating another array with the unused space, and
>> then migrating LVs over?
>
> Yes.  With temporary ~4T partitions.
>
>> The lsdrv output follows, with a lot of irrelevant/empty nodes removed.
>> Also, I realised I misread the storage size of two of the 2TB drives, so
>> we're actually at 9x 2TB and 4x 6TB.
>
> Very helpful.
>
>> _However_, I do have one 6TB "hot spare" plugged in, and 3 proper 6TB
>> SAS drives arriving next week, so there may still be a way to do this,
>> perhaps?
>
> Makes it easy, actually.
>
> So, here's what I recommend:
>
> 1) Set long timeouts on your desktop drives, the WD*EARS and WD*FASS:
>
> for x in a b j l ; do echo 180 > /sys/block/sd$x/device/timeout ; done
>
> 2) Scrub your array (and wait for it to finish):
>
> echo check >/sys/block/md126/md/sync_action
>
> 3) After installing the new drives, fail one of your 6T drives out of
> the array, then use it, the hot spare, the new drives, and one 'missing'
> to set up a new degraded 6-drive raid6.  Consider using a smaller chunk
> size unless you are storing very large media files.  (I use 16k or 32k
> for my parity arrays.)
>
> 4) Add the new array as a physical volume to your existing volume group.
>
> 5) Use lvconvert to change LV Wonderland to a mirror.  You could just use
> pvmove if you don't mind the reduced redundancy.  Let it get fully
> mirrored or moved.  Skip to step #8 if you used pvmove.
>
> 6) Fail another 6T drive from the existing array and add it to the new
> array.  Let it rebuild.
>
> 7) Use lvconvert to change LV Wonderland back into an unmirrored LV on
> the new array.  Be careful to specify the new array!
>
> 8) Remove the old array from your volume group with vgreduce.  Shut down
> the old array and use mdadm --zero-superblock on its prior members.
> Get rid of the desktop drives.
>
> 9) Add all the newly available 6T drives to the new array.  Let it
> rebuild if necessary.
>
> 10) Grow the new array to occupy all of the space on all of the devices,
> or maybe leave one hot spare.  (I wouldn't, but it depends on your
> application and your supply chain.)
>
> 11) Enjoy all that space!
>
> Note that all of the above can be executed while using your mounted LV.
>
> Phil

Great! I'm done with the scrub, now waiting for the new drives to arrive.

I anticipate there'll be quite a bit of downtime involved, though. Our
chassis only has 16 cages, so we'll probably have to temporarily move
one or two of the 2TB drives onto the internal SATA connectors and hang
them around. Power might be an issue too, but we'll figure it out,
hopefully without frying any drives. ;)

My guess is I should probably scrub again after relocating the drives,
just to be sure they work under load.

Regarding the chunk size, my predecessor initially chose 512k based on
some reading about performance boosts with raw video files and 4k/AF
drives; I'm not sure whether it holds water. Something along the lines
of: we're dealing with multi-gigabyte files anyway, so what's the
wastage of half a meg here and there? Supposedly R/W performance
increases with chunk size (provided the chunks are properly aligned),
but with diminishing returns.

We're using this NAS to ingest the raw video files from our HD and 4K
cameras so they can be accessed remotely for editing. Does a 512k chunk
size make sense, or should I go down to 16/32k?

Thank you for the assistance so far!

Liwei

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: RAID6: Reducing to Grow
  2017-10-20  5:04       ` Liwei
@ 2017-10-20 23:26         ` Phil Turmel
  0 siblings, 0 replies; 6+ messages in thread
From: Phil Turmel @ 2017-10-20 23:26 UTC (permalink / raw)
  To: Liwei; +Cc: linux-raid@vger.kernel.org

On 10/20/2017 01:04 AM, Liwei wrote:

>> Note that all of the above can be executed while using your mounted
>> LV.
>> 
>> Phil
> 
> Great! I'm done with the scrub, now waiting for the new drives to
> arrive.
> 
> I anticipate there'll be quite a bit of downtime involved though.
> Our chassis only has 16 cages, so we probably have to temporarily
> move one or two of the 2TB drives onto the internal SATA connectors
> and hang them around. Power might be an issue too, but we'll figure
> it out, hopefully without frying any drives. ;)
> 
> My guess is I should probably scrub again after relocating the
> drives, just to be sure they work under load.

Good insurance, yes.

> Regarding the chunk size, my predecessor initially chose 512k based
> on some reading about performance boost with raw video files and
> 4k/AF drives, not sure whether it holds water. Something along the
> lines of: we're dealing with multi gigabyte files anyway, what's
> wastage of half a meg here and there? Supposedly RW performance
> increases with chunk size - provided they're properly aligned, but it
> is a diminishing return.
> 
> We're using this NAS to ingest the raw video files from our HD and
> 4K cameras so they can be accessed remotely for editing, does 512k
> chunk size make sense or should I go down to 16/32k?

No, your use calls for the large chunk.  That large chunk could hurt
more random tasks that involve files that aren't much larger than a
stripe.  Your stripes will be 2MB to start, then a bit bigger as you
grow the space.  Gig+ files are much bigger, so no worries.  If your
application was mostly ~10MB files or smaller, or had random access
patterns within the files, I'd reconsider.
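
(As a quick sanity check on those numbers: raid6 keeps n-2 data chunks
per stripe, so the 6-drive array at a 512k chunk gives 4 x 512k = 2MB,
and growing to 8 drives gives 6 x 512k = 3MB.)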

> Thank you for the assistance so far!

No problem.  I like the easy ones... (-:

Phil

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2017-10-20 23:26 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-17  5:09 RAID6: Reducing to Grow Liwei
2017-10-17 14:39 ` Phil Turmel
2017-10-18  5:20   ` Liwei
2017-10-18 18:02     ` Phil Turmel
2017-10-20  5:04       ` Liwei
2017-10-20 23:26         ` Phil Turmel

This is a public inbox; see mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for NNTP
newsgroup(s).