* MD on raw device, use unallocated space anyway?
From: Mathias Burén @ 2014-01-10 17:36 UTC (permalink / raw)
To: Linux-RAID
Hi all,
I've a device that's part of an MD array:
fdisk
Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
$ sudo mdadm -E /dev/sdg
/dev/sdg:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 0ad2603e:e43283ee:02180773:98e716ef
Name : ion:md0 (local to host ion)
Creation Time : Tue Feb 5 17:33:27 2013
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 7813531648 (7451.56 GiB 8001.06 GB)
Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 39c0b717:a9ca1dd7:bcba618f:caed0879
Update Time : Fri Jan 10 17:28:31 2014
Checksum : 53ea0170 - correct
Events : 2668
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAAAAA ('A' == active, '.' == missing)
As you can see, Used Dev Size is lower than Avail Dev Size. Can I
somehow use the space left unallocated by MD for storage? As the
devices in the array are used fully (no partitions) I guess not, but
perhaps there is a way.
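For reference, the unused space on this member can be computed directly from the mdadm -E figures above (a quick sketch; both sizes are reported in 512-byte sectors):

```shell
# Unused space on this member = Avail Dev Size - Used Dev Size,
# both reported by mdadm -E in 512-byte sectors.
avail=5860271024   # Avail Dev Size (sectors)
used=3906765824    # Used Dev Size (sectors)
unused_bytes=$(( (avail - used) * 512 ))
echo "$unused_bytes bytes unused"   # about 1 TB
```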
Regards,
Mathias
* Re: MD on raw device, use unallocated space anyway?
From: Wilson Jonathan @ 2014-01-10 18:12 UTC (permalink / raw)
To: Mathias Burén; +Cc: Linux-RAID
On Fri, 2014-01-10 at 17:36 +0000, Mathias Burén wrote:
> Hi all,
>
> I've a device that's part of an MD array:
>
> fdisk
> Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes
> 255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> $ sudo mdadm -E /dev/sdg
> /dev/sdg:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 0ad2603e:e43283ee:02180773:98e716ef
> Name : ion:md0 (local to host ion)
> Creation Time : Tue Feb 5 17:33:27 2013
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
> Array Size : 7813531648 (7451.56 GiB 8001.06 GB)
> Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
> Data Offset : 262144 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 39c0b717:a9ca1dd7:bcba618f:caed0879
>
> Update Time : Fri Jan 10 17:28:31 2014
> Checksum : 53ea0170 - correct
> Events : 2668
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> Device Role : Active device 4
> Array State : AAAAAA ('A' == active, '.' == missing)
>
> As you can see, Used Dev Size is lower than Avail Dev Size. Can I
> somehow use the space left unallocated by MD for storage? As the
> devices in the array are used fully (no partitions) I guess not, but
> perhaps there is a way.
>
I'm wondering, did you perhaps add a drive after the initial creation
and forget to grow the array to use the additional space? I believe
that adding a drive after creation causes the RAID to spread the data
over all the drives via a re-shape, which then needs a "grow" to
extend to the end of the drives.
Actually, thinking about it, 8TB would equate to 4 data drives of 2TB
each plus 2 for redundancy = 6 drives total, as noted in your post...
is one of the drives a 2TB by mistake, which would limit the 3TB
drives to their first 2TB? If they are all 3TB, then I believe a grow
to max size (you'll need to double-check the man page) should use the
unused space, which would increase your array size to 12TB (4*3TB + 2
redundancy).
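For anyone following along, the grow step would look something like the following. This is a dry-run sketch that only prints the commands; /dev/md0 is an assumed device name, and resize2fs assumes an ext2/3/4 filesystem on top of the array, so check the man pages against your own setup first:

```shell
# Dry-run: collect and print the commands instead of executing them.
# --size=max tells md to use all available space on each member
# device (see "GROW MODE" in man mdadm).
cmds="mdadm --grow /dev/md0 --size=max
mdadm --wait /dev/md0
resize2fs /dev/md0"
printf '%s\n' "$cmds"
```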
> Regards,
> Mathias
Jon.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: MD on raw device, use unallocated space anyway?
From: Mathias Burén @ 2014-01-10 19:52 UTC (permalink / raw)
To: Wilson Jonathan; +Cc: Linux-RAID
On 10 January 2014 18:12, Wilson Jonathan <piercing_male@hotmail.com> wrote:
> On Fri, 2014-01-10 at 17:36 +0000, Mathias Burén wrote:
>> [...]
>
> I'm wondering, did you perhaps add a drive after the initial creation
> and forget to grow the array to use the additional space? I believe
> that adding a drive after creation causes the RAID to spread the data
> over all the drives via a re-shape, which then needs a "grow" to
> extend to the end of the drives.
>
> Actually, thinking about it, 8TB would equate to 4 data drives of 2TB
> each plus 2 for redundancy = 6 drives total, as noted in your post...
> is one of the drives a 2TB by mistake, which would limit the 3TB
> drives to their first 2TB? If they are all 3TB, then I believe a grow
> to max size (you'll need to double-check the man page) should use the
> unused space, which would increase your array size to 12TB (4*3TB + 2
> redundancy).
>
>> Regards,
>> Mathias
>
> Jon.
>
>
Doh,
Of course, I forgot to add: the array is using 6x 2TB drives and 1x
3TB drive, for a total of 7 drives in a RAID6. It's the single 3TB
drive that I'm wondering about: whether I can use the space on it
that MD doesn't use.
Mathias
* Re: MD on raw device, use unallocated space anyway?
From: Wilson Jonathan @ 2014-01-10 20:10 UTC (permalink / raw)
To: Mathias Burén; +Cc: Linux-RAID
On Fri, 2014-01-10 at 19:52 +0000, Mathias Burén wrote:
> On 10 January 2014 18:12, Wilson Jonathan <piercing_male@hotmail.com> wrote:
> > [...]
>
> Doh,
>
> Of course, I forgot to add: the array is using 6x 2TB drives and 1x
> 3TB drive, for a total of 7 drives in a RAID6. It's the single 3TB
> drive that I'm wondering about: whether I can use the space on it
> that MD doesn't use.
>
> Mathias
>
In that case the answer would be no. However, if you fail/remove the
3TB drive, partition it as 2TB and 1TB, and then add the 2TB partition
back into the array, you can use the 1TB on its own... as long as the
"2TB" partition is at least the size of the 2TB drives (or slightly
bigger), it should all work nicely.
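A dry-run sketch of that procedure, printing the commands rather than running them. /dev/md0, /dev/sdg and the partition boundaries are all assumptions to adjust for your own setup, and the array must be able to tolerate losing a member before you start:

```shell
# Dry-run sketch: fail/remove the 3TB member, split it into a ~2TB
# and a ~1TB partition, and add the 2TB partition back. The first
# partition must be at least as large as the existing 2TB members.
cmds="mdadm /dev/md0 --fail /dev/sdg --remove /dev/sdg
parted -s /dev/sdg mklabel gpt
parted -s /dev/sdg mkpart raid 1MiB 2001GB
parted -s /dev/sdg mkpart spare 2001GB 100%
mdadm /dev/md0 --add /dev/sdg1"
printf '%s\n' "$cmds"
```

Adding /dev/sdg1 back triggers a full re-sync onto the partition, so the array is running degraded until that completes.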
* Re: MD on raw device, use unallocated space anyway?
From: Mathias Burén @ 2014-01-10 20:38 UTC (permalink / raw)
To: Wilson Jonathan; +Cc: Linux-RAID
On 10 January 2014 20:10, Wilson Jonathan <piercing_male@hotmail.com> wrote:
>
> In that case the answer would be no. However, if you fail/remove the
> 3TB drive, partition it as 2TB and 1TB, and then add the 2TB partition
> back into the array, you can use the 1TB on its own... as long as the
> "2TB" partition is at least the size of the 2TB drives (or slightly
> bigger), it should all work nicely.
>
>
Thanks, that confirms my suspicions. I wanted to avoid this (as a
rebuild would be required) but it's the only way, I suppose. Unless I
upgrade to 3TB all around and gain some storage.
Cheers,
Mathias
* Re: MD on raw device, use unallocated space anyway?
From: Wilson Jonathan @ 2014-01-10 21:59 UTC (permalink / raw)
To: Mathias Burén; +Cc: Linux-RAID
On Fri, 2014-01-10 at 20:38 +0000, Mathias Burén wrote:
> On 10 January 2014 20:10, Wilson Jonathan <piercing_male@hotmail.com> wrote:
> >
> > [...]
>
> Thanks, that confirms my suspicions. I wanted to avoid this (as a
> rebuild would be required) but it's the only way, I suppose. Unless I
> upgrade to 3TB all around and gain some storage.
>
The only re-build would be onto the now-partitioned disk (a re-sync),
so any existing data would be unchanged.
If you replaced the other drives, then you would either need to
fail/remove a drive, add in the new 3TB drive and allow it to re-sync,
then repeat one by one until all are replaced and then perform a
grow... or build a new array from scratch.
Having done a similar one-by-one replacement over a number of drives
recently, and not without incident, I personally would go for a
from-scratch new build if I did it again, saving both time and
potential mistakes.
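The one-by-one route would be something like the following dry-run sketch (it only prints the commands). The member names sdb/sdc/sdd and the sdX placeholder for each new drive are hypothetical, and each re-sync must finish before the next drive is pulled:

```shell
# Dry-run sketch of replacing members one at a time, then growing.
# sdb/sdc/sdd and sdX are hypothetical device names.
cmds=""
for old in sdb sdc sdd; do
  cmds="$cmds
mdadm /dev/md0 --fail /dev/$old --remove /dev/$old
mdadm /dev/md0 --add /dev/sdX
mdadm --wait /dev/md0"
done
cmds="$cmds
mdadm --grow /dev/md0 --size=max"
printf '%s\n' "$cmds"
```

The final --grow only helps once every member is the larger size; until then md keeps using the smallest member's capacity.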
> Cheers,
> Mathias
>
* Re: MD on raw device, use unallocated space anyway?
From: Phil Turmel @ 2014-01-10 23:19 UTC (permalink / raw)
To: Mathias Burén, Wilson Jonathan; +Cc: Linux-RAID
On 01/10/2014 03:38 PM, Mathias Burén wrote:
> On 10 January 2014 20:10, Wilson Jonathan <piercing_male@hotmail.com> wrote:
>>
>> In that case the answer would be no. However if fail/remove the 3TB
>> drive, then partitioned the drive as 2TB and 1TB and then added the 2TB
>> partition back into the array then you can use the 1TB on its own...
>> just as long the "2TB" partition is the same size as the 2TB drives (or
>> slightly bigger) then it should all work nicely.
>
> Thanks, that confirms my suspicions. I wanted to avoid this (as a
> rebuild would be required) but it's the only way, I suppose.
Actually, you could move the data currently starting at sector zero
to sector 2048 (1 MiB) or some other start point, then partition the
drive to point at it. If your array has a bitmap, you could then
quickly --re-add the partition to the array.
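Sketching just the tail end of that (dry-run, printing the commands), under the assumption that the member's contents have already been relocated and a hypothetical partition /dev/sdg1 now begins at the new start point:

```shell
# Dry-run sketch of the re-add Phil describes. Assumes the data was
# already shifted so that /dev/sdg1 (hypothetical) contains the md
# superblock. With a write-intent bitmap, --re-add only re-syncs
# blocks written while the member was out of the array.
cmds="mdadm --examine /dev/sdg1
mdadm /dev/md0 --re-add /dev/sdg1"
printf '%s\n' "$cmds"
```

The --examine check matters: if the superblock isn't visible at the partition's start, the shift went wrong and --re-add would fail or do a full rebuild.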
> Unless I
> upgrade to 3TB all around and gain some storage.
Can't beat that. :-)
Phil
^ permalink raw reply [flat|nested] 7+ messages in thread