linux-raid.vger.kernel.org archive mirror
* Linear device of two arrays
@ 2017-07-05 15:34 Veljko
  2017-07-05 16:42 ` Roman Mamedov
  0 siblings, 1 reply; 28+ messages in thread
From: Veljko @ 2017-07-05 15:34 UTC (permalink / raw)
  To: linux-raid

Hello,

I have a RAID10 device which I formatted using the mkfs.xfs
defaults (Stan helped me with this a few years back). I have reached 88%
capacity and it is time to expand it. I bought 4 more drives to create
another RAID10 array. I would like to create a linear device out of
those two and grow XFS across the 2nd device. How can this be done
without losing the existing device's data? I would also like to add a
spare HDD. Do I have to have a separate spare HDD for each array, or
can one be used by both of them?

Regards,
Veljko

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-05 15:34 Linear device of two arrays Veljko
@ 2017-07-05 16:42 ` Roman Mamedov
  2017-07-05 18:07   ` Wols Lists
  0 siblings, 1 reply; 28+ messages in thread
From: Roman Mamedov @ 2017-07-05 16:42 UTC (permalink / raw)
  To: Veljko; +Cc: linux-raid

On Wed, 5 Jul 2017 17:34:09 +0200
Veljko <veljko3@gmail.com> wrote:

> Hello,
> 
> I have a RAID10 device which I have formated using the mkfs.xfs
> defaults (Stan helped me with this few years back). I reached 88%
> capacity and it is time to expand it. I bought 4 more drives to create
> another RIAD10 array. I would like to create linear device out of
> those two and grow XFS across the 2nd device. How can this be done
> without loosing the existing device's data? I would also like to add a
> spare HDD. Do I have to have a separate spare HDD for each array or
> one can be used by both of them?

Why make another RAID10? With modern versions of mdadm and the kernel you
should be able to simply reshape the current RAID10 to increase the number
of devices used from 4 to 8.


-- 
With respect,
Roman

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-05 16:42 ` Roman Mamedov
@ 2017-07-05 18:07   ` Wols Lists
  2017-07-07 11:07     ` Nix
  2017-07-07 20:26     ` Veljko
  0 siblings, 2 replies; 28+ messages in thread
From: Wols Lists @ 2017-07-05 18:07 UTC (permalink / raw)
  To: Roman Mamedov, Veljko; +Cc: linux-raid

On 05/07/17 17:42, Roman Mamedov wrote:
> On Wed, 5 Jul 2017 17:34:09 +0200
> Veljko <veljko3@gmail.com> wrote:
> 
>> Hello,
>>
>> I have a RAID10 device which I have formated using the mkfs.xfs
>> defaults (Stan helped me with this few years back). I reached 88%
>> capacity and it is time to expand it. I bought 4 more drives to create
>> another RIAD10 array. I would like to create linear device out of
>> those two and grow XFS across the 2nd device. How can this be done
>> without loosing the existing device's data? I would also like to add a
>> spare HDD. Do I have to have a separate spare HDD for each array or
>> one can be used by both of them?
> 
> Why make another RAID10? With modern versions of mdadm and kernel you should
> be able to simply reshape the current RAID10 to increase the number of
> devices used from 4 to 8.
> 
> 
I was thinking of replying, but isn't that impossible for some
versions of RAID-10?

My feeling was: if you can't just add drives to the existing RAID-10,
create a new one which you can expand, migrate the fs across (btrfs
would let you do that live, I believe, so xfs probably can too), then
you can scrap the old RAID-10 and add the drives into the new one.

Cheers,
Wol

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-05 18:07   ` Wols Lists
@ 2017-07-07 11:07     ` Nix
  2017-07-07 20:26     ` Veljko
  1 sibling, 0 replies; 28+ messages in thread
From: Nix @ 2017-07-07 11:07 UTC (permalink / raw)
  To: Wols Lists; +Cc: Roman Mamedov, Veljko, linux-raid

On 5 Jul 2017, Wols Lists spake thusly:
> My feeling was, if you can't just add drives to the existing raid 10,
> create a new one which you can expand, migrate the fs across (btrfs
> would let you do that live, I believe, so xfs probably can too), then
> you can scrap the old raid-10 and add the drives into the new one.

In extremis you can always connect multiple block devices into one, or
indeed multiple *pieces* of them into one, using dmsetup with the linear
table. (However, this is not persistent, so you'd need to script redoing
it on every boot.)
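
A minimal sketch of such a dmsetup concatenation (device names and the
"bigdata" name are assumptions; each table line is "start length linear
device offset", in 512-byte sectors):

    SZ1=$(blockdev --getsz /dev/md2)    # length of the first device
    SZ2=$(blockdev --getsz /dev/md3)    # length of the second device
    { echo "0 $SZ1 linear /dev/md2 0"
      echo "$SZ1 $SZ2 linear /dev/md3 0"
    } | dmsetup create bigdata

The resulting /dev/mapper/bigdata would have to be recreated by a boot
script, as noted above.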

-- 
NULL && (void)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-05 18:07   ` Wols Lists
  2017-07-07 11:07     ` Nix
@ 2017-07-07 20:26     ` Veljko
  2017-07-07 21:20       ` Andreas Klauer
  2017-07-07 22:52       ` Stan Hoeppner
  1 sibling, 2 replies; 28+ messages in thread
From: Veljko @ 2017-07-07 20:26 UTC (permalink / raw)
  To: linux-raid

I just noticed that I replied to Wol instead of to the list.

On Wed, Jul 5, 2017 at 8:07 PM, Wols Lists <antlists@youngman.org.uk> wrote:
> On 05/07/17 17:42, Roman Mamedov wrote:
>> On Wed, 5 Jul 2017 17:34:09 +0200
>> Veljko <veljko3@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I have a RAID10 device which I have formated using the mkfs.xfs
>>> defaults (Stan helped me with this few years back). I reached 88%
>>> capacity and it is time to expand it. I bought 4 more drives to create
>>> another RIAD10 array. I would like to create linear device out of
>>> those two and grow XFS across the 2nd device. How can this be done
>>> without loosing the existing device's data? I would also like to add a
>>> spare HDD. Do I have to have a separate spare HDD for each array or
>>> one can be used by both of them?
>>
>> Why make another RAID10? With modern versions of mdadm and kernel you should
>> be able to simply reshape the current RAID10 to increase the number of
>> devices used from 4 to 8.
>>
>>
> I was thinking of replying, but isn't that not possible for some
> versions of RAID-10?
>
> My feeling was, if you can't just add drives to the existing raid 10,
> create a new one which you can expand, migrate the fs across (btrfs
> would let you do that live, I believe, so xfs probably can too), then
> you can scrap the old raid-10 and add the drives into the new one.
>
> Cheers,
> Wol


Thanks for your input, Roman and Wol.

Expanding the existing RAID is one of the options, but I was advised by
Stan Hoeppner to do it this way and I tend to believe him on this
subject. With my metadata-heavy backup workload, this will provide
better performance.

So my question is still: how can an existing array be added to a linear
device, and its filesystem expanded over the second array?

And there is also the question of the spare drive.

Regards,
Veljko

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-07 20:26     ` Veljko
@ 2017-07-07 21:20       ` Andreas Klauer
  2017-07-07 21:53         ` Roman Mamedov
  2017-07-07 22:52       ` Stan Hoeppner
  1 sibling, 1 reply; 28+ messages in thread
From: Andreas Klauer @ 2017-07-07 21:20 UTC (permalink / raw)
  To: Veljko; +Cc: linux-raid

On Fri, Jul 07, 2017 at 10:26:28PM +0200, Veljko wrote:
> So my question is still, how can an existing array be added to linear
> device, and it's file system expanded over the second array.

Put LVM on top?

You could create LVM on the new array, copy your files over, then extend.
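
A minimal sketch of that copy-then-extend route (device names, the VG/LV
names and the mount point are assumptions, not from this thread):

    pvcreate /dev/md3                      # the new RAID10 becomes the first PV
    vgcreate vg_backup /dev/md3
    lvcreate -l 100%FREE -n data vg_backup
    mkfs.xfs /dev/vg_backup/data
    mount /dev/vg_backup/data /mnt/new
    # ... copy the data over (rsync, xfsdump/xfsrestore, ...) ...
    # then reuse the old array as a second PV and extend:
    pvcreate /dev/md2
    vgextend vg_backup /dev/md2
    lvextend -l +100%FREE /dev/vg_backup/data
    xfs_growfs /mnt/new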

If you know LVM _really_ well, it's also possible to convert existing 
filesystems without moving data (other than relocating the parts 
occupied/unusable by LVM, i.e. one extent-size worth of data at 
start and end of disk).

Or find a way to use more than one filesystem.

> And there is also question of spare drive.

Spares can be shared/moved if you use spare-groups.

From the manpage:

| As  well  as  reporting  events,  mdadm may move a spare drive from one
| array to another if they are in the same spare-group or domain  and  if
| the destination array has a failed drive but no spares.
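
A minimal sketch of the matching mdadm.conf entries (UUIDs and the group
name are placeholders; mdadm must be running in --monitor mode for spares
to be moved between arrays):

    ARRAY /dev/md2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx spare-group=backup
    ARRAY /dev/md3 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy spare-group=backup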

Regards
Andreas Klauer

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-07 21:20       ` Andreas Klauer
@ 2017-07-07 21:53         ` Roman Mamedov
  2017-07-07 22:20           ` Andreas Klauer
  2017-07-07 22:33           ` Andreas Klauer
  0 siblings, 2 replies; 28+ messages in thread
From: Roman Mamedov @ 2017-07-07 21:53 UTC (permalink / raw)
  To: Andreas Klauer; +Cc: Veljko, linux-raid

On Fri, 7 Jul 2017 23:20:36 +0200
Andreas Klauer <Andreas.Klauer@metamorpher.de> wrote:

> Put LVM on top?
> 
> You could create LVM on the new array, copy your files over, then extend.
> 
> If you know LVM _really_ well, it's also possible to convert existing 
> filesystems without moving data (other than relocating the parts 
> occupied/unusable by LVM, i.e. one extent-size worth of data at 
> start and end of disk).

There is this tool which can convert filesystems/partitions to LVM in-place:

  https://github.com/g2p/blocks

however it requires a shrinkable filesystem, and XFS cannot be shrunk.

(Also, it appears unmaintained, with a number of long-standing unresolved
issues.)

-- 
With respect,
Roman

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-07 21:53         ` Roman Mamedov
@ 2017-07-07 22:20           ` Andreas Klauer
  2017-07-07 22:33           ` Andreas Klauer
  1 sibling, 0 replies; 28+ messages in thread
From: Andreas Klauer @ 2017-07-07 22:20 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Veljko, linux-raid

On Sat, Jul 08, 2017 at 02:53:18AM +0500, Roman Mamedov wrote:
> however it requires a shrinkable filesystem, and XFS cannot be shrunk.

There's a description of the manual process here:

https://wiki.ubuntuusers.de/Howto/LVM_nachtr%C3%A4glich_einrichten/

(Sorry it's in German)

You have to adapt it to use two extents on another PV instead
of shrinking the filesystem or growing the existing block device.

It can be done, but yeah it's a bit of voodoo.
Copying is safer.

Regards
Andreas Klauer

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-07 21:53         ` Roman Mamedov
  2017-07-07 22:20           ` Andreas Klauer
@ 2017-07-07 22:33           ` Andreas Klauer
  1 sibling, 0 replies; 28+ messages in thread
From: Andreas Klauer @ 2017-07-07 22:33 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Veljko, linux-raid

On Sat, Jul 08, 2017 at 02:53:18AM +0500, Roman Mamedov wrote:
> however it requires a shrinkable filesystem, and XFS cannot be shrunk.

And an easy way to adapt it might be this:

Just back up the last 100M of the filesystem. Then you can consider
the filesystem to be shrunk by 100M. Not really, but it covers
the space that will be lost in the LVM conversion (two extents).

When the conversion is done, you grow the LV back to its original size
(now using two extents on another PV that you have to provide)
and restore the 100M you backed up to the LV.

The conversion is an offline operation anyway, so it doesn't matter
whether you really shrank the filesystem or not, as long as you can
provide enough space for an LV of the original partition size.
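
A minimal sketch of that backup/restore, with the device name and target LV
assumed (not from this thread):

    DEV=/dev/mdX
    SZ_MIB=$(( $(blockdev --getsz $DEV) / 2048 ))    # device size in MiB
    dd if=$DEV of=/safe/place/tail bs=1M skip=$(( SZ_MIB - 100 ))
    # ... do the offline LVM conversion, grow the LV back to full size ...
    dd if=/safe/place/tail of=/dev/vg/lv bs=1M seek=$(( SZ_MIB - 100 ))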

Regards
Andreas Klauer

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-07 20:26     ` Veljko
  2017-07-07 21:20       ` Andreas Klauer
@ 2017-07-07 22:52       ` Stan Hoeppner
  2017-07-08 10:26         ` Veljko
  1 sibling, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2017-07-07 22:52 UTC (permalink / raw)
  To: veljko3, linux-raid

On 07/07/2017 03:26 PM, Veljko wrote:
> I just noticed that I replied to Wol insted to list.
>
> On Wed, Jul 5, 2017 at 8:07 PM, Wols Lists <antlists@youngman.org.uk> wrote:
>> On 05/07/17 17:42, Roman Mamedov wrote:
>>> On Wed, 5 Jul 2017 17:34:09 +0200
>>> Veljko <veljko3@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> I have a RAID10 device which I have formated using the mkfs.xfs
>>>> defaults (Stan helped me with this few years back). I reached 88%
>>>> capacity and it is time to expand it. I bought 4 more drives to create
>>>> another RIAD10 array. I would like to create linear device out of
>>>> those two and grow XFS across the 2nd device. How can this be done
>>>> without loosing the existing device's data? I would also like to add a
>>>> spare HDD. Do I have to have a separate spare HDD for each array or
>>>> one can be used by both of them?
>>> Why make another RAID10? With modern versions of mdadm and kernel you should
>>> be able to simply reshape the current RAID10 to increase the number of
>>> devices used from 4 to 8.
>>>
>>>
>> I was thinking of replying, but isn't that not possible for some
>> versions of RAID-10?
>>
>> My feeling was, if you can't just add drives to the existing raid 10,
>> create a new one which you can expand, migrate the fs across (btrfs
>> would let you do that live, I believe, so xfs probably can too), then
>> you can scrap the old raid-10 and add the drives into the new one.
>>
>> Cheers,
>> Wol
>
> Thanks for your input, Roman and Wol.
>
> Expanding existing RAID is one of the options, but I was advised by
> Stan Hoeppner to do it this way and I tend to believe him on this
> subject. With my metadata heavy backup workload, this will provide
> better performance.
>
> So my question is still, how can an existing array be added to linear
> device, and it's file system expanded over the second array.
For this to work the existing RAID10 array must already be a member of a 
linear device with one component device.  If this linear array already 
exists then you could add another RAID10 array to the linear device.  If 
you currently have an XFS filesystem sitting atop a 'bare' RAID10 then I 
don't believe the linear option will work for you.  Thus I'd tend to 
agree with others that reshaping your current RAID10 is the best option.

My apologies if I wasn't clear in my previous advice.

Stan


> And there is also question of spare drive.
>
> Regards,
> Veljko


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-07 22:52       ` Stan Hoeppner
@ 2017-07-08 10:26         ` Veljko
  2017-07-08 21:24           ` Stan Hoeppner
  0 siblings, 1 reply; 28+ messages in thread
From: Veljko @ 2017-07-08 10:26 UTC (permalink / raw)
  To: linux-raid

On Sat, Jul 8, 2017 at 12:52 AM, Stan Hoeppner <stan@hardwarefreak.org> wrote:
> On 07/07/2017 03:26 PM, Veljko wrote:
>>
>> I just noticed that I replied to Wol insted to list.
>>
>> On Wed, Jul 5, 2017 at 8:07 PM, Wols Lists <antlists@youngman.org.uk>
>> wrote:
>>>
>>> On 05/07/17 17:42, Roman Mamedov wrote:
>>>>
>>>> On Wed, 5 Jul 2017 17:34:09 +0200
>>>> Veljko <veljko3@gmail.com> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I have a RAID10 device which I have formated using the mkfs.xfs
>>>>> defaults (Stan helped me with this few years back). I reached 88%
>>>>> capacity and it is time to expand it. I bought 4 more drives to create
>>>>> another RIAD10 array. I would like to create linear device out of
>>>>> those two and grow XFS across the 2nd device. How can this be done
>>>>> without loosing the existing device's data? I would also like to add a
>>>>> spare HDD. Do I have to have a separate spare HDD for each array or
>>>>> one can be used by both of them?
>>>>
>>>> Why make another RAID10? With modern versions of mdadm and kernel you
>>>> should
>>>> be able to simply reshape the current RAID10 to increase the number of
>>>> devices used from 4 to 8.
>>>>
>>>>
>>> I was thinking of replying, but isn't that not possible for some
>>> versions of RAID-10?
>>>
>>> My feeling was, if you can't just add drives to the existing raid 10,
>>> create a new one which you can expand, migrate the fs across (btrfs
>>> would let you do that live, I believe, so xfs probably can too), then
>>> you can scrap the old raid-10 and add the drives into the new one.
>>>
>>> Cheers,
>>> Wol
>>
>>
>> Thanks for your input, Roman and Wol.
>>
>> Expanding existing RAID is one of the options, but I was advised by
>> Stan Hoeppner to do it this way and I tend to believe him on this
>> subject. With my metadata heavy backup workload, this will provide
>> better performance.
>>
>> So my question is still, how can an existing array be added to linear
>> device, and it's file system expanded over the second array.
>
> For this to work the existing RAID10 array must already be a member of a
> linear device with one component device.  If this linear array already
> exists then you could add another RAID10 array to the linear device.  If you
> currently have an XFS filesystem sitting atop a 'bare' RAID10 then I don't
> believe the linear option will work for you.  Thus I'd tend to agree with
> others that reshaping your current RAID10 is the best option.
>
> My apologies if I wasn't clear in my previous advice.
>
> Stan

Here is your previous advice:
"Do not use LVM.  Directly format the RAID10 device using the mkfs.xfs
defaults.  mkfs.xfs will read the md configuration and automatically
align the filesystem to the stripe width.

When the filesystem reaches 85% capacity, add 4 more drives and create
another RAID10 array.  At that point we'll teach you how to create a
linear device of the two arrays and grow XFS across the 2nd array."

From this I concluded that it is possible to create a linear device
using the existing array, but since that is not the case, I'll just have to
create the new array, move the data to it, then add the first array to a new
linear device (1 member), copy my data back to it, and then join the second
device to it. I wanted to avoid all this copying but will have to do
it. Then I will add one drive to the spare-group that both arrays will be
members of, as advised by Andreas. Does this sound OK?

Regards,
Veljko

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-08 10:26         ` Veljko
@ 2017-07-08 21:24           ` Stan Hoeppner
  2017-07-09 22:37             ` NeilBrown
  0 siblings, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2017-07-08 21:24 UTC (permalink / raw)
  To: veljko3, linux-raid, neilb

On 07/08/2017 05:26 AM, Veljko wrote:
> On Sat, Jul 8, 2017 at 12:52 AM, Stan Hoeppner <stan@hardwarefreak.org> wrote:
>> On 07/07/2017 03:26 PM, Veljko wrote:
>>> I just noticed that I replied to Wol insted to list.
>>>
>>> On Wed, Jul 5, 2017 at 8:07 PM, Wols Lists <antlists@youngman.org.uk>
>>> wrote:
>>>> On 05/07/17 17:42, Roman Mamedov wrote:
>>>>> On Wed, 5 Jul 2017 17:34:09 +0200
>>>>> Veljko <veljko3@gmail.com> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I have a RAID10 device which I have formated using the mkfs.xfs
>>>>>> defaults (Stan helped me with this few years back). I reached 88%
>>>>>> capacity and it is time to expand it. I bought 4 more drives to create
>>>>>> another RIAD10 array. I would like to create linear device out of
>>>>>> those two and grow XFS across the 2nd device. How can this be done
>>>>>> without loosing the existing device's data? I would also like to add a
>>>>>> spare HDD. Do I have to have a separate spare HDD for each array or
>>>>>> one can be used by both of them?
>>>>> Why make another RAID10? With modern versions of mdadm and kernel you
>>>>> should
>>>>> be able to simply reshape the current RAID10 to increase the number of
>>>>> devices used from 4 to 8.
>>>>>
>>>>>
>>>> I was thinking of replying, but isn't that not possible for some
>>>> versions of RAID-10?
>>>>
>>>> My feeling was, if you can't just add drives to the existing raid 10,
>>>> create a new one which you can expand, migrate the fs across (btrfs
>>>> would let you do that live, I believe, so xfs probably can too), then
>>>> you can scrap the old raid-10 and add the drives into the new one.
>>>>
>>>> Cheers,
>>>> Wol
>>>
>>> Thanks for your input, Roman and Wol.
>>>
>>> Expanding existing RAID is one of the options, but I was advised by
>>> Stan Hoeppner to do it this way and I tend to believe him on this
>>> subject. With my metadata heavy backup workload, this will provide
>>> better performance.
>>>
>>> So my question is still, how can an existing array be added to linear
>>> device, and it's file system expanded over the second array.
>> For this to work the existing RAID10 array must already be a member of a
>> linear device with one component device.  If this linear array already
>> exists then you could add another RAID10 array to the linear device.  If you
>> currently have an XFS filesystem sitting atop a 'bare' RAID10 then I don't
>> believe the linear option will work for you.  Thus I'd tend to agree with
>> others that reshaping your current RAID10 is the best option.
>>
>> My apologies if I wasn't clear in my previous advice.
>>
>> Stan
> Here is your previous advice:
> "Do not use LVM.  Directly format the RAID10 device using the mkfs.xfs
> defaults.  mkfs.xfs will read the md configuration and automatically
> align the filesystem to the stripe width.
>
> When the filesystem reaches 85% capacity, add 4 more drives and create
> another RAID10 array.  At that point we'll teach you how to create a
> linear device of the two arrays and grow XFS across the 2nd array."
>
>  From this I concluded that it is possible to create linear device
> using existing array, but since it is not the case, I'll just have to
> create new array, move data to it, than add first array to a new
> linear device (1 member), copy my data to it, and than join second
> device to it. I wanted to avoid all this copying but will have to do
> it. Than I will add one drive to spare-group both arrays will be a
> members of, as advised by Andreas. Does this sound OK?

Yes.  Although I was hoping Neil Brown would chime in here; I'm not
entirely sure that you can't do this the way I originally mentioned.

> Regards,
> Veljko


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-08 21:24           ` Stan Hoeppner
@ 2017-07-09 22:37             ` NeilBrown
  2017-07-10 11:03               ` Veljko
  0 siblings, 1 reply; 28+ messages in thread
From: NeilBrown @ 2017-07-09 22:37 UTC (permalink / raw)
  To: Stan Hoeppner, veljko3, linux-raid

[-- Attachment #1: Type: text/plain, Size: 4714 bytes --]

On Sat, Jul 08 2017, Stan Hoeppner wrote:

> On 07/08/2017 05:26 AM, Veljko wrote:
>> On Sat, Jul 8, 2017 at 12:52 AM, Stan Hoeppner <stan@hardwarefreak.org> wrote:
>>> On 07/07/2017 03:26 PM, Veljko wrote:
>>>> I just noticed that I replied to Wol insted to list.
>>>>
>>>> On Wed, Jul 5, 2017 at 8:07 PM, Wols Lists <antlists@youngman.org.uk>
>>>> wrote:
>>>>> On 05/07/17 17:42, Roman Mamedov wrote:
>>>>>> On Wed, 5 Jul 2017 17:34:09 +0200
>>>>>> Veljko <veljko3@gmail.com> wrote:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I have a RAID10 device which I have formated using the mkfs.xfs
>>>>>>> defaults (Stan helped me with this few years back). I reached 88%
>>>>>>> capacity and it is time to expand it. I bought 4 more drives to create
>>>>>>> another RIAD10 array. I would like to create linear device out of
>>>>>>> those two and grow XFS across the 2nd device. How can this be done
>>>>>>> without loosing the existing device's data? I would also like to add a
>>>>>>> spare HDD. Do I have to have a separate spare HDD for each array or
>>>>>>> one can be used by both of them?
>>>>>> Why make another RAID10? With modern versions of mdadm and kernel you
>>>>>> should
>>>>>> be able to simply reshape the current RAID10 to increase the number of
>>>>>> devices used from 4 to 8.
>>>>>>
>>>>>>
>>>>> I was thinking of replying, but isn't that not possible for some
>>>>> versions of RAID-10?
>>>>>
>>>>> My feeling was, if you can't just add drives to the existing raid 10,
>>>>> create a new one which you can expand, migrate the fs across (btrfs
>>>>> would let you do that live, I believe, so xfs probably can too), then
>>>>> you can scrap the old raid-10 and add the drives into the new one.
>>>>>
>>>>> Cheers,
>>>>> Wol
>>>>
>>>> Thanks for your input, Roman and Wol.
>>>>
>>>> Expanding existing RAID is one of the options, but I was advised by
>>>> Stan Hoeppner to do it this way and I tend to believe him on this
>>>> subject. With my metadata heavy backup workload, this will provide
>>>> better performance.
>>>>
>>>> So my question is still, how can an existing array be added to linear
>>>> device, and it's file system expanded over the second array.
>>> For this to work the existing RAID10 array must already be a member of a
>>> linear device with one component device.  If this linear array already
>>> exists then you could add another RAID10 array to the linear device.  If you
>>> currently have an XFS filesystem sitting atop a 'bare' RAID10 then I don't
>>> believe the linear option will work for you.  Thus I'd tend to agree with
>>> others that reshaping your current RAID10 is the best option.
>>>
>>> My apologies if I wasn't clear in my previous advice.
>>>
>>> Stan
>> Here is your previous advice:
>> "Do not use LVM.  Directly format the RAID10 device using the mkfs.xfs
>> defaults.  mkfs.xfs will read the md configuration and automatically
>> align the filesystem to the stripe width.
>>
>> When the filesystem reaches 85% capacity, add 4 more drives and create
>> another RAID10 array.  At that point we'll teach you how to create a
>> linear device of the two arrays and grow XFS across the 2nd array."
>>
>>  From this I concluded that it is possible to create linear device
>> using existing array, but since it is not the case, I'll just have to
>> create new array, move data to it, than add first array to a new
>> linear device (1 member), copy my data to it, and than join second
>> device to it. I wanted to avoid all this copying but will have to do
>> it. Than I will add one drive to spare-group both arrays will be a
>> members of, as advised by Andreas. Does this sound OK?
>
> Yes.  Although, I was hoping Neil Brown would chime in here. I'm not 
> entirely sure that you can't do this the way I originally mentioned.

It wasn't clear to me that I needed to chime in... and the complete lack
of details (not even "mdadm --examine" output) meant I could only
answer in vague generalizations.
However, seeing as you asked:
If you really want to have a 'linear' of 2 RAID10s, then
0/ unmount the xfs filesystem
1/ backup the last few megabytes of the device
    dd if=/dev/mdXX of=/safe/place/backup bs=1M skip=$BIGNUM
2/ create a linear array of the two RAID10s, ensuring the
   metadata is v1.0, and the dataoffset is zero (should be default with
   1.0)
    mdadm -C /dev/mdZZ -l linear -n 2 -e 1.0 --data-offset=0 /dev/mdXX /dev/mdYY
3/ restore the saved data
    dd of=/dev/mdZZ if=/safe/place/backup bs=1M seek=$BIGNUM
4/ grow the xfs filesystem
5/ be happy.

I cannot comment on the values of "few" and "$BIGNUM" without seeing
specifics.
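
For step 4, a minimal sketch (the /data mount point is hypothetical); XFS is
grown online, so the filesystem must be mounted first:

    mount /dev/mdZZ /data
    xfs_growfs /data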

NeilBrown

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-09 22:37             ` NeilBrown
@ 2017-07-10 11:03               ` Veljko
  2017-07-12 10:21                 ` Veljko
  2017-07-14  1:57                 ` NeilBrown
  0 siblings, 2 replies; 28+ messages in thread
From: Veljko @ 2017-07-10 11:03 UTC (permalink / raw)
  To: linux-raid

On 07/10/2017 12:37 AM, NeilBrown wrote:
> I wasn't clear to me that I needed to chime in..  and the complete lack
> of details (not even an "mdadm --examine" output), meant I could only
> answer in vague generalizations.
> However, seeing you asked.
> If you really want to have a 'linear' of 2 RAID10s, then
> 0/ unmount the xfs filesystem
> 1/ backup the last few megabytes of the device
>     dd if=/dev/mdXX of=/safe/place/backup bs=1M skip=$BIGNUM
> 2/ create a linear array of the two RAID10s, ensuring the
>    metadata is v1.0, and the dataoffset is zero (should be default with
>    1.0)
>     mdadm -C /dev/mdZZ -l linear -n 2 -e 1.0 --data-offset=0 /dev/mdXX /dev/mdYY
> 3/ restore the saved data
>     dd of=/dev/mdZZ if=/safe/place/backup bs=1M seek=$BIGNUM
> 4/ grow the xfs filesystem
> 5/ be happy.
>
> I cannot comment on the values of "few" and "$BUGNUM" without seeing
> specifics.
>
> NeilBrown

Thanks for your response, Neil!

md0 is boot (raid1), md1 is root (raid10) and md2 is data (raid10) that 
I need to expand. Here are details:


# mdadm --detail /dev/md0
/dev/md0:
         Version : 1.2
   Creation Time : Mon Sep 10 14:45:11 2012
      Raid Level : raid1
      Array Size : 488128 (476.77 MiB 499.84 MB)
   Used Dev Size : 488128 (476.77 MiB 499.84 MB)
    Raid Devices : 2
   Total Devices : 2
     Persistence : Superblock is persistent

     Update Time : Mon Jul  3 11:57:24 2017
           State : clean
  Active Devices : 2
Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0

            Name : backup1:0  (local to host backup1)
            UUID : e5a17766:b4df544d:c2770d6e:214113ec
          Events : 302

     Number   Major   Minor   RaidDevice State
        2       8       18        0      active sync   /dev/sdb2
        3       8       34        1      active sync   /dev/sdc2


# mdadm --detail /dev/md1
/dev/md1:
         Version : 1.2
   Creation Time : Fri Sep 14 12:39:00 2012
      Raid Level : raid10
      Array Size : 97590272 (93.07 GiB 99.93 GB)
   Used Dev Size : 48795136 (46.53 GiB 49.97 GB)
    Raid Devices : 4
   Total Devices : 4
     Persistence : Superblock is persistent

     Update Time : Mon Jul 10 12:30:46 2017
           State : clean
  Active Devices : 4
Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0

          Layout : near=2
      Chunk Size : 512K

            Name : backup1:1  (local to host backup1)
            UUID : 91560d5a:245bbc56:cc08b0ce:9c78fea1
          Events : 1003350

     Number   Major   Minor   RaidDevice State
        4       8       19        0      active sync set-A   /dev/sdb3
        6       8       35        1      active sync set-B   /dev/sdc3
        7       8       50        2      active sync set-A   /dev/sdd2
        5       8        2        3      active sync set-B   /dev/sda2


# mdadm --detail /dev/md2
/dev/md2:
         Version : 1.2
   Creation Time : Fri Sep 14 12:40:13 2012
      Raid Level : raid10
      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
   Used Dev Size : 2880815616 (2747.36 GiB 2949.96 GB)
    Raid Devices : 4
   Total Devices : 4
     Persistence : Superblock is persistent

     Update Time : Mon Jul 10 12:32:51 2017
           State : clean
  Active Devices : 4
Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0

          Layout : near=2
      Chunk Size : 512K

            Name : backup1:2  (local to host backup1)
            UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
          Events : 2689040

     Number   Major   Minor   RaidDevice State
        4       8       20        0      active sync set-A   /dev/sdb4
        6       8       36        1      active sync set-B   /dev/sdc4
        7       8       51        2      active sync set-A   /dev/sdd3
        5       8        3        3      active sync set-B   /dev/sda3


And here is examine output for md2 partitions:

# mdadm --examine /dev/sda3
/dev/sda3:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
            Name : backup1:2  (local to host backup1)
   Creation Time : Fri Sep 14 12:40:13 2012
      Raid Level : raid10
    Raid Devices : 4

  Avail Dev Size : 5762609152 (2747.83 GiB 2950.46 GB)
      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
   Used Dev Size : 5761631232 (2747.36 GiB 2949.96 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=977920 sectors
           State : clean
     Device UUID : 92beeec2:7ff92b1d:473a9641:2a078b16

     Update Time : Mon Jul 10 12:35:53 2017
        Checksum : d1abfc30 - correct
          Events : 2689040

          Layout : near=2
      Chunk Size : 512K

    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)


# mdadm --examine /dev/sdb4
/dev/sdb4:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
            Name : backup1:2  (local to host backup1)
   Creation Time : Fri Sep 14 12:40:13 2012
      Raid Level : raid10
    Raid Devices : 4

  Avail Dev Size : 5761632256 (2747.36 GiB 2949.96 GB)
      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
   Used Dev Size : 5761631232 (2747.36 GiB 2949.96 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=1024 sectors
           State : clean
     Device UUID : 01e1cb21:01a011a9:85761911:9b4d437a

     Update Time : Mon Jul 10 12:37:00 2017
        Checksum : ef9b6012 - correct
          Events : 2689040

          Layout : near=2
      Chunk Size : 512K

    Device Role : Active device 0
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)



# mdadm --examine /dev/sdc4
/dev/sdc4:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
            Name : backup1:2  (local to host backup1)
   Creation Time : Fri Sep 14 12:40:13 2012
      Raid Level : raid10
    Raid Devices : 4

  Avail Dev Size : 5761632256 (2747.36 GiB 2949.96 GB)
      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
   Used Dev Size : 5761631232 (2747.36 GiB 2949.96 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=1024 sectors
           State : clean
     Device UUID : 1a2c966f:a78ffaf3:83cf37d4:135087b7

     Update Time : Mon Jul 10 12:37:53 2017
        Checksum : 88b0f680 - correct
          Events : 2689040

          Layout : near=2
      Chunk Size : 512K

    Device Role : Active device 1
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)




# mdadm --examine /dev/sdd3
/dev/sdd3:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
            Name : backup1:2  (local to host backup1)
   Creation Time : Fri Sep 14 12:40:13 2012
      Raid Level : raid10
    Raid Devices : 4

  Avail Dev Size : 5762609152 (2747.83 GiB 2950.46 GB)
      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
   Used Dev Size : 5761631232 (2747.36 GiB 2949.96 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=977920 sectors
           State : clean
     Device UUID : 52f92e76:15228eee:a20c1ee5:8d4a17d2

     Update Time : Mon Jul 10 12:38:24 2017
        Checksum : b56275df - correct
          Events : 2689040

          Layout : near=2
      Chunk Size : 512K

    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)



Regards,
Veljko

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-10 11:03               ` Veljko
@ 2017-07-12 10:21                 ` Veljko
  2017-07-14  2:03                   ` NeilBrown
  2017-07-14  1:57                 ` NeilBrown
  1 sibling, 1 reply; 28+ messages in thread
From: Veljko @ 2017-07-12 10:21 UTC (permalink / raw)
  To: linux-raid

Hello Neil,

On 07/10/2017 01:03 PM, Veljko wrote:
> On 07/10/2017 12:37 AM, NeilBrown wrote:
>> I wasn't clear to me that I needed to chime in..  and the complete lack
>> of details (not even an "mdadm --examine" output), meant I could only
>> answer in vague generalizations.
>> However, seeing you asked.
>> If you really want to have a 'linear' of 2 RAID10s, then
>> 0/ unmount the xfs filesystem
>> 1/ backup the last few megabytes of the device
>>     dd if=/dev/mdXX of=/safe/place/backup bs=1M skip=$BIGNUM
>> 2/ create a linear array of the two RAID10s, ensuring the
>>    metadata is v1.0, and the dataoffset is zero (should be default with
>>    1.0)
>>     mdadm -C /dev/mdZZ -l linear -n 2 -e 1.0 --data-offset=0 /dev/mdXX
>> /dev/mdYY
>> 3/ restore the saved data
>>     dd of=/dev/mdZZ if=/safe/place/backup bs=1M seek=$BIGNUM
>> 4/ grow the xfs filesystem
>> 5/ be happy.
>>
>> I cannot comment on the values of "few" and "$BUGNUM" without seeing
>> specifics.
>>
>> NeilBrown
>
> Thanks for your response, Neil!
>
> md0 is boot (raid1), md1 is root (raid10) and md2 is data (raid10) that
> I need to expand. Here are details:
>
>
> # mdadm --detail /dev/md0
> /dev/md0:
>         Version : 1.2
>   Creation Time : Mon Sep 10 14:45:11 2012
>      Raid Level : raid1
>      Array Size : 488128 (476.77 MiB 499.84 MB)
>   Used Dev Size : 488128 (476.77 MiB 499.84 MB)
>    Raid Devices : 2
>   Total Devices : 2
>     Persistence : Superblock is persistent
>
>     Update Time : Mon Jul  3 11:57:24 2017
>           State : clean
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
>
>            Name : backup1:0  (local to host backup1)
>            UUID : e5a17766:b4df544d:c2770d6e:214113ec
>          Events : 302
>
>     Number   Major   Minor   RaidDevice State
>        2       8       18        0      active sync   /dev/sdb2
>        3       8       34        1      active sync   /dev/sdc2
>
>
> # mdadm --detail /dev/md1
> /dev/md1:
>         Version : 1.2
>   Creation Time : Fri Sep 14 12:39:00 2012
>      Raid Level : raid10
>      Array Size : 97590272 (93.07 GiB 99.93 GB)
>   Used Dev Size : 48795136 (46.53 GiB 49.97 GB)
>    Raid Devices : 4
>   Total Devices : 4
>     Persistence : Superblock is persistent
>
>     Update Time : Mon Jul 10 12:30:46 2017
>           State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
>
>          Layout : near=2
>      Chunk Size : 512K
>
>            Name : backup1:1  (local to host backup1)
>            UUID : 91560d5a:245bbc56:cc08b0ce:9c78fea1
>          Events : 1003350
>
>     Number   Major   Minor   RaidDevice State
>        4       8       19        0      active sync set-A   /dev/sdb3
>        6       8       35        1      active sync set-B   /dev/sdc3
>        7       8       50        2      active sync set-A   /dev/sdd2
>        5       8        2        3      active sync set-B   /dev/sda2
>
>
> # mdadm --detail /dev/md2
> /dev/md2:
>         Version : 1.2
>   Creation Time : Fri Sep 14 12:40:13 2012
>      Raid Level : raid10
>      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
>   Used Dev Size : 2880815616 (2747.36 GiB 2949.96 GB)
>    Raid Devices : 4
>   Total Devices : 4
>     Persistence : Superblock is persistent
>
>     Update Time : Mon Jul 10 12:32:51 2017
>           State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
>
>          Layout : near=2
>      Chunk Size : 512K
>
>            Name : backup1:2  (local to host backup1)
>            UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
>          Events : 2689040
>
>     Number   Major   Minor   RaidDevice State
>        4       8       20        0      active sync set-A   /dev/sdb4
>        6       8       36        1      active sync set-B   /dev/sdc4
>        7       8       51        2      active sync set-A   /dev/sdd3
>        5       8        3        3      active sync set-B   /dev/sda3
>
>
> And here is examine output for md2 partitions:
>
> # mdadm --examine /dev/sda3
> /dev/sda3:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
>            Name : backup1:2  (local to host backup1)
>   Creation Time : Fri Sep 14 12:40:13 2012
>      Raid Level : raid10
>    Raid Devices : 4
>
>  Avail Dev Size : 5762609152 (2747.83 GiB 2950.46 GB)
>      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
>   Used Dev Size : 5761631232 (2747.36 GiB 2949.96 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262064 sectors, after=977920 sectors
>           State : clean
>     Device UUID : 92beeec2:7ff92b1d:473a9641:2a078b16
>
>     Update Time : Mon Jul 10 12:35:53 2017
>        Checksum : d1abfc30 - correct
>          Events : 2689040
>
>          Layout : near=2
>      Chunk Size : 512K
>
>    Device Role : Active device 3
>    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
>
>
> # mdadm --examine /dev/sdb4
> /dev/sdb4:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
>            Name : backup1:2  (local to host backup1)
>   Creation Time : Fri Sep 14 12:40:13 2012
>      Raid Level : raid10
>    Raid Devices : 4
>
>  Avail Dev Size : 5761632256 (2747.36 GiB 2949.96 GB)
>      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
>   Used Dev Size : 5761631232 (2747.36 GiB 2949.96 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262064 sectors, after=1024 sectors
>           State : clean
>     Device UUID : 01e1cb21:01a011a9:85761911:9b4d437a
>
>     Update Time : Mon Jul 10 12:37:00 2017
>        Checksum : ef9b6012 - correct
>          Events : 2689040
>
>          Layout : near=2
>      Chunk Size : 512K
>
>    Device Role : Active device 0
>    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
>
>
>
> # mdadm --examine /dev/sdc4
> /dev/sdc4:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
>            Name : backup1:2  (local to host backup1)
>   Creation Time : Fri Sep 14 12:40:13 2012
>      Raid Level : raid10
>    Raid Devices : 4
>
>  Avail Dev Size : 5761632256 (2747.36 GiB 2949.96 GB)
>      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
>   Used Dev Size : 5761631232 (2747.36 GiB 2949.96 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262064 sectors, after=1024 sectors
>           State : clean
>     Device UUID : 1a2c966f:a78ffaf3:83cf37d4:135087b7
>
>     Update Time : Mon Jul 10 12:37:53 2017
>        Checksum : 88b0f680 - correct
>          Events : 2689040
>
>          Layout : near=2
>      Chunk Size : 512K
>
>    Device Role : Active device 1
>    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
>
>
>
>
> # mdadm --examine /dev/sdd3
> /dev/sdd3:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
>            Name : backup1:2  (local to host backup1)
>   Creation Time : Fri Sep 14 12:40:13 2012
>      Raid Level : raid10
>    Raid Devices : 4
>
>  Avail Dev Size : 5762609152 (2747.83 GiB 2950.46 GB)
>      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
>   Used Dev Size : 5761631232 (2747.36 GiB 2949.96 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262064 sectors, after=977920 sectors
>           State : clean
>     Device UUID : 52f92e76:15228eee:a20c1ee5:8d4a17d2
>
>     Update Time : Mon Jul 10 12:38:24 2017
>        Checksum : b56275df - correct
>          Events : 2689040
>
>          Layout : near=2
>      Chunk Size : 512K
>
>    Device Role : Active device 2
>    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)


Do you know what the "few" and "$BIGNUM" values would be, given the output
above?

Since I need to expand the md2 device, I guess that I need to subtract
"few" megabytes, i.e. ($few x 1024 x 1024) bytes, from the array
size of md2 (in my case 5761631232). Is this correct? Is $BIGNUM the
size of the md2 array? How do I know how many megabytes need to be backed up?

The data offset is not zero on the md2 partitions. Is that a dealbreaker?

Would it then be better to reshape the current RAID10 to increase the
number of devices used from 4 to 8 (as advised by Roman)?

Regards,
Veljko


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-10 11:03               ` Veljko
  2017-07-12 10:21                 ` Veljko
@ 2017-07-14  1:57                 ` NeilBrown
  2017-07-14  2:05                   ` NeilBrown
  2017-07-14 13:40                   ` Veljko
  1 sibling, 2 replies; 28+ messages in thread
From: NeilBrown @ 2017-07-14  1:57 UTC (permalink / raw)
  To: Veljko, linux-raid

[-- Attachment #1: Type: text/plain, Size: 3061 bytes --]

On Mon, Jul 10 2017, Veljko wrote:

> On 07/10/2017 12:37 AM, NeilBrown wrote:
>> I wasn't clear to me that I needed to chime in..  and the complete lack
>> of details (not even an "mdadm --examine" output), meant I could only
>> answer in vague generalizations.
>> However, seeing you asked.
>> If you really want to have a 'linear' of 2 RAID10s, then
>> 0/ unmount the xfs filesystem
>> 1/ backup the last few megabytes of the device
>>     dd if=/dev/mdXX of=/safe/place/backup bs=1M skip=$BIGNUM
>> 2/ create a linear array of the two RAID10s, ensuring the
>>    metadata is v1.0, and the dataoffset is zero (should be default with
>>    1.0)
>>     mdadm -C /dev/mdZZ -l linear -n 2 -e 1.0 --data-offset=0 /dev/mdXX /dev/mdYY
>> 3/ restore the saved data
>>     dd of=/dev/mdZZ if=/safe/place/backup bs=1M seek=$BIGNUM
>> 4/ grow the xfs filesystem
>> 5/ be happy.
>>
>> I cannot comment on the values of "few" and "$BUGNUM" without seeing
>> specifics.
>>
>> NeilBrown
>
> Thanks for your response, Neil!
>
> md0 is boot (raid1), md1 is root (raid10) and md2 is data (raid10) that 
> I need to expand. Here are details:

Presumably you also have an md3 raid10 which you want to attach to the
end of md2?

md2 is 5761631232 sectors.
  2880815616 kilobytes
  2813296.5 (binary)megabytes.
  
When you include that into a "linear" you will lose a few K from the
end.
It might be sensible to cause the "linear" to use whole stripes from
the raid10, where a stripe is 1M (2 512K chunks).
If you did that, you would lose a little over 1M.
So back up the last 3.5M of the raid10.  This is much more than you need.

ie.

  dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=2813293

(dd treats 'M' as 1024*1024, MB is 1000*1000)

If the file this creates is not 3.5M, then something went wrong.  Stop
here.

Just to be safe you might want to back up the first few megabytes.  You
won't need this unless something goes wrong:
  dd if=/dev/md2 of=SOMEWHERE/ELSE bs=1M count=10

Now create the linear from /dev/md2 and /dev/md3(?).  Be sure to use
"-e 1.0 --data-offset=0".  This creates /dev/md4

Now restore the first backup

 dd if=SOMEWHERE/SAFE of=/dev/md4 bs=1M seek=2813293

Be sure to use the same bs= and the same offset number (skip= before, now
seek=) as you did the first time.
Be sure it is copying from the backup and to the new linear raid.

You should now be done. Check your xfs filesystem, and maybe even mount
it and use it.
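
A minimal sketch of those final checks (the /data mount point is assumed):

    mdadm --detail /dev/md4       # confirm both members are in the linear array
    xfs_repair -n /dev/md4        # read-only check, writes nothing
    mount /dev/md4 /data
    xfs_growfs /data              # grow XFS over the added space
    df -h /data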

NeilBrown

>
> # mdadm --detail /dev/md2
> /dev/md2:
>          Version : 1.2
>    Creation Time : Fri Sep 14 12:40:13 2012
>       Raid Level : raid10
>       Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
>    Used Dev Size : 2880815616 (2747.36 GiB 2949.96 GB)
>     Raid Devices : 4
>    Total Devices : 4
>      Persistence : Superblock is persistent
>
>      Update Time : Mon Jul 10 12:32:51 2017
>            State : clean
>   Active Devices : 4
> Working Devices : 4
>   Failed Devices : 0
>    Spare Devices : 0
>
>           Layout : near=2
>       Chunk Size : 512K
>

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-12 10:21                 ` Veljko
@ 2017-07-14  2:03                   ` NeilBrown
  0 siblings, 0 replies; 28+ messages in thread
From: NeilBrown @ 2017-07-14  2:03 UTC (permalink / raw)
  To: Veljko, linux-raid

[-- Attachment #1: Type: text/plain, Size: 711 bytes --]

On Wed, Jul 12 2017, Veljko wrote:
>
> Data offset is not zero on md2 partitions. Is that a dealbreaker?

No.

>
> Would it be than better to reshape the current RAID10 to increase the 
> number of devices used from 4 to 8 (as advised by Roman)?

I cannot comment on "better".  It would be different.  Both approaches
should work.  End results would not be identical.
A reshape would take longer, but would leave you with just one array to
manage.

NeilBrown

>
> Regards,
> Veljko
>

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-14  1:57                 ` NeilBrown
@ 2017-07-14  2:05                   ` NeilBrown
  2017-07-14 13:40                   ` Veljko
  1 sibling, 0 replies; 28+ messages in thread
From: NeilBrown @ 2017-07-14  2:05 UTC (permalink / raw)
  To: Veljko, linux-raid

[-- Attachment #1: Type: text/plain, Size: 455 bytes --]

On Fri, Jul 14 2017, NeilBrown wrote:
>
> Now create the linear from /dev/md2 and /dev/md3(?).  Be sure to use
> "-e 1.0 --data-offset=0".  This creates /dev/md4

I meant to add:
 if you want the linear device to align with raid10 stripes (which seems
 elegant, and might slightly help performance in rare cases) you can use
 the "--rounding" mdadm option.  e.g.
   mdadm --create /dev/md4 -l linear --rounding=1M -e 1.0  --data-offset=0 .....

NeilBrown

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-14  1:57                 ` NeilBrown
  2017-07-14  2:05                   ` NeilBrown
@ 2017-07-14 13:40                   ` Veljko
  2017-07-15  0:12                     ` NeilBrown
  1 sibling, 1 reply; 28+ messages in thread
From: Veljko @ 2017-07-14 13:40 UTC (permalink / raw)
  To: linux-raid

On 07/14/2017 03:57 AM, NeilBrown wrote:
>
> Presumably you also have an md3 raid10 which you want to attach to the
> end of md2?

Yes, I just created it. It's resyncing.


> md2 is 5761631232 sectors.
>   2880815616 kilobytes
>   2813296.5 (binary)megabytes.
>
> When you include that into a "linear" you will lose a few K from the
> end.
> It might be sensible to cause the "linear" to use whole stripes from
> the raid10, where a stripe is 1M (2 512K chunks).
> If you did that, you would lose a little over 1M.
> So backup the last 3.5 M of the raid10.  This is much more than you need.
>
> ie.
>
>   dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=2813293

I'm a little confused. What I'm backing up is the last 3.5M of used space,
right? How is that only ~2.7T? df shows 4.8T of used space.

The rest of the instructions are clear and I'll try it as soon as md3 is
synced.

Thanks Neil!

> (dd treats 'M' as 1024*1024, MB is 1000*1000)
>
> If the file this creates is not 3.5M, then something went wrong.  Stop
> here.
>
> Just to be safe you might want to backup the first few megabytes.  You
> won't need this unless something goes wrong
>   dd if=/dev/md2 of=SOMEWHERE/ELSE bs=1M count=10
>
> Now create the linear from /dev/md2 and /dev/md3(?).  Be sure to use
> "-e 1.0 --data-offset=0".  This creates /dev/md4
>
> Now restore the first backup
>
>  dd if=SOMEWHERE/SAFE of=/dev/md4 bs=1M seek=2813293
>
> Be sure to use the same bs= and seek= as you did the first time.
> Be sure it is copying from the back and to the new linear raid.
>
> You should now be done. Check your xfs filesystem, and maybe even mount
> it and use it.
>
> NeilBrown
>
>>
>> # mdadm --detail /dev/md2
>> /dev/md2:
>>          Version : 1.2
>>    Creation Time : Fri Sep 14 12:40:13 2012
>>       Raid Level : raid10
>>       Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
>>    Used Dev Size : 2880815616 (2747.36 GiB 2949.96 GB)
>>     Raid Devices : 4
>>    Total Devices : 4
>>      Persistence : Superblock is persistent
>>
>>      Update Time : Mon Jul 10 12:32:51 2017
>>            State : clean
>>   Active Devices : 4
>> Working Devices : 4
>>   Failed Devices : 0
>>    Spare Devices : 0
>>
>>           Layout : near=2
>>       Chunk Size : 512K
>>


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-14 13:40                   ` Veljko
@ 2017-07-15  0:12                     ` NeilBrown
  2017-07-17 10:16                       ` Veljko
  0 siblings, 1 reply; 28+ messages in thread
From: NeilBrown @ 2017-07-15  0:12 UTC (permalink / raw)
  To: Veljko, linux-raid

[-- Attachment #1: Type: text/plain, Size: 2951 bytes --]

On Fri, Jul 14 2017, Veljko wrote:

> On 07/14/2017 03:57 AM, NeilBrown wrote:
>>
>> Presumably you also have an md3 raid10 which you want to attach to the
>> end of md2?
>
> Yes, I just created it. It's resyncing.
>
>
>> md2 is 5761631232 sectors.
>>   2880815616 kilobytes
>>   2813296.5 (binary)megabytes.
>>
>> When you include that into a "linear" you will lose a few K from the
>> end.
>> It might be sensible to cause the "linear" to use whole stripes from
>> the raid10, where a stripe is 1M (2 512K chunks).
>> If you did that, you would lose a little over 1M.
>> So backup the last 3.5 M of the raid10.  This is much more than you need.
>>
>> ie.
>>
>>   dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=2813293
>
> I'm little confused. What I'm backuping is last 3.5M of used space, 
> right? How is that only ~2.7T? df shows 4.8T of used space.

You were right to check.

md2 is 5761631232 kilobytes, not sectors,
so 5626593 (binary) megabytes exactly.

So the command should be:

>>   dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=5626590

and expect it to create a 3M file.

Use this 'skip' number as the 'seek' number later.

NeilBrown


>
> Rest of the instructions are clear and I'll try it as soon as md3 is 
> synced.
>
> Thanks Neil!
>
>> (dd treats 'M' as 1024*1024, MB is 1000*1000)
>>
>> If the file this creates is not 3.5M, then something went wrong.  Stop
>> here.
>>
>> Just to be safe you might want to backup the first few megabytes.  You
>> won't need this unless something goes wrong
>>   dd if=/dev/md2 of=SOMEWHERE/ELSE bs=1M count=10
>>
>> Now create the linear from /dev/md2 and /dev/md3(?).  Be sure to use
>> "-e 1.0 --data-offset=0".  This creates /dev/md4
>>
>> Now restore the first backup
>>
>>  dd if=SOMEWHERE/SAFE of=/dev/md4 bs=1M seek=2813293
>>
>> Be sure to use the same bs= and seek= as you did the first time.
>> Be sure it is copying from the back and to the new linear raid.
>>
>> You should now be done. Check your xfs filesystem, and maybe even mount
>> it and use it.
>>
>> NeilBrown
>>
>>>
>>> # mdadm --detail /dev/md2
>>> /dev/md2:
>>>          Version : 1.2
>>>    Creation Time : Fri Sep 14 12:40:13 2012
>>>       Raid Level : raid10
>>>       Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
>>>    Used Dev Size : 2880815616 (2747.36 GiB 2949.96 GB)
>>>     Raid Devices : 4
>>>    Total Devices : 4
>>>      Persistence : Superblock is persistent
>>>
>>>      Update Time : Mon Jul 10 12:32:51 2017
>>>            State : clean
>>>   Active Devices : 4
>>> Working Devices : 4
>>>   Failed Devices : 0
>>>    Spare Devices : 0
>>>
>>>           Layout : near=2
>>>       Chunk Size : 512K
>>>
>

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Linear device of two arrays
  2017-07-15  0:12                     ` NeilBrown
@ 2017-07-17 10:16                       ` Veljko
  2017-07-18  8:58                         ` Veljko
  2017-07-20 22:00                         ` NeilBrown
  0 siblings, 2 replies; 28+ messages in thread
From: Veljko @ 2017-07-17 10:16 UTC (permalink / raw)
  To: linux-raid

On 07/15/2017 02:12 AM, NeilBrown wrote:

> So command should be
>
>>>   dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=5626590
>
> and expect it to create a 3M file.
>
> Use this 'skip' number of the 'seek' number later.
>
> NeilBrown

Thanks, Neil, now it makes more sense.

I tried to create new linear device, but mdadm is complaining about 
data-offset:

# mdadm -C /dev/md4 -l linear -n 2 --rounding=1M -e 1.0 --data-offset=0 
/dev/md2 /dev/md3
  mdadm: invalid data-offset: 0

I'm using Debian 8.8 if it makes any difference.

# mdadm -V
  mdadm - v3.3.2 - 21st August 2014

What could be the problem?

Regards,
Veljko


* Re: Linear device of two arrays
  2017-07-17 10:16                       ` Veljko
@ 2017-07-18  8:58                         ` Veljko
  2017-07-20 21:40                           ` Veljko
  2017-07-20 22:00                         ` NeilBrown
  1 sibling, 1 reply; 28+ messages in thread
From: Veljko @ 2017-07-18  8:58 UTC (permalink / raw)
  To: linux-raid

On 07/17/2017 12:16 PM, Veljko wrote:
> On 07/15/2017 02:12 AM, NeilBrown wrote:
>
>> So the command should be
>>
>>>>   dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=5626590
>>
>> and expect it to create a 3M file.
>>
>> Use this 'skip' number as the 'seek' number later.
>>
>> NeilBrown
>
> Thanks, Neil, now it makes more sense.
>
> I tried to create new linear device, but mdadm is complaining about
> data-offset:
>
> # mdadm -C /dev/md4 -l linear -n 2 --rounding=1M -e 1.0 --data-offset=0
> /dev/md2 /dev/md3
>  mdadm: invalid data-offset: 0
>
> I'm using Debian 8.8 if it makes any difference.
>
> # mdadm -V
>  mdadm - v3.3.2 - 21st August 2014
>
> What could be the problem?

I noticed that md2 and md3 use 1.2 metadata. Can that be the issue? 
Trying to create 1.0 metadata for md4?


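As an aside, the metadata version of each piece is easy to double-check;
the same field shows up in both --detail and --examine output (sda4 is
one of md2's member partitions, per the --detail listing later in the
thread):

  mdadm --detail /dev/md2 | grep -i version    # reports "Version : 1.2" here
  mdadm --examine /dev/sda4 | grep -i version  # same field on a member partition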

* Re: Linear device of two arrays
  2017-07-18  8:58                         ` Veljko
@ 2017-07-20 21:40                           ` Veljko
  0 siblings, 0 replies; 28+ messages in thread
From: Veljko @ 2017-07-20 21:40 UTC (permalink / raw)
  To: linux-raid, neilb

On Tue, Jul 18, 2017 at 10:58 AM, Veljko <veljko3@gmail.com> wrote:
> On 07/17/2017 12:16 PM, Veljko wrote:
>>
>> On 07/15/2017 02:12 AM, NeilBrown wrote:
>>
>>> So the command should be
>>>
>>>>>   dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=5626590
>>>
>>>
>>> and expect it to create a 3M file.
>>>
>>> Use this 'skip' number as the 'seek' number later.
>>>
>>> NeilBrown
>>
>>
>> Thanks, Neil, now it makes more sense.
>>
>> I tried to create new linear device, but mdadm is complaining about
>> data-offset:
>>
>> # mdadm -C /dev/md4 -l linear -n 2 --rounding=1M -e 1.0 --data-offset=0
>> /dev/md2 /dev/md3
>>  mdadm: invalid data-offset: 0
>>
>> I'm using Debian 8.8 if it makes any difference.
>>
>> # mdadm -V
>>  mdadm - v3.3.2 - 21st August 2014
>>
>> What could be the problem?
>
>
> I noticed that md2 and md3 use 1.2 metadata. Can that be the issue? Trying
> to create 1.0 metadata for md4?

Any advice on this?


* Re: Linear device of two arrays
  2017-07-17 10:16                       ` Veljko
  2017-07-18  8:58                         ` Veljko
@ 2017-07-20 22:00                         ` NeilBrown
  2017-07-21  9:15                           ` Veljko
  1 sibling, 1 reply; 28+ messages in thread
From: NeilBrown @ 2017-07-20 22:00 UTC (permalink / raw)
  To: Veljko, linux-raid

On Mon, Jul 17 2017, Veljko wrote:

> On 07/15/2017 02:12 AM, NeilBrown wrote:
>
>> So the command should be
>>
>>>>   dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=5626590
>>
>> and expect it to create a 3M file.
>>
>> Use this 'skip' number as the 'seek' number later.
>>
>> NeilBrown
>
> Thanks, Neil, now it makes more sense.
>
> I tried to create new linear device, but mdadm is complaining about 
> data-offset:
>
> # mdadm -C /dev/md4 -l linear -n 2 --rounding=1M -e 1.0 --data-offset=0 
> /dev/md2 /dev/md3
>   mdadm: invalid data-offset: 0

Bother.
 mdadm uses "parse_size()" to parse the offset, and this rejects
 "0", which makes sense for a size, but not for an offset.

 Just leave the "--data-offset=0" out. I checked and that is definitely
 the default for 1.0.

> I noticed that md2 and md3 use 1.2 metadata. Can that be the issue? 

No, the metadata on md4 is quite independent of the metadata on md2 and md3.

NeilBrown


>
> I'm using Debian 8.8 if it makes any difference.
>
> # mdadm -V
>   mdadm - v3.3.2 - 21st August 2014
>
> What could be the problem?
>
> Regards,
> Veljko

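In other words, the create command that works here is simply the earlier
one with the offset option dropped, along these lines:

  # 1.0 metadata lives at the end of each component, so the data offset
  # is already 0 by default.
  mdadm -C /dev/md4 -l linear -n 2 --rounding=1M -e 1.0 /dev/md2 /dev/md3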

* Re: Linear device of two arrays
  2017-07-20 22:00                         ` NeilBrown
@ 2017-07-21  9:15                           ` Veljko
  2017-07-21 11:37                             ` Veljko
  0 siblings, 1 reply; 28+ messages in thread
From: Veljko @ 2017-07-21  9:15 UTC (permalink / raw)
  To: NeilBrown, linux-raid

On 07/21/2017 12:00 AM, NeilBrown wrote:
> On Mon, Jul 17 2017, Veljko wrote:
>
>> On 07/15/2017 02:12 AM, NeilBrown wrote:
>>
>>> So the command should be
>>>
>>>>>   dd if=/dev/md2 of=SOMEWHERE/SAFE bs=1M skip=5626590
>>>
>>> and expect it to create a 3M file.
>>>
>>> Use this 'skip' number as the 'seek' number later.
>>>
>>> NeilBrown
>>
>> Thanks, Neil, now it makes more sense.
>>
>> I tried to create new linear device, but mdadm is complaining about
>> data-offset:
>>
>> # mdadm -C /dev/md4 -l linear -n 2 --rounding=1M -e 1.0 --data-offset=0
>> /dev/md2 /dev/md3
>>   mdadm: invalid data-offset: 0
>
> Bother.
>  mdadm uses "parse_size()" to parse the offset, and this rejects
>  "0", which makes sense for a size, but not for an offset.
>
>  Just leave the "--data-offset=0" out. I checked and that is definitely
>  the default for 1.0.
>
>> I noticed that md2 and md3 use 1.2 metadata. Can that be the issue?
>
> No, the metadata on md4 is quite independent of the metadata on md2 and md3.
>
> NeilBrown

Yes, now it works. I was able to create the new linear device, restore the
saved 3M file and grow the xfs filesystem. It was really fast and indeed I'm happy.

Thank you very much, Neil!

Regards,
Veljko


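The finishing steps reported here would have been along these lines (a
sketch: the mount point /mnt/backup is only a stand-in for wherever the
filesystem normally lives, and xfs_growfs works on a mounted filesystem):

  # Put the saved tail back at the same offset it was taken from, then
  # grow XFS into the space contributed by md3.
  dd if=SOMEWHERE/SAFE of=/dev/md4 bs=1M seek=5626590
  mount /dev/md4 /mnt/backup
  xfs_growfs /mnt/backup    # with no size argument it grows to fill the device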

* Re: Linear device of two arrays
  2017-07-21  9:15                           ` Veljko
@ 2017-07-21 11:37                             ` Veljko
  2017-07-22 23:03                               ` NeilBrown
  0 siblings, 1 reply; 28+ messages in thread
From: Veljko @ 2017-07-21 11:37 UTC (permalink / raw)
  To: NeilBrown, linux-raid

Hello Neil,

On 07/21/2017 11:15 AM, Veljko wrote:
>> On 07/21/2017 12:00 AM, NeilBrown wrote:
>> Bother.
>>  mdadm uses "parse_size()" to parse the offset, and this rejects
>>  "0", which makes sense for a size, but not for an offset.
>>
>>  Just leave the "--data-offset=0" out. I checked and that is definitely
>>  the default for 1.0.
>
> Yes, now it works. I was able to create the new linear device, restore the
> saved 3M file and grow the xfs filesystem. It was really fast and indeed I'm happy.
>
> Thank you very much, Neil!


Well, not without problems, it seems.

I was having a problem with the root partition, which was mounted read-only
because of some ext4 issue. After fixing that, on reboot I still
have the extra space that was added with the 4 new drives, but there is no md3
or md4. I'm mounting this device in fstab by UUID. I noticed that the UUID
for md4 was the same as md2's before rebooting. But if I use the array UUID
from the --examine output of md2 (the first member of the linear raid), I get:

    mount: can't find UUID="24843a41:8f84ee37:869fbe7b:bc953b58"

The one that works and that is in fstab is the one shown for md2 in the
by-uuid listing below.


# cat /proc/mdstat
Personalities : [raid1] [raid10]
md2 : active raid10 sda4[4] sdd3[5] sdc3[7] sdb4[6]
       5761631232 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md1 : active raid10 sda3[4] sdd2[5] sdc2[7] sdb3[6]
       97590272 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md0 : active raid1 sda2[2] sdb2[3]
       488128 blocks super 1.2 [2/2] [UU]


No md3 or md4.

Also, no mention of them in mdadm.conf

# definitions of existing MD arrays
  ARRAY /dev/md/0 metadata=1.2 UUID=e5a17766:b4df544d:c2770d6e:214113ec name=backup1:0
  ARRAY /dev/md/1 metadata=1.2 UUID=91560d5a:245bbc56:cc08b0ce:9c78fea1 name=backup1:1
  ARRAY /dev/md/2 metadata=1.2 UUID=f6eeaa57:a55f36ff:6980a62a:d4781e44 name=backup1:2

or in dev:

# ls /dev/md/
0  1  2

or in /dev/disk/by-uuid:

  ls -al /dev/disk/by-uuid/
  total 0
  drwxr-xr-x 2 root root 120 Jul 21 12:11 .
  drwxr-xr-x 7 root root 140 Jul 21 12:11 ..
  lrwxrwxrwx 1 root root  10 Jul 21 12:15 64f194a4-7a3f-4cff-b167-ff6d8e70adff -> ../../dm-1
  lrwxrwxrwx 1 root root   9 Jul 21 12:15 81060b25-b698-4cbd-b67f-d35c42c9482c -> ../../md2
  lrwxrwxrwx 1 root root  10 Jul 21 12:15 b0285993-75db-48fd-bcd7-10d870e6069f -> ../../dm-0
  lrwxrwxrwx 1 root root   9 Jul 21 12:15 dcfde992-2fe2-4da2-bb66-8c541a4bd473 -> ../../md0



Running --examine on md2 shows that it is a member of the newly created
linear md4 array:

# mdadm --examine /dev/md2
/dev/md2:
           Magic : a92b4efc
         Version : 1.0
     Feature Map : 0x0
      Array UUID : 24843a41:8f84ee37:869fbe7b:bc953b58
            Name : backup2:4  (local to host backup2)
   Creation Time : Fri Jul 21 10:58:07 2017
      Raid Level : linear
    Raid Devices : 2

  Avail Dev Size : 11523262440 (5494.72 GiB 5899.91 GB)
   Used Dev Size : 0
    Super Offset : 11523262448 sectors
           State : clean
     Device UUID : d8931222:30af893e:9e0d2fe3:b18274ef

     Update Time : Fri Jul 21 10:58:07 2017
   Bad Block Log : 512 entries available at offset -8 sectors
        Checksum : eff10ec - correct
          Events : 0

        Rounding : 1024K

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)


This device and its array have a different UUID from the one used for
mounting the device in fstab.

--detail on md2 still works.

mdadm --detail /dev/md2
/dev/md2:
         Version : 1.2
   Creation Time : Fri Sep 14 12:40:13 2012
      Raid Level : raid10
      Array Size : 5761631232 (5494.72 GiB 5899.91 GB)
   Used Dev Size : 2880815616 (2747.36 GiB 2949.96 GB)
    Raid Devices : 4
   Total Devices : 4
     Persistence : Superblock is persistent

     Update Time : Fri Jul 21 12:28:07 2017
           State : clean
  Active Devices : 4
Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0

          Layout : near=2
      Chunk Size : 512K

            Name : backup1:2
            UUID : f6eeaa57:a55f36ff:6980a62a:d4781e44
          Events : 2689052

     Number   Major   Minor   RaidDevice State
        4       8        4        0      active sync set-A   /dev/sda4
        6       8       20        1      active sync set-B   /dev/sdb4
        7       8       35        2      active sync set-A   /dev/sdc3
        5       8       51        3      active sync set-B   /dev/sdd3


Running --examine on the partitions that are members of md2 or md3 shows that
they know where they belong.

Is there a procedure for finding the missing devices? I'm not really 
comfortable with this confusing situation.

Regards,
Veljko

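A quick way to keep the two UUID namespaces apart once /dev/md4 is
assembled again: blkid reports the filesystem UUID, which is what a
UUID= line in fstab matches, while mdadm reports the md array's own
UUID.

  blkid /dev/md4                        # filesystem UUID (the one fstab wants)
  mdadm --detail /dev/md4 | grep UUID   # the array UUID, a separate namespace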

* Re: Linear device of two arrays
  2017-07-21 11:37                             ` Veljko
@ 2017-07-22 23:03                               ` NeilBrown
  2017-07-23 10:05                                 ` Veljko
  0 siblings, 1 reply; 28+ messages in thread
From: NeilBrown @ 2017-07-22 23:03 UTC (permalink / raw)
  To: Veljko, linux-raid

On Fri, Jul 21 2017, Veljko wrote:

> Hello Neil,
>
> On 07/21/2017 11:15 AM, Veljko wrote:
>>> On 07/21/2017 12:00 AM, NeilBrown wrote:
>>> Bother.
>>>  mdadm uses "parse_size()" to parse the offset, and this rejects
>>>  "0", which makes sense for a size, but not for an offset.
>>>
>>>  Just leave the "--data-offset=0" out. I checked and that is definitely
>>>  the default for 1.0.
>>
>> Yes, now it works. I was able to create the new linear device, restore the
>> saved 3M file and grow the xfs filesystem. It was really fast and indeed I'm happy.
>>
>> Thank you very much, Neil!
>
>
> Well, not without problems, it seems.
>
> I was having a problem with the root partition, which was mounted read-only
> because of some ext4 issue. After fixing that, on reboot I still
> have the extra space that was added with the 4 new drives, but there is no md3
> or md4. I'm mounting this device in fstab by UUID. I noticed that the UUID
> for md4 was the same as md2's before rebooting. But if I use the array UUID
> from the --examine output of md2 (the first member of the linear raid), I get:
>
>     mount: can't find UUID="24843a41:8f84ee37:869fbe7b:bc953b58"
>
> The one that works and that is in fstab is the one shown for md2 in the
> by-uuid listing below.

The UUID you give to mount is the UUID of the filesystem, not of the
device (or array) which stores the filesystem.

One of the problems with using 1.0 metadata (or 0.90) is that the first
component device looks like it contains the same filesystem as the whole
array.   I think this is what is causing your confusion.

>
>
> # cat /proc/mdstat
> Personalities : [raid1] [raid10]
> md2 : active raid10 sda4[4] sdd3[5] sdc3[7] sdb4[6]
>        5761631232 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
>
> md1 : active raid10 sda3[4] sdd2[5] sdc2[7] sdb3[6]
>        97590272 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
>
> md0 : active raid1 sda2[2] sdb2[3]
>        488128 blocks super 1.2 [2/2] [UU]
>
>
> No md3 or md4.
>
> Also, no mention of them in mdadm.conf
>
> # definitions of existing MD arrays
>   ARRAY /dev/md/0 metadata=1.2 UUID=e5a17766:b4df544d:c2770d6e:214113ec name=backup1:0
>   ARRAY /dev/md/1 metadata=1.2 UUID=91560d5a:245bbc56:cc08b0ce:9c78fea1 name=backup1:1
>   ARRAY /dev/md/2 metadata=1.2 UUID=f6eeaa57:a55f36ff:6980a62a:d4781e44 name=backup1:2

This all depends on the details of the particular distro you are using.
You don't, in general, need arrays to be listed in mdadm.conf.  A
particular distro could require it though.

If you run
   mdadm -Es

It will show a sample mdadm.conf which should contain /dev/md/3 - the
new raid10, and /dev/md/4.
You could add those lines to mdadm.conf, then
   mdadm --assemble /dev/md/3
   mdadm --assemble /dev/md/4
and it should get assembled.  Then you should be able to mount the large
filesystem successfully.

NeilBrown

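On a Debian system such as this one, Neil's suggestion translates
roughly into the following (the /etc/mdadm/mdadm.conf path and the
update-initramfs step are Debian conventions assumed here; review the
mdadm -Es output before copying anything):

  mdadm -Es                  # prints an ARRAY line for every array it finds
  # append the two new ARRAY lines to /etc/mdadm/mdadm.conf, then:
  mdadm --assemble /dev/md/3
  mdadm --assemble /dev/md/4
  update-initramfs -u        # Debian keeps a copy of mdadm.conf in the initramfs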

* Re: Linear device of two arrays
  2017-07-22 23:03                               ` NeilBrown
@ 2017-07-23 10:05                                 ` Veljko
  0 siblings, 0 replies; 28+ messages in thread
From: Veljko @ 2017-07-23 10:05 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

On 2017-Jul-23 09:03, NeilBrown wrote:
> The UUID you give to mount is the UUID of the filesystem, not of the
> device (or array) which stores the filesystem.
> 
> One of the problems with using 1.0 metadata (or 0.90) is that the first
> component device looks like it contains the same filesystem as the whole
> array.   I think this is what is causing your confusion.

Yes, I mixed up those two. Now it's all clear.

> This all depends on the details of the particular distro you are using.
> You don't, in general, need arrays to be listed in mdadm.conf.  A
> particular distro could require it though.
> 
> If you run
>    mdadm -Es
> 
> It will show a sample mdadm.conf which should contain /dev/md/3 - the
> new raid10, and /dev/md/4.
> You could add those lines to mdadm.conf, then
>    mdadm --assemble /dev/md/3
>    mdadm --assemble /dev/md/4
> and it should get assembled.  Then you should be able to mount the large
> filesystem successfully.

Well, I feel much better now that I do have the arrays listed in mdadm.conf
and present in /dev.

Thanks very much for your help, Neil!

Regards,
Veljko



end of thread

Thread overview: 28+ messages
2017-07-05 15:34 Linear device of two arrays Veljko
2017-07-05 16:42 ` Roman Mamedov
2017-07-05 18:07   ` Wols Lists
2017-07-07 11:07     ` Nix
2017-07-07 20:26     ` Veljko
2017-07-07 21:20       ` Andreas Klauer
2017-07-07 21:53         ` Roman Mamedov
2017-07-07 22:20           ` Andreas Klauer
2017-07-07 22:33           ` Andreas Klauer
2017-07-07 22:52       ` Stan Hoeppner
2017-07-08 10:26         ` Veljko
2017-07-08 21:24           ` Stan Hoeppner
2017-07-09 22:37             ` NeilBrown
2017-07-10 11:03               ` Veljko
2017-07-12 10:21                 ` Veljko
2017-07-14  2:03                   ` NeilBrown
2017-07-14  1:57                 ` NeilBrown
2017-07-14  2:05                   ` NeilBrown
2017-07-14 13:40                   ` Veljko
2017-07-15  0:12                     ` NeilBrown
2017-07-17 10:16                       ` Veljko
2017-07-18  8:58                         ` Veljko
2017-07-20 21:40                           ` Veljko
2017-07-20 22:00                         ` NeilBrown
2017-07-21  9:15                           ` Veljko
2017-07-21 11:37                             ` Veljko
2017-07-22 23:03                               ` NeilBrown
2017-07-23 10:05                                 ` Veljko
