* raid0 not growable?

From: Kristleifur Daðason @ 2009-12-23 13:52 UTC
To: linux-raid

Hi,

I'm running a raid0 array over a couple of raid6 arrays. I had planned
on growing the arrays in time, and now turns out to be the time.

To my chagrin, searches indicate that raid0 isn't growable. Can anyone
confirm this before I wipe and reconfigure?

Thanks!
(Merry Christmas too, if that's your thing!)

-- Kristleifur
[parent not found: <20091224094557.1ae96a0d@notabene>]
* Re: raid0 not growable?

From: Kristleifur Daðason @ 2009-12-23 23:28 UTC
To: Neil Brown
Cc: linux-raid

On Wed, Dec 23, 2009 at 10:45 PM, Neil Brown <neilb@suse.de> wrote:
> On Wed, 23 Dec 2009 13:52:54 +0000
> Kristleifur Daðason <kristleifur@gmail.com> wrote:
>
>> Hi,
>>
>> I'm running a raid0 array over a couple of raid6 arrays. I had planned
>> on growing the arrays in time, and now turns out to be the time.
>>
>> To my chagrin, searches indicate that raid0 isn't growable. Can anyone
>> confirm this before I wipe and reconfigure?
>
> That is correct, you cannot currently grow md/raid0.
>
> If the two raid6 arrays are exactly the same size, then you
> could grow the two raid6 arrays, create a new raid0 over them
> with the same block size and all your data will still be there.
> But if they are not the same size, that won't work.
>
> NeilBrown

Many thanks for the reply. (Re-cc'd to the linux-raid list. Hope
that's OK.)

1.
The raid6 arrays are exactly alike. Do I just create a new raid0 with
the right size, device count and chunk-size parameters? I trust your
advice, but I am also certain of my own foolishness. I can't fully
picture what happens to the data on the array -- specifically, I'm
wondering whether to use --assume-clean or not. The documentation says
not to. I am guessing that a newly-created raid0 doesn't do any
syncing/resyncing anyway - it just sets up the array structure and
metadata, and I am left to my own devices to fill it with data.

The current chunk size is 256 and the metadata version is 1.1. So it's
just "mdadm --create /dev/md_bigraid0 --level=0 --raid-devices=2
--metadata=1.1 --chunk=256 /dev/md_raid6a /dev/md_raid6b", right?

2.
I have a JFS filesystem on the big raid0. Once I have the bigger raid0
built, I assume that I would first do a read-only fsck.jfs, which will
succeed if I did everything correctly. Then I do a remount with the
JFS "resize" option to finally grow the JFS filesystem.

--

Sincere thanks. I hope I shall be able to contribute something
meaningful to mdadm when the company is richer and my time is freer :)
In the meantime, is there any preferred way of donating to mdadm or
sponsoring it?

-- Kristleifur
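For reference, the re-creation step under discussion, sketched with the
device names, chunk size and metadata version from this thread (a hedged
sketch, not a general recipe; it assumes both raid6 members have already
been grown to identical sizes):

    # Stop the old raid0, then re-create it over the grown members with the
    # same chunk size, member order and metadata version, so the on-disk
    # data layout is unchanged. mdadm will warn that the members appear to
    # belong to an existing array; that is expected here.
    mdadm --stop /dev/md_bigraid0
    mdadm --create /dev/md_bigraid0 --level=0 --raid-devices=2 \
          --metadata=1.1 --chunk=256 /dev/md_raid6a /dev/md_raid6b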
* Re: raid0 not growable?

From: Neil Brown @ 2009-12-23 23:54 UTC
To: Kristleifur Daðason
Cc: linux-raid

On Wed, 23 Dec 2009 23:28:37 +0000
Kristleifur Daðason <kristleifur@gmail.com> wrote:

> Many thanks for the reply. (Re-cc'd to the linux-raid list. Hope
> that's OK.)

Certainly. I didn't mean to drop linux-raid - I must have clicked the
wrong button.

> 1.
> The raid6 arrays are exactly alike. Do I just create a new raid0 with
> the right size, device count and chunk-size parameters? [...]
>
> The current chunk size is 256 and the metadata version is 1.1. So it's
> just "mdadm --create /dev/md_bigraid0 --level=0 --raid-devices=2
> --metadata=1.1 --chunk=256 /dev/md_raid6a /dev/md_raid6b", right?

Yes... there is a possible complication though.
With 1.1 metadata mdadm reserves some space between the end of the
metadata and the start of the data for a bitmap - even for raid0,
which cannot have a bitmap. The amount of space reserved is affected
by the size of the devices, so it is possible that the "data offset"
will be different.
You should check the data offset before and after. If it is different,
we will have to hack mdadm to allow you to set the data offset
manually.

> 2.
> I have a JFS filesystem on the big raid0. Once I have the bigger raid0
> built, I assume that I would first do a read-only fsck.jfs, which will
> succeed if I did everything correctly. Then I do a remount with the
> JFS "resize" option to finally grow the JFS filesystem.

Sounds right (though I've never used jfs).

> Sincere thanks. I hope I shall be able to contribute something
> meaningful to mdadm when the company is richer and my time is freer :)
> In the meantime, is there any preferred way of donating to mdadm or
> sponsoring it?

Just use it and report any issues you have, as you have done.

NeilBrown
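One way to perform that check, as a sketch (the device names are this
thread's; the "Data Offset" field is what mdadm's --examine prints for
1.x superblocks):

    # Record each member's data offset before stopping the old raid0...
    mdadm --examine /dev/md_raid6a | grep -i 'data offset'
    mdadm --examine /dev/md_raid6b | grep -i 'data offset'
    # ...then run the same commands after the new --create and compare.
    # The values must match for the old data to line up under the new array.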
* Re: raid0 not growable?

From: Kristleifur Daðason @ 2009-12-30 15:25 UTC
To: Neil Brown
Cc: linux-raid

On Wed, Dec 23, 2009 at 11:54 PM, Neil Brown <neilb@suse.de> wrote:
> Yes... there is a possible complication though.
> With 1.1 metadata mdadm reserves some space between the end of the
> metadata and the start of the data for a bitmap - even for raid0,
> which cannot have a bitmap. The amount of space reserved is affected
> by the size of the devices, so it is possible that the "data offset"
> will be different.
> You should check the data offset before and after. If it is different,
> we will have to hack mdadm to allow you to set the data offset
> manually.

Thank you for the replies, Neil and everybody. As we rise from
Christmas, bloated to satisfaction, we are in spirits to grow the RAID.

Following your information about the bitmap size and the data offset,
I had a quick dig through the mdadm 3.1.1 source [1]. In
"super1.c:static unsigned long choose_bm_space(unsigned long devsize)",
a comment says: if the device is bigger than 8Gig, save 64k for bitmap
usage; if bigger than 200Gig, save 128k.

In this case, I have two raid6 devices under the raid0. I have grown
the raid6 devices from ~3TB to ~6TB each. Unless I am mistaken, the
devices are far bigger than the threshold for the 128k bitmap
reservation, both before and after the growth. Hence, I believe I am
guaranteed an identical bitmap reservation and hence an identical data
offset.

And in theory, this case is closed. Thank you, all.

-- Kristleifur

[1] Turns out that Textmate on OS X is a very nice tool for studying
open source code. The times I have dug through source before, I've
usually gotten lost in the trees. Textmate was refreshing - it felt
like the source was levitating in my hand, elegantly twisting and
turning and revealing itself. Textmate is just an editor, of course,
but it's comfortable to the point of being magical.
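A quick way to confirm that reasoning empirically, sketched with this
thread's device names: both members should report sizes far above
200 GiB before and after the grow, landing in the same 128k reservation
bracket either way.

    # Print each member's size in bytes; anything over ~215 GB (200 GiB)
    # falls into the largest bitmap-space bracket in super1.c:
    blockdev --getsize64 /dev/md_raid6a /dev/md_raid6b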
* Re: raid0 not growable?

From: Kristleifur Daðason @ 2009-12-30 17:55 UTC
To: Neil Brown
Cc: linux-raid

On Wed, Dec 30, 2009 at 3:25 PM, Kristleifur Daðason
<kristleifur@gmail.com> wrote:
> On Wed, Dec 23, 2009 at 11:54 PM, Neil Brown <neilb@suse.de> wrote:
>> Yes... there is a possible complication though. [...]
>> You should check the data offset before and after. If it is different,
>> we will have to hack mdadm to allow you to set the data offset
>> manually.
>
> ... I believe I am guaranteed an identical bitmap reservation and
> hence an identical data offset.
>
> And in theory, this case is closed. Thank you, all.

Yep, it worked great. We built the new raid0 array over the old one and
did a "fsck.jfs -n" dry run over the filesystem. Still there, clean as
a whistle. Next was a quick "mount -o remount,resize /tank", which grew
the JFS filesystem in a second or two. Very quick and painless.

It was that easy. For my purposes, I consider raid0 arrays growable,
even though raid0 may not be officially growable.

-- Kristleifur
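For reference, the verification-and-resize sequence just described
(device name and mount point as used in this thread; fsck.jfs's -n flag
reports problems without repairing anything):

    # Dry-run filesystem check on the re-created raid0 -- this should come
    # back clean if chunk size, member order and data offset all matched:
    fsck.jfs -n /dev/md_bigraid0

    # JFS grows online: remounting with "resize" expands the filesystem
    # to fill the now-larger block device:
    mount -o remount,resize /tank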
* Re: raid0 not growable?

From: Billy Crook @ 2009-12-30 6:26 UTC
To: Kristleifur Daðason
Cc: linux-raid

On Wed, Dec 23, 2009 at 07:52, Kristleifur Daðason
<kristleifur@gmail.com> wrote:
> Hi,
>
> I'm running a raid0 array over a couple of raid6 arrays. I had planned
> on growing the arrays in time, and now turns out to be the time.
>
> To my chagrin, searches indicate that raid0 isn't growable. Can anyone
> confirm this before I wipe and reconfigure?

Not that this would solve your particular problem, but for the future,
or if you do wipe and reconfigure, I would recommend: when you don't
want your individual raid6 arrays to be any larger (because you don't
want to further increase the probability of failure), don't use raid0
on top of multiple raid6 arrays. Use LVM. Make each raid6 array a PV,
add them all to one VG, then make one LV, specifying striping. Remember
that the LVM extent size is NOT the size of LVM's striping.

  lvcreate -i3 -I4 -l 50%FREE -n lv_bigfast vg_arrays /dev/md3 /dev/md4 /dev/md5

will create a logical volume named lv_bigfast using 50% of all free
space in the volume group vg_arrays, split evenly between three
contributing physical volumes md[345] (each, say, a raid6 array of
eight 2TB disks), and striped in 4kb slices. Adjust as you see fit.

Over time, you can add more md# arrays, and grow the LV and its
filesystem while it's in use. You can even (if you have enough free
space) shrink the PVs, pop a disk out of each, add some more component
disks, and narrow your disk failure domains (the number of disks in
each underlying raid6 array) on the fly, without disrupting service.
It will be far from instantaneous, but it beats disrupting service.
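Sketched end to end, the setup recommended above would look roughly
like this (the names and stripe parameters are the illustrative ones
from the example, not requirements, and the growth step is one possible
approach rather than the only one):

    # Make each raid6 array a physical volume and pool them in one group:
    pvcreate /dev/md3 /dev/md4 /dev/md5
    vgcreate vg_arrays /dev/md3 /dev/md4 /dev/md5

    # One striped LV across the three PVs: 3 stripes, 4 KiB stripe size,
    # half of the group's free space:
    lvcreate -i3 -I4 -l 50%FREE -n lv_bigfast vg_arrays /dev/md3 /dev/md4 /dev/md5

    # Later growth: bring in a new array as another PV and extend the LV.
    # Note that extending a 3-stripe LV needs free space on three PVs; with
    # a single new PV the added segment must use its own stripe count:
    pvcreate /dev/md6
    vgextend vg_arrays /dev/md6
    lvextend -i1 -l +100%FREE vg_arrays/lv_bigfast /dev/md6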
end of thread, other threads:[~2009-12-30 17:55 UTC | newest]

Thread overview: 6+ messages
2009-12-23 13:52 raid0 not growable? Kristleifur Daðason
     [not found] ` <20091224094557.1ae96a0d@notabene>
2009-12-23 23:28   ` Kristleifur Daðason
2009-12-23 23:54     ` Neil Brown
2009-12-30 15:25       ` Kristleifur Daðason
2009-12-30 17:55         ` Kristleifur Daðason
2009-12-30  6:26 ` Billy Crook