* RAID 5 build time optimization question and experiments.
@ 2009-12-11 14:55 Cat'Killer
2009-12-11 15:29 ` Robin Hill
0 siblings, 1 reply; 5+ messages in thread
From: Cat'Killer @ 2009-12-11 14:55 UTC (permalink / raw)
To: linux-raid
Hi All,
I currently need to build a RAID 5 array on ten large 1TB disks in
several different systems that will arrive over time.
The systems all run exactly the same Linux version and mdadm tools,
are identical hardware, and are populated with identical drives (same
capacity, manufacturer, model and batch).
Creating a RAID 5 array on each of these systems takes approximately
15 hours, which led me to the following procedure/experiment to create
them faster.
I found a way, using Doug Gilbert's great sg3_utils, to zero all these
disks very efficiently with sgp_dd, at near drive bandwidth, and
proceeded to zero all the disks fully in about 4 hours!
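For reference, the zeroing pass can be sketched like this (device
names are hypothetical; sgp_dd ships with sg3_utils and its thr=
option adds worker threads, which is what keeps each drive near its
streaming bandwidth):

```shell
# Sketch only: zero ten disks in parallel with sgp_dd from sg3_utils.
# /dev/sd[a-j] is a placeholder for whatever the real devices are.
for dev in /dev/sd[a-j]; do
    sgp_dd if=/dev/zero of="$dev" bs=512 bpt=128 thr=4 &
done
wait    # all ten zeroing passes run concurrently, then we wait
```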
Once done, I created a RAID 5 on the 10 disks, waited for the rebuild
to complete, stopped the array using mdadm, and dumped each of the
RAID's component superblocks to a file.
The command used for the RAID 5 creation was the following:
mdadm --create -vvv --force --run --metadata=1.2 /dev/md/d0 --level=5
--size=35879936 --chunk=64 --name=0125465 -n10 --bitmap=internal
--bitmap-chunk=4096 --layout=ls /dev/sde2 /dev/sdj2 /dev/sdf2
/dev/sdg2 /dev/sdb2 /dev/sdc2 /dev/sdh2 /dev/sdi2 /dev/sdd2 /dev/sda2
The create worked fine, and I waited for the rebuild to complete
before stopping the array and dumping the SBs.
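The superblock dump could look something like the following (a sketch
with hypothetical file names; v1.2 metadata sits 4 KiB from the start
of each component, and grabbing the first 1 MiB should also capture
the internal bitmap that follows it):

```shell
# Sketch only: save the first 1 MiB of each component device
# (v1.2 superblock at offset 4 KiB, internal bitmap just after it)
# to a per-device image file named sb-<dev>.img.
for dev in sde2 sdj2 sdf2 sdg2 sdb2 sdc2 sdh2 sdi2 sdd2 sda2; do
    dd if=/dev/$dev of=sb-$dev.img bs=4096 count=256
done
```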
I then proceeded to write these same superblocks to 10 new similar
disks in a different system, and used the following command to
assemble the array:
bin/mdadm/mdadm --assemble -vvv --force /dev/md/d0 --run
--name=0125465 /dev/sde2 /dev/sdj2 /dev/sdf2 /dev/sdg2 /dev/sdb2
/dev/sdc2 /dev/sdh2 /dev/sdi2 /dev/sdd2 /dev/sda2
This brought my new array online instantly, with no rebuild required.
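For completeness, the superblock write-back that preceded the
assemble could be sketched as follows (image file names are
hypothetical, and this assumes the drives enumerate under the same
names on the new system):

```shell
# Sketch only: write the saved superblock images back onto the
# matching zeroed components of the new system before assembling.
for dev in sde2 sdj2 sdf2 sdg2 sdb2 sdc2 sdh2 sdi2 sdd2 sda2; do
    dd if=sb-$dev.img of=/dev/$dev bs=4096 count=256 conv=notrunc
done
```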
I figured this whole procedure would work because the disks are
completely zeroed, so the parity information on the drives should be
zero as well. Since zeroing all these drives took me 4 hours, I would
save nearly 10 hours on any subsequent array I need to build, just by
zeroing its components and dd-ing the SBs onto them.
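The parity-of-zeros reasoning can be checked directly: RAID 5 parity
is the byte-wise XOR of the data chunks in a stripe, and the XOR of
all-zero chunks is itself zero. A small illustration (chunk size
matching the 64K chunks of the array above, nine data chunks per
stripe on ten disks):

```python
from functools import reduce

CHUNK = 64 * 1024  # 64 KiB chunk size, as used in the create above
# On a 10-disk RAID 5, each stripe has 9 data chunks and 1 parity chunk.
data_chunks = [bytes(CHUNK) for _ in range(9)]  # all-zero data chunks

# RAID 5 parity = byte-wise XOR of the data chunks in the stripe.
parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_chunks)

assert parity == bytes(CHUNK)  # zeroed data -> zeroed parity
print("parity of an all-zero stripe is all zeros")
```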
Do you think this method has flaws? Should the data on the array be
safe? What about the bitmap: is it safe as long as the superblock has
been copied, or should I copy it over too?
Thanks in advance.
Ben.
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: RAID 5 build time optimization question and experiments.
2009-12-11 14:55 Cat'Killer
@ 2009-12-11 15:29 ` Robin Hill
2009-12-11 15:44 ` Joe Landman
0 siblings, 1 reply; 5+ messages in thread
From: Robin Hill @ 2009-12-11 15:29 UTC (permalink / raw)
To: linux-raid
On Fri Dec 11, 2009 at 02:55:31PM +0000, Cat'Killer wrote:
> I found a way using Doug Gilbert's great sg3utils to zero all these
> disks in a very efficient manner, using sgp_dd, at near drive
> bandwidth, and proceeded to zero all the disks fully in about 4 hours!
>
> Once done, I then created a RAID 5 on 10 disks, waited for the rebuild
> to complete, stopped the array using mdadm, and dumped each of the
> RAID's components superblocks to files.
>
<-snip->
> The create worked fine and I waited for the rebuild to be complete
> before stopping the array and dumping the SBs.
>
> I then proceeded to write these same superblocks to 10 new similar
> disks in a different system.
>
You could just do the create with --assume-clean, which should take very
little time at all. If the drives were zeroed initially then this will
give you valid parity data.
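In concrete terms, that would be the original create command with one
extra flag (a sketch; --assume-clean tells md the components are
already in sync, so no initial resync runs, which is only safe here
because every component was zeroed first):

```shell
# Sketch only: same parameters as the original create, plus
# --assume-clean to skip the initial resync. Valid parity is
# guaranteed only because the drives were zeroed beforehand.
mdadm --create -vvv --force --run --metadata=1.2 /dev/md/d0 --level=5 \
      --size=35879936 --chunk=64 --name=0125465 -n10 --bitmap=internal \
      --bitmap-chunk=4096 --layout=ls --assume-clean \
      /dev/sde2 /dev/sdj2 /dev/sdf2 /dev/sdg2 /dev/sdb2 \
      /dev/sdc2 /dev/sdh2 /dev/sdi2 /dev/sdd2 /dev/sda2
```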
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: RAID 5 build time optimization question and experiments.
2009-12-11 15:29 ` Robin Hill
@ 2009-12-11 15:44 ` Joe Landman
0 siblings, 0 replies; 5+ messages in thread
From: Joe Landman @ 2009-12-11 15:44 UTC (permalink / raw)
To: linux-raid
Robin Hill wrote:
> On Fri Dec 11, 2009 at 02:55:31PM +0000, Cat'Killer wrote:
>
>> I found a way using Doug Gilbert's great sg3utils to zero all these
>> disks in a very efficient manner, using sgp_dd, at near drive
>> bandwidth, and proceeded to zero all the disks fully in about 4 hours!
>>
>> Once done, I then created a RAID 5 on 10 disks, waited for the rebuild
>> to complete, stopped the array using mdadm, and dumped each of the
>> RAID's components superblocks to files.
>>
> <-snip->
>> The create worked fine and I waited for the rebuild to be complete
>> before stopping the array and dumping the SBs.
>>
>> I then proceeded to write these same superblocks to 10 new similar
>> disks in a different system.
>>
> You could just do the create with --assume-clean, which should take very
> little time at all. If the drives were zeroed initially then this will
> give you valid parity data.
>
> Cheers,
> Robin
>
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
* Re: RAID 5 build time optimization question and experiments.
[not found] <4B227932.80503@mpstor.com>
@ 2009-12-11 16:57 ` Cat'Killer
2009-12-11 22:35 ` Michael Evans
0 siblings, 1 reply; 5+ messages in thread
From: Cat'Killer @ 2009-12-11 16:57 UTC (permalink / raw)
To: linux-raid
> On Fri Dec 11, 2009 at 02:55:31PM +0000, Cat'Killer wrote:
>
>> I found a way using Doug Gilbert's great sg3utils to zero all these
>> disks in a very efficient manner, using sgp_dd, at near drive
>> bandwidth, and proceeded to zero all the disks fully in about 4 hours!
>>
>> Once done, I then created a RAID 5 on 10 disks, waited for the rebuild
>> to complete, stopped the array using mdadm, and dumped each of the
>> RAID's components superblocks to files.
>>
> <-snip->
>>
>> The create worked fine and I waited for the rebuild to be complete
>> before stopping the array and dumping the SBs.
>>
>> I then proceeded to write these same superblocks to 10 new similar
>> disks in a different system.
>>
> You could just do the create with --assume-clean, which should take very
> little time at all. If the drives were zeroed initially then this will
> give you valid parity data.
>
Thank you for this, that's exactly what I was looking for. I had
looked in the manpage beforehand, but I did not search for the right
terms, nor did I assume this would work for an array created on top of
zeroed drives.
Thanks again.
Ben.
> Cheers,
> Robin
>
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: RAID 5 build time optimization question and experiments.
2009-12-11 16:57 ` RAID 5 build time optimization question and experiments Cat'Killer
@ 2009-12-11 22:35 ` Michael Evans
0 siblings, 0 replies; 5+ messages in thread
From: Michael Evans @ 2009-12-11 22:35 UTC (permalink / raw)
Cc: linux-raid
On Fri, Dec 11, 2009 at 8:57 AM, Cat'Killer <catkiller@gmail.com> wrote:
>> On Fri Dec 11, 2009 at 02:55:31PM +0000, Cat'Killer wrote:
>>
>>> I found a way using Doug Gilbert's great sg3utils to zero all these
>>> disks in a very efficient manner, using sgp_dd, at near drive
>>> bandwidth, and proceeded to zero all the disks fully in about 4 hours!
>>>
>>> Once done, I then created a RAID 5 on 10 disks, waited for the rebuild
>>> to complete, stopped the array using mdadm, and dumped each of the
>>> RAID's components superblocks to files.
>>>
>> <-snip->
>>>
>>> The create worked fine and I waited for the rebuild to be complete
>>> before stopping the array and dumping the SBs.
>>>
>>> I then proceeded to write these same superblocks to 10 new similar
>>> disks in a different system.
>>>
>> You could just do the create with --assume-clean, which should take very
>> little time at all. If the drives were zeroed initially then this will
>> give you valid parity data.
>>
> Thank you for this, that's exactly what I was looking for. I had
> looked in the manpage beforehand but I did not search for the right
> terms nor assumed this would work in case of an array created ontop of
> zeroed drives.
>
> Thanks again.
>
> Ben.
>
>> Cheers,
>> Robin
>>
Yes, I had just come to a similar conclusion in a recent email thread,
and even provided a patch against the git copy of the manpage to
clarify the documentation.