From: Goswin von Brederlow <goswin-v-b@web.de>
To: Tapani Tarvainen <raid@tapanitarvainen.fi>
Cc: Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: Full use of varying drive sizes?
Date: Wed, 23 Sep 2009 14:42:31 +0200
Message-ID: <87bpl1ygqw.fsf@frosties.localdomain>
In-Reply-To: <20090923101505.GA11165@hamsu.tarvainen.info> (Tapani Tarvainen's message of "Wed, 23 Sep 2009 13:15:05 +0300")
Tapani Tarvainen <raid@tapanitarvainen.fi> writes:
> On Tue, Sep 22, 2009 at 04:07:53PM +0300, Majed B. (majedb@gmail.com) wrote:
>
>> When I first put up a storage box, it was built out of 4x 500GB disks,
>> later on, I expanded to 1TB disks.
>>
>> What I did was partition the 1TB disks into 2x 500GB partitions, then
>> create 2 RAID arrays: Each array out of partitions:
>> md0: sda1, sdb1, sdc1, ...etc.
>> md1: sda2, sdb2, sdc2, ...etc.
>>
>> All of those below LVM.
>>
>> This worked for a while, but as more 1TB disks made their way into
>> the array, performance dropped because each disk had to serve reads
>> from two partitions on the same spindle. Even worse: when a disk
>> failed, both arrays were affected, and things only got nastier with
>> time.
>
> I'm not 100% sure I understand what you did, but for the record,
> I've got a box with four 1TB disks arranged roughly like this:
>
> md0: sda1, sdb1, sdc1, sde1
> md1: sda2, sdb2, sdc2, sde2
> md2: sda3, sdb3, sdc3, sde3
> md3: sda4, sdb4, sdc4, sde4
>
> with each md a PV under LVM, and it's been running problem-free
> for over a year now. (No claims about performance; I haven't
> made any usable measurements, but it's fast enough for what
> it does.)
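For reference, a layout like the one quoted above could be set up
roughly as follows. Device names are taken from the quote; the RAID
level, partition sizes, and volume-group name are my assumptions,
since the original post doesn't state them:

```shell
# One array per partition "slice" across all four disks
# (assuming RAID5 here; the quoted post doesn't say which level)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sde2
mdadm --create /dev/md2 --level=5 --raid-devices=4 \
    /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sde3
mdadm --create /dev/md3 --level=5 --raid-devices=4 \
    /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sde4

# Each md device becomes an LVM physical volume, pooled into one
# volume group (vg0 is a hypothetical name)
pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3
vgcreate vg0 /dev/md0 /dev/md1 /dev/md2 /dev/md3

# Logical volumes can then span all four arrays
lvcreate -L 500G -n data vg0
```

Because each md covers only a quarter of every disk, a fault that hits
one array leaves the other three clean, which is what made the
recovery described below so much faster.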
>
> When it was new I had strange problems with one disk dropping out of
> the arrays every few days. The cause was traced to a faulty SATA
> controller (replacing it fixed the problem), but the process revealed
> an extra advantage of the partitioning scheme: the lost disk could be
> added back after reboot and the array rebuilt, but the fault had
> appeared in only one md at a time, so recovery was four times faster
> than if the disks had had only one partition.
In such a case a write-intent bitmap will bring the resync time down
to minutes.
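A minimal sketch of what that looks like with mdadm (the array and
member names are hypothetical):

```shell
# Add an internal write-intent bitmap to an existing array.
# md then tracks which chunks are dirty, so re-adding a member
# that briefly dropped out resyncs only those chunks instead of
# rebuilding the whole array.
mdadm --grow --bitmap=internal /dev/md0

# Verify the bitmap is active
mdadm --detail /dev/md0 | grep -i bitmap

# Re-add the disk that dropped out; with the bitmap present this
# typically takes minutes rather than hours
mdadm /dev/md0 --re-add /dev/sda1
```

The bitmap costs a small write-performance overhead, which is why it
is not enabled by default.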
Regards,
Goswin
Thread overview: 22+ messages
2009-09-22 11:24 Full use of varying drive sizes? Jon Hardcastle
2009-09-22 11:52 ` Kristleifur Daðason
2009-09-22 12:58 ` John Robinson
2009-09-22 13:07 ` Majed B.
2009-09-22 15:38 ` Jon Hardcastle
2009-09-22 15:47 ` Majed B.
2009-09-22 15:48 ` Ryan Wagoner
2009-09-22 16:04 ` Robin Hill
2009-09-23 8:20 ` John Robinson
2009-09-23 10:15 ` Tapani Tarvainen
2009-09-23 12:42 ` Goswin von Brederlow [this message]
2009-09-22 13:05 ` Tapani Tarvainen
2009-09-23 10:07 ` Goswin von Brederlow
2009-09-23 14:57 ` Jon Hardcastle
2009-09-23 20:28 ` Full use of varying drive sizes?---maybe a new raid mode is the answer? Konstantinos Skarlatos
2009-09-23 21:29 ` Chris Green
2009-09-24 17:23 ` John Robinson
2009-09-25 6:09 ` Neil Brown
2009-09-27 12:26 ` Konstantinos Skarlatos
2009-09-28 10:53 ` Goswin von Brederlow
2009-09-28 14:10 ` Konstantinos Skarlatos
2009-10-05 9:06 ` Goswin von Brederlow