From: Goswin von Brederlow <goswin-v-b@web.de>
To: Redeeman <redeeman@metanurb.dk>
Cc: John Robinson <john.robinson@anonymous.org.uk>,
	SandeepKsinha <sandeepksinha@gmail.com>,
	Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: RAID5 reconstruction ?
Date: Sat, 30 May 2009 20:55:59 +0200
Message-ID: <878wkezagw.fsf@frosties.localdomain>
In-Reply-To: <1243699735.5740.103.camel@localhost> (redeeman@metanurb.dk's message of "Sat, 30 May 2009 18:08:55 +0200")

Redeeman <redeeman@metanurb.dk> writes:

> On Sat, 2009-05-30 at 14:35 +0100, John Robinson wrote:
>> On 30/05/2009 06:44, SandeepKsinha wrote:
>> > Hi all,
>> > 
>> > Say I have a RAID 5 array built from five disks of 10GB each (50GB raw).
>> > 
>> > I have 5GB of data on it. When a disk fails and is replaced with a spare
>> > disk, will the reconstruction happen only for the 5GB of allocated disk
>> > blocks, or will it happen for the whole disk size?
>> 
>> The whole disc size, for now anyway; md does not currently note which 
>> blocks have been used by its client (the filesystem, LVM, whatever).
>> 
>> > Is it possible to make the reconstruction intelligent enough to optimize this?
>> 
>> This has been discussed in combination with supporting SSD drives' TRIM
>> function; it would mean md keeping track of used chunks, or possibly even
>> sectors, using a bitmap or something like that, but whether anyone's
>> working on it I don't know.
>
> I would say it should be possible to query the filesystem for that
> information. Obviously this will only work if the filesystem running on
> the array supports it, but it would seem like a nicer solution than
> keeping a bitmap.
>
>> 
>> Cheers,
>> 
>> John.
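
To make the used-chunk tracking discussed above a bit more concrete, here
is a rough user-space sketch of the idea (not md's actual code): one bit
per chunk, set the first time that chunk is written, so a later resync or
rebuild could skip every chunk whose bit is still clear. The 64KiB chunk
size is only an assumed example, and the filesystem-query approach would
ultimately feed the same kind of information into such a structure.

/*
 * Rough illustration only -- NOT md's actual data structures.  One bit
 * per chunk; a bit is set the first time the chunk is written, so a
 * resync/rebuild could skip chunks whose bit is still clear.  The 64KiB
 * chunk size is just an assumed example value.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE    (64 * 1024)   /* assumed chunk size */
#define BITS_PER_WORD 64

struct chunk_bitmap {
	uint64_t *words;
	uint64_t  nr_chunks;
};

static struct chunk_bitmap *bitmap_create(uint64_t dev_bytes)
{
	struct chunk_bitmap *bm = malloc(sizeof(*bm));
	uint64_t nwords;

	bm->nr_chunks = (dev_bytes + CHUNK_SIZE - 1) / CHUNK_SIZE;
	nwords = (bm->nr_chunks + BITS_PER_WORD - 1) / BITS_PER_WORD;
	bm->words = calloc(nwords, sizeof(uint64_t)); /* all chunks "unused" */
	return bm;
}

/* Called from the write path: remember that this chunk now holds data. */
static void bitmap_mark_used(struct chunk_bitmap *bm, uint64_t byte_offset)
{
	uint64_t chunk = byte_offset / CHUNK_SIZE;

	bm->words[chunk / BITS_PER_WORD] |= 1ULL << (chunk % BITS_PER_WORD);
}

static int bitmap_is_used(const struct chunk_bitmap *bm, uint64_t chunk)
{
	return (bm->words[chunk / BITS_PER_WORD] >> (chunk % BITS_PER_WORD)) & 1;
}

int main(void)
{
	struct chunk_bitmap *bm = bitmap_create(10ULL << 30); /* 10GB member */

	bitmap_mark_used(bm, 5ULL << 20);  /* a write 5MB into the device */
	printf("chunk 80 used: %d, chunk 81 used: %d\n",
	       bitmap_is_used(bm, 80), bitmap_is_used(bm, 81));
	free(bm->words);
	free(bm);
	return 0;
}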

And just when I hit send I thought of something else.

Instead of doing the initial sync when creating a RAID, the bitmap could
just mark all blocks as unused. That would make RAID creation much faster.
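
A minimal sketch of that, again purely as an illustration (assumed 64KiB
chunks, and a plain byte per chunk to keep it short): creating the array
just allocates an all-zero map, so there is nothing to sync, and a later
rebuild only walks the chunks a write has ever touched.

/*
 * Illustration only: "creating" the array just allocates an all-zero
 * map (no initial sync), and a rebuild walks only the chunks that were
 * ever written.  The chunk size and device sizes are assumed examples.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE (64 * 1024)

int main(void)
{
	uint64_t dev_bytes = 10ULL << 30;            /* 10GB member disk */
	uint64_t nr_chunks = dev_bytes / CHUNK_SIZE;
	/* Array creation: all entries zero, nothing needs an initial sync. */
	uint8_t *used = calloc(nr_chunks, 1);
	uint64_t c, rebuilt = 0;

	/* The filesystem writes 5GB of data; the write path marks chunks. */
	for (c = 0; c < (5ULL << 30) / CHUNK_SIZE; c++)
		used[c] = 1;

	/* A disk failed and was replaced: reconstruct only used chunks. */
	for (c = 0; c < nr_chunks; c++)
		if (used[c]) {
			/* ...recompute this chunk from the other disks... */
			rebuilt++;
		}

	printf("rebuilt %llu of %llu chunks\n",
	       (unsigned long long)rebuilt, (unsigned long long)nr_chunks);
	free(used);
	return 0;
}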

MfG
        Goswin

Thread overview: 22+ messages
2009-05-30  5:44 RAID5 reconstruction ? SandeepKsinha
2009-05-30 12:52 ` Sujit Karataparambil
2009-05-30 13:28   ` SandeepKsinha
2009-05-30 13:31     ` Sujit Karataparambil
2009-06-09  4:13   ` Nifty Fedora Mitch
2009-05-30 13:35 ` John Robinson
2009-05-30 14:06   ` Maxime Boissonneault
2009-05-30 15:46     ` John Robinson
2009-05-30 16:16       ` Maxime Boissonneault
2009-05-30 16:30         ` John Robinson
2009-05-30 16:08   ` Redeeman
2009-05-30 18:39     ` Bill Davidsen
2009-05-30 18:54     ` Goswin von Brederlow
2009-05-31  8:10       ` SandeepKsinha
2009-05-30 18:55     ` Goswin von Brederlow [this message]
2009-05-30 19:37       ` Redeeman
2009-05-31  8:02         ` SandeepKsinha
2009-05-31 11:54           ` Goswin von Brederlow
2009-05-31 12:11             ` John Robinson
2009-05-31 12:14             ` NeilBrown
2009-06-03  1:54               ` Greg Freemyer
2009-06-02 18:42       ` Bill Davidsen
