From: NeilBrown <neilb@suse.de>
To: Shaohua Li <shli@kernel.org>
Cc: Shaohua Li <shli@fusionio.com>,
	linux-raid@vger.kernel.org, axboe@kernel.dk
Subject: Re: [patch 6/7 v2] MD: raid5 trim support
Date: Mon, 13 Aug 2012 13:58:31 +1000
Message-ID: <20120813135831.284d721d@notabene.brown>
In-Reply-To: <20120813020454.GA447@kernel.org>

On Mon, 13 Aug 2012 10:04:54 +0800 Shaohua Li <shli@kernel.org> wrote:

> On Mon, Aug 13, 2012 at 11:50:51AM +1000, NeilBrown wrote:
> > On Fri, 10 Aug 2012 10:51:19 +0800 Shaohua Li <shli@fusionio.com> wrote:
> > 
> > > @@ -4094,6 +4159,19 @@ static void make_request(struct mddev *m
> > >  	bi->bi_next = NULL;
> > >  	bi->bi_phys_segments = 1;	/* over-loaded to count active stripes */
> > >  
> > > +	/* block layer doesn't do alignment correctly even when we set the correct alignment */
> > > +	if (unlikely(bi->bi_rw & REQ_DISCARD)) {
> > > +		int stripe_sectors = conf->chunk_sectors *
> > > +			(conf->raid_disks - conf->max_degraded);
> > 
> > This isn't right when an array is being reshaped.
> > I suspect that during a reshape we should only attempt DISCARD on the part of
> > the array which has already been reshaped.  For the other section we can
> > either fail the discard (is that a good idea?) or write zeros.
> 
> I added a check for reshape in the for-loop below; is that enough? If not, I'd
> rather just ignore discard requests during a reshape. We force discard_zero_data
> to be 0, so that should be ok.

Yes, you are right - that is sufficient.  I hadn't read it properly.

> 
> I'll fix the other two issues and repost the raid5 discard patches later.

thanks,
NeilBrown
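
For context, the core of the change under discussion is that make_request()
must align DISCARD requests to full stripes itself, because the block layer
does not guarantee stripe-aligned discard bios.  Below is a minimal userspace
sketch of that alignment, using the stripe_sectors formula from the quoted
patch; sector_t, struct conf_example and align_discard here are illustrative
stand-ins, not the md code itself:

#include <stdio.h>

typedef unsigned long long sector_t;

/* illustrative stand-in for the relevant fields of the raid5 conf */
struct conf_example {
	sector_t chunk_sectors;	/* sectors per chunk on one disk */
	int raid_disks;		/* total member disks */
	int max_degraded;	/* parity disks: 1 for raid5, 2 for raid6 */
};

/* Trim [*start, *end) so it covers only whole stripes. */
static void align_discard(const struct conf_example *conf,
			  sector_t *start, sector_t *end)
{
	/* data sectors in one full stripe, as in the quoted patch */
	sector_t stripe_sectors = conf->chunk_sectors *
		(conf->raid_disks - conf->max_degraded);

	/* round the start up to the next full-stripe boundary */
	*start = ((*start + stripe_sectors - 1) / stripe_sectors) *
		stripe_sectors;
	/* round the end down to a full-stripe boundary */
	*end = (*end / stripe_sectors) * stripe_sectors;
	/* if *start >= *end, no whole stripe is covered and there is
	 * nothing to discard */
}

int main(void)
{
	/* e.g. 64KiB chunks (128 sectors) on a 4-disk raid5 */
	struct conf_example conf = {
		.chunk_sectors = 128, .raid_disks = 4, .max_degraded = 1
	};
	sector_t start = 100, end = 1000;

	align_discard(&conf, &start, &end);
	/* prints "aligned discard: 384..768" */
	printf("aligned discard: %llu..%llu\n", start, end);
	return 0;
}

The reshape case raised above is handled separately: as Shaohua notes, the
per-stripe loop skips regions a reshape has not yet reached, which is safe
because the array does not advertise discard_zero_data.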

Thread overview: 19+ messages
2012-08-10  2:51 [patch 0/7 v2] MD linear/0/1/10/5 TRIM support Shaohua Li
2012-08-10  2:51 ` [patch 1/7 v2] block: makes bio_split support bio without data Shaohua Li
2012-08-10  2:51 ` [patch 2/7 v2] md: linear supports TRIM Shaohua Li
2012-08-10  2:51 ` [patch 3/7 v2] md: raid 0 " Shaohua Li
2012-08-10  2:51 ` [patch 4/7 v2] md: raid 1 " Shaohua Li
2012-08-10  2:51 ` [patch 5/7 v2] md: raid 10 " Shaohua Li
2012-08-10  2:51 ` [patch 6/7 v2] MD: raid5 trim support Shaohua Li
2012-08-13  1:50   ` NeilBrown
2012-08-13  2:04     ` Shaohua Li
2012-08-13  3:58       ` NeilBrown [this message]
2012-08-13  5:43         ` Shaohua Li
2012-09-11  4:10           ` NeilBrown
2012-09-12  4:09             ` Shaohua Li
2012-09-18  4:52               ` NeilBrown
2012-08-10  2:51 ` [patch 7/7 v2] MD: raid5 avoid unnecessary zero page for trim Shaohua Li
2012-08-13  1:51 ` [patch 0/7 v2] MD linear/0/1/10/5 TRIM support NeilBrown
2012-08-29 18:58 ` Holger Kiehl
2012-08-29 20:19   ` Martin K. Petersen
2012-08-30  0:45   ` Shaohua Li
