From: Tejun Heo <tj@kernel.org>
To: Mike Snitzer <snitzer@redhat.com>
Cc: jaxboe@fusionio.com, linux-fsdevel@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-ide@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
	hch@lst.de, James.Bottomley@suse.de, tytso@mit.edu,
	chris.mason@oracle.com, swhiteho@redhat.com,
	konishi.ryusuke@lab.ntt.co.jp, dm-devel@redhat.com, vst@vlnb.net,
	jack@suse.cz, rwheeler@redhat.com, hare@suse.de, neilb@suse.de,
	rusty@rustcorp.com.au, mst@redhat.com,
	Mikulas Patocka <mpatocka@redhat.com>,
	Kiyoshi Ueda <k-ueda@ct.jp.nec.com>,
	"Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
Subject: Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
Date: Tue, 17 Aug 2010 18:51:30 +0200
Message-ID: <4C6ABE12.40705@kernel.org>
In-Reply-To: <20100817140734.GA30768@redhat.com>

Hello,

On 08/17/2010 04:07 PM, Mike Snitzer wrote:
>> With the patch applied, there's no second flush.  Those requests would
>> now be REQ_FLUSH + REQ_DISCARD.  The first can't be avoided anyway and
>> there won't be the second flush to begin with, so I don't think this
>> worsens anything.
> 
> Makes sense, but your patches still need to be refreshed against the
> latest (2.6.36-rc1) upstream code.  Numerous changes went in to DM
> recently.

Sure thing.  The block layer part isn't finalized yet, hence the RFC
tag.  Once the block layer part is settled, it should probably be
pulled into the dm/md and other trees, and the conversions should
happen there.

>> Yeap, I want you to be concerned. :-) This was the first time I looked
>> at the dm code and there are many different disjoint code paths and I
>> couldn't fully follow or test all of them, so it definitely needs a
>> careful review from someone who understands the whole thing.
> 
> You'll need Mikulas (bio-based) and NEC (request-based, Kiyoshi and
> Jun'ichi) to give it serious review.

Oh, you already cc'd them.  Great.  Hello, guys, the original thread
is

  http://thread.gmane.org/gmane.linux.raid/29100

> NOTE: NEC has already given some preliminary feedback to hch in the
> "[PATCH, RFC 2/2] dm: support REQ_FLUSH directly" thread:
> https://www.redhat.com/archives/dm-devel/2010-August/msg00026.html
> https://www.redhat.com/archives/dm-devel/2010-August/msg00033.html

Hmmm... I don't think either of those issues exists in this
incarnation of the conversion, although I'm fairly sure there will be
other issues.  :-)

>>     A related question: Is dm_wait_for_completion() used in
>>     process_flush() safe against starvation under continuous influx of
>>     other commands?
>
> As for your specific dm_wait_for_completion() concern -- I'll defer to
> Mikulas.  But I'll add: we haven't had any reported starvation issues
> with DM's existing barrier support.  DM uses a mempool for its clones,
> so it should naturally throttle (without starvation) when memory gets
> low.

I see, but wouldn't a single pending flush plus a steady stream of
writes which never saturates the mempool be able to stall
dm_wait_for_completion()?  Eh well, it's a separate issue, I guess.
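
To illustrate the scenario I have in mind (the identifiers below are
made up for illustration, not the actual dm code):

  /* flush side: wait until every in-flight clone has completed */
  while (atomic_read(&md->pending_io))    /* hypothetical counter */
          io_schedule();

  /* meanwhile the regular write path keeps running */
  atomic_inc(&md->pending_io);            /* each new write bumps it */
  submit_clone(md, bio);                  /* hypothetical submit path */

If new writes keep the counter from ever reaching zero, the flush
never makes progress even though the mempool is nowhere near
exhausted.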

>>   * Guarantee that REQ_FLUSH w/ data never reaches targets (this in
>>     part is to put it in alignment with request based dm).
> 
> bio-based DM already split the barrier out from the data (in
> process_barrier).  You've renamed process_barrier to process_flush and
> added the REQ_FLUSH logic like I'd expect.

Yeah, and I threw in a WARN_ON() there to make sure REQ_FLUSH + data
bios don't slip through for whatever reason.
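
Something along these lines (sketch from memory, exact placement in
process_flush() may differ):

  /* bio-based dm splits the payload from the flush up front, so a
   * REQ_FLUSH bio which still carries data down here is a bug */
  WARN_ON(bio_has_data(bio));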

>> * For request based dm:
>>
>>   * The sequencing is done by the block layer for the top level
>>     request_queue, so the only things request based dm needs to make
>>     sure is 1. handling empty REQ_FLUSH correctly (block layer will
>>     only send down empty REQ_FLUSHes) and 2. propagate REQ_FUA bit to
>>     member devices.
> 
> OK, so seems 1 is done, 2 is still TODO.  Looking at your tree it seems
> 2 would be as simple as using the following in

Oh, I was talking about the other direction: passing REQ_FUA in
bio->bi_rw down to the member request_queues.  Sometimes, while
constructing clone / split bios, the bit gets lost (e.g. md raid5).
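
For example, whenever a driver builds a fresh bio for a member device
instead of cloning the original, the bit has to be carried over
explicitly, roughly (illustrative variable names):

  /* propagate FUA from the original bio into the newly built one */
  new_bio->bi_rw |= orig_bio->bi_rw & REQ_FUA;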

> dm_init_request_based_queue (on the most current upstream dm.c):
> blk_queue_flush(q, REQ_FLUSH | REQ_FUA);
> (your current patch only sets REQ_FLUSH in alloc_dev).

Yeah, but for that direction, just adding REQ_FUA to blk_queue_flush()
should be enough.  I'll add it.
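
IOW, the end result would be just what you quoted above, advertising
both capabilities to the block layer:

  /* this queue handles empty REQ_FLUSHes and honors REQ_FUA; the
   * block layer does the flush/FUA sequencing above us */
  blk_queue_flush(q, REQ_FLUSH | REQ_FUA);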

Thanks.

-- 
tejun
