linux-fsdevel.vger.kernel.org archive mirror
From: Tejun Heo <tj@kernel.org>
To: jaxboe@fusionio.com, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-ide@vger.kernel.org, linux-raid@vger.kernel.org,
	hch@lst.de, James.
Cc: Tejun Heo <tj@kernel.org>, Christoph Hellwig <hch@infradead.org>,
	Nick Piggin <npiggin@kernel.dk>,
	Jeremy Fitzhardinge <jeremy@xensource.com>,
	Chris Wright <chrisw@sous-sol.org>
Subject: [PATCH 03/30] block: kill QUEUE_ORDERED_BY_TAG
Date: Wed, 25 Aug 2010 17:47:20 +0200	[thread overview]
Message-ID: <1282751267-3530-4-git-send-email-tj@kernel.org> (raw)
In-Reply-To: <1282751267-3530-1-git-send-email-tj@kernel.org>

Nobody is making meaningful use of ORDERED_BY_TAG now, and queue
draining for barrier requests will be removed soon, which will render
the advantage of tag ordering moot.  Kill ORDERED_BY_TAG.  The
following users are affected:

* brd: converted to ORDERED_DRAIN.
* virtio_blk: ORDERED_TAG path was already marked deprecated.  Removed.
* xen-blkfront: ORDERED_TAG case dropped.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 block/blk-barrier.c          |   35 +++++++----------------------------
 drivers/block/brd.c          |    2 +-
 drivers/block/virtio_blk.c   |    9 ---------
 drivers/block/xen-blkfront.c |    8 +++-----
 drivers/scsi/sd.c            |    4 +---
 include/linux/blkdev.h       |   17 +----------------
 6 files changed, 13 insertions(+), 62 deletions(-)

diff --git a/block/blk-barrier.c b/block/blk-barrier.c
index f0faefc..c807e9c 100644
--- a/block/blk-barrier.c
+++ b/block/blk-barrier.c
@@ -26,10 +26,7 @@ int blk_queue_ordered(struct request_queue *q, unsigned ordered)
 	if (ordered != QUEUE_ORDERED_NONE &&
 	    ordered != QUEUE_ORDERED_DRAIN &&
 	    ordered != QUEUE_ORDERED_DRAIN_FLUSH &&
-	    ordered != QUEUE_ORDERED_DRAIN_FUA &&
-	    ordered != QUEUE_ORDERED_TAG &&
-	    ordered != QUEUE_ORDERED_TAG_FLUSH &&
-	    ordered != QUEUE_ORDERED_TAG_FUA) {
+	    ordered != QUEUE_ORDERED_DRAIN_FUA) {
 		printk(KERN_ERR "blk_queue_ordered: bad value %d\n", ordered);
 		return -EINVAL;
 	}
@@ -155,21 +152,9 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp)
 	 * For an empty barrier, there's no actual BAR request, which
 	 * in turn makes POSTFLUSH unnecessary.  Mask them off.
 	 */
-	if (!blk_rq_sectors(rq)) {
+	if (!blk_rq_sectors(rq))
 		q->ordered &= ~(QUEUE_ORDERED_DO_BAR |
 				QUEUE_ORDERED_DO_POSTFLUSH);
-		/*
-		 * Empty barrier on a write-through device w/ ordered
-		 * tag has no command to issue and without any command
-		 * to issue, ordering by tag can't be used.  Drain
-		 * instead.
-		 */
-		if ((q->ordered & QUEUE_ORDERED_BY_TAG) &&
-		    !(q->ordered & QUEUE_ORDERED_DO_PREFLUSH)) {
-			q->ordered &= ~QUEUE_ORDERED_BY_TAG;
-			q->ordered |= QUEUE_ORDERED_BY_DRAIN;
-		}
-	}
 
 	/* stash away the original request */
 	blk_dequeue_request(rq);
@@ -210,7 +195,7 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp)
 	} else
 		skip |= QUEUE_ORDSEQ_PREFLUSH;
 
-	if ((q->ordered & QUEUE_ORDERED_BY_DRAIN) && queue_in_flight(q))
+	if (queue_in_flight(q))
 		rq = NULL;
 	else
 		skip |= QUEUE_ORDSEQ_DRAIN;
@@ -257,16 +242,10 @@ bool blk_do_ordered(struct request_queue *q, struct request **rqp)
 	    rq != &q->pre_flush_rq && rq != &q->post_flush_rq)
 		return true;
 
-	if (q->ordered & QUEUE_ORDERED_BY_TAG) {
-		/* Ordered by tag.  Blocking the next barrier is enough. */
-		if (is_barrier && rq != &q->bar_rq)
-			*rqp = NULL;
-	} else {
-		/* Ordered by draining.  Wait for turn. */
-		WARN_ON(blk_ordered_req_seq(rq) < blk_ordered_cur_seq(q));
-		if (blk_ordered_req_seq(rq) > blk_ordered_cur_seq(q))
-			*rqp = NULL;
-	}
+	/* Ordered by draining.  Wait for turn. */
+	WARN_ON(blk_ordered_req_seq(rq) < blk_ordered_cur_seq(q));
+	if (blk_ordered_req_seq(rq) > blk_ordered_cur_seq(q))
+		*rqp = NULL;
 
 	return true;
 }
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 1c7f637..47a4127 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -482,7 +482,7 @@ static struct brd_device *brd_alloc(int i)
 	if (!brd->brd_queue)
 		goto out_free_dev;
 	blk_queue_make_request(brd->brd_queue, brd_make_request);
-	blk_queue_ordered(brd->brd_queue, QUEUE_ORDERED_TAG);
+	blk_queue_ordered(brd->brd_queue, QUEUE_ORDERED_DRAIN);
 	blk_queue_max_hw_sectors(brd->brd_queue, 1024);
 	blk_queue_bounce_limit(brd->brd_queue, BLK_BOUNCE_ANY);
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 2aafafc..7965280 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -395,15 +395,6 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
 		 * to implement write barrier support.
 		 */
 		blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH);
-	} else if (virtio_has_feature(vdev, VIRTIO_BLK_F_BARRIER)) {
-		/*
-		 * If the BARRIER feature is supported the host expects us
-		 * to order request by tags.  This implies there is not
-		 * volatile write cache on the host, and that the host
-		 * never re-orders outstanding I/O.  This feature is not
-		 * useful for real life scenarious and deprecated.
-		 */
-		blk_queue_ordered(q, QUEUE_ORDERED_TAG);
 	} else {
 		/*
 		 * If the FLUSH feature is not supported we must assume that
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index ab735a6..8341862 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -424,8 +424,7 @@ static int xlvbd_barrier(struct blkfront_info *info)
 	const char *barrier;
 
 	switch (info->feature_barrier) {
-	case QUEUE_ORDERED_DRAIN:	barrier = "enabled (drain)"; break;
-	case QUEUE_ORDERED_TAG:		barrier = "enabled (tag)"; break;
+	case QUEUE_ORDERED_DRAIN:	barrier = "enabled"; break;
 	case QUEUE_ORDERED_NONE:	barrier = "disabled"; break;
 	default:			return -EINVAL;
 	}
@@ -1078,8 +1077,7 @@ static void blkfront_connect(struct blkfront_info *info)
 	 * we're dealing with a very old backend which writes
 	 * synchronously; draining will do what needs to get done.
 	 *
-	 * If there are barriers, then we can do full queued writes
-	 * with tagged barriers.
+	 * If there are barriers, then we use flush.
 	 *
 	 * If barriers are not supported, then there's no much we can
 	 * do, so just set ordering to NONE.
@@ -1087,7 +1085,7 @@ static void blkfront_connect(struct blkfront_info *info)
 	if (err)
 		info->feature_barrier = QUEUE_ORDERED_DRAIN;
 	else if (barrier)
-		info->feature_barrier = QUEUE_ORDERED_TAG;
+		info->feature_barrier = QUEUE_ORDERED_DRAIN_FLUSH;
 	else
 		info->feature_barrier = QUEUE_ORDERED_NONE;
 
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 2714bec..cdfc51a 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -2151,9 +2151,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 
 	/*
 	 * We now have all cache related info, determine how we deal
-	 * with ordered requests.  Note that as the current SCSI
-	 * dispatch function can alter request order, we cannot use
-	 * QUEUE_ORDERED_TAG_* even when ordered tag is supported.
+	 * with ordered requests.
 	 */
 	if (sdkp->WCE)
 		ordered = sdkp->DPOFUA
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 015375c..7077bc0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -470,12 +470,7 @@ enum {
 	 * DRAIN	: ordering by draining is enough
 	 * DRAIN_FLUSH	: ordering by draining w/ pre and post flushes
 	 * DRAIN_FUA	: ordering by draining w/ pre flush and FUA write
-	 * TAG		: ordering by tag is enough
-	 * TAG_FLUSH	: ordering by tag w/ pre and post flushes
-	 * TAG_FUA	: ordering by tag w/ pre flush and FUA write
 	 */
-	QUEUE_ORDERED_BY_DRAIN		= 0x01,
-	QUEUE_ORDERED_BY_TAG		= 0x02,
 	QUEUE_ORDERED_DO_PREFLUSH	= 0x10,
 	QUEUE_ORDERED_DO_BAR		= 0x20,
 	QUEUE_ORDERED_DO_POSTFLUSH	= 0x40,
@@ -483,8 +478,7 @@ enum {
 
 	QUEUE_ORDERED_NONE		= 0x00,
 
-	QUEUE_ORDERED_DRAIN		= QUEUE_ORDERED_BY_DRAIN |
-					  QUEUE_ORDERED_DO_BAR,
+	QUEUE_ORDERED_DRAIN		= QUEUE_ORDERED_DO_BAR,
 	QUEUE_ORDERED_DRAIN_FLUSH	= QUEUE_ORDERED_DRAIN |
 					  QUEUE_ORDERED_DO_PREFLUSH |
 					  QUEUE_ORDERED_DO_POSTFLUSH,
@@ -492,15 +486,6 @@ enum {
 					  QUEUE_ORDERED_DO_PREFLUSH |
 					  QUEUE_ORDERED_DO_FUA,
 
-	QUEUE_ORDERED_TAG		= QUEUE_ORDERED_BY_TAG |
-					  QUEUE_ORDERED_DO_BAR,
-	QUEUE_ORDERED_TAG_FLUSH		= QUEUE_ORDERED_TAG |
-					  QUEUE_ORDERED_DO_PREFLUSH |
-					  QUEUE_ORDERED_DO_POSTFLUSH,
-	QUEUE_ORDERED_TAG_FUA		= QUEUE_ORDERED_TAG |
-					  QUEUE_ORDERED_DO_PREFLUSH |
-					  QUEUE_ORDERED_DO_FUA,
-
 	/*
 	 * Ordered operation sequence
 	 */
-- 
1.7.1

