From: Jens Axboe <axboe@kernel.dk>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, hch@lst.de
Subject: Re: [PATCH 2/2] xfs: add 'discard_sync' mount flag
Date: Tue, 1 May 2018 09:23:13 -0600	[thread overview]
Message-ID: <bacc9dbb-c38b-9258-cd95-7582c27a97f7@kernel.dk> (raw)
In-Reply-To: <20180430232319.GV23861@dastard>

On 4/30/18 5:23 PM, Dave Chinner wrote:
> On Mon, Apr 30, 2018 at 05:00:14PM -0600, Jens Axboe wrote:
>> On 4/30/18 4:40 PM, Jens Axboe wrote:
>>> On 4/30/18 4:28 PM, Dave Chinner wrote:
>>>> Yes, it does, but so would having the block layer to throttle device
>>>> discard requests in flight to a queue depth of 1. And then we don't
>>>> have to change XFS at all.
>>>
>>> I'm perfectly fine with making that change by default, and much easier
>>> for me since I don't have to patch file systems.
>>
>> Totally untested, but this should do the trick. It ensures we have
>> a QD of 1 (per caller), which should be sufficient.
>>
>> If people tune down the discard size, then you'll be blocking waiting
>> for discards on issue.
>>
>> diff --git a/block/blk-lib.c b/block/blk-lib.c
>> index a676084d4740..0bf9befcc863 100644
>> --- a/block/blk-lib.c
>> +++ b/block/blk-lib.c
>> @@ -11,16 +11,19 @@
>>  #include "blk.h"
>>  
>>  static struct bio *next_bio(struct bio *bio, unsigned int nr_pages,
>> -		gfp_t gfp)
>> +			    gfp_t gfp)
>>  {
>> -	struct bio *new = bio_alloc(gfp, nr_pages);
>> -
>> +	/*
>> +	 * Devices suck at discard, so if we have to break up the bio
>> +	 * size due to the max discard size setting, wait for the
>> +	 * previous one to finish first.
>> +	 */
>>  	if (bio) {
>> -		bio_chain(bio, new);
>> -		submit_bio(bio);
>> +		submit_bio_wait(bio);
>> +		bio_put(bio);
>>  	}
> 
> This only addresses the case where __blkdev_issue_discard() breaks
> up a single large discard, right? It seems like a brute force
> solution, too, because it will do so even when the underlying device
> is idle and there's no need to throttle.

Right, the above only handles breaking up (and serializing) a single large
discard; that's the per-caller part.

> Shouldn't the throttling logic at least look at device congestion?
> i.e. if the device is not backlogged, then we should be able to
> issue the discard without problems. 
> 
> I ask this because this only addresses throttling the "discard large
> extent" case when the discard limit is set low. i.e. your exact
> problem case. We know that XFS can issue large numbers of
> discontiguous async discards in a single batch - this patch does not
> address that case and so it will still cause starvation problems.
> 
> If we look at device congestion in determining how to throttle/back
> off during discard issuing, then it doesn't matter what
> max_discard_sectors is set to - it will throttle in all situations
> that cause device overloads and starvations....

How about the below? It integrates discard throttling with the writeback
throttling, treating discards like background writes. Totally untested.
The benefit of this approach is that it ties into the existing framework,
and it's managed per-device.

The blk-lib change, which ensures we break up discards according to the
user-defined max discard size, logically belongs in a separate patch and
will be split out.


diff --git a/block/blk-lib.c b/block/blk-lib.c
index a676084d4740..7417d617091b 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -62,10 +62,11 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		unsigned int req_sects;
 		sector_t end_sect, tmp;
 
-		/* Make sure bi_size doesn't overflow */
-		req_sects = min_t(sector_t, nr_sects, UINT_MAX >> 9);
+		/* Issue in chunks of the user defined max discard setting */
+		req_sects = min_t(sector_t, nr_sects,
+					q->limits.max_discard_sectors);
 
-		/**
+		/*
 		 * If splitting a request, and the next starting sector would be
 		 * misaligned, stop the discard at the previous aligned sector.
 		 */
diff --git a/block/blk-stat.h b/block/blk-stat.h
index 2dd36347252a..c22049a8125e 100644
--- a/block/blk-stat.h
+++ b/block/blk-stat.h
@@ -10,11 +10,11 @@
 
 /*
  * from upper:
- * 3 bits: reserved for other usage
+ * 4 bits: reserved for other usage
  * 12 bits: size
- * 49 bits: time
+ * 48 bits: time
  */
-#define BLK_STAT_RES_BITS	3
+#define BLK_STAT_RES_BITS	4
 #define BLK_STAT_SIZE_BITS	12
 #define BLK_STAT_RES_SHIFT	(64 - BLK_STAT_RES_BITS)
 #define BLK_STAT_SIZE_SHIFT	(BLK_STAT_RES_SHIFT - BLK_STAT_SIZE_BITS)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index f92fc84b5e2c..ba0c2825d382 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -101,9 +101,15 @@ static bool wb_recent_wait(struct rq_wb *rwb)
 	return time_before(jiffies, wb->dirty_sleep + HZ);
 }
 
-static inline struct rq_wait *get_rq_wait(struct rq_wb *rwb, bool is_kswapd)
+static inline struct rq_wait *get_rq_wait(struct rq_wb *rwb, bool is_trim,
+					  bool is_kswapd)
 {
-	return &rwb->rq_wait[is_kswapd];
+	if (is_trim)
+		return &rwb->rq_wait[WBT_REQ_TRIM];
+	else if (is_kswapd)
+		return &rwb->rq_wait[WBT_REQ_KSWAPD];
+	else
+		return &rwb->rq_wait[WBT_REQ_BG];
 }
 
 static void rwb_wake_all(struct rq_wb *rwb)
@@ -120,13 +126,14 @@ static void rwb_wake_all(struct rq_wb *rwb)
 
 void __wbt_done(struct rq_wb *rwb, enum wbt_flags wb_acct)
 {
+	const bool is_trim = wb_acct & WBT_TRIM;
 	struct rq_wait *rqw;
 	int inflight, limit;
 
 	if (!(wb_acct & WBT_TRACKED))
 		return;
 
-	rqw = get_rq_wait(rwb, wb_acct & WBT_KSWAPD);
+	rqw = get_rq_wait(rwb, is_trim, wb_acct & WBT_KSWAPD);
 	inflight = atomic_dec_return(&rqw->inflight);
 
 	/*
@@ -139,10 +146,13 @@ void __wbt_done(struct rq_wb *rwb, enum wbt_flags wb_acct)
 	}
 
 	/*
-	 * If the device does write back caching, drop further down
-	 * before we wake people up.
+	 * For discards, our limit is always the background. For writes, if
+	 * the device does write back caching, drop further down before we
+	 * wake people up.
 	 */
-	if (rwb->wc && !wb_recent_wait(rwb))
+	if (is_trim)
+		limit = rwb->wb_background;
+	else if (rwb->wc && !wb_recent_wait(rwb))
 		limit = 0;
 	else
 		limit = rwb->wb_normal;
@@ -479,6 +489,9 @@ static inline unsigned int get_limit(struct rq_wb *rwb, unsigned long rw)
 {
 	unsigned int limit;
 
+	if ((rw & REQ_OP_MASK) == REQ_OP_DISCARD)
+		return rwb->wb_background;
+
 	/*
 	 * At this point we know it's a buffered write. If this is
 	 * kswapd trying to free memory, or REQ_SYNC is set, then
@@ -533,7 +546,8 @@ static void __wbt_wait(struct rq_wb *rwb, unsigned long rw, spinlock_t *lock)
 	__releases(lock)
 	__acquires(lock)
 {
-	struct rq_wait *rqw = get_rq_wait(rwb, current_is_kswapd());
+	const bool is_trim = (rw & REQ_OP_MASK) == REQ_OP_DISCARD;
+	struct rq_wait *rqw = get_rq_wait(rwb, is_trim, current_is_kswapd());
 	DEFINE_WAIT(wait);
 
 	if (may_queue(rwb, rqw, &wait, rw))
@@ -561,19 +575,19 @@ static inline bool wbt_should_throttle(struct rq_wb *rwb, struct bio *bio)
 {
 	const int op = bio_op(bio);
 
-	/*
-	 * If not a WRITE, do nothing
-	 */
-	if (op != REQ_OP_WRITE)
-		return false;
+	if (op == REQ_OP_WRITE) {
+		/*
+		 * Don't throttle WRITE_ODIRECT
+		 */
+		if ((bio->bi_opf & (REQ_SYNC | REQ_IDLE)) ==
+		    (REQ_SYNC | REQ_IDLE))
+			return false;
 
-	/*
-	 * Don't throttle WRITE_ODIRECT
-	 */
-	if ((bio->bi_opf & (REQ_SYNC | REQ_IDLE)) == (REQ_SYNC | REQ_IDLE))
-		return false;
+		return true;
+	} else if (op == REQ_OP_DISCARD)
+		return true;
 
-	return true;
+	return false;
 }
 
 /*
@@ -605,6 +619,8 @@ enum wbt_flags wbt_wait(struct rq_wb *rwb, struct bio *bio, spinlock_t *lock)
 
 	if (current_is_kswapd())
 		ret |= WBT_KSWAPD;
+	if (bio_op(bio) == REQ_OP_DISCARD)
+		ret |= WBT_TRIM;
 
 	return ret | WBT_TRACKED;
 }
diff --git a/block/blk-wbt.h b/block/blk-wbt.h
index a232c98fbf4d..aec5bc82d580 100644
--- a/block/blk-wbt.h
+++ b/block/blk-wbt.h
@@ -14,12 +14,17 @@ enum wbt_flags {
 	WBT_TRACKED		= 1,	/* write, tracked for throttling */
 	WBT_READ		= 2,	/* read */
 	WBT_KSWAPD		= 4,	/* write, from kswapd */
+	WBT_TRIM		= 8,
 
-	WBT_NR_BITS		= 3,	/* number of bits */
+	WBT_NR_BITS		= 4,	/* number of bits */
 };
 
 enum {
-	WBT_NUM_RWQ		= 2,
+	WBT_REQ_BG = 0,
+	WBT_REQ_KSWAPD,
+	WBT_REQ_TRIM,
+
+	WBT_NUM_RWQ,
 };
 
 /*

-- 
Jens Axboe

