public inbox for linux-block@vger.kernel.org
From: Jens Axboe <axboe@kernel.dk>
To: Keith Busch <keith.busch@intel.com>
Cc: "jianchao.wang" <jianchao.w.wang@oracle.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Christoph Hellwig <hch@lst.de>, Ming Lei <ming.lei@redhat.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: WARNING: CPU: 2 PID: 207 at drivers/nvme/host/core.c:527 nvme_setup_cmd+0x3d3
Date: Wed, 31 Jan 2018 20:03:38 -0700
Message-ID: <78434914-2904-8be6-9451-914849fbac49@kernel.dk>
In-Reply-To: <20180131233304.GE27735@localhost.localdomain>

On 1/31/18 4:33 PM, Keith Busch wrote:
> On Wed, Jan 31, 2018 at 08:29:37AM -0700, Jens Axboe wrote:
>>
>> How about something like the below?
>>
>>
>> diff --git a/block/blk-merge.c b/block/blk-merge.c
>> index 8452fc7164cc..cee102fb060e 100644
>> --- a/block/blk-merge.c
>> +++ b/block/blk-merge.c
>> @@ -574,8 +574,13 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
>>  	    blk_rq_get_max_sectors(req, blk_rq_pos(req)))
>>  		return 0;
>>  
>> +	/*
>> +	 * For DISCARDs, the segment count isn't interesting since
>> +	 * the requests have no data attached.
>> +	 */
>>  	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
>> -	if (blk_phys_contig_segment(q, req->biotail, next->bio)) {
>> +	if (total_phys_segments &&
>> +	    blk_phys_contig_segment(q, req->biotail, next->bio)) {
>>  		if (req->nr_phys_segments == 1)
>>  			req->bio->bi_seg_front_size = seg_size;
>>  		if (next->nr_phys_segments == 1)
> 
> That'll keep it from going to 0xffff, but you'll still hit the warning and
> IO error. Even worse, this will corrupt memory: blk_rq_nr_discard_segments
> will return 1, and since you really had 2 segments, the nvme driver will
> overrun its array.
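The overrun Keith describes can be sketched with a small illustrative model (this is not the actual nvme driver code; the struct and helper are hypothetical stand-ins): the driver sizes its DSM range array from the reported discard segment count, then writes one range per merged bio, so an undercount of 1 for a two-bio merge writes past the end of the array.

```c
#include <assert.h>

/*
 * Hypothetical model of the overrun: one DSM-style range is written
 * per discard bio in the merged request, into an array sized from the
 * reported segment count (nranges).
 */
struct dsm_range {
	unsigned long long slba;	/* starting LBA */
	unsigned int nlb;		/* number of logical blocks */
};

/*
 * Fill one range per bio. Returns the number of ranges written, or -1
 * if entry i would fall outside the nranges-sized array -- the point
 * at which a real driver would corrupt adjacent memory.
 */
static int setup_discard(struct dsm_range *ranges, int nranges,
			 const unsigned long long *lbas,
			 const unsigned int *lens, int nbios)
{
	int i;

	for (i = 0; i < nbios; i++) {
		if (i >= nranges)
			return -1;
		ranges[i].slba = lbas[i];
		ranges[i].nlb = lens[i];
	}
	return nbios;
}
```

In this model, nranges plays the role of blk_rq_nr_discard_segments(): if it reports 1 for a request that actually merged two discard bios, the second iteration is the out-of-bounds write.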

Yeah, you are right, that patch was no good. How about the below? We
only need to worry about segment size and segment count if the requests
are carrying data. req->biotail and next->bio must be the same type, so
this should be safe.


diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8452fc7164cc..cf9adc4c64b5 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -553,9 +553,7 @@ static bool req_no_special_merge(struct request *req)
 static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 				struct request *next)
 {
-	int total_phys_segments;
-	unsigned int seg_size =
-		req->biotail->bi_seg_back_size + next->bio->bi_seg_front_size;
+	int total_phys_segments = 0;
 
 	/*
 	 * First check if the either of the requests are re-queued
@@ -574,17 +572,27 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req)))
 		return 0;
 
-	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
-	if (blk_phys_contig_segment(q, req->biotail, next->bio)) {
-		if (req->nr_phys_segments == 1)
-			req->bio->bi_seg_front_size = seg_size;
-		if (next->nr_phys_segments == 1)
-			next->biotail->bi_seg_back_size = seg_size;
-		total_phys_segments--;
-	}
+	/*
+	 * If the requests aren't carrying any data payloads, we don't need
+	 * to look at the segment count
+	 */
+	if (bio_has_data(next->bio)) {
+		total_phys_segments = req->nr_phys_segments +
+					next->nr_phys_segments;
+		if (blk_phys_contig_segment(q, req->biotail, next->bio)) {
+			unsigned int seg_size = req->biotail->bi_seg_back_size +
+						next->bio->bi_seg_front_size;
+
+			if (req->nr_phys_segments == 1)
+				req->bio->bi_seg_front_size = seg_size;
+			if (next->nr_phys_segments == 1)
+				next->biotail->bi_seg_back_size = seg_size;
+			total_phys_segments--;
+		}
 
-	if (total_phys_segments > queue_max_segments(q))
-		return 0;
+		if (total_phys_segments > queue_max_segments(q))
+			return 0;
+	}
 
 	if (blk_integrity_merge_rq(q, req, next) == false)
 		return 0;

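The segment accounting in the patched ll_merge_requests_fn reduces to a small model (a hypothetical helper, not the kernel function): requests whose bios carry no data payload, such as discards, contribute no physical segments, so their counts are never summed or compared against the queue limit.

```c
#include <assert.h>

/*
 * Simplified model of the fixed merge accounting: dataless requests
 * skip segment counting entirely; for data-carrying requests, adjacent
 * tail/head segments coalesce and the total is checked against the
 * queue's segment limit.
 *
 * Returns the merged physical segment count, or -1 if the merge must
 * be rejected for exceeding queue_max.
 */
static int merged_segments(int req_segs, int next_segs, int has_data,
			   int contiguous, int queue_max)
{
	int total;

	if (!has_data)
		return 0;	/* no payload, segment count is irrelevant */

	total = req_segs + next_segs;
	if (contiguous)
		total--;	/* adjacent segments merge into one */

	return total > queue_max ? -1 : total;
}
```

This is why the first patch was insufficient: it still ran the contiguity check and decrement for dataless requests, letting the count underflow, whereas gating on bio_has_data() keeps discards out of the accounting altogether.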
-- 
Jens Axboe


Thread overview: 21+ messages
2018-01-30 15:41 WARNING: CPU: 2 PID: 207 at drivers/nvme/host/core.c:527 nvme_setup_cmd+0x3d3 Jens Axboe
2018-01-30 15:57 ` Jens Axboe
2018-01-30 20:30   ` Keith Busch
2018-01-30 20:32     ` Jens Axboe
2018-01-30 20:49       ` Keith Busch
2018-01-30 20:55         ` Jens Axboe
2018-01-31  4:25   ` jianchao.wang
2018-01-31 15:29     ` Jens Axboe
2018-01-31 23:33       ` Keith Busch
2018-02-01  3:03         ` Jens Axboe [this message]
2018-02-01  3:03       ` jianchao.wang
2018-02-01  3:07         ` Jens Axboe
2018-02-01  3:33           ` jianchao.wang
2018-02-01  3:35             ` Jens Axboe
2018-02-01  4:56           ` Keith Busch
2018-02-01 15:26             ` Jens Axboe
2018-02-01 17:58               ` Jens Axboe
2018-02-01 18:12                 ` Keith Busch
2018-02-01 19:52                 ` Keith Busch
2018-02-01 20:55                   ` Jens Axboe
2018-02-01 18:01               ` Keith Busch
