From: Minchan Kim <minchan@kernel.org>
To: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>,
kernel-team@lge.com, linux-kernel@vger.kernel.org,
Hannes Reinecke <hare@suse.com>,
Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Subject: Re: [PATCH v2 1/6] zram: handle multiple pages attached bio's bvec
Date: Thu, 13 Apr 2017 22:40:57 +0900 [thread overview]
Message-ID: <20170413134057.GA27499@bbox> (raw)
In-Reply-To: <20170413071500.GA5457@linux-x5ow.site>
Hi Johannes,
On Thu, Apr 13, 2017 at 09:15:00AM +0200, Johannes Thumshirn wrote:
> On Thu, Apr 13, 2017 at 11:59:20AM +0900, Minchan Kim wrote:
> > Johannes Thumshirn reported that the system panics when using an NVMe
> > over Fabrics loopback target with zram.
> >
> > The reason is that zram expects each bvec in a bio to contain a single
> > page, but nvme can attach a large batch of pages to a bio's bvec. That
> > breaks zram's index arithmetic, and the resulting out-of-bounds access
> > panics the system.
> >
> > [1] in mainline solved the problem by limiting max_sectors to
> > SECTORS_PER_PAGE, but that makes zram slow because every bio has to be
> > split at page boundaries. This patch instead makes zram aware of
> > multiple pages in a bvec, solving the problem without that regression
> > (i.e., no bio split).
> >
> > [1] 0bc315381fe9, zram: set physical queue limits to avoid array out of
> > bounds accesses
> >
> > * from v1
> > * Do not exceed the page boundary when setting up bv.bv_len in make_request
> > * Rename the "remained" variable to "unwritten"
> >
> > Cc: Hannes Reinecke <hare@suse.com>
> > Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
> > Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
> > Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> > Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
>
> Hi Minchan,
>
> A quick amendment to your patch: you forgot to remove the queue limit
> setting which I introduced with commit 0bc315381fe9.
It was my mistake. I generated the patches on top of a tree with
0bc315381fe9 reverted.
Thanks for the review!
I think it is easiest to resend just this one for Andrew.
Andrew, please take this as [1/6]. Sorry for the confusion.
From c2f0e777f40a29c4fb56794b42df09f2188be9d6 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Mon, 3 Apr 2017 08:46:45 +0900
Subject: [PATCH] zram: handle multiple pages attached bio's bvec
Johannes Thumshirn reported that the system panics when using an NVMe
over Fabrics loopback target with zram.

The reason is that zram expects each bvec in a bio to contain a single
page, but nvme can attach a large batch of pages to a bio's bvec. That
breaks zram's index arithmetic, and the resulting out-of-bounds access
panics the system.

[1] in mainline solved the problem by limiting max_sectors to
SECTORS_PER_PAGE, but that makes zram slow because every bio has to be
split at page boundaries. This patch instead makes zram aware of
multiple pages in a bvec, solving the problem without that regression
(i.e., no bio split).
[1] 0bc315381fe9, zram: set physical queue limits to avoid array out of
bounds accesses
* from v1
* Do not exceed the page boundary when setting up bv.bv_len in make_request
* Rename the "remained" variable to "unwritten"
Cc: Hannes Reinecke <hare@suse.com>
Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
drivers/block/zram/zram_drv.c | 40 +++++++++++-----------------------------
1 file changed, 11 insertions(+), 29 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 11cc8767af99..ea0d20fabd89 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -137,8 +137,7 @@ static inline bool valid_io_request(struct zram *zram,
static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
{
- if (*offset + bvec->bv_len >= PAGE_SIZE)
- (*index)++;
+ *index += (*offset + bvec->bv_len) / PAGE_SIZE;
*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
}
@@ -838,34 +837,21 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
}
bio_for_each_segment(bvec, bio, iter) {
- int max_transfer_size = PAGE_SIZE - offset;
-
- if (bvec.bv_len > max_transfer_size) {
- /*
- * zram_bvec_rw() can only make operation on a single
- * zram page. Split the bio vector.
- */
- struct bio_vec bv;
-
- bv.bv_page = bvec.bv_page;
- bv.bv_len = max_transfer_size;
- bv.bv_offset = bvec.bv_offset;
+ struct bio_vec bv = bvec;
+ unsigned int unwritten = bvec.bv_len;
+ do {
+ bv.bv_len = min_t(unsigned int, PAGE_SIZE - offset,
+ unwritten);
if (zram_bvec_rw(zram, &bv, index, offset,
- op_is_write(bio_op(bio))) < 0)
+ op_is_write(bio_op(bio))) < 0)
goto out;
- bv.bv_len = bvec.bv_len - max_transfer_size;
- bv.bv_offset += max_transfer_size;
- if (zram_bvec_rw(zram, &bv, index + 1, 0,
- op_is_write(bio_op(bio))) < 0)
- goto out;
- } else
- if (zram_bvec_rw(zram, &bvec, index, offset,
- op_is_write(bio_op(bio))) < 0)
- goto out;
+ bv.bv_offset += bv.bv_len;
+ unwritten -= bv.bv_len;
- update_position(&index, &offset, &bvec);
+ update_position(&index, &offset, &bv);
+ } while (unwritten);
}
bio_endio(bio);
@@ -882,8 +868,6 @@ static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
{
struct zram *zram = queue->queuedata;
- blk_queue_split(queue, &bio, queue->bio_split);
-
if (!valid_io_request(zram, bio->bi_iter.bi_sector,
bio->bi_iter.bi_size)) {
atomic64_inc(&zram->stats.invalid_io);
@@ -1191,8 +1175,6 @@ static int zram_add(void)
blk_queue_io_min(zram->disk->queue, PAGE_SIZE);
blk_queue_io_opt(zram->disk->queue, PAGE_SIZE);
zram->disk->queue->limits.discard_granularity = PAGE_SIZE;
- zram->disk->queue->limits.max_sectors = SECTORS_PER_PAGE;
- zram->disk->queue->limits.chunk_sectors = 0;
blk_queue_max_discard_sectors(zram->disk->queue, UINT_MAX);
/*
* zram_bio_discard() will clear all logical blocks if logical block
--
2.7.4
Thread overview: 9+ messages
2017-04-13 2:59 [PATCH v2 0/6] zram clean up Minchan Kim
2017-04-13 2:59 ` [PATCH v2 1/6] zram: handle multiple pages attached bio's bvec Minchan Kim
2017-04-13 7:15 ` Johannes Thumshirn
2017-04-13 13:40 ` Minchan Kim [this message]
2017-04-13 2:59 ` [PATCH v2 2/6] zram: partial IO refactoring Minchan Kim
2017-04-13 2:59 ` [PATCH v2 3/6] zram: use zram_slot_lock instead of raw bit_spin_lock op Minchan Kim
2017-04-13 2:59 ` [PATCH v2 4/6] zram: remove zram_meta structure Minchan Kim
2017-04-13 2:59 ` [PATCH v2 5/6] zram: introduce zram data accessor Minchan Kim
2017-04-13 2:59 ` [PATCH v2 6/6] zram: use zram_free_page instead of open-coded Minchan Kim