From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: <kernel-team@lge.com>, <linux-kernel@vger.kernel.org>,
Minchan Kim <minchan@kernel.org>, Hannes Reinecke <hare@suse.com>,
Johannes Thumshirn <jthumshirn@suse.de>,
Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Subject: [PATCH v2 1/6] zram: handle multiple pages attached bio's bvec
Date: Thu, 13 Apr 2017 11:59:20 +0900
Message-ID: <1492052365-16169-2-git-send-email-minchan@kernel.org>
In-Reply-To: <1492052365-16169-1-git-send-email-minchan@kernel.org>
Johannes Thumshirn reported a system panic when using an NVMe over
Fabrics loopback target with zram.

The reason is that zram expects each bvec in a bio to contain a single
page, but nvme can attach a huge number of pages to a bio's bvec, so
zram's index arithmetic goes wrong and the resulting out-of-bounds
access panics the system.
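
To illustrate the arithmetic with a minimal userspace sketch (struct and
function names here are made up for the example; the two helper bodies
mirror the before/after update_position() hunk below): given a bvec that
covers three pages, the old helper advances the zram page index by only
one, while the new expression advances it by three.

#include <stdio.h>

#define PAGE_SIZE 4096u

/* Toy stand-in for struct bio_vec; only the length matters here. */
struct toy_bvec {
	unsigned int bv_len;
};

/* Old behaviour: assumes a bvec never spans more than one page. */
static void update_position_old(unsigned int *index, unsigned int *offset,
				struct toy_bvec *bvec)
{
	if (*offset + bvec->bv_len >= PAGE_SIZE)
		(*index)++;
	*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
}

/* New behaviour: advance the index by however many pages were consumed. */
static void update_position_new(unsigned int *index, unsigned int *offset,
				struct toy_bvec *bvec)
{
	*index += (*offset + bvec->bv_len) / PAGE_SIZE;
	*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
}

int main(void)
{
	struct toy_bvec bvec = { .bv_len = 3 * PAGE_SIZE };
	unsigned int index, offset;

	index = 10; offset = 0;
	update_position_old(&index, &offset, &bvec);
	printf("old: index=%u offset=%u\n", index, offset); /* index=11 */

	index = 10; offset = 0;
	update_position_new(&index, &offset, &bvec);
	printf("new: index=%u offset=%u\n", index, offset); /* index=13 */
	return 0;
}

With a multi-page bvec the old accounting ends up pointing at the wrong
zram page, which is the kind of mis-indexing described above.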
[1] in mainline solved the problem by limiting max_sectors to
SECTORS_PER_PAGE, but that makes zram slow because every bio has to be
split into single pages. Instead, this patch makes zram aware of
multiple pages in a bvec, so the problem is solved without that
regression (i.e., without bio splitting).

[1] 0bc315381fe9, zram: set physical queue limits to avoid array out of
bounds accesses
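
For reference, here is a simplified userspace model of the per-page loop
this patch introduces (zram_bvec_rw_stub() and the sample geometry are
invented for the sketch; the real loop is in the __zram_make_request()
hunk below): the bvec is consumed in chunks of at most PAGE_SIZE - offset
bytes, and index/offset advance after every chunk, so no bio split is
needed.

#include <stdio.h>

#define PAGE_SIZE 4096u

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/* Stand-in for zram_bvec_rw(): report which zram page gets which span. */
static void zram_bvec_rw_stub(unsigned int index, unsigned int offset,
			      unsigned int len)
{
	printf("page %u: offset=%u len=%u\n", index, offset, len);
}

int main(void)
{
	/* A bvec covering 2.75 pages worth of data, starting 1KiB into
	 * zram page 7. */
	unsigned int index = 7, offset = 1024;
	unsigned int unwritten = 2 * PAGE_SIZE + PAGE_SIZE / 2;

	do {
		/* Never cross a page boundary in a single operation. */
		unsigned int len = min_u(PAGE_SIZE - offset, unwritten);

		zram_bvec_rw_stub(index, offset, len);

		unwritten -= len;
		index += (offset + len) / PAGE_SIZE;
		offset = (offset + len) % PAGE_SIZE;
	} while (unwritten);

	return 0;
}

The output walks through zram pages 7, 8 and 9 with page-bounded lengths
(3072, 4096 and 3072 bytes), i.e. the kind of multi-page bvec the old
code could not handle without splitting the bio first.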
* from v1
* Do not exceed the page boundary when setting up bv.bv_len in make_request
* Rename the "remained" variable to "unwritten"
Cc: Hannes Reinecke <hare@suse.com>
Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
drivers/block/zram/zram_drv.c | 38 +++++++++++---------------------------
1 file changed, 11 insertions(+), 27 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 11cc8767af99..9c3f862b72f4 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -137,8 +137,7 @@ static inline bool valid_io_request(struct zram *zram,
static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
{
- if (*offset + bvec->bv_len >= PAGE_SIZE)
- (*index)++;
+ *index += (*offset + bvec->bv_len) / PAGE_SIZE;
*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
}
@@ -838,34 +837,21 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
}
bio_for_each_segment(bvec, bio, iter) {
- int max_transfer_size = PAGE_SIZE - offset;
-
- if (bvec.bv_len > max_transfer_size) {
- /*
- * zram_bvec_rw() can only make operation on a single
- * zram page. Split the bio vector.
- */
- struct bio_vec bv;
-
- bv.bv_page = bvec.bv_page;
- bv.bv_len = max_transfer_size;
- bv.bv_offset = bvec.bv_offset;
+ struct bio_vec bv = bvec;
+ unsigned int unwritten = bvec.bv_len;
+ do {
+ bv.bv_len = min_t(unsigned int, PAGE_SIZE - offset,
+ unwritten);
if (zram_bvec_rw(zram, &bv, index, offset,
- op_is_write(bio_op(bio))) < 0)
+ op_is_write(bio_op(bio))) < 0)
goto out;
- bv.bv_len = bvec.bv_len - max_transfer_size;
- bv.bv_offset += max_transfer_size;
- if (zram_bvec_rw(zram, &bv, index + 1, 0,
- op_is_write(bio_op(bio))) < 0)
- goto out;
- } else
- if (zram_bvec_rw(zram, &bvec, index, offset,
- op_is_write(bio_op(bio))) < 0)
- goto out;
+ bv.bv_offset += bv.bv_len;
+ unwritten -= bv.bv_len;
- update_position(&index, &offset, &bvec);
+ update_position(&index, &offset, &bv);
+ } while (unwritten);
}
bio_endio(bio);
@@ -882,8 +868,6 @@ static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
{
struct zram *zram = queue->queuedata;
- blk_queue_split(queue, &bio, queue->bio_split);
-
if (!valid_io_request(zram, bio->bi_iter.bi_sector,
bio->bi_iter.bi_size)) {
atomic64_inc(&zram->stats.invalid_io);
--
2.7.4