From: Johannes Thumshirn <jthumshirn@suse.de>
To: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
kernel-team@lge.com, linux-kernel@vger.kernel.org,
Hannes Reinecke <hare@suse.com>,
Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Subject: Re: [PATCH v2 1/6] zram: handle multiple pages attached bio's bvec
Date: Thu, 13 Apr 2017 09:15:00 +0200 [thread overview]
Message-ID: <20170413071500.GA5457@linux-x5ow.site> (raw)
In-Reply-To: <1492052365-16169-2-git-send-email-minchan@kernel.org>
On Thu, Apr 13, 2017 at 11:59:20AM +0900, Minchan Kim wrote:
> Johannes Thumshirn reported that the system panics when using an NVMe
> over Fabrics loopback target with zram.
>
> The reason is that zram expects each bvec in a bio to contain a single
> page, but nvme can attach a huge bulk of pages to the bio's bvec, so
> zram's index arithmetic can go wrong and the resulting out-of-bounds
> access panics the system.
>
> [1] in mainline solved the problem by limiting max_sectors to
> SECTORS_PER_PAGE, but that makes zram slow because the bio must be
> split for each page. This patch instead makes zram aware of multiple
> pages in a bvec, so it solves the problem without any regression
> (i.e., no bio split).
>
> [1] 0bc315381fe9, zram: set physical queue limits to avoid array out of
> bounds accesses
>
> * from v1
> * Do not exceed page boundary when set up bv.bv_len in make_request
> * change "remained" variable name with "unwritten"
>
> Cc: Hannes Reinecke <hare@suse.com>
> Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
> Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
Hi Minchan,
A quick amendment to your patch: you forgot to remove the queue limit
setting I introduced with commit 0bc315381fe9.
Thanks,
Johannes
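For reference, the limit from 0bc315381fe9 looks roughly like the following (a reconstruction from the commit's description, not the verbatim hunk); once zram handles multi-page bvecs it is no longer needed:

```c
/* Rough reconstruction of what 0bc315381fe9 added in zram's device
 * setup -- not the verbatim hunk.  Capping requests at one page
 * guaranteed single-page bvecs; with multi-page bvec handling in
 * place, this cap (and the bio splitting it forces) can be dropped. */
blk_queue_max_hw_sectors(zram->disk->queue, SECTORS_PER_PAGE);
```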
--
Johannes Thumshirn Storage
jthumshirn@suse.de +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
Thread overview: 9+ messages
2017-04-13 2:59 [PATCH v2 0/6] zram clean up Minchan Kim
2017-04-13 2:59 ` [PATCH v2 1/6] zram: handle multiple pages attached bio's bvec Minchan Kim
2017-04-13 7:15 ` Johannes Thumshirn [this message]
2017-04-13 13:40 ` Minchan Kim
2017-04-13 2:59 ` [PATCH v2 2/6] zram: partial IO refactoring Minchan Kim
2017-04-13 2:59 ` [PATCH v2 3/6] zram: use zram_slot_lock instead of raw bit_spin_lock op Minchan Kim
2017-04-13 2:59 ` [PATCH v2 4/6] zram: remove zram_meta structure Minchan Kim
2017-04-13 2:59 ` [PATCH v2 5/6] zram: introduce zram data accessor Minchan Kim
2017-04-13 2:59 ` [PATCH v2 6/6] zram: use zram_free_page instead of open-coded Minchan Kim