From: Ming Lei <ming.lei@redhat.com>
To: Mikulas Patocka <mpatocka@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
Jinyoung Choi <j-young.choi@samsung.com>,
Christoph Hellwig <hch@lst.de>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
Alasdair Kergon <agk@redhat.com>,
Mike Snitzer <snitzer@kernel.org>,
linux-block@vger.kernel.org, dm-devel@lists.linux.dev,
ming.lei@redhat.com
Subject: Re: [PATCH] bio-integrity: don't restrict the size of integrity metadata
Date: Wed, 4 Sep 2024 10:57:23 +0800
Message-ID: <ZtfMkzqP3ONMCZL0@fedora>
In-Reply-To: <e41b3b8e-16c2-70cb-97cb-881234bb200d@redhat.com>
On Tue, Sep 03, 2024 at 09:47:59PM +0200, Mikulas Patocka wrote:
> Hi Jens
>
> I added dm-integrity inline mode in the 6.11 merge window. I've found out
> that it doesn't work with large bios - the reason is that the function
> bio_integrity_add_page refuses to add more metadata than
> queue_max_hw_sectors(q). This restriction is no longer needed, because
> big bios are split automatically. I'd like to ask you if you could send
> this commit to Linus before 6.11 comes out, so that the bug is fixed
> before the final release.
>
> Mikulas
>
>
> From: Mikulas Patocka <mpatocka@redhat.com>
>
> bio_integrity_add_page restricts the size of the integrity metadata to
> queue_max_hw_sectors(q). This restriction is not needed because oversized
> bios are split automatically. It causes problems with dm-integrity's
> 'inline' mode: if we send a large bio to dm-integrity and the bio's
> metadata are larger than queue_max_hw_sectors(q), bio_integrity_add_page
> fails and the bio is ended with a BLK_STS_RESOURCE error.
>
> An example that triggers it:
>
> # modprobe brd rd_size=1048576
> # dmsetup create in1 --table '0 1847320 integrity /dev/ram0 0 64 D 1 fix_padding'
> # dmsetup create in2 --table '0 1847312 integrity /dev/mapper/in1 0 64 I 1 internal_hash:sha512'
> # dd if=/dev/zero of=/dev/mapper/in2 bs=1M oflag=direct status=progress
> dd: error writing '/dev/mapper/in2': Cannot allocate memory
> 1+0 records in
> 0+0 records out
> 0 bytes copied, 0.00169291 s, 0.0 kB/s
>
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> Fixes: fb0987682c62 ("dm-integrity: introduce the Inline mode")
> Fixes: 0ece1d649b6d ("bio-integrity: create multi-page bvecs in bio_integrity_add_page()")
Firstly, the metadata size is always smaller than the bio size.

Secondly, the check isn't needed in either case: bio_integrity_add_page()
is called either on a bio that is still going to be split (DM) or on one
that has already been split (blk-mq).
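
For reference, the check being removed is roughly of the following shape.
This is a minimal sketch, not the exact block/bio-integrity.c code; the
field and helper names are the usual integrity-payload ones and are
assumed here rather than quoted from the patch:

    int bio_integrity_add_page(struct bio *bio, struct page *page,
                               unsigned int len, unsigned int offset)
    {
            struct request_queue *q = bdev_get_queue(bio->bi_bdev);
            struct bio_integrity_payload *bip = bio_integrity(bio);

            /* metadata accumulated so far plus the new segment, converted
             * to sectors and compared against the queue's hardware limit */
            if (((bip->bip_iter.bi_size + len) >> SECTOR_SHIFT) >
                queue_max_hw_sectors(q))
                    return 0;

            /* ... on success the page is added to the payload's bvec ... */
    }

A zero return here is what ends up surfacing as the BLK_STS_RESOURCE /
"Cannot allocate memory" failure in the dd reproducer above.
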
Reviewed-by: Ming Lei <ming.lei@redhat.com>

Thanks,
Ming

Thread overview: 5+ messages
2024-09-03 19:47 [PATCH] bio-integrity: don't restrict the size of integrity metadata Mikulas Patocka
2024-09-04 2:57 ` Ming Lei [this message]
2024-09-04 5:19 ` Christoph Hellwig
2024-09-04 13:02 ` Anuj gupta
2024-09-04 13:17 ` Jens Axboe