* [PATCH] zram: do not forget to endio for partial discard requests
@ 2026-03-31 7:15 Sergey Senozhatsky
From: Sergey Senozhatsky @ 2026-03-31 7:15 UTC
To: Andrew Morton
Cc: Minchan Kim, Brian Geffon, linux-block, linux-mm,
Sergey Senozhatsky, Qu Wenruo, Christoph Hellwig
As reported by Qu Wenruo, the following

	getconf PAGESIZE
	65536
	blkdiscard -p 4k /dev/zram0

takes literally forever to complete. zram doesn't support
partial discards and simply returns, without doing any discard
work, in such cases. The problem is that we forget to call
bio_endio() on the way out, so blkdiscard sleeps forever in
submit_bio_wait(). Fix this by adding the missing bio_endio()
call.
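
For illustration, here is a minimal sketch of the submit_bio_wait()
contract, paraphrased from block/bio.c rather than quoted verbatim
(the sketch_* names are made up for this example). It shows why a bio
that is never ended hangs its submitter:

	/* roughly what submit_bio_wait() does internally */
	static void sketch_end_io(struct bio *bio)
	{
		/* runs from bio_endio(); wakes the waiter below */
		complete(bio->bi_private);
	}

	static int sketch_submit_bio_wait(struct bio *bio)
	{
		DECLARE_COMPLETION_ONSTACK(done);

		bio->bi_private = &done;
		bio->bi_end_io = sketch_end_io;
		submit_bio(bio);
		/*
		 * Blocks until someone calls bio_endio() on this bio.
		 * If the driver returns without ever doing so, as zram
		 * did for small partial discards, we sleep here forever.
		 */
		wait_for_completion(&done);
		return blk_status_to_errno(bio->bi_status);
	}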
Fixes: 0120dd6e4e202 ("zram: make zram_bio_discard more self-contained")
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reported-by: Qu Wenruo <wqu@suse.com>
Closes: https://lore.kernel.org/linux-block/92361cd3-fb8b-482e-bc89-15ff1acb9a59@suse.com
Cc: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index dcea703a6766..b0637423953b 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2683,8 +2683,10 @@ static void zram_bio_discard(struct zram *zram, struct bio *bio)
 	 * skipping this logical block is appropriate here.
 	 */
 	if (offset) {
-		if (n <= (PAGE_SIZE - offset))
+		if (n <= (PAGE_SIZE - offset)) {
+			bio_endio(bio);
 			return;
+		}
 
 		n -= (PAGE_SIZE - offset);
 		index++;
--
2.53.0.1018.g2bb0e51243-goog
* Re: [PATCH] zram: do not forget to endio for partial discard requests
From: Christoph Hellwig @ 2026-03-31 7:26 UTC
To: Sergey Senozhatsky
Cc: Andrew Morton, Minchan Kim, Brian Geffon, linux-block, linux-mm,
Qu Wenruo, Christoph Hellwig
On Tue, Mar 31, 2026 at 04:15:06PM +0900, Sergey Senozhatsky wrote:
> +++ b/drivers/block/zram/zram_drv.c
> @@ -2683,8 +2683,10 @@ static void zram_bio_discard(struct zram *zram, struct bio *bio)
>  	 * skipping this logical block is appropriate here.
>  	 */
>  	if (offset) {
> -		if (n <= (PAGE_SIZE - offset))
> +		if (n <= (PAGE_SIZE - offset)) {
> +			bio_endio(bio);
>  			return;
> +		}
Use goto end_bio and share the code with the final completion at the
end of the function?
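
For illustration, the restructuring suggested above could look roughly
like this; the label name and the loop body are paraphrased from
mainline zram_drv.c and are not the actual v2 patch:

	static void zram_bio_discard(struct zram *zram, struct bio *bio)
	{
		/* declarations of n, index and offset elided */

		if (offset) {
			if (n <= (PAGE_SIZE - offset))
				goto end_bio;	/* single completion path */
			n -= (PAGE_SIZE - offset);
			index++;
		}

		while (n >= PAGE_SIZE) {
			zram_slot_lock(zram, index);
			zram_free_page(zram, index);
			zram_slot_unlock(zram, index);
			atomic64_inc(&zram->stats.notify_free);
			index++;
			n -= PAGE_SIZE;
		}

	end_bio:
		bio_endio(bio);
	}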
* Re: [PATCH] zram: do not forget to endio for partial discard requests
From: Sergey Senozhatsky @ 2026-03-31 7:27 UTC
To: Christoph Hellwig
Cc: Sergey Senozhatsky, Andrew Morton, Minchan Kim, Brian Geffon,
linux-block, linux-mm, Qu Wenruo
On (26/03/31 09:26), Christoph Hellwig wrote:
> On Tue, Mar 31, 2026 at 04:15:06PM +0900, Sergey Senozhatsky wrote:
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -2683,8 +2683,10 @@ static void zram_bio_discard(struct zram *zram, struct bio *bio)
> >  	 * skipping this logical block is appropriate here.
> >  	 */
> >  	if (offset) {
> > -		if (n <= (PAGE_SIZE - offset))
> > +		if (n <= (PAGE_SIZE - offset)) {
> > +			bio_endio(bio);
> >  			return;
> > +		}
>
> Use goto end_bio and share the code with the final completion at the
> end of the function?
OK, will do in v2.
* Re: [PATCH] zram: do not forget to endio for partial discard requests
From: Qu Wenruo @ 2026-03-31 7:32 UTC
To: Sergey Senozhatsky, Andrew Morton
Cc: Minchan Kim, Brian Geffon, linux-block, linux-mm,
Christoph Hellwig
On 2026/3/31 17:45, Sergey Senozhatsky wrote:
> As reported by Qu Wenruo, the following
>
> 	getconf PAGESIZE
> 	65536
> 	blkdiscard -p 4k /dev/zram0
>
> takes literally forever to complete. zram doesn't support
> partial discards and simply returns, without doing any discard
> work, in such cases. The problem is that we forget to call
> bio_endio() on the way out, so blkdiscard sleeps forever in
> submit_bio_wait(). Fix this by adding the missing bio_endio()
> call.
>
> Fixes: 0120dd6e4e202 ("zram: make zram_bio_discard more self-contained")
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> Reported-by: Qu Wenruo <wqu@suse.com>
> Closes: https://lore.kernel.org/linux-block/92361cd3-fb8b-482e-bc89-15ff1acb9a59@suse.com
> Cc: Christoph Hellwig <hch@lst.de>
Tested-by: Qu Wenruo <wqu@suse.com>
Now all discard-related operations, like mkfs.btrfs and mounting
btrfs with async discard on zram devices, work fine.

Thanks a lot for the quick debugging and fix!

Although I'm still seeing less-than-ideal performance with a 4K
block size btrfs on that zram device.
On a regular block device (LVM):
# mkfs.btrfs -f /dev/test/scratch1
# mount /dev/test/scratch1 /mnt/btrfs
# time sudo ./xfstests-dev/ltp/fsstress -d /mnt/btrfs/ -n 10000 \
-s 1774231493 -w
real 0m3.435s
user 0m0.005s
sys 0m0.009s
On a zram block device: (mkfs.btrfs defaults to 4K block size)
# mkfs.btrfs -f /dev/zram0
# mount /dev/zram0 /mnt/btrfs
# time sudo ./xfstests-dev/ltp/fsstress -d /mnt/btrfs/ -n 10000 \
-s 1774231493 -w
real 0m10.726s
user 0m0.005s
sys 0m0.009s
That is about 3 times slower than on the regular block device.

And after that short fsstress run:

NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 zstd            1G 271.2M   10M 20.4M

I guess this time the overhead is just the compression, so that's
something expected?
Thanks,
Qu
> ---
> drivers/block/zram/zram_drv.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index dcea703a6766..b0637423953b 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -2683,8 +2683,10 @@ static void zram_bio_discard(struct zram *zram, struct bio *bio)
>  	 * skipping this logical block is appropriate here.
>  	 */
>  	if (offset) {
> -		if (n <= (PAGE_SIZE - offset))
> +		if (n <= (PAGE_SIZE - offset)) {
> +			bio_endio(bio);
>  			return;
> +		}
>  
>  		n -= (PAGE_SIZE - offset);
>  		index++;
* Re: [PATCH] zram: do not forget to endio for partial discard requests
From: Sergey Senozhatsky @ 2026-03-31 7:37 UTC
To: Qu Wenruo
Cc: Sergey Senozhatsky, Andrew Morton, Minchan Kim, Brian Geffon,
linux-block, linux-mm, Christoph Hellwig
On (26/03/31 18:02), Qu Wenruo wrote:
[..]
> That is about 3 times slower than on the regular block device.
>
> And after that short fsstress run:
>
> NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
> /dev/zram0 zstd            1G 271.2M   10M 20.4M
>
> I guess this time the overhead is just the compression, so that's
> something expected?
Yes, I'm pretty sure that is compression overhead; zstd is not the
fastest algorithm. If you need something just for testing, then lzo
or lz4 would be a better choice.
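
For anyone reproducing this, switching the algorithm goes through
sysfs; comp_algorithm can only be changed while the device is reset,
so tear the device down first (device name and disksize below are
just examples):

	# umount /mnt/btrfs
	# echo 1 > /sys/block/zram0/reset
	# echo lz4 > /sys/block/zram0/comp_algorithm
	# echo 1G > /sys/block/zram0/disksize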