From: Minchan Kim <minchan@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Sergey Senozhatsky <senozhatsky@chromium.org>,
	Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org
Subject: Re: [PATCH 15/16] zram: fix synchronous reads
Date: Fri, 7 Apr 2023 15:37:44 -0700
Message-ID: <ZDCbOCza3bK5JSOB@google.com>
In-Reply-To: <20230407043743.GB5674@lst.de>

On Fri, Apr 07, 2023 at 06:37:43AM +0200, Christoph Hellwig wrote:
> On Thu, Apr 06, 2023 at 05:23:51PM -0700, Minchan Kim wrote:
> > rw_page path so that the bio comes next to serve the rw_page failure.
> > In that case, zram will always chain the bio, so we are fine with
> > asynchronous IO.
> > 
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index b8549c61ff2c..23fa0e03cdc1 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -1264,6 +1264,8 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
> >                 struct bio_vec bvec;
> > 
> >                 zram_slot_unlock(zram, index);
> > +               if (partial_io)
> > +                       return -EAGAIN;
> > 
> >                 bvec.bv_page = page;
> >                 bvec.bv_len = PAGE_SIZE;
> 
> What tree is this supposed to apply to?  6.2 already has the
> zram_bvec_read_from_bdev helper.

It was for trees prior to 6.1, which don't have commit 94541bc3fbde4
("zram: always expose rw_page"). Since 6.1, rw_page returns
-EOPNOTSUPP there, so the request is deferred to be handled via
submit_bio, and every IO that reads data from the backing device will
have a parent bio and be chained.
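
To illustrate the chaining, here is a minimal sketch of the async
read path when it has a parent bio (from memory and simplified, not
the exact tree contents):

static void read_from_bdev_async(struct zram *zram, struct page *page,
				 unsigned long entry, struct bio *parent)
{
	struct bio *bio;

	/* Sketch only: simplified from drivers/block/zram/zram_drv.c */
	bio = bio_alloc(zram->bdev, 1, parent->bi_opf, GFP_NOIO);
	bio->bi_iter.bi_sector = entry * (PAGE_SIZE >> SECTOR_SHIFT);
	__bio_add_page(bio, page, PAGE_SIZE, 0);

	/* The parent bio completes only after this chained bio does. */
	bio_chain(bio, parent);
	submit_bio(bio);
}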

> 
> 
> But either way partial_io can be true when called from zram_bvec_write,
> where we can't just return -EAGAIN, as ->submit_bio is not allowed to
> return that except for REQ_NOWAIT bios, and even then it needs to
> handle them when submitted without REQ_NOWAIT.

The -EAGAIN is returned only on the rw_page path, not the submit_bio
path, so the request will be deferred to the submit_bio path. However,
yeah, I see it's still broken since read_from_bdev_sync never waits
for the read completion again.
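
For the record, the sync side would need to actually wait for the
read, e.g. something along these lines with an on-stack bio (untested
sketch, not a real patch):

static int read_from_bdev_sync(struct zram *zram, struct page *page,
			       unsigned long entry)
{
	struct bio_vec bvec;
	struct bio bio;

	bio_init(&bio, zram->bdev, &bvec, 1, REQ_OP_READ);
	bio.bi_iter.bi_sector = entry * (PAGE_SIZE >> SECTOR_SHIFT);
	__bio_add_page(&bio, page, PAGE_SIZE, 0);

	/* Unlike the chained path, block until the read completes. */
	return submit_bio_wait(&bio);
}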

Then, as you suggested, just disabling the feature for non-4K page
sizes would be the simple option.

This is the only way I see to achieve it, and I'm not sure it covers
all the cases. If you have a better idea, let me know.

Thank you!

diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
index 0386b7da02aa..ae7662430789 100644
--- a/drivers/block/zram/Kconfig
+++ b/drivers/block/zram/Kconfig
@@ -58,6 +58,12 @@ config ZRAM_DEF_COMP
 config ZRAM_WRITEBACK
        bool "Write back incompressible or idle page to backing device"
        depends on ZRAM
+       depends on !PAGE_SIZE_8KB
+       depends on !PAGE_SIZE_16KB
+       depends on !PAGE_SIZE_32KB
+       depends on !PAGE_SIZE_64KB
+       depends on !PAGE_SIZE_256KB
+       depends on !PAGE_SIZE_1MB
        help
         With incompressible page, there is no memory saving to keep it
         in memory. Instead, write it out to backing device.
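
If a runtime guard is preferable to (or wanted in addition to) the
Kconfig dependency, something like this in backing_dev_store() should
do as well (hypothetical, untested):

	/* Hypothetical guard: writeback assumes 4K pages. */
	if (PAGE_SIZE != SZ_4K) {
		pr_warn("zram: writeback requires 4K PAGE_SIZE\n");
		return -EINVAL;
	}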


Thread overview: 46+ messages
2023-04-06 14:40 zram I/O path cleanups and fixups v2 Christoph Hellwig
2023-04-06 14:40 ` [PATCH 01/16] zram: remove valid_io_request Christoph Hellwig
2023-04-06 21:10   ` Minchan Kim
2023-04-07  1:04     ` Sergey Senozhatsky
2023-04-07  1:14       ` Minchan Kim
2023-04-06 14:40 ` [PATCH 02/16] zram: make zram_bio_discard more self-contained Christoph Hellwig
2023-04-06 21:10   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 03/16] zram: simplify bvec iteration in __zram_make_request Christoph Hellwig
2023-04-06 21:12   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 04/16] zram: move discard handling to zram_submit_bio Christoph Hellwig
2023-04-06 21:12   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 05/16] zram: return early on error in zram_bvec_rw Christoph Hellwig
2023-04-06 21:13   ` Minchan Kim
2023-04-07  1:01   ` Sergey Senozhatsky
2023-04-06 14:40 ` [PATCH 06/16] zram: refactor highlevel read and write handling Christoph Hellwig
2023-04-06 21:15   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 07/16] zram: don't use highmem for the bounce buffer in zram_bvec_{read,write} Christoph Hellwig
2023-04-06 21:16   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 08/16] zram: rename __zram_bvec_read to zram_read_page Christoph Hellwig
2023-04-06 21:17   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 09/16] zram: directly call zram_read_page in writeback_store Christoph Hellwig
2023-04-06 21:19   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 10/16] zram: refactor zram_bdev_read Christoph Hellwig
2023-04-06 21:20   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 11/16] zram: don't pass a bvec to __zram_bvec_write Christoph Hellwig
2023-04-06 21:22   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 12/16] zram: refactor zram_bdev_write Christoph Hellwig
2023-04-06 21:22   ` Minchan Kim
2023-04-06 14:40 ` [PATCH 13/16] zram: pass a page to read_from_bdev Christoph Hellwig
2023-04-06 21:23   ` Minchan Kim
2023-04-06 14:41 ` [PATCH 14/16] zram: don't return errors from read_from_bdev_async Christoph Hellwig
2023-04-06 21:24   ` Minchan Kim
2023-04-06 14:41 ` [PATCH 15/16] zram: fix synchronous reads Christoph Hellwig
2023-04-06 22:05   ` Minchan Kim
2023-04-06 22:58     ` Andrew Morton
2023-04-07  0:23       ` Minchan Kim
2023-04-07  0:37         ` Andrew Morton
2023-04-07  4:37         ` Christoph Hellwig
2023-04-07 22:37           ` Minchan Kim [this message]
2023-04-07  4:33     ` Christoph Hellwig
2023-04-07  7:22   ` Christoph Hellwig
2023-04-06 14:41 ` [PATCH 16/16] zram: return errors from read_from_bdev_sync Christoph Hellwig
2023-04-06 22:05   ` Minchan Kim
2023-04-06 22:06 ` zram I/O path cleanups and fixups v2 Minchan Kim
