* [PATCH 1/2] zram: use zram_read_from_zspool() in writeback
@ 2024-12-11 8:25 Sergey Senozhatsky
From: Sergey Senozhatsky @ 2024-12-11 8:25 UTC
To: Andrew Morton; +Cc: Minchan Kim, linux-kernel, Sergey Senozhatsky
We can only read pages from the zspool in writeback; zram_read_page()
is not really right in that context, not only because it is a more
generic function that also handles ZRAM_WB pages, but also because it
requires us to unlock the slot between the slot-flag check and the
actual page read. Use zram_read_from_zspool() instead and do the
slot-flag check and the page read under the same slot lock.
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
This is not -stable material. The patch simply makes things look
more logical and correct. The page content can still change while
we are in submit_bio_wait(), and we try to handle that when
submit_bio_wait() returns.
drivers/block/zram/zram_drv.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index d06761ad541f..0a924fae02a4 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -55,8 +55,8 @@ static size_t huge_class_size;
static const struct block_device_operations zram_devops;
static void zram_free_page(struct zram *zram, size_t index);
-static int zram_read_page(struct zram *zram, struct page *page, u32 index,
- struct bio *parent);
+static int zram_read_from_zspool(struct zram *zram, struct page *page,
+ u32 index);
static int zram_slot_trylock(struct zram *zram, u32 index)
{
@@ -831,13 +831,10 @@ static ssize_t writeback_store(struct device *dev,
*/
if (!zram_test_flag(zram, index, ZRAM_PP_SLOT))
goto next;
+ if (zram_read_from_zspool(zram, page, index))
+ goto next;
zram_slot_unlock(zram, index);
- if (zram_read_page(zram, page, index, NULL)) {
- release_pp_slot(zram, pps);
- continue;
- }
-
bio_init(&bio, zram->bdev, &bio_vec, 1,
REQ_OP_WRITE | REQ_SYNC);
bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9);
--
2.47.1.613.gc27f4b7a9f-goog
* [PATCH 2/2] zram: cond_resched() in writeback loop
From: Sergey Senozhatsky @ 2024-12-11 8:25 UTC
To: Andrew Morton; +Cc: Minchan Kim, linux-kernel, Sergey Senozhatsky, stable
zram writeback is a costly operation, because every target slot
(unless ZRAM_HUGE) is decompressed before it gets written to the
backing device. The writeback to the backing device uses
submit_bio_wait(), which may look like a rescheduling point.
However, if the backing device has the BD_HAS_SUBMIT_BIO bit set,
__submit_bio() directly calls disk->fops->submit_bio(bio) on the
backing device, so by the time submit_bio_wait() calls
blk_wait_io() the I/O is already done. On such systems we
effectively end up in a loop:
	for_each (target slot) {
		decompress(slot)
		__submit_bio()
			disk->fops->submit_bio(bio)
	}
On PREEMPT_NONE systems this triggers watchdogs, since the loop
contains no explicit rescheduling points. Add cond_resched() to
the zram writeback loop.
Fixes: a939888ec38b ("zram: support idle/huge page writeback")
Cc: stable@vger.kernel.org
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
drivers/block/zram/zram_drv.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0a924fae02a4..f5fa3db6b4f8 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -884,6 +884,8 @@ static ssize_t writeback_store(struct device *dev,
next:
zram_slot_unlock(zram, index);
release_pp_slot(zram, pps);
+
+ cond_resched();
}
if (blk_idx)
--
2.47.1.613.gc27f4b7a9f-goog
* Re: [PATCH 1/2] zram: use zram_read_from_zspool() in writeback
From: Sergey Senozhatsky @ 2024-12-11 10:08 UTC
To: Andrew Morton; +Cc: Sergey Senozhatsky, Minchan Kim, linux-kernel
On (24/12/11 17:25), Sergey Senozhatsky wrote:
> We can only read pages from the zspool in writeback; zram_read_page()
> is not really right in that context, not only because it is a more
> generic function that also handles ZRAM_WB pages, but also because it
> requires us to unlock the slot between the slot-flag check and the
> actual page read. Use zram_read_from_zspool() instead and do the
> slot-flag check and the page read under the same slot lock.
Andrew, sorry for the noise, please ignore this mini-series. Let me
re-send it combined with v2 of the `zram: split page type read/write
handling` series [1].

[1] https://lore.kernel.org/linux-kernel/20241210105420.1888790-1-senozhatsky@chromium.org