* [PATCH 01/16] zram: remove valid_io_request
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
All bios handed to drivers from the block layer are already checked against
the device size and for logical block alignment, so don't duplicate those
checks. Merge __zram_make_request into zram_submit_bio now that there is no
point in splitting them.
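For reference, the bound check the block core applies to every bio before a
driver sees it looks roughly like this (a hypothetical sketch_check_eod(),
modeled on bio_check_eod() in block/blk-core.c; exact details vary by kernel
version, and the logical block size alignment check happens separately in the
submission path):

	/* sketch: reject bios that reach past the end of the device */
	static int sketch_check_eod(struct bio *bio)
	{
		sector_t maxsector = bdev_nr_sectors(bio->bi_bdev);
		unsigned int nr_sectors = bio_sectors(bio);

		if (nr_sectors &&
		    (nr_sectors > maxsector ||
		     bio->bi_iter.bi_sector > maxsector - nr_sectors))
			return -EIO;
		return 0;
	}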
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 34 +---------------------------------
drivers/block/zram/zram_drv.h | 1 -
2 files changed, 1 insertion(+), 34 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index aa490da3cef233..0c6b3ba2970b5f 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -174,30 +174,6 @@ static inline u32 zram_get_priority(struct zram *zram, u32 index)
return prio & ZRAM_COMP_PRIORITY_MASK;
}
-/*
- * Check if request is within bounds and aligned on zram logical blocks.
- */
-static inline bool valid_io_request(struct zram *zram,
- sector_t start, unsigned int size)
-{
- u64 end, bound;
-
- /* unaligned request */
- if (unlikely(start & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1)))
- return false;
- if (unlikely(size & (ZRAM_LOGICAL_BLOCK_SIZE - 1)))
- return false;
-
- end = start + (size >> SECTOR_SHIFT);
- bound = zram->disksize >> SECTOR_SHIFT;
- /* out of range */
- if (unlikely(start >= bound || end > bound || start > end))
- return false;
-
- /* I/O request is valid */
- return true;
-}
-
static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
{
*index += (*offset + bvec->bv_len) / PAGE_SIZE;
@@ -1190,10 +1166,9 @@ static ssize_t io_stat_show(struct device *dev,
down_read(&zram->init_lock);
ret = scnprintf(buf, PAGE_SIZE,
- "%8llu %8llu %8llu %8llu\n",
+ "%8llu %8llu 0 %8llu\n",
(u64)atomic64_read(&zram->stats.failed_reads),
(u64)atomic64_read(&zram->stats.failed_writes),
- (u64)atomic64_read(&zram->stats.invalid_io),
(u64)atomic64_read(&zram->stats.notify_free));
up_read(&zram->init_lock);
@@ -2043,13 +2018,6 @@ static void zram_submit_bio(struct bio *bio)
{
struct zram *zram = bio->bi_bdev->bd_disk->private_data;
- if (!valid_io_request(zram, bio->bi_iter.bi_sector,
- bio->bi_iter.bi_size)) {
- atomic64_inc(&zram->stats.invalid_io);
- bio_io_error(bio);
- return;
- }
-
__zram_make_request(zram, bio);
}
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index c5254626f051fa..ca7a15bd48456a 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -78,7 +78,6 @@ struct zram_stats {
atomic64_t compr_data_size; /* compressed size of pages stored */
atomic64_t failed_reads; /* can happen when memory is too low */
atomic64_t failed_writes; /* can happen when memory is too low */
- atomic64_t invalid_io; /* non-page-aligned I/O requests */
atomic64_t notify_free; /* no. of swap slot free notifications */
atomic64_t same_pages; /* no. of same element filled pages */
atomic64_t huge_pages; /* no. of huge pages */
--
2.39.2
* Re: [PATCH 01/16] zram: remove valid_io_request
From: Sergey Senozhatsky @ 2023-04-06 1:44 UTC
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
> All bios handed to drivers from the block layer are already checked against
> the device size and for logical block alignment, so don't duplicate those
> checks.
Has this changed "recently"? I can trace that valid_io_request() function back
to the initial zram commit 306b0c957f3f0e7 (Staging: virtual block device
driver (ramzswap)). Was that check always redundant?
* Re: [PATCH 01/16] zram: remove valid_io_request
From: Christoph Hellwig @ 2023-04-06 5:39 UTC
To: Sergey Senozhatsky; +Cc: Minchan Kim, Jens Axboe, linux-block
On Thu, Apr 06, 2023 at 10:44:49AM +0900, Sergey Senozhatsky wrote:
> On (23/04/04 17:05), Christoph Hellwig wrote:
> > All bios handed to drivers from the block layer are already checked against
> > the device size and for logical block alignment, so don't duplicate those
> > checks.
>
> Has this changed "recently"? I can trace that valid_io_request() function back
> to the initial zram commit 306b0c957f3f0e7 (Staging: virtual block device
> driver (ramzswap)). Was that check always redundant?
It was always redundant.
* Re: [PATCH 01/16] zram: remove valid_io_request
From: Sergey Senozhatsky @ 2023-04-06 6:28 UTC
To: Christoph Hellwig
Cc: Sergey Senozhatsky, Minchan Kim, Jens Axboe, linux-block
On (23/04/05 22:39), Christoph Hellwig wrote:
> On Thu, Apr 06, 2023 at 10:44:49AM +0900, Sergey Senozhatsky wrote:
> > On (23/04/04 17:05), Christoph Hellwig wrote:
> > > All bios handed to drivers from the block layer are already checked against
> > > the device size and for logical block alignment, so don't duplicate those
> > > checks.
> >
> > Has this changed "recently"? I can trace that valid_io_request() function back
> > to the initial zram commit 306b0c957f3f0e7 (Staging: virtual block device
> > driver (ramzswap)). Was that check always redundant?
>
> It was always redundant.
OK, thanks.
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
* [PATCH 02/16] zram: make zram_bio_discard more self-contained
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
Derive the index and offset variables inside the function, and complete
the bio directly in preparation for cleaning up the I/O path.
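The derivation is the usual page-sized block math, restated here as a sketch
using the SECTORS_PER_PAGE* constants that zram_drv.h already defines:

	/* which PAGE_SIZE-sized zram block the bio starts in */
	u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
	/* byte offset of the starting sector within that block */
	u32 offset = (bio->bi_iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
			SECTOR_SHIFT;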
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0c6b3ba2970b5f..3d8e2a1db7c7d3 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1896,15 +1896,12 @@ static ssize_t recompress_store(struct device *dev,
}
#endif
-/*
- * zram_bio_discard - handler on discard request
- * @index: physical block index in PAGE_SIZE units
- * @offset: byte offset within physical block
- */
-static void zram_bio_discard(struct zram *zram, u32 index,
- int offset, struct bio *bio)
+static void zram_bio_discard(struct zram *zram, struct bio *bio)
{
size_t n = bio->bi_iter.bi_size;
+ u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
+ u32 offset = (bio->bi_iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+ SECTOR_SHIFT;
/*
* zram manages data in physical block size units. Because logical block
@@ -1932,6 +1929,8 @@ static void zram_bio_discard(struct zram *zram, u32 index,
index++;
n -= PAGE_SIZE;
}
+
+ bio_endio(bio);
}
/*
@@ -1980,8 +1979,7 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
case REQ_OP_WRITE_ZEROES:
- zram_bio_discard(zram, index, offset, bio);
- bio_endio(bio);
+ zram_bio_discard(zram, bio);
return;
default:
break;
--
2.39.2
* [PATCH 03/16] zram: implify bvec iteration in __zram_make_request
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
bio_for_each_segment synthesizes bvecs that never cross page boundaries,
so don't duplicate that work in an inner loop.
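That is, each bio_for_each_segment() iteration already hands back a bvec that
spans at most one page, along with an iterator whose bi_sector points at the
start of that segment, so the per-segment index/offset can be derived directly
(a minimal sketch):

	struct bvec_iter iter;
	struct bio_vec bv;

	bio_for_each_segment(bv, bio, iter) {
		/* bv covers at most one page here, and iter.bi_sector
		 * is the starting sector of this particular segment */
	}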
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 42 +++++++++--------------------------
1 file changed, 11 insertions(+), 31 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3d8e2a1db7c7d3..5f1e4fd23ade32 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -174,12 +174,6 @@ static inline u32 zram_get_priority(struct zram *zram, u32 index)
return prio & ZRAM_COMP_PRIORITY_MASK;
}
-static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
-{
- *index += (*offset + bvec->bv_len) / PAGE_SIZE;
- *offset = (*offset + bvec->bv_len) % PAGE_SIZE;
-}
-
static inline void update_used_max(struct zram *zram,
const unsigned long pages)
{
@@ -1966,16 +1960,10 @@ static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
static void __zram_make_request(struct zram *zram, struct bio *bio)
{
- int offset;
- u32 index;
- struct bio_vec bvec;
struct bvec_iter iter;
+ struct bio_vec bv;
unsigned long start_time;
- index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
- offset = (bio->bi_iter.bi_sector &
- (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;
-
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
case REQ_OP_WRITE_ZEROES:
@@ -1986,24 +1974,16 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
}
start_time = bio_start_io_acct(bio);
- bio_for_each_segment(bvec, bio, iter) {
- struct bio_vec bv = bvec;
- unsigned int unwritten = bvec.bv_len;
-
- do {
- bv.bv_len = min_t(unsigned int, PAGE_SIZE - offset,
- unwritten);
- if (zram_bvec_rw(zram, &bv, index, offset,
- bio_op(bio), bio) < 0) {
- bio->bi_status = BLK_STS_IOERR;
- break;
- }
-
- bv.bv_offset += bv.bv_len;
- unwritten -= bv.bv_len;
-
- update_position(&index, &offset, &bv);
- } while (unwritten);
+ bio_for_each_segment(bv, bio, iter) {
+ u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
+ u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+ SECTOR_SHIFT;
+
+ if (zram_bvec_rw(zram, &bv, index, offset, bio_op(bio),
+ bio) < 0) {
+ bio->bi_status = BLK_STS_IOERR;
+ break;
+ }
}
bio_end_io_acct(bio, start_time);
bio_endio(bio);
--
2.39.2
* Re: [PATCH 03/16] zram: implify bvec iteration in __zram_make_request
From: Sergey Senozhatsky @ 2023-04-06 2:31 UTC
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
>
> bio_for_each_segment synthesizes bvecs that never cross page boundaries,
> so don't duplicate that work in an inner loop.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
* Re: [PATCH 03/16] zram: implify bvec iteration in __zram_make_request
From: Christoph Hellwig @ 2023-04-06 5:40 UTC
To: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
Self nit: there is an 's' missing in "simplify" in the Subject.
* [PATCH 04/16] zram: move discard handling to zram_submit_bio
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
Switch on the bio operation in zram_submit_bio and only call into
__zram_make_request for read and write operations.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 5f1e4fd23ade32..6522b888839cb1 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1964,15 +1964,6 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
struct bio_vec bv;
unsigned long start_time;
- switch (bio_op(bio)) {
- case REQ_OP_DISCARD:
- case REQ_OP_WRITE_ZEROES:
- zram_bio_discard(zram, bio);
- return;
- default:
- break;
- }
-
start_time = bio_start_io_acct(bio);
bio_for_each_segment(bv, bio, iter) {
u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
@@ -1996,7 +1987,19 @@ static void zram_submit_bio(struct bio *bio)
{
struct zram *zram = bio->bi_bdev->bd_disk->private_data;
- __zram_make_request(zram, bio);
+ switch (bio_op(bio)) {
+ case REQ_OP_READ:
+ case REQ_OP_WRITE:
+ __zram_make_request(zram, bio);
+ break;
+ case REQ_OP_DISCARD:
+ case REQ_OP_WRITE_ZEROES:
+ zram_bio_discard(zram, bio);
+ return;
+ default:
+ WARN_ON_ONCE(1);
+ bio_endio(bio);
+ }
}
static void zram_slot_free_notify(struct block_device *bdev,
--
2.39.2
* Re: [PATCH 04/16] zram: move discard handling to zram_submit_bio
From: Sergey Senozhatsky @ 2023-04-06 2:33 UTC
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
>
> Switch on the bio operation in zram_submit_bio and only call into
> __zram_make_request for read and write operations.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
[..]
> @@ -1996,7 +1987,19 @@ static void zram_submit_bio(struct bio *bio)
> {
> struct zram *zram = bio->bi_bdev->bd_disk->private_data;
>
> - __zram_make_request(zram, bio);
> + switch (bio_op(bio)) {
> + case REQ_OP_READ:
> + case REQ_OP_WRITE:
> + __zram_make_request(zram, bio);
> + break;
> + case REQ_OP_DISCARD:
> + case REQ_OP_WRITE_ZEROES:
> + zram_bio_discard(zram, bio);
> + return;
Super nitpick: break; (see the sketch below)
> + default:
> + WARN_ON_ONCE(1);
> + bio_endio(bio);
> + }
> }
>
> static void zram_slot_free_notify(struct block_device *bdev,
> --
> 2.39.2
>
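With the nitpick applied, the dispatch would read (a sketch of the end
result, not the patch as posted):

	switch (bio_op(bio)) {
	case REQ_OP_READ:
	case REQ_OP_WRITE:
		__zram_make_request(zram, bio);
		break;
	case REQ_OP_DISCARD:
	case REQ_OP_WRITE_ZEROES:
		zram_bio_discard(zram, bio);
		break;
	default:
		WARN_ON_ONCE(1);
		bio_endio(bio);
	}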
* [PATCH 05/16] zram: return early on error in zram_bvec_rw
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
When the low-level access fails, don't clear the idle flag or the caches,
and just return early.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 6522b888839cb1..6196187e9ac36f 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1939,23 +1939,23 @@ static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
if (!op_is_write(op)) {
ret = zram_bvec_read(zram, bvec, index, offset, bio);
+ if (unlikely(ret < 0)) {
+ atomic64_inc(&zram->stats.failed_writes);
+ return ret;
+ }
flush_dcache_page(bvec->bv_page);
} else {
ret = zram_bvec_write(zram, bvec, index, offset, bio);
+ if (unlikely(ret < 0)) {
+ atomic64_inc(&zram->stats.failed_reads);
+ return ret;
+ }
}
zram_slot_lock(zram, index);
zram_accessed(zram, index);
zram_slot_unlock(zram, index);
-
- if (unlikely(ret < 0)) {
- if (!op_is_write(op))
- atomic64_inc(&zram->stats.failed_reads);
- else
- atomic64_inc(&zram->stats.failed_writes);
- }
-
- return ret;
+ return 0;
}
static void __zram_make_request(struct zram *zram, struct bio *bio)
--
2.39.2
* Re: [PATCH 05/16] zram: return early on error in zram_bvec_rw
From: Sergey Senozhatsky @ 2023-04-06 2:38 UTC
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
> @@ -1939,23 +1939,23 @@ static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
>
> if (!op_is_write(op)) {
> ret = zram_bvec_read(zram, bvec, index, offset, bio);
> + if (unlikely(ret < 0)) {
> + atomic64_inc(&zram->stats.failed_writes);
This is !op_is_write() so we need to increment ->failed_reads
> + return ret;
> + }
> flush_dcache_page(bvec->bv_page);
> } else {
> ret = zram_bvec_write(zram, bvec, index, offset, bio);
> + if (unlikely(ret < 0)) {
> + atomic64_inc(&zram->stats.failed_reads);
and ->failed_writes here (see the corrected sketch below).
> + return ret;
> + }
> }
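Putting both fixups together, a corrected sketch of the hunk:

	if (!op_is_write(op)) {
		ret = zram_bvec_read(zram, bvec, index, offset, bio);
		if (unlikely(ret < 0)) {
			atomic64_inc(&zram->stats.failed_reads);
			return ret;
		}
		flush_dcache_page(bvec->bv_page);
	} else {
		ret = zram_bvec_write(zram, bvec, index, offset, bio);
		if (unlikely(ret < 0)) {
			atomic64_inc(&zram->stats.failed_writes);
			return ret;
		}
	}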
* [PATCH 06/16] zram: refactor highlevel read and write handling
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
Instead of having an outer loop in __zram_make_request and then branching out
for reads vs writes for each loop iteration in zram_bvec_rw, split the main
handler into separate zram_bio_read and zram_bio_write handlers that also
include the functionality formerly in zram_bvec_rw.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 58 ++++++++++++++++++-----------------
1 file changed, 30 insertions(+), 28 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 6196187e9ac36f..83991cb61edaf6 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1927,38 +1927,34 @@ static void zram_bio_discard(struct zram *zram, struct bio *bio)
bio_endio(bio);
}
-/*
- * Returns errno if it has some problem. Otherwise return 0 or 1.
- * Returns 0 if IO request was done synchronously
- * Returns 1 if IO request was successfully submitted.
- */
-static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
- int offset, enum req_op op, struct bio *bio)
+static void zram_bio_read(struct zram *zram, struct bio *bio)
{
- int ret;
+ struct bvec_iter iter;
+ struct bio_vec bv;
+ unsigned long start_time;
- if (!op_is_write(op)) {
- ret = zram_bvec_read(zram, bvec, index, offset, bio);
- if (unlikely(ret < 0)) {
+ start_time = bio_start_io_acct(bio);
+ bio_for_each_segment(bv, bio, iter) {
+ u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
+ u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
+ SECTOR_SHIFT;
+
+ if (zram_bvec_read(zram, &bv, index, offset, bio) < 0) {
atomic64_inc(&zram->stats.failed_writes);
- return ret;
- }
- flush_dcache_page(bvec->bv_page);
- } else {
- ret = zram_bvec_write(zram, bvec, index, offset, bio);
- if (unlikely(ret < 0)) {
- atomic64_inc(&zram->stats.failed_reads);
- return ret;
+ bio->bi_status = BLK_STS_IOERR;
+ break;
}
- }
+ flush_dcache_page(bv.bv_page);
- zram_slot_lock(zram, index);
- zram_accessed(zram, index);
- zram_slot_unlock(zram, index);
- return 0;
+ zram_slot_lock(zram, index);
+ zram_accessed(zram, index);
+ zram_slot_unlock(zram, index);
+ }
+ bio_end_io_acct(bio, start_time);
+ bio_endio(bio);
}
-static void __zram_make_request(struct zram *zram, struct bio *bio)
+static void zram_bio_write(struct zram *zram, struct bio *bio)
{
struct bvec_iter iter;
struct bio_vec bv;
@@ -1970,11 +1966,15 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
SECTOR_SHIFT;
- if (zram_bvec_rw(zram, &bv, index, offset, bio_op(bio),
- bio) < 0) {
+ if (zram_bvec_write(zram, &bv, index, offset, bio) < 0) {
+ atomic64_inc(&zram->stats.failed_reads);
bio->bi_status = BLK_STS_IOERR;
break;
}
+
+ zram_slot_lock(zram, index);
+ zram_accessed(zram, index);
+ zram_slot_unlock(zram, index);
}
bio_end_io_acct(bio, start_time);
bio_endio(bio);
@@ -1989,8 +1989,10 @@ static void zram_submit_bio(struct bio *bio)
switch (bio_op(bio)) {
case REQ_OP_READ:
+ zram_bio_read(zram, bio);
+ break;
case REQ_OP_WRITE:
- __zram_make_request(zram, bio);
+ zram_bio_write(zram, bio);
break;
case REQ_OP_DISCARD:
case REQ_OP_WRITE_ZEROES:
--
2.39.2
* Re: [PATCH 06/16] zram: refactor highlevel read and write handling
From: Sergey Senozhatsky @ 2023-04-06 2:42 UTC
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
> +static void zram_bio_read(struct zram *zram, struct bio *bio)
> {
> - int ret;
> + struct bvec_iter iter;
> + struct bio_vec bv;
> + unsigned long start_time;
>
> - if (!op_is_write(op)) {
> - ret = zram_bvec_read(zram, bvec, index, offset, bio);
> - if (unlikely(ret < 0)) {
> + start_time = bio_start_io_acct(bio);
> + bio_for_each_segment(bv, bio, iter) {
> + u32 index = iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
> + u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
> + SECTOR_SHIFT;
> +
> + if (zram_bvec_read(zram, &bv, index, offset, bio) < 0) {
> atomic64_inc(&zram->stats.failed_writes);
->failed_reads
> - return ret;
> - }
> - flush_dcache_page(bvec->bv_page);
> - } else {
> - ret = zram_bvec_write(zram, bvec, index, offset, bio);
> - if (unlikely(ret < 0)) {
> - atomic64_inc(&zram->stats.failed_reads);
> - return ret;
> + bio->bi_status = BLK_STS_IOERR;
> + break;
> }
> - }
> + flush_dcache_page(bv.bv_page);
>
> - zram_slot_lock(zram, index);
> - zram_accessed(zram, index);
> - zram_slot_unlock(zram, index);
> - return 0;
> + zram_slot_lock(zram, index);
> + zram_accessed(zram, index);
> + zram_slot_unlock(zram, index);
> + }
> + bio_end_io_acct(bio, start_time);
> + bio_endio(bio);
> }
>
> -static void __zram_make_request(struct zram *zram, struct bio *bio)
> +static void zram_bio_write(struct zram *zram, struct bio *bio)
> {
> struct bvec_iter iter;
> struct bio_vec bv;
> @@ -1970,11 +1966,15 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
> u32 offset = (iter.bi_sector & (SECTORS_PER_PAGE - 1)) <<
> SECTOR_SHIFT;
>
> - if (zram_bvec_rw(zram, &bv, index, offset, bio_op(bio),
> - bio) < 0) {
> + if (zram_bvec_write(zram, &bv, index, offset, bio) < 0) {
> + atomic64_inc(&zram->stats.failed_reads);
->failed_writes
> bio->bi_status = BLK_STS_IOERR;
> break;
> }
> +
> + zram_slot_lock(zram, index);
> + zram_accessed(zram, index);
> + zram_slot_unlock(zram, index);
> }
> bio_end_io_acct(bio, start_time);
> bio_endio(bio);
> @@ -1989,8 +1989,10 @@ static void zram_submit_bio(struct bio *bio)
>
> switch (bio_op(bio)) {
> case REQ_OP_READ:
> + zram_bio_read(zram, bio);
> + break;
> case REQ_OP_WRITE:
> - __zram_make_request(zram, bio);
> + zram_bio_write(zram, bio);
> break;
> case REQ_OP_DISCARD:
> case REQ_OP_WRITE_ZEROES:
> --
> 2.39.2
>
* [PATCH 07/16] zram: don't use highmem for the bounce buffer in zram_bvec_{read,write}
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
There is no point in allocating a highmem page when we immediately need to
copy from it.
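The underlying rule: a page allocated without __GFP_HIGHMEM always has a
kernel mapping, so it can be addressed directly rather than through a
transient kmap_atomic() mapping (a minimal sketch):

	struct page *page = alloc_page(GFP_NOIO);	/* never highmem */
	void *buf = page_address(page);			/* always valid here */

	memcpy_to_bvec(bvec, buf + offset);		/* no kmap needed */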
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 83991cb61edaf6..0e91e94f24cd09 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1437,7 +1437,7 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
page = bvec->bv_page;
if (is_partial_io(bvec)) {
/* Use a temporary buffer to decompress the page */
- page = alloc_page(GFP_NOIO|__GFP_HIGHMEM);
+ page = alloc_page(GFP_NOIO);
if (!page)
return -ENOMEM;
}
@@ -1446,12 +1446,8 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
if (unlikely(ret))
goto out;
- if (is_partial_io(bvec)) {
- void *src = kmap_atomic(page);
-
- memcpy_to_bvec(bvec, src + offset);
- kunmap_atomic(src);
- }
+ if (is_partial_io(bvec))
+ memcpy_to_bvec(bvec, page_address(page) + offset);
out:
if (is_partial_io(bvec))
__free_page(page);
@@ -1595,12 +1591,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
vec = *bvec;
if (is_partial_io(bvec)) {
- void *dst;
/*
* This is a partial IO. We need to read the full page
* before to write the changes.
*/
- page = alloc_page(GFP_NOIO|__GFP_HIGHMEM);
+ page = alloc_page(GFP_NOIO);
if (!page)
return -ENOMEM;
@@ -1608,9 +1603,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
if (ret)
goto out;
- dst = kmap_atomic(page);
- memcpy_from_bvec(dst + offset, bvec);
- kunmap_atomic(dst);
+ memcpy_from_bvec(page_address(page) + offset, bvec);
bvec_set_page(&vec, page, PAGE_SIZE, 0);
}
--
2.39.2
* [PATCH 08/16] zram: rename __zram_bvec_read to zram_read_page
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
__zram_bvec_read doesn't get passed a bvec, but always reads a whole page.
Rename it to make the usage clearer.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0e91e94f24cd09..6187e41a346fd8 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1403,8 +1403,8 @@ static int zram_read_from_zspool(struct zram *zram, struct page *page,
return ret;
}
-static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
- struct bio *bio, bool partial_io)
+static int zram_read_page(struct zram *zram, struct page *page, u32 index,
+ struct bio *bio, bool partial_io)
{
int ret;
@@ -1442,7 +1442,7 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
return -ENOMEM;
}
- ret = __zram_bvec_read(zram, page, index, bio, is_partial_io(bvec));
+ ret = zram_read_page(zram, page, index, bio, is_partial_io(bvec));
if (unlikely(ret))
goto out;
@@ -1599,7 +1599,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
if (!page)
return -ENOMEM;
- ret = __zram_bvec_read(zram, page, index, bio, true);
+ ret = zram_read_page(zram, page, index, bio, true);
if (ret)
goto out;
--
2.39.2
* [PATCH 09/16] zram: directly call zram_read_page in writeback_store
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
writeback_store always reads a full page, so just call zram_read_page
directly and bypass the bounce buffer handling.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 16 +++++-----------
1 file changed, 5 insertions(+), 11 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 6187e41a346fd8..9d2b6ef4638903 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -54,9 +54,8 @@ static size_t huge_class_size;
static const struct block_device_operations zram_devops;
static void zram_free_page(struct zram *zram, size_t index);
-static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
- u32 index, int offset, struct bio *bio);
-
+static int zram_read_page(struct zram *zram, struct page *page, u32 index,
+ struct bio *bio, bool partial_io);
static int zram_slot_trylock(struct zram *zram, u32 index)
{
@@ -671,10 +670,6 @@ static ssize_t writeback_store(struct device *dev,
}
for (; nr_pages != 0; index++, nr_pages--) {
- struct bio_vec bvec;
-
- bvec_set_page(&bvec, page, PAGE_SIZE, 0);
-
spin_lock(&zram->wb_limit_lock);
if (zram->wb_limit_enable && !zram->bd_wb_limit) {
spin_unlock(&zram->wb_limit_lock);
@@ -718,7 +713,7 @@ static ssize_t writeback_store(struct device *dev,
/* Need for hugepage writeback racing */
zram_set_flag(zram, index, ZRAM_IDLE);
zram_slot_unlock(zram, index);
- if (zram_bvec_read(zram, &bvec, index, 0, NULL)) {
+ if (zram_read_page(zram, page, index, NULL, false)) {
zram_slot_lock(zram, index);
zram_clear_flag(zram, index, ZRAM_UNDER_WB);
zram_clear_flag(zram, index, ZRAM_IDLE);
@@ -729,9 +724,8 @@ static ssize_t writeback_store(struct device *dev,
bio_init(&bio, zram->bdev, &bio_vec, 1,
REQ_OP_WRITE | REQ_SYNC);
bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9);
-
- bio_add_page(&bio, bvec.bv_page, bvec.bv_len,
- bvec.bv_offset);
+ bio_add_page(&bio, page, PAGE_SIZE, 0);
+
/*
* XXX: A single page IO would be inefficient for write
* but it would be not bad as starter.
--
2.39.2
* Re: [PATCH 09/16] zram: directly call zram_read_page in writeback_store
From: Sergey Senozhatsky @ 2023-04-06 2:56 UTC
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
>
> writeback_store always reads a full page, so just call zram_read_page
> directly and bypass the bounce buffer handling.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> ---
> drivers/block/zram/zram_drv.c | 16 +++++-----------
> 1 file changed, 5 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 6187e41a346fd8..9d2b6ef4638903 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -54,9 +54,8 @@ static size_t huge_class_size;
> static const struct block_device_operations zram_devops;
>
> static void zram_free_page(struct zram *zram, size_t index);
> -static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
> - u32 index, int offset, struct bio *bio);
> -
> +static int zram_read_page(struct zram *zram, struct page *page, u32 index,
> + struct bio *bio, bool partial_io);
>
> static int zram_slot_trylock(struct zram *zram, u32 index)
> {
> @@ -671,10 +670,6 @@ static ssize_t writeback_store(struct device *dev,
> }
>
> for (; nr_pages != 0; index++, nr_pages--) {
> - struct bio_vec bvec;
> -
> - bvec_set_page(&bvec, page, PAGE_SIZE, 0);
> -
> spin_lock(&zram->wb_limit_lock);
> if (zram->wb_limit_enable && !zram->bd_wb_limit) {
> spin_unlock(&zram->wb_limit_lock);
> @@ -718,7 +713,7 @@ static ssize_t writeback_store(struct device *dev,
> /* Need for hugepage writeback racing */
> zram_set_flag(zram, index, ZRAM_IDLE);
> zram_slot_unlock(zram, index);
> - if (zram_bvec_read(zram, &bvec, index, 0, NULL)) {
> + if (zram_read_page(zram, page, index, NULL, false)) {
In theory we can call zram_read_from_zspool() here, but zram_read_page()
is fine with me.
> zram_slot_lock(zram, index);
> zram_clear_flag(zram, index, ZRAM_UNDER_WB);
> zram_clear_flag(zram, index, ZRAM_IDLE);
> @@ -729,9 +724,8 @@ static ssize_t writeback_store(struct device *dev,
> bio_init(&bio, zram->bdev, &bio_vec, 1,
> REQ_OP_WRITE | REQ_SYNC);
> bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9);
> -
> - bio_add_page(&bio, bvec.bv_page, bvec.bv_len,
> - bvec.bv_offset);
> + bio_add_page(&bio, page, PAGE_SIZE, 0);
> +
> /*
> * XXX: A single page IO would be inefficient for write
> * but it would be not bad as starter.
> --
> 2.39.2
>
* [PATCH 10/16] zram: refactor zram_bdev_read
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
Split the partial read into a separate helper.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 40 +++++++++++++++++------------------
1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 9d2b6ef4638903..1d32290cd6d995 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1422,33 +1422,33 @@ static int zram_read_page(struct zram *zram, struct page *page, u32 index,
return ret;
}
-static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
- u32 index, int offset, struct bio *bio)
+/*
+ * Use a temporary buffer to decompress the page, as the decompressor
+ * always expects a full page for the output.
+ */
+static int zram_bvec_read_partial(struct zram *zram, struct bio_vec *bvec,
+ u32 index, int offset, struct bio *bio)
{
+ struct page *page = alloc_page(GFP_NOIO);
int ret;
- struct page *page;
-
- page = bvec->bv_page;
- if (is_partial_io(bvec)) {
- /* Use a temporary buffer to decompress the page */
- page = alloc_page(GFP_NOIO);
- if (!page)
- return -ENOMEM;
- }
-
- ret = zram_read_page(zram, page, index, bio, is_partial_io(bvec));
- if (unlikely(ret))
- goto out;
- if (is_partial_io(bvec))
+ if (!page)
+ return -ENOMEM;
+ ret = zram_read_page(zram, page, index, bio, true);
+ if (likely(!ret))
memcpy_to_bvec(bvec, page_address(page) + offset);
-out:
- if (is_partial_io(bvec))
- __free_page(page);
-
+ __free_page(page);
return ret;
}
+static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
+ u32 index, int offset, struct bio *bio)
+{
+ if (is_partial_io(bvec))
+ return zram_bvec_read_partial(zram, bvec, index, offset, bio);
+ return zram_read_page(zram, bvec->bv_page, index, bio, false);
+}
+
static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
u32 index, struct bio *bio)
{
--
2.39.2
* [PATCH 11/16] zram: don't pass a bvec to __zram_bvec_write
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
__zram_bvec_write only extracts the page from the bvec it is passed and
always expects a full page of input. Pass the page directly instead of the
bvec and rename the function to zram_write_page.
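The net effect on the signature, before and after this patch:

	/* before */
	static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
				     u32 index, struct bio *bio);
	/* after */
	static int zram_write_page(struct zram *zram, struct page *page,
				   u32 index);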
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 1d32290cd6d995..1706462495edda 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1449,8 +1449,7 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
return zram_read_page(zram, bvec->bv_page, index, bio, false);
}
-static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
- u32 index, struct bio *bio)
+static int zram_write_page(struct zram *zram, struct page *page, u32 index)
{
int ret = 0;
unsigned long alloced_pages;
@@ -1458,7 +1457,6 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
unsigned int comp_len = 0;
void *src, *dst, *mem;
struct zcomp_strm *zstrm;
- struct page *page = bvec->bv_page;
unsigned long element = 0;
enum zram_pageflags flags = 0;
@@ -1579,11 +1577,9 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
u32 index, int offset, struct bio *bio)
{
+ struct page *page = bvec->bv_page;
int ret;
- struct page *page = NULL;
- struct bio_vec vec;
- vec = *bvec;
if (is_partial_io(bvec)) {
/*
* This is a partial IO. We need to read the full page
@@ -1598,11 +1594,9 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
goto out;
memcpy_from_bvec(page_address(page) + offset, bvec);
-
- bvec_set_page(&vec, page, PAGE_SIZE, 0);
}
- ret = __zram_bvec_write(zram, &vec, index, bio);
+ ret = zram_write_page(zram, page, index);
out:
if (is_partial_io(bvec))
__free_page(page);
@@ -1717,7 +1711,7 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
/*
* No direct reclaim (slow path) for handle allocation and no
- * re-compression attempt (unlike in __zram_bvec_write()) since
+ * re-compression attempt (unlike in zram_write_page()) since
* we already have stored that object in zsmalloc. If we cannot
* alloc memory for recompressed object then we bail out and
* simply keep the old (existing) object in zsmalloc.
--
2.39.2
* [PATCH 12/16] zram: refactor zram_bdev_write
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
Split the read/modify/write case into a separate helper.
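The helper encapsulates the classic read/modify/write idiom for sub-page
writes; stripped of error handling it amounts to this sketch:

	page = alloc_page(GFP_NOIO);
	ret = zram_read_page(zram, page, index, bio, true);	/* read */
	if (!ret) {
		memcpy_from_bvec(page_address(page) + offset, bvec); /* modify */
		ret = zram_write_page(zram, page, index);	/* write back */
	}
	__free_page(page);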
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 38 +++++++++++++++++------------------
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 1706462495edda..9a3bee1dda01c0 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1574,33 +1574,33 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
return ret;
}
-static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
- u32 index, int offset, struct bio *bio)
+/*
+ * This is a partial IO. Read the full page before to writing the changes.
+ */
+static int zram_bvec_write_partial(struct zram *zram, struct bio_vec *bvec,
+ u32 index, int offset, struct bio *bio)
{
- struct page *page = bvec->bv_page;
+ struct page *page = alloc_page(GFP_NOIO);
int ret;
- if (is_partial_io(bvec)) {
- /*
- * This is a partial IO. We need to read the full page
- * before to write the changes.
- */
- page = alloc_page(GFP_NOIO);
- if (!page)
- return -ENOMEM;
-
- ret = zram_read_page(zram, page, index, bio, true);
- if (ret)
- goto out;
+ if (!page)
+ return -ENOMEM;
+ ret = zram_read_page(zram, page, index, bio, true);
+ if (!ret) {
memcpy_from_bvec(page_address(page) + offset, bvec);
+ ret = zram_write_page(zram, page, index);
}
+ __free_page(page);
+ return ret;
+}
- ret = zram_write_page(zram, page, index);
-out:
+static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
+ u32 index, int offset, struct bio *bio)
+{
if (is_partial_io(bvec))
- __free_page(page);
- return ret;
+ return zram_bvec_write_partial(zram, bvec, index, offset, bio);
+ return zram_write_page(zram, bvec->bv_page, index);
}
#ifdef CONFIG_ZRAM_MULTI_COMP
--
2.39.2
* Re: [PATCH 12/16] zram: refactor zram_bdev_write
From: Sergey Senozhatsky @ 2023-04-06 5:15 UTC
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
>
> -static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
> - u32 index, int offset, struct bio *bio)
> +/*
> + * This is a partial IO. Read the full page before to writing the changes.
A super nit: double spaces and "before writing"?
* Re: [PATCH 12/16] zram: refactor zram_bdev_write
From: Christoph Hellwig @ 2023-04-06 5:45 UTC
To: Sergey Senozhatsky
Cc: Christoph Hellwig, Minchan Kim, Jens Axboe, linux-block
On Thu, Apr 06, 2023 at 02:15:05PM +0900, Sergey Senozhatsky wrote:
> > -static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
> > - u32 index, int offset, struct bio *bio)
> > +/*
> > + * This is a partial IO. Read the full page before to writing the changes.
>
> A super nit: double spaces and "before writing"?
double space after "." is the usual style for monospace fonts such as code.
The "to" is indeed superfluous and a leftover from the previous comment.
* Re: [PATCH 12/16] zram: refactor zram_bdev_write
From: Sergey Senozhatsky @ 2023-04-06 6:30 UTC
To: Christoph Hellwig
Cc: Sergey Senozhatsky, Christoph Hellwig, Minchan Kim, Jens Axboe,
linux-block
On (23/04/05 22:45), Christoph Hellwig wrote:
> On Thu, Apr 06, 2023 at 02:15:05PM +0900, Sergey Senozhatsky wrote:
> > > -static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
> > > - u32 index, int offset, struct bio *bio)
> > > +/*
> > > + * This is a partial IO. Read the full page before to writing the changes.
> >
> > A super nit: double spaces and "before writing"?
>
> double space afrer . is the usual style more monospace fonts such
> as code.
OK, yeah I've noticed that in the commit messages as well.
* [PATCH 13/16] zram: pass a page to read_from_bdev
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
read_from_bdev always reads a whole page, so pass a page to it instead
of the bvec and remove the now pointless zram_bvec_read_from_bdev
wrapper.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 46 +++++++++++++----------------------
1 file changed, 17 insertions(+), 29 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 9a3bee1dda01c0..762f617d8ba9ad 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -587,7 +587,7 @@ static void zram_page_end_io(struct bio *bio)
/*
* Returns 1 if the submission is successful.
*/
-static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
+static int read_from_bdev_async(struct zram *zram, struct page *page,
unsigned long entry, struct bio *parent)
{
struct bio *bio;
@@ -598,7 +598,7 @@ static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
return -ENOMEM;
bio->bi_iter.bi_sector = entry * (PAGE_SIZE >> 9);
- if (!bio_add_page(bio, bvec->bv_page, bvec->bv_len, bvec->bv_offset)) {
+ if (!bio_add_page(bio, page, PAGE_SIZE, 0)) {
bio_put(bio);
return -EIO;
}
@@ -794,7 +794,7 @@ struct zram_work {
struct zram *zram;
unsigned long entry;
struct bio *bio;
- struct bio_vec bvec;
+ struct page *page;
};
#if PAGE_SIZE != 4096
@@ -805,7 +805,7 @@ static void zram_sync_read(struct work_struct *work)
unsigned long entry = zw->entry;
struct bio *bio = zw->bio;
- read_from_bdev_async(zram, &zw->bvec, entry, bio);
+ read_from_bdev_async(zram, zw->page, entry, bio);
}
/*
@@ -813,12 +813,12 @@ static void zram_sync_read(struct work_struct *work)
* chained IO with parent IO in same context, it's a deadlock. To avoid that,
* use a worker thread context.
*/
-static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec,
+static int read_from_bdev_sync(struct zram *zram, struct page *page,
unsigned long entry, struct bio *bio)
{
struct zram_work work;
- work.bvec = *bvec;
+ work.page = page;
work.zram = zram;
work.entry = entry;
work.bio = bio;
@@ -831,7 +831,7 @@ static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec,
return 1;
}
#else
-static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec,
+static int read_from_bdev_sync(struct zram *zram, struct page *page,
unsigned long entry, struct bio *bio)
{
WARN_ON(1);
@@ -839,18 +839,17 @@ static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec,
}
#endif
-static int read_from_bdev(struct zram *zram, struct bio_vec *bvec,
+static int read_from_bdev(struct zram *zram, struct page *page,
unsigned long entry, struct bio *parent, bool sync)
{
atomic64_inc(&zram->stats.bd_reads);
if (sync)
- return read_from_bdev_sync(zram, bvec, entry, parent);
- else
- return read_from_bdev_async(zram, bvec, entry, parent);
+ return read_from_bdev_sync(zram, page, entry, parent);
+ return read_from_bdev_async(zram, page, entry, parent);
}
#else
static inline void reset_bdev(struct zram *zram) {};
-static int read_from_bdev(struct zram *zram, struct bio_vec *bvec,
+static int read_from_bdev(struct zram *zram, struct page *page,
unsigned long entry, struct bio *parent, bool sync)
{
return -EIO;
@@ -1334,20 +1333,6 @@ static void zram_free_page(struct zram *zram, size_t index)
~(1UL << ZRAM_LOCK | 1UL << ZRAM_UNDER_WB));
}
-/*
- * Reads a page from the writeback devices. Corresponding ZRAM slot
- * should be unlocked.
- */
-static int zram_bvec_read_from_bdev(struct zram *zram, struct page *page,
- u32 index, struct bio *bio, bool partial_io)
-{
- struct bio_vec bvec;
-
- bvec_set_page(&bvec, page, PAGE_SIZE, 0);
- return read_from_bdev(zram, &bvec, zram_get_element(zram, index), bio,
- partial_io);
-}
-
/*
* Reads (decompresses if needed) a page from zspool (zsmalloc).
* Corresponding ZRAM slot should be locked.
@@ -1408,11 +1393,14 @@ static int zram_read_page(struct zram *zram, struct page *page, u32 index,
ret = zram_read_from_zspool(zram, page, index);
zram_slot_unlock(zram, index);
} else {
- /* Slot should be unlocked before the function call */
+ /*
+ * The slot should be unlocked before reading from the backing
+ * device.
+ */
zram_slot_unlock(zram, index);
- ret = zram_bvec_read_from_bdev(zram, page, index, bio,
- partial_io);
+ ret = read_from_bdev(zram, page, zram_get_element(zram, index),
+ bio, partial_io);
}
/* Should NEVER happen. Return bio error if it does. */
--
2.39.2
* [PATCH 14/16] zram: don't return errors from read_from_bdev_async
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
bio_alloc will never return a NULL bio when it is allowed to sleep, and
adding a single page to a bio with a single vector also can't fail, so switch
to the asserting __bio_add_page variant and drop the error returns.
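In short, the guarantees this relies on (sketch):

	/* a sleeping allocation never returns NULL ... */
	bio = bio_alloc(zram->bdev, 1, opf, GFP_NOIO);
	/* ... and the single reserved vector slot cannot overflow */
	__bio_add_page(bio, page, PAGE_SIZE, 0);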
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 762f617d8ba9ad..b9b14165e651f9 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -584,24 +584,16 @@ static void zram_page_end_io(struct bio *bio)
bio_put(bio);
}
-/*
- * Returns 1 if the submission is successful.
- */
-static int read_from_bdev_async(struct zram *zram, struct page *page,
+static void read_from_bdev_async(struct zram *zram, struct page *page,
unsigned long entry, struct bio *parent)
{
struct bio *bio;
bio = bio_alloc(zram->bdev, 1, parent ? parent->bi_opf : REQ_OP_READ,
GFP_NOIO);
- if (!bio)
- return -ENOMEM;
bio->bi_iter.bi_sector = entry * (PAGE_SIZE >> 9);
- if (!bio_add_page(bio, page, PAGE_SIZE, 0)) {
- bio_put(bio);
- return -EIO;
- }
+ __bio_add_page(bio, page, PAGE_SIZE, 0);
if (!parent)
bio->bi_end_io = zram_page_end_io;
@@ -609,7 +601,6 @@ static int read_from_bdev_async(struct zram *zram, struct page *page,
bio_chain(bio, parent);
submit_bio(bio);
- return 1;
}
#define PAGE_WB_SIG "page_index="
@@ -845,7 +836,8 @@ static int read_from_bdev(struct zram *zram, struct page *page,
atomic64_inc(&zram->stats.bd_reads);
if (sync)
return read_from_bdev_sync(zram, page, entry, parent);
- return read_from_bdev_async(zram, page, entry, parent);
+ read_from_bdev_async(zram, page, entry, parent);
+ return 1;
}
#else
static inline void reset_bdev(struct zram *zram) {};
--
2.39.2
* [PATCH 15/16] zram: fix synchronous reads
From: Christoph Hellwig @ 2023-04-04 15:05 UTC
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
Currently nothing waits for the synchronous reads before accessing
the data. Switch them to an on-stack bio and submit_bio_wait to
make sure the I/O has actually completed when the work item has been
flushed. This also removes the call to page_endio that would unlock
a page that has never been locked.
Drop the partial_io/sync flag, as chaining only makes sense for the
asynchronous reads of the entire page.
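The synchronous path thus boils down to the standard on-stack bio idiom,
which does not return until the read has completed (sketch):

	struct bio_vec bv;
	struct bio bio;

	bio_init(&bio, bdev, &bv, 1, REQ_OP_READ);
	bio.bi_iter.bi_sector = entry * (PAGE_SIZE >> 9);
	__bio_add_page(&bio, page, PAGE_SIZE, 0);
	submit_bio_wait(&bio);	/* sleeps until the bio completes */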
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 64 +++++++++++++----------------------
1 file changed, 24 insertions(+), 40 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index b9b14165e651f9..0a98a0ba14607a 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -55,7 +55,7 @@ static const struct block_device_operations zram_devops;
static void zram_free_page(struct zram *zram, size_t index);
static int zram_read_page(struct zram *zram, struct page *page, u32 index,
- struct bio *bio, bool partial_io);
+ struct bio *parent);
static int zram_slot_trylock(struct zram *zram, u32 index)
{
@@ -575,31 +575,15 @@ static void free_block_bdev(struct zram *zram, unsigned long blk_idx)
atomic64_dec(&zram->stats.bd_count);
}
-static void zram_page_end_io(struct bio *bio)
-{
- struct page *page = bio_first_page_all(bio);
-
- page_endio(page, op_is_write(bio_op(bio)),
- blk_status_to_errno(bio->bi_status));
- bio_put(bio);
-}
-
static void read_from_bdev_async(struct zram *zram, struct page *page,
unsigned long entry, struct bio *parent)
{
struct bio *bio;
- bio = bio_alloc(zram->bdev, 1, parent ? parent->bi_opf : REQ_OP_READ,
- GFP_NOIO);
-
+ bio = bio_alloc(zram->bdev, 1, parent->bi_opf, GFP_NOIO);
bio->bi_iter.bi_sector = entry * (PAGE_SIZE >> 9);
__bio_add_page(bio, page, PAGE_SIZE, 0);
-
- if (!parent)
- bio->bi_end_io = zram_page_end_io;
- else
- bio_chain(bio, parent);
-
+ bio_chain(bio, parent);
submit_bio(bio);
}
@@ -704,7 +688,7 @@ static ssize_t writeback_store(struct device *dev,
/* Need for hugepage writeback racing */
zram_set_flag(zram, index, ZRAM_IDLE);
zram_slot_unlock(zram, index);
- if (zram_read_page(zram, page, index, NULL, false)) {
+ if (zram_read_page(zram, page, index, NULL)) {
zram_slot_lock(zram, index);
zram_clear_flag(zram, index, ZRAM_UNDER_WB);
zram_clear_flag(zram, index, ZRAM_IDLE);
@@ -780,23 +764,24 @@ static ssize_t writeback_store(struct device *dev,
return ret;
}
+#if PAGE_SIZE != 4096
struct zram_work {
struct work_struct work;
struct zram *zram;
unsigned long entry;
- struct bio *bio;
struct page *page;
};
-#if PAGE_SIZE != 4096
static void zram_sync_read(struct work_struct *work)
{
struct zram_work *zw = container_of(work, struct zram_work, work);
- struct zram *zram = zw->zram;
- unsigned long entry = zw->entry;
- struct bio *bio = zw->bio;
+ struct bio_vec bv;
+ struct bio bio;
- read_from_bdev_async(zram, zw->page, entry, bio);
+ bio_init(&bio, zw->zram->bdev, &bv, 1, REQ_OP_READ);
+ bio.bi_iter.bi_sector = zw->entry * (PAGE_SIZE >> 9);
+ __bio_add_page(&bio, zw->page, PAGE_SIZE, 0);
+ submit_bio_wait(&bio);
}
/*
@@ -805,14 +790,13 @@ static void zram_sync_read(struct work_struct *work)
* use a worker thread context.
*/
static int read_from_bdev_sync(struct zram *zram, struct page *page,
- unsigned long entry, struct bio *bio)
+ unsigned long entry)
{
struct zram_work work;
work.page = page;
work.zram = zram;
work.entry = entry;
- work.bio = bio;
INIT_WORK_ONSTACK(&work.work, zram_sync_read);
queue_work(system_unbound_wq, &work.work);
@@ -823,7 +807,7 @@ static int read_from_bdev_sync(struct zram *zram, struct page *page,
}
#else
static int read_from_bdev_sync(struct zram *zram, struct page *page,
- unsigned long entry, struct bio *bio)
+ unsigned long entry)
{
WARN_ON(1);
return -EIO;
@@ -831,18 +815,18 @@ static int read_from_bdev_sync(struct zram *zram, struct page *page,
#endif
static int read_from_bdev(struct zram *zram, struct page *page,
- unsigned long entry, struct bio *parent, bool sync)
+ unsigned long entry, struct bio *parent)
{
atomic64_inc(&zram->stats.bd_reads);
- if (sync)
- return read_from_bdev_sync(zram, page, entry, parent);
+ if (!parent)
+ return read_from_bdev_sync(zram, page, entry);
read_from_bdev_async(zram, page, entry, parent);
return 1;
}
#else
static inline void reset_bdev(struct zram *zram) {};
static int read_from_bdev(struct zram *zram, struct page *page,
- unsigned long entry, struct bio *parent, bool sync)
+ unsigned long entry, struct bio *parent)
{
return -EIO;
}
@@ -1375,7 +1359,7 @@ static int zram_read_from_zspool(struct zram *zram, struct page *page,
}
static int zram_read_page(struct zram *zram, struct page *page, u32 index,
- struct bio *bio, bool partial_io)
+ struct bio *parent)
{
int ret;
@@ -1392,7 +1376,7 @@ static int zram_read_page(struct zram *zram, struct page *page, u32 index,
zram_slot_unlock(zram, index);
ret = read_from_bdev(zram, page, zram_get_element(zram, index),
- bio, partial_io);
+ parent);
}
/* Should NEVER happen. Return bio error if it does. */
@@ -1407,14 +1391,14 @@ static int zram_read_page(struct zram *zram, struct page *page, u32 index,
* always expects a full page for the output.
*/
static int zram_bvec_read_partial(struct zram *zram, struct bio_vec *bvec,
- u32 index, int offset, struct bio *bio)
+ u32 index, int offset)
{
struct page *page = alloc_page(GFP_NOIO);
int ret;
if (!page)
return -ENOMEM;
- ret = zram_read_page(zram, page, index, bio, true);
+ ret = zram_read_page(zram, page, index, NULL);
if (likely(!ret))
memcpy_to_bvec(bvec, page_address(page) + offset);
__free_page(page);
@@ -1425,8 +1409,8 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
u32 index, int offset, struct bio *bio)
{
if (is_partial_io(bvec))
- return zram_bvec_read_partial(zram, bvec, index, offset, bio);
- return zram_read_page(zram, bvec->bv_page, index, bio, false);
+ return zram_bvec_read_partial(zram, bvec, index, offset);
+ return zram_read_page(zram, bvec->bv_page, index, bio);
}
static int zram_write_page(struct zram *zram, struct page *page, u32 index)
@@ -1566,7 +1550,7 @@ static int zram_bvec_write_partial(struct zram *zram, struct bio_vec *bvec,
if (!page)
return -ENOMEM;
- ret = zram_read_page(zram, page, index, bio, true);
+ ret = zram_read_page(zram, page, index, bio);
if (!ret) {
memcpy_from_bvec(page_address(page) + offset, bvec);
ret = zram_write_page(zram, page, index);
--
2.39.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: [PATCH 15/16] zram: fix synchronous reads
2023-04-04 15:05 ` [PATCH 15/16] zram: fix synchronous reads Christoph Hellwig
@ 2023-04-06 6:41 ` Sergey Senozhatsky
0 siblings, 0 replies; 46+ messages in thread
From: Sergey Senozhatsky @ 2023-04-06 6:41 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
>
> Currently nothing waits for the synchronous reads before accessing
> the data. Switch them to an on-stack bio and submit_bio_wait to
> make sure the I/O has actually completed when the work item has been
> flushed. This also removes the call to page_endio that would unlock
> a page that has never been locked.
>
> Drop the partial_io/sync flag, as chaining only makes sense for the
> asynchronous reads of the entire page.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH 16/16] zram: return errors from read_from_bdev_sync
2023-04-04 15:05 zram I/O path cleanups and fixups Christoph Hellwig
` (14 preceding siblings ...)
2023-04-04 15:05 ` [PATCH 15/16] zram: fix synchronous reads Christoph Hellwig
@ 2023-04-04 15:05 ` Christoph Hellwig
2023-04-06 5:36 ` Sergey Senozhatsky
2023-04-06 6:43 ` zram I/O path cleanups and fixups Sergey Senozhatsky
2023-04-06 6:55 ` Sergey Senozhatsky
17 siblings, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2023-04-04 15:05 UTC (permalink / raw)
To: Minchan Kim, Sergey Senozhatsky; +Cc: Jens Axboe, linux-block
Propagate read errors to the caller instead of dropping them on
the floor, and stop returning the somewhat dangerous 1 on success
from read_from_bdev*.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/block/zram/zram_drv.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
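The error travels back through the on-stack work item: the worker stores it in a field that the submitter reads only after flush_work() has returned. A minimal sketch of the pattern (the names below are shortened stand-ins for this illustration, and do_sync_io() is a placeholder for the real submit_bio_wait() call, not an existing function):
struct sync_work {
	struct work_struct work;
	int error;			/* written by the worker */
};

static void sync_work_fn(struct work_struct *work)
{
	struct sync_work *w = container_of(work, struct sync_work, work);

	w->error = do_sync_io();	/* placeholder for submit_bio_wait() */
}

static int run_sync_work(void)
{
	struct sync_work w = { .error = 0 };

	INIT_WORK_ONSTACK(&w.work, sync_work_fn);
	queue_work(system_unbound_wq, &w.work);
	/* flush_work() guarantees sync_work_fn() has finished writing w.error */
	flush_work(&w.work);
	destroy_work_on_stack(&w.work);
	return w.error;
}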
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0a98a0ba14607a..0be1633b0f4e94 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -770,6 +770,7 @@ struct zram_work {
struct zram *zram;
unsigned long entry;
struct page *page;
+ int error;
};
static void zram_sync_read(struct work_struct *work)
@@ -781,7 +782,7 @@ static void zram_sync_read(struct work_struct *work)
bio_init(&bio, zw->zram->bdev, &bv, 1, REQ_OP_READ);
bio.bi_iter.bi_sector = zw->entry * (PAGE_SIZE >> 9);
__bio_add_page(&bio, zw->page, PAGE_SIZE, 0);
- submit_bio_wait(&bio);
+ zw->error = submit_bio_wait(&bio);
}
/*
@@ -803,7 +804,7 @@ static int read_from_bdev_sync(struct zram *zram, struct page *page,
flush_work(&work.work);
destroy_work_on_stack(&work.work);
- return 1;
+ return work.error;
}
#else
static int read_from_bdev_sync(struct zram *zram, struct page *page,
@@ -821,7 +822,7 @@ static int read_from_bdev(struct zram *zram, struct page *page,
if (!parent)
return read_from_bdev_sync(zram, page, entry);
read_from_bdev_async(zram, page, entry, parent);
- return 1;
+ return 0;
}
#else
static inline void reset_bdev(struct zram *zram) {};
--
2.39.2
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: zram I/O path cleanups and fixups
2023-04-04 15:05 zram I/O path cleanups and fixups Christoph Hellwig
` (15 preceding siblings ...)
2023-04-04 15:05 ` [PATCH 16/16] zram: return errors from read_from_bdev_sync Christoph Hellwig
@ 2023-04-06 6:43 ` Sergey Senozhatsky
2023-04-06 6:46 ` Sergey Senozhatsky
2023-04-06 6:55 ` Sergey Senozhatsky
17 siblings, 1 reply; 46+ messages in thread
From: Sergey Senozhatsky @ 2023-04-06 6:43 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
>
> I don't actually have a large PAGE_SIZE system to test that path on.
Likewise. Don't have a large PAGE_SIZE system, so tested on a 4k x86_64.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: zram I/O path cleanups and fixups
2023-04-06 6:43 ` zram I/O path cleanups and fixups Sergey Senozhatsky
@ 2023-04-06 6:46 ` Sergey Senozhatsky
2023-04-06 6:51 ` Sergey Senozhatsky
0 siblings, 1 reply; 46+ messages in thread
From: Sergey Senozhatsky @ 2023-04-06 6:46 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Minchan Kim, Jens Axboe, linux-block, Sergey Senozhatsky
On (23/04/06 15:43), Sergey Senozhatsky wrote:
> On (23/04/04 17:05), Christoph Hellwig wrote:
> >
> > I don't actually have a large PAGE_SIZE system to test that path on.
>
> Likewise. Don't have a large PAGE_SIZE system, so tested on a 4k x86_64.
Hold on... Something isn't right with the writeback test. It doesn't
look like reads of ZRAM_WB pages work.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: zram I/O path cleanups and fixups
2023-04-06 6:46 ` Sergey Senozhatsky
@ 2023-04-06 6:51 ` Sergey Senozhatsky
0 siblings, 0 replies; 46+ messages in thread
From: Sergey Senozhatsky @ 2023-04-06 6:51 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Minchan Kim, Jens Axboe, linux-block, Sergey Senozhatsky
On (23/04/06 15:46), Sergey Senozhatsky wrote:
> On (23/04/06 15:43), Sergey Senozhatsky wrote:
> > On (23/04/04 17:05), Christoph Hellwig wrote:
> > >
> > > I don't actually have a large PAGE_SIZE system to test that path on.
> >
> > Likewise. Don't have a large PAGE_SIZE system, so tested on a 4k x86_64.
>
> Hold on... Something isn't right with the writeback test. It doesn't
> look like reads of ZRAM_WB pages work.
False alarm, sorry for the noise!
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: zram I/O path cleanups and fixups
2023-04-04 15:05 zram I/O path cleanups and fixups Christoph Hellwig
` (16 preceding siblings ...)
2023-04-06 6:43 ` zram I/O path cleanups and fixups Sergey Senozhatsky
@ 2023-04-06 6:55 ` Sergey Senozhatsky
17 siblings, 0 replies; 46+ messages in thread
From: Sergey Senozhatsky @ 2023-04-06 6:55 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Minchan Kim, Sergey Senozhatsky, Jens Axboe, linux-block
On (23/04/04 17:05), Christoph Hellwig wrote:
> Hi all,
>
> this series cleans up the zram I/O path, and fixes the handling of
> synchronous I/O to the underlying device in the writeback_store
> function or for > 4K PAGE_SIZE systems.
>
> The fixes are at the end, as I could not fully reason about them being
> safe before untangling the callchain.
>
> This survives xfstests on two zram devices on x86, but I don't actually
> have a large PAGE_SIZE system to test that path on.
Christoph, if you'll do v2 please Cc Andrew on it.
^ permalink raw reply [flat|nested] 46+ messages in thread