* [PATCH 14/13] iomap: Inline iomap_iop_set_range_uptodate into its one caller
[not found] <20200917151050.5363-1-willy@infradead.org>
From: Matthew Wilcox (Oracle) @ 2020-09-17 22:56 UTC (permalink / raw)
To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle), linux-xfs
iomap_set_range_uptodate() is the only caller of
iomap_iop_set_range_uptodate(), and inlining it makes future patches
simpler.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/iomap/buffered-io.c | 24 ++++++++++--------------
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 897ab9a26a74..2a6492b3c4db 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -135,8 +135,8 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
*lenp = plen;
}
-static void
-iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
+static
+void iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
{
struct iomap_page *iop = to_iomap_page(page);
struct inode *inode = page->mapping->host;
@@ -146,6 +146,14 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
unsigned long flags;
unsigned int i;
+ if (PageError(page))
+ return;
+
+ if (!iop) {
+ SetPageUptodate(page);
+ return;
+ }
+
spin_lock_irqsave(&iop->uptodate_lock, flags);
for (i = 0; i < PAGE_SIZE / i_blocksize(inode); i++) {
if (i >= first && i <= last)
@@ -159,18 +167,6 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
spin_unlock_irqrestore(&iop->uptodate_lock, flags);
}
-static void
-iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
-{
- if (PageError(page))
- return;
-
- if (page_has_private(page))
- iomap_iop_set_range_uptodate(page, off, len);
- else
- SetPageUptodate(page);
-}
-
static void
iomap_read_finish(struct iomap_page *iop, struct page *page)
{
--
2.28.0
* [PATCH 15/13] iomap: Inline iomap_read_finish into its one caller
From: Matthew Wilcox (Oracle) @ 2020-09-17 22:56 UTC (permalink / raw)
To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle), linux-xfs
iomap_read_page_end_io() is the only caller of iomap_read_finish(),
and inlining it makes future patches simpler.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/iomap/buffered-io.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 2a6492b3c4db..13b56d656337 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -167,13 +167,6 @@ void iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
spin_unlock_irqrestore(&iop->uptodate_lock, flags);
}
-static void
-iomap_read_finish(struct iomap_page *iop, struct page *page)
-{
- if (!iop || atomic_dec_and_test(&iop->read_count))
- unlock_page(page);
-}
-
static void
iomap_read_page_end_io(struct bio_vec *bvec, int error)
{
@@ -187,7 +180,8 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error)
iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
}
- iomap_read_finish(iop, page);
+ if (!iop || atomic_dec_and_test(&iop->read_count))
+ unlock_page(page);
}
static void
--
2.28.0
* [PATCH 16/13] iomap: Make readpage synchronous
From: Matthew Wilcox (Oracle) @ 2020-09-17 22:56 UTC (permalink / raw)
To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle), linux-xfs
A synchronous readpage lets us report the actual errno instead of
ineffectively setting PageError.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/iomap/buffered-io.c | 64 +++++++++++++++++++++++++-----------------
1 file changed, 38 insertions(+), 26 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 13b56d656337..aec95996bd4b 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -146,9 +146,6 @@ void iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
unsigned long flags;
unsigned int i;
- if (PageError(page))
- return;
-
if (!iop) {
SetPageUptodate(page);
return;
@@ -167,32 +164,41 @@ void iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
spin_unlock_irqrestore(&iop->uptodate_lock, flags);
}
-static void
-iomap_read_page_end_io(struct bio_vec *bvec, int error)
+struct iomap_sync_end {
+ blk_status_t status;
+ struct completion done;
+};
+
+static void iomap_read_page_end_io(struct bio_vec *bvec,
+ struct iomap_sync_end *end, bool error)
{
struct page *page = bvec->bv_page;
struct iomap_page *iop = to_iomap_page(page);
- if (unlikely(error)) {
- ClearPageUptodate(page);
- SetPageError(page);
- } else {
+ if (!error)
iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
- }
- if (!iop || atomic_dec_and_test(&iop->read_count))
- unlock_page(page);
+ if (!iop || atomic_dec_and_test(&iop->read_count)) {
+ if (end)
+ complete(&end->done);
+ else
+ unlock_page(page);
+ }
}
static void
iomap_read_end_io(struct bio *bio)
{
- int error = blk_status_to_errno(bio->bi_status);
+ struct iomap_sync_end *end = bio->bi_private;
struct bio_vec *bvec;
struct bvec_iter_all iter_all;
+ /* Capture the first error */
+ if (end && end->status == BLK_STS_OK)
+ end->status = bio->bi_status;
+
bio_for_each_segment_all(bvec, bio, iter_all)
- iomap_read_page_end_io(bvec, error);
+ iomap_read_page_end_io(bvec, end, bio->bi_status != BLK_STS_OK);
bio_put(bio);
}
@@ -201,6 +207,7 @@ struct iomap_readpage_ctx {
bool cur_page_in_bio;
struct bio *bio;
struct readahead_control *rac;
+ struct iomap_sync_end *end;
};
static void
@@ -307,6 +314,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
ctx->bio->bi_opf |= REQ_RAHEAD;
ctx->bio->bi_iter.bi_sector = sector;
bio_set_dev(ctx->bio, iomap->bdev);
+ ctx->bio->bi_private = ctx->end;
ctx->bio->bi_end_io = iomap_read_end_io;
}
@@ -324,22 +332,25 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
int
iomap_readpage(struct page *page, const struct iomap_ops *ops)
{
- struct iomap_readpage_ctx ctx = { .cur_page = page };
+ struct iomap_sync_end end;
+ struct iomap_readpage_ctx ctx = { .cur_page = page, .end = &end, };
struct inode *inode = page->mapping->host;
unsigned poff;
loff_t ret;
trace_iomap_readpage(page->mapping->host, 1);
+ end.status = BLK_STS_OK;
+ init_completion(&end.done);
+
for (poff = 0; poff < PAGE_SIZE; poff += ret) {
ret = iomap_apply(inode, page_offset(page) + poff,
PAGE_SIZE - poff, 0, ops, &ctx,
iomap_readpage_actor);
- if (ret <= 0) {
- WARN_ON_ONCE(ret == 0);
- SetPageError(page);
+ if (WARN_ON_ONCE(ret == 0))
+ ret = -EIO;
+ if (ret < 0)
break;
- }
}
if (ctx.bio) {
@@ -347,15 +358,16 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
WARN_ON_ONCE(!ctx.cur_page_in_bio);
} else {
WARN_ON_ONCE(ctx.cur_page_in_bio);
- unlock_page(page);
+ complete(&end.done);
}
- /*
- * Just like mpage_readahead and block_read_full_page we always
- * return 0 and just mark the page as PageError on errors. This
- * should be cleaned up all through the stack eventually.
- */
- return 0;
+ wait_for_completion(&end.done);
+ if (ret >= 0)
+ ret = blk_status_to_errno(end.status);
+ if (ret == 0)
+ return AOP_UPDATED_PAGE;
+ unlock_page(page);
+ return ret;
}
EXPORT_SYMBOL_GPL(iomap_readpage);
--
2.28.0
* Re: [PATCH 14/13] iomap: Inline iomap_iop_set_range_uptodate into its one caller
From: Christoph Hellwig @ 2020-09-19 6:31 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-fsdevel, linux-xfs
On Thu, Sep 17, 2020 at 11:56:45PM +0100, Matthew Wilcox (Oracle) wrote:
> iomap_set_range_uptodate() is the only caller of
> iomap_iop_set_range_uptodate(), and inlining it makes future patches
> simpler.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> fs/iomap/buffered-io.c | 24 ++++++++++--------------
> 1 file changed, 10 insertions(+), 14 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 897ab9a26a74..2a6492b3c4db 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -135,8 +135,8 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
> *lenp = plen;
> }
>
> -static void
> -iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
> +static
> +void iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
Please don't use such weird formatting for the prototype; the existing
one is perfectly fine, and the Linus-approved version would be the
static and the type on the same line.
* Re: [PATCH 15/13] iomap: Inline iomap_read_finish into its one caller
From: Christoph Hellwig @ 2020-09-19 6:31 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-fsdevel, linux-xfs
On Thu, Sep 17, 2020 at 11:56:46PM +0100, Matthew Wilcox (Oracle) wrote:
> iomap_read_page_end_io() is the only caller of iomap_read_finish(),
> and inlining it makes future patches simpler.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH 16/13] iomap: Make readpage synchronous
From: Christoph Hellwig @ 2020-09-19 6:39 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-fsdevel, linux-xfs
I think just adding the completion and status to struct
iomap_readpage_ctx would be a lot easier to follow, at the cost
of bloating the structure a bit for the readahead case. If we
are really concerned about that, the completion could be directly
on the iomap_readpage stack and we'd pass a pointer.
* Re: [PATCH 16/13] iomap: Make readpage synchronous
From: Christoph Hellwig @ 2020-09-19 6:43 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-fsdevel, linux-xfs
On Sat, Sep 19, 2020 at 07:39:08AM +0100, Christoph Hellwig wrote:
> I think just adding the completion and status to struct
> iomap_readpage_ctx would be a lot easier to follow, at the cost
> of bloating the structure a bit for the readahead case. If we
> are really concerned about that, the completion could be directly
> on the iomap_readpage stack and we'd pass a pointer.
Another option would be to chain the bios and use submit_bio_wait.
That would take care of the completion and the status propagation
without extra fields and extra code in iomap.
* Re: [PATCH 16/13] iomap: Make readpage synchronous
From: Matthew Wilcox @ 2020-09-19 17:03 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-fsdevel, linux-xfs
On Sat, Sep 19, 2020 at 07:43:08AM +0100, Christoph Hellwig wrote:
> On Sat, Sep 19, 2020 at 07:39:08AM +0100, Christoph Hellwig wrote:
> > I think just adding the completion and status to struct
> > iomap_readpage_ctx would be a lot easier to follow, at the cost
> > of bloating the structure a bit for the readahead case. If we
> > are really concerned about that, the completion could be directly
> > on the iomap_readpage stack and we'd pass a pointer.
>
> Another option would be to chain the bios and use submit_bio_wait.
> That would take care of the completion and the status propagation
> without extra fields and extra code in iomap.
But it wouldn't let us mark some blocks on the page as Uptodate, would it?
As I read the code, chaining two BIOs together will call the parent's
bi_end_io only once both the child and the parent BIOs have completed,
but at the point the parent's bi_end_io is called, the child bio has
already been put and we can't iterate over its bio_vec.
Maybe I misread the code; bio chaining does not seem to be well
documented.
* Re: [PATCH 16/13] iomap: Make readpage synchronous
From: Matthew Wilcox @ 2020-09-19 17:10 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-fsdevel, linux-xfs
On Sat, Sep 19, 2020 at 07:39:08AM +0100, Christoph Hellwig wrote:
> I think just adding the completion and status to struct
> iomap_readpage_ctx would be a lot easier to follow, at the cost
> of bloating the structure a bit for the readahead case. If we
> are really concerned about that, the completion could be directly
> on the iomap_readpage stack and we'd pass a pointer.
We could do that. I was intending to reuse the code for the write_begin
path so that a pathological case where a page straddles multiple extents
can be handled by sending multiple BIOs and waiting on all of them at
the same time, instead of the current way of sending a BIO, waiting for
it to complete, sending a second BIO, waiting for it to complete, ...
I haven't fully got my head around how to do that effectively yet.
The iomap buffered write path assumes that extents are larger than page
size and you're going to get multiple pages per extent, when the situation
could be reversed and we might need to stitch together multiple extents
in order to bring a page Uptodate. I also don't yet understand why
we read the current contents of a block when we're going to completely
overwrite it with data.