linux-fsdevel.vger.kernel.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: Dave Kleikamp <shaggy@kernel.org>
Cc: jfs-discussion@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, Jan Kara <jack@suse.cz>
Subject: [PATCH v2 14/13] jfs: Stop using PG_error
Date: Thu, 18 Apr 2024 15:37:59 +0100	[thread overview]
Message-ID: <ZiEwRzu_H3pfs5pa@casper.infradead.org> (raw)
In-Reply-To: <20240417175659.818299-1-willy@infradead.org>

Jan pointed out that I'm really close to being able to remove PG_error
entirely, with just jfs and btrfs still testing the flag.  So here's an
attempt to remove the use of PG_error from JFS.  We only need to
remember the 'status' if we have multiple metapage blocks per host page,
so I keep it in the meta_anchor.

What do you think?

diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
index 19854bd8dfea..df575a873ec6 100644
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -76,6 +76,7 @@ static mempool_t *metapage_mempool;
 struct meta_anchor {
 	int mp_count;
 	atomic_t io_count;
+	blk_status_t status;
 	struct metapage *mp[MPS_PER_PAGE];
 };
 
@@ -138,12 +139,16 @@ static inline void inc_io(struct folio *folio)
 	atomic_inc(&anchor->io_count);
 }
 
-static inline void dec_io(struct folio *folio, void (*handler) (struct folio *))
+static inline void dec_io(struct folio *folio, blk_status_t status,
+		void (*handler)(struct folio *, blk_status_t))
 {
 	struct meta_anchor *anchor = folio->private;
 
+	if (anchor->status == BLK_STS_OK)
+		anchor->status = status;
+
 	if (atomic_dec_and_test(&anchor->io_count))
-		handler(folio);
+		handler(folio, anchor->status);
 }
 
 #else
@@ -168,7 +173,7 @@ static inline void remove_metapage(struct folio *folio, struct metapage *mp)
 }
 
 #define inc_io(folio) do {} while(0)
-#define dec_io(folio, handler) handler(folio)
+#define dec_io(folio, status, handler) handler(folio, status)
 
 #endif
 
@@ -258,23 +263,20 @@ static sector_t metapage_get_blocks(struct inode *inode, sector_t lblock,
 	return lblock;
 }
 
-static void last_read_complete(struct folio *folio)
+static void last_read_complete(struct folio *folio, blk_status_t status)
 {
-	if (!folio_test_error(folio))
-		folio_mark_uptodate(folio);
-	folio_unlock(folio);
+	if (status)
+		printk(KERN_ERR "Read error %d at %#llx\n", status,
+				folio_pos(folio));
+
+	folio_end_read(folio, status == 0);
 }
 
 static void metapage_read_end_io(struct bio *bio)
 {
 	struct folio *folio = bio->bi_private;
 
-	if (bio->bi_status) {
-		printk(KERN_ERR "metapage_read_end_io: I/O error\n");
-		folio_set_error(folio);
-	}
-
-	dec_io(folio, last_read_complete);
+	dec_io(folio, bio->bi_status, last_read_complete);
 	bio_put(bio);
 }
 
@@ -300,11 +302,17 @@ static void remove_from_logsync(struct metapage *mp)
 	LOGSYNC_UNLOCK(log, flags);
 }
 
-static void last_write_complete(struct folio *folio)
+static void last_write_complete(struct folio *folio, blk_status_t status)
 {
 	struct metapage *mp;
 	unsigned int offset;
 
+	if (status) {
+		int err = blk_status_to_errno(status);
+		printk(KERN_ERR "metapage_write_end_io: I/O error\n");
+		mapping_set_error(folio->mapping, err);
+	}
+
 	for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
 		mp = folio_to_mp(folio, offset);
 		if (mp && test_bit(META_io, &mp->flag)) {
@@ -326,12 +334,7 @@ static void metapage_write_end_io(struct bio *bio)
 
 	BUG_ON(!folio->private);
 
-	if (bio->bi_status) {
-		int err = blk_status_to_errno(bio->bi_status);
-		printk(KERN_ERR "metapage_write_end_io: I/O error\n");
-		mapping_set_error(folio->mapping, err);
-	}
-	dec_io(folio, last_write_complete);
+	dec_io(folio, bio->bi_status, last_write_complete);
 	bio_put(bio);
 }
 
@@ -454,10 +457,10 @@ static int metapage_write_folio(struct folio *folio,
 		       4, bio, sizeof(*bio), 0);
 	bio_put(bio);
 	folio_unlock(folio);
-	dec_io(folio, last_write_complete);
+	dec_io(folio, BLK_STS_OK, last_write_complete);
 err_out:
 	while (bad_blocks--)
-		dec_io(folio, last_write_complete);
+		dec_io(folio, BLK_STS_OK, last_write_complete);
 	return -EIO;
 }
 

Thread overview: 16+ messages
2024-04-17 17:56 [PATCH v2 00/13] JFS folio conversion Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 01/13] jfs: Convert metapage_read_folio to use folio APIs Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 02/13] jfs: Convert metapage_writepage to metapage_write_folio Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 03/13] jfs: Convert __get_metapage to use a folio Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 04/13] jfs: Convert insert_metapage() to take " Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 05/13] jfs: Convert release_metapage to use " Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 06/13] jfs: Convert drop_metapage and remove_metapage to take " Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 07/13] jfs: Convert dec_io " Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 08/13] jfs: Convert __invalidate_metapages to use " Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 09/13] jfs: Convert page_to_mp to folio_to_mp Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 10/13] jfs: Convert inc_io to take a folio Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 11/13] jfs: Convert force_metapage to use " Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 12/13] jfs: Change metapage->page to metapage->folio Matthew Wilcox (Oracle)
2024-04-17 17:56 ` [PATCH v2 13/13] fs: Remove i_blocks_per_page Matthew Wilcox (Oracle)
2024-04-18 14:37 ` Matthew Wilcox [this message]
2024-05-24 21:42 ` [PATCH v2 00/13] JFS folio conversion Dave Kleikamp
