From: "Darrick J. Wong" <djwong@kernel.org>
To: Mark Brown <broonie@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>,
	Christoph Hellwig <hch@lst.de>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux Next Mailing List <linux-next@vger.kernel.org>
Subject: Re: linux-next: manual merge of the vfs-brauner tree with the vfs-brauner-fixes tree
Date: Wed, 25 Mar 2026 08:13:56 -0700
Message-ID: <20260325151356.GF6212@frogsfrogsfrogs>
In-Reply-To: <acPjQfoS-k2oDp3i@sirena.org.uk>

On Wed, Mar 25, 2026 at 01:29:37PM +0000, Mark Brown wrote:
> Hi all,
> 
> Today's linux-next merge of the vfs-brauner tree got a conflict in:
> 
>   fs/iomap/bio.c
> 
> between commit:
> 
>   f621324dfb3d6 ("iomap: fix lockdep complaint when reads fail")
> 
> from the vfs-brauner-fixes tree and commit:
> 
>   e8f9cf03c9dc9 ("iomap: support ioends for buffered reads")
> 
> from the vfs-brauner tree.
> 
> I fixed it up (see below) and can carry the fix as necessary. This
> is now fixed as far as linux-next is concerned, but any non-trivial
> conflicts should be mentioned to your upstream maintainer when your tree
> is submitted for merging.  You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.

That looks correct to me; thanks for pointing out the merge conflict. :)

--D

> diff --cc fs/iomap/bio.c
> index edd908183058f,f989ffcaac96d..0000000000000
> --- a/fs/iomap/bio.c
> +++ b/fs/iomap/bio.c
> @@@ -8,66 -9,33 +9,78 @@@
>   #include "internal.h"
>   #include "trace.h"
>   
>  +static DEFINE_SPINLOCK(failed_read_lock);
>  +static struct bio_list failed_read_list = BIO_EMPTY_LIST;
>  +
> - static void __iomap_read_end_io(struct bio *bio)
> + static u32 __iomap_read_end_io(struct bio *bio, int error)
>   {
> - 	int error = blk_status_to_errno(bio->bi_status);
>   	struct folio_iter fi;
> + 	u32 folio_count = 0;
>   
> - 	bio_for_each_folio_all(fi, bio)
> + 	bio_for_each_folio_all(fi, bio) {
>   		iomap_finish_folio_read(fi.folio, fi.offset, fi.length, error);
> + 		folio_count++;
> + 	}
> + 	if (bio_integrity(bio))
> + 		fs_bio_integrity_free(bio);
>   	bio_put(bio);
> + 	return folio_count;
>   }
>   
>  +static void
>  +iomap_fail_reads(
>  +	struct work_struct	*work)
>  +{
>  +	struct bio		*bio;
>  +	struct bio_list		tmp = BIO_EMPTY_LIST;
>  +	unsigned long		flags;
>  +
>  +	spin_lock_irqsave(&failed_read_lock, flags);
>  +	bio_list_merge_init(&tmp, &failed_read_list);
>  +	spin_unlock_irqrestore(&failed_read_lock, flags);
>  +
>  +	while ((bio = bio_list_pop(&tmp)) != NULL) {
> - 		__iomap_read_end_io(bio);
> ++		__iomap_read_end_io(bio, blk_status_to_errno(bio->bi_status));
>  +		cond_resched();
>  +	}
>  +}
>  +
>  +static DECLARE_WORK(failed_read_work, iomap_fail_reads);
>  +
>  +static void iomap_fail_buffered_read(struct bio *bio)
>  +{
>  +	unsigned long flags;
>  +
>  +	/*
>  +	 * Bounce I/O errors to a workqueue to avoid nested i_lock acquisitions
>  +	 * in the fserror code.  The caller no longer owns the bio reference
>  +	 * after the spinlock drops.
>  +	 */
>  +	spin_lock_irqsave(&failed_read_lock, flags);
>  +	if (bio_list_empty(&failed_read_list))
>  +		WARN_ON_ONCE(!schedule_work(&failed_read_work));
>  +	bio_list_add(&failed_read_list, bio);
>  +	spin_unlock_irqrestore(&failed_read_lock, flags);
>  +}
>  +
>   static void iomap_read_end_io(struct bio *bio)
>   {
>  -	__iomap_read_end_io(bio, blk_status_to_errno(bio->bi_status));
>  +	if (bio->bi_status) {
>  +		iomap_fail_buffered_read(bio);
>  +		return;
>  +	}
>  +
> - 	__iomap_read_end_io(bio);
> ++	__iomap_read_end_io(bio, 0);
>   }
>   
> - static void iomap_bio_submit_read(struct iomap_read_folio_ctx *ctx)
> ++
> + u32 iomap_finish_ioend_buffered_read(struct iomap_ioend *ioend)
> + {
> + 	return __iomap_read_end_io(&ioend->io_bio, ioend->io_error);
> + }
> + 
> + static void iomap_bio_submit_read(const struct iomap_iter *iter,
> + 		struct iomap_read_folio_ctx *ctx)
>   {
>   	struct bio *bio = ctx->read_ctx;
>   


