public inbox for linux-fsdevel@vger.kernel.org
 help / color / mirror / Atom feed
* Re: [Bug][xfstests xfs/556] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage
       [not found] <20260319194303.efw4wcu7c4idhthz@doltdoltdolt>
@ 2026-03-20  7:23 ` Christoph Hellwig
  2026-03-20 14:27   ` Darrick J. Wong
       [not found] ` <20260320163444.GE6223@frogsfrogsfrogs>
  1 sibling, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2026-03-20  7:23 UTC (permalink / raw)
  To: Zorro Lang; +Cc: linux-xfs, brauner, djwong, linux-fsdevel

On Fri, Mar 20, 2026 at 03:43:03AM +0800, Zorro Lang wrote:
> Hi,
> 
> While running fstests xfs/556 on kernel 7.0.0-rc4+ (HEAD=04a9f1766954), a
> lockdep warning was triggered indicating an inconsistent lock state for
> sb->s_type->i_lock_key.
> 
> The deadlock might occur because iomap_read_end_io (called from a hardware
> interrupt completion path) invokes fserror_report, which then calls igrab.
> igrab attempts to acquire the i_lock spinlock. However, the i_lock is frequently
> acquired in process context with interrupts enabled. If an interrupt occurs while
> a process holds the i_lock, and that interrupt handler calls fserror_report, the
> system deadlocks.
> 
> I hit this warning several times by running xfs/556 (mostly) or generic/648
> on xfs. More details are in the console log below.

I've seen the same.  AFAIK this is because the patch Darrick did to
offload all bio errors to a workqueue hasn't been merged upstream.
Unfortunately I don't remember the subject for that anymore.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Bug][xfstests xfs/556] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage
  2026-03-20  7:23 ` [Bug][xfstests xfs/556] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage Christoph Hellwig
@ 2026-03-20 14:27   ` Darrick J. Wong
  2026-03-23  6:15     ` Christoph Hellwig
  0 siblings, 1 reply; 8+ messages in thread
From: Darrick J. Wong @ 2026-03-20 14:27 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Zorro Lang, linux-xfs, brauner, linux-fsdevel

On Fri, Mar 20, 2026 at 12:23:55AM -0700, Christoph Hellwig wrote:
> On Fri, Mar 20, 2026 at 03:43:03AM +0800, Zorro Lang wrote:
> > Hi,
> > 
> > While running fstests xfs/556 on kernel 7.0.0-rc4+ (HEAD=04a9f1766954), a
> > lockdep warning was triggered indicating an inconsistent lock state for
> > sb->s_type->i_lock_key.
> > 
> > The deadlock might occur because iomap_read_end_io (called from a hardware
> > interrupt completion path) invokes fserror_report, which then calls igrab.
> > igrab attempts to acquire the i_lock spinlock. However, the i_lock is frequently
> > acquired in process context with interrupts enabled. If an interrupt occurs while
> > a process holds the i_lock, and that interrupt handler calls fserror_report, the
> > system deadlocks.
> > 
> > I hit this warning several times by running xfs/556 (mostly) or generic/648
> > on xfs. More details are in the console log below.
> 
> I've seen the same.  AFAIK this is because the patch Darrick did to
> offload all bio errors to a workqueue hasn't been merged upstream.
> Unfortunately I don't remember the subject for that anymore.

That was only for writeback ioends[1], which went upstream a couple of
weeks ago.  This report is for read(ahead) completions, but there isn't
a quick fix because (AFAIK) the readahead ctx is gone by the time we get
to the bio endio handler.  I think we'd have to allocate a new struct
{bio, list_head} in iomap_read_end_io and bump the
iomap_finish_folio_read calls to process context via queue_work().

--D

[1] https://lore.kernel.org/linux-fsdevel/177148129564.716249.3069780698231701540.stgit@frogsfrogsfrogs/

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Bug][xfstests xfs/556] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage
  2026-03-20 14:27   ` Darrick J. Wong
@ 2026-03-23  6:15     ` Christoph Hellwig
  0 siblings, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2026-03-23  6:15 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, Zorro Lang, linux-xfs, brauner, linux-fsdevel

On Fri, Mar 20, 2026 at 07:27:28AM -0700, Darrick J. Wong wrote:
> That was only for writeback ioends[1], which went upstream a couple of
> weeks ago.

Ah, right.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH] iomap: fix lockdep complaint when reads fail
       [not found]     ` <20260323152231.GG6223@frogsfrogsfrogs>
@ 2026-03-23 21:00       ` Darrick J. Wong
  2026-03-24  6:14         ` Christoph Hellwig
  2026-03-24  8:15         ` Christian Brauner
  0 siblings, 2 replies; 8+ messages in thread
From: Darrick J. Wong @ 2026-03-23 21:00 UTC (permalink / raw)
  To: Christian Brauner
  Cc: Christoph Hellwig, Zorro Lang, linux-xfs, Tal Zussman, axboe,
	linux-block, linux-fsdevel

From: Darrick J. Wong <djwong@kernel.org>

Zorro Lang reported the following lockdep splat:

"While running fstests xfs/556 on kernel 7.0.0-rc4+ (HEAD=04a9f1766954),
a lockdep warning was triggered indicating an inconsistent lock state
for sb->s_type->i_lock_key.

"The deadlock might occur because iomap_read_end_io (called from a
hardware interrupt completion path) invokes fserror_report, which then
calls igrab.  igrab attempts to acquire the i_lock spinlock. However,
the i_lock is frequently acquired in process context with interrupts
enabled. If an interrupt occurs while a process holds the i_lock, and
that interrupt handler calls fserror_report, the system deadlocks.

"I hit this warning several times by running xfs/556 (mostly) or
generic/648 on xfs. More details are in the console log below."

along with this dmesg, for which I've cleaned up the stacktraces:

 run fstests xfs/556 at 2026-03-18 20:05:30
 XFS (sda3): Mounting V5 Filesystem 396e9164-c45a-4e05-be9d-b38c2c5c6477
 XFS (sda3): Ending clean mount
 XFS (sda3): Unmounting Filesystem 396e9164-c45a-4e05-be9d-b38c2c5c6477
 XFS (sda3): Mounting V5 Filesystem bf3f89c3-3c45-4650-a9c7-744f39c0191e
 XFS (sda3): Ending clean mount
 XFS (sda3): Unmounting Filesystem bf3f89c3-3c45-4650-a9c7-744f39c0191e
 XFS (dm-0): Mounting V5 Filesystem bf3f89c3-3c45-4650-a9c7-744f39c0191e
 XFS (dm-0): Ending clean mount
 device-mapper: table: 253:0: adding target device (start sect 209 len 1) caused an alignment inconsistency
 device-mapper: table: 253:0: adding target device (start sect 210 len 62914350) caused an alignment inconsistency
 buffer_io_error: 6 callbacks suppressed
 Buffer I/O error on dev dm-0, logical block 209, async page read
 Buffer I/O error on dev dm-0, logical block 209, async page read
 XFS (dm-0): Unmounting Filesystem bf3f89c3-3c45-4650-a9c7-744f39c0191e
 XFS (dm-0): Mounting V5 Filesystem bf3f89c3-3c45-4650-a9c7-744f39c0191e
 XFS (dm-0): Ending clean mount

 ================================
 WARNING: inconsistent lock state
 7.0.0-rc4+ #1 Tainted: G S      W
 --------------------------------
 inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
 od/2368602 [HC1[1]:SC0[0]:HE0:SE1] takes:
 ff1100069f2b4a98 (&sb->s_type->i_lock_key#31){?.+.}-{3:3}, at: igrab+0x28/0x1a0
 {HARDIRQ-ON-W} state was registered at:
   __lock_acquire+0x40d/0xbd0
   lock_acquire.part.0+0xbd/0x260
   _raw_spin_lock+0x37/0x80
   unlock_new_inode+0x66/0x2a0
   xfs_iget+0x67b/0x7b0 [xfs]
   xfs_mountfs+0xde4/0x1c80 [xfs]
   xfs_fs_fill_super+0xe86/0x17a0 [xfs]
   get_tree_bdev_flags+0x312/0x590
   vfs_get_tree+0x8d/0x2f0
   vfs_cmd_create+0xb2/0x240
   __do_sys_fsconfig+0x3d8/0x9a0
   do_syscall_64+0x13a/0x1520
   entry_SYSCALL_64_after_hwframe+0x76/0x7e
 irq event stamp: 3118
 hardirqs last  enabled at (3117): [<ffffffffb54e4ad8>] _raw_spin_unlock_irq+0x28/0x50
 hardirqs last disabled at (3118): [<ffffffffb54b84c9>] common_interrupt+0x19/0xe0
 softirqs last  enabled at (3040): [<ffffffffb290ca28>] handle_softirqs+0x6b8/0x950
 softirqs last disabled at (3023): [<ffffffffb290ce4d>] __irq_exit_rcu+0xfd/0x250

 other info that might help us debug this:
  Possible unsafe locking scenario:

        CPU0
        ----
   lock(&sb->s_type->i_lock_key#31);
   <Interrupt>
     lock(&sb->s_type->i_lock_key#31);

  *** DEADLOCK ***

 1 lock held by od/2368602:
  #0: ff1100069f2b4b58 (&sb->s_type->i_mutex_key#19){++++}-{4:4}, at: xfs_ilock+0x324/0x4b0 [xfs]

 stack backtrace:
 CPU: 15 UID: 0 PID: 2368602 Comm: od Kdump: loaded Tainted: G S      W           7.0.0-rc4+ #1 PREEMPT(full)
 Tainted: [S]=CPU_OUT_OF_SPEC, [W]=WARN
 Hardware name: Dell Inc. PowerEdge R660/0R5JJC, BIOS 2.1.5 03/14/2024
 Call Trace:
  <IRQ>
  dump_stack_lvl+0x6f/0xb0
  print_usage_bug.part.0+0x230/0x2c0
  mark_lock_irq+0x3ce/0x5b0
  mark_lock+0x1cb/0x3d0
  mark_usage+0x109/0x120
  __lock_acquire+0x40d/0xbd0
  lock_acquire.part.0+0xbd/0x260
  _raw_spin_lock+0x37/0x80
  igrab+0x28/0x1a0
  fserror_report+0x127/0x2d0
  iomap_finish_folio_read+0x13c/0x280
  iomap_read_end_io+0x10e/0x2c0
  clone_endio+0x37e/0x780 [dm_mod]
  blk_update_request+0x448/0xf00
  scsi_end_request+0x74/0x750
  scsi_io_completion+0xe9/0x7c0
  _scsih_io_done+0x6ba/0x1ca0 [mpt3sas]
  _base_process_reply_queue+0x249/0x15b0 [mpt3sas]
  _base_interrupt+0x95/0xe0 [mpt3sas]
  __handle_irq_event_percpu+0x1f0/0x780
  handle_irq_event+0xa9/0x1c0
  handle_edge_irq+0x2ef/0x8a0
  __common_interrupt+0xa0/0x170
  common_interrupt+0xb7/0xe0
  </IRQ>
  <TASK>
  asm_common_interrupt+0x26/0x40
 RIP: 0010:_raw_spin_unlock_irq+0x2e/0x50
 Code: 0f 1f 44 00 00 53 48 8b 74 24 08 48 89 fb 48 83 c7 18 e8 b5 73 5e fd 48 89 df e8 ed e2 5e fd e8 08 78 8f fd fb bf 01 00 00 00 <e8> 8d 56 4d fd 65 8b 05 46 d5 1d 03 85 c0 74 06 5b c3 cc cc cc cc
 RSP: 0018:ffa0000027d07538 EFLAGS: 00000206
 RAX: 0000000000000c2d RBX: ffffffffb6614bc8 RCX: 0000000000000080
 RDX: 0000000000000000 RSI: ffffffffb6306a01 RDI: 0000000000000001
 RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
 R10: ffffffffb75efc67 R11: 0000000000000001 R12: ff1100015ada0000
 R13: 0000000000000083 R14: 0000000000000002 R15: ffffffffb6614c10
  folio_wait_bit_common+0x407/0x780
  filemap_update_page+0x8e7/0xbd0
  filemap_get_pages+0x904/0xc50
  filemap_read+0x320/0xc20
  xfs_file_buffered_read+0x2aa/0x380 [xfs]
  xfs_file_read_iter+0x263/0x4a0 [xfs]
  vfs_read+0x6cb/0xb70
  ksys_read+0xf9/0x1d0
  do_syscall_64+0x13a/0x1520

Zorro's diagnosis makes sense, so the solution is to kick the failed
read handling to a workqueue much like we added for writeback ioends in
commit 294f54f849d846 ("fserror: fix lockdep complaint when igrabbing
inode").

Cc: Zorro Lang <zlang@redhat.com>
Link: https://lore.kernel.org/linux-xfs/20260319194303.efw4wcu7c4idhthz@doltdoltdolt/
Fixes: a9d573ee88af98 ("iomap: report file I/O errors to the VFS")
Signed-off-by: "Darrick J. Wong" <djwong@kernel.org>
---
 fs/iomap/bio.c |   51 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 50 insertions(+), 1 deletion(-)

diff --git a/fs/iomap/bio.c b/fs/iomap/bio.c
index fc045f2e4c459e..edd908183058f5 100644
--- a/fs/iomap/bio.c
+++ b/fs/iomap/bio.c
@@ -8,7 +8,10 @@
 #include "internal.h"
 #include "trace.h"
 
-static void iomap_read_end_io(struct bio *bio)
+static DEFINE_SPINLOCK(failed_read_lock);
+static struct bio_list failed_read_list = BIO_EMPTY_LIST;
+
+static void __iomap_read_end_io(struct bio *bio)
 {
 	int error = blk_status_to_errno(bio->bi_status);
 	struct folio_iter fi;
@@ -18,6 +21,52 @@ static void iomap_read_end_io(struct bio *bio)
 	bio_put(bio);
 }
 
+static void
+iomap_fail_reads(
+	struct work_struct	*work)
+{
+	struct bio		*bio;
+	struct bio_list		tmp = BIO_EMPTY_LIST;
+	unsigned long		flags;
+
+	spin_lock_irqsave(&failed_read_lock, flags);
+	bio_list_merge_init(&tmp, &failed_read_list);
+	spin_unlock_irqrestore(&failed_read_lock, flags);
+
+	while ((bio = bio_list_pop(&tmp)) != NULL) {
+		__iomap_read_end_io(bio);
+		cond_resched();
+	}
+}
+
+static DECLARE_WORK(failed_read_work, iomap_fail_reads);
+
+static void iomap_fail_buffered_read(struct bio *bio)
+{
+	unsigned long flags;
+
+	/*
+	 * Bounce I/O errors to a workqueue to avoid nested i_lock acquisitions
+	 * in the fserror code.  The caller no longer owns the bio reference
+	 * after the spinlock drops.
+	 */
+	spin_lock_irqsave(&failed_read_lock, flags);
+	if (bio_list_empty(&failed_read_list))
+		WARN_ON_ONCE(!schedule_work(&failed_read_work));
+	bio_list_add(&failed_read_list, bio);
+	spin_unlock_irqrestore(&failed_read_lock, flags);
+}
+
+static void iomap_read_end_io(struct bio *bio)
+{
+	if (bio->bi_status) {
+		iomap_fail_buffered_read(bio);
+		return;
+	}
+
+	__iomap_read_end_io(bio);
+}
+
 static void iomap_bio_submit_read(struct iomap_read_folio_ctx *ctx)
 {
 	struct bio *bio = ctx->read_ctx;

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] iomap: fix lockdep complaint when reads fail
  2026-03-23 21:00       ` [PATCH] iomap: fix lockdep complaint when reads fail Darrick J. Wong
@ 2026-03-24  6:14         ` Christoph Hellwig
  2026-03-25  0:16           ` Jens Axboe
  2026-03-24  8:15         ` Christian Brauner
  1 sibling, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2026-03-24  6:14 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christian Brauner, Christoph Hellwig, Zorro Lang, linux-xfs,
	Tal Zussman, axboe, linux-block, linux-fsdevel

Please reword the subject - you are not fixing a lockdep complaint,
but the underlying issue of calling code from the wrong context.
Lockdep, as usual, is just the messenger.

> Zorro's diagnosis makes sense, so the solution is to kick the failed
> read handling to a workqueue much like we added for writeback ioends in
> commit 294f54f849d846 ("fserror: fix lockdep complaint when igrabbing
> inode").

The code looks ok, although I'd much prefer generalizing it.  I guess
we're too late in the 7.0-cycle for that, so:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] iomap: fix lockdep complaint when reads fail
  2026-03-23 21:00       ` [PATCH] iomap: fix lockdep complaint when reads fail Darrick J. Wong
  2026-03-24  6:14         ` Christoph Hellwig
@ 2026-03-24  8:15         ` Christian Brauner
  2026-03-24 17:06           ` Darrick J. Wong
  1 sibling, 1 reply; 8+ messages in thread
From: Christian Brauner @ 2026-03-24  8:15 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christian Brauner, Christoph Hellwig, Zorro Lang, linux-xfs,
	Tal Zussman, axboe, linux-block, linux-fsdevel

On Mon, 23 Mar 2026 14:00:17 -0700, Darrick J. Wong wrote:
> Zorro Lang reported the following lockdep splat:
> 
> "While running fstests xfs/556 on kernel 7.0.0-rc4+ (HEAD=04a9f1766954),
> a lockdep warning was triggered indicating an inconsistent lock state
> for sb->s_type->i_lock_key.
> 
> "The deadlock might occur because iomap_read_end_io (called from a
> hardware interrupt completion path) invokes fserror_report, which then
> calls igrab.  igrab attempts to acquire the i_lock spinlock. However,
> the i_lock is frequently acquired in process context with interrupts
> enabled. If an interrupt occurs while a process holds the i_lock, and
> that interrupt handler calls fserror_report, the system deadlocks.
> 
> [...]

Applied to the vfs.fixes branch of the vfs/vfs.git tree.
Patches in the vfs.fixes branch should appear in linux-next soon.

Please report any outstanding bugs that were missed during review in a
new review to the original patch series allowing us to drop it.

It's encouraged to provide Acked-bys and Reviewed-bys even though the
patch has now been applied. If possible patch trailers will be updated.

Note that commit hashes shown below are subject to change due to rebase,
trailer updates or similar. If in doubt, please check the listed branch.

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git
branch: vfs.fixes

[1/1] iomap: fix lockdep complaint when reads fail
      https://git.kernel.org/vfs/vfs/c/f621324dfb3d

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] iomap: fix lockdep complaint when reads fail
  2026-03-24  8:15         ` Christian Brauner
@ 2026-03-24 17:06           ` Darrick J. Wong
  0 siblings, 0 replies; 8+ messages in thread
From: Darrick J. Wong @ 2026-03-24 17:06 UTC (permalink / raw)
  To: Christian Brauner
  Cc: Christoph Hellwig, Zorro Lang, linux-xfs, Tal Zussman, axboe,
	linux-block, linux-fsdevel

On Tue, Mar 24, 2026 at 09:15:10AM +0100, Christian Brauner wrote:
> On Mon, 23 Mar 2026 14:00:17 -0700, Darrick J. Wong wrote:
> > Zorro Lang reported the following lockdep splat:
> > 
> > "While running fstests xfs/556 on kernel 7.0.0-rc4+ (HEAD=04a9f1766954),
> > a lockdep warning was triggered indicating an inconsistent lock state
> > for sb->s_type->i_lock_key.
> > 
> > "The deadlock might occur because iomap_read_end_io (called from a
> > hardware interrupt completion path) invokes fserror_report, which then
> > calls igrab.  igrab attempts to acquire the i_lock spinlock. However,
> > the i_lock is frequently acquired in process context with interrupts
> > enabled. If an interrupt occurs while a process holds the i_lock, and
> > that interrupt handler calls fserror_report, the system deadlocks.
> > 
> > [...]
> 
> Applied to the vfs.fixes branch of the vfs/vfs.git tree.
> Patches in the vfs.fixes branch should appear in linux-next soon.
> 
> Please report any outstanding bugs that were missed during review in a
> new review to the original patch series allowing us to drop it.
> 
> It's encouraged to provide Acked-bys and Reviewed-bys even though the
> patch has now been applied. If possible patch trailers will be updated.
> 
> Note that commit hashes shown below are subject to change due to rebase,
> trailer updates or similar. If in doubt, please check the listed branch.
> 
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git
> branch: vfs.fixes
> 
> [1/1] iomap: fix lockdep complaint when reads fail

Hey Christian,

Is it too late to rename this patch to:

"iomap: fix potential deadlock when buffered reads fail"

--D

>       https://git.kernel.org/vfs/vfs/c/f621324dfb3d

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] iomap: fix lockdep complaint when reads fail
  2026-03-24  6:14         ` Christoph Hellwig
@ 2026-03-25  0:16           ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2026-03-25  0:16 UTC (permalink / raw)
  To: Christoph Hellwig, Darrick J. Wong
  Cc: Christian Brauner, Zorro Lang, linux-xfs, Tal Zussman,
	linux-block, linux-fsdevel

On 3/24/26 12:14 AM, Christoph Hellwig wrote:
> Please reword the subject - you are not fixing a lockdep complaint,
> but the underlying issue of calling code from the wrong context.
> Lockdep, as usual, is just the messenger.
> 
>> Zorro's diagnosis makes sense, so the solution is to kick the failed
>> read handling to a workqueue much like we added for writeback ioends in
>> commit 294f54f849d846 ("fserror: fix lockdep complaint when igrabbing
>> inode").
> 
> The code looks ok, although I'd much prefer generalizing it.  I guess

I'll take a stab at it. This one is fine since it's an error handling
path, but previous tests I did with this kind of approach for every IO
hitting this path have been utterly terrible.

Once done though, we can swap stuff like this too.

IOW, for this one:

Reviewed-by: Jens Axboe <axboe@kernel.dk>

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2026-03-25  0:16 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20260319194303.efw4wcu7c4idhthz@doltdoltdolt>
2026-03-20  7:23 ` [Bug][xfstests xfs/556] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage Christoph Hellwig
2026-03-20 14:27   ` Darrick J. Wong
2026-03-23  6:15     ` Christoph Hellwig
     [not found] ` <20260320163444.GE6223@frogsfrogsfrogs>
     [not found]   ` <acDbFtQw0mom798e@infradead.org>
     [not found]     ` <20260323152231.GG6223@frogsfrogsfrogs>
2026-03-23 21:00       ` [PATCH] iomap: fix lockdep complaint when reads fail Darrick J. Wong
2026-03-24  6:14         ` Christoph Hellwig
2026-03-25  0:16           ` Jens Axboe
2026-03-24  8:15         ` Christian Brauner
2026-03-24 17:06           ` Darrick J. Wong

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox