public inbox for linux-kernel@vger.kernel.org
From: syzbot <syzbot+6cc93ec9a4035badb85f@syzkaller.appspotmail.com>
To: linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com
Subject: Forwarded: Re: [syzbot] [mm?] KASAN: use-after-free Read in copy_folio_from_iter_atomic (2)
Date: Fri, 24 Apr 2026 14:16:24 -0700
Message-ID: <69ebdda8.a00a0220.7773.0008.GAE@google.com>
In-Reply-To: <69ca48ca.050a0220.183828.001a.GAE@google.com>

For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.

***

Subject: Re: [syzbot] [mm?] KASAN: use-after-free Read in copy_folio_from_iter_atomic (2)
Author: mashiro.chen@mailbox.org

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

From 14bc67ec17b9a209d97c08e9136eead0bd0c0914 Mon Sep 17 00:00:00 2001
From: Mashiro Chen <mashiro.chen@mailbox.org>
Date: Sat, 25 Apr 2026 01:56:00 +0800
Subject: [PATCH] jfs: fix use-after-free on log shutdown

During JFS log shutdown, log buffer pages can be freed while lower-layer
loop-device workers are still copying from them, triggering the
use-after-free syzbot reported in copy_folio_from_iter_atomic().

Track in-flight log I/O in struct jfs_log and wait for completion before
freeing log buffers. Add io_inflight/io_waitq, increment io_inflight
before submitting BIOs in lbmRead() and lbmStartIO(), and decrement it
from lbmIODone() on all completion paths, including the READ and DIRECT
early-return paths.

This closes the teardown race between buffer page free and late I/O
completion.

Reported-by: syzbot+6cc93ec9a4035badb85f@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=6cc93ec9a4035badb85f
Signed-off-by: Mashiro Chen <mashiro.chen@mailbox.org>
---
 fs/jfs/jfs_logmgr.c | 17 +++++++++++++++--
 fs/jfs/jfs_logmgr.h |  2 ++
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/fs/jfs/jfs_logmgr.c b/fs/jfs/jfs_logmgr.c
index ada00d5bc214..a45c8e2559e4 100644
--- a/fs/jfs/jfs_logmgr.c
+++ b/fs/jfs/jfs_logmgr.c
@@ -1806,6 +1806,8 @@ static int lbmLogInit(struct jfs_log * log)
 	 * avoid deadlock here.
 	 */
 	init_waitqueue_head(&log->free_wait);
+	init_waitqueue_head(&log->io_waitq);
+	atomic_set(&log->io_inflight, 0);
 
 	log->lbuf_free = NULL;
 
@@ -1857,6 +1859,8 @@ static void lbmLogShutdown(struct jfs_log * log)
 
 	jfs_info("lbmLogShutdown: log:0x%p", log);
 
+	wait_event(log->io_waitq, !atomic_read(&log->io_inflight));
+
 	lbuf = log->lbuf_free;
 	while (lbuf) {
 		struct lbuf *next = lbuf->l_freelist;
@@ -1978,6 +1982,7 @@ static int lbmRead(struct jfs_log * log, int pn, struct lbuf ** bpp)
 
 	bio->bi_end_io = lbmIODone;
 	bio->bi_private = bp;
+	atomic_inc(&log->io_inflight);
 	/*check if journaling to disk has been disabled*/
 	if (log->no_integrity) {
 		bio->bi_iter.bi_size = 0;
@@ -2124,6 +2129,7 @@ static void lbmStartIO(struct lbuf * bp)
 
 	bio->bi_end_io = lbmIODone;
 	bio->bi_private = bp;
+	atomic_inc(&log->io_inflight);
 
 	/* check if journaling to disk has been disabled */
 	if (log->no_integrity) {
@@ -2170,7 +2176,7 @@ static void lbmIODone(struct bio *bio)
 {
 	struct lbuf *bp = bio->bi_private;
 	struct lbuf *nextbp, *tail;
-	struct jfs_log *log;
+	struct jfs_log *log = bp->l_log;
 	unsigned long flags;
 
 	/*
@@ -2201,6 +2207,9 @@ static void lbmIODone(struct bio *bio)
 		/* wakeup I/O initiator */
 		LCACHE_WAKEUP(&bp->l_ioevent);
 
+		if (atomic_dec_and_test(&log->io_inflight))
+			wake_up(&log->io_waitq);
+
 		return;
 	}
 
@@ -2220,12 +2229,13 @@ static void lbmIODone(struct bio *bio)
 	INCREMENT(lmStat.pagedone);
 
 	/* update committed lsn */
-	log = bp->l_log;
 	log->clsn = (bp->l_pn << L2LOGPSIZE) + bp->l_ceor;
 
 	if (bp->l_flag & lbmDIRECT) {
 		LCACHE_WAKEUP(&bp->l_ioevent);
 		LCACHE_UNLOCK(flags);
+		if (atomic_dec_and_test(&log->io_inflight))
+			wake_up(&log->io_waitq);
 		return;
 	}
 
@@ -2305,6 +2315,9 @@ static void lbmIODone(struct bio *bio)
 
 		LCACHE_UNLOCK(flags);	/* unlock+enable */
 	}
+
+	if (atomic_dec_and_test(&log->io_inflight))
+		wake_up(&log->io_waitq);
 }
 
 int jfsIOWait(void *arg)
diff --git a/fs/jfs/jfs_logmgr.h b/fs/jfs/jfs_logmgr.h
index 8b8994e48cd0..59cb0aca99c5 100644
--- a/fs/jfs/jfs_logmgr.h
+++ b/fs/jfs/jfs_logmgr.h
@@ -400,6 +400,8 @@ struct jfs_log {
 	uuid_t uuid;		/* 16: 128-bit uuid of log device */
 
 	int no_integrity;	/* 3: flag to disable journaling to disk */
+	atomic_t io_inflight;
+	wait_queue_head_t io_waitq;
 };
 
 /*
-- 
2.54.0

