* Forwarded: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios
2026-03-20 22:44 [syzbot] [block?] general protection fault in bio_add_page syzbot
@ 2026-03-21 8:36 ` syzbot
2026-03-21 12:15 ` Forwarded: [PATCH] ext4: fix general protection fault in bio_add_page for encrypted " syzbot
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: syzbot @ 2026-03-21 8:36 UTC (permalink / raw)
To: linux-kernel, syzkaller-bugs
For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.
***
Subject: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios
Author: kartikey406@gmail.com
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
When blocksize < PAGE_SIZE, a folio can span multiple pages with
multiple buffer heads. ext4_bio_write_folio() encrypted the entire
folio once with offset=0 via fscrypt_encrypt_pagecache_blocks(),
which always returns a single bounce page covering only the first
page of the folio.
When the write loop iterated over buffer heads beyond the first page,
bio_add_folio() computed nr = bh_offset(bh) / PAGE_SIZE, which is
non-zero for buffer heads on subsequent pages. folio_page(io_folio, nr)
then indexed out of bounds on the single-page bounce folio, returning a
NULL or garbage page pointer and causing a NULL pointer dereference in
bvec_set_page().
Fix this by moving the encryption inside the write loop and encrypting
per buffer head using the correct offset within the folio via
offset_in_folio(folio, bh->b_data). Each buffer head now gets its own
bounce page at index 0, so folio_page(io_folio, 0) is always valid.
The existing retry logic for -ENOMEM is preserved.
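The out-of-bounds index arithmetic described above can be illustrated with a small userspace sketch (not kernel code; the PAGE_SIZE value and helper names are assumptions for illustration only):

```c
/* Userspace sketch of the folio_page() index arithmetic described
 * above, assuming PAGE_SIZE = 4096. bio_add_folio() derives the page
 * index within the target folio from the byte offset; with the old
 * code the bounce "folio" is a single page, so any buffer head past
 * the first page of a large folio indexes out of bounds. */
#include <assert.h>
#include <stddef.h>

#define SKETCH_PAGE_SIZE 4096u

/* Mirrors folio_page(io_folio, off / PAGE_SIZE). */
static unsigned int page_index_in_folio(size_t off)
{
	return off / SKETCH_PAGE_SIZE;
}

/* Returns 1 if the derived index stays within the bounce folio. */
static int bounce_index_in_bounds(size_t bh_offset_in_folio,
				  unsigned int bounce_pages)
{
	return page_index_in_folio(bh_offset_in_folio) < bounce_pages;
}
```

With 1024-byte blocks, a buffer head at folio offset 5120 yields index 1, which a single-page bounce folio cannot satisfy.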
Reported-by: syzbot+ed8bc247f231c1a48e21@syzkaller.appspotmail.com
Signed-off-by: Deepanshu Kartikey <Kartikey406@gmail.com>
---
fs/ext4/page-io.c | 87 +++++++++++++++++++++++------------------------
1 file changed, 43 insertions(+), 44 deletions(-)
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index a8c95eee91b7..d7114171cd52 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -537,56 +537,55 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,
* (e.g. holes) to be unnecessarily encrypted, but this is rare and
* can't happen in the common case of blocksize == PAGE_SIZE.
*/
- if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
- gfp_t gfp_flags = GFP_NOFS;
- unsigned int enc_bytes = round_up(len, i_blocksize(inode));
- struct page *bounce_page;
-
- /*
- * Since bounce page allocation uses a mempool, we can only use
- * a waiting mask (i.e. request guaranteed allocation) on the
- * first page of the bio. Otherwise it can deadlock.
- */
- if (io->io_bio)
- gfp_flags = GFP_NOWAIT;
- retry_encrypt:
- bounce_page = fscrypt_encrypt_pagecache_blocks(folio,
- enc_bytes, 0, gfp_flags);
- if (IS_ERR(bounce_page)) {
- ret = PTR_ERR(bounce_page);
- if (ret == -ENOMEM &&
- (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
- gfp_t new_gfp_flags = GFP_NOFS;
- if (io->io_bio)
- ext4_io_submit(io);
- else
- new_gfp_flags |= __GFP_NOFAIL;
- memalloc_retry_wait(gfp_flags);
- gfp_flags = new_gfp_flags;
- goto retry_encrypt;
- }
-
- printk_ratelimited(KERN_ERR "%s: ret = %d\n", __func__, ret);
- folio_redirty_for_writepage(wbc, folio);
- do {
- if (buffer_async_write(bh)) {
- clear_buffer_async_write(bh);
- set_buffer_dirty(bh);
- }
- bh = bh->b_this_page;
- } while (bh != head);
-
- return ret;
- }
- io_folio = page_folio(bounce_page);
- }
-
__folio_start_writeback(folio, keep_towrite);
/* Now submit buffers to write */
do {
if (!buffer_async_write(bh))
continue;
+ if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
+ gfp_t gfp_flags = GFP_NOFS;
+ struct page *bounce_page;
+ /*
+ * Since bounce page allocation uses a mempool, we can
+ * only use a waiting mask (i.e. request guaranteed
+ * allocation) on the first page of the bio.
+ * Otherwise it can deadlock.
+ */
+ if (io->io_bio)
+ gfp_flags = GFP_NOWAIT;
+ retry_encrypt:
+ bounce_page = fscrypt_encrypt_pagecache_blocks(folio,
+ bh->b_size,
+ offset_in_folio(folio, bh->b_data),
+ gfp_flags);
+ if (IS_ERR(bounce_page)) {
+ ret = PTR_ERR(bounce_page);
+ if (ret == -ENOMEM &&
+ (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
+ gfp_t new_gfp_flags = GFP_NOFS;
+ if (io->io_bio)
+ ext4_io_submit(io);
+ else
+ new_gfp_flags |= __GFP_NOFAIL;
+ memalloc_retry_wait(gfp_flags);
+ gfp_flags = new_gfp_flags;
+ goto retry_encrypt;
+ }
+ printk_ratelimited(KERN_ERR "%s: ret = %d\n",
+ __func__, ret);
+ folio_redirty_for_writepage(wbc, folio);
+ do {
+ if (buffer_async_write(bh)) {
+ clear_buffer_async_write(bh);
+ set_buffer_dirty(bh);
+ }
+ bh = bh->b_this_page;
+ } while (bh != head);
+ return ret;
+ }
+ io_folio = page_folio(bounce_page);
+ }
io_submit_add_bh(io, inode, folio, io_folio, bh);
} while ((bh = bh->b_this_page) != head);
--
2.43.0
^ permalink raw reply related [flat|nested] 5+ messages in thread

* Forwarded: [PATCH] ext4: fix general protection fault in bio_add_page for encrypted large folios
2026-03-20 22:44 [syzbot] [block?] general protection fault in bio_add_page syzbot
2026-03-21 8:36 ` Forwarded: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios syzbot
@ 2026-03-21 12:15 ` syzbot
2026-03-22 2:14 ` Forwarded: [PATCH] ext4: fix null-ptr-deref in bio_add_folio syzbot
2026-03-22 4:41 ` Forwarded: [PATCH] blktrace: reject buf_size smaller than struct blk_io_trace syzbot
3 siblings, 0 replies; 5+ messages in thread
From: syzbot @ 2026-03-21 12:15 UTC (permalink / raw)
To: linux-kernel, syzkaller-bugs
For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.
***
Subject: [PATCH] ext4: fix general protection fault in bio_add_page for encrypted large folios
Author: kartikey406@gmail.com
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
When writing back an encrypted file, ext4_bio_write_folio() encrypts the
folio into a single-page bounce buffer and passes it as io_folio to
io_submit_add_bh(). The offset passed to bio_add_folio() was always
bh_offset(bh), which is relative to the original folio.
For a large folio this offset can exceed PAGE_SIZE. bio_add_folio() calls
folio_page(io_folio, off >> PAGE_SHIFT) which computes &folio->page + N.
For a single-page bounce folio with N >= 1 this is out-of-bounds, causing
a general protection fault caught by KASAN:
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
RIP: 0010:bvec_set_page include/linux/bvec.h:44 [inline]
RIP: 0010:bio_add_page+0x462/0x6e0 block/bio.c:1048
Fix this by computing io_off at the call site. For the non-encrypted path
io_folio == folio so bh_offset(bh) is used unchanged. For the encrypted
path the bounce page is always a single PAGE_SIZE page, so the offset is
taken modulo PAGE_SIZE to map it correctly into the bounce page.
Using a hardcoded 0 would be wrong for sub-page block sizes (e.g.
1024-byte blocks), where multiple buffer heads exist within one page
at offsets 0, 1024, 2048, and 3072. bh_offset(bh) % PAGE_SIZE handles
all block sizes correctly.
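The io_off reduction this patch introduces can be sketched in userspace (not kernel code; PAGE_SIZE and the helper name are assumptions for illustration):

```c
/* Userspace sketch of the io_off computation described above, assuming
 * PAGE_SIZE = 4096. For the single-page bounce buffer every buffer
 * head must map to an offset within one page, so bh_offset(bh) is
 * reduced modulo PAGE_SIZE rather than forced to 0. */
#include <assert.h>
#include <stddef.h>

#define SKETCH_PAGE_SIZE 4096u

static size_t bounce_io_off(size_t bh_off)
{
	return bh_off % SKETCH_PAGE_SIZE;
}
```

With 1024-byte blocks, offsets 0, 1024, 2048, and 3072 map to themselves, while an offset of 6144 in a large folio maps to 2048 within the bounce page.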
Reported-by: syzbot+ed8bc247f231c1a48e21@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ed8bc247f231c1a48e21
Signed-off-by: Deepanshu Kartikey <Kartikey406@gmail.com>
---
fs/ext4/page-io.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index a8c95eee91b7..006b2f5173de 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -438,7 +438,8 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
struct inode *inode,
struct folio *folio,
struct folio *io_folio,
- struct buffer_head *bh)
+ struct buffer_head *bh,
+ size_t io_off)
{
if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
!fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
@@ -449,7 +450,7 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
io_submit_init_bio(io, bh);
io->io_bio->bi_write_hint = inode->i_write_hint;
}
- if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, bh_offset(bh)))
+ if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, io_off))
goto submit_and_retry;
wbc_account_cgroup_owner(io->io_wbc, folio, bh->b_size);
io->io_next_block++;
@@ -585,9 +586,20 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,
/* Now submit buffers to write */
do {
+ size_t io_off;
+
if (!buffer_async_write(bh))
continue;
- io_submit_add_bh(io, inode, folio, io_folio, bh);
+ /*
+ * When io_folio is a single-page bounce buffer (fscrypt),
+ * reduce the offset modulo PAGE_SIZE to handle all block sizes
+ * correctly. Using 0 would break sub-page block sizes (e.g.
+ * 1024-byte blocks) with multiple bh offsets within one page.
+ */
+ io_off = (io_folio == folio)
+ ? bh_offset(bh)
+ : bh_offset(bh) % PAGE_SIZE;
+ io_submit_add_bh(io, inode, folio, io_folio, bh, io_off);
} while ((bh = bh->b_this_page) != head);
return 0;
--
2.43.0
* Forwarded: [PATCH] ext4: fix null-ptr-deref in bio_add_folio
2026-03-20 22:44 [syzbot] [block?] general protection fault in bio_add_page syzbot
2026-03-21 8:36 ` Forwarded: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios syzbot
2026-03-21 12:15 ` Forwarded: [PATCH] ext4: fix general protection fault in bio_add_page for encrypted " syzbot
@ 2026-03-22 2:14 ` syzbot
2026-03-22 4:41 ` Forwarded: [PATCH] blktrace: reject buf_size smaller than struct blk_io_trace syzbot
3 siblings, 0 replies; 5+ messages in thread
From: syzbot @ 2026-03-22 2:14 UTC (permalink / raw)
To: linux-kernel, syzkaller-bugs
For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.
***
Subject: [PATCH] ext4: fix null-ptr-deref in bio_add_folio
Author: kartikey406@gmail.com
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
bio_alloc() is called with BIO_MAX_VECS=256 which exceeds
BIO_INLINE_VECS=4, so the bvec array is allocated separately.
Under GFP_NOIO memory pressure this allocation can fail and
bio_alloc() returns NULL.
io_submit_init_bio() does not check for NULL, so NULL gets
stored in io->io_bio and causes a null-ptr-deref when
bio_add_folio() tries to use it.
Fix by adding __GFP_DIRECT_RECLAIM to guarantee the allocation
always succeeds, as documented in bio_alloc_bioset().
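The inline-versus-separate bvec decision referenced above can be sketched in userspace (not kernel code; the constants are those quoted in this commit message, BIO_MAX_VECS = 256 and BIO_INLINE_VECS = 4):

```c
/* Userspace sketch of the bvec allocation decision described above.
 * When the requested vector count exceeds the inline capacity, the
 * bvec array is a separate allocation that can fail independently of
 * the bio itself. Constants are assumptions taken from the commit
 * message, not from the kernel headers. */
#include <assert.h>

#define SKETCH_BIO_MAX_VECS	256u
#define SKETCH_BIO_INLINE_VECS	4u

static int needs_separate_bvec_array(unsigned int nr_vecs)
{
	return nr_vecs > SKETCH_BIO_INLINE_VECS;
}
```

A request for BIO_MAX_VECS vectors therefore always takes the separate-allocation path, which is the allocation this patch is concerned with.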
Reported-by: syzbot+ed8bc247f231c1a48e21@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ed8bc247f231c1a48e21
Signed-off-by: Deepanshu Kartikey <Kartikey406@gmail.com>
---
fs/ext4/page-io.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index a8c95eee91b7..aea28e5a5665 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -423,8 +423,14 @@ static void io_submit_init_bio(struct ext4_io_submit *io,
/*
* bio_alloc will _always_ be able to allocate a bio if
* __GFP_DIRECT_RECLAIM is set, see comments for bio_alloc_bioset().
- */
- bio = bio_alloc(bh->b_bdev, BIO_MAX_VECS, REQ_OP_WRITE, GFP_NOIO);
+ * We must use __GFP_DIRECT_RECLAIM to guarantee the bvec array
+ * allocation succeeds - BIO_MAX_VECS exceeds BIO_INLINE_VECS so
+ * bio_alloc_bioset() allocates the bvec array separately, which
+ * can fail under GFP_NOIO memory pressure, leaving bi_io_vec NULL
+ * and causing a null-ptr-deref in bio_add_folio().
+ */
+ bio = bio_alloc(bh->b_bdev, BIO_MAX_VECS, REQ_OP_WRITE,
+ GFP_NOIO | __GFP_DIRECT_RECLAIM);
fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
bio->bi_end_io = ext4_end_bio;
--
2.43.0
* Forwarded: [PATCH] blktrace: reject buf_size smaller than struct blk_io_trace
2026-03-20 22:44 [syzbot] [block?] general protection fault in bio_add_page syzbot
` (2 preceding siblings ...)
2026-03-22 2:14 ` Forwarded: [PATCH] ext4: fix null-ptr-deref in bio_add_folio syzbot
@ 2026-03-22 4:41 ` syzbot
3 siblings, 0 replies; 5+ messages in thread
From: syzbot @ 2026-03-22 4:41 UTC (permalink / raw)
To: linux-kernel, syzkaller-bugs
For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.
***
Subject: [PATCH] blktrace: reject buf_size smaller than struct blk_io_trace
Author: kartikey406@gmail.com
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
blk_trace_setup() accepts any non-zero buf_size from userspace
and passes it directly to relay_open(). If buf_size is smaller
than sizeof(struct blk_io_trace) = 40 bytes, relay_switch_subbuf()
always hits the toobig path and returns 0, causing memory pressure
that leads to bio_alloc() failing under GFP_NOIO and a
null-ptr-deref in bio_add_folio().
Reject buf_size values too small to hold a single trace event.
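The tightened validation can be sketched in userspace (not kernel code; the 40-byte size is the sizeof(struct blk_io_trace) figure quoted in this commit message, and the helper name is an assumption for illustration):

```c
/* Userspace sketch of the buf_size validation described above. A relay
 * sub-buffer must be able to hold at least one trace record, so sizes
 * below sizeof(struct blk_io_trace) (40 bytes per the commit message)
 * are rejected along with a zero buffer count. */
#include <assert.h>
#include <stdint.h>

#define SKETCH_BLK_IO_TRACE_SIZE 40u

static int buf_params_valid(uint32_t buf_size, uint32_t buf_nr)
{
	return buf_size >= SKETCH_BLK_IO_TRACE_SIZE && buf_nr != 0;
}
```

This preserves the existing rejection of buf_size == 0 and buf_nr == 0 while additionally rejecting the 1..39 byte range that made relay_switch_subbuf() always take the toobig path.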
Reported-by: syzbot+ed8bc247f231c1a48e21@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ed8bc247f231c1a48e21
Signed-off-by: Deepanshu Kartikey <Kartikey406@gmail.com>
---
kernel/trace/blktrace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 8cd2520b4c99..6cc7d83ed1c2 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -773,7 +773,7 @@ int blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
if (ret)
return -EFAULT;
- if (!buts.buf_size || !buts.buf_nr)
+ if (buts.buf_size < sizeof(struct blk_io_trace) || !buts.buf_nr)
return -EINVAL;
buts2 = (struct blk_user_trace_setup2) {
--
2.43.0