Date: Sat, 21 Mar 2026 01:36:29 -0700
From: syzbot
To: linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com
Subject: Forwarded: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios
In-Reply-To: <69bdcdcd.050a0220.3bf4de.0030.GAE@google.com>
Message-ID: <69be588d.050a0220.3bf4de.0046.GAE@google.com>

For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.
*** Subject: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios
Author: kartikey406@gmail.com

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

When blocksize < PAGE_SIZE, a folio can span multiple pages with
multiple buffer heads. ext4_bio_write_folio() encrypted the entire
folio once with offset=0 via fscrypt_encrypt_pagecache_blocks(), which
always returns a single bounce page covering only the first page of
the folio.

When the write loop iterated over buffer heads beyond the first page,
bio_add_folio() calculated nr = bh_offset(bh) / PAGE_SIZE, which was
non-zero for bhs on subsequent pages. folio_page(io_folio, nr) then
went out of bounds on the single-page bounce folio, returning a NULL
or garbage page pointer and causing a NULL pointer dereference in
bvec_set_page().

Fix this by moving the encryption inside the write loop and encrypting
per buffer head, using the correct offset within the folio via
offset_in_folio(folio, bh->b_data). Each buffer head now gets its own
bounce page at index 0, so folio_page(io_folio, 0) is always valid.
The existing retry logic for -ENOMEM is preserved.

Reported-by: syzbot+ed8bc247f231c1a48e21@syzkaller.appspotmail.com
Signed-off-by: Deepanshu kartikey <kartikey406@gmail.com>
---
 fs/ext4/page-io.c | 87 +++++++++++++++++++++++------------------------
 1 file changed, 43 insertions(+), 44 deletions(-)

diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index a8c95eee91b7..d7114171cd52 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -537,56 +537,55 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,
 	 * (e.g. holes) to be unnecessarily encrypted, but this is rare and
 	 * can't happen in the common case of blocksize == PAGE_SIZE.
 	 */
-	if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
-		gfp_t gfp_flags = GFP_NOFS;
-		unsigned int enc_bytes = round_up(len, i_blocksize(inode));
-		struct page *bounce_page;
-
-		/*
-		 * Since bounce page allocation uses a mempool, we can only use
-		 * a waiting mask (i.e. request guaranteed allocation) on the
-		 * first page of the bio. Otherwise it can deadlock.
-		 */
-		if (io->io_bio)
-			gfp_flags = GFP_NOWAIT;
-	retry_encrypt:
-		bounce_page = fscrypt_encrypt_pagecache_blocks(folio,
-					enc_bytes, 0, gfp_flags);
-		if (IS_ERR(bounce_page)) {
-			ret = PTR_ERR(bounce_page);
-			if (ret == -ENOMEM &&
-			    (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
-				gfp_t new_gfp_flags = GFP_NOFS;
-				if (io->io_bio)
-					ext4_io_submit(io);
-				else
-					new_gfp_flags |= __GFP_NOFAIL;
-				memalloc_retry_wait(gfp_flags);
-				gfp_flags = new_gfp_flags;
-				goto retry_encrypt;
-			}
-
-			printk_ratelimited(KERN_ERR "%s: ret = %d\n", __func__, ret);
-			folio_redirty_for_writepage(wbc, folio);
-			do {
-				if (buffer_async_write(bh)) {
-					clear_buffer_async_write(bh);
-					set_buffer_dirty(bh);
-				}
-				bh = bh->b_this_page;
-			} while (bh != head);
-
-			return ret;
-		}
-		io_folio = page_folio(bounce_page);
-	}
-
 	__folio_start_writeback(folio, keep_towrite);

 	/* Now submit buffers to write */
 	do {
 		if (!buffer_async_write(bh))
 			continue;
+		if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
+			gfp_t gfp_flags = GFP_NOFS;
+			struct page *bounce_page;
+			/*
+			 * Since bounce page allocation uses a mempool, we can
+			 * only use a waiting mask (i.e. request guaranteed
+			 * allocation) on the first page of the bio.
+			 * Otherwise it can deadlock.
+			 */
+			if (io->io_bio)
+				gfp_flags = GFP_NOWAIT;
+	retry_encrypt:
+			bounce_page = fscrypt_encrypt_pagecache_blocks(folio,
+					bh->b_size,
+					offset_in_folio(folio, bh->b_data),
+					gfp_flags);
+			if (IS_ERR(bounce_page)) {
+				ret = PTR_ERR(bounce_page);
+				if (ret == -ENOMEM &&
+				    (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
+					gfp_t new_gfp_flags = GFP_NOFS;
+					if (io->io_bio)
+						ext4_io_submit(io);
+					else
+						new_gfp_flags |= __GFP_NOFAIL;
+					memalloc_retry_wait(gfp_flags);
+					gfp_flags = new_gfp_flags;
+					goto retry_encrypt;
+				}
+				printk_ratelimited(KERN_ERR "%s: ret = %d\n",
+						   __func__, ret);
+				folio_redirty_for_writepage(wbc, folio);
+				do {
+					if (buffer_async_write(bh)) {
+						clear_buffer_async_write(bh);
+						set_buffer_dirty(bh);
+					}
+					bh = bh->b_this_page;
+				} while (bh != head);
+				return ret;
+			}
+			io_folio = page_folio(bounce_page);
+		}
 		io_submit_add_bh(io, inode, folio, io_folio, bh);
 	} while ((bh = bh->b_this_page) != head);

-- 
2.43.0
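
As an aside for readers unfamiliar with the helpers involved, the
out-of-bounds arithmetic the commit message describes can be shown with
a minimal userspace sketch. Everything here is illustrative: the 4 KiB
page size, the four-page folio, and the 1 KiB blocksize are assumed
values, and plain integers stand in for the kernel's bh_offset() and
folio_page() helpers.

#include <stdio.h>

#define PAGE_SIZE   4096u	/* assumed page size */
#define FOLIO_PAGES 4u		/* a 16 KiB large folio */
#define BLOCK_SIZE  1024u	/* blocksize < PAGE_SIZE, one bh per block */

int main(void)
{
	/*
	 * Before the fix, fscrypt_encrypt_pagecache_blocks() returned a
	 * single bounce page, so only index 0 into the bounce folio was
	 * valid.  The pre-fix write loop computed the page index as
	 * bh_offset(bh) / PAGE_SIZE; any buffer head past the first page
	 * therefore indexed beyond the one-page bounce folio.
	 */
	const unsigned bounce_pages = 1;
	unsigned off;

	for (off = 0; off < FOLIO_PAGES * PAGE_SIZE; off += BLOCK_SIZE) {
		unsigned nr = off / PAGE_SIZE;	/* nr = bh_offset(bh) / PAGE_SIZE */

		if (nr >= bounce_pages)
			printf("bh at folio offset %5u -> page index %u: out of bounds\n",
			       off, nr);
	}
	return 0;
}

With the patch applied, each buffer head is encrypted onto its own
bounce page, so the index into the bounce folio is always 0 and the
out-of-bounds lookup can no longer occur.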