From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 21 Mar 2026 05:15:07 -0700
In-Reply-To: <69bdcdcd.050a0220.3bf4de.0030.GAE@google.com>
Message-ID: <69be8bcb.050a0220.3bf4de.0050.GAE@google.com>
Subject: Forwarded: [PATCH] ext4: fix general protection fault in bio_add_page for encrypted large folios
From: syzbot
To: linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.

***

Subject: [PATCH] ext4: fix general protection fault in bio_add_page for encrypted large folios
Author: kartikey406@gmail.com

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

When writing back an encrypted file, ext4_bio_write_folio() encrypts the
folio into a single-page bounce buffer and passes it as io_folio to
io_submit_add_bh(). The offset passed to bio_add_folio() was always
bh_offset(bh), which is relative to the original folio.
For a large folio this offset can exceed PAGE_SIZE. bio_add_folio() calls
folio_page(io_folio, off >> PAGE_SHIFT), which computes &folio->page + N.
For a single-page bounce folio with N >= 1 this is out of bounds, causing
a general protection fault caught by KASAN:

KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
RIP: 0010:bvec_set_page include/linux/bvec.h:44 [inline]
RIP: 0010:bio_add_page+0x462/0x6e0 block/bio.c:1048

Fix this by computing io_off at the call site. For the non-encrypted path,
io_folio == folio, so bh_offset(bh) is used unchanged. For the encrypted
path the bounce page is always a single PAGE_SIZE page, so the offset is
taken modulo PAGE_SIZE to map it correctly into the bounce page. Using a
hardcoded 0 would be wrong for sub-page block sizes (e.g. 1024-byte
blocks), where multiple buffer heads exist within one page at offsets 0,
1024, 2048, 3072, etc. bh_offset(bh) % PAGE_SIZE handles all block sizes
correctly.

Reported-by: syzbot+ed8bc247f231c1a48e21@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ed8bc247f231c1a48e21
Signed-off-by: Deepanshu Kartikey
---
 fs/ext4/page-io.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index a8c95eee91b7..006b2f5173de 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -438,7 +438,8 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
 			     struct inode *inode,
 			     struct folio *folio,
 			     struct folio *io_folio,
-			     struct buffer_head *bh)
+			     struct buffer_head *bh,
+			     size_t io_off)
 {
 	if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
 			   !fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
@@ -449,7 +450,7 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
 		io_submit_init_bio(io, bh);
 		io->io_bio->bi_write_hint = inode->i_write_hint;
 	}
-	if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, bh_offset(bh)))
+	if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, io_off))
 		goto submit_and_retry;
 	wbc_account_cgroup_owner(io->io_wbc, folio, bh->b_size);
 	io->io_next_block++;
@@ -585,9 +586,20 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,
 	/* Now submit buffers to write */
 	do {
+		size_t io_off;
+
 		if (!buffer_async_write(bh))
 			continue;
-		io_submit_add_bh(io, inode, folio, io_folio, bh);
+		/*
+		 * When io_folio is a single-page bounce buffer (fscrypt),
+		 * normalise to PAGE_SIZE to handle all block sizes correctly.
+		 * Using 0 would break sub-page block sizes (e.g. 1024-byte
+		 * blocks) where multiple bh offsets exist within one page.
+		 */
+		io_off = (io_folio == folio)
+			? bh_offset(bh)
+			: bh_offset(bh) % PAGE_SIZE;
+		io_submit_add_bh(io, inode, folio, io_folio, bh, io_off);
 	} while ((bh = bh->b_this_page) != head);
 	return 0;
-- 
2.43.0