public inbox for linux-btrfs@vger.kernel.org
From: David Sterba <dsterba@suse.cz>
To: Qu Wenruo <wqu@suse.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC] btrfs: get rid of btrfs_(alloc|free)_compr_folio()
Date: Thu, 5 Mar 2026 04:09:11 +0100	[thread overview]
Message-ID: <20260305030911.GD5735@suse.cz> (raw)
In-Reply-To: <20260305025611.GC5735@twin.jikos.cz>

On Thu, Mar 05, 2026 at 03:56:11AM +0100, David Sterba wrote:
> On Mon, Mar 02, 2026 at 06:30:30PM +1030, Qu Wenruo wrote:
> > And hopefully this will address David's recent crash (as usual I'm not
> > able to reproduce locally).
> 
> I'll run the test with this patch.

Still crashes, so the lru was a false hunch.

[  110.693070] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff888100000000 pfn:0x111262
[  110.694052] flags: 0x4000000000000000(node=0|zone=2)
[  110.694596] raw: 4000000000000000 ffffea00040f2008 ffffea00042088c8 0000000000000000
[  110.695383] raw: ffff888100000000 0000000000000000 00000000ffffffff 0000000000000000
[  110.696164] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
[  110.696925] ------------[ cut here ]------------
[  110.697414] kernel BUG at ./include/linux/mm.h:1493!
[  110.697955] Oops: invalid opcode: 0000 [#1] SMP KASAN
[  110.698482] CPU: 8 UID: 0 PID: 12 Comm: kworker/u64:0 Not tainted 7.0.0-rc1-default+ #626 PREEMPT(full) 
[  110.699385] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-2-gc13ff2cd-prebuilt.qemu.org 04/01/2014
[  110.700464] Workqueue: btrfs-delalloc btrfs_work_helper [btrfs]
[  110.701110] RIP: 0010:btrfs_compress_bio+0x5c2/0x6a0 [btrfs]
[  110.702716] RSP: 0018:ffff8881003b79a0 EFLAGS: 00010286
[  110.703082] RAX: 000000000000003e RBX: ffff88810a83d5f8 RCX: 0000000000000000
[  110.703550] RDX: 000000000000003e RSI: 0000000000000004 RDI: ffffed1020076f27
[  110.704019] RBP: 1ffff11020076f37 R08: ffffffff8a444651 R09: fffffbfff195c438
[  110.704484] R10: 0000000000000003 R11: 0000000000000001 R12: ffffea00044498c0
[  110.704956] R13: ffffea00044498b4 R14: 0000000000000000 R15: ffffea0004449880
[  110.705555] FS:  0000000000000000(0000) GS:ffff88818baa0000(0000) knlGS:0000000000000000
[  110.706197] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  110.706664] CR2: 00007f4aa3c715a0 CR3: 0000000097a59000 CR4: 00000000000006b0
[  110.707131] Call Trace:
[  110.707335]  <TASK>
[  110.707522]  ? btrfs_compress_filemap_get_folio+0x130/0x130 [btrfs]
[  110.707999]  ? _raw_spin_unlock+0x1a/0x30
[  110.708307]  ? btrfs_compress_heuristic+0x48c/0x700 [btrfs]
[  110.708766]  compress_file_range+0x7b7/0x1640 [btrfs]
[  110.709169]  ? cow_file_range_inline.constprop.0+0x1b0/0x1b0 [btrfs]
[  110.709629]  ? __lock_acquire+0x568/0xbd0
[  110.709934]  ? lock_acquire.part.0+0xad/0x230
[  110.710240]  ? process_one_work+0x7ec/0x1590
[  110.710550]  ? submit_one_async_extent+0xb00/0xb00 [btrfs]
[  110.710970]  btrfs_work_helper+0x1c1/0x760 [btrfs]
[  110.711354]  ? lock_acquire+0x128/0x150
[  110.711635]  process_one_work+0x86b/0x1590
[  110.711934]  ? pwq_dec_nr_in_flight+0x720/0x720
[  110.712255]  ? lock_is_held_type+0x83/0xe0
[  110.712584]  worker_thread+0x5e9/0xfc0
[  110.712869]  ? process_one_work+0x1590/0x1590
[  110.713179]  kthread+0x323/0x410
[  110.713430]  ? _raw_spin_unlock_irq+0x1f/0x40
[  110.713741]  ? kthread_affine_node+0x1c0/0x1c0
[  110.714058]  ret_from_fork+0x476/0x5f0
[  110.714339]  ? arch_exit_to_user_mode_prepare.isra.0+0x60/0x60
[  110.714730]  ? __switch_to+0x22/0xe00
[  110.715011]  ? kthread_affine_node+0x1c0/0x1c0
[  110.715327]  ret_from_fork_asm+0x11/0x20
[  110.715616]  </TASK>
[  110.715806] Modules linked in: btrfs xor raid6_pq loop
[  110.716186] ---[ end trace 0000000000000000 ]---
[  110.716538] RIP: 0010:btrfs_compress_bio+0x5c2/0x6a0 [btrfs]
[  110.718125] RSP: 0018:ffff8881003b79a0 EFLAGS: 00010286
[  110.718488] RAX: 000000000000003e RBX: ffff88810a83d5f8 RCX: 0000000000000000
[  110.718958] RDX: 000000000000003e RSI: 0000000000000004 RDI: ffffed1020076f27
[  110.719448] RBP: 1ffff11020076f37 R08: ffffffff8a444651 R09: fffffbfff195c438
[  110.719912] R10: 0000000000000003 R11: 0000000000000001 R12: ffffea00044498c0
[  110.720373] R13: ffffea00044498b4 R14: 0000000000000000 R15: ffffea0004449880
[  110.720871] FS:  0000000000000000(0000) GS:ffff88818baa0000(0000) knlGS:0000000000000000
[  110.721409] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  110.721800] CR2: 00007f4aa3c715a0 CR3: 0000000097a59000 CR4: 00000000000006b0

Looks like the folio references are wrong; the assert in zlib related to the page
pool was just a symptom, and I think it was actually correct.

The line numbers do not reveal anything interesting:

(gdb) l *(btrfs_compress_bio+0x5c2)
0x1f38e2 is in btrfs_compress_bio (./include/linux/mm.h:1493).
1488    /*
1489     * Drop a ref, return true if the refcount fell to zero (the page has no users)
1490     */
1491    static inline int put_page_testzero(struct page *page)
1492    {
1493            VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
1494            return page_ref_dec_and_test(page);
1495    }
1496
1497    static inline int folio_put_testzero(struct folio *folio)

(gdb) l *(compress_file_range+0x7b7)
0xde947 is in compress_file_range (fs/btrfs/inode.c:1014).
1009            } else if (inode->prop_compress) {
1010                    compress_type = inode->prop_compress;
1011            }
1012
1013            /* Compression level is applied here. */
1014            cb = btrfs_compress_bio(inode, start, cur_len, compress_type,
1015                                     compress_level, async_chunk->write_flags);
1016            if (IS_ERR(cb)) {
1017                    cb = NULL;
1018                    goto mark_incompressible;

(gdb) l *(btrfs_compress_filemap_get_folio+0x130)
0x1f3320 is in btrfs_compress_bio (fs/btrfs/compression.c:902).
897      * to do the round up before submission.
898      */
899     struct compressed_bio *btrfs_compress_bio(struct btrfs_inode *inode,
900                                               u64 start, u32 len, unsigned int type,
901                                               int level, blk_opf_t write_flags)
902     {
903             struct btrfs_fs_info *fs_info = inode->root->fs_info;
904             struct list_head *workspace;
905             struct compressed_bio *cb;
906             int ret;


Thread overview: 7+ messages
2026-03-02  8:00 [PATCH RFC] btrfs: get rid of btrfs_(alloc|free)_compr_folio() Qu Wenruo
2026-03-05  2:56 ` David Sterba
2026-03-05  3:09   ` David Sterba [this message]
2026-03-05  4:33     ` Qu Wenruo
2026-03-05 17:46   ` Boris Burkov
2026-03-05 22:43     ` Qu Wenruo
2026-03-05 22:45       ` Boris Burkov
