From: Boris Burkov <boris@bur.io>
To: fdmanana@kernel.org
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v3 1/2] btrfs: qgroup: fix race between quota disable and quota rescan ioctl
Date: Mon, 30 Jun 2025 09:32:42 -0700
Message-ID: <20250630163242.GA61133@zen.localdomain>
In-Reply-To: <19f775a9f256c4a5146cc97b7f521464429c81bc.1751288689.git.fdmanana@suse.com>
On Mon, Jun 30, 2025 at 02:07:47PM +0100, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> There's a race between a task disabling quotas and another running the
> rescan ioctl that can result in a use-after-free of qgroup records from
> the fs_info->qgroup_tree rbtree.
>
> This happens as follows:
>
> 1) Task A enters btrfs_ioctl_quota_rescan() -> btrfs_qgroup_rescan();
>
> 2) Task B enters btrfs_quota_disable() and calls
> btrfs_qgroup_wait_for_completion(), which does nothing because at that
> point fs_info->qgroup_rescan_running is false (it wasn't set yet by
> task A);
>
> 3) Task B calls btrfs_free_qgroup_config() which starts freeing qgroups
> from fs_info->qgroup_tree without taking the lock fs_info->qgroup_lock;
>
> 4) Task A enters qgroup_rescan_zero_tracking() which starts iterating
> the fs_info->qgroup_tree tree while holding fs_info->qgroup_lock,
> but task B is freeing qgroup records from that tree without holding
> the lock, resulting in a use-after-free.
>
> Fix this by taking fs_info->qgroup_lock at btrfs_free_qgroup_config().
> Also at btrfs_qgroup_rescan() don't start the rescan worker if quotas
> were already disabled.
>
> Reported-by: cen zhang <zzzccc427@gmail.com>
> Link: https://lore.kernel.org/linux-btrfs/CAFRLqsV+cMDETFuzqdKSHk_FDm6tneea45krsHqPD6B3FetLpQ@mail.gmail.com/
> Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Boris Burkov <boris@bur.io>
> ---
> fs/btrfs/qgroup.c | 26 +++++++++++++++++++-------
> 1 file changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
> index b83d9534adae..8fa874ef80b3 100644
> --- a/fs/btrfs/qgroup.c
> +++ b/fs/btrfs/qgroup.c
> @@ -636,22 +636,30 @@ bool btrfs_check_quota_leak(const struct btrfs_fs_info *fs_info)
>
> /*
> * This is called from close_ctree() or open_ctree() or btrfs_quota_disable(),
> - * first two are in single-threaded paths.And for the third one, we have set
> - * quota_root to be null with qgroup_lock held before, so it is safe to clean
> - * up the in-memory structures without qgroup_lock held.
> + * first two are in single-threaded paths.
> */
> void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)
> {
> struct rb_node *n;
> struct btrfs_qgroup *qgroup;
>
> + /*
> + * btrfs_quota_disable() can be called concurrently with
> + * btrfs_qgroup_rescan() -> qgroup_rescan_zero_tracking(), so take the
> + * lock.
> + */
> + spin_lock(&fs_info->qgroup_lock);
> while ((n = rb_first(&fs_info->qgroup_tree))) {
> qgroup = rb_entry(n, struct btrfs_qgroup, node);
> rb_erase(n, &fs_info->qgroup_tree);
> __del_qgroup_rb(qgroup);
> + spin_unlock(&fs_info->qgroup_lock);
> btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
> kfree(qgroup);
> + spin_lock(&fs_info->qgroup_lock);
> }
> + spin_unlock(&fs_info->qgroup_lock);
> +
> /*
> * We call btrfs_free_qgroup_config() when unmounting
> * filesystem and disabling quota, so we set qgroup_ulist
> @@ -4036,12 +4044,16 @@ btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info)
> qgroup_rescan_zero_tracking(fs_info);
>
> mutex_lock(&fs_info->qgroup_rescan_lock);
> - fs_info->qgroup_rescan_running = true;
> - btrfs_queue_work(fs_info->qgroup_rescan_workers,
> - &fs_info->qgroup_rescan_work);
> + if (test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
Could this be one of the helpers, like !btrfs_qgroup_enabled(), or maybe
even better !btrfs_qgroup_full_accounting()?
> + fs_info->qgroup_rescan_running = true;
> + btrfs_queue_work(fs_info->qgroup_rescan_workers,
> + &fs_info->qgroup_rescan_work);
> + } else {
> + ret = -ENOTCONN;
> + }
> mutex_unlock(&fs_info->qgroup_rescan_lock);
>
> - return 0;
> + return ret;
> }
>
> int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
> --
> 2.47.2
>