From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, cen zhang, Boris Burkov, Qu Wenruo, Filipe Manana, David Sterba
Subject: [PATCH 6.15 469/515] btrfs: qgroup: fix race between quota disable and quota rescan ioctl
Date: Mon, 18 Aug 2025 14:47:35 +0200
Message-ID: <20250818124516.475018684@linuxfoundation.org>
In-Reply-To: <20250818124458.334548733@linuxfoundation.org>
References: <20250818124458.334548733@linuxfoundation.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.15-stable review patch. If anyone has any objections, please let me know.

------------------

From: Filipe Manana

commit e1249667750399a48cafcf5945761d39fa584edf upstream.

There's a race between a task disabling quotas and another running the
rescan ioctl that can result in a use-after-free of qgroup records from
the fs_info->qgroup_tree rbtree.
This happens as follows:

1) Task A enters btrfs_ioctl_quota_rescan() -> btrfs_qgroup_rescan();

2) Task B enters btrfs_quota_disable() and calls
   btrfs_qgroup_wait_for_completion(), which does nothing because at that
   point fs_info->qgroup_rescan_running is false (it wasn't set yet by
   task A);

3) Task B calls btrfs_free_qgroup_config() which starts freeing qgroups
   from fs_info->qgroup_tree without taking the lock fs_info->qgroup_lock;

4) Task A enters qgroup_rescan_zero_tracking() which starts iterating the
   fs_info->qgroup_tree tree while holding fs_info->qgroup_lock, but task B
   is freeing qgroup records from that tree without holding the lock,
   resulting in a use-after-free.

Fix this by taking fs_info->qgroup_lock at btrfs_free_qgroup_config().
Also at btrfs_qgroup_rescan() don't start the rescan worker if quotas
were already disabled.

Reported-by: cen zhang
Link: https://lore.kernel.org/linux-btrfs/CAFRLqsV+cMDETFuzqdKSHk_FDm6tneea45krsHqPD6B3FetLpQ@mail.gmail.com/
CC: stable@vger.kernel.org # 6.1+
Reviewed-by: Boris Burkov
Reviewed-by: Qu Wenruo
Signed-off-by: Filipe Manana
Signed-off-by: David Sterba
Signed-off-by: Greg Kroah-Hartman
---
 fs/btrfs/qgroup.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -630,22 +630,30 @@ bool btrfs_check_quota_leak(const struct
 
 /*
  * This is called from close_ctree() or open_ctree() or btrfs_quota_disable(),
- * first two are in single-threaded paths.And for the third one, we have set
- * quota_root to be null with qgroup_lock held before, so it is safe to clean
- * up the in-memory structures without qgroup_lock held.
+ * first two are in single-threaded paths.
  */
 void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)
 {
 	struct rb_node *n;
 	struct btrfs_qgroup *qgroup;
 
+	/*
+	 * btrfs_quota_disable() can be called concurrently with
+	 * btrfs_qgroup_rescan() -> qgroup_rescan_zero_tracking(), so take the
+	 * lock.
+	 */
+	spin_lock(&fs_info->qgroup_lock);
 	while ((n = rb_first(&fs_info->qgroup_tree))) {
 		qgroup = rb_entry(n, struct btrfs_qgroup, node);
 		rb_erase(n, &fs_info->qgroup_tree);
 		__del_qgroup_rb(qgroup);
+		spin_unlock(&fs_info->qgroup_lock);
 		btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
 		kfree(qgroup);
+		spin_lock(&fs_info->qgroup_lock);
 	}
+	spin_unlock(&fs_info->qgroup_lock);
+
 	/*
 	 * We call btrfs_free_qgroup_config() when unmounting
 	 * filesystem and disabling quota, so we set qgroup_ulist
@@ -4040,12 +4048,21 @@ btrfs_qgroup_rescan(struct btrfs_fs_info
 	qgroup_rescan_zero_tracking(fs_info);
 
 	mutex_lock(&fs_info->qgroup_rescan_lock);
-	fs_info->qgroup_rescan_running = true;
-	btrfs_queue_work(fs_info->qgroup_rescan_workers,
-			 &fs_info->qgroup_rescan_work);
+	/*
+	 * The rescan worker is only for full accounting qgroups, check if it's
+	 * enabled as it is pointless to queue it otherwise. A concurrent quota
+	 * disable may also have just cleared BTRFS_FS_QUOTA_ENABLED.
+	 */
+	if (btrfs_qgroup_full_accounting(fs_info)) {
+		fs_info->qgroup_rescan_running = true;
+		btrfs_queue_work(fs_info->qgroup_rescan_workers,
+				 &fs_info->qgroup_rescan_work);
+	} else {
+		ret = -ENOTCONN;
+	}
 	mutex_unlock(&fs_info->qgroup_rescan_lock);
 
-	return 0;
+	return ret;
 }
 
 int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
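
As a rough illustration of why taking fs_info->qgroup_lock in the teardown
path closes the race, here is a minimal userspace model of the two paths,
written with plain pthreads and made-up names (a sketch of the pattern, not
the kernel code): teardown() stands in for btrfs_free_qgroup_config() and
rescan() for qgroup_rescan_zero_tracking(). The teardown side unlinks each
node while holding the lock and only drops it around the work that may
sleep, so the walker can never dereference freed memory; without that lock,
rescan() could touch a node being freed concurrently, which is exactly the
use-after-free described above.

/* Minimal userspace model of the fixed locking pattern (illustration only,
 * not kernel code). One thread tears the structure down, the other walks it.
 */
#include <pthread.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int id;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *head;

/* Stands in for btrfs_free_qgroup_config(): unlink under the lock, drop it
 * around the work that may sleep (modeled by free() here), then re-take it
 * before looking at the list again.
 */
static void *teardown(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (head) {
		struct node *n = head;

		head = n->next;			/* like rb_erase() under qgroup_lock */
		pthread_mutex_unlock(&lock);	/* like dropping qgroup_lock for sysfs del + kfree */
		free(n);
		pthread_mutex_lock(&lock);
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* Stands in for qgroup_rescan_zero_tracking(): walk the structure under the
 * same lock, so no node can be freed while we hold a pointer to it.
 */
static void *rescan(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	for (struct node *n = head; n; n = n->next)
		n->id = 0;
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	for (int i = 0; i < 1000; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			return 1;
		n->id = i;
		n->next = head;
		head = n;
	}
	pthread_create(&a, NULL, rescan, NULL);
	pthread_create(&b, NULL, teardown, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}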