From: Waiman Long <longman@redhat.com>
To: Chen Ridong, Tejun Heo, Johannes Weiner, Michal Koutný, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, Anna-Maria Behnsen, Frederic Weisbecker, Thomas Gleixner, Shuah Khan
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, Waiman Long
Subject: [PATCH/for-next v4 2/4] cgroup/cpuset: Defer housekeeping_update() calls from CPU hotplug to workqueue
Date: Fri, 6 Feb 2026 15:37:10 -0500
Message-ID: <20260206203712.1989610-3-longman@redhat.com>
In-Reply-To: <20260206203712.1989610-1-longman@redhat.com>
References: <20260206203712.1989610-1-longman@redhat.com>

The update_isolation_cpumasks() function can be called either directly
from a regular cpuset control file write, with cpuset_full_lock() held,
or via the CPU hotplug path, with cpus_write_lock and cpuset_mutex held.

As we are going to enable dynamic updates to the nohz_full housekeeping
cpumask (HK_TYPE_KERNEL_NOISE) soon with the help of CPU hotplug,
allowing the CPU hotplug path to call into housekeeping_update()
directly from update_isolation_cpumasks() will likely cause a deadlock.
So we have to defer any call to housekeeping_update() until after the
CPU hotplug operation has finished. This is now done via a workqueue,
where the actual housekeeping_update() call, if needed, happens after
cpus_write_lock is released. We can't use the synchronous task_work API
because calls from the CPU hotplug path happen in the per-cpu kthread
of the CPU that is being shut down or brought up.

Because of the asynchronous nature of the workqueue, the HK_TYPE_DOMAIN
housekeeping cpumask will be updated a bit later than the
"cpuset.cpus.isolated" control file in this case.

Also add a check in test_cpuset_prs.sh and modify some existing test
cases to confirm that both "cpuset.cpus.isolated" and the HK_TYPE_DOMAIN
housekeeping cpumask are updated.
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset.c                          | 41 +++++++++++++++++--
 .../selftests/cgroup/test_cpuset_prs.sh         | 13 ++++--
 2 files changed, 48 insertions(+), 6 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index a4c6386a594d..eb0eabd85e8c 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1302,6 +1302,17 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
 	return false;
 }
 
+static void isolcpus_workfn(struct work_struct *work)
+{
+	cpuset_full_lock();
+	if (isolated_cpus_updating) {
+		isolated_cpus_updating = false;
+		WARN_ON_ONCE(housekeeping_update(isolated_cpus) < 0);
+		rebuild_sched_domains_locked();
+	}
+	cpuset_full_unlock();
+}
+
 /*
  * update_isolation_cpumasks - Update external isolation related CPU masks
  *
@@ -1310,14 +1321,38 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
  */
 static void update_isolation_cpumasks(void)
 {
-	int ret;
+	static DECLARE_WORK(isolcpus_work, isolcpus_workfn);
+
 	lockdep_assert_cpuset_lock_held();
 
 	if (!isolated_cpus_updating)
 		return;
-	ret = housekeeping_update(isolated_cpus);
-	WARN_ON_ONCE(ret < 0);
+	/*
+	 * This function can be reached either directly from regular cpuset
+	 * control file write or via CPU hotplug. In the latter case, it is
+	 * the per-cpu kthread that calls cpuset_handle_hotplug() on behalf
+	 * of the task that initiates CPU shutdown or bringup.
+	 *
+	 * To have better flexibility and prevent the possibility of deadlock
+	 * when calling from CPU hotplug, we defer the housekeeping_update()
+	 * call to after the current cpuset critical section has finished.
+	 * This is done via workqueue.
+	 */
+	if (current->flags & PF_KTHREAD) {
+		/*
+		 * We rely on WORK_STRUCT_PENDING_BIT to not requeue a work
+		 * item that is still pending. Before the pending bit is
+		 * cleared, the work data is copied out and work item dequeued.
+		 * So it is possible to queue the work again before the
+		 * isolcpus_workfn() is invoked to process the previously
+		 * queued work. Since isolcpus_workfn() doesn't use the work
+		 * item at all, this is not a problem.
+		 */
+		queue_work(system_unbound_wq, &isolcpus_work);
+		return;
+	}
+	WARN_ON_ONCE(housekeeping_update(isolated_cpus) < 0);
 	isolated_cpus_updating = false;
 }
 
diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
index 5dff3ad53867..0502b156582b 100755
--- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
@@ -245,8 +245,9 @@ TEST_MATRIX=(
 	"C2-3:P1:S+ C3:P2 . . O2=0 O2=1 . . 0 A1:2|A2:3 A1:P1|A2:P2"
 	"C2-3:P1:S+ C3:P1 . . O2=0 . . . 0 A1:|A2:3 A1:P1|A2:P1"
 	"C2-3:P1:S+ C3:P1 . . O3=0 . . . 0 A1:2|A2: A1:P1|A2:P1"
-	"C2-3:P1:S+ C3:P1 . . T:O2=0 . . . 0 A1:3|A2:3 A1:P1|A2:P-1"
-	"C2-3:P1:S+ C3:P1 . . . T:O3=0 . . 0 A1:2|A2:2 A1:P1|A2:P-1"
+	"C2-3:P1:S+ C3:P2 . . T:O2=0 . . . 0 A1:3|A2:3 A1:P1|A2:P-2"
+	"C1-3:P1:S+ C3:P2 . . . T:O3=0 . . 0 A1:1-2|A2:1-2 A1:P1|A2:P-2 3|"
+	"C1-3:P1:S+ C3:P2 . . . T:O3=0 O3=1 . 0 A1:1-2|A2:3 A1:P1|A2:P2 3"
 	"$SETUP_A123_PARTITIONS . O1=0 . . . 0 A1:|A2:2|A3:3 A1:P1|A2:P1|A3:P1"
 	"$SETUP_A123_PARTITIONS . O2=0 . . . 0 A1:1|A2:|A3:3 A1:P1|A2:P1|A3:P1"
 	"$SETUP_A123_PARTITIONS . O3=0 . . . 0 A1:1|A2:2|A3: A1:P1|A2:P1|A3:P1"
@@ -764,7 +765,7 @@ check_cgroup_states()
 # only CPUs in isolated partitions as well as those that are isolated at
 # boot time.
 #
-# $1 - expected isolated cpu list(s) {,}
+# $1 - expected isolated cpu list(s) {|}
 #    - expected sched/domains value
 #    - cpuset.cpus.isolated value = if not defined
 #
@@ -773,6 +774,7 @@ check_isolcpus()
 	EXPECTED_ISOLCPUS=$1
 	ISCPUS=${CGROUP2}/cpuset.cpus.isolated
 	ISOLCPUS=$(cat $ISCPUS)
+	HKICPUS=$(cat /sys/devices/system/cpu/isolated)
 	LASTISOLCPU=
 	SCHED_DOMAINS=/sys/kernel/debug/sched/domains
 	if [[ $EXPECTED_ISOLCPUS = . ]]
@@ -810,6 +812,11 @@ check_isolcpus()
 	ISOLCPUS=
 	EXPECTED_ISOLCPUS=$EXPECTED_SDOMAIN
 
+	#
+	# The inverse of HK_TYPE_DOMAIN cpumask in $HKICPUS should match $ISOLCPUS
+	#
+	[[ "$ISOLCPUS" != "$HKICPUS" ]] && return 1
+
 	#
 	# Use the sched domain in debugfs to check isolated CPUs, if available
 	#
-- 
2.52.0