From: Waiman Long <longman@redhat.com>
To: Chen Ridong, Tejun Heo, Johannes Weiner, Michal Koutný
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Dietmar Eggemann, Aaron Tomlin, Juri Lelli, Waiman Long
Subject: [PATCH cgroup/for-next 4/4] cgroup/cpuset: Support multiple source/destination cpusets for cpuset_*attach()
Date: Thu, 14 May 2026 13:02:40 -0400
Message-ID: <20260514170240.575156-5-longman@redhat.com>
In-Reply-To: <20260514170240.575156-1-longman@redhat.com>
References: <20260514170240.575156-1-longman@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With cgroup v2, the cgroup_taskset structure passed into the cgroup can_attach() and attach() methods can contain task migration data with multiple destination cpusets (when the cpuset controller is being enabled) or multiple source cpusets (when it is being disabled). Since the cpuset controller is threaded, another way to cause a many-to-one migration is to move a whole process whose threads sit in different cpuset-enabled threaded cgroups into another cpuset-enabled cgroup. Alternatively, multiple processes from different cpusets can be written into cgroup.procs as a single operation.

The current cpuset_can_attach() and cpuset_attach() functions still expect a task migration to go from one source cpuset to one destination cpuset. This has been the case since cpuset was enabled for cgroup v2 in commit 4ec22e9c5a90 ("cpuset: Enable cpuset controller in default hierarchy").

This is less of an issue when enabling the cpuset controller, as all the newly created child cpusets have exactly the same set of CPUs and memory nodes, except when deadline tasks are involved in the migration, in which case the deadline task accounting data can be off. It is more problematic when the cpuset controller is disabled, as the child cpusets' CPUs and memory nodes may differ from those of their parent, or when a multi-threaded process is moved out of different threaded cgroups.

Fix this by tracking the sets of source (old) and destination cpusets in singly linked lists and iterating over all of them to properly update the internal data. Also keep the current cs and oldcs variables up to date with the css and task iterators. cpuset_attach_old_cs is dropped, as the old cpusets are now tracked. To ensure proper DL task accounting, nr_migrate_dl_tasks is decremented in each source cpuset and incremented in each destination cpuset, and those values are added to nr_deadline_tasks once the migration succeeds.
Fixes: 4ec22e9c5a90 ("cpuset: Enable cpuset controller in default hierarchy")
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset-internal.h |   6 +
 kernel/cgroup/cpuset.c          | 206 +++++++++++++++++++++++---------
 2 files changed, 158 insertions(+), 54 deletions(-)

diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
index f7aaf01f7cd5..4c2772a7fd5e 100644
--- a/kernel/cgroup/cpuset-internal.h
+++ b/kernel/cgroup/cpuset-internal.h
@@ -161,6 +161,12 @@ struct cpuset {
	 */
	bool remote_partition;
 
+	/*
+	 * cpuset_can_attach() and cpuset_attach() specific data
+	 */
+	bool attach_node_in_llist;
+	struct llist_node attach_node;
+
	/*
	 * number of SCHED_DEADLINE tasks attached to this cpuset, so that we
	 * know when to rebuild associated root domain bandwidth information.
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 8ced1fa0900f..c46454b29d74 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include
 
 DEFINE_STATIC_KEY_FALSE(cpusets_pre_enable_key);
 DEFINE_STATIC_KEY_FALSE(cpusets_enabled_key);
@@ -2967,7 +2968,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 /*
  * cpuset_can_attach() and cpuset_attach() specific internal data
  */
-static struct cpuset *cpuset_attach_old_cs;
+static LLIST_HEAD(src_cs_head);
+static LLIST_HEAD(dst_cs_head);
 static bool attach_cpus_updated;
 static bool attach_mems_updated;
 
@@ -2980,9 +2982,10 @@ static bool attach_mems_updated;
  * Also set the boolean flag passed in by @psetsched depending on if
  * security_task_setscheduler() call is needed and @oldcs is not NULL.
  */
-static int cpuset_can_attach_check(struct cpuset *cs, struct cpuset *oldcs,
-				   bool *psetsched)
+static int cpuset_can_attach_check(struct cpuset *cs, struct cpuset *oldcs, bool *psetsched)
 {
+	bool cpu_match, mem_match;
+
	if (cpumask_empty(cs->effective_cpus) ||
	    (!is_in_v2_mode() && nodes_empty(cs->mems_allowed)))
		return -ENOSPC;
@@ -2993,15 +2996,34 @@ static int cpuset_can_attach_check(struct cpuset *cs, struct cpuset *oldcs,
	/*
	 * Update attach specific data
	 */
-	attach_cpus_updated = !cpumask_equal(cs->effective_cpus, oldcs->effective_cpus);
-	attach_mems_updated = !nodes_equal(cs->effective_mems, oldcs->effective_mems);
+	if (!cs->attach_node_in_llist) {
+		llist_add(&cs->attach_node, &dst_cs_head);
+		cs->attach_node_in_llist = true;
+	}
+	if (!oldcs->attach_node_in_llist) {
+		llist_add(&oldcs->attach_node, &src_cs_head);
+		oldcs->attach_node_in_llist = true;
+	}
+
+	cpu_match = cpumask_equal(cs->effective_cpus, oldcs->effective_cpus);
+	mem_match = nodes_equal(cs->effective_mems, oldcs->effective_mems);
+
+	/*
+	 * Set the updated flags whenever there is a mismatch in any of the
+	 * src/dst pairs.
+	 */
+	if (!attach_cpus_updated)
+		attach_cpus_updated = !cpu_match;
+
+	if (!attach_mems_updated)
+		attach_mems_updated = !mem_match;
 
	/*
	 * Skip rights over task setsched check in v2 when nothing changes,
	 * migration permission derives from hierarchy ownership in
	 * cgroup_procs_write_permission()).
	 */
-	*psetsched = !cpuset_v2() || attach_cpus_updated || attach_mems_updated;
+	*psetsched = !cpuset_v2() || !cpu_match || !mem_match;
 
	/*
	 * A v1 cpuset with tasks will have no CPU left only when CPU hotplug
@@ -3016,33 +3038,105 @@ static int cpuset_can_attach_check(struct cpuset *cs, struct cpuset *oldcs,
	return 0;
 }
 
-static int alloc_dl_bw(struct cpuset *cs)
+/*
+ * If reset_dl_bw is set, reset the previous dl_bw_alloc() call. Otherwise,
+ * update nr_deadline_tasks according to nr_migrate_dl_tasks in both source
+ * and destination cpusets.
+ */
+static void clear_attach_data(bool reset_dl_bw)
+{
+	struct cpuset *cs, *next;
+
+	llist_for_each_entry_safe(cs, next, src_cs_head.first, attach_node) {
+		cs->attach_node.next = NULL;
+		cs->attach_node_in_llist = false;
+		if (cs->nr_migrate_dl_tasks && !reset_dl_bw)
+			cs->nr_deadline_tasks += cs->nr_migrate_dl_tasks;
+		cs->nr_migrate_dl_tasks = 0;
+	}
+
+	llist_for_each_entry_safe(cs, next, dst_cs_head.first, attach_node) {
+		cs->attach_node.next = NULL;
+		cs->attach_node_in_llist = false;
+		if (reset_dl_bw && cs->dl_bw_cpu >= 0)
+			dl_bw_free(cs->dl_bw_cpu, cs->sum_migrate_dl_bw);
+		if (cs->nr_migrate_dl_tasks && !reset_dl_bw)
+			cs->nr_deadline_tasks += cs->nr_migrate_dl_tasks;
+		cs->nr_migrate_dl_tasks = 0;
+		cs->sum_migrate_dl_bw = 0;
+		cs->dl_bw_cpu = -1;
+	}
+
+	src_cs_head.first = NULL;
+	dst_cs_head.first = NULL;
+	attach_cpus_updated = false;
+	attach_mems_updated = false;
+}
+
+static int alloc_dl_bw(void)
 {
+	struct cpuset *cs;
	int cpu, ret;
 
-	if (!cs->sum_migrate_dl_bw)
-		return 0;
+	llist_for_each_entry(cs, dst_cs_head.first, attach_node) {
+		if (!cs->sum_migrate_dl_bw)
+			continue;
 
-	cpu = cpumask_any_and(cpu_active_mask, cs->effective_cpus);
-	if (unlikely(cpu >= nr_cpu_ids))
-		return -EINVAL;
+		cpu = cpumask_any_and(cpu_active_mask, cs->effective_cpus);
+		if (unlikely(cpu >= nr_cpu_ids))
+			return -EINVAL;
 
-	ret = dl_bw_alloc(cpu, cs->sum_migrate_dl_bw);
-	if (ret)
-		return ret;
+		ret = dl_bw_alloc(cpu, cs->sum_migrate_dl_bw);
+		if (ret)
+			return ret;
 
-	cs->dl_bw_cpu = cpu;
+		cs->dl_bw_cpu = cpu;
+	}
	return 0;
 }
 
-static void reset_migrate_dl_data(struct cpuset *cs)
+static void set_attach_in_progress(void)
 {
-	cs->nr_migrate_dl_tasks = 0;
-	cs->sum_migrate_dl_bw = 0;
-	cs->dl_bw_cpu = -1;
+	struct cpuset *cs;
+
+	/*
+	 * Mark attach is in progress. This makes validate_change() fail
+	 * changes which zero cpus/mems_allowed.
+	 */
+	llist_for_each_entry(cs, dst_cs_head.first, attach_node)
+		cs->attach_in_progress++;
 }
 
-/* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */
+static void reset_attach_in_progress(void)
+{
+	struct cpuset *cs;
+
+	llist_for_each_entry(cs, dst_cs_head.first, attach_node)
+		dec_attach_in_progress_locked(cs);
+}
+
+/*
+ * Called by cgroups to determine if a cpuset is usable; cpuset_mutex held.
+ *
+ * With cgroup v2, enabling of cpuset controller in a cgroup subtree can
+ * cause @tset to contain task migration data from one parent cpuset to multiple
+ * child cpusets. Not much is needed to be done here other than tracking the
+ * number of DL tasks in each cpuset as the CPUs and memory nodes of the child
+ * cpusets are exactly the same as the parent.
+ *
+ * Conversely, disabling of cpuset controller can cause @tset to contain task
+ * migration data from multiple child cpusets to one parent cpuset. Here, the
+ * CPUs and memory nodes of the child cpusets may be different from the parent,
+ * but must be a subset of its parent.
+ *
+ * Another possible many-to-one migration is the moving of the whole
+ * multithreaded process with threads in different cpusets to another cpuset.
+ * Alternatively, multiple processes from multiple cpusets can be moved to
+ * another cpuset in a single operation.
+ *
+ * For all other use cases including cgroup v1, @tset task migration data
+ * should be from one source cpuset to one destination cpuset.
+ */
 static int cpuset_can_attach(struct cgroup_taskset *tset)
 {
	struct cgroup_subsys_state *css;
@@ -3052,8 +3146,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
	int ret;
 
	/* used later by cpuset_attach() */
-	cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset, &css));
-	oldcs = cpuset_attach_old_cs;
+	oldcs = task_cs(cgroup_taskset_first(tset, &css));
	cs = css_cs(css);
 
	mutex_lock(&cpuset_mutex);
@@ -3064,6 +3157,16 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
		goto out_unlock;
 
	cgroup_taskset_for_each(task, css, tset) {
+		struct cpuset *newcs = css_cs(css);
+		struct cpuset *new_oldcs = task_cs(task);
+
+		if ((newcs != cs) || (new_oldcs != oldcs)) {
+			cs = newcs;
+			oldcs = new_oldcs;
+			ret = cpuset_can_attach_check(cs, oldcs, &setsched_check);
+			if (ret)
+				goto out_unlock;
+		}
		ret = task_can_attach(task);
		if (ret)
			goto out_unlock;
@@ -3081,23 +3184,18 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
		 * contribute to sum_migrate_dl_bw.
		 */
		cs->nr_migrate_dl_tasks++;
+		oldcs->nr_migrate_dl_tasks--;
		if (dl_task_needs_bw_move(task, cs->effective_cpus))
			cs->sum_migrate_dl_bw += task->dl.dl_bw;
		}
	}
 
-	ret = alloc_dl_bw(cs);
-
+	ret = alloc_dl_bw();
 out_unlock:
	if (ret)
-		reset_migrate_dl_data(cs);
+		clear_attach_data(true);
	else
-		/*
-		 * Mark attach is in progress. This makes validate_change() fail
-		 * changes which zero cpus/mems_allowed.
-		 */
-		cs->attach_in_progress++;
-
+		set_attach_in_progress();
	mutex_unlock(&cpuset_mutex);
	return ret;
 }
@@ -3111,14 +3209,8 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
	cs = css_cs(css);
 
	mutex_lock(&cpuset_mutex);
-	dec_attach_in_progress_locked(cs);
-
-	if (cs->dl_bw_cpu >= 0)
-		dl_bw_free(cs->dl_bw_cpu, cs->sum_migrate_dl_bw);
-
-	if (cs->nr_migrate_dl_tasks)
-		reset_migrate_dl_data(cs);
-
+	reset_attach_in_progress();
+	clear_attach_data(true);
	mutex_unlock(&cpuset_mutex);
 }
@@ -3172,8 +3264,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
	struct task_struct *task;
	struct task_struct *leader;
	struct cgroup_subsys_state *css;
-	struct cpuset *cs;
-	struct cpuset *oldcs = cpuset_attach_old_cs;
+	struct cpuset *cs, *oldcs;
	bool queue_task_work = false;
 
	cgroup_taskset_first(tset, &css);
@@ -3184,9 +3275,9 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 
	/*
	 * In the default hierarchy, enabling cpuset in the child cgroups
-	 * will trigger a number of cpuset_attach() calls with no change
-	 * in effective cpus and mems. In that case, we can optimize out
-	 * by skipping the task iteration and update.
+	 * will trigger a cpuset_attach() call with no change in effective cpus
+	 * and mems. In that case, we can optimize out by skipping the task
+	 * iteration and update.
	 */
	if (cpuset_v2() && !attach_cpus_updated && !attach_mems_updated) {
		cpuset_attach_nodemask_to = cs->effective_mems;
@@ -3195,8 +3286,16 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 
	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
 
-	cgroup_taskset_for_each(task, css, tset)
+	cgroup_taskset_for_each(task, css, tset) {
+		struct cpuset *newcs = css_cs(css);
+
+		if (newcs != cs) {
+			cs->old_mems_allowed = cs->effective_mems;
+			cs = newcs;
+			guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
+		}
		cpuset_attach_task(cs, task);
+	}
 
	/*
	 * Change mm for all threadgroup leaders. This is expensive and may
@@ -3208,6 +3307,11 @@ static void cpuset_attach(struct cgroup_taskset *tset)
	if (!is_memory_migrate(cs))
		goto out;
 
+	/*
+	 * Only v1 supports memory_migrate and there should only be one source
+	 * and one destination cpuset.
+	 */
+	oldcs = llist_entry(src_cs_head.first, struct cpuset, attach_node);
	cgroup_taskset_for_each_leader(leader, css, tset) {
		struct mm_struct *mm = get_task_mm(leader);
 
@@ -3231,14 +3335,8 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 out:
	cs->old_mems_allowed = cpuset_attach_nodemask_to;
 
-	if (cs->nr_migrate_dl_tasks) {
-		cs->nr_deadline_tasks += cs->nr_migrate_dl_tasks;
-		oldcs->nr_deadline_tasks -= cs->nr_migrate_dl_tasks;
-		reset_migrate_dl_data(cs);
-	}
-
-	dec_attach_in_progress_locked(cs);
-
+	reset_attach_in_progress();
+	clear_attach_data(false);
	mutex_unlock(&cpuset_mutex);
 }
-- 
2.54.0