From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton, Bjorn Helgaas, Catalin Marinas, Danilo Krummrich, David S. Miller, Eric Dumazet, Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski, Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari, Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld, Rafael J. Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman, Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long, Will Deacon, cgroups@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-block@vger.kernel.org, linux-mm@kvack.org, linux-pci@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 14/31] sched/isolation: Flush memcg workqueues on cpuset isolated partition change
Date: Wed, 5 Nov 2025 22:03:30 +0100
Message-ID: <20251105210348.35256-15-frederic@kernel.org>
In-Reply-To: <20251105210348.35256-1-frederic@kernel.org>
References: <20251105210348.35256-1-frederic@kernel.org>

The HK_TYPE_DOMAIN housekeeping cpumask is
now modifiable at runtime. In order to make sure that no asynchronous
draining is still pending or executing on a newly made isolated CPU,
the housekeeping subsystem must flush the memcg workqueues. However
the memcg workqueues can't be flushed easily since their work items
are queued to the main per-CPU workqueue pool. Solve this by creating
a memcg-specific pool and by providing and using the appropriate
flushing API.

Acked-by: Shakeel Butt
Signed-off-by: Frederic Weisbecker
---
 include/linux/memcontrol.h |  4 ++++
 kernel/sched/isolation.c   |  2 ++
 kernel/sched/sched.h       |  1 +
 mm/memcontrol.c            | 12 +++++++++++-
 4 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 873e510d6f8d..001200df63cf 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1074,6 +1074,8 @@ static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
 	return id;
 }
 
+void mem_cgroup_flush_workqueue(void);
+
 extern int mem_cgroup_init(void);
 
 #else /* CONFIG_MEMCG */
@@ -1481,6 +1483,8 @@ static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
 	return 0;
 }
 
+static inline void mem_cgroup_flush_workqueue(void) { }
+
 static inline int mem_cgroup_init(void) { return 0; }
 
 #endif /* CONFIG_MEMCG */
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 80a5b7c6400c..16c912dd91d2 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -145,6 +145,8 @@ int housekeeping_update(struct cpumask *mask, enum hk_type type)
 
 	synchronize_rcu();
 
+	mem_cgroup_flush_workqueue();
+
 	kfree(old);
 
 	return 0;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5a44e85d4864..77034d20b4e8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -44,6 +44,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1033e52ab6cf..4d1f680a4bb0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -95,6 +95,8 @@
 static bool cgroup_memory_nokmem __ro_after_init;
 /* BPF memory accounting disabled? */
 static bool cgroup_memory_nobpf __ro_after_init;
 
+static struct workqueue_struct *memcg_wq __ro_after_init;
+
 static struct kmem_cache *memcg_cachep;
 static struct kmem_cache *memcg_pn_cachep;
@@ -1975,7 +1977,7 @@ static void schedule_drain_work(int cpu, struct work_struct *work)
 {
 	guard(rcu)();
 	if (!cpu_is_isolated(cpu))
-		schedule_work_on(cpu, work);
+		queue_work_on(cpu, memcg_wq, work);
 }
 
 /*
@@ -5092,6 +5094,11 @@ void mem_cgroup_sk_uncharge(const struct sock *sk, unsigned int nr_pages)
 	refill_stock(memcg, nr_pages);
 }
 
+void mem_cgroup_flush_workqueue(void)
+{
+	flush_workqueue(memcg_wq);
+}
+
 static int __init cgroup_memory(char *s)
 {
 	char *token;
@@ -5134,6 +5141,9 @@ int __init mem_cgroup_init(void)
 	cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead",
 				  NULL, memcg_hotplug_cpu_dead);
 
+	memcg_wq = alloc_workqueue("memcg", WQ_PERCPU, 0);
+	WARN_ON(!memcg_wq);
+
 	for_each_possible_cpu(cpu) {
 		INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
 			  drain_local_memcg_stock);
-- 
2.51.0