From mboxrd@z Thu Jan 1 00:00:00 1970
From: Frederic Weisbecker <frederic@kernel.org>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton, Bjorn Helgaas,
	Catalin Marinas, Danilo Krummrich, David S. Miller, Eric Dumazet,
	Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar, Jakub Kicinski,
	Jens Axboe, Johannes Weiner, Lai Jiangshan, Marco Crivellari,
	Michal Hocko, Muchun Song, Paolo Abeni, Peter Zijlstra, Phil Auld,
	Rafael J. Wysocki, Roman Gushchin, Shakeel Butt, Simon Horman,
	Tejun Heo, Thomas Gleixner, Vlastimil Babka, Waiman Long,
	Will Deacon, cgroups@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-block@vger.kernel.org,
	linux-mm@kvack.org, linux-pci@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 03/33] memcg: Prepare to protect against concurrent isolated cpuset change
Date: Mon, 13 Oct 2025 22:31:16 +0200
Message-ID: <20251013203146.10162-4-frederic@kernel.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20251013203146.10162-1-frederic@kernel.org>
References: <20251013203146.10162-1-frederic@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The HK_TYPE_DOMAIN housekeeping cpumask will soon be made modifiable at
runtime.
In order to synchronize against the memcg workqueue and make sure that no
asynchronous draining is pending or executing on a newly isolated CPU, target
and queue a drain work under the same RCU critical section. Whenever
housekeeping updates the HK_TYPE_DOMAIN cpumask, a memcg workqueue flush will
also be issued (in a later change) to make sure that no work remains pending
after a CPU has been made isolated.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 mm/memcontrol.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4deda33625f4..1033e52ab6cf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1971,6 +1971,13 @@ static bool is_memcg_drain_needed(struct memcg_stock_pcp *stock,
 	return flush;
 }
 
+static void schedule_drain_work(int cpu, struct work_struct *work)
+{
+	guard(rcu)();
+	if (!cpu_is_isolated(cpu))
+		schedule_work_on(cpu, work);
+}
+
 /*
  * Drains all per-CPU charge caches for given root_memcg resp. subtree
  * of the hierarchy under it.
@@ -2000,8 +2007,8 @@ void drain_all_stock(struct mem_cgroup *root_memcg)
 				     &memcg_st->flags)) {
 			if (cpu == curcpu)
 				drain_local_memcg_stock(&memcg_st->work);
-			else if (!cpu_is_isolated(cpu))
-				schedule_work_on(cpu, &memcg_st->work);
+			else
+				schedule_drain_work(cpu, &memcg_st->work);
 		}
 
 		if (!test_bit(FLUSHING_CACHED_CHARGE, &obj_st->flags) &&
@@ -2010,8 +2017,8 @@ void drain_all_stock(struct mem_cgroup *root_memcg)
 				     &obj_st->flags)) {
 			if (cpu == curcpu)
 				drain_local_obj_stock(&obj_st->work);
-			else if (!cpu_is_isolated(cpu))
-				schedule_work_on(cpu, &obj_st->work);
+			else
+				schedule_drain_work(cpu, &obj_st->work);
 		}
 	}
 	migrate_enable();
-- 
2.51.0
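
The pairing this patch prepares for can be sketched as follows. This is an
illustration only, not code from the series: the commit log above refers to a
memcg workqueue flush issued by a later change, and the names
housekeeping_isolate_cpus_sketch() and memcg_drain_wq below are placeholders
for that future update path and for whichever workqueue the drain works end
up queued on (this patch itself still queues them with schedule_work_on()).

	#include <linux/rcupdate.h>
	#include <linux/workqueue.h>

	/* Hypothetical: workqueue carrying the memcg drain works in this sketch. */
	extern struct workqueue_struct *memcg_drain_wq;

	/* Hypothetical stand-in for the future HK_TYPE_DOMAIN update path. */
	static void housekeeping_isolate_cpus_sketch(void)
	{
		/* ... update the HK_TYPE_DOMAIN housekeeping cpumask ... */

		/*
		 * Wait for every schedule_drain_work() that may still see
		 * the old mask: each caller holds the RCU read lock while
		 * it checks cpu_is_isolated() and queues its work.
		 */
		synchronize_rcu();

		/*
		 * Any drain work targeted at a CPU before it was isolated
		 * is now visibly queued; flush it so nothing remains
		 * pending or running on the newly isolated CPU.
		 */
		flush_workqueue(memcg_drain_wq);
	}

With that ordering, once the flush returns, no memcg drain work can still be
pending or executing on a CPU that the HK_TYPE_DOMAIN update just isolated.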