Subject: Re: [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes
From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Yosry Ahmed
Cc: tj@kernel.org, cgroups@vger.kernel.org, shakeel.butt@linux.dev,
 hannes@cmpxchg.org, lizefan.x@bytedance.com, longman@redhat.com,
 kernel-team@cloudflare.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Fri, 19 Jul 2024 17:07:01 +0200
Message-ID: <2794de3c-673b-45ea-8897-df1ada9c6717@kernel.org>
References: <172070450139.2992819.13210624094367257881.stgit@firesoul>

On 17/07/2024 02.30, Yosry Ahmed wrote:
> On Thu, Jul 11, 2024 at 6:28 AM Jesper Dangaard Brouer wrote:
>>
>> Avoid lock contention on the global cgroup rstat lock caused by kswapd
>> starting on all NUMA nodes simultaneously. At Cloudflare, we observed
>> massive issues due to kswapd and the specific mem_cgroup_flush_stats()
>> call inlined in shrink_node, which takes the rstat lock.
>>
>> On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
>> we noted severe lock contention on the rstat lock. This contention
>> causes 12 CPUs to waste cycles spinning every time kswapd runs.
>> Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we are
>> burning an average of 20,000 CPU cores fleet-wide on kswapd, primarily
>> due to spinning on the rstat lock.
>>
>> To help reviewers follow the code: __alloc_pages_slowpath calls
>> wake_all_kswapds, causing all kswapdN threads to wake up
>> simultaneously. Each kswapd thread invokes shrink_node (via
>> balance_pgdat), triggering the cgroup rstat flush operation as part of
>> its work. This results in kernel self-induced rstat lock contention by
>> waking up all kswapd threads simultaneously. Leveraging this detail:
>> balance_pgdat() has a NULL value in target_mem_cgroup, which causes
>> mem_cgroup_flush_stats() to flush with root_mem_cgroup.
>>
>> To avoid this kind of thundering herd problem, the kernel previously
>> had a "stats_flush_ongoing" concept, but this was removed as part of
>> commit 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing").
>> This patch reintroduces and generalizes the concept to apply to all
>> users of cgroup rstat, not just memcg.
>>
>> If there is an ongoing rstat flush, and the current cgroup is a
>> descendant, then it is unnecessary to do the flush. For callers to
>> still see updated stats, wait for the ongoing flusher to complete
>> before returning, but add a timeout, as stats are already inaccurate
>> given that updaters keep running.
>>
>> Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
>> Signed-off-by: Jesper Dangaard Brouer
>
> Thanks for working on this, Jesper! I love the data you collected here!
>
> I think the commit subject and message should be changed to better
> describe the patch. This is a patch that exclusively modifies cgroup
> code, yet the subject is about kswapd. This change affects all users
> of rstat flushing.
>
> I think a better subject would be:
> "cgroup/rstat: avoid flushing if there is an ongoing overlapping
> flush" or similar.
>

Took this for V8.
https://lore.kernel.org/all/172139415725.3084888.13770938453137383953.stgit@firesoul

> The commit message should first describe the cgroup change, and then
> use kswapd as a brief example/illustration of how the problem
> manifests in practice. You should also include a brief summary of the
> numbers you collected from prod.
>

Updated the description in V8.
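In case it helps reviewers play with the idea outside the kernel, here
is a minimal userspace model of the skip-and-wait scheme. This is an
illustration only, not kernel code: pthreads stand in for the per-node
kswapd kthreads, a mutex stands in for cgroup_rstat_lock, and a
condition variable plus generation counter stands in for the flush_done
completion. All names in this sketch are invented, and it only models
the root-flush case (no descendant check, no lost-race retry):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define NR_THREADS 12	/* ~ one kswapd kthread per NUMA node */

static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t done_mtx   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cv    = PTHREAD_COND_INITIALIZER;
static atomic_bool flush_ongoing;	/* ~ cgrp_rstat_ongoing_flusher */
static unsigned long flush_gen;		/* bumped when a flush completes */

static void do_flush(int id)
{
	struct timespec t = { .tv_nsec = 50 * 1000 * 1000 };

	printf("thread %d: doing the flush\n", id);
	nanosleep(&t, NULL);	/* pretend the flush takes 50 ms */
}

static void flush_or_wait(int id)
{
	if (atomic_load(&flush_ongoing)) {
		/*
		 * Someone is already flushing what we care about: skip
		 * the work and wait, bounded like MAX_WAIT, for them to
		 * finish, as wait_for_completion_*_timeout() does.
		 */
		struct timespec deadline;
		unsigned long gen;

		pthread_mutex_lock(&done_mtx);
		gen = flush_gen;
		clock_gettime(CLOCK_REALTIME, &deadline);
		deadline.tv_sec += 1;
		while (flush_gen == gen &&
		       pthread_cond_timedwait(&done_cv, &done_mtx,
					      &deadline) == 0)
			;	/* loop handles spurious wakeups */
		pthread_mutex_unlock(&done_mtx);
		printf("thread %d: skipped, reused ongoing flush\n", id);
		return;
	}

	pthread_mutex_lock(&flush_lock);	/* ~ cgroup_rstat_lock */
	atomic_store(&flush_ongoing, true);
	do_flush(id);
	atomic_store(&flush_ongoing, false);

	pthread_mutex_lock(&done_mtx);
	flush_gen++;
	pthread_cond_broadcast(&done_cv);	/* ~ complete_all() */
	pthread_mutex_unlock(&done_mtx);
	pthread_mutex_unlock(&flush_lock);
}

static void *worker(void *arg)
{
	flush_or_wait((int)(long)arg);
	return NULL;
}

int main(void)
{
	pthread_t th[NR_THREADS];
	long i;

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&th[i], NULL, worker, (void *)i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(th[i], NULL);
	return 0;
}

Like the patch, the waiters are best-effort: they give up after a
bounded timeout rather than waiting forever for the flusher.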
>> ---
>> V6: https://lore.kernel.org/all/172052399087.2357901.4955042377343593447.stgit@firesoul/
>> V5: https://lore.kernel.org/all/171956951930.1897969.8709279863947931285.stgit@firesoul/
>> V4: https://lore.kernel.org/all/171952312320.1810550.13209360603489797077.stgit@firesoul/
>> V3: https://lore.kernel.org/all/171943668946.1638606.1320095353103578332.stgit@firesoul/
>> V2: https://lore.kernel.org/all/171923011608.1500238.3591002573732683639.stgit@firesoul/
>> V1: https://lore.kernel.org/all/171898037079.1222367.13467317484793748519.stgit@firesoul/
>> RFC: https://lore.kernel.org/all/171895533185.1084853.3033751561302228252.stgit@firesoul/
>>
>>  include/linux/cgroup-defs.h |    2 +
>>  kernel/cgroup/rstat.c       |   95 ++++++++++++++++++++++++++++++++++++++-----
>>  2 files changed, 85 insertions(+), 12 deletions(-)
>>
>> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
[...]
>> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
>> index fb8b49437573..fe2a81a310bb 100644
>> --- a/kernel/cgroup/rstat.c
>> +++ b/kernel/cgroup/rstat.c
[...]
>>  static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
>> @@ -299,6 +316,53 @@ static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
>>         spin_unlock_irq(&cgroup_rstat_lock);
>>  }
>>
>> +#define MAX_WAIT       msecs_to_jiffies(100)
>> +/* Trylock helper that also checks for on ongoing flusher */
>> +static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
>> +{
>> +       struct cgroup *ongoing;
>> +       bool locked;
>> +
>> +       /* Check if ongoing flusher is already taking care of this, if
>
> nit: I think commonly the comment would start on a new line after /*.
> We use this comment style in networking code.

I've updated it to follow this subsystem's style.

>> +        * we are a descendant skip work, but wait for ongoing flusher
>> +        * to complete work.
>> +        */
>> +retry:
>> +       ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
>> +       if (ongoing && cgroup_is_descendant(cgrp, ongoing)) {
>> +               wait_for_completion_interruptible_timeout(
>> +                       &ongoing->flush_done, MAX_WAIT);
>> +               /* TODO: Add tracepoint here */
>> +               return false;
>> +       }
>> +
>> +       locked = __cgroup_rstat_trylock(cgrp, -1);
>> +       if (!locked) {
>> +               /* Contended: Handle loosing race for ongoing flusher */
>
> nit: losing
>

Thanks for catching this subtle wording issue.

>> +               if (!ongoing && READ_ONCE(cgrp_rstat_ongoing_flusher))
>> +                       goto retry;
>> +
>> +               __cgroup_rstat_lock(cgrp, -1, false);
>> +       }
>> +       /* Obtained lock, record this cgrp as the ongoing flusher */
>
> Do we want a comment here to explain why there could be an existing
> ongoing flusher (i.e. due to multiple ongoing flushers)? I think it's
> not super obvious.

Extended this in V8.

>
>> +       ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
>> +       if (!ongoing) {
>> +               reinit_completion(&cgrp->flush_done);
>> +               WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);
>> +       }
>> +       return true; /* locked */
>
> Would it be better to explain the return value of the function in the
> comment above it?
>

Fixed this in V8.
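To make the return-value contract concrete, the intended caller pairing
looks roughly like this (a sketch only, not the exact diff; the actual
cgroup_rstat_flush() hunks are elided above):

	/* Sketch of a caller of the two new helpers */
	if (!cgroup_rstat_trylock_flusher(cgrp))
		return;	/* ongoing flusher covered cgrp; skip the flush */

	cgroup_rstat_flush_locked(cgrp);	/* the actual flush work */

	cgroup_rstat_unlock_flusher(cgrp);	/* clears ongoing flusher
						 * and calls complete_all() */

I.e. a false return means an overlapping flusher already covered this
subtree, so the caller skips the flush entirely; a true return means
the lock is held and must be released via cgroup_rstat_unlock_flusher().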
>> +}
>> +
>> +static void cgroup_rstat_unlock_flusher(struct cgroup *cgrp)
>> +{
>> +       /* Detect if we are the ongoing flusher */
>
> I think this is a bit obvious.
>

True, removed comment.

>> +       if (cgrp == READ_ONCE(cgrp_rstat_ongoing_flusher)) {
>> +               WRITE_ONCE(cgrp_rstat_ongoing_flusher, NULL);
>> +               complete_all(&cgrp->flush_done);
>> +       }
>> +       __cgroup_rstat_unlock(cgrp, -1);
>> +}
>> +
[...]

Thanks for going through and commenting on the code! :-)
--Jesper