From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 16 Oct 2023 16:43:30 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: "Huang, Ying"
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
	Sudeep Holla, Andrew Morton, Vlastimil Babka, David Hildenbrand,
	Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
	Matthew Wilcox, Christoph Lameter
Subject: Re: [PATCH 02/10] cacheinfo: calculate per-CPU data cache size
Message-ID: <20231016154330.e66vs4fg75brh6gz@techsingularity.net>
References: <20230920061856.257597-1-ying.huang@intel.com>
	<20230920061856.257597-3-ying.huang@intel.com>
	<20231011122027.pw3uw32sdxxqjsrq@techsingularity.net>
	<87h6mwf3gf.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<20231012125253.fpeehd6362c5v2sj@techsingularity.net>
	<87v8bcdly7.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<20231012152250.xuu5mvghwtonpvp2@techsingularity.net>
	<87pm1jcjas.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <87pm1jcjas.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Fri, Oct 13, 2023 at 11:06:51AM +0800, Huang, Ying wrote:
> Mel Gorman writes:
>
> > On Thu, Oct 12, 2023 at 09:12:00PM +0800, Huang, Ying wrote:
> >> Mel Gorman writes:
> >>
> >> > On Thu, Oct 12, 2023 at 08:08:32PM +0800, Huang, Ying wrote:
> >> >> Mel Gorman writes:
> >> >>
> >> >> > On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote:
> >> >> >> Per-CPU data cache size is useful information. For example, it can be
> >> >> >> used to determine per-CPU cache size. So, in this patch, the data
> >> >> >> cache size for each CPU is calculated via data_cache_size /
> >> >> >> shared_cpu_weight.
> >> >> >>
> >> >> >> A brute-force algorithm to iterate all online CPUs is used to avoid
> >> >> >> to allocate an extra cpumask, especially in offline callback.
> >> >> >>
> >> >> >> Signed-off-by: "Huang, Ying"
> >> >> >
> >> >> > It's not necessarily relevant to the patch, but at least the scheduler
> >> >> > also stores some per-cpu topology information such as sd_llc_size -- the
> >> >> > number of CPUs sharing the same last-level-cache as this CPU. It may be
> >> >> > worth unifying this at some point if it's common that per-cpu
> >> >> > information is too fine and per-zone or per-node information is too
> >> >> > coarse.
> >> >> > This would be particularly true when considering locking
> >> >> > granularity,
> >> >> >
> >> >> >> Cc: Sudeep Holla
> >> >> >> Cc: Andrew Morton
> >> >> >> Cc: Mel Gorman
> >> >> >> Cc: Vlastimil Babka
> >> >> >> Cc: David Hildenbrand
> >> >> >> Cc: Johannes Weiner
> >> >> >> Cc: Dave Hansen
> >> >> >> Cc: Michal Hocko
> >> >> >> Cc: Pavel Tatashin
> >> >> >> Cc: Matthew Wilcox
> >> >> >> Cc: Christoph Lameter
> >> >> >> ---
> >> >> >>  drivers/base/cacheinfo.c  | 42 ++++++++++++++++++++++++++++++++++++++-
> >> >> >>  include/linux/cacheinfo.h |  1 +
> >> >> >>  2 files changed, 42 insertions(+), 1 deletion(-)
> >> >> >>
> >> >> >> diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
> >> >> >> index cbae8be1fe52..3e8951a3fbab 100644
> >> >> >> --- a/drivers/base/cacheinfo.c
> >> >> >> +++ b/drivers/base/cacheinfo.c
> >> >> >> @@ -898,6 +898,41 @@ static int cache_add_dev(unsigned int cpu)
> >> >> >>  	return rc;
> >> >> >>  }
> >> >> >>
> >> >> >> +static void update_data_cache_size_cpu(unsigned int cpu)
> >> >> >> +{
> >> >> >> +	struct cpu_cacheinfo *ci;
> >> >> >> +	struct cacheinfo *leaf;
> >> >> >> +	unsigned int i, nr_shared;
> >> >> >> +	unsigned int size_data = 0;
> >> >> >> +
> >> >> >> +	if (!per_cpu_cacheinfo(cpu))
> >> >> >> +		return;
> >> >> >> +
> >> >> >> +	ci = ci_cacheinfo(cpu);
> >> >> >> +	for (i = 0; i < cache_leaves(cpu); i++) {
> >> >> >> +		leaf = per_cpu_cacheinfo_idx(cpu, i);
> >> >> >> +		if (leaf->type != CACHE_TYPE_DATA &&
> >> >> >> +		    leaf->type != CACHE_TYPE_UNIFIED)
> >> >> >> +			continue;
> >> >> >> +		nr_shared = cpumask_weight(&leaf->shared_cpu_map);
> >> >> >> +		if (!nr_shared)
> >> >> >> +			continue;
> >> >> >> +		size_data += leaf->size / nr_shared;
> >> >> >> +	}
> >> >> >> +	ci->size_data = size_data;
> >> >> >> +}
> >> >> >
> >> >> > This needs comments.
> >> >> >
> >> >> > It would be nice to add a comment on top describing the limitation of
> >> >> > CACHE_TYPE_UNIFIED here in the context of
> >> >> > update_data_cache_size_cpu().
> >> >>
> >> >> Sure. Will do that.
> >> >>
> >> >
> >> > Thanks.
> >> >
> >> >> > The L2 cache could be unified but much smaller than a L3 or other
> >> >> > last-level-cache. It's not clear from the code what level of cache is being
> >> >> > used due to a lack of familiarity of the cpu_cacheinfo code but size_data
> >> >> > is not the size of a cache, it appears to be the share of a cache a CPU
> >> >> > would have under ideal circumstances.
> >> >>
> >> >> Yes. And it isn't for one specific level of cache. It's sum of per-CPU
> >> >> shares of all levels of cache. But the calculation is inaccurate. More
> >> >> details are in the below reply.
> >> >>
> >> >> > However, as it appears to also be
> >> >> > iterating hierarchy then this may not be accurate. Caches may or may not
> >> >> > allow data to be duplicated between levels so the value may be inaccurate.
> >> >>
> >> >> Thank you very much for pointing this out! The cache can be inclusive
> >> >> or not. So, we cannot calculate the per-CPU slice of all-level caches
> >> >> via adding them together blindly. I will change this in a follow-on
> >> >> patch.
> >> >>
> >> >
> >> > Please do, I would strongly suggest basing this on LLC only because it's
> >> > the only value you can be sure of. This change is the only change that may
> >> > warrant a respin of the series as the history will be somewhat confusing
> >> > otherwise.
> >>
> >> I am still checking whether it's possible to get cache inclusive
> >> information via cpuid.
> >>
> >
> > cpuid may be x86-specific so that potentially leads to different behaviours
> > on different architectures.
> >
> >> If there's no reliable way to do that. We can use the max value of
> >> per-CPU share of each level of cache. For inclusive cache, that will be
> >> the value of LLC.
> >> For non-inclusive cache, the value will be more
> >> accurate. For example, on Intel Sapphire Rapids, the L2 cache is 2 MB
> >> per core, while LLC is 1.875 MB per core according to [1].
> >>
> >
> > Be that as it may, it still opens the possibility of significantly different
> > behaviour depending on the CPU family. I would strongly recommend that you
> > start with LLC only because LLC is also the topology level of interest used
> > by the scheduler and it's information that is generally available. Trying
> > to get accurate information on every level and the complexity of dealing
> > with inclusive vs exclusive cache or write-back vs write-through should
> > be a separate patch, with separate justification and notes on how it can
> > lead to behaviour specific to the CPU family or architecture.
>
> IMHO, we should try to optimize for as many CPUs as possible. The size
> of the per-CPU (HW thread for SMT) slice of LLC of latest Intel server
> CPUs is as follows,
>
> Icelake: 0.75 MB
> Sapphire Rapids: 0.9375 MB
>
> While pcp->batch is 63 * 4 / 1024 = 0.2461 MB.
>
> In [03/10], only if "per_cpu_cache_slice > 4 * pcp->batch", we will cache
> pcp->batch before draining the PCP. This makes the optimization
> unavailable for a significant portion of the server CPUs.
>
> In theory, if "per_cpu_cache_slice > 2 * pcp->batch", we can reuse
> cache-hot pages between CPUs. So, if we change the condition to
> "per_cpu_cache_slice > 3 * pcp->batch", I think that we are still safe.
>
> As for other CPUs, according to [2], AMD CPUs have larger per-CPU LLC.
> So, it's OK for them. ARM CPUs has much smaller per-CPU LLC, so some
> further optimization is needed.
>
> [2] https://www.anandtech.com/show/16594/intel-3rd-gen-xeon-scalable-review/2
>
> So, I suggest to use "per_cpu_cache_slice > 3 * pcp->batch" in [03/10],
> and use LLC in this patch [02/10]. Then, we can optimize the per-CPU
> slice of cache calculation in the follow-up patches.
>

I'm ok with adjusting the thresholds to adapt to using LLC only because at
least it'll be consistent across CPU architectures and families. Dealing
with the potentially different cache characteristics at each level or even
being able to discover them is just unnecessarily complicated. It gets
even worse if the mapping changes. For example, if L1 was direct mapped,
L2 index mapped and L3 fully associative then it's not even meaningful to
say that a CPU has a meaningful slice size as cache coloring side-effects
mess everything up.

-- 
Mel Gorman
SUSE Labs