From: Tony Luck <tony.luck@intel.com>
To: Reinette Chatre <reinette.chatre@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>,
	Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>,
	Peter Newman <peternewman@google.com>,
	James Morse <james.morse@arm.com>,
	Babu Moger <babu.moger@amd.com>,
	Drew Fustini <dfustini@baylibre.com>,
	Dave Martin <Dave.Martin@arm.com>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	patches@lists.linux.dev
Subject: Re: [PATCH v17 8/9] x86/resctrl: Sub NUMA Cluster detection and enable
Date: Mon, 13 May 2024 17:28:26 -0700	[thread overview]
Message-ID: <ZkKwKlo7JlbjUEjr@agluck-desk3> (raw)
In-Reply-To: <fc2de5b4-8a38-4041-9f61-d1bcdf810317@intel.com>

On Mon, May 13, 2024 at 11:53:26AM -0700, Reinette Chatre wrote:
> Hi Tony,
> 
> On 5/13/2024 10:17 AM, Tony Luck wrote:
> > On Fri, May 10, 2024 at 02:24:49PM -0700, Reinette Chatre wrote:
> >> Hi Tony,
> >>
> >> On 5/3/2024 1:33 PM, Tony Luck wrote:
> >>> There isn't a simple hardware bit that indicates whether a CPU is
> >>> running in Sub NUMA Cluster (SNC) mode. Infer the state by comparing
> >>> the ratio of NUMA nodes to L3 cache instances.
> >>>
> >>> When SNC mode is detected, reconfigure the RMID counters by updating
> >>> the MSR_RMID_SNC_CONFIG MSR on each socket as CPUs are seen.
> >>>
> >>> Clearing bit zero of the MSR divides the RMIDs and renumbers the ones
> >>> on the second SNC node to start from zero.
> >>>
> >>> Signed-off-by: Tony Luck <tony.luck@intel.com>
> >>> ---
> >>>  arch/x86/include/asm/msr-index.h   |   1 +
> >>>  arch/x86/kernel/cpu/resctrl/core.c | 119 +++++++++++++++++++++++++++++
> >>>  2 files changed, 120 insertions(+)
> >>>
> >>> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> >>> index e72c2b872957..ce54a1ffe1e5 100644
> >>> --- a/arch/x86/include/asm/msr-index.h
> >>> +++ b/arch/x86/include/asm/msr-index.h
> >>> @@ -1165,6 +1165,7 @@
> >>>  #define MSR_IA32_QM_CTR			0xc8e
> >>>  #define MSR_IA32_PQR_ASSOC		0xc8f
> >>>  #define MSR_IA32_L3_CBM_BASE		0xc90
> >>> +#define MSR_RMID_SNC_CONFIG		0xca0
> >>>  #define MSR_IA32_L2_CBM_BASE		0xd10
> >>>  #define MSR_IA32_MBA_THRTL_BASE		0xd50
> >>>  
> >>> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
> >>> index a949e69308cd..6a1727ea1dfe 100644
> >>> --- a/arch/x86/kernel/cpu/resctrl/core.c
> >>> +++ b/arch/x86/kernel/cpu/resctrl/core.c
> >>> @@ -21,7 +21,9 @@
> >>>  #include <linux/err.h>
> >>>  #include <linux/cacheinfo.h>
> >>>  #include <linux/cpuhotplug.h>
> >>> +#include <linux/mod_devicetable.h>
> >>>  
> >>> +#include <asm/cpu_device_id.h>
> >>>  #include <asm/intel-family.h>
> >>>  #include <asm/resctrl.h>
> >>>  #include "internal.h"
> >>> @@ -746,11 +748,42 @@ static void clear_closid_rmid(int cpu)
> >>>  	      RESCTRL_RESERVED_CLOSID);
> >>>  }
> >>>  
> >>> +/*
> >>> + * The power-on reset value of MSR_RMID_SNC_CONFIG is 0x1
> >>> + * which indicates that RMIDs are configured in legacy mode.
> >>> + * This mode is incompatible with Linux resctrl semantics
> >>> + * as RMIDs are partitioned between SNC nodes, which requires
> >>> + * a user to know which RMID is allocated to a task.
> >>> + * Clearing bit 0 reconfigures the RMID counters for use
> >>> + * in Sub NUMA Cluster mode. This mode is better for Linux.
> >>> + * The RMID space is divided between all SNC nodes with the
> >>> + * RMIDs renumbered to start from zero in each node when
> >>> + * counting operations from tasks. Code to read the counters
> >>> + * must adjust RMID counter numbers based on SNC node. See
> >>> + * __rmid_read() for code that does this.
> >>> + */
> >>> +static void snc_remap_rmids(int cpu)
> >>> +{
> >>> +	u64 val;
> >>> +
> >>> +	/* Only need to enable once per package. */
> >>> +	if (cpumask_first(topology_core_cpumask(cpu)) != cpu)
> >>> +		return;
> >>> +
> >>> +	rdmsrl(MSR_RMID_SNC_CONFIG, val);
> >>> +	val &= ~BIT_ULL(0);
> >>> +	wrmsrl(MSR_RMID_SNC_CONFIG, val);
> >>> +}
> >>> +
> >>>  static int resctrl_arch_online_cpu(unsigned int cpu)
> >>>  {
> >>>  	struct rdt_resource *r;
> >>>  
> >>>  	mutex_lock(&domain_list_lock);
> >>> +
> >>> +	if (snc_nodes_per_l3_cache > 1)
> >>> +		snc_remap_rmids(cpu);
> >>> +
> >>>  	for_each_capable_rdt_resource(r)
> >>>  		domain_add_cpu(cpu, r);
> >>>  	mutex_unlock(&domain_list_lock);
> >>> @@ -990,11 +1023,97 @@ static __init bool get_rdt_resources(void)
> >>>  	return (rdt_mon_capable || rdt_alloc_capable);
> >>>  }
> >>>  
> >>> +/* CPU models that support MSR_RMID_SNC_CONFIG */
> >>> +static const struct x86_cpu_id snc_cpu_ids[] __initconst = {
> >>> +	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, 0),
> >>> +	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, 0),
> >>> +	X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, 0),
> >>> +	X86_MATCH_INTEL_FAM6_MODEL(GRANITERAPIDS_X, 0),
> >>> +	X86_MATCH_INTEL_FAM6_MODEL(ATOM_CRESTMONT_X, 0),
> >>> +	{}
> >>> +};
> >>> +
> >>> +/*
> >>> + * There isn't a simple hardware bit that indicates whether a CPU is running
> >>> + * in Sub NUMA Cluster (SNC) mode. Infer the state by comparing the
> >>> + * ratio of NUMA nodes to L3 cache instances.
> >>> + * It is not possible to accurately determine SNC state if the system is
> >>> + * booted with a maxcpus=N parameter. That distorts the ratio of SNC nodes
> >>> + * to L3 caches. It will be OK if the system is booted with hyperthreading
> >>> + * disabled (since this doesn't affect the ratio).
> >>> + */
> >>> +static __init int snc_get_config(void)
> >>> +{
> >>> +	unsigned long *node_caches;
> >>> +	int mem_only_nodes = 0;
> >>> +	int cpu, node, ret;
> >>> +	int num_l3_caches;
> >>> +	int cache_id;
> >>> +
> >>> +	if (!x86_match_cpu(snc_cpu_ids))
> >>> +		return 1;
> >>> +
> >>> +	node_caches = bitmap_zalloc(num_possible_cpus(), GFP_KERNEL);
> >>> +	if (!node_caches)
> >>> +		return 1;
> >>> +
> >>> +	cpus_read_lock();
> >>> +
> >>> +	if (num_online_cpus() != num_present_cpus())
> >>> +		pr_warn("Some CPUs offline, SNC detection may be incorrect\n");
> >>> +
> >>> +	for_each_node(node) {
> >>> +		cpu = cpumask_first(cpumask_of_node(node));
> >>> +		if (cpu < nr_cpu_ids) {
> >>> +			cache_id = get_cpu_cacheinfo_id(cpu, 3);
> >>> +			if (cache_id != -1)
> >>> +				set_bit(cache_id, node_caches);
> >>> +		} else {
> >>> +			mem_only_nodes++;
> >>> +		}
> >>> +	}
> >>> +	cpus_read_unlock();
> >>> +
> >>> +	num_l3_caches = bitmap_weight(node_caches, num_possible_cpus());
> >>> +	kfree(node_caches);
> >>> +
> >>> +	if (!num_l3_caches)
> >>> +		goto insane;
> >>> +
> >>> +	/* sanity check #1: Number of CPU nodes must be multiple of num_l3_caches */
> >>> +	if ((nr_node_ids - mem_only_nodes) % num_l3_caches)
> >>> +		goto insane;
> >>> +
> >>> +	ret = (nr_node_ids - mem_only_nodes) / num_l3_caches;
> >>> +
> >>> +	/* sanity check #2: Only valid results are 1, 2, 3, 4 */
> >>> +	switch (ret) {
> >>> +	case 1:
> >>> +		break;
> >>> +	case 2:
> >>> +	case 3:
> >>> +	case 4:
> >>> +		pr_info("Sub-NUMA cluster detected with %d nodes per L3 cache\n", ret);
> >>> +		rdt_resources_all[RDT_RESOURCE_L3].r_resctrl.mon_scope = RESCTRL_NODE;
> >>> +		break;
> >>> +	default:
> >>> +		goto insane;
> >>> +	}
> >>> +
> >>> +	return ret;
> >>> +insane:
> >>> +	pr_warn("SNC insanity: CPU nodes = %d num_l3_caches = %d\n",
> >>> +		(nr_node_ids - mem_only_nodes), num_l3_caches);
> >>> +	return 1;
> >>> +}
> >>
> >> I find it confusing how dramatically this SNC detection code changed without
> >> any explanations. This detection seems to match the SNC detection code from v16 but
> >> after v16 you posted a new SNC detection implementation that did SNC detection totally
> >> differently [1] from v16. Instead of keeping with the "new" detection this implements
> >> what was in v16. Could you please help me understand what motivated the different
> >> implementations and why the big differences?
> > 
> > Reinette,
> > 
> > Do you like the detection code in that version? You didn't make any
> > comments about it.
> 
> It was a drop-in replacement for a portion that was not relevant to the
> architecture discussion that I focused on ... hence my surprise that it
> just came and went without any comment.

So it will be back again when I post v18 as it is somewhat simpler
(doesn't rely on allocating a bitmap to count L3 cache instances).

I'll update comments in that patch, in the code, and in the change
log in the cover letter.
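For illustration only (made-up names, userspace C, not the actual kernel
code in either version): once the per-node L3 instances have been counted,
the inference in snc_get_config() reduces to a ratio with two sanity
checks, something like:

```c
/*
 * Illustrative sketch of the SNC inference: the number of SNC nodes
 * per L3 cache is the ratio of CPU-bearing NUMA nodes to distinct L3
 * cache instances, accepted only if it divides evenly and lands in
 * the 1..4 range. Anything else falls back to 1 (no SNC).
 */
static int snc_nodes_per_l3(int cpu_nodes, int num_l3_caches)
{
	int ratio;

	if (num_l3_caches <= 0 || cpu_nodes % num_l3_caches)
		return 1;	/* inconsistent topology: assume no SNC */

	ratio = cpu_nodes / num_l3_caches;

	return (ratio >= 1 && ratio <= 4) ? ratio : 1;
}
```

So a machine with 4 CPU nodes sharing 2 L3 caches infers SNC-2, while 3
nodes over 2 caches fails the divisibility check and is treated as no SNC.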

> > I switched back to the v16 code because that had survived review before
> > and I just wanted to make the modifications to add both per-L3 and
> > per-SNC node monitoring files.
> > 
> > I can pull that into the next iteration if you want.
> 
> It is not clear to me why you switched back and forth between the detection
> algorithms. I expect big changes to be accompanied with explanation of what changed,
> why one is better than the other, or if they are considered "similar", what
> are the pros/cons. Am I missing something so obvious that causes you to think
> the work does not need the explanation I asked your help with?

The change deserved some comments when it suddenly appeared. That was one
of the many issues with my detour from the progression. It disappeared
because I reverted to the previously reviewed version.

> Reinette

-Tony
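[Editorial note: the comment above snc_remap_rmids() in the quoted patch
says counter-reading code must adjust RMID numbers per SNC node. A
hypothetical sketch of that adjustment, with invented names that are not
the kernel's, could look like:]

```c
/*
 * Hypothetical sketch: with SNC enabled the hardware divides the RMID
 * space between SNC nodes and renumbers each node's RMIDs from zero,
 * so a reader such as __rmid_read() must offset the logical RMID by
 * this node's share of the space before querying the counter.
 */
static unsigned int logical_to_hw_rmid(unsigned int rmid,
				       unsigned int snc_node_id,
				       unsigned int num_rmids,
				       unsigned int snc_nodes_per_l3)
{
	return rmid + snc_node_id * (num_rmids / snc_nodes_per_l3);
}
```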

Thread overview: 26+ messages
2024-05-03 20:33 [PATCH v17 0/9] Add support for Sub-NUMA cluster (SNC) systems Tony Luck
2024-05-03 20:33 ` [PATCH v17 1/9] x86/resctrl: Prepare for new domain scope Tony Luck
2024-05-03 20:33 ` [PATCH v17 2/9] x86/resctrl: Prepare to split rdt_domain structure Tony Luck
2024-05-03 20:33 ` [PATCH v17 3/9] x86/resctrl: Prepare for different scope for control/monitor operations Tony Luck
2024-05-03 20:33 ` [PATCH v17 4/9] x86/resctrl: Split the rdt_domain and rdt_hw_domain structures Tony Luck
2024-05-03 20:33 ` [PATCH v17 5/9] x86/resctrl: Add node-scope to the options for feature scope Tony Luck
2024-05-03 20:33 ` [PATCH v17 6/9] x86/resctrl: Introduce snc_nodes_per_l3_cache Tony Luck
2024-05-03 20:33 ` [PATCH v17 7/9] x86/resctrl: Add new monitor files for Sub-NUMA cluster (SNC) monitoring Tony Luck
2024-05-10 21:24   ` Reinette Chatre
2024-05-13 17:05     ` Tony Luck
2024-05-13 18:53       ` Reinette Chatre
2024-05-14  0:21         ` Tony Luck
2024-05-14 15:08           ` Reinette Chatre
2024-05-14 18:26             ` Luck, Tony
2024-05-14 20:30               ` Reinette Chatre
2024-05-14 21:53                 ` Tony Luck
2024-05-15 16:47                   ` Reinette Chatre
2024-05-15 17:23                     ` Tony Luck
2024-05-15 18:48                       ` Reinette Chatre
2024-05-03 20:33 ` [PATCH v17 8/9] x86/resctrl: Sub NUMA Cluster detection and enable Tony Luck
2024-05-10 21:24   ` Reinette Chatre
2024-05-13 17:17     ` Tony Luck
2024-05-13 18:53       ` Reinette Chatre
2024-05-14  0:28         ` Tony Luck [this message]
2024-05-03 20:33 ` [PATCH v17 9/9] x86/resctrl: Update documentation with Sub-NUMA cluster changes Tony Luck
2024-05-14 15:02 ` [PATCH v17 0/9] Add support for Sub-NUMA cluster (SNC) systems Maciej Wieczor-Retman
