Linux-ARM-Kernel Archive on lore.kernel.org
From: James Morse <james.morse@arm.com>
To: Zeng Heng <zengheng4@huawei.com>,
	ben.horgan@arm.com, Dave.Martin@arm.com,
	tan.shaopeng@jp.fujitsu.com, reinette.chatre@intel.com,
	fenghuay@nvidia.com, tglx@kernel.org, will@kernel.org,
	hpa@zytor.com, bp@alien8.de, babu.moger@amd.com,
	dave.hansen@linux.intel.com, mingo@redhat.com,
	tony.luck@intel.com, gshan@redhat.com, catalin.marinas@arm.com
Cc: linux-arm-kernel@lists.infradead.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, wangkefeng.wang@huawei.com
Subject: Re: [PATCH v8 next 02/10] arm_mpam: Add intPARTID and reqPARTID support for Narrow-PARTID feature
Date: Thu, 14 May 2026 18:06:34 +0100	[thread overview]
Message-ID: <763d19a5-9b1a-4243-a7d7-9484c42c32c7@arm.com> (raw)
In-Reply-To: <20260413085405.1166412-3-zengheng4@huawei.com>

Hi Zeng,

On 13/04/2026 09:53, Zeng Heng wrote:
> Introduce Narrow-PARTID (partid_nrw) feature support, which enables
> many-to-one mapping of request PARTIDs (reqPARTID) to internal PARTIDs
> (intPARTID). This expands monitoring capability by allowing a single
> control group to track more task types through multiple reqPARTIDs
> per intPARTID, bypassing the PMG limit to some extent.
> 
> intPARTID: Internal PARTID used for control group configuration.
> Configurations are synchronized to all reqPARTIDs mapped to the same
> intPARTID. Count is indicated by MPAMF_PARTID_NRW_IDR.INTPARTID_MAX, or
> defaults to PARTID count if Narrow-PARTID is unsupported.
> 
> reqPARTID: Request PARTID used to expand monitoring groups. Enables
> a single control group to monitor more task types by multiple reqPARTIDs
> within one intPARTID, overcoming the PMG count limitation.
> 
> For systems with homogeneous MSCs (all supporting Narrow-PARTID), the
> driver exposes the full reqPARTID range directly. For heterogeneous
> systems where some MSCs lack Narrow-PARTID support, the driver utilizes
> PARTIDs beyond the intPARTID range as reqPARTIDs to expand monitoring
> capacity.
> 
> So, the numbers of control groups and monitoring groups are calculated as:
> 
>   n = min(intPARTID, PARTID)  /* the number of control groups */
>   l = min(reqPARTID, PARTID)  /* the number of monitoring groups */
>   m = l // n                  /* monitoring groups per control group */
> 
> Where:
> 
>   intPARTID: intPARTIDs on Narrow-PARTID-capable MSCs
>   reqPARTID: reqPARTIDs on Narrow-PARTID-capable MSCs
>   PARTID:    PARTIDs on non-Narrow-PARTID-capable MSCs
> 
> Example: L3 cache (256 PARTIDs, without Narrow-PARTID feature) +
>          MATA (32 intPARTIDs, 256 reqPARTIDs):
> 
>   n = min( 32, 256) =  32 intPARTIDs
>   l = min(256, 256) = 256 reqPARTIDs
>   m = 256 / 32 = 8 reqPARTIDs per intPARTID
> 
> Implementation notes:
>   * Handle mixed MSC systems (some support Narrow-PARTID, some don't) by
>     taking minimum number of intPARTIDs across all MSCs.
>   * resctrl_arch_get_num_closid() now returns the number of intPARTIDs
>     (was PARTID).

What you're doing here is making intPARTID the fundamental unit in MPAM. I
don't think we should do this, as it's not true in the architecture: narrowing
is a single, optional feature. We have platforms that don't support narrowing
at all, so having to think in terms of "what is the intPARTID limit on this
platform that doesn't have the feature?" is confusing.
Narrowing doesn't affect the monitoring, so you can't just string-replace
PARTID with intPARTID throughout the driver.

The resctrl glue code is going to have to know about narrowing, as it must
either duplicate the control values for aliasing PARTIDs, or remap them using
narrowing.

I'd prefer it if the mpam_devices code exposed an API for narrowing that
makes sense given the MPAM architecture (e.g. narrowing is optional!).
Whatever resctrl needs should then be built on top of that.


Currently the max PARTID/PMG values are dealt with separately, as we need a
global value at the end. It was done that way because it could be a patch on
its own, to try and keep each patch reviewable.
But it should probably have been handled the same way we handle all other MSC
features - stash them in struct mpam_msc_ris at hw_probe time, and combine them
up to the class level, handling differing values with __props_mismatch().
The global state that includes the requestors can be created after that point.

I think we should do that now: add intpartid_max to struct mpam_props so that
the resctrl glue code can find intpartid_max per class. I don't think it makes
sense as a global property. (We should move partid_max and pmg_max at the same
time.)

... I think moving partid_max would just be cleanup. The resctrl glue code needs to
know the maximum PARTID for monitoring, but I think this would always be the global
PARTID max value.


> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
> index 41b14344b16f..cf067bf5092e 100644
> --- a/drivers/resctrl/mpam_devices.c
> +++ b/drivers/resctrl/mpam_devices.c
> @@ -63,6 +63,7 @@ static DEFINE_MUTEX(mpam_cpuhp_state_lock);
>   * Generating traffic outside this range will result in screaming interrupts.
>   */
>  u16 mpam_partid_max;
> +u16 mpam_intpartid_max;
>  u8 mpam_pmg_max;
>  static bool partid_max_init, partid_max_published;
>  static DEFINE_SPINLOCK(partid_max_lock);

The global properties are supposed to mean "if you generate any traffic outside
this range, you'll get an out of range error causing mpam_disable()".

This doesn't hold for intPARTID.

In my hypothetical system from the cover letter: {
  64 PARTID at the L3
  64 PARTID and narrowing to 16 at the SLC
  64 PARTID and narrowing to 32 at the memory-controller

  The resctrl glue code could ignore the SLC if it wanted to use 32 PARTID, and just
  duplicate the aliasing controls at L3. (and remove the non-aliasing controls)
}


> @@ -2743,9 +2749,13 @@ static void mpam_enable_once(void)
>  	mpam_register_cpuhp_callbacks(mpam_cpu_online, mpam_cpu_offline,
>  				      "mpam:online");
>  
> -	/* Use printk() to avoid the pr_fmt adding the function name. */
> -	printk(KERN_INFO "MPAM enabled with %u PARTIDs and %u PMGs\n",
> -	       mpam_partid_max + 1, mpam_pmg_max + 1);
> +	if (mpam_partid_max == mpam_intpartid_max)
> +		/* Use printk() to avoid the pr_fmt adding the function name. */
> +		printk(KERN_INFO "MPAM enabled with %u PARTIDs and %u PMGs\n",
> +		       mpam_partid_max + 1, mpam_pmg_max + 1);
> +	else
> +		printk(KERN_INFO "MPAM enabled with %u reqPARTIDs, %u intPARTIDs and %u PMGs\n",
> +		       mpam_partid_max + 1, mpam_intpartid_max + 1, mpam_pmg_max + 1);

intPARTID is not a global property. It's also an optional feature, so it's
problematic to print this on platforms that don't have the feature.


>  }
>  
>  static void mpam_reset_component_locked(struct mpam_component *comp)
>  
>  u32 resctrl_arch_system_num_rmid_idx(void)


Thanks,

James


Thread overview: 21+ messages
2026-04-13  8:53 [PATCH v8 next 00/10] arm_mpam: Introduce Narrow-PARTID feature Zeng Heng
2026-04-13  8:53 ` [PATCH v8 next 01/10] fs/resctrl: Fix MPAM Partid parsing errors by preserving CDP state during umount Zeng Heng
2026-05-14 17:06   ` James Morse
2026-04-13  8:53 ` [PATCH v8 next 02/10] arm_mpam: Add intPARTID and reqPARTID support for Narrow-PARTID feature Zeng Heng
2026-05-14 17:06   ` James Morse [this message]
2026-04-13  8:53 ` [PATCH v8 next 03/10] arm_mpam: Disable reqPARTID expansion when Narrow-PARTID is unavailable Zeng Heng
2026-05-14 17:06   ` James Morse
2026-04-13  8:53 ` [PATCH v8 next 04/10] arm_mpam: Refactor rmid to reqPARTID/PMG mapping Zeng Heng
2026-05-14 17:07   ` James Morse
2026-04-13  8:54 ` [PATCH v8 next 05/10] arm_mpam: Propagate control group config to sub-monitoring groups Zeng Heng
2026-04-13  8:54 ` [PATCH v8 next 06/10] arm_mpam: Add boot parameter to limit mpam_intpartid_max Zeng Heng
2026-04-13  8:54 ` [PATCH v8 next 07/10] fs/resctrl: Add rmid_entry state helpers Zeng Heng
2026-04-13  8:54 ` [PATCH v8 next 08/10] arm_mpam: Implement dynamic reqPARTID allocation for monitoring groups Zeng Heng
2026-04-13  8:54 ` [PATCH v8 next 09/10] fs/resctrl: Wire up rmid expansion and reclaim functions Zeng Heng
2026-04-13  8:54 ` [PATCH v8 next 10/10] arm_mpam: Add mpam_sync_config() for dynamic rmid expansion Zeng Heng
2026-04-16  6:29 ` [PATCH v8 next 00/10] arm_mpam: Introduce Narrow-PARTID feature Shaopeng Tan (Fujitsu)
2026-04-20  7:31 ` Zeng Heng
2026-04-28  4:20   ` Shaopeng Tan (Fujitsu)
2026-04-29  9:47     ` Zeng Heng
2026-04-29 10:59 ` Zeng Heng
2026-05-14 17:06 ` James Morse
