* [PATCH v5 01/41] arm64/sysreg: Add MPAMSM_EL1 register
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 02/41] KVM: arm64: Preserve host MPAM configuration when changing traps Ben Horgan
` (42 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
The MPAMSM_EL1 register determines the MPAM configuration for an SMCU
(Streaming Mode Compute Unit). Add the register definition.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
arch/arm64/tools/sysreg | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 9d1c21108057..1287cb1de6f3 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -5172,6 +5172,14 @@ Field 31:16 PARTID_D
Field 15:0 PARTID_I
EndSysreg
+Sysreg MPAMSM_EL1 3 0 10 5 3
+Res0 63:48
+Field 47:40 PMG_D
+Res0 39:32
+Field 31:16 PARTID_D
+Res0 15:0
+EndSysreg
+
Sysreg ISR_EL1 3 0 12 1 0
Res0 63:11
Field 10 IS
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread
* [PATCH v5 02/41] KVM: arm64: Preserve host MPAM configuration when changing traps
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
2026-02-24 17:56 ` [PATCH v5 01/41] arm64/sysreg: Add MPAMSM_EL1 register Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-03-02 17:52 ` Marc Zyngier
2026-02-24 17:56 ` [PATCH v5 03/41] KVM: arm64: Make MPAMSM_EL1 accesses UNDEF Ben Horgan
` (41 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
When KVM enables or disables MPAM traps to EL2 it clears all other bits in
MPAM2_EL2. Notably, it clears the partition IDs (PARTIDs) and performance
monitoring groups (PMGs). Avoid changing these bits in anticipation of
adding support for MPAM in the kernel. Otherwise, on a VHE system with the
host running at EL2, where MPAM2_EL2 and MPAM1_EL1 access the same register,
any attempt to use MPAM to monitor or partition resources for kernel space
would be foiled by running a KVM guest. Additionally, MPAM2_EL2.EnMPAMSM is
always set to 0, which causes MPAMSM_EL1 accesses to always trap. Keep
EnMPAMSM set to 1 when not in a guest so that the kernel can use MPAMSM_EL1.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 2597e8bda867..0b50ddd530f3 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -267,7 +267,8 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
{
- u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
+ u64 clr = MPAM2_EL2_EnMPAMSM;
+ u64 set = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
if (!system_supports_mpam())
return;
@@ -277,18 +278,21 @@ static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
} else {
/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
- r |= MPAM2_EL2_TIDR;
+ set |= MPAM2_EL2_TIDR;
}
- write_sysreg_s(r, SYS_MPAM2_EL2);
+ sysreg_clear_set_s(SYS_MPAM2_EL2, clr, set);
}
static inline void __deactivate_traps_mpam(void)
{
+ u64 clr = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1 | MPAM2_EL2_TIDR;
+ u64 set = MPAM2_EL2_EnMPAMSM;
+
if (!system_supports_mpam())
return;
- write_sysreg_s(0, SYS_MPAM2_EL2);
+ sysreg_clear_set_s(SYS_MPAM2_EL2, clr, set);
if (system_supports_mpam_hcr())
write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
--
2.43.0
* Re: [PATCH v5 02/41] KVM: arm64: Preserve host MPAM configuration when changing traps
2026-02-24 17:56 ` [PATCH v5 02/41] KVM: arm64: Preserve host MPAM configuration when changing traps Ben Horgan
@ 2026-03-02 17:52 ` Marc Zyngier
0 siblings, 0 replies; 75+ messages in thread
From: Marc Zyngier @ 2026-03-02 17:52 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
On Tue, 24 Feb 2026 17:56:41 +0000,
Ben Horgan <ben.horgan@arm.com> wrote:
>
> When kvm enables or disables MPAM traps to EL2 it clears all other bits in
nit: s/kvm/KVM/g
> MPAM2_EL2. Notably, it clears the partition ids (PARTIDs) and performance
> monitoring groups (PMGs). Avoid changing these bits in anticipation of
> adding support for MPAM in the kernel. Otherwise, on a VHE system with the
> host running at EL2 where MPAM2_EL2 and MPAM1_EL1 access the same register,
> any attempt to use MPAM to monitor or partition resources for kernel space
> would be foiled by running a KVM guest. Additionally, MPAM2_EL2.EnMPAMSM is
> always set to 0 which causes MPAMSM_EL1 to always trap. Keep EnMPAMSM set
> to 1 when not in a guest so that the kernel can use MPAMSM_EL1.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> arch/arm64/kvm/hyp/include/hyp/switch.h | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 2597e8bda867..0b50ddd530f3 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -267,7 +267,8 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>
> static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
> {
> - u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
> + u64 clr = MPAM2_EL2_EnMPAMSM;
> + u64 set = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
>
> if (!system_supports_mpam())
> return;
> @@ -277,18 +278,21 @@ static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
> write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
> } else {
> /* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
> - r |= MPAM2_EL2_TIDR;
> + set |= MPAM2_EL2_TIDR;
> }
>
> - write_sysreg_s(r, SYS_MPAM2_EL2);
> + sysreg_clear_set_s(SYS_MPAM2_EL2, clr, set);
> }
>
> static inline void __deactivate_traps_mpam(void)
> {
> + u64 clr = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1 | MPAM2_EL2_TIDR;
> + u64 set = MPAM2_EL2_EnMPAMSM;
> +
> if (!system_supports_mpam())
> return;
>
> - write_sysreg_s(0, SYS_MPAM2_EL2);
> + sysreg_clear_set_s(SYS_MPAM2_EL2, clr, set);
>
> if (system_supports_mpam_hcr())
> write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
Acked-by: Marc Zyngier <maz@kernel.org>
M.
--
Without deviation from the norm, progress is not possible.
* [PATCH v5 03/41] KVM: arm64: Make MPAMSM_EL1 accesses UNDEF
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
2026-02-24 17:56 ` [PATCH v5 01/41] arm64/sysreg: Add MPAMSM_EL1 register Ben Horgan
2026-02-24 17:56 ` [PATCH v5 02/41] KVM: arm64: Preserve host MPAM configuration when changing traps Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-03-02 17:54 ` Marc Zyngier
2026-02-24 17:56 ` [PATCH v5 04/41] arm64: mpam: Context switch the MPAM registers Ben Horgan
` (40 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
The MPAMSM_EL1 register controls the MPAM labeling for an SMCU (Streaming
Mode Compute Unit). As there is no MPAM support in KVM, make sure MPAMSM_EL1
accesses trigger an UNDEF.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Remove paragraph from commit on allowed range of values
---
arch/arm64/kvm/sys_regs.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a7cd0badc20c..2c9a52e66fe0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3373,6 +3373,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_MPAM1_EL1), undef_access },
{ SYS_DESC(SYS_MPAM0_EL1), undef_access },
+ { SYS_DESC(SYS_MPAMSM_EL1), undef_access },
+
{ SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
{ SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },
--
2.43.0
* Re: [PATCH v5 03/41] KVM: arm64: Make MPAMSM_EL1 accesses UNDEF
2026-02-24 17:56 ` [PATCH v5 03/41] KVM: arm64: Make MPAMSM_EL1 accesses UNDEF Ben Horgan
@ 2026-03-02 17:54 ` Marc Zyngier
0 siblings, 0 replies; 75+ messages in thread
From: Marc Zyngier @ 2026-03-02 17:54 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
On Tue, 24 Feb 2026 17:56:42 +0000,
Ben Horgan <ben.horgan@arm.com> wrote:
>
> The MPAMSM_EL1 controls the MPAM labeling for an SMCU, Streaming Mode
nit: The MPAMSM_EL1 *register* controls...
> Compute Unit. As there is on MPAM support in kvm, make sure MPAMSM_EL1
s/on/no/, s/kvm/KVM/.
> accesses trigger an UNDEF.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> Changes since v2:
> Remove paragraph from commit on allowed range of values
> ---
> arch/arm64/kvm/sys_regs.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index a7cd0badc20c..2c9a52e66fe0 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3373,6 +3373,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>
> { SYS_DESC(SYS_MPAM1_EL1), undef_access },
> { SYS_DESC(SYS_MPAM0_EL1), undef_access },
> + { SYS_DESC(SYS_MPAMSM_EL1), undef_access },
> +
> { SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
> { SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },
>
Acked-by: Marc Zyngier <maz@kernel.org>
M.
--
Without deviation from the norm, progress is not possible.
* [PATCH v5 04/41] arm64: mpam: Context switch the MPAM registers
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (2 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 03/41] KVM: arm64: Make MPAMSM_EL1 accesses UNDEF Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 05/41] arm64: mpam: Re-initialise MPAM regs when CPU comes online Ben Horgan
` (39 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
MPAM allows traffic in the SoC to be labeled by the OS. These labels are
used to apply policy in caches and bandwidth regulators, and to monitor
traffic in the SoC. The label is made up of a PARTID and PMG value. The x86
equivalent calls these CLOSID and RMID, but they don't map precisely.
MPAM has two CPU system registers that are used to hold the PARTID and PMG
values that traffic generated at each exception level will use. These can
be set per-task by the resctrl file system. (resctrl is the de facto
interface for controlling this.)
Add a helper to switch these.
struct task_struct's separate CLOSID and RMID fields are insufficient to
implement resctrl using MPAM, as resctrl can change the PARTID (CLOSID) and
PMG (sort of like the RMID) separately. On x86, the RMID is an independent
number, so a race that writes a mismatched CLOSID and RMID into hardware is
benign. On arm64, the PMG bits extend the PARTID
(i.e. partid-5 has a pmg-0 that is not the same as partid-6's pmg-0). In
this case, mismatching the values will 'dirty' a PMG value that resctrl
believes is clean and is not tracking with its 'limbo' code.
To avoid this, the PARTID and PMG are always read and written as a
pair. This requires a new u64 field. struct task_struct has two u32
fields, rmid and closid, for the x86 case, but as we can't use them here
something else is needed. Add this new field, mpam_partid_pmg, to struct
thread_info to avoid adding more architecture-specific code to struct
task_struct. Always use READ_ONCE()/WRITE_ONCE() when accessing this field.
Resctrl allows a per-CPU 'default' value to be set. This overrides the
values when scheduling a task in the default control group, which has
PARTID 0. The way 'code data prioritisation' gets emulated means the
register value for the default group needs to be a variable.
The current system register value is kept in a per-cpu variable to avoid
writing to the system register if the value isn't going to change. Writes
to this register may reset the hardware state for regulating bandwidth.
Finally, there is no reason to context switch these registers unless there
is a driver changing the values in struct task_struct. Hide the whole thing
behind a static key. This also allows the driver to disable MPAM in
response to errors reported by hardware. Move the existing static key to
belong to the arch code, as in the future the MPAM driver may become a
loadable module.
All this should depend on whether there is an MPAM driver, so hide it
behind CONFIG_ARM64_MPAM.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
CC: Amit Singh Tomar <amitsinght@marvell.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
CONFIG_MPAM -> CONFIG_ARM64_MPAM in commit message
Remove extra DECLARE_STATIC_KEY_FALSE
Function name in comment, __mpam_sched_in() -> mpam_thread_switch()
Remove unused headers
Expand comment (Jonathan)
Changes since v2:
Tidy up ifdefs
Changes since v3:
Always set MPAMEN for MPAM1_EL1 rather than relying on it being read only.
---
arch/arm64/Kconfig | 2 +
arch/arm64/include/asm/mpam.h | 67 ++++++++++++++++++++++++++++
arch/arm64/include/asm/thread_info.h | 3 ++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/mpam.c | 13 ++++++
arch/arm64/kernel/process.c | 7 +++
drivers/resctrl/mpam_devices.c | 2 -
drivers/resctrl/mpam_internal.h | 4 +-
8 files changed, 95 insertions(+), 4 deletions(-)
create mode 100644 arch/arm64/include/asm/mpam.h
create mode 100644 arch/arm64/kernel/mpam.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 38dba5f7e4d2..ecaaca13a969 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2039,6 +2039,8 @@ config ARM64_MPAM
MPAM is exposed to user-space via the resctrl pseudo filesystem.
+ This option enables the extra context switch code.
+
endmenu # "ARMv8.4 architectural features"
menu "ARMv8.5 architectural features"
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
new file mode 100644
index 000000000000..0747e0526927
--- /dev/null
+++ b/arch/arm64/include/asm/mpam.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2025 Arm Ltd. */
+
+#ifndef __ASM__MPAM_H
+#define __ASM__MPAM_H
+
+#include <linux/jump_label.h>
+#include <linux/percpu.h>
+#include <linux/sched.h>
+
+#include <asm/sysreg.h>
+
+DECLARE_STATIC_KEY_FALSE(mpam_enabled);
+DECLARE_PER_CPU(u64, arm64_mpam_default);
+DECLARE_PER_CPU(u64, arm64_mpam_current);
+
+/*
+ * The value of the MPAM0_EL1 sysreg when a task is in resctrl's default group.
+ * This is used by the context switch code to use the resctrl CPU property
+ * instead. The value is modified when CDP is enabled/disabled by mounting
+ * the resctrl filesystem.
+ */
+extern u64 arm64_mpam_global_default;
+
+/*
+ * The resctrl filesystem writes to the partid/pmg values for threads and CPUs,
+ * which may race with reads in mpam_thread_switch(). Ensure only one of the old
+ * or new values are used. Particular care should be taken with the pmg field as
+ * mpam_thread_switch() may read a partid and pmg that don't match, causing this
+ * value to be stored with cache allocations, despite being considered 'free' by
+ * resctrl.
+ */
+#ifdef CONFIG_ARM64_MPAM
+static inline u64 mpam_get_regval(struct task_struct *tsk)
+{
+ return READ_ONCE(task_thread_info(tsk)->mpam_partid_pmg);
+}
+
+static inline void mpam_thread_switch(struct task_struct *tsk)
+{
+ u64 oldregval;
+ int cpu = smp_processor_id();
+ u64 regval = mpam_get_regval(tsk);
+
+ if (!static_branch_likely(&mpam_enabled))
+ return;
+
+ if (regval == READ_ONCE(arm64_mpam_global_default))
+ regval = READ_ONCE(per_cpu(arm64_mpam_default, cpu));
+
+ oldregval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
+ if (oldregval == regval)
+ return;
+
+ write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+ isb();
+
+ /* Synchronising the EL0 write is left until the ERET to EL0 */
+ write_sysreg_s(regval, SYS_MPAM0_EL1);
+
+ WRITE_ONCE(per_cpu(arm64_mpam_current, cpu), regval);
+}
+#else
+static inline void mpam_thread_switch(struct task_struct *tsk) {}
+#endif /* CONFIG_ARM64_MPAM */
+
+#endif /* __ASM__MPAM_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 7942478e4065..5d7fe3e153c8 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -41,6 +41,9 @@ struct thread_info {
#ifdef CONFIG_SHADOW_CALL_STACK
void *scs_base;
void *scs_sp;
+#endif
+#ifdef CONFIG_ARM64_MPAM
+ u64 mpam_partid_pmg;
#endif
u32 cpu;
};
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 76f32e424065..15979f366519 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -67,6 +67,7 @@ obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
obj-$(CONFIG_VMCORE_INFO) += vmcore_info.o
obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
+obj-$(CONFIG_ARM64_MPAM) += mpam.o
obj-$(CONFIG_ARM64_MTE) += mte.o
obj-y += vdso-wrap.o
obj-$(CONFIG_COMPAT_VDSO) += vdso32-wrap.o
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
new file mode 100644
index 000000000000..9866d2ca0faa
--- /dev/null
+++ b/arch/arm64/kernel/mpam.c
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2025 Arm Ltd. */
+
+#include <asm/mpam.h>
+
+#include <linux/jump_label.h>
+#include <linux/percpu.h>
+
+DEFINE_STATIC_KEY_FALSE(mpam_enabled);
+DEFINE_PER_CPU(u64, arm64_mpam_default);
+DEFINE_PER_CPU(u64, arm64_mpam_current);
+
+u64 arm64_mpam_global_default;
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 489554931231..47698955fa1e 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -51,6 +51,7 @@
#include <asm/fpsimd.h>
#include <asm/gcs.h>
#include <asm/mmu_context.h>
+#include <asm/mpam.h>
#include <asm/mte.h>
#include <asm/processor.h>
#include <asm/pointer_auth.h>
@@ -738,6 +739,12 @@ struct task_struct *__switch_to(struct task_struct *prev,
if (prev->thread.sctlr_user != next->thread.sctlr_user)
update_sctlr_el1(next->thread.sctlr_user);
+ /*
+ * MPAM thread switch happens after the DSB to ensure prev's accesses
+ * use prev's MPAM settings.
+ */
+ mpam_thread_switch(next);
+
/* the actual thread switch */
last = cpu_switch_to(prev, next);
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index 1eebc2602187..b400a7381d9a 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -29,8 +29,6 @@
#include "mpam_internal.h"
-DEFINE_STATIC_KEY_FALSE(mpam_enabled); /* This moves to arch code */
-
/*
* mpam_list_lock protects the SRCU lists when writing. Once the
* mpam_enabled key is enabled these lists are read-only,
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index e8971842b124..4632985bcca6 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -16,12 +16,12 @@
#include <linux/srcu.h>
#include <linux/types.h>
+#include <asm/mpam.h>
+
#define MPAM_MSC_MAX_NUM_RIS 16
struct platform_device;
-DECLARE_STATIC_KEY_FALSE(mpam_enabled);
-
#ifdef CONFIG_MPAM_KUNIT_TEST
#define PACKED_FOR_KUNIT __packed
#else
--
2.43.0
* [PATCH v5 05/41] arm64: mpam: Re-initialise MPAM regs when CPU comes online
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (3 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 04/41] arm64: mpam: Context switch the MPAM registers Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 06/41] arm64: mpam: Drop the CONFIG_EXPERT restriction Ben Horgan
` (38 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
Now that the MPAM system registers are expected to have values that change,
reprogram them based on the previous value when a CPU is brought online.
Previously, MPAM's 'default PARTID' of 0 was always used for MPAM in
kernel-space, as this is the PARTID value that hardware guarantees the
registers reset to. Because there are a limited number of PARTIDs, this
value is also exposed to user-space, meaning resctrl changes to the resctrl
default group would also affect kernel threads. Instead, use the task's
PARTID value for kernel work done on behalf of user-space too. The default
of 0 is kept for both user-space and kernel-space when MPAM is not enabled.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
CONFIG_MPAM -> CONFIG_ARM64_MPAM
Check mpam_enabled
Comment about relying on ERET for synchronisation
Update commit message
Changes since v3:
Always set MPAM1_EL1.MPAMEN rather than relying on it being read only
---
arch/arm64/kernel/cpufeature.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c31f8e17732a..c3f900f81653 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -86,6 +86,7 @@
#include <asm/kvm_host.h>
#include <asm/mmu.h>
#include <asm/mmu_context.h>
+#include <asm/mpam.h>
#include <asm/mte.h>
#include <asm/hypervisor.h>
#include <asm/processor.h>
@@ -2492,13 +2493,17 @@ test_has_mpam(const struct arm64_cpu_capabilities *entry, int scope)
static void
cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
{
- /*
- * Access by the kernel (at EL1) should use the reserved PARTID
- * which is configured unrestricted. This avoids priority-inversion
- * where latency sensitive tasks have to wait for a task that has
- * been throttled to release the lock.
- */
- write_sysreg_s(0, SYS_MPAM1_EL1);
+ int cpu = smp_processor_id();
+ u64 regval = 0;
+
+ if (IS_ENABLED(CONFIG_ARM64_MPAM) && static_branch_likely(&mpam_enabled))
+ regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
+
+ write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+ isb();
+
+ /* Synchronising the EL0 write is left until the ERET to EL0 */
+ write_sysreg_s(regval, SYS_MPAM0_EL1);
}
static bool
--
2.43.0
* [PATCH v5 06/41] arm64: mpam: Drop the CONFIG_EXPERT restriction
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (4 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 05/41] arm64: mpam: Re-initialise MPAM regs when CPU comes online Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-03-09 6:42 ` Gavin Shan
2026-02-24 17:56 ` [PATCH v5 07/41] arm64: mpam: Advertise the CPUs MPAM limits to the driver Ben Horgan
` (37 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
In anticipation of MPAM being useful, remove the CONFIG_EXPERT restriction.
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
arch/arm64/Kconfig | 2 +-
drivers/resctrl/Kconfig | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ecaaca13a969..3170c67464fb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2016,7 +2016,7 @@ config ARM64_TLB_RANGE
config ARM64_MPAM
bool "Enable support for MPAM"
- select ARM64_MPAM_DRIVER if EXPERT # does nothing yet
+ select ARM64_MPAM_DRIVER
select ACPI_MPAM if ACPI
help
Memory System Resource Partitioning and Monitoring (MPAM) is an
diff --git a/drivers/resctrl/Kconfig b/drivers/resctrl/Kconfig
index c808e0470394..c34e059c6e41 100644
--- a/drivers/resctrl/Kconfig
+++ b/drivers/resctrl/Kconfig
@@ -1,6 +1,6 @@
menuconfig ARM64_MPAM_DRIVER
bool "MPAM driver"
- depends on ARM64 && ARM64_MPAM && EXPERT
+ depends on ARM64 && ARM64_MPAM
help
Memory System Resource Partitioning and Monitoring (MPAM) driver for
System IP, e.g. caches and memory controllers.
--
2.43.0
* Re: [PATCH v5 06/41] arm64: mpam: Drop the CONFIG_EXPERT restriction
2026-02-24 17:56 ` [PATCH v5 06/41] arm64: mpam: Drop the CONFIG_EXPERT restriction Ben Horgan
@ 2026-03-09 6:42 ` Gavin Shan
0 siblings, 0 replies; 75+ messages in thread
From: Gavin Shan @ 2026-03-09 6:42 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
On 2/25/26 3:56 AM, Ben Horgan wrote:
> In anticipation of MPAM being useful remove the CONFIG_EXPERT restriction.
>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> arch/arm64/Kconfig | 2 +-
> drivers/resctrl/Kconfig | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
Reviewed-by: Gavin Shan <gshan@redhat.com>
* [PATCH v5 07/41] arm64: mpam: Advertise the CPUs MPAM limits to the driver
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (5 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 06/41] arm64: mpam: Drop the CONFIG_EXPERT restriction Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-03-09 6:43 ` Gavin Shan
2026-02-24 17:56 ` [PATCH v5 08/41] arm64: mpam: Add cpu_pm notifier to restore MPAM sysregs Ben Horgan
` (36 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
Requestors need to populate the MPAM fields for any traffic they send on
the interconnect. For the CPUs these values are taken from the
corresponding MPAMy_ELx register. Each requestor may have a limit on the
largest PARTID or PMG value that can be used. The MPAM driver has to
determine the system-wide minimum supported PARTID and PMG values.
To do this, the driver needs to be told what each requestor's limit is.
CPUs are special, but this infrastructure is also needed for the SMMU and
GIC ITS. Call the helper to tell the MPAM driver what the CPUs can do.
The return value can be ignored by the arch code as it runs well before the
MPAM driver starts probing.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
arch/arm64/kernel/mpam.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
index 9866d2ca0faa..e6feff2324ac 100644
--- a/arch/arm64/kernel/mpam.c
+++ b/arch/arm64/kernel/mpam.c
@@ -3,6 +3,7 @@
#include <asm/mpam.h>
+#include <linux/arm_mpam.h>
#include <linux/jump_label.h>
#include <linux/percpu.h>
@@ -11,3 +12,14 @@ DEFINE_PER_CPU(u64, arm64_mpam_default);
DEFINE_PER_CPU(u64, arm64_mpam_current);
u64 arm64_mpam_global_default;
+
+static int __init arm64_mpam_register_cpus(void)
+{
+ u64 mpamidr = read_sanitised_ftr_reg(SYS_MPAMIDR_EL1);
+ u16 partid_max = FIELD_GET(MPAMIDR_EL1_PARTID_MAX, mpamidr);
+ u8 pmg_max = FIELD_GET(MPAMIDR_EL1_PMG_MAX, mpamidr);
+
+ return mpam_register_requestor(partid_max, pmg_max);
+}
+/* Must occur before mpam_msc_driver_init() from subsys_initcall() */
+arch_initcall(arm64_mpam_register_cpus);
--
2.43.0
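For readers without the kernel headers to hand, the FIELD_GET() extraction in arm64_mpam_register_cpus() above can be sketched with plain shifts and masks. This is an illustrative user-space model, not code from the series; the field positions (PARTID_MAX at bits 15:0, PMG_MAX at bits 39:32 of MPAMIDR_EL1) follow the Arm architecture definition.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative user-space model of the MPAMIDR_EL1 fields used above. */
#define MPAMIDR_PARTID_MAX_MASK  0x000000000000ffffULL  /* bits 15:0  */
#define MPAMIDR_PMG_MAX_MASK     0x000000ff00000000ULL  /* bits 39:32 */

static inline uint16_t mpamidr_partid_max(uint64_t mpamidr)
{
	/* Equivalent of FIELD_GET(MPAMIDR_EL1_PARTID_MAX, mpamidr) */
	return (uint16_t)(mpamidr & MPAMIDR_PARTID_MAX_MASK);
}

static inline uint8_t mpamidr_pmg_max(uint64_t mpamidr)
{
	/* Equivalent of FIELD_GET(MPAMIDR_EL1_PMG_MAX, mpamidr) */
	return (uint8_t)((mpamidr & MPAMIDR_PMG_MAX_MASK) >> 32);
}
```

The extracted maxima are what the arch code hands to mpam_register_requestor() so the driver can compute the system-wide usable range.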
* Re: [PATCH v5 07/41] arm64: mpam: Advertise the CPUs MPAM limits to the driver
2026-02-24 17:56 ` [PATCH v5 07/41] arm64: mpam: Advertise the CPUs MPAM limits to the driver Ben Horgan
@ 2026-03-09 6:43 ` Gavin Shan
0 siblings, 0 replies; 75+ messages in thread
From: Gavin Shan @ 2026-03-09 6:43 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
On 2/25/26 3:56 AM, Ben Horgan wrote:
> From: James Morse <james.morse@arm.com>
>
> Requestors need to populate the MPAM fields for any traffic they send on
> the interconnect. For the CPUs these values are taken from the
> corresponding MPAMy_ELx register. Each requestor may have a limit on the
> largest PARTID or PMG value that can be used. The MPAM driver has to
> determine the system-wide minimum supported PARTID and PMG values.
>
> To do this, the driver needs to be told what each requestor's limit is.
>
> CPUs are special, but this infrastructure is also needed for the SMMU and
> GIC ITS. Call the helper to tell the MPAM driver what the CPUs can do.
>
> The return value can be ignored by the arch code as it runs well before the
> MPAM driver starts probing.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> arch/arm64/kernel/mpam.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
Reviewed-by: Gavin Shan <gshan@redhat.com>
* [PATCH v5 08/41] arm64: mpam: Add cpu_pm notifier to restore MPAM sysregs
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (6 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 07/41] arm64: mpam: Advertise the CPUs MPAM limits to the driver Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 09/41] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register Ben Horgan
` (35 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
The MPAM system registers will be lost if the CPU is reset during PSCI's
CPU_SUSPEND.
Add a PM notifier to restore them.
mpam_thread_switch(current) can't be used as this won't make any changes if
the in-memory copy says the register already has the correct value. In
reality the system register is UNKNOWN out of reset.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v3:
Always set MPAM1_EL1.MPAMEN rather than relying on it being read only
Bail out early if mpam not supported (Gavin)
---
arch/arm64/kernel/mpam.c | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
index e6feff2324ac..48ec0ffd5999 100644
--- a/arch/arm64/kernel/mpam.c
+++ b/arch/arm64/kernel/mpam.c
@@ -4,6 +4,7 @@
#include <asm/mpam.h>
#include <linux/arm_mpam.h>
+#include <linux/cpu_pm.h>
#include <linux/jump_label.h>
#include <linux/percpu.h>
@@ -13,12 +14,44 @@ DEFINE_PER_CPU(u64, arm64_mpam_current);
u64 arm64_mpam_global_default;
+static int mpam_pm_notifier(struct notifier_block *self,
+ unsigned long cmd, void *v)
+{
+ u64 regval;
+ int cpu = smp_processor_id();
+
+ switch (cmd) {
+ case CPU_PM_EXIT:
+ /*
+ * Don't use mpam_thread_switch() as the system register
+ * value has changed under our feet.
+ */
+ regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
+ write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+ isb();
+
+ write_sysreg_s(regval, SYS_MPAM0_EL1);
+
+ return NOTIFY_OK;
+ default:
+ return NOTIFY_DONE;
+ }
+}
+
+static struct notifier_block mpam_pm_nb = {
+ .notifier_call = mpam_pm_notifier,
+};
+
static int __init arm64_mpam_register_cpus(void)
{
u64 mpamidr = read_sanitised_ftr_reg(SYS_MPAMIDR_EL1);
u16 partid_max = FIELD_GET(MPAMIDR_EL1_PARTID_MAX, mpamidr);
u8 pmg_max = FIELD_GET(MPAMIDR_EL1_PMG_MAX, mpamidr);
+ if (!system_supports_mpam())
+ return 0;
+
+ cpu_pm_register_notifier(&mpam_pm_nb);
return mpam_register_requestor(partid_max, pmg_max);
}
/* Must occur before mpam_msc_driver_init() from subsys_initcall() */
--
2.43.0
* [PATCH v5 09/41] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (7 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 08/41] arm64: mpam: Add cpu_pm notifier to restore MPAM sysregs Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 10/41] arm64: mpam: Add helpers to change a task or cpu's MPAM PARTID/PMG values Ben Horgan
` (34 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
The MPAMSM_EL1 register sets the MPAM labels, PMG and PARTID, for loads and
stores generated by a shared SMCU. Disable the traps so the kernel can use
it, and set it to the same configuration as the per-EL cpu MPAM
configuration.
If an SMCU is not shared with other cpus then it is implementation
defined whether the configuration from MPAMSM_EL1 is used or that from
the appropriate MPAMy_ELx. As we set the same PMG_D and PARTID_D
configuration for MPAM0_EL1, MPAM1_EL1 and MPAMSM_EL1, the resulting
configuration is the same regardless.
The range of valid configurations for the PARTID and PMG in MPAMSM_EL1 is
not currently specified in the Arm Architecture Reference Manual, but the
architect has confirmed that it is intended to be the same as that for the
cpu configuration in the MPAMy_ELx registers.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Mention PMG_D and PARTID_D specifically in the commit message
Add paragraph in commit message on range of MPAMSM_EL1 fields
Changes since v3:
Use cpus_have_cap() in cpu_enable_mpam()
add {}
---
arch/arm64/include/asm/el2_setup.h | 3 ++-
arch/arm64/include/asm/mpam.h | 2 ++
arch/arm64/kernel/cpufeature.c | 2 ++
arch/arm64/kernel/mpam.c | 4 ++++
4 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index 85f4c1615472..4d15071a4f3f 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -513,7 +513,8 @@
check_override id_aa64pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT, .Linit_mpam_\@, .Lskip_mpam_\@, x1, x2
.Linit_mpam_\@:
- msr_s SYS_MPAM2_EL2, xzr // use the default partition
+ mov x0, #MPAM2_EL2_EnMPAMSM_MASK
+ msr_s SYS_MPAM2_EL2, x0 // use the default partition,
// and disable lower traps
mrs_s x0, SYS_MPAMIDR_EL1
tbz x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@ // skip if no MPAMHCR reg
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
index 0747e0526927..6bccbfdccb87 100644
--- a/arch/arm64/include/asm/mpam.h
+++ b/arch/arm64/include/asm/mpam.h
@@ -53,6 +53,8 @@ static inline void mpam_thread_switch(struct task_struct *tsk)
return;
write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+ if (system_supports_sme())
+ write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
isb();
/* Synchronising the EL0 write is left until the ERET to EL0 */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c3f900f81653..4f34e7a76f64 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2500,6 +2500,8 @@ cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+ if (cpus_have_cap(ARM64_SME))
+ write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D), SYS_MPAMSM_EL1);
isb();
/* Synchronising the EL0 write is left until the ERET to EL0 */
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
index 48ec0ffd5999..3a490de4fa12 100644
--- a/arch/arm64/kernel/mpam.c
+++ b/arch/arm64/kernel/mpam.c
@@ -28,6 +28,10 @@ static int mpam_pm_notifier(struct notifier_block *self,
*/
regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
write_sysreg_s(regval | MPAM1_EL1_MPAMEN, SYS_MPAM1_EL1);
+ if (system_supports_sme()) {
+ write_sysreg_s(regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D),
+ SYS_MPAMSM_EL1);
+ }
isb();
write_sysreg_s(regval, SYS_MPAM0_EL1);
--
2.43.0
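The masking in the patch above (`regval & (MPAMSM_EL1_PARTID_D | MPAMSM_EL1_PMG_D)`) works because MPAMSM_EL1 only defines the _D fields, at the same bit positions as in MPAM0_EL1. A user-space sketch of that masking, with the field positions taken from the sysreg definitions in this series (PARTID_I 15:0, PARTID_D 31:16, PMG_I 39:32, PMG_D 47:40); this is illustrative only, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* MPAMx_EL1 field masks per arch/arm64/tools/sysreg (illustrative). */
#define MPAM_PARTID_I  0x000000000000ffffULL  /* bits 15:0  */
#define MPAM_PARTID_D  0x00000000ffff0000ULL  /* bits 31:16 */
#define MPAM_PMG_I     0x000000ff00000000ULL  /* bits 39:32 */
#define MPAM_PMG_D     0x0000ff0000000000ULL  /* bits 47:40 */

/*
 * MPAMSM_EL1 only has PARTID_D and PMG_D, so the context-switch code masks
 * the per-thread value down to the _D fields before writing it; the _I
 * field positions are Res0 in MPAMSM_EL1 and must not be set.
 */
static inline uint64_t mpamsm_from_regval(uint64_t regval)
{
	return regval & (MPAM_PARTID_D | MPAM_PMG_D);
}
```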
* [PATCH v5 10/41] arm64: mpam: Add helpers to change a task or cpu's MPAM PARTID/PMG values
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (8 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 09/41] arm64: mpam: Initialise and context switch the MPAMSM_EL1 register Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-03-09 6:44 ` Gavin Shan
2026-02-24 17:56 ` [PATCH v5 11/41] KVM: arm64: Force guest EL1 to use user-space's partid configuration Ben Horgan
` (33 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan, Dave Martin
From: James Morse <james.morse@arm.com>
Care must be taken when modifying the PARTID and PMG of a task in any
per-task structure as writing these values may race with the task being
scheduled in, and reading the modified values.
Add helpers to set the task properties, and the CPU default value. These
use WRITE_ONCE() that pairs with the READ_ONCE() in mpam_get_regval() to
avoid causing torn values.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
CC: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
Keep comment attached to mpam_get_regval()
Add internal helper, __mpam_regval() (Jonathan)
Changes since v3:
Remove extra CONFIG_ARM64_MPAM guarding
Extend CONFIG_ARM64_MPAM guarding
---
arch/arm64/include/asm/mpam.h | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
index 6bccbfdccb87..05aa71200f61 100644
--- a/arch/arm64/include/asm/mpam.h
+++ b/arch/arm64/include/asm/mpam.h
@@ -4,6 +4,7 @@
#ifndef __ASM__MPAM_H
#define __ASM__MPAM_H
+#include <linux/bitfield.h>
#include <linux/jump_label.h>
#include <linux/percpu.h>
#include <linux/sched.h>
@@ -22,6 +23,23 @@ DECLARE_PER_CPU(u64, arm64_mpam_current);
*/
extern u64 arm64_mpam_global_default;
+#ifdef CONFIG_ARM64_MPAM
+static inline u64 __mpam_regval(u16 partid_d, u16 partid_i, u8 pmg_d, u8 pmg_i)
+{
+ return FIELD_PREP(MPAM0_EL1_PARTID_D, partid_d) |
+ FIELD_PREP(MPAM0_EL1_PARTID_I, partid_i) |
+ FIELD_PREP(MPAM0_EL1_PMG_D, pmg_d) |
+ FIELD_PREP(MPAM0_EL1_PMG_I, pmg_i);
+}
+
+static inline void mpam_set_cpu_defaults(int cpu, u16 partid_d, u16 partid_i,
+ u8 pmg_d, u8 pmg_i)
+{
+ u64 default_val = __mpam_regval(partid_d, partid_i, pmg_d, pmg_i);
+
+ WRITE_ONCE(per_cpu(arm64_mpam_default, cpu), default_val);
+}
+
/*
* The resctrl filesystem writes to the partid/pmg values for threads and CPUs,
* which may race with reads in mpam_thread_switch(). Ensure only one of the old
@@ -30,12 +48,20 @@ extern u64 arm64_mpam_global_default;
* value to be stored with cache allocations, despite being considered 'free' by
* resctrl.
*/
-#ifdef CONFIG_ARM64_MPAM
static inline u64 mpam_get_regval(struct task_struct *tsk)
{
return READ_ONCE(task_thread_info(tsk)->mpam_partid_pmg);
}
+static inline void mpam_set_task_partid_pmg(struct task_struct *tsk,
+ u16 partid_d, u16 partid_i,
+ u8 pmg_d, u8 pmg_i)
+{
+ u64 regval = __mpam_regval(partid_d, partid_i, pmg_d, pmg_i);
+
+ WRITE_ONCE(task_thread_info(tsk)->mpam_partid_pmg, regval);
+}
+
static inline void mpam_thread_switch(struct task_struct *tsk)
{
u64 oldregval;
--
2.43.0
* Re: [PATCH v5 10/41] arm64: mpam: Add helpers to change a task or cpu's MPAM PARTID/PMG values
2026-02-24 17:56 ` [PATCH v5 10/41] arm64: mpam: Add helpers to change a task or cpu's MPAM PARTID/PMG values Ben Horgan
@ 2026-03-09 6:44 ` Gavin Shan
0 siblings, 0 replies; 75+ messages in thread
From: Gavin Shan @ 2026-03-09 6:44 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
On 2/25/26 3:56 AM, Ben Horgan wrote:
> From: James Morse <james.morse@arm.com>
>
> Care must be taken when modifying the PARTID and PMG of a task in any
> per-task structure as writing these values may race with the task being
> scheduled in, and reading the modified values.
>
> Add helpers to set the task properties, and the CPU default value. These
> use WRITE_ONCE() that pairs with the READ_ONCE() in mpam_get_regval() to
> avoid causing torn values.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> CC: Dave Martin <Dave.Martin@arm.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> Changes since rfc:
> Keep comment attached to mpam_get_regval()
> Add internal helper, __mpam_regval() (Jonathan)
>
> Changes since v3:
> Remove extra CONFIG_ARM64_MPAM guarding
> Extend CONFIG_ARM64_MPAM guarding
> ---
> arch/arm64/include/asm/mpam.h | 28 +++++++++++++++++++++++++++-
> 1 file changed, 27 insertions(+), 1 deletion(-)
>
Reviewed-by: Gavin Shan <gshan@redhat.com>
* [PATCH v5 11/41] KVM: arm64: Force guest EL1 to use user-space's partid configuration
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (9 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 10/41] arm64: mpam: Add helpers to change a task or cpu's MPAM PARTID/PMG values Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-03-02 17:58 ` Marc Zyngier
2026-03-09 6:45 ` Gavin Shan
2026-02-24 17:56 ` [PATCH v5 12/41] KVM: arm64: Use kernel-space partid configuration for hypercalls Ben Horgan
` (32 subsequent siblings)
43 siblings, 2 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
While we trap the guest's attempts to read/write the MPAM control
registers, the hardware continues to use them. Guest-EL0 uses KVM's
user-space's configuration, as the value is left in the register, and
guest-EL1 uses either the host kernel's configuration, or in the case of
VHE, the UNKNOWN reset value of MPAM1_EL1.
We want to force the guest-EL1 to use KVM's user-space's MPAM
configuration. On nVHE rely on MPAM0_EL1 and MPAM1_EL1 always being
programmed the same and on VHE copy MPAM0_EL1 into the guest's
MPAM1_EL1. There is no need to restore as this is out of context once TGE
is set.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
Drop the unneeded __mpam_guest_load() in nVHE and the MPAM1_EL1 save restore
Defer EL2 handling until next patch
Changes since v2:
Use mask (Oliver)
Changes since v4:
Explicitly set the mpam enable bit
---
arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index b254d442e54e..be685b63e8cf 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -183,6 +183,21 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt)
}
NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe);
+/*
+ * The _EL0 value was written by the host's context switch and belongs to the
+ * VMM. Copy this into the guest's _EL1 register.
+ */
+static inline void __mpam_guest_load(void)
+{
+ u64 mask = MPAM0_EL1_PARTID_D | MPAM0_EL1_PARTID_I | MPAM0_EL1_PMG_D | MPAM0_EL1_PMG_I;
+
+ if (system_supports_mpam()) {
+ u64 val = (read_sysreg_s(SYS_MPAM0_EL1) & mask) | MPAM1_EL1_MPAMEN;
+
+ write_sysreg_el1(val, SYS_MPAM1);
+ }
+}
+
/**
* __vcpu_load_switch_sysregs - Load guest system registers to the physical CPU
*
@@ -222,6 +237,7 @@ void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu)
*/
__sysreg32_restore_state(vcpu);
__sysreg_restore_user_state(guest_ctxt);
+ __mpam_guest_load();
if (unlikely(is_hyp_ctxt(vcpu))) {
__sysreg_restore_vel2_state(vcpu);
--
2.43.0
* Re: [PATCH v5 11/41] KVM: arm64: Force guest EL1 to use user-space's partid configuration
2026-02-24 17:56 ` [PATCH v5 11/41] KVM: arm64: Force guest EL1 to use user-space's partid configuration Ben Horgan
@ 2026-03-02 17:58 ` Marc Zyngier
2026-03-09 6:45 ` Gavin Shan
1 sibling, 0 replies; 75+ messages in thread
From: Marc Zyngier @ 2026-03-02 17:58 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
On Tue, 24 Feb 2026 17:56:50 +0000,
Ben Horgan <ben.horgan@arm.com> wrote:
>
> From: James Morse <james.morse@arm.com>
>
> While we trap the guest's attempts to read/write the MPAM control
> registers, the hardware continues to use them. Guest-EL0 uses KVM's
> user-space's configuration, as the value is left in the register, and
> guest-EL1 uses either the host kernel's configuration, or in the case of
> VHE, the UNKNOWN reset value of MPAM1_EL1.
>
> We want to force the guest-EL1 to use KVM's user-space's MPAM
> configuration. On nVHE rely on MPAM0_EL1 and MPAM1_EL1 always being
> programmed the same and on VHE copy MPAM0_EL1 into the guest's
> MPAM1_EL1. There is no need to restore as this is out of context once TGE
> is set.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v5 11/41] KVM: arm64: Force guest EL1 to use user-space's partid configuration
2026-02-24 17:56 ` [PATCH v5 11/41] KVM: arm64: Force guest EL1 to use user-space's partid configuration Ben Horgan
2026-03-02 17:58 ` Marc Zyngier
@ 2026-03-09 6:45 ` Gavin Shan
1 sibling, 0 replies; 75+ messages in thread
From: Gavin Shan @ 2026-03-09 6:45 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
On 2/25/26 3:56 AM, Ben Horgan wrote:
> From: James Morse <james.morse@arm.com>
>
> While we trap the guest's attempts to read/write the MPAM control
> registers, the hardware continues to use them. Guest-EL0 uses KVM's
> user-space's configuration, as the value is left in the register, and
> guest-EL1 uses either the host kernel's configuration, or in the case of
> VHE, the UNKNOWN reset value of MPAM1_EL1.
>
> We want to force the guest-EL1 to use KVM's user-space's MPAM
> configuration. On nVHE rely on MPAM0_EL1 and MPAM1_EL1 always being
> programmed the same and on VHE copy MPAM0_EL1 into the guest's
> MPAM1_EL1. There is no need to restore as this is out of context once TGE
> is set.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> Changes since rfc:
> Drop the unneeded __mpam_guest_load() in nVHE and the MPAM1_EL1 save restore
> Defer EL2 handling until next patch
>
> Changes since v2:
> Use mask (Oliver)
>
> Changes since v4:
> Explicitly set the mpam enable bit
> ---
> arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
Reviewed-by: Gavin Shan <gshan@redhat.com>
* [PATCH v5 12/41] KVM: arm64: Use kernel-space partid configuration for hypercalls
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (10 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 11/41] KVM: arm64: Force guest EL1 to use user-space's partid configuration Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-03-02 18:15 ` Marc Zyngier
2026-02-24 17:56 ` [PATCH v5 13/41] arm_mpam: resctrl: Add boilerplate cpuhp and domain allocation Ben Horgan
` (31 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
On nVHE systems, whether or not MPAM is enabled, EL2 continues to use
partid-0 for hypercalls, even when the host may have configured its kernel
threads to use a different partid. Partid-0 may have been assigned to
another task. Copy the EL1 MPAM register to EL2. This ensures hypercalls
use the same partid as the kernel thread does on the host.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Use mask
Use read_sysreg_el1 to cope with hvhe
Changes since v3:
Set MPAM2_EL2.MPAMEN to 1 as we rely on that before and after
---
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e7790097db93..80e71eeddc03 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -638,6 +638,15 @@ static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
unsigned long hcall_min = 0;
hcall_t hfn;
+ if (system_supports_mpam()) {
+ u64 mask = MPAM1_EL1_PARTID_D | MPAM1_EL1_PARTID_I |
+ MPAM1_EL1_PMG_D | MPAM1_EL1_PMG_I;
+ u64 val = MPAM2_EL2_MPAMEN | (read_sysreg_el1(SYS_MPAM1) & mask);
+
+ write_sysreg_s(val, SYS_MPAM2_EL2);
+ isb();
+ }
+
/*
* If pKVM has been initialised then reject any calls to the
* early "privileged" hypercalls. Note that we cannot reject
--
2.43.0
* Re: [PATCH v5 12/41] KVM: arm64: Use kernel-space partid configuration for hypercalls
2026-02-24 17:56 ` [PATCH v5 12/41] KVM: arm64: Use kernel-space partid configuration for hypercalls Ben Horgan
@ 2026-03-02 18:15 ` Marc Zyngier
2026-03-03 16:33 ` Ben Horgan
0 siblings, 1 reply; 75+ messages in thread
From: Marc Zyngier @ 2026-03-02 18:15 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
On Tue, 24 Feb 2026 17:56:51 +0000,
Ben Horgan <ben.horgan@arm.com> wrote:
>
> On nVHE systems whether or not MPAM is enabled, EL2 continues to use
> partid-0 for hypercalls, even when the host may have configured its kernel
> threads to use a different partid. 0 may have been assigned to another
> task. Copy the EL1 MPAM register to EL2. This ensures hypercalls use the
> same partid as the kernel thread does on the host.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> Changes since v2:
> Use mask
> Use read_sysreg_el1 to cope with hvhe
>
> Changes since v3:
> Set MPAM2_EL2.MPAMEN to 1 as we rely on that before and after
> ---
> arch/arm64/kvm/hyp/nvhe/hyp-main.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> index e7790097db93..80e71eeddc03 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> @@ -638,6 +638,15 @@ static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
> unsigned long hcall_min = 0;
> hcall_t hfn;
>
> + if (system_supports_mpam()) {
> + u64 mask = MPAM1_EL1_PARTID_D | MPAM1_EL1_PARTID_I |
> + MPAM1_EL1_PMG_D | MPAM1_EL1_PMG_I;
> + u64 val = MPAM2_EL2_MPAMEN | (read_sysreg_el1(SYS_MPAM1) & mask);
> +
> + write_sysreg_s(val, SYS_MPAM2_EL2);
> + isb();
> + }
> +
> /*
> * If pKVM has been initialised then reject any calls to the
> * early "privileged" hypercalls. Note that we cannot reject
It is extremely debatable whether this is desirable:
- pKVM really shouldn't be influenced by what the host does, which
means reserving PARTIDs and indirecting what the host sees. This can
be deferred until pKVM is actually useful upstream.
- repeatedly hammering that register plus an ISB on the hot path of a
hypercall is a sure way to make things worse than they should be,
and that should be fixed now.
Do you really expect the EL1 settings to change on a regular basis? If
so, I'd rather you use a specific host hypercall, or even a trap to
propagate the EL1 configuration. If not, just set it as part of the
KVM init and be done with it.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [PATCH v5 12/41] KVM: arm64: Use kernel-space partid configuration for hypercalls
2026-03-02 18:15 ` Marc Zyngier
@ 2026-03-03 16:33 ` Ben Horgan
2026-03-13 9:43 ` Ben Horgan
0 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-03-03 16:33 UTC (permalink / raw)
To: Marc Zyngier
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
Hi Marc,
On 3/2/26 18:15, Marc Zyngier wrote:
> On Tue, 24 Feb 2026 17:56:51 +0000,
> Ben Horgan <ben.horgan@arm.com> wrote:
>>
>> On nVHE systems whether or not MPAM is enabled, EL2 continues to use
>> partid-0 for hypercalls, even when the host may have configured its kernel
>> threads to use a different partid. 0 may have been assigned to another
>> task. Copy the EL1 MPAM register to EL2. This ensures hypercalls use the
>> same partid as the kernel thread does on the host.
>>
>> Tested-by: Gavin Shan <gshan@redhat.com>
>> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>> Tested-by: Peter Newman <peternewman@google.com>
>> Tested-by: Zeng Heng <zengheng4@huawei.com>
>> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>> ---
>> Changes since v2:
>> Use mask
>> Use read_sysreg_el1 to cope with hvhe
>>
>> Changes since v3:
>> Set MPAM2_EL2.MPAMEN to 1 as we rely on that before and after
>> ---
>> arch/arm64/kvm/hyp/nvhe/hyp-main.c | 9 +++++++++
>> 1 file changed, 9 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
>> index e7790097db93..80e71eeddc03 100644
>> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
>> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
>> @@ -638,6 +638,15 @@ static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
>> unsigned long hcall_min = 0;
>> hcall_t hfn;
>>
>> + if (system_supports_mpam()) {
>> + u64 mask = MPAM1_EL1_PARTID_D | MPAM1_EL1_PARTID_I |
>> + MPAM1_EL1_PMG_D | MPAM1_EL1_PMG_I;
>> + u64 val = MPAM2_EL2_MPAMEN | (read_sysreg_el1(SYS_MPAM1) & mask);
>> +
>> + write_sysreg_s(val, SYS_MPAM2_EL2);
>> + isb();
>> + }
>> +
>> /*
>> * If pKVM has been initialised then reject any calls to the
>> * early "privileged" hypercalls. Note that we cannot reject
>
> It is extremely debatable whether this is desirable:
>
> - pKVM really shouldn't be influenced by what the host does, which
> means reserving PARTIDs and indirecting what the host sees. This can
> be deferred until pKVM is actually useful upstream.
>
> - repeatedly hammering that register plus an ISB on the hot path of a
> hypercall is a sure way to make things worse than they should be,
> and that should be fixed now.
Would a read-modify-write be preferable?
>
> Do you really expect the EL1 settings to change on a regular basis? If
The MPAM EL1 partid/pmg configuration is kept in sync with the MPAM EL0
partid/pmg configuration (see mpam_thread_switch() in patch 4), which
means that the EL1 configuration will change whenever the user changes
the EL0 configuration.
> so, I'd rather you use a specific host hypercall, or even a trap to
> propagate the EL1 configuration. If not, just set it as part of the
I think this ends up trapping on context switch, which doesn't seem any
more desirable.
> KVM init and be done with it.
If we just forego this patch then the MPAM configuration for EL2 as
initially configured, partid=0, pmg=0, would be used. This is also the
default for requestors that aren't MPAM aware or are unconfigured, like
trusted firmware, the ITS, or a GPU. VHE mode (required from 8.1?) should
be available on any platform that has MPAM (introduced in 8.4,
back-portable to 8.3), so using nVHE with MPAM seems unlikely and the
amount of data affected should be small enough. That leaves pKVM, for
which, perhaps, doing nothing is also the correct answer.
What do you think? Drop, read-modify-write, or something else?
>
> Thanks,
>
> M.
>
Thanks,
Ben
^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [PATCH v5 12/41] KVM: arm64: Use kernel-space partid configuration for hypercalls
2026-03-03 16:33 ` Ben Horgan
@ 2026-03-13 9:43 ` Ben Horgan
0 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-03-13 9:43 UTC (permalink / raw)
To: Marc Zyngier
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
On 3/3/26 16:33, Ben Horgan wrote:
> Hi Marc,
>
> On 3/2/26 18:15, Marc Zyngier wrote:
>> On Tue, 24 Feb 2026 17:56:51 +0000,
>> Ben Horgan <ben.horgan@arm.com> wrote:
>>>
>>> On nVHE systems whether or not MPAM is enabled, EL2 continues to use
>>> partid-0 for hypercalls, even when the host may have configured its kernel
>>> threads to use a different partid. 0 may have been assigned to another
>>> task. Copy the EL1 MPAM register to EL2. This ensures hypercalls use the
>>> same partid as the kernel thread does on the host.
>>>
>>> Tested-by: Gavin Shan <gshan@redhat.com>
>>> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>>> Tested-by: Peter Newman <peternewman@google.com>
>>> Tested-by: Zeng Heng <zengheng4@huawei.com>
>>> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>>> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
>>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>>> ---
>>> Changes since v2:
>>> Use mask
>>> Use read_sysreg_el1 to cope with hvhe
>>>
>>> Changes since v3:
>>> Set MPAM2_EL2.MPAMEN to 1 as we rely on that before and after
>>> ---
>>> arch/arm64/kvm/hyp/nvhe/hyp-main.c | 9 +++++++++
>>> 1 file changed, 9 insertions(+)
>>>
>>> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
>>> index e7790097db93..80e71eeddc03 100644
>>> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
>>> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
>>> @@ -638,6 +638,15 @@ static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
>>> unsigned long hcall_min = 0;
>>> hcall_t hfn;
>>>
>>> + if (system_supports_mpam()) {
>>> + u64 mask = MPAM1_EL1_PARTID_D | MPAM1_EL1_PARTID_I |
>>> + MPAM1_EL1_PMG_D | MPAM1_EL1_PMG_I;
>>> + u64 val = MPAM2_EL2_MPAMEN | (read_sysreg_el1(SYS_MPAM1) & mask);
>>> +
>>> + write_sysreg_s(val, SYS_MPAM2_EL2);
>>> + isb();
>>> + }
>>> +
>>> /*
>>> * If pKVM has been initialised then reject any calls to the
>>> * early "privileged" hypercalls. Note that we cannot reject
>>
>> It is extremely debatable whether this is desirable:
>>
>> - pKVM really shouldn't be influenced by what the host does, which
>> means reserving PARTIDs and indirecting what the host sees. This can
>> be deferred until pKVM is actually useful upstream.
>>
>> - repeatedly hammering that register plus an ISB on the hot path of a
>> hypercall is a sure way to make things worse than they should be,
>> and that should be fixed now.
>
> Would a read modify write be preferable?
>
>>
>> Do you really expect the EL1 settings to change on a regular basis? If
>
> The MPAM EL1 partid/pmg configuration is kept in sync with the MPAM EL0
> partid/pmg configuration (see mpam_thread_switch() in patch 4) which
> means that the EL1 configuration will change whenever the user changes
> the EL0 configuration.
>
>> so, I'd rather you use a specific host hypercall, or even a trap to
>> propagate the EL1 configuration. If not, just set it as part of the
>
> I think this ends up trapping context switch which doesn't seem any more
> desirable.
>
>> KVM init and be done with it.
>
> If we just forego this patch then the MPAM configuration for el2 as
> initially configured, partid=0, pmg=0 would be used. This is also the
> default for requestors that aren't MPAM aware or unconfigured, like
> trusted firmware, its, gpu. VHE mode (required from 8.1?) should be
> available in any platform that has MPAM (introduced in 8.4, back
> portable to 8.3) and so using nvhe with MPAM seems unlikely and the
> amount of data should be small enough. That leaves pKVM for which,
> perhaps, doing nothing is also the correct answer.
>
> What do you think? Drop, read modify write, or something else?
As discussed offline, I'll drop this patch.
Thanks,
Ben
^ permalink raw reply [flat|nested] 75+ messages in thread
* [PATCH v5 13/41] arm_mpam: resctrl: Add boilerplate cpuhp and domain allocation
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (11 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 12/41] KVM: arm64: Use kernel-space partid configuration for hypercalls Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-03-10 6:17 ` Gavin Shan
2026-02-24 17:56 ` [PATCH v5 14/41] arm_mpam: resctrl: Pick the caches we will use as resctrl resources Ben Horgan
` (30 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
resctrl has its own data structures to describe its resources. We can't use
these directly as we play tricks with the 'MBA' resource, picking the MPAM
controls or monitors that best apply. We may export the same component as
both L3 and MBA.
Add mpam_resctrl_res[] as the array of class->resctrl mappings we are
exporting, and add the cpuhp hooks that allocate and free the resctrl
domain structures. Only the MPAM control features are considered here;
monitor support will be added later.
While we're here, plumb in a few other obvious things.
CONFIG_ARM_CPU_RESCTRL is used to allow this code to be built even though
it can't yet be linked against resctrl.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
Domain list is an rcu list
Add synchronize_rcu() to free the deleted element
Code flow simplification (Jonathan)
Changes since v2:
Iterate over mpam_resctrl_dom directly (Jonathan)
Code flow clarification
Comment tidying
Remove power of 2 check as no longer creates holes in rmid indices
Remove unused type argument
add macro helper for_each_mpam_resctrl_control
Changes since v3:
Add and use mpam_resctrl_online_domain_hdr()
mpam_resctrl_alloc_domain() error paths (Reinette)
rebase on x86/cache changes rdt_mon_domain becomes rdt_l3_mon_domain
etc
Changes since v4:
Set rid in domain_hdr
Use resctrl_res.alloc_capable to determine alloc capability, as the
decision may depend on the resctrl mount options (cdp)
Squash in arm_mpam: resctrl: Sort the order of the domain lists
Move out monitor/counter changes to a separate patch
Commit message update
---
drivers/resctrl/Makefile | 1 +
drivers/resctrl/mpam_devices.c | 12 ++
drivers/resctrl/mpam_internal.h | 21 ++
drivers/resctrl/mpam_resctrl.c | 327 ++++++++++++++++++++++++++++++++
include/linux/arm_mpam.h | 3 +
5 files changed, 364 insertions(+)
create mode 100644 drivers/resctrl/mpam_resctrl.c
diff --git a/drivers/resctrl/Makefile b/drivers/resctrl/Makefile
index 898199dcf80d..40beaf999582 100644
--- a/drivers/resctrl/Makefile
+++ b/drivers/resctrl/Makefile
@@ -1,4 +1,5 @@
obj-$(CONFIG_ARM64_MPAM_DRIVER) += mpam.o
mpam-y += mpam_devices.o
+mpam-$(CONFIG_ARM_CPU_RESCTRL) += mpam_resctrl.o
ccflags-$(CONFIG_ARM64_MPAM_DRIVER_DEBUG) += -DDEBUG
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index b400a7381d9a..b45743c5fb46 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -1628,6 +1628,9 @@ static int mpam_cpu_online(unsigned int cpu)
mpam_reprogram_msc(msc);
}
+ if (mpam_is_enabled())
+ return mpam_resctrl_online_cpu(cpu);
+
return 0;
}
@@ -1671,6 +1674,9 @@ static int mpam_cpu_offline(unsigned int cpu)
{
struct mpam_msc *msc;
+ if (mpam_is_enabled())
+ mpam_resctrl_offline_cpu(cpu);
+
guard(srcu)(&mpam_srcu);
list_for_each_entry_srcu(msc, &mpam_all_msc, all_msc_list,
srcu_read_lock_held(&mpam_srcu)) {
@@ -2516,6 +2522,12 @@ static void mpam_enable_once(void)
mutex_unlock(&mpam_list_lock);
cpus_read_unlock();
+ if (!err) {
+ err = mpam_resctrl_setup();
+ if (err)
+ pr_err("Failed to initialise resctrl: %d\n", err);
+ }
+
if (err) {
mpam_disable_reason = "Failed to enable.";
schedule_work(&mpam_broken_work);
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 4632985bcca6..28ac501e1ac3 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -12,6 +12,7 @@
#include <linux/jump_label.h>
#include <linux/llist.h>
#include <linux/mutex.h>
+#include <linux/resctrl.h>
#include <linux/spinlock.h>
#include <linux/srcu.h>
#include <linux/types.h>
@@ -337,6 +338,16 @@ struct mpam_msc_ris {
struct mpam_garbage garbage;
};
+struct mpam_resctrl_dom {
+ struct mpam_component *ctrl_comp;
+ struct rdt_ctrl_domain resctrl_ctrl_dom;
+};
+
+struct mpam_resctrl_res {
+ struct mpam_class *class;
+ struct rdt_resource resctrl_res;
+};
+
static inline int mpam_alloc_csu_mon(struct mpam_class *class)
{
struct mpam_props *cprops = &class->props;
@@ -391,6 +402,16 @@ void mpam_msmon_reset_mbwu(struct mpam_component *comp, struct mon_cfg *ctx);
int mpam_get_cpumask_from_cache_id(unsigned long cache_id, u32 cache_level,
cpumask_t *affinity);
+#ifdef CONFIG_RESCTRL_FS
+int mpam_resctrl_setup(void);
+int mpam_resctrl_online_cpu(unsigned int cpu);
+void mpam_resctrl_offline_cpu(unsigned int cpu);
+#else
+static inline int mpam_resctrl_setup(void) { return 0; }
+static inline int mpam_resctrl_online_cpu(unsigned int cpu) { return 0; }
+static inline void mpam_resctrl_offline_cpu(unsigned int cpu) { }
+#endif /* CONFIG_RESCTRL_FS */
+
/*
* MPAM MSCs have the following register layout. See:
* Arm Memory System Resource Partitioning and Monitoring (MPAM) System
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
new file mode 100644
index 000000000000..2ffba7a15d6a
--- /dev/null
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -0,0 +1,327 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2025 Arm Ltd.
+
+#define pr_fmt(fmt) "%s:%s: " fmt, KBUILD_MODNAME, __func__
+
+#include <linux/arm_mpam.h>
+#include <linux/cacheinfo.h>
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/printk.h>
+#include <linux/rculist.h>
+#include <linux/resctrl.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include <asm/mpam.h>
+
+#include "mpam_internal.h"
+
+/*
+ * The classes we've picked to map to resctrl resources, wrapped
+ * together with their resctrl structure.
+ * Class pointer may be NULL.
+ */
+static struct mpam_resctrl_res mpam_resctrl_controls[RDT_NUM_RESOURCES];
+
+#define for_each_mpam_resctrl_control(res, rid) \
+ for (rid = 0, res = &mpam_resctrl_controls[rid]; \
+ rid < RDT_NUM_RESOURCES; \
+ rid++, res = &mpam_resctrl_controls[rid])
+
+/* The lock for modifying resctrl's domain lists from cpuhp callbacks. */
+static DEFINE_MUTEX(domain_list_lock);
+
+bool resctrl_arch_alloc_capable(void)
+{
+ struct mpam_resctrl_res *res;
+ enum resctrl_res_level rid;
+
+ for_each_mpam_resctrl_control(res, rid) {
+ if (res->resctrl_res.alloc_capable)
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * An MSC may raise an error interrupt if it sees an out-of-range partid/pmg,
+ * and go on to truncate the value. Regardless of what the hardware supports,
+ * only the system-wide safe value is safe to use.
+ */
+u32 resctrl_arch_get_num_closid(struct rdt_resource *ignored)
+{
+ return mpam_partid_max + 1;
+}
+
+struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l)
+{
+ if (l >= RDT_NUM_RESOURCES)
+ return NULL;
+
+ return &mpam_resctrl_controls[l].resctrl_res;
+}
+
+static int mpam_resctrl_control_init(struct mpam_resctrl_res *res)
+{
+ /* TODO: initialise the resctrl resources */
+
+ return 0;
+}
+
+static int mpam_resctrl_pick_domain_id(int cpu, struct mpam_component *comp)
+{
+ struct mpam_class *class = comp->class;
+
+ if (class->type == MPAM_CLASS_CACHE)
+ return comp->comp_id;
+
+ /* TODO: repaint domain ids to match the L3 domain ids */
+ /* Otherwise, expose the ID used by the firmware table code. */
+ return comp->comp_id;
+}
+
+static void mpam_resctrl_domain_hdr_init(int cpu, struct mpam_component *comp,
+ enum resctrl_res_level rid,
+ struct rdt_domain_hdr *hdr)
+{
+ lockdep_assert_cpus_held();
+
+ INIT_LIST_HEAD(&hdr->list);
+ hdr->id = mpam_resctrl_pick_domain_id(cpu, comp);
+ hdr->rid = rid;
+ cpumask_set_cpu(cpu, &hdr->cpu_mask);
+}
+
+static void mpam_resctrl_online_domain_hdr(unsigned int cpu,
+ struct rdt_domain_hdr *hdr)
+{
+ lockdep_assert_cpus_held();
+
+ cpumask_set_cpu(cpu, &hdr->cpu_mask);
+}
+
+/**
+ * mpam_resctrl_offline_domain_hdr() - Update the domain header to remove a CPU.
+ * @cpu: The CPU to remove from the domain.
+ * @hdr: The domain's header.
+ *
+ * Removes @cpu from the header mask. If this was the last CPU in the domain,
+ * the domain header is removed from its parent list and true is returned,
+ * indicating the parent structure can be freed.
+ * If there are other CPUs in the domain, returns false.
+ */
+static bool mpam_resctrl_offline_domain_hdr(unsigned int cpu,
+ struct rdt_domain_hdr *hdr)
+{
+ lockdep_assert_held(&domain_list_lock);
+
+ cpumask_clear_cpu(cpu, &hdr->cpu_mask);
+ if (cpumask_empty(&hdr->cpu_mask)) {
+ list_del_rcu(&hdr->list);
+ synchronize_rcu();
+ return true;
+ }
+
+ return false;
+}
+
+static void mpam_resctrl_domain_insert(struct list_head *list,
+ struct rdt_domain_hdr *new)
+{
+ struct rdt_domain_hdr *err;
+ struct list_head *pos = NULL;
+
+ lockdep_assert_held(&domain_list_lock);
+
+ err = resctrl_find_domain(list, new->id, &pos);
+ if (WARN_ON_ONCE(err))
+ return;
+
+ list_add_tail_rcu(&new->list, pos);
+}
+
+static struct mpam_resctrl_dom *
+mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
+{
+ int err;
+ struct mpam_resctrl_dom *dom;
+ struct rdt_ctrl_domain *ctrl_d;
+ struct mpam_class *class = res->class;
+ struct mpam_component *comp_iter, *ctrl_comp;
+ struct rdt_resource *r = &res->resctrl_res;
+
+ lockdep_assert_held(&domain_list_lock);
+
+ ctrl_comp = NULL;
+ guard(srcu)(&mpam_srcu);
+ list_for_each_entry_srcu(comp_iter, &class->components, class_list,
+ srcu_read_lock_held(&mpam_srcu)) {
+ if (cpumask_test_cpu(cpu, &comp_iter->affinity)) {
+ ctrl_comp = comp_iter;
+ break;
+ }
+ }
+
+ /* class has no component for this CPU */
+ if (WARN_ON_ONCE(!ctrl_comp))
+ return ERR_PTR(-EINVAL);
+
+ dom = kzalloc_node(sizeof(*dom), GFP_KERNEL, cpu_to_node(cpu));
+ if (!dom)
+ return ERR_PTR(-ENOMEM);
+
+ if (resctrl_arch_alloc_capable()) {
+ dom->ctrl_comp = ctrl_comp;
+
+ ctrl_d = &dom->resctrl_ctrl_dom;
+ mpam_resctrl_domain_hdr_init(cpu, ctrl_comp, r->rid, &ctrl_d->hdr);
+ ctrl_d->hdr.type = RESCTRL_CTRL_DOMAIN;
+ err = resctrl_online_ctrl_domain(r, ctrl_d);
+ if (err)
+ goto free_domain;
+
+ mpam_resctrl_domain_insert(&r->ctrl_domains, &ctrl_d->hdr);
+ } else {
+ pr_debug("Skipped control domain online - no controls\n");
+ }
+ return dom;
+
+offline_ctrl_domain:
+ if (resctrl_arch_alloc_capable()) {
+ mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
+ resctrl_offline_ctrl_domain(r, ctrl_d);
+ }
+free_domain:
+ kfree(dom);
+ dom = ERR_PTR(err);
+
+ return dom;
+}
+
+static struct mpam_resctrl_dom *
+mpam_resctrl_get_domain_from_cpu(int cpu, struct mpam_resctrl_res *res)
+{
+ struct mpam_resctrl_dom *dom;
+ struct rdt_resource *r = &res->resctrl_res;
+
+ lockdep_assert_cpus_held();
+
+ list_for_each_entry_rcu(dom, &r->ctrl_domains, resctrl_ctrl_dom.hdr.list) {
+ if (cpumask_test_cpu(cpu, &dom->ctrl_comp->affinity))
+ return dom;
+ }
+
+ return NULL;
+}
+
+int mpam_resctrl_online_cpu(unsigned int cpu)
+{
+ struct mpam_resctrl_res *res;
+ enum resctrl_res_level rid;
+
+ guard(mutex)(&domain_list_lock);
+ for_each_mpam_resctrl_control(res, rid) {
+ struct mpam_resctrl_dom *dom;
+
+ if (!res->class)
+ continue; // dummy resource
+
+ dom = mpam_resctrl_get_domain_from_cpu(cpu, res);
+ if (!dom) {
+ dom = mpam_resctrl_alloc_domain(cpu, res);
+ } else {
+ if (resctrl_arch_alloc_capable()) {
+ struct rdt_ctrl_domain *ctrl_d = &dom->resctrl_ctrl_dom;
+
+ mpam_resctrl_online_domain_hdr(cpu, &ctrl_d->hdr);
+ }
+ }
+ if (IS_ERR(dom))
+ return PTR_ERR(dom);
+ }
+
+ resctrl_online_cpu(cpu);
+
+ return 0;
+}
+
+void mpam_resctrl_offline_cpu(unsigned int cpu)
+{
+ struct mpam_resctrl_res *res;
+ enum resctrl_res_level rid;
+
+ resctrl_offline_cpu(cpu);
+
+ guard(mutex)(&domain_list_lock);
+ for_each_mpam_resctrl_control(res, rid) {
+ struct mpam_resctrl_dom *dom;
+ struct rdt_ctrl_domain *ctrl_d;
+ bool ctrl_dom_empty;
+
+ if (!res->class)
+ continue; // dummy resource
+
+ dom = mpam_resctrl_get_domain_from_cpu(cpu, res);
+ if (WARN_ON_ONCE(!dom))
+ continue;
+
+ if (resctrl_arch_alloc_capable()) {
+ ctrl_d = &dom->resctrl_ctrl_dom;
+ ctrl_dom_empty = mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
+ if (ctrl_dom_empty)
+ resctrl_offline_ctrl_domain(&res->resctrl_res, ctrl_d);
+ } else {
+ ctrl_dom_empty = true;
+ }
+
+ if (ctrl_dom_empty)
+ kfree(dom);
+ }
+}
+
+int mpam_resctrl_setup(void)
+{
+ int err = 0;
+ struct mpam_resctrl_res *res;
+ enum resctrl_res_level rid;
+
+ cpus_read_lock();
+ for_each_mpam_resctrl_control(res, rid) {
+ INIT_LIST_HEAD_RCU(&res->resctrl_res.ctrl_domains);
+ res->resctrl_res.rid = rid;
+ }
+
+ /* TODO: pick MPAM classes to map to resctrl resources */
+
+ /* Initialise the resctrl structures from the classes */
+ for_each_mpam_resctrl_control(res, rid) {
+ if (!res->class)
+ continue; // dummy resource
+
+ err = mpam_resctrl_control_init(res);
+ if (err) {
+ pr_debug("Failed to initialise rid %u\n", rid);
+ break;
+ }
+ }
+ cpus_read_unlock();
+
+ if (err) {
+ pr_debug("Internal error %d - resctrl not supported\n", err);
+ return err;
+ }
+
+ if (!resctrl_arch_alloc_capable()) {
+ pr_debug("No alloc(%u) found - resctrl not supported\n",
+ resctrl_arch_alloc_capable());
+ return -EOPNOTSUPP;
+ }
+
+ /* TODO: call resctrl_init() */
+
+ return 0;
+}
diff --git a/include/linux/arm_mpam.h b/include/linux/arm_mpam.h
index 7f00c5285a32..2c7d1413a401 100644
--- a/include/linux/arm_mpam.h
+++ b/include/linux/arm_mpam.h
@@ -49,6 +49,9 @@ static inline int mpam_ris_create(struct mpam_msc *msc, u8 ris_idx,
}
#endif
+bool resctrl_arch_alloc_capable(void);
+bool resctrl_arch_mon_capable(void);
+
/**
* mpam_register_requestor() - Register a requestor with the MPAM driver
* @partid_max: The maximum PARTID value the requestor can generate.
--
2.43.0
^ permalink raw reply related	[flat|nested] 75+ messages in thread

* Re: [PATCH v5 13/41] arm_mpam: resctrl: Add boilerplate cpuhp and domain allocation
2026-02-24 17:56 ` [PATCH v5 13/41] arm_mpam: resctrl: Add boilerplate cpuhp and domain allocation Ben Horgan
@ 2026-03-10 6:17 ` Gavin Shan
2026-03-10 10:34 ` Ben Horgan
0 siblings, 1 reply; 75+ messages in thread
From: Gavin Shan @ 2026-03-10 6:17 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
Hi Ben,
On 2/25/26 3:56 AM, Ben Horgan wrote:
> From: James Morse <james.morse@arm.com>
>
> resctrl has its own data structures to describe its resources. We can't use
> these directly as we play tricks with the 'MBA' resource, picking the MPAM
> controls or monitors that best apply. We may export the same component as
> both L3 and MBA.
>
> Add mpam_resctrl_res[] as the array of class->resctrl mappings we are
> exporting, and add the cpuhp hooks that allocated and free the resctrl
> domain structures. Only the mpam control feature are considered here and
> monitor support will be added later.
>
> While we're here, plumb in a few other obvious things.
>
> CONFIG_ARM_CPU_RESCTRL is used to allow this code to be built even though
> it can't yet be linked against resctrl.
>
CONFIG_ARM_CPU_RESCTRL isn't valid. I guess you mean
CONFIG_ARCH_HAS_CPU_RESCTRL?
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> Changes since rfc:
> Domain list is an rcu list
> Add synchronize_rcu() to free the deleted element
> Code flow simplification (Jonathan)
>
> Changes since v2:
> Iterate over mpam_resctrl_dom directly (Jonathan)
> Code flow clarification
> Comment tidying
> Remove power of 2 check as no longer creates holes in rmid indices
> Remove unused type argument
> add macro helper for_each_mpam_resctrl_control
>
> Changes since v3:
> Add and use mpam_resctrl_online_domain_hdr()
> mpam_resctrl_alloc_domain() error paths (Reinette)
> rebase on x86/cache changes rdt_mon_domain becomes rdt_l3_mon_domain
> etc
>
> Changes since v4:
> Set rid in domain_hdr
> Use rescctrl_res.alloc_capable to determine if alloc_capable as the
> decision may depend on the resctrl mount options (cdp)
> Squash in arm_mpam: resctrl: Sort the order of the domain lists
> Move out monitor/counter changes to a separate patch
> Commit message update
> ---
> drivers/resctrl/Makefile | 1 +
> drivers/resctrl/mpam_devices.c | 12 ++
> drivers/resctrl/mpam_internal.h | 21 ++
> drivers/resctrl/mpam_resctrl.c | 327 ++++++++++++++++++++++++++++++++
> include/linux/arm_mpam.h | 3 +
> 5 files changed, 364 insertions(+)
> create mode 100644 drivers/resctrl/mpam_resctrl.c
>
> diff --git a/drivers/resctrl/Makefile b/drivers/resctrl/Makefile
> index 898199dcf80d..40beaf999582 100644
> --- a/drivers/resctrl/Makefile
> +++ b/drivers/resctrl/Makefile
> @@ -1,4 +1,5 @@
> obj-$(CONFIG_ARM64_MPAM_DRIVER) += mpam.o
> mpam-y += mpam_devices.o
> +mpam-$(CONFIG_ARM_CPU_RESCTRL) += mpam_resctrl.o
>
> ccflags-$(CONFIG_ARM64_MPAM_DRIVER_DEBUG) += -DDEBUG
> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
> index b400a7381d9a..b45743c5fb46 100644
> --- a/drivers/resctrl/mpam_devices.c
> +++ b/drivers/resctrl/mpam_devices.c
> @@ -1628,6 +1628,9 @@ static int mpam_cpu_online(unsigned int cpu)
> mpam_reprogram_msc(msc);
> }
>
> + if (mpam_is_enabled())
> + return mpam_resctrl_online_cpu(cpu);
> +
> return 0;
> }
>
> @@ -1671,6 +1674,9 @@ static int mpam_cpu_offline(unsigned int cpu)
> {
> struct mpam_msc *msc;
>
> + if (mpam_is_enabled())
> + mpam_resctrl_offline_cpu(cpu);
> +
> guard(srcu)(&mpam_srcu);
> list_for_each_entry_srcu(msc, &mpam_all_msc, all_msc_list,
> srcu_read_lock_held(&mpam_srcu)) {
> @@ -2516,6 +2522,12 @@ static void mpam_enable_once(void)
> mutex_unlock(&mpam_list_lock);
> cpus_read_unlock();
>
> + if (!err) {
> + err = mpam_resctrl_setup();
> + if (err)
> + pr_err("Failed to initialise resctrl: %d\n", err);
> + }
> +
> if (err) {
> mpam_disable_reason = "Failed to enable.";
> schedule_work(&mpam_broken_work);
> diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
> index 4632985bcca6..28ac501e1ac3 100644
> --- a/drivers/resctrl/mpam_internal.h
> +++ b/drivers/resctrl/mpam_internal.h
> @@ -12,6 +12,7 @@
> #include <linux/jump_label.h>
> #include <linux/llist.h>
> #include <linux/mutex.h>
> +#include <linux/resctrl.h>
> #include <linux/spinlock.h>
> #include <linux/srcu.h>
> #include <linux/types.h>
> @@ -337,6 +338,16 @@ struct mpam_msc_ris {
> struct mpam_garbage garbage;
> };
>
> +struct mpam_resctrl_dom {
> + struct mpam_component *ctrl_comp;
> + struct rdt_ctrl_domain resctrl_ctrl_dom;
> +};
> +
> +struct mpam_resctrl_res {
> + struct mpam_class *class;
> + struct rdt_resource resctrl_res;
> +};
> +
> static inline int mpam_alloc_csu_mon(struct mpam_class *class)
> {
> struct mpam_props *cprops = &class->props;
> @@ -391,6 +402,16 @@ void mpam_msmon_reset_mbwu(struct mpam_component *comp, struct mon_cfg *ctx);
> int mpam_get_cpumask_from_cache_id(unsigned long cache_id, u32 cache_level,
> cpumask_t *affinity);
>
> +#ifdef CONFIG_RESCTRL_FS
> +int mpam_resctrl_setup(void);
> +int mpam_resctrl_online_cpu(unsigned int cpu);
> +void mpam_resctrl_offline_cpu(unsigned int cpu);
> +#else
> +static inline int mpam_resctrl_setup(void) { return 0; }
> +static inline int mpam_resctrl_online_cpu(unsigned int cpu) { return 0; }
> +static inline void mpam_resctrl_offline_cpu(unsigned int cpu) { }
> +#endif /* CONFIG_RESCTRL_FS */
> +
> /*
> * MPAM MSCs have the following register layout. See:
> * Arm Memory System Resource Partitioning and Monitoring (MPAM) System
> diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
> new file mode 100644
> index 000000000000..2ffba7a15d6a
> --- /dev/null
> +++ b/drivers/resctrl/mpam_resctrl.c
> @@ -0,0 +1,327 @@
> +// SPDX-License-Identifier: GPL-2.0
> +// Copyright (C) 2025 Arm Ltd.
> +
> +#define pr_fmt(fmt) "%s:%s: " fmt, KBUILD_MODNAME, __func__
> +
> +#include <linux/arm_mpam.h>
> +#include <linux/cacheinfo.h>
> +#include <linux/cpu.h>
> +#include <linux/cpumask.h>
> +#include <linux/errno.h>
> +#include <linux/list.h>
> +#include <linux/printk.h>
> +#include <linux/rculist.h>
> +#include <linux/resctrl.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +
> +#include <asm/mpam.h>
> +
> +#include "mpam_internal.h"
> +
The list of included headers could be simplified since several of them
are already included in mpam_internal.h, for example <linux/arm_mpam.h>,
<linux/cpumask.h>, <asm/mpam.h> and others.
> +/*
> + * The classes we've picked to map to resctrl resources, wrapped
> + * in with their resctrl structure.
> + * Class pointer may be NULL.
> + */
> +static struct mpam_resctrl_res mpam_resctrl_controls[RDT_NUM_RESOURCES];
> +
> +#define for_each_mpam_resctrl_control(res, rid) \
> + for (rid = 0, res = &mpam_resctrl_controls[rid]; \
> + rid < RDT_NUM_RESOURCES; \
> + rid++, res = &mpam_resctrl_controls[rid])
> +
> +/* The lock for modifying resctrl's domain lists from cpuhp callbacks. */
> +static DEFINE_MUTEX(domain_list_lock);
> +
> +bool resctrl_arch_alloc_capable(void)
> +{
> + struct mpam_resctrl_res *res;
> + enum resctrl_res_level rid;
> +
> + for_each_mpam_resctrl_control(res, rid) {
> + if (res->resctrl_res.alloc_capable)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +/*
> + * MSC may raise an error interrupt if it sees an out of range partid/pmg,
> + * and go on to truncate the value. Regardless of what the hardware supports,
> + * only the system wide safe value is safe to use.
> + */
> +u32 resctrl_arch_get_num_closid(struct rdt_resource *ignored)
> +{
> + return mpam_partid_max + 1;
> +}
> +
> +struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l)
> +{
> + if (l >= RDT_NUM_RESOURCES)
> + return NULL;
> +
> + return &mpam_resctrl_controls[l].resctrl_res;
> +}
> +
> +static int mpam_resctrl_control_init(struct mpam_resctrl_res *res)
> +{
> + /* TODO: initialise the resctrl resources */
> +
> + return 0;
> +}
> +
> +static int mpam_resctrl_pick_domain_id(int cpu, struct mpam_component *comp)
> +{
> + struct mpam_class *class = comp->class;
> +
> + if (class->type == MPAM_CLASS_CACHE)
> + return comp->comp_id;
> +
> + /* TODO: repaint domain ids to match the L3 domain ids */
> + /* Otherwise, expose the ID used by the firmware table code. */
> + return comp->comp_id;
> +}
> +
> +static void mpam_resctrl_domain_hdr_init(int cpu, struct mpam_component *comp,
> + enum resctrl_res_level rid,
> + struct rdt_domain_hdr *hdr)
> +{
> + lockdep_assert_cpus_held();
> +
> + INIT_LIST_HEAD(&hdr->list);
> + hdr->id = mpam_resctrl_pick_domain_id(cpu, comp);
> + hdr->rid = rid;
> + cpumask_set_cpu(cpu, &hdr->cpu_mask);
> +}
> +
> +static void mpam_resctrl_online_domain_hdr(unsigned int cpu,
> + struct rdt_domain_hdr *hdr)
> +{
> + lockdep_assert_cpus_held();
> +
> + cpumask_set_cpu(cpu, &hdr->cpu_mask);
> +}
> +
> +/**
> + * mpam_resctrl_offline_domain_hdr() - Update the domain header to remove a CPU.
> + * @cpu: The CPU to remove from the domain.
> + * @hdr: The domain's header.
> + *
> + * Removes @cpu from the header mask. If this was the last CPU in the domain,
^^^
is
> + * the domain header is removed from its parent list and true is returned,
> + * indicating the parent structure can be freed.
> + * If there are other CPUs in the domain, returns false.
> + */
> +static bool mpam_resctrl_offline_domain_hdr(unsigned int cpu,
> + struct rdt_domain_hdr *hdr)
> +{
> + lockdep_assert_held(&domain_list_lock);
> +
> + cpumask_clear_cpu(cpu, &hdr->cpu_mask);
> + if (cpumask_empty(&hdr->cpu_mask)) {
> + list_del_rcu(&hdr->list);
> + synchronize_rcu();
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static void mpam_resctrl_domain_insert(struct list_head *list,
> + struct rdt_domain_hdr *new)
> +{
> + struct rdt_domain_hdr *err;
> + struct list_head *pos = NULL;
> +
> + lockdep_assert_held(&domain_list_lock);
> +
> + err = resctrl_find_domain(list, new->id, &pos);
> + if (WARN_ON_ONCE(err))
> + return;
> +
> + list_add_tail_rcu(&new->list, pos);
> +}
> +
> +static struct mpam_resctrl_dom *
> +mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
> +{
> + int err;
> + struct mpam_resctrl_dom *dom;
> + struct rdt_ctrl_domain *ctrl_d;
> + struct mpam_class *class = res->class;
> + struct mpam_component *comp_iter, *ctrl_comp;
> + struct rdt_resource *r = &res->resctrl_res;
> +
> + lockdep_assert_held(&domain_list_lock);
> +
> + ctrl_comp = NULL;
> + guard(srcu)(&mpam_srcu);
> + list_for_each_entry_srcu(comp_iter, &class->components, class_list,
> + srcu_read_lock_held(&mpam_srcu)) {
> + if (cpumask_test_cpu(cpu, &comp_iter->affinity)) {
> + ctrl_comp = comp_iter;
> + break;
> + }
> + }
> +
> + /* class has no component for this CPU */
> + if (WARN_ON_ONCE(!ctrl_comp))
> + return ERR_PTR(-EINVAL);
> +
> + dom = kzalloc_node(sizeof(*dom), GFP_KERNEL, cpu_to_node(cpu));
> + if (!dom)
> + return ERR_PTR(-ENOMEM);
> +
> + if (resctrl_arch_alloc_capable()) {
> + dom->ctrl_comp = ctrl_comp;
> +
> + ctrl_d = &dom->resctrl_ctrl_dom;
> + mpam_resctrl_domain_hdr_init(cpu, ctrl_comp, r->rid, &ctrl_d->hdr);
> + ctrl_d->hdr.type = RESCTRL_CTRL_DOMAIN;
> + err = resctrl_online_ctrl_domain(r, ctrl_d);
> + if (err)
> + goto free_domain;
> +
> + mpam_resctrl_domain_insert(&r->ctrl_domains, &ctrl_d->hdr);
> + } else {
> + pr_debug("Skipped control domain online - no controls\n");
> + }
> + return dom;
> +
> +offline_ctrl_domain:
> + if (resctrl_arch_alloc_capable()) {
> + mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
> + resctrl_offline_ctrl_domain(r, ctrl_d);
> + }
> +free_domain:
> + kfree(dom);
> + dom = ERR_PTR(err);
> +
> + return dom;
> +}
> +
> +static struct mpam_resctrl_dom *
> +mpam_resctrl_get_domain_from_cpu(int cpu, struct mpam_resctrl_res *res)
> +{
> + struct mpam_resctrl_dom *dom;
> + struct rdt_resource *r = &res->resctrl_res;
> +
> + lockdep_assert_cpus_held();
> +
> + list_for_each_entry_rcu(dom, &r->ctrl_domains, resctrl_ctrl_dom.hdr.list) {
> + if (cpumask_test_cpu(cpu, &dom->ctrl_comp->affinity))
> + return dom;
> + }
> +
> + return NULL;
> +}
> +
> +int mpam_resctrl_online_cpu(unsigned int cpu)
> +{
> + struct mpam_resctrl_res *res;
> + enum resctrl_res_level rid;
> +
> + guard(mutex)(&domain_list_lock);
> + for_each_mpam_resctrl_control(res, rid) {
> + struct mpam_resctrl_dom *dom;
> +
> + if (!res->class)
> + continue; // dummy_resource;
> +
> + dom = mpam_resctrl_get_domain_from_cpu(cpu, res);
> + if (!dom) {
> + dom = mpam_resctrl_alloc_domain(cpu, res);
> + } else {
> + if (resctrl_arch_alloc_capable()) {
> + struct rdt_ctrl_domain *ctrl_d = &dom->resctrl_ctrl_dom;
> +
> + mpam_resctrl_online_domain_hdr(cpu, &ctrl_d->hdr);
> + }
> + }
> + if (IS_ERR(dom))
> + return PTR_ERR(dom);
> + }
> +
> + resctrl_online_cpu(cpu);
> +
> + return 0;
> +}
> +
> +void mpam_resctrl_offline_cpu(unsigned int cpu)
> +{
> + struct mpam_resctrl_res *res;
> + enum resctrl_res_level rid;
> +
> + resctrl_offline_cpu(cpu);
> +
> + guard(mutex)(&domain_list_lock);
> + for_each_mpam_resctrl_control(res, rid) {
> + struct mpam_resctrl_dom *dom;
> + struct rdt_ctrl_domain *ctrl_d;
> + bool ctrl_dom_empty;
> +
> + if (!res->class)
> + continue; // dummy resource
> +
> + dom = mpam_resctrl_get_domain_from_cpu(cpu, res);
> + if (WARN_ON_ONCE(!dom))
> + continue;
> +
> + if (resctrl_arch_alloc_capable()) {
> + ctrl_d = &dom->resctrl_ctrl_dom;
> + ctrl_dom_empty = mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
> + if (ctrl_dom_empty)
> + resctrl_offline_ctrl_domain(&res->resctrl_res, ctrl_d);
> + } else {
> + ctrl_dom_empty = true;
> + }
> +
> + if (ctrl_dom_empty)
> + kfree(dom);
> + }
> +}
> +
> +int mpam_resctrl_setup(void)
> +{
> + int err = 0;
> + struct mpam_resctrl_res *res;
> + enum resctrl_res_level rid;
> +
> + cpus_read_lock();
> + for_each_mpam_resctrl_control(res, rid) {
> + INIT_LIST_HEAD_RCU(&res->resctrl_res.ctrl_domains);
> + res->resctrl_res.rid = rid;
> + }
> +
> + /* TODO: pick MPAM classes to map to resctrl resources */
> +
> + /* Initialise the resctrl structures from the classes */
> + for_each_mpam_resctrl_control(res, rid) {
> + if (!res->class)
> + continue; // dummy resource
> +
> + err = mpam_resctrl_control_init(res);
> + if (err) {
> + pr_debug("Failed to initialise rid %u\n", rid);
> + break;
> + }
> + }
> + cpus_read_unlock();
> +
> + if (err) {
> + pr_debug("Internal error %d - resctrl not supported\n", err);
> + return err;
> + }
> +
This pr_debug() could be dropped since @err is set only when an error
is returned from mpam_resctrl_control_init(), and we already have a
pr_debug() for the reported error.
> + if (!resctrl_arch_alloc_capable()) {
> + pr_debug("No alloc(%u) found - resctrl not supported\n",
> + resctrl_arch_alloc_capable());
> + return -EOPNOTSUPP;
> + }
> +
> + /* TODO: call resctrl_init() */
> +
> + return 0;
> +}
> diff --git a/include/linux/arm_mpam.h b/include/linux/arm_mpam.h
> index 7f00c5285a32..2c7d1413a401 100644
> --- a/include/linux/arm_mpam.h
> +++ b/include/linux/arm_mpam.h
> @@ -49,6 +49,9 @@ static inline int mpam_ris_create(struct mpam_msc *msc, u8 ris_idx,
> }
> #endif
>
> +bool resctrl_arch_alloc_capable(void);
> +bool resctrl_arch_mon_capable(void);
> +
> /**
> * mpam_register_requestor() - Register a requestor with the MPAM driver
> * @partid_max: The maximum PARTID value the requestor can generate.
Thanks,
Gavin
^ permalink raw reply	[flat|nested] 75+ messages in thread
* Re: [PATCH v5 13/41] arm_mpam: resctrl: Add boilerplate cpuhp and domain allocation
2026-03-10 6:17 ` Gavin Shan
@ 2026-03-10 10:34 ` Ben Horgan
0 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-03-10 10:34 UTC (permalink / raw)
To: Gavin Shan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
Hi Gavin,
On 3/10/26 06:17, Gavin Shan wrote:
> Hi Ben,
>
> On 2/25/26 3:56 AM, Ben Horgan wrote:
>> From: James Morse <james.morse@arm.com>
>>
>> resctrl has its own data structures to describe its resources. We can't
>> use these directly as we play tricks with the 'MBA' resource, picking
>> the MPAM controls or monitors that best apply. We may export the same
>> component as both L3 and MBA.
>>
>> Add mpam_resctrl_res[] as the array of class->resctrl mappings we are
>> exporting, and add the cpuhp hooks that allocate and free the resctrl
>> domain structures. Only the MPAM control features are considered here;
>> monitor support will be added later.
>>
>> While we're here, plumb in a few other obvious things.
>>
>> CONFIG_ARM_CPU_RESCTRL is used to allow this code to be built even though
>> it can't yet be linked against resctrl.
>>
>
> CONFIG_ARM_CPU_RESCTRL isn't valid. I guess you probably mean
> CONFIG_ARCH_HAS_CPU_RESCTRL?
CONFIG_ARM_CPU_RESCTRL is added in the Makefile in this patch. As
stated, this is just to allow build testing and the link will fail.
E.g.
$ CONFIG_ARM_CPU_RESCTRL=y make O=out drivers/resctrl/
Trying this now, I note there is also a build warning about the
'offline_ctrl_domain' label being unused, which I'll fix.
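For readers following along: the warning comes from a label that no goto
references (the patch keeps 'offline_ctrl_domain' for a later error path).
A minimal sketch of the unwind-ladder pattern, with invented names rather
than the driver's code, where every label has a user and so compiles
without -Wunused-label:

```c
#include <stdlib.h>

/*
 * Illustrative sketch only - invented names, not the driver code.
 * A label with no goto referencing it triggers -Wunused-label; the usual
 * options are to delete the label or introduce it together with its
 * first user. Here every label in the unwind ladder is referenced.
 */
static int fake_online(int should_fail)
{
	return should_fail ? -22 : 0;	/* -EINVAL on request */
}

int sketch_alloc_domain(int should_fail)
{
	int err;
	char *dom = calloc(1, 64);

	if (!dom)
		return -12;		/* -ENOMEM */

	err = fake_online(should_fail);
	if (err)
		goto free_domain;	/* label is referenced: no warning */

	free(dom);		/* stand-in for handing dom to the caller */
	return 0;

free_domain:
	free(dom);
	return err;
}
```

The same shape applies in the patch once a second failure point needs the
'offline_ctrl_domain' target; until then, dropping the label is the fix.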
>
>> Tested-by: Gavin Shan <gshan@redhat.com>
>> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>> Tested-by: Peter Newman <peternewman@google.com>
>> Tested-by: Zeng Heng <zengheng4@huawei.com>
>> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
>> Signed-off-by: James Morse <james.morse@arm.com>
>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>> ---
>> Changes since rfc:
>> Domain list is an rcu list
>> Add synchronize_rcu() to free the deleted element
>> Code flow simplification (Jonathan)
>>
>> Changes since v2:
>> Iterate over mpam_resctrl_dom directly (Jonathan)
>> Code flow clarification
>> Comment tidying
>> Remove power of 2 check as no longer creates holes in rmid indices
>> Remove unused type argument
>> add macro helper for_each_mpam_resctrl_control
>>
>> Changes since v3:
>> Add and use mpam_resctrl_online_domain_hdr()
>> mpam_resctrl_alloc_domain() error paths (Reinette)
>> rebase on x86/cache changes rdt_mon_domain becomes rdt_l3_mon_domain
>> etc
>>
>> Changes since v4:
>> Set rid in domain_hdr
>> Use rescctrl_res.alloc_capable to determine if alloc_capable as the
>> decision may depend on the resctrl mount options (cdp)
>> Squash in arm_mpam: resctrl: Sort the order of the domain lists
>> Move out monitor/counter changes to a separate patch
>> Commit message update
>> ---
>> drivers/resctrl/Makefile | 1 +
>> drivers/resctrl/mpam_devices.c | 12 ++
>> drivers/resctrl/mpam_internal.h | 21 ++
>> drivers/resctrl/mpam_resctrl.c | 327 ++++++++++++++++++++++++++++++++
>> include/linux/arm_mpam.h | 3 +
>> 5 files changed, 364 insertions(+)
>> create mode 100644 drivers/resctrl/mpam_resctrl.c
>>
>> diff --git a/drivers/resctrl/Makefile b/drivers/resctrl/Makefile
>> index 898199dcf80d..40beaf999582 100644
>> --- a/drivers/resctrl/Makefile
>> +++ b/drivers/resctrl/Makefile
>> @@ -1,4 +1,5 @@
>> obj-$(CONFIG_ARM64_MPAM_DRIVER) += mpam.o
>> mpam-y += mpam_devices.o
>> +mpam-$(CONFIG_ARM_CPU_RESCTRL) += mpam_resctrl.o
Here.
>> ccflags-$(CONFIG_ARM64_MPAM_DRIVER_DEBUG) += -DDEBUG
>> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
>> index b400a7381d9a..b45743c5fb46 100644
>> --- a/drivers/resctrl/mpam_devices.c
>> +++ b/drivers/resctrl/mpam_devices.c
>> @@ -1628,6 +1628,9 @@ static int mpam_cpu_online(unsigned int cpu)
>> mpam_reprogram_msc(msc);
>> }
>> + if (mpam_is_enabled())
>> + return mpam_resctrl_online_cpu(cpu);
>> +
>> return 0;
>> }
>> @@ -1671,6 +1674,9 @@ static int mpam_cpu_offline(unsigned int cpu)
>> {
>> struct mpam_msc *msc;
>> + if (mpam_is_enabled())
>> + mpam_resctrl_offline_cpu(cpu);
>> +
>> guard(srcu)(&mpam_srcu);
>> list_for_each_entry_srcu(msc, &mpam_all_msc, all_msc_list,
>> srcu_read_lock_held(&mpam_srcu)) {
>> @@ -2516,6 +2522,12 @@ static void mpam_enable_once(void)
>> mutex_unlock(&mpam_list_lock);
>> cpus_read_unlock();
>> + if (!err) {
>> + err = mpam_resctrl_setup();
>> + if (err)
>> + pr_err("Failed to initialise resctrl: %d\n", err);
>> + }
>> +
>> if (err) {
>> mpam_disable_reason = "Failed to enable.";
>> schedule_work(&mpam_broken_work);
>> diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
>> index 4632985bcca6..28ac501e1ac3 100644
>> --- a/drivers/resctrl/mpam_internal.h
>> +++ b/drivers/resctrl/mpam_internal.h
>> @@ -12,6 +12,7 @@
>> #include <linux/jump_label.h>
>> #include <linux/llist.h>
>> #include <linux/mutex.h>
>> +#include <linux/resctrl.h>
>> #include <linux/spinlock.h>
>> #include <linux/srcu.h>
>> #include <linux/types.h>
>> @@ -337,6 +338,16 @@ struct mpam_msc_ris {
>> struct mpam_garbage garbage;
>> };
>> +struct mpam_resctrl_dom {
>> + struct mpam_component *ctrl_comp;
>> + struct rdt_ctrl_domain resctrl_ctrl_dom;
>> +};
>> +
>> +struct mpam_resctrl_res {
>> + struct mpam_class *class;
>> + struct rdt_resource resctrl_res;
>> +};
>> +
>> static inline int mpam_alloc_csu_mon(struct mpam_class *class)
>> {
>> struct mpam_props *cprops = &class->props;
>> @@ -391,6 +402,16 @@ void mpam_msmon_reset_mbwu(struct mpam_component *comp, struct mon_cfg *ctx);
>> int mpam_get_cpumask_from_cache_id(unsigned long cache_id, u32 cache_level,
>> cpumask_t *affinity);
>> +#ifdef CONFIG_RESCTRL_FS
>> +int mpam_resctrl_setup(void);
>> +int mpam_resctrl_online_cpu(unsigned int cpu);
>> +void mpam_resctrl_offline_cpu(unsigned int cpu);
>> +#else
>> +static inline int mpam_resctrl_setup(void) { return 0; }
>> +static inline int mpam_resctrl_online_cpu(unsigned int cpu) { return 0; }
>> +static inline void mpam_resctrl_offline_cpu(unsigned int cpu) { }
>> +#endif /* CONFIG_RESCTRL_FS */
>> +
>> /*
>> * MPAM MSCs have the following register layout. See:
>> * Arm Memory System Resource Partitioning and Monitoring (MPAM) System
>> diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
>> new file mode 100644
>> index 000000000000..2ffba7a15d6a
>> --- /dev/null
>> +++ b/drivers/resctrl/mpam_resctrl.c
>> @@ -0,0 +1,327 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +// Copyright (C) 2025 Arm Ltd.
>> +
>> +#define pr_fmt(fmt) "%s:%s: " fmt, KBUILD_MODNAME, __func__
>> +
>> +#include <linux/arm_mpam.h>
>> +#include <linux/cacheinfo.h>
>> +#include <linux/cpu.h>
>> +#include <linux/cpumask.h>
>> +#include <linux/errno.h>
>> +#include <linux/list.h>
>> +#include <linux/printk.h>
>> +#include <linux/rculist.h>
>> +#include <linux/resctrl.h>
>> +#include <linux/slab.h>
>> +#include <linux/types.h>
>> +
>> +#include <asm/mpam.h>
>> +
>> +#include "mpam_internal.h"
>> +
>
> The list of included headers could be simplified since several of them
> are already included in mpam_internal.h, for example <linux/arm_mpam.h>,
> <linux/cpumask.h>, <asm/mpam.h> and others.
It could be, but it is a deliberate choice to, where practical, include
the header where the symbols are defined rather than relying on the
indirect includes.
>
>> +/*
>> + * The classes we've picked to map to resctrl resources, wrapped
>> + * in with their resctrl structure.
>> + * Class pointer may be NULL.
>> + */
>> +static struct mpam_resctrl_res mpam_resctrl_controls[RDT_NUM_RESOURCES];
>> +
>> +#define for_each_mpam_resctrl_control(res, rid) \
>> + for (rid = 0, res = &mpam_resctrl_controls[rid]; \
>> + rid < RDT_NUM_RESOURCES; \
>> + rid++, res = &mpam_resctrl_controls[rid])
>> +
>> +/* The lock for modifying resctrl's domain lists from cpuhp callbacks. */
>> +static DEFINE_MUTEX(domain_list_lock);
>> +
>> +bool resctrl_arch_alloc_capable(void)
>> +{
>> + struct mpam_resctrl_res *res;
>> + enum resctrl_res_level rid;
>> +
>> + for_each_mpam_resctrl_control(res, rid) {
>> + if (res->resctrl_res.alloc_capable)
>> + return true;
>> + }
>> +
>> + return false;
>> +}
>> +
>> +/*
>> + * MSC may raise an error interrupt if it sees an out of range partid/pmg,
>> + * and go on to truncate the value. Regardless of what the hardware supports,
>> + * only the system wide safe value is safe to use.
>> + */
>> +u32 resctrl_arch_get_num_closid(struct rdt_resource *ignored)
>> +{
>> + return mpam_partid_max + 1;
>> +}
>> +
>> +struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l)
>> +{
>> + if (l >= RDT_NUM_RESOURCES)
>> + return NULL;
>> +
>> + return &mpam_resctrl_controls[l].resctrl_res;
>> +}
>> +
>> +static int mpam_resctrl_control_init(struct mpam_resctrl_res *res)
>> +{
>> + /* TODO: initialise the resctrl resources */
>> +
>> + return 0;
>> +}
>> +
>> +static int mpam_resctrl_pick_domain_id(int cpu, struct mpam_component *comp)
>> +{
>> + struct mpam_class *class = comp->class;
>> +
>> + if (class->type == MPAM_CLASS_CACHE)
>> + return comp->comp_id;
>> +
>> + /* TODO: repaint domain ids to match the L3 domain ids */
>> + /* Otherwise, expose the ID used by the firmware table code. */
>> + return comp->comp_id;
>> +}
>> +
>> +static void mpam_resctrl_domain_hdr_init(int cpu, struct mpam_component *comp,
>> + enum resctrl_res_level rid,
>> + struct rdt_domain_hdr *hdr)
>> +{
>> + lockdep_assert_cpus_held();
>> +
>> + INIT_LIST_HEAD(&hdr->list);
>> + hdr->id = mpam_resctrl_pick_domain_id(cpu, comp);
>> + hdr->rid = rid;
>> + cpumask_set_cpu(cpu, &hdr->cpu_mask);
>> +}
>> +
>> +static void mpam_resctrl_online_domain_hdr(unsigned int cpu,
>> + struct rdt_domain_hdr *hdr)
>> +{
>> + lockdep_assert_cpus_held();
>> +
>> + cpumask_set_cpu(cpu, &hdr->cpu_mask);
>> +}
>> +
>> +/**
>> + * mpam_resctrl_offline_domain_hdr() - Update the domain header to remove a CPU.
>> + * @cpu: The CPU to remove from the domain.
>> + * @hdr: The domain's header.
>> + *
>> + * Removes @cpu from the header mask. If this was the last CPU in the domain,
> ^^^
> is
>
>> + * the domain header is removed from its parent list and true is returned,
>> + * indicating the parent structure can be freed.
>> + * If there are other CPUs in the domain, returns false.
>> + */
>> +static bool mpam_resctrl_offline_domain_hdr(unsigned int cpu,
>> + struct rdt_domain_hdr *hdr)
>> +{
>> + lockdep_assert_held(&domain_list_lock);
>> +
>> + cpumask_clear_cpu(cpu, &hdr->cpu_mask);
>> + if (cpumask_empty(&hdr->cpu_mask)) {
>> + list_del_rcu(&hdr->list);
>> + synchronize_rcu();
>> + return true;
>> + }
>> +
>> + return false;
>> +}
>> +
>> +static void mpam_resctrl_domain_insert(struct list_head *list,
>> + struct rdt_domain_hdr *new)
>> +{
>> + struct rdt_domain_hdr *err;
>> + struct list_head *pos = NULL;
>> +
>> + lockdep_assert_held(&domain_list_lock);
>> +
>> + err = resctrl_find_domain(list, new->id, &pos);
>> + if (WARN_ON_ONCE(err))
>> + return;
>> +
>> + list_add_tail_rcu(&new->list, pos);
>> +}
>> +
>> +static struct mpam_resctrl_dom *
>> +mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
>> +{
>> + int err;
>> + struct mpam_resctrl_dom *dom;
>> + struct rdt_ctrl_domain *ctrl_d;
>> + struct mpam_class *class = res->class;
>> + struct mpam_component *comp_iter, *ctrl_comp;
>> + struct rdt_resource *r = &res->resctrl_res;
>> +
>> + lockdep_assert_held(&domain_list_lock);
>> +
>> + ctrl_comp = NULL;
>> + guard(srcu)(&mpam_srcu);
>> + list_for_each_entry_srcu(comp_iter, &class->components, class_list,
>> + srcu_read_lock_held(&mpam_srcu)) {
>> + if (cpumask_test_cpu(cpu, &comp_iter->affinity)) {
>> + ctrl_comp = comp_iter;
>> + break;
>> + }
>> + }
>> +
>> + /* class has no component for this CPU */
>> + if (WARN_ON_ONCE(!ctrl_comp))
>> + return ERR_PTR(-EINVAL);
>> +
>> + dom = kzalloc_node(sizeof(*dom), GFP_KERNEL, cpu_to_node(cpu));
>> + if (!dom)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + if (resctrl_arch_alloc_capable()) {
>> + dom->ctrl_comp = ctrl_comp;
>> +
>> + ctrl_d = &dom->resctrl_ctrl_dom;
>> + mpam_resctrl_domain_hdr_init(cpu, ctrl_comp, r->rid, &ctrl_d->hdr);
>> + ctrl_d->hdr.type = RESCTRL_CTRL_DOMAIN;
>> + err = resctrl_online_ctrl_domain(r, ctrl_d);
>> + if (err)
>> + goto free_domain;
>> +
>> + mpam_resctrl_domain_insert(&r->ctrl_domains, &ctrl_d->hdr);
>> + } else {
>> + pr_debug("Skipped control domain online - no controls\n");
>> + }
>> + return dom;
>> +
>> +offline_ctrl_domain:
>> + if (resctrl_arch_alloc_capable()) {
>> + mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
>> + resctrl_offline_ctrl_domain(r, ctrl_d);
>> + }
>> +free_domain:
>> + kfree(dom);
>> + dom = ERR_PTR(err);
>> +
>> + return dom;
>> +}
>> +
>> +static struct mpam_resctrl_dom *
>> +mpam_resctrl_get_domain_from_cpu(int cpu, struct mpam_resctrl_res *res)
>> +{
>> + struct mpam_resctrl_dom *dom;
>> + struct rdt_resource *r = &res->resctrl_res;
>> +
>> + lockdep_assert_cpus_held();
>> +
>> + list_for_each_entry_rcu(dom, &r->ctrl_domains, resctrl_ctrl_dom.hdr.list) {
>> + if (cpumask_test_cpu(cpu, &dom->ctrl_comp->affinity))
>> + return dom;
>> + }
>> +
>> + return NULL;
>> +}
>> +
>> +int mpam_resctrl_online_cpu(unsigned int cpu)
>> +{
>> + struct mpam_resctrl_res *res;
>> + enum resctrl_res_level rid;
>> +
>> + guard(mutex)(&domain_list_lock);
>> + for_each_mpam_resctrl_control(res, rid) {
>> + struct mpam_resctrl_dom *dom;
>> +
>> + if (!res->class)
>> + continue; // dummy_resource;
>> +
>> + dom = mpam_resctrl_get_domain_from_cpu(cpu, res);
>> + if (!dom) {
>> + dom = mpam_resctrl_alloc_domain(cpu, res);
>> + } else {
>> + if (resctrl_arch_alloc_capable()) {
>> + struct rdt_ctrl_domain *ctrl_d = &dom->resctrl_ctrl_dom;
>> +
>> + mpam_resctrl_online_domain_hdr(cpu, &ctrl_d->hdr);
>> + }
>> + }
>> + if (IS_ERR(dom))
>> + return PTR_ERR(dom);
>> + }
>> +
>> + resctrl_online_cpu(cpu);
>> +
>> + return 0;
>> +}
>> +
>> +void mpam_resctrl_offline_cpu(unsigned int cpu)
>> +{
>> + struct mpam_resctrl_res *res;
>> + enum resctrl_res_level rid;
>> +
>> + resctrl_offline_cpu(cpu);
>> +
>> + guard(mutex)(&domain_list_lock);
>> + for_each_mpam_resctrl_control(res, rid) {
>> + struct mpam_resctrl_dom *dom;
>> + struct rdt_ctrl_domain *ctrl_d;
>> + bool ctrl_dom_empty;
>> +
>> + if (!res->class)
>> + continue; // dummy resource
>> +
>> + dom = mpam_resctrl_get_domain_from_cpu(cpu, res);
>> + if (WARN_ON_ONCE(!dom))
>> + continue;
>> +
>> + if (resctrl_arch_alloc_capable()) {
>> + ctrl_d = &dom->resctrl_ctrl_dom;
>> + ctrl_dom_empty = mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
>> + if (ctrl_dom_empty)
>> + resctrl_offline_ctrl_domain(&res->resctrl_res, ctrl_d);
>> + } else {
>> + ctrl_dom_empty = true;
>> + }
>> +
>> + if (ctrl_dom_empty)
>> + kfree(dom);
>> + }
>> +}
>> +
>> +int mpam_resctrl_setup(void)
>> +{
>> + int err = 0;
>> + struct mpam_resctrl_res *res;
>> + enum resctrl_res_level rid;
>> +
>> + cpus_read_lock();
>> + for_each_mpam_resctrl_control(res, rid) {
>> + INIT_LIST_HEAD_RCU(&res->resctrl_res.ctrl_domains);
>> + res->resctrl_res.rid = rid;
>> + }
>> +
>> + /* TODO: pick MPAM classes to map to resctrl resources */
>> +
>> + /* Initialise the resctrl structures from the classes */
>> + for_each_mpam_resctrl_control(res, rid) {
>> + if (!res->class)
>> + continue; // dummy resource
>> +
>> + err = mpam_resctrl_control_init(res);
>> + if (err) {
>> + pr_debug("Failed to initialise rid %u\n", rid);
>> + break;
>> + }
>> + }
>> + cpus_read_unlock();
>> +
>> + if (err) {
>> + pr_debug("Internal error %d - resctrl not supported\n", err);
>> + return err;
>> + }
>> +
>
> This pr_debug() could be dropped since @err is set only when an error
> is returned from mpam_resctrl_control_init(), and we already have a
> pr_debug() for the reported error.
Later in the series this can also be an error from
mpam_resctrl_monitor_init(), which has an associated pr_debug() of its
own, but to me it seems reasonable to keep this extra message to make
the severity clearer.
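As a sketch of the two-level reporting discussed here (all names below
are invented, not the driver's): each step logs its own detail at debug
level, and the top level adds one summary so the overall failure is
visible on its own.

```c
#include <stdio.h>

/*
 * Illustrative sketch only - invented names, not the driver code.
 * Per-step detail goes to a debug-level message; the caller emits a
 * single higher-severity summary, so the outcome is clear even when
 * debug output is disabled.
 */
#define sketch_pr_debug(...)	fprintf(stderr, "debug: " __VA_ARGS__)
#define sketch_pr_err(...)	fprintf(stderr, "err: " __VA_ARGS__)

static int sketch_control_init(int rid)
{
	return rid == 2 ? -22 : 0;	/* pretend rid 2 fails: -EINVAL */
}

int sketch_setup(int nr_rids)
{
	int rid, err = 0;

	for (rid = 0; rid < nr_rids; rid++) {
		err = sketch_control_init(rid);
		if (err) {
			/* per-step detail: which rid failed */
			sketch_pr_debug("Failed to initialise rid %d\n", rid);
			break;
		}
	}

	if (err) {
		/* summary: severity is clear without the debug output */
		sketch_pr_err("Internal error %d - resctrl not supported\n", err);
		return err;
	}

	return 0;
}
```

The debug line alone would name the failing step but not the consequence;
the summary alone would hide which step failed, which is the trade-off
being weighed above.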
Thanks,
Ben
^ permalink raw reply [flat|nested] 75+ messages in thread
* [PATCH v5 14/41] arm_mpam: resctrl: Pick the caches we will use as resctrl resources
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (12 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 13/41] arm_mpam: resctrl: Add boilerplate cpuhp and domain allocation Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 15/41] arm_mpam: resctrl: Implement resctrl_arch_reset_all_ctrls() Ben Horgan
` (29 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
Systems with MPAM support may have a variety of control types at any point
of their system layout. We can only expose certain types of control, and
only if they exist at particular locations.
Start with the well-known caches. These have to be depth 2 or 3 and support
MPAM's cache portion bitmap controls, with a number of portions fewer than
resctrl's limit.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
Jonathan:
Remove brackets
Compress debug message
Use temp var, r
Changes since v2:
Return -EINVAL in mpam_resctrl_control_init() for unknown rid
Changes since v4:
Set alloc_capable after other settings (Reinette)
---
drivers/resctrl/mpam_resctrl.c | 89 +++++++++++++++++++++++++++++++++-
1 file changed, 87 insertions(+), 2 deletions(-)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 2ffba7a15d6a..fe566e39aa4d 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -65,9 +65,93 @@ struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l)
return &mpam_resctrl_controls[l].resctrl_res;
}
+static bool cache_has_usable_cpor(struct mpam_class *class)
+{
+ struct mpam_props *cprops = &class->props;
+
+ if (!mpam_has_feature(mpam_feat_cpor_part, cprops))
+ return false;
+
+ /* resctrl uses u32 for all bitmap configurations */
+ return class->props.cpbm_wd <= 32;
+}
+
+/* Test whether we can export MPAM_CLASS_CACHE:{2,3}? */
+static void mpam_resctrl_pick_caches(void)
+{
+ struct mpam_class *class;
+ struct mpam_resctrl_res *res;
+
+ lockdep_assert_cpus_held();
+
+ guard(srcu)(&mpam_srcu);
+ list_for_each_entry_srcu(class, &mpam_classes, classes_list,
+ srcu_read_lock_held(&mpam_srcu)) {
+ if (class->type != MPAM_CLASS_CACHE) {
+ pr_debug("class %u is not a cache\n", class->level);
+ continue;
+ }
+
+ if (class->level != 2 && class->level != 3) {
+ pr_debug("class %u is not L2 or L3\n", class->level);
+ continue;
+ }
+
+ if (!cache_has_usable_cpor(class)) {
+ pr_debug("class %u cache misses CPOR\n", class->level);
+ continue;
+ }
+
+ if (!cpumask_equal(&class->affinity, cpu_possible_mask)) {
+ pr_debug("class %u has missing CPUs, mask %*pb != %*pb\n", class->level,
+ cpumask_pr_args(&class->affinity),
+ cpumask_pr_args(cpu_possible_mask));
+ continue;
+ }
+
+ if (class->level == 2)
+ res = &mpam_resctrl_controls[RDT_RESOURCE_L2];
+ else
+ res = &mpam_resctrl_controls[RDT_RESOURCE_L3];
+ res->class = class;
+ }
+}
+
static int mpam_resctrl_control_init(struct mpam_resctrl_res *res)
{
- /* TODO: initialise the resctrl resources */
+ struct mpam_class *class = res->class;
+ struct rdt_resource *r = &res->resctrl_res;
+
+ switch (r->rid) {
+ case RDT_RESOURCE_L2:
+ case RDT_RESOURCE_L3:
+ r->schema_fmt = RESCTRL_SCHEMA_BITMAP;
+ r->cache.arch_has_sparse_bitmasks = true;
+
+ r->cache.cbm_len = class->props.cpbm_wd;
+ /* mpam_devices will reject empty bitmaps */
+ r->cache.min_cbm_bits = 1;
+
+ if (r->rid == RDT_RESOURCE_L2) {
+ r->name = "L2";
+ r->ctrl_scope = RESCTRL_L2_CACHE;
+ } else {
+ r->name = "L3";
+ r->ctrl_scope = RESCTRL_L3_CACHE;
+ }
+
+ /*
+ * Which bits are shared with other ...things...
+ * Unknown devices use partid-0 which uses all the bitmap
+ * fields. Until we have configured the SMMU and GIC not to do this,
+ * 'all the bits' is the correct answer here.
+ */
+ r->cache.shareable_bits = resctrl_get_default_ctrl(r);
+ r->alloc_capable = true;
+ break;
+ default:
+ return -EINVAL;
+ }
return 0;
}
@@ -295,7 +379,8 @@ int mpam_resctrl_setup(void)
res->resctrl_res.rid = rid;
}
- /* TODO: pick MPAM classes to map to resctrl resources */
+ /* Find some classes to use for controls */
+ mpam_resctrl_pick_caches();
/* Initialise the resctrl structures from the classes */
for_each_mpam_resctrl_control(res, rid) {
--
2.43.0
^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [PATCH v5 15/41] arm_mpam: resctrl: Implement resctrl_arch_reset_all_ctrls()
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (13 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 14/41] arm_mpam: resctrl: Pick the caches we will use as resctrl resources Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-25 11:03 ` Jonathan Cameron
2026-02-24 17:56 ` [PATCH v5 16/41] arm_mpam: resctrl: Add resctrl_arch_get_config() Ben Horgan
` (28 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
We already have a helper for resetting an mpam class and component. Hook
it up to resctrl_arch_reset_all_ctrls() and the domain offline path.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Zeng Heng <zengheng4@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Don't expose unlocked reset
Changes since v3:
Don't use or expose mpam_reset_component_locked()
---
drivers/resctrl/mpam_devices.c | 2 +-
drivers/resctrl/mpam_internal.h | 3 +++
drivers/resctrl/mpam_resctrl.c | 13 +++++++++++++
3 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index b45743c5fb46..e4a302a53991 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -2567,7 +2567,7 @@ static void mpam_reset_component_locked(struct mpam_component *comp)
}
}
-static void mpam_reset_class_locked(struct mpam_class *class)
+void mpam_reset_class_locked(struct mpam_class *class)
{
struct mpam_component *comp;
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 28ac501e1ac3..e2704f678af5 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -392,6 +392,9 @@ extern u8 mpam_pmg_max;
void mpam_enable(struct work_struct *work);
void mpam_disable(struct work_struct *work);
+/* Reset all the RIS in a class under cpus_read_lock() */
+void mpam_reset_class_locked(struct mpam_class *class);
+
int mpam_apply_config(struct mpam_component *comp, u16 partid,
struct mpam_config *cfg);
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index fe566e39aa4d..5f482c6293e7 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -168,6 +168,19 @@ static int mpam_resctrl_pick_domain_id(int cpu, struct mpam_component *comp)
return comp->comp_id;
}
+void resctrl_arch_reset_all_ctrls(struct rdt_resource *r)
+{
+ struct mpam_resctrl_res *res;
+
+ lockdep_assert_cpus_held();
+
+ if (!mpam_is_enabled())
+ return;
+
+ res = container_of(r, struct mpam_resctrl_res, resctrl_res);
+ mpam_reset_class_locked(res->class);
+}
+
static void mpam_resctrl_domain_hdr_init(int cpu, struct mpam_component *comp,
enum resctrl_res_level rid,
struct rdt_domain_hdr *hdr)
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread

* Re: [PATCH v5 15/41] arm_mpam: resctrl: Implement resctrl_arch_reset_all_ctrls()
2026-02-24 17:56 ` [PATCH v5 15/41] arm_mpam: resctrl: Implement resctrl_arch_reset_all_ctrls() Ben Horgan
@ 2026-02-25 11:03 ` Jonathan Cameron
0 siblings, 0 replies; 75+ messages in thread
From: Jonathan Cameron @ 2026-02-25 11:03 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
On Tue, 24 Feb 2026 17:56:54 +0000
Ben Horgan <ben.horgan@arm.com> wrote:
> From: James Morse <james.morse@arm.com>
>
> We already have a helper for resetting an mpam class and component. Hook
> it up to resctrl_arch_reset_all_ctrls() and the domain offline path.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Zeng Heng <zengheng4@huawei.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
^ permalink raw reply [flat|nested] 75+ messages in thread
* [PATCH v5 16/41] arm_mpam: resctrl: Add resctrl_arch_get_config()
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (14 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 15/41] arm_mpam: resctrl: Implement resctrl_arch_reset_all_ctrls() Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 17/41] arm_mpam: resctrl: Implement helpers to update configuration Ben Horgan
` (27 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
Implement resctrl_arch_get_config() by testing the live configuration for a
CPOR bitmap. For any other configuration type return the default.
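The lookup order described above can be modelled in a few lines of Python. This is a hedged sketch with invented structures, not the driver's API: the stored value is returned only when the CPOR feature was actually configured for that partid and the partid is in range; everything else falls back to the default control.

```python
# Simplified model of the resctrl_arch_get_config() fallback behaviour.
# DEFAULT_CTRL stands in for resctrl_get_default_ctrl(); the cfg dict
# stands in for the per-component array of struct mpam_config.

DEFAULT_CTRL = 0xFFFF  # hypothetical default bitmap

def get_config(cfg, partid, num_closid, alloc_capable=True):
    """cfg maps partid -> {'cpor': bitmap} when CPOR has been applied."""
    if not alloc_capable or partid >= num_closid:
        return DEFAULT_CTRL
    entry = cfg.get(partid)
    if entry is None or 'cpor' not in entry:
        # Feature never configured for this partid: report the default.
        return DEFAULT_CTRL
    return entry['cpor']
```

Only a configured CPOR bitmap is ever reported back; any other configuration type, out-of-range partid, or non-alloc-capable resource reads as the default.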
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
drivers/resctrl/mpam_resctrl.c | 43 ++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 5f482c6293e7..d5caab6b8545 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -168,6 +168,49 @@ static int mpam_resctrl_pick_domain_id(int cpu, struct mpam_component *comp)
return comp->comp_id;
}
+u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
+ u32 closid, enum resctrl_conf_type type)
+{
+ u32 partid;
+ struct mpam_config *cfg;
+ struct mpam_props *cprops;
+ struct mpam_resctrl_res *res;
+ struct mpam_resctrl_dom *dom;
+ enum mpam_device_features configured_by;
+
+ lockdep_assert_cpus_held();
+
+ if (!mpam_is_enabled())
+ return resctrl_get_default_ctrl(r);
+
+ res = container_of(r, struct mpam_resctrl_res, resctrl_res);
+ dom = container_of(d, struct mpam_resctrl_dom, resctrl_ctrl_dom);
+ cprops = &res->class->props;
+
+ partid = resctrl_get_config_index(closid, type);
+ cfg = &dom->ctrl_comp->cfg[partid];
+
+ switch (r->rid) {
+ case RDT_RESOURCE_L2:
+ case RDT_RESOURCE_L3:
+ configured_by = mpam_feat_cpor_part;
+ break;
+ default:
+ return resctrl_get_default_ctrl(r);
+ }
+
+ if (!r->alloc_capable || partid >= resctrl_arch_get_num_closid(r) ||
+ !mpam_has_feature(configured_by, cfg))
+ return resctrl_get_default_ctrl(r);
+
+ switch (configured_by) {
+ case mpam_feat_cpor_part:
+ return cfg->cpbm;
+ default:
+ return resctrl_get_default_ctrl(r);
+ }
+}
+
void resctrl_arch_reset_all_ctrls(struct rdt_resource *r)
{
struct mpam_resctrl_res *res;
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread

* [PATCH v5 17/41] arm_mpam: resctrl: Implement helpers to update configuration
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (15 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 16/41] arm_mpam: resctrl: Add resctrl_arch_get_config() Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 18/41] arm_mpam: resctrl: Add plumbing against arm64 task and cpu hooks Ben Horgan
` (26 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
resctrl has two helpers for updating the configuration.
resctrl_arch_update_one() updates a single value and is used by the
software controller to apply feedback to the bandwidth controls; it has to
be called on one of the CPUs in the resctrl domain.
resctrl_arch_update_domains() copies multiple staged configurations; it can
be called from anywhere.
Both helpers propagate any changes to the underlying hardware.
Implement resctrl_arch_update_domains() to use
resctrl_arch_update_one(). Neither needs to be called on a specific CPU, as
the mpam driver will send IPIs as needed.
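The relationship between the two helpers can be sketched as follows. This is a simplified Python model of the call pattern only; the structures and names are stand-ins for the C code, not its API: resctrl_arch_update_domains() walks each domain's staged configurations and applies each one through resctrl_arch_update_one(), stopping at the first error.

```python
# Model of the staged-config walk in resctrl_arch_update_domains().
CDP_NUM_TYPES = 3  # CDP_NONE, CDP_CODE, CDP_DATA

def update_domains(domains, closid, apply_one):
    """apply_one(domain, closid, conf_type, value) returns 0 or an error."""
    for dom in domains:
        for t in range(CDP_NUM_TYPES):
            staged = dom['staged'][t]
            if not staged['have_new_ctrl']:
                continue  # nothing staged for this conf type
            err = apply_one(dom, closid, t, staged['new_ctrl'])
            if err:
                return err  # stop on first failure, like the C code
    return 0
```

In the driver, apply_one corresponds to resctrl_arch_update_one(), which in turn calls mpam_apply_config(); because that path IPIs the right CPUs itself, the walk can run anywhere.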
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
list_for_each_entry -> list_for_each_entry_rcu
return 0
Restrict scope of local variables
Changes since v2:
whitespace fix
---
drivers/resctrl/mpam_resctrl.c | 70 ++++++++++++++++++++++++++++++++++
1 file changed, 70 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index d5caab6b8545..3ca762c3fae6 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -211,6 +211,76 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
}
}
+int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
+ u32 closid, enum resctrl_conf_type t, u32 cfg_val)
+{
+ u32 partid;
+ struct mpam_config cfg;
+ struct mpam_props *cprops;
+ struct mpam_resctrl_res *res;
+ struct mpam_resctrl_dom *dom;
+
+ lockdep_assert_cpus_held();
+ lockdep_assert_irqs_enabled();
+
+ /*
+ * No need to check the CPU as mpam_apply_config() doesn't care, and
+ * resctrl_arch_update_domains() relies on this.
+ */
+ res = container_of(r, struct mpam_resctrl_res, resctrl_res);
+ dom = container_of(d, struct mpam_resctrl_dom, resctrl_ctrl_dom);
+ cprops = &res->class->props;
+
+ partid = resctrl_get_config_index(closid, t);
+ if (!r->alloc_capable || partid >= resctrl_arch_get_num_closid(r)) {
+ pr_debug("Not alloc capable or computed PARTID out of range\n");
+ return -EINVAL;
+ }
+
+ /*
+ * Copy the current config to avoid clearing other resources when the
+ * same component is exposed multiple times through resctrl.
+ */
+ cfg = dom->ctrl_comp->cfg[partid];
+
+ switch (r->rid) {
+ case RDT_RESOURCE_L2:
+ case RDT_RESOURCE_L3:
+ cfg.cpbm = cfg_val;
+ mpam_set_feature(mpam_feat_cpor_part, &cfg);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return mpam_apply_config(dom->ctrl_comp, partid, &cfg);
+}
+
+int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
+{
+ int err;
+ struct rdt_ctrl_domain *d;
+
+ lockdep_assert_cpus_held();
+ lockdep_assert_irqs_enabled();
+
+ list_for_each_entry_rcu(d, &r->ctrl_domains, hdr.list) {
+ for (enum resctrl_conf_type t = 0; t < CDP_NUM_TYPES; t++) {
+ struct resctrl_staged_config *cfg = &d->staged_config[t];
+
+ if (!cfg->have_new_ctrl)
+ continue;
+
+ err = resctrl_arch_update_one(r, d, closid, t,
+ cfg->new_ctrl);
+ if (err)
+ return err;
+ }
+ }
+
+ return 0;
+}
+
void resctrl_arch_reset_all_ctrls(struct rdt_resource *r)
{
struct mpam_resctrl_res *res;
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread

* [PATCH v5 18/41] arm_mpam: resctrl: Add plumbing against arm64 task and cpu hooks
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (16 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 17/41] arm_mpam: resctrl: Implement helpers to update configuration Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:56 ` [PATCH v5 19/41] arm_mpam: resctrl: Add CDP emulation Ben Horgan
` (25 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
arm64 provides helpers for changing a task's and a cpu's mpam partid/pmg
values.
These are used to back a number of resctrl_arch_ functions. Connect them
up.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
apostrophes in commit message
---
drivers/resctrl/mpam_resctrl.c | 58 ++++++++++++++++++++++++++++++++++
include/linux/arm_mpam.h | 5 +++
2 files changed, 63 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 3ca762c3fae6..5551e5416620 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -8,6 +8,7 @@
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/errno.h>
+#include <linux/limits.h>
#include <linux/list.h>
#include <linux/printk.h>
#include <linux/rculist.h>
@@ -34,6 +35,8 @@ static struct mpam_resctrl_res mpam_resctrl_controls[RDT_NUM_RESOURCES];
/* The lock for modifying resctrl's domain lists from cpuhp callbacks. */
static DEFINE_MUTEX(domain_list_lock);
+static bool cdp_enabled;
+
bool resctrl_arch_alloc_capable(void)
{
struct mpam_resctrl_res *res;
@@ -57,6 +60,61 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *ignored)
return mpam_partid_max + 1;
}
+void resctrl_arch_sched_in(struct task_struct *tsk)
+{
+ lockdep_assert_preemption_disabled();
+
+ mpam_thread_switch(tsk);
+}
+
+void resctrl_arch_set_cpu_default_closid_rmid(int cpu, u32 closid, u32 rmid)
+{
+ WARN_ON_ONCE(closid > U16_MAX);
+ WARN_ON_ONCE(rmid > U8_MAX);
+
+ if (!cdp_enabled) {
+ mpam_set_cpu_defaults(cpu, closid, closid, rmid, rmid);
+ } else {
+ /*
+ * When CDP is enabled, resctrl halves the closid range and we
+ * use odd/even partid for one closid.
+ */
+ u32 partid_d = resctrl_get_config_index(closid, CDP_DATA);
+ u32 partid_i = resctrl_get_config_index(closid, CDP_CODE);
+
+ mpam_set_cpu_defaults(cpu, partid_d, partid_i, rmid, rmid);
+ }
+}
+
+void resctrl_arch_sync_cpu_closid_rmid(void *info)
+{
+ struct resctrl_cpu_defaults *r = info;
+
+ lockdep_assert_preemption_disabled();
+
+ if (r) {
+ resctrl_arch_set_cpu_default_closid_rmid(smp_processor_id(),
+ r->closid, r->rmid);
+ }
+
+ resctrl_arch_sched_in(current);
+}
+
+void resctrl_arch_set_closid_rmid(struct task_struct *tsk, u32 closid, u32 rmid)
+{
+ WARN_ON_ONCE(closid > U16_MAX);
+ WARN_ON_ONCE(rmid > U8_MAX);
+
+ if (!cdp_enabled) {
+ mpam_set_task_partid_pmg(tsk, closid, closid, rmid, rmid);
+ } else {
+ u32 partid_d = resctrl_get_config_index(closid, CDP_DATA);
+ u32 partid_i = resctrl_get_config_index(closid, CDP_CODE);
+
+ mpam_set_task_partid_pmg(tsk, partid_d, partid_i, rmid, rmid);
+ }
+}
+
struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l)
{
if (l >= RDT_NUM_RESOURCES)
diff --git a/include/linux/arm_mpam.h b/include/linux/arm_mpam.h
index 2c7d1413a401..5a78299ec464 100644
--- a/include/linux/arm_mpam.h
+++ b/include/linux/arm_mpam.h
@@ -52,6 +52,11 @@ static inline int mpam_ris_create(struct mpam_msc *msc, u8 ris_idx,
bool resctrl_arch_alloc_capable(void);
bool resctrl_arch_mon_capable(void);
+void resctrl_arch_set_cpu_default_closid(int cpu, u32 closid);
+void resctrl_arch_set_closid_rmid(struct task_struct *tsk, u32 closid, u32 rmid);
+void resctrl_arch_set_cpu_default_closid_rmid(int cpu, u32 closid, u32 rmid);
+void resctrl_arch_sched_in(struct task_struct *tsk);
+
/**
* mpam_register_requestor() - Register a requestor with the MPAM driver
* @partid_max: The maximum PARTID value the requestor can generate.
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread

* [PATCH v5 19/41] arm_mpam: resctrl: Add CDP emulation
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (17 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 18/41] arm_mpam: resctrl: Add plumbing against arm64 task and cpu hooks Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-25 6:25 ` Zeng Heng
2026-02-24 17:56 ` [PATCH v5 20/41] arm_mpam: resctrl: Convert to/from MPAMs fixed-point formats Ben Horgan
` (24 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan, Dave Martin
From: James Morse <james.morse@arm.com>
Intel RDT's CDP feature allows the cache to use a different control value
depending on whether the access was for instruction fetch or a data
access. MPAM's equivalent feature is the other way up: the CPU assigns a
different partid label to traffic depending on whether it was instruction
fetch or a data access, which causes the cache to use a different control
value based solely on the partid.
MPAM can emulate CDP, with the side effect that the alternative partid is
seen by all MSCs; it can't be enabled per-MSC.
Add the resctrl hooks to turn this on or off. Add the helpers that match a
closid against a task, which need to be aware that the value written to
hardware is not the same as the one resctrl is using.
Update the 'arm64_mpam_global_default' variable the arch code uses during
context switch to know when the per-cpu value should be used instead. Also,
update these per-cpu values and sync the resulting mpam partid/pmg
configuration to hardware.
resctrl can enable CDP for L2 caches, L3 caches or both. When it is enabled
for one and not the other, MPAM enables CDP globally but hides the effect
on the other cache resource. This hiding is possible as CPOR is the only
supported cache control and that uses a resource bitmap; two partids with
the same bitmap act as one.
Awkwardly, the MB controls don't implement CDP and CDP can't be hidden as
the memory bandwidth control is a maximum per partid which can't be
modelled with more partids. If the total maximum is used for both the data
and instruction partids then the maximum may be exceeded, and if it is
split in two then the one using more bandwidth will hit a lower
limit. Hence, hide the MB controls completely if CDP is enabled for any
resource.
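The odd/even relabelling can be illustrated with a small model of resctrl's resctrl_get_config_index() index scheme (the even/odd split used by the generic resctrl code, simplified here for illustration): data traffic uses the even partid, instruction fetch the odd one, and the match helpers recover the closid by shifting away the D/I bit.

```python
# Model of the closid <-> partid mapping used under CDP emulation.
CDP_NONE, CDP_CODE, CDP_DATA = 0, 1, 2

def get_config_index(closid, conf_type, cdp_enabled=True):
    """Map a resctrl closid to the hardware partid actually programmed."""
    if not cdp_enabled or conf_type == CDP_NONE:
        return closid
    if conf_type == CDP_DATA:
        return closid * 2       # even partid carries data accesses
    return closid * 2 + 1       # CDP_CODE: odd partid carries ifetch

def partid_to_closid(partid):
    # As in resctrl_arch_match_closid(): shift away the D/I bit.
    return partid >> 1
```

This is why the usable closid space is halved when CDP is enabled: each closid consumes a data/instruction pair of partids.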
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
CC: Dave Martin <Dave.Martin@arm.com>
CC: Amit Singh Tomar <amitsinght@marvell.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
Fail cdp initialisation if there is only one partid
Correct data/code confusion
Changes since v2:
Don't include unused header
Changes since v3:
Update the per-cpu values and sync to h/w
Changes since v4:
Enable separately for L2 and L3
Disable MB controls if CDP enabled
Consider cdp hiding in resctrl_arch_update_one()
---
arch/arm64/include/asm/mpam.h | 1 +
drivers/resctrl/mpam_internal.h | 1 +
drivers/resctrl/mpam_resctrl.c | 119 ++++++++++++++++++++++++++++++++
include/linux/arm_mpam.h | 2 +
4 files changed, 123 insertions(+)
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
index 05aa71200f61..70d396e7b6da 100644
--- a/arch/arm64/include/asm/mpam.h
+++ b/arch/arm64/include/asm/mpam.h
@@ -4,6 +4,7 @@
#ifndef __ASM__MPAM_H
#define __ASM__MPAM_H
+#include <linux/arm_mpam.h>
#include <linux/bitfield.h>
#include <linux/jump_label.h>
#include <linux/percpu.h>
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index e2704f678af5..57c3d9b962b9 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -346,6 +346,7 @@ struct mpam_resctrl_dom {
struct mpam_resctrl_res {
struct mpam_class *class;
struct rdt_resource resctrl_res;
+ bool cdp_enabled;
};
static inline int mpam_alloc_csu_mon(struct mpam_class *class)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 5551e5416620..fa818ee5db18 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -35,6 +35,10 @@ static struct mpam_resctrl_res mpam_resctrl_controls[RDT_NUM_RESOURCES];
/* The lock for modifying resctrl's domain lists from cpuhp callbacks. */
static DEFINE_MUTEX(domain_list_lock);
+/*
+ * MPAM emulates CDP by setting different PARTID in the I/D fields of MPAM0_EL1.
+ * This applies globally to all traffic the CPU generates.
+ */
static bool cdp_enabled;
bool resctrl_arch_alloc_capable(void)
@@ -50,6 +54,71 @@ bool resctrl_arch_alloc_capable(void)
return false;
}
+bool resctrl_arch_get_cdp_enabled(enum resctrl_res_level rid)
+{
+ return mpam_resctrl_controls[rid].cdp_enabled;
+}
+
+/**
+ * resctrl_reset_task_closids() - Reset the PARTID/PMG values for all tasks.
+ *
+ * At boot, all existing tasks use partid zero for D and I.
+ * To enable/disable CDP emulation, all these tasks need relabelling.
+ */
+static void resctrl_reset_task_closids(void)
+{
+ struct task_struct *p, *t;
+
+ read_lock(&tasklist_lock);
+ for_each_process_thread(p, t) {
+ resctrl_arch_set_closid_rmid(t, RESCTRL_RESERVED_CLOSID,
+ RESCTRL_RESERVED_RMID);
+ }
+ read_unlock(&tasklist_lock);
+}
+
+int resctrl_arch_set_cdp_enabled(enum resctrl_res_level rid, bool enable)
+{
+ u32 partid_i = RESCTRL_RESERVED_CLOSID, partid_d = RESCTRL_RESERVED_CLOSID;
+ int cpu;
+
+ /* This resctrl hook is only called with enable set to false on error */
+ cdp_enabled = enable;
+ mpam_resctrl_controls[rid].cdp_enabled = enable;
+
+ /* The mbw_max feature can't hide cdp as it's a per-partid maximum. */
+ if (cdp_enabled && !mpam_resctrl_controls[RDT_RESOURCE_MBA].cdp_enabled)
+ mpam_resctrl_controls[RDT_RESOURCE_MBA].resctrl_res.alloc_capable = false;
+
+ if (mpam_resctrl_controls[RDT_RESOURCE_MBA].cdp_enabled &&
+ mpam_resctrl_controls[RDT_RESOURCE_MBA].class)
+ mpam_resctrl_controls[RDT_RESOURCE_MBA].resctrl_res.alloc_capable = true;
+
+ if (enable) {
+ if (mpam_partid_max < 1)
+ return -EINVAL;
+
+ partid_d = resctrl_get_config_index(RESCTRL_RESERVED_CLOSID, CDP_DATA);
+ partid_i = resctrl_get_config_index(RESCTRL_RESERVED_CLOSID, CDP_CODE);
+ }
+
+ mpam_set_task_partid_pmg(current, partid_d, partid_i, 0, 0);
+ WRITE_ONCE(arm64_mpam_global_default, mpam_get_regval(current));
+
+ resctrl_reset_task_closids();
+
+ for_each_possible_cpu(cpu)
+ mpam_set_cpu_defaults(cpu, partid_d, partid_i, 0, 0);
+ on_each_cpu(resctrl_arch_sync_cpu_closid_rmid, NULL, 1);
+
+ return 0;
+}
+
+static bool mpam_resctrl_hide_cdp(enum resctrl_res_level rid)
+{
+ return cdp_enabled && !resctrl_arch_get_cdp_enabled(rid);
+}
+
/*
 * MSC may raise an error interrupt if it sees an out of range partid/pmg,
* and go on to truncate the value. Regardless of what the hardware supports,
@@ -115,6 +184,30 @@ void resctrl_arch_set_closid_rmid(struct task_struct *tsk, u32 closid, u32 rmid)
}
}
+bool resctrl_arch_match_closid(struct task_struct *tsk, u32 closid)
+{
+ u64 regval = mpam_get_regval(tsk);
+ u32 tsk_closid = FIELD_GET(MPAM0_EL1_PARTID_D, regval);
+
+ if (cdp_enabled)
+ tsk_closid >>= 1;
+
+ return tsk_closid == closid;
+}
+
+/* The task's pmg is not unique, the partid must be considered too */
+bool resctrl_arch_match_rmid(struct task_struct *tsk, u32 closid, u32 rmid)
+{
+ u64 regval = mpam_get_regval(tsk);
+ u32 tsk_closid = FIELD_GET(MPAM0_EL1_PARTID_D, regval);
+ u32 tsk_rmid = FIELD_GET(MPAM0_EL1_PMG_D, regval);
+
+ if (cdp_enabled)
+ tsk_closid >>= 1;
+
+ return (tsk_closid == closid) && (tsk_rmid == rmid);
+}
+
struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l)
{
if (l >= RDT_NUM_RESOURCES)
@@ -245,6 +338,14 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
dom = container_of(d, struct mpam_resctrl_dom, resctrl_ctrl_dom);
cprops = &res->class->props;
+ /*
+ * When CDP is enabled, but the resource doesn't support it,
+ * the control is cloned across both partids.
+ * Pick one at random to read:
+ */
+ if (mpam_resctrl_hide_cdp(r->rid))
+ type = CDP_DATA;
+
partid = resctrl_get_config_index(closid, type);
cfg = &dom->ctrl_comp->cfg[partid];
@@ -272,6 +373,7 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
u32 closid, enum resctrl_conf_type t, u32 cfg_val)
{
+ int err;
u32 partid;
struct mpam_config cfg;
struct mpam_props *cprops;
@@ -289,6 +391,9 @@ int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
dom = container_of(d, struct mpam_resctrl_dom, resctrl_ctrl_dom);
cprops = &res->class->props;
+ if (mpam_resctrl_hide_cdp(r->rid))
+ t = CDP_DATA;
+
partid = resctrl_get_config_index(closid, t);
if (!r->alloc_capable || partid >= resctrl_arch_get_num_closid(r)) {
pr_debug("Not alloc capable or computed PARTID out of range\n");
@@ -311,6 +416,20 @@ int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
return -EINVAL;
}
+ /*
+ * When CDP is enabled, but the resource doesn't support it, we need to
+ * apply the same configuration to the other partid.
+ */
+ if (mpam_resctrl_hide_cdp(r->rid)) {
+ partid = resctrl_get_config_index(closid, CDP_CODE);
+ err = mpam_apply_config(dom->ctrl_comp, partid, &cfg);
+ if (err)
+ return err;
+
+ partid = resctrl_get_config_index(closid, CDP_DATA);
+ return mpam_apply_config(dom->ctrl_comp, partid, &cfg);
+ }
+
return mpam_apply_config(dom->ctrl_comp, partid, &cfg);
}
diff --git a/include/linux/arm_mpam.h b/include/linux/arm_mpam.h
index 5a78299ec464..d329b1dc148b 100644
--- a/include/linux/arm_mpam.h
+++ b/include/linux/arm_mpam.h
@@ -56,6 +56,8 @@ void resctrl_arch_set_cpu_default_closid(int cpu, u32 closid);
void resctrl_arch_set_closid_rmid(struct task_struct *tsk, u32 closid, u32 rmid);
void resctrl_arch_set_cpu_default_closid_rmid(int cpu, u32 closid, u32 rmid);
void resctrl_arch_sched_in(struct task_struct *tsk);
+bool resctrl_arch_match_closid(struct task_struct *tsk, u32 closid);
+bool resctrl_arch_match_rmid(struct task_struct *tsk, u32 closid, u32 rmid);
/**
* mpam_register_requestor() - Register a requestor with the MPAM driver
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread

* Re: [PATCH v5 19/41] arm_mpam: resctrl: Add CDP emulation
2026-02-24 17:56 ` [PATCH v5 19/41] arm_mpam: resctrl: Add CDP emulation Ben Horgan
@ 2026-02-25 6:25 ` Zeng Heng
0 siblings, 0 replies; 75+ messages in thread
From: Zeng Heng @ 2026-02-25 6:25 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, linux-doc,
Shaopeng Tan
On 2026/2/25 1:56, Ben Horgan wrote:
> From: James Morse <james.morse@arm.com>
>
> Intel RDT's CDP feature allows the cache to use a different control value
> depending on whether the access was for instruction fetch or a data
> access. MPAM's equivalent feature is the other way up: the CPU assigns a
> different partid label to traffic depending on whether it was instruction
> fetch or a data access, which causes the cache to use a different control
> value based solely on the partid.
>
> MPAM can emulate CDP, with the side effect that the alternative partid is
> seen by all MSCs; it can't be enabled per-MSC.
>
> Add the resctrl hooks to turn this on or off. Add the helpers that match a
> closid against a task, which need to be aware that the value written to
> hardware is not the same as the one resctrl is using.
>
> Update the 'arm64_mpam_global_default' variable the arch code uses during
> context switch to know when the per-cpu value should be used instead. Also,
> update these per-cpu values and sync the resulting mpam partid/pmg
> configuration to hardware.
>
> resctrl can enable CDP for L2 caches, L3 caches or both. When it is enabled
> for one and not the other, MPAM enables CDP globally but hides the effect
> on the other cache resource. This hiding is possible as CPOR is the only
> supported cache control and that uses a resource bitmap; two partids with
> the same bitmap act as one.
>
> Awkwardly, the MB controls don't implement CDP and CDP can't be hidden as
> the memory bandwidth control is a maximum per partid which can't be
> modelled with more partids. If the total maximum is used for both the data
> and instruction partids then the maximum may be exceeded, and if it is
> split in two then the one using more bandwidth will hit a lower
> limit. Hence, hide the MB controls completely if CDP is enabled for any
> resource.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> CC: Dave Martin <Dave.Martin@arm.com>
> CC: Amit Singh Tomar <amitsinght@marvell.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> Changes since rfc:
> Fail cdp initialisation if there is only one partid
> Correct data/code confusion
>
> Changes since v2:
> Don't include unused header
>
> Changes since v3:
> Update the per-cpu values and sync to h/w
>
> Changes since v4:
> Enable separately for L2 and L3
> Disable MB controls if CDP enabled
> Consider cdp hiding in resctrl_arch_update_one()
[...]
> @@ -245,6 +338,14 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
> dom = container_of(d, struct mpam_resctrl_dom, resctrl_ctrl_dom);
> cprops = &res->class->props;
>
> + /*
> + * When CDP is enabled, but the resource doesn't support it,
> + * the control is cloned across both partids.
> + * Pick one at random to read:
> + */
> + if (mpam_resctrl_hide_cdp(r->rid))
> + type = CDP_DATA;
> +
Yes, I can confirm this issue was already addressed in
resctrl_arch_get_config() before the mpam_resctrl_glue_v4 release.
> partid = resctrl_get_config_index(closid, type);
> cfg = &dom->ctrl_comp->cfg[partid];
>
> @@ -272,6 +373,7 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
> int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
> u32 closid, enum resctrl_conf_type t, u32 cfg_val)
> {
> + int err;
> u32 partid;
> struct mpam_config cfg;
> struct mpam_props *cprops;
> @@ -289,6 +391,9 @@ int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
> dom = container_of(d, struct mpam_resctrl_dom, resctrl_ctrl_dom);
> cprops = &res->class->props;
>
> + if (mpam_resctrl_hide_cdp(r->rid))
> + t = CDP_DATA;
> +
Fix for resctrl_arch_update_one() has been confirmed.
Reviewed-by: Zeng Heng <zengheng4@huawei.com>
Best regards,
Zeng Heng
^ permalink raw reply [flat|nested] 75+ messages in thread
* [PATCH v5 20/41] arm_mpam: resctrl: Convert to/from MPAMs fixed-point formats
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (18 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 19/41] arm_mpam: resctrl: Add CDP emulation Ben Horgan
@ 2026-02-24 17:56 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 21/41] arm_mpam: resctrl: Add kunit test for control format conversions Ben Horgan
` (23 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:56 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Dave Martin, Shaopeng Tan
From: Dave Martin <Dave.Martin@arm.com>
MPAM uses fixed-point formats for some hardware controls. Resctrl
provides the bandwidth controls as a percentage. Add helpers to convert
between these.
Ensure bwa_wd is at most 16 to make it clear higher values have no meaning.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Ensure bwa_wd is at most 16 (moved from patch 40: arm_mpam: Generate a
configuration for min controls)
Expand comments
---
drivers/resctrl/mpam_devices.c | 7 +++++
drivers/resctrl/mpam_resctrl.c | 51 ++++++++++++++++++++++++++++++++++
2 files changed, 58 insertions(+)
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index e4a302a53991..90d69091e0b9 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -713,6 +713,13 @@ static void mpam_ris_hw_probe(struct mpam_msc_ris *ris)
mpam_set_feature(mpam_feat_mbw_part, props);
props->bwa_wd = FIELD_GET(MPAMF_MBW_IDR_BWA_WD, mbw_features);
+
+ /*
+ * The BWA_WD field can represent 0-63, but the control fields it
+ * describes have a maximum of 16 bits.
+ */
+ props->bwa_wd = min(props->bwa_wd, 16);
+
if (props->bwa_wd && FIELD_GET(MPAMF_MBW_IDR_HAS_MAX, mbw_features))
mpam_set_feature(mpam_feat_mbw_max, props);
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index fa818ee5db18..38d1b7f48ecf 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -10,6 +10,7 @@
#include <linux/errno.h>
#include <linux/limits.h>
#include <linux/list.h>
+#include <linux/math.h>
#include <linux/printk.h>
#include <linux/rculist.h>
#include <linux/resctrl.h>
@@ -227,6 +228,56 @@ static bool cache_has_usable_cpor(struct mpam_class *class)
return class->props.cpbm_wd <= 32;
}
+/*
+ * Each fixed-point hardware value architecturally represents a range
+ * of values: the full range 0% - 100% is split contiguously into
+ * (1 << cprops->bwa_wd) equal bands.
+ *
+ * Although the bwa_wd fields have 6 bits, the maximum valid value is 16
+ * as it reports the width of fields that are at most 16 bits. When
+ * fewer than 16 bits are valid the least significant bits are
+ * ignored. The implied binary point is kept between bits 15 and 16 and
+ * so the valid bits are leftmost.
+ *
+ * See ARM IHI0099B.a "MPAM system component specification", Section 9.3,
+ * "The fixed-point fractional format" for more information.
+ *
+ * Find the nearest percentage value to the upper bound of the selected band:
+ */
+static u32 mbw_max_to_percent(u16 mbw_max, struct mpam_props *cprops)
+{
+ u32 val = mbw_max;
+
+ val >>= 16 - cprops->bwa_wd;
+ val += 1;
+ val *= MAX_MBA_BW;
+ val = DIV_ROUND_CLOSEST(val, 1 << cprops->bwa_wd);
+
+ return val;
+}
+
+/*
+ * Find the band whose upper bound is closest to the specified percentage.
+ *
+ * A round-to-nearest policy is followed here as a balanced compromise
+ * between unexpected under-commit of the resource (where the total of
+ * a set of resource allocations after conversion is less than the
+ * expected total, due to rounding of the individual converted
+ * percentages) and over-commit (where the total of the converted
+ * allocations is greater than expected).
+ */
+static u16 percent_to_mbw_max(u8 pc, struct mpam_props *cprops)
+{
+ u32 val = pc;
+
+ val <<= cprops->bwa_wd;
+ val = DIV_ROUND_CLOSEST(val, MAX_MBA_BW);
+ val = max(val, 1) - 1;
+ val <<= 16 - cprops->bwa_wd;
+
+ return val;
+}
+
/* Test whether we can export MPAM_CLASS_CACHE:{2,3}? */
static void mpam_resctrl_pick_caches(void)
{
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread* [PATCH v5 21/41] arm_mpam: resctrl: Add kunit test for control format conversions
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (19 preceding siblings ...)
2026-02-24 17:56 ` [PATCH v5 20/41] arm_mpam: resctrl: Convert to/from MPAMs fixed-point formats Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 22/41] arm_mpam: resctrl: Add rmid index helpers Ben Horgan
` (22 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Dave Martin, Shaopeng Tan
From: Dave Martin <Dave.Martin@arm.com>
resctrl specifies the format of the control schemes, and these don't match
the hardware.
Some of the conversions are a bit hairy - add some kunit tests.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
[morse: squashed enough of Dave's fixes in here that it's his patch now!]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Include additional values from the latest spec
---
drivers/resctrl/mpam_resctrl.c | 4 +
drivers/resctrl/test_mpam_resctrl.c | 315 ++++++++++++++++++++++++++++
2 files changed, 319 insertions(+)
create mode 100644 drivers/resctrl/test_mpam_resctrl.c
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 38d1b7f48ecf..ded18a9d4cd4 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -764,3 +764,7 @@ int mpam_resctrl_setup(void)
return 0;
}
+
+#ifdef CONFIG_MPAM_KUNIT_TEST
+#include "test_mpam_resctrl.c"
+#endif
diff --git a/drivers/resctrl/test_mpam_resctrl.c b/drivers/resctrl/test_mpam_resctrl.c
new file mode 100644
index 000000000000..b93d6ad87e43
--- /dev/null
+++ b/drivers/resctrl/test_mpam_resctrl.c
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2025 Arm Ltd.
+/* This file is intended to be included into mpam_resctrl.c */
+
+#include <kunit/test.h>
+#include <linux/array_size.h>
+#include <linux/bits.h>
+#include <linux/math.h>
+#include <linux/sprintf.h>
+
+struct percent_value_case {
+ u8 pc;
+ u8 width;
+ u16 value;
+};
+
+/*
+ * Mysterious inscriptions taken from the union of ARM DDI 0598D.b,
+ * "Arm Architecture Reference Manual Supplement - Memory System
+ * Resource Partitioning and Monitoring (MPAM), for A-profile
+ * architecture", Section 9.8, "About the fixed-point fractional
+ * format" (exact percentage entries only) and ARM IHI0099B.a
+ * "MPAM system component specification", Section 9.3,
+ * "The fixed-point fractional format":
+ */
+static const struct percent_value_case percent_value_cases[] = {
+ /* Architectural cases: */
+ { 1, 8, 1 }, { 1, 12, 0x27 }, { 1, 16, 0x28e },
+ { 25, 8, 0x3f }, { 25, 12, 0x3ff }, { 25, 16, 0x3fff },
+ { 33, 8, 0x53 }, { 33, 12, 0x546 }, { 33, 16, 0x5479 },
+ { 35, 8, 0x58 }, { 35, 12, 0x598 }, { 35, 16, 0x5998 },
+ { 45, 8, 0x72 }, { 45, 12, 0x732 }, { 45, 16, 0x7332 },
+ { 50, 8, 0x7f }, { 50, 12, 0x7ff }, { 50, 16, 0x7fff },
+ { 52, 8, 0x84 }, { 52, 12, 0x850 }, { 52, 16, 0x851d },
+ { 55, 8, 0x8b }, { 55, 12, 0x8cb }, { 55, 16, 0x8ccb },
+ { 58, 8, 0x93 }, { 58, 12, 0x946 }, { 58, 16, 0x9479 },
+ { 75, 8, 0xbf }, { 75, 12, 0xbff }, { 75, 16, 0xbfff },
+ { 80, 8, 0xcb }, { 80, 12, 0xccb }, { 80, 16, 0xcccb },
+ { 88, 8, 0xe0 }, { 88, 12, 0xe13 }, { 88, 16, 0xe146 },
+ { 95, 8, 0xf2 }, { 95, 12, 0xf32 }, { 95, 16, 0xf332 },
+ { 100, 8, 0xff }, { 100, 12, 0xfff }, { 100, 16, 0xffff },
+};
+
+static void test_percent_value_desc(const struct percent_value_case *param,
+ char *desc)
+{
+ snprintf(desc, KUNIT_PARAM_DESC_SIZE,
+ "pc=%d, width=%d, value=0x%.*x\n",
+ param->pc, param->width,
+ DIV_ROUND_UP(param->width, 4), param->value);
+}
+
+KUNIT_ARRAY_PARAM(test_percent_value, percent_value_cases,
+ test_percent_value_desc);
+
+struct percent_value_test_info {
+ u32 pc; /* result of value-to-percent conversion */
+ u32 value; /* result of percent-to-value conversion */
+ u32 max_value; /* maximum raw value allowed by test params */
+ unsigned int shift; /* promotes raw testcase value to 16 bits */
+};
+
+/*
+ * Convert a reference percentage to a fixed-point MAX value and
+ * vice-versa, based on param (not test->param_value!)
+ */
+static void __prepare_percent_value_test(struct kunit *test,
+ struct percent_value_test_info *res,
+ const struct percent_value_case *param)
+{
+ struct mpam_props fake_props = { };
+
+ /* Reject bogus test parameters that would break the tests: */
+ KUNIT_ASSERT_GE(test, param->width, 1);
+ KUNIT_ASSERT_LE(test, param->width, 16);
+ KUNIT_ASSERT_LT(test, param->value, 1 << param->width);
+
+ mpam_set_feature(mpam_feat_mbw_max, &fake_props);
+ fake_props.bwa_wd = param->width;
+
+ res->shift = 16 - param->width;
+ res->max_value = GENMASK_U32(param->width - 1, 0);
+ res->value = percent_to_mbw_max(param->pc, &fake_props);
+ res->pc = mbw_max_to_percent(param->value << res->shift, &fake_props);
+}
+
+static void test_get_mba_granularity(struct kunit *test)
+{
+ int ret;
+ struct mpam_props fake_props = { };
+
+ /* Use MBW_MAX */
+ mpam_set_feature(mpam_feat_mbw_max, &fake_props);
+
+ fake_props.bwa_wd = 0;
+ KUNIT_EXPECT_FALSE(test, mba_class_use_mbw_max(&fake_props));
+
+ fake_props.bwa_wd = 1;
+ KUNIT_EXPECT_TRUE(test, mba_class_use_mbw_max(&fake_props));
+
+ /* Architectural maximum: */
+ fake_props.bwa_wd = 16;
+ KUNIT_EXPECT_TRUE(test, mba_class_use_mbw_max(&fake_props));
+
+ /* No usable control... */
+ fake_props.bwa_wd = 0;
+ ret = get_mba_granularity(&fake_props);
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+ fake_props.bwa_wd = 1;
+ ret = get_mba_granularity(&fake_props);
+ KUNIT_EXPECT_EQ(test, ret, 50); /* DIV_ROUND_UP(100, 1 << 1)% = 50% */
+
+ fake_props.bwa_wd = 2;
+ ret = get_mba_granularity(&fake_props);
+ KUNIT_EXPECT_EQ(test, ret, 25); /* DIV_ROUND_UP(100, 1 << 2)% = 25% */
+
+ fake_props.bwa_wd = 3;
+ ret = get_mba_granularity(&fake_props);
+ KUNIT_EXPECT_EQ(test, ret, 13); /* DIV_ROUND_UP(100, 1 << 3)% = 13% */
+
+ fake_props.bwa_wd = 6;
+ ret = get_mba_granularity(&fake_props);
+ KUNIT_EXPECT_EQ(test, ret, 2); /* DIV_ROUND_UP(100, 1 << 6)% = 2% */
+
+ fake_props.bwa_wd = 7;
+ ret = get_mba_granularity(&fake_props);
+ KUNIT_EXPECT_EQ(test, ret, 1); /* DIV_ROUND_UP(100, 1 << 7)% = 1% */
+
+ /* Granularity saturates at 1% */
+ fake_props.bwa_wd = 16; /* architectural maximum */
+ ret = get_mba_granularity(&fake_props);
+ KUNIT_EXPECT_EQ(test, ret, 1); /* DIV_ROUND_UP(100, 1 << 16)% = 1% */
+}
+
+static void test_mbw_max_to_percent(struct kunit *test)
+{
+ const struct percent_value_case *param = test->param_value;
+ struct percent_value_test_info res;
+
+ /*
+ * Since the reference values in percent_value_cases[] all
+ * correspond to exact percentages, round-to-nearest will
+ * always give the exact percentage back when the MPAM max
+ * value has precision of 0.5% or finer. (Always true for the
+ * reference data, since they all specify 8 bits or more of
+ * precision.)
+ *
+ * So, keep it simple and demand an exact match:
+ */
+ __prepare_percent_value_test(test, &res, param);
+ KUNIT_EXPECT_EQ(test, res.pc, param->pc);
+}
+
+static void test_percent_to_mbw_max(struct kunit *test)
+{
+ const struct percent_value_case *param = test->param_value;
+ struct percent_value_test_info res;
+
+ __prepare_percent_value_test(test, &res, param);
+
+ KUNIT_EXPECT_GE(test, res.value, param->value << res.shift);
+ KUNIT_EXPECT_LE(test, res.value, (param->value + 1) << res.shift);
+ KUNIT_EXPECT_LE(test, res.value, res.max_value << res.shift);
+
+ /* No flexibility allowed for 0% and 100%! */
+
+ if (param->pc == 0)
+ KUNIT_EXPECT_EQ(test, res.value, 0);
+
+ if (param->pc == 100)
+ KUNIT_EXPECT_EQ(test, res.value, res.max_value << res.shift);
+}
+
+static const void *test_all_bwa_wd_gen_params(struct kunit *test, const void *prev,
+ char *desc)
+{
+ uintptr_t param = (uintptr_t)prev;
+
+ if (param > 15)
+ return NULL;
+
+ param++;
+
+ snprintf(desc, KUNIT_PARAM_DESC_SIZE, "wd=%u\n", (unsigned int)param);
+
+ return (void *)param;
+}
+
+static unsigned int test_get_bwa_wd(struct kunit *test)
+{
+ uintptr_t param = (uintptr_t)test->param_value;
+
+ KUNIT_ASSERT_GE(test, param, 1);
+ KUNIT_ASSERT_LE(test, param, 16);
+
+ return param;
+}
+
+static void test_mbw_max_to_percent_limits(struct kunit *test)
+{
+ struct mpam_props fake_props = {0};
+ u32 max_value;
+
+ mpam_set_feature(mpam_feat_mbw_max, &fake_props);
+ fake_props.bwa_wd = test_get_bwa_wd(test);
+ max_value = GENMASK(15, 16 - fake_props.bwa_wd);
+
+ KUNIT_EXPECT_EQ(test, mbw_max_to_percent(max_value, &fake_props),
+ MAX_MBA_BW);
+ KUNIT_EXPECT_EQ(test, mbw_max_to_percent(0, &fake_props),
+ get_mba_min(&fake_props));
+
+ /*
+ * Rounding policy dependent 0% sanity-check:
+ * With round-to-nearest, the minimum mbw_max value really
+ * should map to 0% if there are at least 200 steps.
+ * (100 steps may be enough for some other rounding policies.)
+ */
+ if (fake_props.bwa_wd >= 8)
+ KUNIT_EXPECT_EQ(test, mbw_max_to_percent(0, &fake_props), 0);
+
+ if (fake_props.bwa_wd < 8 &&
+ mbw_max_to_percent(0, &fake_props) == 0)
+ kunit_warn(test, "wd=%d: Testsuite/driver Rounding policy mismatch?",
+ fake_props.bwa_wd);
+}
+
+/*
+ * Check that converting a percentage to mbw_max and back again (or, as
+ * appropriate, vice-versa) always restores the original value:
+ */
+static void test_percent_max_roundtrip_stability(struct kunit *test)
+{
+ struct mpam_props fake_props = {0};
+ unsigned int shift;
+ u32 pc, max, pc2, max2;
+
+ mpam_set_feature(mpam_feat_mbw_max, &fake_props);
+ fake_props.bwa_wd = test_get_bwa_wd(test);
+ shift = 16 - fake_props.bwa_wd;
+
+ /*
+ * Converting a valid value from the coarser scale to the finer
+ * scale and back again must yield the original value:
+ */
+ if (fake_props.bwa_wd >= 7) {
+ /* More than 100 steps: only test exact pc values: */
+ for (pc = get_mba_min(&fake_props); pc <= MAX_MBA_BW; pc++) {
+ max = percent_to_mbw_max(pc, &fake_props);
+ pc2 = mbw_max_to_percent(max, &fake_props);
+ KUNIT_EXPECT_EQ(test, pc2, pc);
+ }
+ } else {
+ /* Fewer than 100 steps: only test exact mbw_max values: */
+ for (max = 0; max < 1 << 16; max += 1 << shift) {
+ pc = mbw_max_to_percent(max, &fake_props);
+ max2 = percent_to_mbw_max(pc, &fake_props);
+ KUNIT_EXPECT_EQ(test, max2, max);
+ }
+ }
+}
+
+static void test_percent_to_max_rounding(struct kunit *test)
+{
+ const struct percent_value_case *param = test->param_value;
+ unsigned int num_rounded_up = 0, total = 0;
+ struct percent_value_test_info res;
+
+ for (param = percent_value_cases, total = 0;
+ param < &percent_value_cases[ARRAY_SIZE(percent_value_cases)];
+ param++, total++) {
+ __prepare_percent_value_test(test, &res, param);
+ if (res.value > param->value << res.shift)
+ num_rounded_up++;
+ }
+
+ /*
+ * The MPAM driver applies a round-to-nearest policy, whereas a
+ * round-down policy seems to have been applied in the
+ * reference table from which the test vectors were selected.
+ *
+ * For a large and well-distributed suite of test vectors,
+ * about half should be rounded up and half down compared with
+ * the reference table. The actual test vectors are few in
+ * number and probably not very well distributed however, so
+ * tolerate a round-up rate of between 1/4 and 3/4 before
+ * crying foul:
+ */
+
+ kunit_info(test, "Round-up rate: %u%% (%u/%u)\n",
+ DIV_ROUND_CLOSEST(num_rounded_up * 100, total),
+ num_rounded_up, total);
+
+ KUNIT_EXPECT_GE(test, 4 * num_rounded_up, 1 * total);
+ KUNIT_EXPECT_LE(test, 4 * num_rounded_up, 3 * total);
+}
+
+static struct kunit_case mpam_resctrl_test_cases[] = {
+ KUNIT_CASE(test_get_mba_granularity),
+ KUNIT_CASE_PARAM(test_mbw_max_to_percent, test_percent_value_gen_params),
+ KUNIT_CASE_PARAM(test_percent_to_mbw_max, test_percent_value_gen_params),
+ KUNIT_CASE_PARAM(test_mbw_max_to_percent_limits, test_all_bwa_wd_gen_params),
+ KUNIT_CASE(test_percent_to_max_rounding),
+ KUNIT_CASE_PARAM(test_percent_max_roundtrip_stability,
+ test_all_bwa_wd_gen_params),
+ {}
+};
+
+static struct kunit_suite mpam_resctrl_test_suite = {
+ .name = "mpam_resctrl_test_suite",
+ .test_cases = mpam_resctrl_test_cases,
+};
+
+kunit_test_suites(&mpam_resctrl_test_suite);
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread* [PATCH v5 22/41] arm_mpam: resctrl: Add rmid index helpers
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (20 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 21/41] arm_mpam: resctrl: Add kunit test for control format conversions Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 23/41] arm_mpam: resctrl: Add kunit test for rmid idx conversions Ben Horgan
` (21 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
Because MPAM's pmg aren't identical to RDT's rmid, resctrl handles some
data structures by index. This allows x86 to map indexes to RMID, and MPAM
to map them to partid-and-pmg.
Add the helpers to do this.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Suggested-by: James Morse <james.morse@arm.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
Use ~0U instead of ~0 in lhs of left shift
Changes since v2:
Drop changes signed-off-by as reworked patch
Use multiply and add rather than shift to avoid holes
---
drivers/resctrl/mpam_resctrl.c | 16 ++++++++++++++++
include/linux/arm_mpam.h | 3 +++
2 files changed, 19 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index ded18a9d4cd4..48f96d7f9109 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -130,6 +130,22 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *ignored)
return mpam_partid_max + 1;
}
+u32 resctrl_arch_system_num_rmid_idx(void)
+{
+ return (mpam_pmg_max + 1) * (mpam_partid_max + 1);
+}
+
+u32 resctrl_arch_rmid_idx_encode(u32 closid, u32 rmid)
+{
+ return closid * (mpam_pmg_max + 1) + rmid;
+}
+
+void resctrl_arch_rmid_idx_decode(u32 idx, u32 *closid, u32 *rmid)
+{
+ *closid = idx / (mpam_pmg_max + 1);
+ *rmid = idx % (mpam_pmg_max + 1);
+}
+
void resctrl_arch_sched_in(struct task_struct *tsk)
{
lockdep_assert_preemption_disabled();
diff --git a/include/linux/arm_mpam.h b/include/linux/arm_mpam.h
index d329b1dc148b..7d23c90f077d 100644
--- a/include/linux/arm_mpam.h
+++ b/include/linux/arm_mpam.h
@@ -58,6 +58,9 @@ void resctrl_arch_set_cpu_default_closid_rmid(int cpu, u32 closid, u32 rmid);
void resctrl_arch_sched_in(struct task_struct *tsk);
bool resctrl_arch_match_closid(struct task_struct *tsk, u32 closid);
bool resctrl_arch_match_rmid(struct task_struct *tsk, u32 closid, u32 rmid);
+u32 resctrl_arch_rmid_idx_encode(u32 closid, u32 rmid);
+void resctrl_arch_rmid_idx_decode(u32 idx, u32 *closid, u32 *rmid);
+u32 resctrl_arch_system_num_rmid_idx(void);
/**
* mpam_register_requestor() - Register a requestor with the MPAM driver
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread* [PATCH v5 23/41] arm_mpam: resctrl: Add kunit test for rmid idx conversions
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (21 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 22/41] arm_mpam: resctrl: Add rmid index helpers Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 24/41] arm_mpam: resctrl: Wait for cacheinfo to be ready Ben Horgan
` (20 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
As MPAM's pmg are scoped by partid and RDT's rmid are global, the
resctrl mapping to an index needs to differ.
Add some tests for the MPAM rmid mapping.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
drivers/resctrl/test_mpam_resctrl.c | 49 +++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/drivers/resctrl/test_mpam_resctrl.c b/drivers/resctrl/test_mpam_resctrl.c
index b93d6ad87e43..a20da161d965 100644
--- a/drivers/resctrl/test_mpam_resctrl.c
+++ b/drivers/resctrl/test_mpam_resctrl.c
@@ -296,6 +296,54 @@ static void test_percent_to_max_rounding(struct kunit *test)
KUNIT_EXPECT_LE(test, 4 * num_rounded_up, 3 * total);
}
+struct rmid_idx_case {
+ u32 max_partid;
+ u32 max_pmg;
+};
+
+static const struct rmid_idx_case rmid_idx_cases[] = {
+ {0, 0}, {1, 4}, {3, 1}, {5, 9}, {4, 4}, {100, 11}, {0xFFFF, 0xFF},
+};
+
+static void test_rmid_idx_desc(const struct rmid_idx_case *param, char *desc)
+{
+ snprintf(desc, KUNIT_PARAM_DESC_SIZE, "max_partid=%d, max_pmg=%d\n",
+ param->max_partid, param->max_pmg);
+}
+
+KUNIT_ARRAY_PARAM(test_rmid_idx, rmid_idx_cases, test_rmid_idx_desc);
+
+static void test_rmid_idx_encoding(struct kunit *test)
+{
+ u32 orig_mpam_partid_max = mpam_partid_max;
+ u32 orig_mpam_pmg_max = mpam_pmg_max;
+ const struct rmid_idx_case *param = test->param_value;
+ u32 idx, num_idx, count = 0;
+
+ mpam_partid_max = param->max_partid;
+ mpam_pmg_max = param->max_pmg;
+
+ for (u32 partid = 0; partid <= mpam_partid_max; partid++) {
+ for (u32 pmg = 0; pmg <= mpam_pmg_max; pmg++) {
+ u32 partid_out, pmg_out;
+
+ idx = resctrl_arch_rmid_idx_encode(partid, pmg);
+ /* Confirm there are no holes in the rmid idx range */
+ KUNIT_EXPECT_EQ(test, count, idx);
+ count++;
+ resctrl_arch_rmid_idx_decode(idx, &partid_out, &pmg_out);
+ KUNIT_EXPECT_EQ(test, pmg, pmg_out);
+ KUNIT_EXPECT_EQ(test, partid, partid_out);
+ }
+ }
+ num_idx = resctrl_arch_system_num_rmid_idx();
+ KUNIT_EXPECT_EQ(test, idx + 1, num_idx);
+
+ /* Restore global variables that were messed with */
+ mpam_partid_max = orig_mpam_partid_max;
+ mpam_pmg_max = orig_mpam_pmg_max;
+}
+
static struct kunit_case mpam_resctrl_test_cases[] = {
KUNIT_CASE(test_get_mba_granularity),
KUNIT_CASE_PARAM(test_mbw_max_to_percent, test_percent_value_gen_params),
@@ -304,6 +352,7 @@ static struct kunit_case mpam_resctrl_test_cases[] = {
KUNIT_CASE(test_percent_to_max_rounding),
KUNIT_CASE_PARAM(test_percent_max_roundtrip_stability,
test_all_bwa_wd_gen_params),
+ KUNIT_CASE_PARAM(test_rmid_idx_encoding, test_rmid_idx_gen_params),
{}
};
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread* [PATCH v5 24/41] arm_mpam: resctrl: Wait for cacheinfo to be ready
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (22 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 23/41] arm_mpam: resctrl: Add kunit test for rmid idx conversions Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 25/41] arm_mpam: resctrl: Add support for 'MB' resource Ben Horgan
` (19 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
In order to calculate the rmid realloc threshold, the size of the cache
needs to be known. Cache domains will also be named after the cache id. So
that this information can be extracted from cacheinfo, we need to wait for
it to be ready. Cacheinfo is populated from a device_initcall(), so wait
for that.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
[horgan: split out from another patch]
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
This is moved into its own patch to allow all uses of cacheinfo to be
valid when they are introduced.
---
drivers/resctrl/mpam_resctrl.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 48f96d7f9109..fc5877eb5970 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -16,6 +16,7 @@
#include <linux/resctrl.h>
#include <linux/slab.h>
#include <linux/types.h>
+#include <linux/wait.h>
#include <asm/mpam.h>
@@ -42,6 +43,13 @@ static DEFINE_MUTEX(domain_list_lock);
*/
static bool cdp_enabled;
+/*
+ * We use cacheinfo to discover the size of the caches and their id. cacheinfo
+ * populates this from a device_initcall(). mpam_resctrl_setup() must wait.
+ */
+static bool cacheinfo_ready;
+static DECLARE_WAIT_QUEUE_HEAD(wait_cacheinfo_ready);
+
bool resctrl_arch_alloc_capable(void)
{
struct mpam_resctrl_res *res;
@@ -743,6 +751,8 @@ int mpam_resctrl_setup(void)
struct mpam_resctrl_res *res;
enum resctrl_res_level rid;
+ wait_event(wait_cacheinfo_ready, cacheinfo_ready);
+
cpus_read_lock();
for_each_mpam_resctrl_control(res, rid) {
INIT_LIST_HEAD_RCU(&res->resctrl_res.ctrl_domains);
@@ -781,6 +791,15 @@ int mpam_resctrl_setup(void)
return 0;
}
+static int __init __cacheinfo_ready(void)
+{
+ cacheinfo_ready = true;
+ wake_up(&wait_cacheinfo_ready);
+
+ return 0;
+}
+device_initcall_sync(__cacheinfo_ready);
+
#ifdef CONFIG_MPAM_KUNIT_TEST
#include "test_mpam_resctrl.c"
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread* [PATCH v5 25/41] arm_mpam: resctrl: Add support for 'MB' resource
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (23 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 24/41] arm_mpam: resctrl: Wait for cacheinfo to be ready Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate Ben Horgan
` (18 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan, Dave Martin
From: James Morse <james.morse@arm.com>
resctrl supports 'MB' as a percentage throttle on traffic from the
L3. This is the control that mba_sc uses, so ideally the class chosen
should be as close as possible to the counters used for mbm_total. If
there is a single L3 and the topology of the memory matches, then the
traffic at the memory controller will be equivalent to that at the egress
of the L3. If these conditions are met, allow the memory class to back MB.
MB's percentage control should be backed either with the fixed-point
fraction MBW_MAX or with bandwidth portion bitmaps. The bandwidth portion
bitmap is not used as it's tricky to pick which bits to use to avoid
contention, and it may be possible to expose this as something other than a
percentage in the future.
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Co-developed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Code flow change
Commit message 'or'
Changes since v3:
initialise tmp_cpumask
update commit message
check the traffic matches l3
update comment on candidate_class update, only mbm_total
drop tags due to rework
Changes since v4:
Move __free declarations to point of first use
New line for a '{'
set r->alloc_capable last (Reinette)
---
drivers/resctrl/mpam_resctrl.c | 275 ++++++++++++++++++++++++++++++++-
1 file changed, 274 insertions(+), 1 deletion(-)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index fc5877eb5970..29efcad163e6 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -252,6 +252,33 @@ static bool cache_has_usable_cpor(struct mpam_class *class)
return class->props.cpbm_wd <= 32;
}
+static bool mba_class_use_mbw_max(struct mpam_props *cprops)
+{
+ return (mpam_has_feature(mpam_feat_mbw_max, cprops) &&
+ cprops->bwa_wd);
+}
+
+static bool class_has_usable_mba(struct mpam_props *cprops)
+{
+ return mba_class_use_mbw_max(cprops);
+}
+
+/*
+ * Calculate the worst-case percentage change from each implemented step
+ * in the control.
+ */
+static u32 get_mba_granularity(struct mpam_props *cprops)
+{
+ if (!mba_class_use_mbw_max(cprops))
+ return 0;
+
+ /*
+ * bwa_wd is the number of bits implemented in the 0.xxx
+ * fixed point fraction. 1 bit is 50%, 2 is 25% etc.
+ */
+ return DIV_ROUND_UP(MAX_MBA_BW, 1 << cprops->bwa_wd);
+}
+
/*
* Each fixed-point hardware value architecturally represents a range
* of values: the full range 0% - 100% is split contiguously into
@@ -302,6 +329,154 @@ static u16 percent_to_mbw_max(u8 pc, struct mpam_props *cprops)
return val;
}
+static u32 get_mba_min(struct mpam_props *cprops)
+{
+ if (!mba_class_use_mbw_max(cprops)) {
+ WARN_ON_ONCE(1);
+ return 0;
+ }
+
+ return mbw_max_to_percent(0, cprops);
+}
+
+/* Find the L3 cache that has affinity with this CPU */
+static int find_l3_equivalent_bitmask(int cpu, cpumask_var_t tmp_cpumask)
+{
+ u32 cache_id = get_cpu_cacheinfo_id(cpu, 3);
+
+ lockdep_assert_cpus_held();
+
+ return mpam_get_cpumask_from_cache_id(cache_id, 3, tmp_cpumask);
+}
+
+/*
+ * topology_matches_l3() - Is the provided class the same shape as L3
+ * @victim: The class we'd like to pretend is L3.
+ *
+ * resctrl expects all the world's a Xeon, and all counters are on the
+ * L3. We allow mapping some counters onto other classes. This requires
+ * that the CPU->domain mapping is the same kind of shape.
+ *
+ * Using cacheinfo directly would make this work even if resctrl can't
+ * use the L3 - but cacheinfo can't tell us anything about offline CPUs.
+ * Using the L3 resctrl domain list also depends on CPUs being online.
+ * Using the mpam_class we picked for L3 so we can use its domain list
+ * assumes that there are MPAM controls on the L3.
+ * Instead, this path eventually uses the mpam_get_cpumask_from_cache_id()
+ * helper which can tell us about offline CPUs ... but getting the cache_id
+ * to start with relies on at least one CPU per L3 cache being online at
+ * boot.
+ *
+ * Walk the victim component list and compare the affinity mask with the
+ * corresponding L3. The topology matches if each victim:component's affinity
+ * mask is the same as the CPU's corresponding L3's. These lists/masks are
+ * computed from firmware tables so don't change at runtime.
+ */
+static bool topology_matches_l3(struct mpam_class *victim)
+{
+ int cpu, err;
+ struct mpam_component *victim_iter;
+
+ lockdep_assert_cpus_held();
+
+ cpumask_var_t __free(free_cpumask_var) tmp_cpumask = CPUMASK_VAR_NULL;
+ if (!alloc_cpumask_var(&tmp_cpumask, GFP_KERNEL))
+ return false;
+
+ guard(srcu)(&mpam_srcu);
+ list_for_each_entry_srcu(victim_iter, &victim->components, class_list,
+ srcu_read_lock_held(&mpam_srcu)) {
+ if (cpumask_empty(&victim_iter->affinity)) {
+ pr_debug("class %u has CPU-less component %u - can't match L3!\n",
+ victim->level, victim_iter->comp_id);
+ return false;
+ }
+
+ cpu = cpumask_any_and(&victim_iter->affinity, cpu_online_mask);
+ if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
+ return false;
+
+ cpumask_clear(tmp_cpumask);
+ err = find_l3_equivalent_bitmask(cpu, tmp_cpumask);
+ if (err) {
+ pr_debug("Failed to find L3's equivalent component to class %u component %u\n",
+ victim->level, victim_iter->comp_id);
+ return false;
+ }
+
+ /* Any differing bits in the affinity mask? */
+ if (!cpumask_equal(tmp_cpumask, &victim_iter->affinity)) {
+ pr_debug("class %u component %u has mismatched CPU mask with L3 equivalent\n"
+ "L3:%*pbl != victim:%*pbl\n",
+ victim->level, victim_iter->comp_id,
+ cpumask_pr_args(tmp_cpumask),
+ cpumask_pr_args(&victim_iter->affinity));
+
+ return false;
+ }
+ }
+
+ return true;
+}
+
+/*
+ * Test if the traffic for a class matches that at egress from the L3. For
+ * MSC at memory controllers this is only possible if there is a single L3
+ * as otherwise the counters at the memory can include bandwidth from the
+ * non-local L3.
+ */
+static bool traffic_matches_l3(struct mpam_class *class)
+{
+ int err, cpu;
+
+ lockdep_assert_cpus_held();
+
+ if (class->type == MPAM_CLASS_CACHE && class->level == 3)
+ return true;
+
+ if (class->type == MPAM_CLASS_CACHE && class->level != 3) {
+ pr_debug("class %u is a different cache from L3\n", class->level);
+ return false;
+ }
+
+ if (class->type != MPAM_CLASS_MEMORY) {
+ pr_debug("class %u is neither of type cache nor memory\n", class->level);
+ return false;
+ }
+
+ cpumask_var_t __free(free_cpumask_var) tmp_cpumask = CPUMASK_VAR_NULL;
+ if (!alloc_cpumask_var(&tmp_cpumask, GFP_KERNEL)) {
+ pr_debug("cpumask allocation failed\n");
+ return false;
+ }
+
+ cpu = cpumask_any_and(&class->affinity, cpu_online_mask);
+ err = find_l3_equivalent_bitmask(cpu, tmp_cpumask);
+ if (err) {
+ pr_debug("Failed to find L3 downstream to cpu %d\n", cpu);
+ return false;
+ }
+
+ if (!cpumask_equal(tmp_cpumask, cpu_possible_mask)) {
+ pr_debug("There is more than one L3\n");
+ return false;
+ }
+
+ /* Be strict; the traffic might stop in the intermediate cache. */
+ if (get_cpu_cacheinfo_id(cpu, 4) != -1) {
+ pr_debug("L3 isn't the last level of cache\n");
+ return false;
+ }
+
+ return true;
+}
+
/* Test whether we can export MPAM_CLASS_CACHE:{2,3}? */
static void mpam_resctrl_pick_caches(void)
{
@@ -343,9 +518,68 @@ static void mpam_resctrl_pick_caches(void)
}
}
+static void mpam_resctrl_pick_mba(void)
+{
+ struct mpam_class *class, *candidate_class = NULL;
+ struct mpam_resctrl_res *res;
+
+ lockdep_assert_cpus_held();
+
+ guard(srcu)(&mpam_srcu);
+ list_for_each_entry_srcu(class, &mpam_classes, classes_list,
+ srcu_read_lock_held(&mpam_srcu)) {
+ struct mpam_props *cprops = &class->props;
+
+ if (class->level != 3 && class->type == MPAM_CLASS_CACHE) {
+ pr_debug("class %u is a cache but not the L3\n", class->level);
+ continue;
+ }
+
+ if (!class_has_usable_mba(cprops)) {
+ pr_debug("class %u has no bandwidth control\n",
+ class->level);
+ continue;
+ }
+
+ if (!cpumask_equal(&class->affinity, cpu_possible_mask)) {
+ pr_debug("class %u has missing CPUs\n", class->level);
+ continue;
+ }
+
+ if (!topology_matches_l3(class)) {
+ pr_debug("class %u topology doesn't match L3\n",
+ class->level);
+ continue;
+ }
+
+ if (!traffic_matches_l3(class)) {
+ pr_debug("class %u traffic doesn't match L3 egress\n",
+ class->level);
+ continue;
+ }
+
+ /*
+ * Pick a resource to be MBA that is as close as possible to
+ * the L3. mbm_total counts the bandwidth leaving the L3
+ * cache and MBA should correspond as closely as possible
+ * for proper operation of mba_sc.
+ */
+ if (!candidate_class || class->level < candidate_class->level)
+ candidate_class = class;
+ }
+
+ if (candidate_class) {
+ pr_debug("selected class %u to back MBA\n",
+ candidate_class->level);
+ res = &mpam_resctrl_controls[RDT_RESOURCE_MBA];
+ res->class = candidate_class;
+ }
+}
+
static int mpam_resctrl_control_init(struct mpam_resctrl_res *res)
{
struct mpam_class *class = res->class;
+ struct mpam_props *cprops = &class->props;
struct rdt_resource *r = &res->resctrl_res;
switch (r->rid) {
@@ -375,6 +609,19 @@ static int mpam_resctrl_control_init(struct mpam_resctrl_res *res)
r->cache.shareable_bits = resctrl_get_default_ctrl(r);
r->alloc_capable = true;
break;
+ case RDT_RESOURCE_MBA:
+ r->schema_fmt = RESCTRL_SCHEMA_RANGE;
+ r->ctrl_scope = RESCTRL_L3_CACHE;
+
+ r->membw.delay_linear = true;
+ r->membw.throttle_mode = THREAD_THROTTLE_UNDEFINED;
+ r->membw.min_bw = get_mba_min(cprops);
+ r->membw.max_bw = MAX_MBA_BW;
+ r->membw.bw_gran = get_mba_granularity(cprops);
+
+ r->name = "MB";
+ r->alloc_capable = true;
+ break;
default:
return -EINVAL;
}
@@ -389,7 +636,17 @@ static int mpam_resctrl_pick_domain_id(int cpu, struct mpam_component *comp)
if (class->type == MPAM_CLASS_CACHE)
return comp->comp_id;
- /* TODO: repaint domain ids to match the L3 domain ids */
+ if (topology_matches_l3(class)) {
+ /* Use the corresponding L3 component ID as the domain ID */
+ int id = get_cpu_cacheinfo_id(cpu, 3);
+
+ /* Implies topology_matches_l3() made a mistake */
+ if (WARN_ON_ONCE(id == -1))
+ return comp->comp_id;
+
+ return id;
+ }
+
/* Otherwise, expose the ID used by the firmware table code. */
return comp->comp_id;
}
@@ -429,6 +686,12 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
case RDT_RESOURCE_L3:
configured_by = mpam_feat_cpor_part;
break;
+ case RDT_RESOURCE_MBA:
+ if (mpam_has_feature(mpam_feat_mbw_max, cprops)) {
+ configured_by = mpam_feat_mbw_max;
+ break;
+ }
+ fallthrough;
default:
return resctrl_get_default_ctrl(r);
}
@@ -440,6 +703,8 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
switch (configured_by) {
case mpam_feat_cpor_part:
return cfg->cpbm;
+ case mpam_feat_mbw_max:
+ return mbw_max_to_percent(cfg->mbw_max, cprops);
default:
return resctrl_get_default_ctrl(r);
}
@@ -487,6 +752,13 @@ int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
cfg.cpbm = cfg_val;
mpam_set_feature(mpam_feat_cpor_part, &cfg);
break;
+ case RDT_RESOURCE_MBA:
+ if (mpam_has_feature(mpam_feat_mbw_max, cprops)) {
+ cfg.mbw_max = percent_to_mbw_max(cfg_val, cprops);
+ mpam_set_feature(mpam_feat_mbw_max, &cfg);
+ break;
+ }
+ fallthrough;
default:
return -EINVAL;
}
@@ -761,6 +1033,7 @@ int mpam_resctrl_setup(void)
/* Find some classes to use for controls */
mpam_resctrl_pick_caches();
+ mpam_resctrl_pick_mba();
/* Initialise the resctrl structures from the classes */
for_each_mpam_resctrl_control(res, rid) {
--
2.43.0
* [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (24 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 25/41] arm_mpam: resctrl: Add support for 'MB' resource Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-25 11:14 ` Jonathan Cameron
2026-02-26 3:47 ` Zeng Heng
2026-02-24 17:57 ` [PATCH v5 27/41] arm_mpam: resctrl: Add support for csu counters Ben Horgan
` (17 subsequent siblings)
43 siblings, 2 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc
Add the boilerplate that tells resctrl about the mpam monitors that are
available. resctrl expects all (non-telemetry) monitors to be on the L3,
so advertise them there, inventing an L3 resctrl resource if required.
The L3 cache itself has to exist, as the cache ids are used as the
domain ids.
Bring the resctrl monitor domains online and offline based on the cpus
they contain.
Support for specific monitor types is left to later.
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
New patch but mostly moved from the existing patches to
separate the monitors from the controls and the boilerplate
from the specific counters.
Use l3->mon_capable in resctrl_arch_mon_capable() as
resctrl_enable_mon_event() now returns a bool.
---
drivers/resctrl/mpam_internal.h | 7 ++
drivers/resctrl/mpam_resctrl.c | 142 +++++++++++++++++++++++++++++---
2 files changed, 139 insertions(+), 10 deletions(-)
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 57c3d9b962b9..472bd5d27baa 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -341,6 +341,7 @@ struct mpam_msc_ris {
struct mpam_resctrl_dom {
struct mpam_component *ctrl_comp;
struct rdt_ctrl_domain resctrl_ctrl_dom;
+ struct rdt_l3_mon_domain resctrl_mon_dom;
};
struct mpam_resctrl_res {
@@ -349,6 +350,12 @@ struct mpam_resctrl_res {
bool cdp_enabled;
};
+struct mpam_resctrl_mon {
+ struct mpam_class *class;
+
+ /* per-class data that resctrl needs will live here */
+};
+
static inline int mpam_alloc_csu_mon(struct mpam_class *class)
{
struct mpam_props *cprops = &class->props;
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 29efcad163e6..c14e59e8586d 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -34,6 +34,23 @@ static struct mpam_resctrl_res mpam_resctrl_controls[RDT_NUM_RESOURCES];
rid < RDT_NUM_RESOURCES; \
rid++, res = &mpam_resctrl_controls[rid])
+/*
+ * The classes we've picked to map to resctrl events.
+ * Resctrl believes all the world's a Xeon, and these are all on the L3. This
+ * array lets us find the actual class backing the event counters. e.g.
+ * the only memory bandwidth counters may be on the memory controller, but to
+ * make use of them, we pretend they are on L3. Restrict the events considered
+ * to those supported by MPAM.
+ * Class pointer may be NULL.
+ */
+#define MPAM_MAX_EVENT QOS_L3_MBM_TOTAL_EVENT_ID
+static struct mpam_resctrl_mon mpam_resctrl_counters[MPAM_MAX_EVENT + 1];
+
+#define for_each_mpam_resctrl_mon(mon, eventid) \
+ for (eventid = QOS_FIRST_EVENT, mon = &mpam_resctrl_counters[eventid]; \
+ eventid <= MPAM_MAX_EVENT; \
+ eventid++, mon = &mpam_resctrl_counters[eventid])
+
/* The lock for modifying resctrl's domain lists from cpuhp callbacks. */
static DEFINE_MUTEX(domain_list_lock);
@@ -63,6 +80,15 @@ bool resctrl_arch_alloc_capable(void)
return false;
}
+bool resctrl_arch_mon_capable(void)
+{
+ struct mpam_resctrl_res *res = &mpam_resctrl_controls[RDT_RESOURCE_L3];
+ struct rdt_resource *l3 = &res->resctrl_res;
+
+ /* All monitors are presented as being on the L3 cache */
+ return l3->mon_capable;
+}
+
bool resctrl_arch_get_cdp_enabled(enum resctrl_res_level rid)
{
return mpam_resctrl_controls[rid].cdp_enabled;
@@ -651,6 +677,57 @@ static int mpam_resctrl_pick_domain_id(int cpu, struct mpam_component *comp)
return comp->comp_id;
}
+static int mpam_resctrl_monitor_init(struct mpam_resctrl_mon *mon,
+ enum resctrl_event_id type)
+{
+ struct mpam_resctrl_res *res = &mpam_resctrl_controls[RDT_RESOURCE_L3];
+ struct rdt_resource *l3 = &res->resctrl_res;
+
+ lockdep_assert_cpus_held();
+
+ /*
+ * There also needs to be an L3 cache present.
+ * The check can use any online CPU, which can't go offline while we
+ * hold the cpus read lock.
+ */
+ if (get_cpu_cacheinfo_id(raw_smp_processor_id(), 3) == -1)
+ return 0;
+
+ /*
+ * If there are no MPAM resources on L3, force it into existence.
+ * topology_matches_l3() already ensures this looks like the L3.
+ * The domain-ids will be fixed up by mpam_resctrl_domain_hdr_init().
+ */
+ if (!res->class) {
+ pr_warn_once("Faking L3 MSC to enable counters.\n");
+ res->class = mpam_resctrl_counters[type].class;
+ }
+
+ /*
+ * Called multiple times, once per event type that has a
+ * monitoring class.
+ * Setting the name is necessary on monitor-only platforms.
+ */
+ l3->name = "L3";
+ l3->mon_scope = RESCTRL_L3_CACHE;
+
+ /*
+ * num-rmid is the upper bound for the number of monitoring
+ * groups that can exist simultaneously, including the
+ * default monitoring group for each control group. Hence,
+ * advertise the whole rmid_idx space even though each
+ * control group has its own pmg/rmid space. Unfortunately,
+ * this does mean userspace needs to know the architecture
+ * to correctly interpret this value.
+ */
+ l3->mon.num_rmid = resctrl_arch_system_num_rmid_idx();
+
+ if (resctrl_enable_mon_event(type, false, 0, NULL))
+ l3->mon_capable = true;
+
+ return 0;
+}
+
u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
u32 closid, enum resctrl_conf_type type)
{
@@ -883,6 +960,7 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
{
int err;
struct mpam_resctrl_dom *dom;
+ struct rdt_l3_mon_domain *mon_d;
struct rdt_ctrl_domain *ctrl_d;
struct mpam_class *class = res->class;
struct mpam_component *comp_iter, *ctrl_comp;
@@ -922,6 +1000,20 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
} else {
pr_debug("Skipped control domain online - no controls\n");
}
+
+ if (resctrl_arch_mon_capable()) {
+ mon_d = &dom->resctrl_mon_dom;
+ mpam_resctrl_domain_hdr_init(cpu, any_mon_comp, r->rid, &mon_d->hdr);
+ mon_d->hdr.type = RESCTRL_MON_DOMAIN;
+ err = resctrl_online_mon_domain(r, &mon_d->hdr);
+ if (err)
+ goto offline_ctrl_domain;
+
+ mpam_resctrl_domain_insert(&r->mon_domains, &mon_d->hdr);
+ } else {
+ pr_debug("Skipped monitor domain online - no monitors\n");
+ }
+
return dom;
offline_ctrl_domain:
@@ -973,6 +1065,11 @@ int mpam_resctrl_online_cpu(unsigned int cpu)
mpam_resctrl_online_domain_hdr(cpu, &ctrl_d->hdr);
}
+ if (resctrl_arch_mon_capable()) {
+ struct rdt_l3_mon_domain *mon_d = &dom->resctrl_mon_dom;
+
+ mpam_resctrl_online_domain_hdr(cpu, &mon_d->hdr);
+ }
}
if (IS_ERR(dom))
return PTR_ERR(dom);
@@ -993,8 +1090,9 @@ void mpam_resctrl_offline_cpu(unsigned int cpu)
guard(mutex)(&domain_list_lock);
for_each_mpam_resctrl_control(res, rid) {
struct mpam_resctrl_dom *dom;
+ struct rdt_l3_mon_domain *mon_d;
struct rdt_ctrl_domain *ctrl_d;
- bool ctrl_dom_empty;
+ bool ctrl_dom_empty, mon_dom_empty;
if (!res->class)
continue; // dummy resource
@@ -1012,7 +1110,16 @@ void mpam_resctrl_offline_cpu(unsigned int cpu)
ctrl_dom_empty = true;
}
- if (ctrl_dom_empty)
+ if (resctrl_arch_mon_capable()) {
+ mon_d = &dom->resctrl_mon_dom;
+ mon_dom_empty = mpam_resctrl_offline_domain_hdr(cpu, &mon_d->hdr);
+ if (mon_dom_empty)
+ resctrl_offline_mon_domain(&res->resctrl_res, &mon_d->hdr);
+ } else {
+ mon_dom_empty = true;
+ }
+
+ if (ctrl_dom_empty && mon_dom_empty)
kfree(dom);
}
}
@@ -1022,12 +1129,15 @@ int mpam_resctrl_setup(void)
int err = 0;
struct mpam_resctrl_res *res;
enum resctrl_res_level rid;
+ struct mpam_resctrl_mon *mon;
+ enum resctrl_event_id eventid;
wait_event(wait_cacheinfo_ready, cacheinfo_ready);
cpus_read_lock();
for_each_mpam_resctrl_control(res, rid) {
INIT_LIST_HEAD_RCU(&res->resctrl_res.ctrl_domains);
+ INIT_LIST_HEAD_RCU(&res->resctrl_res.mon_domains);
res->resctrl_res.rid = rid;
}
@@ -1043,25 +1153,37 @@ int mpam_resctrl_setup(void)
err = mpam_resctrl_control_init(res);
if (err) {
pr_debug("Failed to initialise rid %u\n", rid);
- break;
+ goto internal_error;
}
}
- cpus_read_unlock();
- if (err) {
- pr_debug("Internal error %d - resctrl not supported\n", err);
- return err;
+ for_each_mpam_resctrl_mon(mon, eventid) {
+ if (!mon->class)
+ continue; // dummy resource
+
+ err = mpam_resctrl_monitor_init(mon, eventid);
+ if (err) {
+ pr_debug("Failed to initialise event %u\n", eventid);
+ goto internal_error;
+ }
}
- if (!resctrl_arch_alloc_capable()) {
- pr_debug("No alloc(%u) found - resctrl not supported\n",
- resctrl_arch_alloc_capable());
+ cpus_read_unlock();
+
+ if (!resctrl_arch_alloc_capable() && !resctrl_arch_mon_capable()) {
+ pr_debug("No alloc(%u) or monitor(%u) found - resctrl not supported\n",
+ resctrl_arch_alloc_capable(), resctrl_arch_mon_capable());
return -EOPNOTSUPP;
}
/* TODO: call resctrl_init() */
return 0;
+
+internal_error:
+ cpus_read_unlock();
+ pr_debug("Internal error %d - resctrl not supported\n", err);
+ return err;
}
static int __init __cacheinfo_ready(void)
--
2.43.0
* Re: [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate
2026-02-24 17:57 ` [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate Ben Horgan
@ 2026-02-25 11:14 ` Jonathan Cameron
2026-02-26 3:47 ` Zeng Heng
1 sibling, 0 replies; 75+ messages in thread
From: Jonathan Cameron @ 2026-02-25 11:14 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc
On Tue, 24 Feb 2026 17:57:05 +0000
Ben Horgan <ben.horgan@arm.com> wrote:
> Add the boilerplate that tells resctrl about the mpam monitors that are
> available. resctrl expects all (non-telemetry) monitors to be on the L3 and
> so advertise them there and invent an L3 resctrl resource if required. The
> L3 cache itself has to exist as the cache ids are used as the domain
> ids.
>
> Bring the resctrl monitor domains online and offline based on the cpus
> they contain.
>
> Support for specific monitor types is left to later.
>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> New patch but mostly moved from the existing patches to
> separate the monitors from the controls and the boilerplate
> from the specific counters.
> Use l3->mon_capable in resctrl_arch_mon_capable() as
> resctrl_enable_mon_event() now returns a bool.
Just one trivial comment on short line wrap. I'm not that fussed though so
I don't mind if you only tidy that up if doing a v6.
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
> index 29efcad163e6..c14e59e8586d 100644
> --- a/drivers/resctrl/mpam_resctrl.c
> +++ b/drivers/resctrl/mpam_resctrl.c
> +static int mpam_resctrl_monitor_init(struct mpam_resctrl_mon *mon,
> + enum resctrl_event_id type)
> +{
...
> +
> + /*
> + * num-rmid is the upper bound for the number of monitoring
> + * groups that can exist simultaneously, including the
> + * default monitoring group for each control group. Hence,
> + * advertise the whole rmid_idx space even though each
> + * control group has its own pmg/rmid space. Unfortunately,
> + * this does mean userspace needs to know the architecture
> + * to correctly interpret this value.
Trivial, but that's an oddly short wrap. It should be:
* num-rmid is the upper bound for the number of monitoring groups that
* can exist simultaneously, including the default monitoring group for
* each control group. Hence, advertise the whole rmid_idx space even
* though each control group has its own pmg/rmid space. Unfortunately,
* this does mean userspace needs to know the architecture to correctly
* interpret this value.
The wonder of an email client with rulers :)
J
> + */
> + l3->mon.num_rmid = resctrl_arch_system_num_rmid_idx();
> +
> + if (resctrl_enable_mon_event(type, false, 0, NULL))
> + l3->mon_capable = true;
> +
> + return 0;
> +}
* Re: [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate
2026-02-24 17:57 ` [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate Ben Horgan
2026-02-25 11:14 ` Jonathan Cameron
@ 2026-02-26 3:47 ` Zeng Heng
2026-02-26 10:26 ` Ben Horgan
1 sibling, 1 reply; 75+ messages in thread
From: Zeng Heng @ 2026-02-26 3:47 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, linux-doc
Hi Ben,
On 2026/2/25 1:57, Ben Horgan wrote:
> Add the boilerplate that tells resctrl about the mpam monitors that are
> available. resctrl expects all (non-telemetry) monitors to be on the L3 and
> so advertise them there and invent an L3 resctrl resource if required. The
> L3 cache itself has to exist as the cache ids are used as the domain
> ids.
>
> Bring the resctrl monitor domains online and offline based on the cpus
> they contain.
>
> Support for specific monitor types is left to later.
>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
> New patch but mostly moved from the existing patches to
> separate the monitors from the controls and the boilerplate
> from the specific counters.
> Use l3->mon_capable in resctrl_arch_mon_capable() as
> resctrl_enable_mon_event() now returns a bool.
> ---
> drivers/resctrl/mpam_internal.h | 7 ++
> drivers/resctrl/mpam_resctrl.c | 142 +++++++++++++++++++++++++++++---
> 2 files changed, 139 insertions(+), 10 deletions(-)
>
[...]
> @@ -922,6 +1000,20 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
> } else {
> pr_debug("Skipped control domain online - no controls\n");
> }
> +
> + if (resctrl_arch_mon_capable()) {
> + mon_d = &dom->resctrl_mon_dom;
> + mpam_resctrl_domain_hdr_init(cpu, any_mon_comp, r->rid, &mon_d->hdr);
> + mon_d->hdr.type = RESCTRL_MON_DOMAIN;
> + err = resctrl_online_mon_domain(r, &mon_d->hdr);
> + if (err)
> + goto offline_ctrl_domain;
> +
> + mpam_resctrl_domain_insert(&r->mon_domains, &mon_d->hdr);
> + } else {
> + pr_debug("Skipped monitor domain online - no monitors\n");
> + }
> +
> return dom;
>
I noticed that resctrl_arch_mon_capable() only performs checks for L3
monitoring functionality. This leads to an issue on platforms that
include L2 monitoring capabilities, where the code incorrectly enters
this branch and triggers the following warning from
mpam_resctrl_domain_insert():
[ 22.867070] ------------[ cut here ]------------
[ 22.867073] WARNING: drivers/resctrl/mpam_resctrl.c:1495 at
mpam_resctrl_domain_insert+0x74/0x80, CPU#2: cpuhp/2/25
[ 29.376035] Modules linked in:
[ 29.379080] CPU: 2 UID: 0 PID: 25 Comm: cpuhp/2 Not tainted
7.0.0-rc1-g4288ec146462 #30 PREEMPT
[ 29.387853] Hardware name: To Be Filled By O.E.M. 183.0/To Be Filled
By O.E.M., BIOS 183.0 02/12/2026
[ 29.397058] pstate: 61400009 (nZCv daif +PAN -UAO -TCO +DIT -SSBS
BTYPE=--)
[ 29.404007] pc : mpam_resctrl_domain_insert+0x74/0x80
[ 29.409048] lr : mpam_resctrl_domain_insert+0x34/0x80
[ 29.414088] sp : ffff8000876abc60
...
[ 29.488625] Call trace:
[ 29.491060] mpam_resctrl_domain_insert+0x74/0x80 (P)
[ 29.496100] mpam_resctrl_online_cpu+0x2b4/0x428
[ 29.500706] mpam_cpu_online+0x274/0x298
[ 29.504618] cpuhp_invoke_callback+0x104/0x20c
[ 29.509052] cpuhp_thread_fun+0xa4/0x17c
[ 29.512963] smpboot_thread_fn+0x220/0x24c
[ 29.517048] kthread+0x120/0x12c
[ 29.520265] ret_from_fork+0x10/0x20
[ 29.523830] ---[ end trace 0000000000000000 ]---
To preserve the existing public interface of resctrl_arch_mon_capable(),
please consider the following approach:
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 694ea8548a05..b06a89494ff0 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -1563,6 +1563,10 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
if (resctrl_arch_mon_capable()) {
struct mpam_component *any_mon_comp;
struct mpam_resctrl_mon *mon;
enum resctrl_event_id eventid;
+ /* TODO: Only supports L3 monitor type currently. */
+ if (r->rid != RDT_RESOURCE_L3)
+ return dom;
Best regards,
Zeng Heng
* Re: [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate
2026-02-26 3:47 ` Zeng Heng
@ 2026-02-26 10:26 ` Ben Horgan
2026-02-27 3:01 ` Zeng Heng
0 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-26 10:26 UTC (permalink / raw)
To: Zeng Heng
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, linux-doc
Hi Zeng,
On 2/26/26 03:47, Zeng Heng wrote:
> Hi Ben,
>
> On 2026/2/25 1:57, Ben Horgan wrote:
>> Add the boilerplate that tells resctrl about the mpam monitors that are
>> available. resctrl expects all (non-telemetry) monitors to be on the
>> L3 and
>> so advertise them there and invent an L3 resctrl resource if required.
>> The
>> L3 cache itself has to exist as the cache ids are used as the domain
>> ids.
>>
>> Bring the resctrl monitor domains online and offline based on the cpus
>> they contain.
>>
>> Support for specific monitor types is left to later.
>>
>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>> ---
>> New patch but mostly moved from the existing patches to
>> separate the monitors from the controls and the boilerplate
>> from the specific counters.
>> Use l3->mon_capable in resctrl_arch_mon_capable() as
>> resctrl_enable_mon_event() now returns a bool.
>> ---
>> drivers/resctrl/mpam_internal.h | 7 ++
>> drivers/resctrl/mpam_resctrl.c | 142 +++++++++++++++++++++++++++++---
>> 2 files changed, 139 insertions(+), 10 deletions(-)
>>
>
> [...]
>
>> @@ -922,6 +1000,20 @@ mpam_resctrl_alloc_domain(unsigned int cpu,
>> struct mpam_resctrl_res *res)
>> } else {
>> pr_debug("Skipped control domain online - no controls\n");
>> }
>> +
>> + if (resctrl_arch_mon_capable()) {
>> + mon_d = &dom->resctrl_mon_dom;
>> + mpam_resctrl_domain_hdr_init(cpu, any_mon_comp, r->rid,
>> &mon_d->hdr);
>> + mon_d->hdr.type = RESCTRL_MON_DOMAIN;
>> + err = resctrl_online_mon_domain(r, &mon_d->hdr);
>> + if (err)
>> + goto offline_ctrl_domain;
>> +
>> + mpam_resctrl_domain_insert(&r->mon_domains, &mon_d->hdr);
>> + } else {
>> + pr_debug("Skipped monitor domain online - no monitors\n");
>> + }
>> +
>> return dom;
>>
>
> I noticed that resctrl_arch_mon_capable() only performs checks for L3
> monitoring functionality. This leads to an issue on platforms that
> include L2 monitoring capabilities, where the code incorrectly enters
> this branch and triggers the following warning by
> mpam_resctrl_domain_insert():
>
> [ 22.867070] ------------[ cut here ]------------
> [ 22.867073] WARNING: drivers/resctrl/mpam_resctrl.c:1495 at
> mpam_resctrl_domain_insert+0x74/0x80, CPU#2: cpuhp/2/25
> [ 29.376035] Modules linked in:
> [ 29.379080] CPU: 2 UID: 0 PID: 25 Comm: cpuhp/2 Not tainted 7.0.0-
> rc1-g4288ec146462 #30 PREEMPT
> [ 29.387853] Hardware name: To Be Filled By O.E.M. 183.0/To Be Filled
> By O.E.M., BIOS 183.0 02/12/2026
> [ 29.397058] pstate: 61400009 (nZCv daif +PAN -UAO -TCO +DIT -SSBS
> BTYPE=--)
> [ 29.404007] pc : mpam_resctrl_domain_insert+0x74/0x80
> [ 29.409048] lr : mpam_resctrl_domain_insert+0x34/0x80
> [ 29.414088] sp : ffff8000876abc60
> ...
> [ 29.488625] Call trace:
> [ 29.491060] mpam_resctrl_domain_insert+0x74/0x80 (P)
> [ 29.496100] mpam_resctrl_online_cpu+0x2b4/0x428
> [ 29.500706] mpam_cpu_online+0x274/0x298
> [ 29.504618] cpuhp_invoke_callback+0x104/0x20c
> [ 29.509052] cpuhp_thread_fun+0xa4/0x17c
> [ 29.512963] smpboot_thread_fn+0x220/0x24c
> [ 29.517048] kthread+0x120/0x12c
> [ 29.520265] ret_from_fork+0x10/0x20
> [ 29.523830] ---[ end trace 0000000000000000 ]---
Thanks for reporting this bug. It looks to be because resctrl_arch_mon_capable() tells us whether
there is any mon-capable resource, when what we really want to know is whether this particular resource is mon-capable.
The pattern occurs in a few places. Does this diff help?
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 694ea8548a05..19b306017845 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -1543,7 +1543,7 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
if (!dom)
return ERR_PTR(-ENOMEM);
- if (resctrl_arch_alloc_capable()) {
+ if (r->alloc_capable) {
dom->ctrl_comp = ctrl_comp;
ctrl_d = &dom->resctrl_ctrl_dom;
@@ -1558,7 +1558,7 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
pr_debug("Skipped control domain online - no controls\n");
}
- if (resctrl_arch_mon_capable()) {
+ if (r->mon_capable) {
struct mpam_component *any_mon_comp;
struct mpam_resctrl_mon *mon;
enum resctrl_event_id eventid;
@@ -1603,7 +1603,7 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
return dom;
offline_ctrl_domain:
- if (resctrl_arch_alloc_capable()) {
+ if (r->alloc_capable) {
mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
resctrl_offline_ctrl_domain(r, ctrl_d);
}
@@ -1671,6 +1671,7 @@ int mpam_resctrl_online_cpu(unsigned int cpu)
guard(mutex)(&domain_list_lock);
for_each_mpam_resctrl_control(res, rid) {
struct mpam_resctrl_dom *dom;
+ struct rdt_resource *r = &res->resctrl_res;
if (!res->class)
continue; // dummy_resource;
@@ -1679,12 +1680,12 @@ int mpam_resctrl_online_cpu(unsigned int cpu)
if (!dom) {
dom = mpam_resctrl_alloc_domain(cpu, res);
} else {
- if (resctrl_arch_alloc_capable()) {
+ if (r->alloc_capable) {
struct rdt_ctrl_domain *ctrl_d = &dom->resctrl_ctrl_dom;
mpam_resctrl_online_domain_hdr(cpu, &ctrl_d->hdr);
}
- if (resctrl_arch_mon_capable()) {
+ if (r->mon_capable) {
struct rdt_l3_mon_domain *mon_d = &dom->resctrl_mon_dom;
mpam_resctrl_online_domain_hdr(cpu, &mon_d->hdr);
@@ -1712,6 +1713,7 @@ void mpam_resctrl_offline_cpu(unsigned int cpu)
struct rdt_l3_mon_domain *mon_d;
struct rdt_ctrl_domain *ctrl_d;
bool ctrl_dom_empty, mon_dom_empty;
+ struct rdt_resource *r = &res->resctrl_res;
if (!res->class)
continue; // dummy resource
@@ -1720,7 +1722,7 @@ void mpam_resctrl_offline_cpu(unsigned int cpu)
if (WARN_ON_ONCE(!dom))
continue;
- if (resctrl_arch_alloc_capable()) {
+ if (r->alloc_capable) {
ctrl_d = &dom->resctrl_ctrl_dom;
ctrl_dom_empty = mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
if (ctrl_dom_empty)
@@ -1729,7 +1731,7 @@ void mpam_resctrl_offline_cpu(unsigned int cpu)
ctrl_dom_empty = true;
}
- if (resctrl_arch_mon_capable()) {
+ if (r->mon_capable) {
mon_d = &dom->resctrl_mon_dom;
mon_dom_empty = mpam_resctrl_offline_domain_hdr(cpu, &mon_d->hdr);
if (mon_dom_empty)
>
>
> To preserve the existing public interface of resctrl_arch_mon_capable(),
> please consider the following approach:
>
> diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/
> mpam_resctrl.c
> index 694ea8548a05..b06a89494ff0 100644
> --- a/drivers/resctrl/mpam_resctrl.c
> +++ b/drivers/resctrl/mpam_resctrl.c
> @@ -1563,6 +1563,10 @@ mpam_resctrl_alloc_domain(unsigned int cpu,
> struct mpam_resctrl_res *res)
> if (resctrl_arch_mon_capable()) {
> struct mpam_component *any_mon_comp;
> struct mpam_resctrl_mon *mon;
> enum resctrl_event_id eventid;
>
> + /* TODO: Only supports L3 monitor type currently. */
> + if (r->rid != RDT_RESOURCE_L3)
> + return dom;
>
>
>
> Best regards,
> Zeng Heng
Thanks,
Ben
^ permalink raw reply related [flat|nested] 75+ messages in thread* Re: [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate
2026-02-26 10:26 ` Ben Horgan
@ 2026-02-27 3:01 ` Zeng Heng
0 siblings, 0 replies; 75+ messages in thread
From: Zeng Heng @ 2026-02-27 3:01 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, linux-doc
Hi Ben,
On 2026/2/26 18:26, Ben Horgan wrote:
> Hi Zeng,
>
> On 2/26/26 03:47, Zeng Heng wrote:
>> Hi Ben,
>>
>> On 2026/2/25 1:57, Ben Horgan wrote:
>>> Add the boilerplate that tells resctrl about the mpam monitors that are
>>> available. resctrl expects all (non-telemetry) monitors to be on the
>>> L3 and
>>> so advertise them there and invent an L3 resctrl resource if required.
>>> The
>>> L3 cache itself has to exist as the cache ids are used as the domain
>>> ids.
>>>
>>> Bring the resctrl monitor domains online and offline based on the cpus
>>> they contain.
>>>
>>> Support for specific monitor types is left to later.
>>>
>>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>>> ---
>>> New patch but mostly moved from the existing patches to
>>> separate the monitors from the controls and the boilerplate
>>> from the specific counters.
>>> Use l3->mon_capable in resctrl_arch_mon_capable() as
>>> resctrl_enable_mon_event() now returns a bool.
>>> ---
>>> drivers/resctrl/mpam_internal.h | 7 ++
>>> drivers/resctrl/mpam_resctrl.c | 142 +++++++++++++++++++++++++++++---
>>> 2 files changed, 139 insertions(+), 10 deletions(-)
>>>
>>
>> [...]
>>
>>> @@ -922,6 +1000,20 @@ mpam_resctrl_alloc_domain(unsigned int cpu,
>>> struct mpam_resctrl_res *res)
>>> } else {
>>> pr_debug("Skipped control domain online - no controls\n");
>>> }
>>> +
>>> + if (resctrl_arch_mon_capable()) {
>>> + mon_d = &dom->resctrl_mon_dom;
>>> + mpam_resctrl_domain_hdr_init(cpu, any_mon_comp, r->rid,
>>> &mon_d->hdr);
>>> + mon_d->hdr.type = RESCTRL_MON_DOMAIN;
>>> + err = resctrl_online_mon_domain(r, &mon_d->hdr);
>>> + if (err)
>>> + goto offline_ctrl_domain;
>>> +
>>> + mpam_resctrl_domain_insert(&r->mon_domains, &mon_d->hdr);
>>> + } else {
>>> + pr_debug("Skipped monitor domain online - no monitors\n");
>>> + }
>>> +
>>> return dom;
>>>
>>
>> I noticed that resctrl_arch_mon_capable() only performs checks for L3
>> monitoring functionality. This leads to an issue on platforms that
>> include L2 monitoring capabilities, where the code incorrectly enters
>> this branch and triggers the following warning by
>> mpam_resctrl_domain_insert():
>>
>> [ 22.867070] ------------[ cut here ]------------
>> [ 22.867073] WARNING: drivers/resctrl/mpam_resctrl.c:1495 at
>> mpam_resctrl_domain_insert+0x74/0x80, CPU#2: cpuhp/2/25
>> [ 29.376035] Modules linked in:
>> [ 29.379080] CPU: 2 UID: 0 PID: 25 Comm: cpuhp/2 Not tainted 7.0.0-
>> rc1-g4288ec146462 #30 PREEMPT
>> [ 29.387853] Hardware name: To Be Filled By O.E.M. 183.0/To Be Filled
>> By O.E.M., BIOS 183.0 02/12/2026
>> [ 29.397058] pstate: 61400009 (nZCv daif +PAN -UAO -TCO +DIT -SSBS
>> BTYPE=--)
>> [ 29.404007] pc : mpam_resctrl_domain_insert+0x74/0x80
>> [ 29.409048] lr : mpam_resctrl_domain_insert+0x34/0x80
>> [ 29.414088] sp : ffff8000876abc60
>> ...
>> [ 29.488625] Call trace:
>> [ 29.491060] mpam_resctrl_domain_insert+0x74/0x80 (P)
>> [ 29.496100] mpam_resctrl_online_cpu+0x2b4/0x428
>> [ 29.500706] mpam_cpu_online+0x274/0x298
>> [ 29.504618] cpuhp_invoke_callback+0x104/0x20c
>> [ 29.509052] cpuhp_thread_fun+0xa4/0x17c
>> [ 29.512963] smpboot_thread_fn+0x220/0x24c
>> [ 29.517048] kthread+0x120/0x12c
>> [ 29.520265] ret_from_fork+0x10/0x20
>> [ 29.523830] ---[ end trace 0000000000000000 ]---
>
> Thanks for reporting this bug. It looks to be because resctrl_arch_mon_capable() tells us whether
> there is any mon-capable resource, when what we really want to know is whether this particular resource is mon-capable.
> The pattern occurs in a few places. Does this diff help?
>
I've applied the changes and local verification also passes.
> diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
> index 694ea8548a05..19b306017845 100644
> --- a/drivers/resctrl/mpam_resctrl.c
> +++ b/drivers/resctrl/mpam_resctrl.c
> @@ -1543,7 +1543,7 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
> if (!dom)
> return ERR_PTR(-ENOMEM);
>
> - if (resctrl_arch_alloc_capable()) {
> + if (r->alloc_capable) {
Yes, using r->alloc_capable and r->mon_capable here is indeed more
accurate and appropriate. I should have noticed this when reviewing
resctrl_arch_alloc_capable() and resctrl_arch_mon_capable().
Reviewed-by: Zeng Heng <zengheng4@huawei.com>
Thanks,
Zeng Heng
> dom->ctrl_comp = ctrl_comp;
>
> ctrl_d = &dom->resctrl_ctrl_dom;
> @@ -1558,7 +1558,7 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
> pr_debug("Skipped control domain online - no controls\n");
> }
>
> - if (resctrl_arch_mon_capable()) {
> + if (r->mon_capable) {
> struct mpam_component *any_mon_comp;
> struct mpam_resctrl_mon *mon;
> enum resctrl_event_id eventid;
> @@ -1603,7 +1603,7 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
> return dom;
>
> offline_ctrl_domain:
> - if (resctrl_arch_alloc_capable()) {
> + if (r->alloc_capable) {
> mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
> resctrl_offline_ctrl_domain(r, ctrl_d);
> }
> @@ -1671,6 +1671,7 @@ int mpam_resctrl_online_cpu(unsigned int cpu)
> guard(mutex)(&domain_list_lock);
> for_each_mpam_resctrl_control(res, rid) {
> struct mpam_resctrl_dom *dom;
> + struct rdt_resource *r = &res->resctrl_res;
>
> if (!res->class)
> continue; // dummy_resource;
> @@ -1679,12 +1680,12 @@ int mpam_resctrl_online_cpu(unsigned int cpu)
> if (!dom) {
> dom = mpam_resctrl_alloc_domain(cpu, res);
> } else {
> - if (resctrl_arch_alloc_capable()) {
> + if (r->alloc_capable) {
> struct rdt_ctrl_domain *ctrl_d = &dom->resctrl_ctrl_dom;
>
> mpam_resctrl_online_domain_hdr(cpu, &ctrl_d->hdr);
> }
> - if (resctrl_arch_mon_capable()) {
> + if (r->mon_capable) {
> struct rdt_l3_mon_domain *mon_d = &dom->resctrl_mon_dom;
>
> mpam_resctrl_online_domain_hdr(cpu, &mon_d->hdr);
> @@ -1712,6 +1713,7 @@ void mpam_resctrl_offline_cpu(unsigned int cpu)
> struct rdt_l3_mon_domain *mon_d;
> struct rdt_ctrl_domain *ctrl_d;
> bool ctrl_dom_empty, mon_dom_empty;
> + struct rdt_resource *r = &res->resctrl_res;
>
> if (!res->class)
> continue; // dummy resource
> @@ -1720,7 +1722,7 @@ void mpam_resctrl_offline_cpu(unsigned int cpu)
> if (WARN_ON_ONCE(!dom))
> continue;
>
> - if (resctrl_arch_alloc_capable()) {
> + if (r->alloc_capable) {
> ctrl_d = &dom->resctrl_ctrl_dom;
> ctrl_dom_empty = mpam_resctrl_offline_domain_hdr(cpu, &ctrl_d->hdr);
> if (ctrl_dom_empty)
> @@ -1729,7 +1731,7 @@ void mpam_resctrl_offline_cpu(unsigned int cpu)
> ctrl_dom_empty = true;
> }
>
> - if (resctrl_arch_mon_capable()) {
> + if (r->mon_capable) {
> mon_d = &dom->resctrl_mon_dom;
> mon_dom_empty = mpam_resctrl_offline_domain_hdr(cpu, &mon_d->hdr);
> if (mon_dom_empty)
>
>
>>
>>
>> To preserve the existing public interface of resctrl_arch_mon_capable(),
>> please consider the following approach:
>>
>> diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/
>> mpam_resctrl.c
>> index 694ea8548a05..b06a89494ff0 100644
>> --- a/drivers/resctrl/mpam_resctrl.c
>> +++ b/drivers/resctrl/mpam_resctrl.c
>> @@ -1563,6 +1563,10 @@ mpam_resctrl_alloc_domain(unsigned int cpu,
>> struct mpam_resctrl_res *res)
>> if (resctrl_arch_mon_capable()) {
>> struct mpam_component *any_mon_comp;
>> struct mpam_resctrl_mon *mon;
>> enum resctrl_event_id eventid;
>>
>> + /* TODO: Only supports L3 monitor type currently. */
>> + if (r->rid != RDT_RESOURCE_L3)
>> + return dom;
>>
>>
>>
>> Best regards,
>> Zeng Heng
>
>
> Thanks,
>
> Ben
>
>
^ permalink raw reply [flat|nested] 75+ messages in thread
* [PATCH v5 27/41] arm_mpam: resctrl: Add support for csu counters
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (25 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 26/41] arm_mpam: resctrl: Add monitor initialisation and domain boilerplate Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 28/41] arm_mpam: resctrl: Pick classes for use as mbm counters Ben Horgan
` (16 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
resctrl exposes a counter via a file named llc_occupancy. This isn't really
a counter, as its value goes up and down; it is a snapshot of the cache
storage usage (CSU) monitor.
Add some picking code, which will only find an L3. The resctrl counter
file is called llc_occupancy, but we don't check that this cache is the
last level as it has already been identified as the L3.
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Co-developed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
Allow csu counters however many partid or pmg there are
else if -> if
reduce scope of local variables
drop has_csu
Changes since v2:
return -> break so works for mbwu in later patch
add for_each_mpam_resctrl_mon
return error from mpam_resctrl_monitor_init(). It may fail when abmc
allocation is introduced in a later patch.
Squashed in patch from Dave Martin:
https://lore.kernel.org/lkml/20250820131621.54983-1-Dave.Martin@arm.com/
Changes since v3:
resctrl_enable_mon_event() signature update
Restrict the events considered
num-rmid update
Use raw_smp_processor_id()
Tighten heuristics:
Make sure it is the L3
Please shout if this means the counters aren't exposed on any platforms
Drop tags due to change in policy/rework
Changes since v4:
Move generic monitor boilerplate to separate patch
---
drivers/resctrl/mpam_resctrl.c | 83 ++++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index c14e59e8586d..a570cdf06feb 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -289,6 +289,28 @@ static bool class_has_usable_mba(struct mpam_props *cprops)
return mba_class_use_mbw_max(cprops);
}
+static bool cache_has_usable_csu(struct mpam_class *class)
+{
+ struct mpam_props *cprops;
+
+ if (!class)
+ return false;
+
+ cprops = &class->props;
+
+ if (!mpam_has_feature(mpam_feat_msmon_csu, cprops))
+ return false;
+
+ /*
+ * CSU counters settle on the value, so we can get away with
+ * having only one.
+ */
+ if (!cprops->num_csu_mon)
+ return false;
+
+ return true;
+}
+
/*
* Calculate the worst-case percentage change from each implemented step
* in the control.
@@ -602,6 +624,64 @@ static void mpam_resctrl_pick_mba(void)
}
}
+static void counter_update_class(enum resctrl_event_id evt_id,
+ struct mpam_class *class)
+{
+ struct mpam_class *existing_class = mpam_resctrl_counters[evt_id].class;
+
+ if (existing_class) {
+		if (existing_class->level == 3) {
+ pr_debug("Existing class is L3 - L3 wins\n");
+ return;
+ }
+
+ if (existing_class->level < class->level) {
+ pr_debug("Existing class is closer to L3, %u versus %u - closer is better\n",
+ existing_class->level, class->level);
+ return;
+ }
+ }
+
+ mpam_resctrl_counters[evt_id].class = class;
+}
+
+static void mpam_resctrl_pick_counters(void)
+{
+ struct mpam_class *class;
+
+ lockdep_assert_cpus_held();
+
+ guard(srcu)(&mpam_srcu);
+ list_for_each_entry_srcu(class, &mpam_classes, classes_list,
+ srcu_read_lock_held(&mpam_srcu)) {
+ /* The name of the resource is L3... */
+ if (class->type == MPAM_CLASS_CACHE && class->level != 3) {
+ pr_debug("class %u is a cache but not the L3", class->level);
+ continue;
+ }
+
+ if (!cpumask_equal(&class->affinity, cpu_possible_mask)) {
+ pr_debug("class %u does not cover all CPUs",
+ class->level);
+ continue;
+ }
+
+ if (cache_has_usable_csu(class)) {
+ pr_debug("class %u has usable CSU",
+ class->level);
+
+ /* CSU counters only make sense on a cache. */
+ switch (class->type) {
+ case MPAM_CLASS_CACHE:
+ counter_update_class(QOS_L3_OCCUP_EVENT_ID, class);
+ break;
+ default:
+ break;
+ }
+ }
+ }
+}
+
static int mpam_resctrl_control_init(struct mpam_resctrl_res *res)
{
struct mpam_class *class = res->class;
@@ -1157,6 +1237,9 @@ int mpam_resctrl_setup(void)
}
}
+ /* Find some classes to use for monitors */
+ mpam_resctrl_pick_counters();
+
for_each_mpam_resctrl_mon(mon, eventid) {
if (!mon->class)
continue; // dummy resource
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread* [PATCH v5 28/41] arm_mpam: resctrl: Pick classes for use as mbm counters
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (26 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 27/41] arm_mpam: resctrl: Add support for csu counters Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 29/41] arm_mpam: resctrl: Pre-allocate free running monitors Ben Horgan
` (15 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
resctrl has two types of counters, NUMA-local and global. MPAM can only
count global traffic, either using an MSC at the L3 cache or one in the
memory controllers. When global and local equate to the same thing,
continue to just call it global.
Because the class or component backing the event may not be 'the L3', it is
necessary for mpam_resctrl_get_domain_from_cpu() to search the monitor
domains too. This matters the most for 'monitor only' systems, where 'the
L3' control domains may be empty, and the ctrl_comp pointer NULL.
resctrl expects there to be enough monitors for every possible control and
monitor group to have one. Such a system gets called 'free running' as the
monitors can be programmed once and left running. Any other platform will
need to emulate ABMC.
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
drop has_mbwu
Changes since v2:
Iterate over mpam_resctrl_dom directly (Jonathan)
Use for_each_mpam_resctrl_mon
Changes since v3:
Don't continue if mon not found to avoid NULL pointer deref
use int for cache_id in mpam_resctrl_alloc_domain()
Update commit message
Take traffic into account
Only use mbm_total.
Drop tags due to rework
Changes since v4:
Add debug log when insufficient free running counters (added as abmc
dropped for now)
---
drivers/resctrl/mpam_internal.h | 8 +++
drivers/resctrl/mpam_resctrl.c | 124 +++++++++++++++++++++++++++++++-
2 files changed, 131 insertions(+), 1 deletion(-)
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 472bd5d27baa..d58428ba2005 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -340,6 +340,14 @@ struct mpam_msc_ris {
struct mpam_resctrl_dom {
struct mpam_component *ctrl_comp;
+
+ /*
+ * There is no single mon_comp because different events may be backed
+ * by different class/components. mon_comp is indexed by the event
+ * number.
+ */
+ struct mpam_component *mon_comp[QOS_NUM_EVENTS];
+
struct rdt_ctrl_domain resctrl_ctrl_dom;
struct rdt_l3_mon_domain resctrl_mon_dom;
};
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index a570cdf06feb..ddcf73567723 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -67,6 +67,14 @@ static bool cdp_enabled;
static bool cacheinfo_ready;
static DECLARE_WAIT_QUEUE_HEAD(wait_cacheinfo_ready);
+/* Whether this num_mbwu_mon could result in a free-running system */
+static int __mpam_monitors_free_running(u16 num_mbwu_mon)
+{
+ if (num_mbwu_mon >= resctrl_arch_system_num_rmid_idx())
+ return resctrl_arch_system_num_rmid_idx();
+ return 0;
+}
+
bool resctrl_arch_alloc_capable(void)
{
struct mpam_resctrl_res *res;
@@ -311,6 +319,28 @@ static bool cache_has_usable_csu(struct mpam_class *class)
return true;
}
+static bool class_has_usable_mbwu(struct mpam_class *class)
+{
+ struct mpam_props *cprops = &class->props;
+
+ if (!mpam_has_feature(mpam_feat_msmon_mbwu, cprops))
+ return false;
+
+ /*
+ * resctrl expects the bandwidth counters to be free running,
+ * which means we need as many monitors as resctrl has
+ * control/monitor groups.
+ */
+ if (__mpam_monitors_free_running(cprops->num_mbwu_mon)) {
+ pr_debug("monitors usable in free-running mode\n");
+ return true;
+ }
+
+ pr_debug("Insufficient monitors for free-running mode\n");
+
+ return false;
+}
+
/*
* Calculate the worst-case percentage change from each implemented step
* in the control.
@@ -679,6 +709,22 @@ static void mpam_resctrl_pick_counters(void)
break;
}
}
+
+ if (class_has_usable_mbwu(class) &&
+ topology_matches_l3(class) &&
+ traffic_matches_l3(class)) {
+ pr_debug("class %u has usable MBWU, and matches L3 topology and traffic\n",
+ class->level);
+
+ /*
+ * We can't distinguish traffic by destination so
+ * we don't know if it's staying on the same NUMA
+ * node. Hence, we can't calculate mbm_local except
+ * when we only have one L3 and it's equivalent to
+ * mbm_total and so always use mbm_total.
+ */
+ counter_update_class(QOS_L3_MBM_TOTAL_EVENT_ID, class);
+ }
}
}
@@ -1035,6 +1081,20 @@ static void mpam_resctrl_domain_insert(struct list_head *list,
list_add_tail_rcu(&new->list, pos);
}
+static struct mpam_component *find_component(struct mpam_class *class, int cpu)
+{
+ struct mpam_component *comp;
+
+ guard(srcu)(&mpam_srcu);
+ list_for_each_entry_srcu(comp, &class->components, class_list,
+ srcu_read_lock_held(&mpam_srcu)) {
+ if (cpumask_test_cpu(cpu, &comp->affinity))
+ return comp;
+ }
+
+ return NULL;
+}
+
static struct mpam_resctrl_dom *
mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
{
@@ -1082,6 +1142,35 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
}
if (resctrl_arch_mon_capable()) {
+		struct mpam_component *any_mon_comp = NULL;
+ struct mpam_resctrl_mon *mon;
+ enum resctrl_event_id eventid;
+
+ /*
+ * Even if the monitor domain is backed by a different
+ * component, the L3 component IDs need to be used... only
+ * there may be no ctrl_comp for the L3.
+ * Search each event's class list for a component with
+ * overlapping CPUs and set up the dom->mon_comp array.
+ */
+
+ for_each_mpam_resctrl_mon(mon, eventid) {
+ struct mpam_component *mon_comp;
+
+ if (!mon->class)
+ continue; // dummy resource
+
+ mon_comp = find_component(mon->class, cpu);
+ dom->mon_comp[eventid] = mon_comp;
+ if (mon_comp)
+ any_mon_comp = mon_comp;
+ }
+ if (!any_mon_comp) {
+			WARN_ON_ONCE(1);
+ err = -EFAULT;
+ goto offline_ctrl_domain;
+ }
+
mon_d = &dom->resctrl_mon_dom;
mpam_resctrl_domain_hdr_init(cpu, any_mon_comp, r->rid, &mon_d->hdr);
mon_d->hdr.type = RESCTRL_MON_DOMAIN;
@@ -1108,6 +1197,35 @@ mpam_resctrl_alloc_domain(unsigned int cpu, struct mpam_resctrl_res *res)
return dom;
}
+/*
+ * We know all the monitors are associated with the L3, even if there are no
+ * controls and therefore no control component. Find the cache-id for the CPU
+ * and use that to search for existing resctrl domains.
+ * This relies on mpam_resctrl_pick_domain_id() using the L3 cache-id
+ * for anything that is not a cache.
+ */
+static struct mpam_resctrl_dom *mpam_resctrl_get_mon_domain_from_cpu(int cpu)
+{
+ int cache_id;
+ struct mpam_resctrl_dom *dom;
+ struct mpam_resctrl_res *l3 = &mpam_resctrl_controls[RDT_RESOURCE_L3];
+
+ lockdep_assert_cpus_held();
+
+ if (!l3->class)
+ return NULL;
+ cache_id = get_cpu_cacheinfo_id(cpu, 3);
+ if (cache_id < 0)
+ return NULL;
+
+ list_for_each_entry_rcu(dom, &l3->resctrl_res.mon_domains, resctrl_mon_dom.hdr.list) {
+ if (dom->resctrl_mon_dom.hdr.id == cache_id)
+ return dom;
+ }
+
+ return NULL;
+}
+
static struct mpam_resctrl_dom *
mpam_resctrl_get_domain_from_cpu(int cpu, struct mpam_resctrl_res *res)
{
@@ -1121,7 +1239,11 @@ mpam_resctrl_get_domain_from_cpu(int cpu, struct mpam_resctrl_res *res)
return dom;
}
- return NULL;
+ if (r->rid != RDT_RESOURCE_L3)
+ return NULL;
+
+ /* Search the mon domain list too - needed on monitor only platforms. */
+ return mpam_resctrl_get_mon_domain_from_cpu(cpu);
}
int mpam_resctrl_online_cpu(unsigned int cpu)
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread* [PATCH v5 29/41] arm_mpam: resctrl: Pre-allocate free running monitors
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (27 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 28/41] arm_mpam: resctrl: Pick classes for use as mbm counters Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 30/41] arm_mpam: resctrl: Allow resctrl to allocate monitors Ben Horgan
` (14 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
When there are enough monitors, the resctrl mbm local and total files can
be exposed. These need all the monitors that resctrl may use to be
allocated up front.
Add helpers to do this.
If a different candidate class is discovered, the old array should be
free'd and the allocated monitors returned to the driver.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Code flow tidying (Jonathan)
---
drivers/resctrl/mpam_internal.h | 3 +-
drivers/resctrl/mpam_resctrl.c | 81 ++++++++++++++++++++++++++++++++-
2 files changed, 81 insertions(+), 3 deletions(-)
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index d58428ba2005..f278fa7307af 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -361,7 +361,8 @@ struct mpam_resctrl_res {
struct mpam_resctrl_mon {
struct mpam_class *class;
- /* per-class data that resctrl needs will live here */
+ /* Array of allocated MBWU monitors, indexed by (closid, rmid). */
+ int *mbwu_idx_to_mon;
};
static inline int mpam_alloc_csu_mon(struct mpam_class *class)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index ddcf73567723..c07f0304fae6 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -654,10 +654,58 @@ static void mpam_resctrl_pick_mba(void)
}
}
+static void __free_mbwu_mon(struct mpam_class *class, int *array,
+ u16 num_mbwu_mon)
+{
+ for (int i = 0; i < num_mbwu_mon; i++) {
+ if (array[i] < 0)
+ continue;
+
+ mpam_free_mbwu_mon(class, array[i]);
+ array[i] = ~0;
+ }
+}
+
+static int __alloc_mbwu_mon(struct mpam_class *class, int *array,
+ u16 num_mbwu_mon)
+{
+ for (int i = 0; i < num_mbwu_mon; i++) {
+ int mbwu_mon = mpam_alloc_mbwu_mon(class);
+
+ if (mbwu_mon < 0) {
+ __free_mbwu_mon(class, array, num_mbwu_mon);
+ return mbwu_mon;
+ }
+ array[i] = mbwu_mon;
+ }
+
+ return 0;
+}
+
+static int *__alloc_mbwu_array(struct mpam_class *class, u16 num_mbwu_mon)
+{
+ int err;
+ size_t array_size = num_mbwu_mon * sizeof(int);
+ int *array __free(kfree) = kmalloc(array_size, GFP_KERNEL);
+
+ if (!array)
+ return ERR_PTR(-ENOMEM);
+
+ memset(array, -1, array_size);
+
+ err = __alloc_mbwu_mon(class, array, num_mbwu_mon);
+ if (err)
+ return ERR_PTR(err);
+ return_ptr(array);
+}
+
static void counter_update_class(enum resctrl_event_id evt_id,
struct mpam_class *class)
{
- struct mpam_class *existing_class = mpam_resctrl_counters[evt_id].class;
+ struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[evt_id];
+ struct mpam_class *existing_class = mon->class;
+ u16 num_mbwu_mon = class->props.num_mbwu_mon;
+ int *new_array, *existing_array = mon->mbwu_idx_to_mon;
if (existing_class) {
if (class->level == 3) {
@@ -672,7 +720,36 @@ static void counter_update_class(enum resctrl_event_id evt_id,
}
}
- mpam_resctrl_counters[evt_id].class = class;
+ pr_debug("Updating event %u to use class %u\n", evt_id, class->level);
+
+ /* Might not need all the monitors */
+ num_mbwu_mon = __mpam_monitors_free_running(num_mbwu_mon);
+
+ if (evt_id != QOS_L3_OCCUP_EVENT_ID && num_mbwu_mon) {
+ /*
+ * This is the pre-allocated free-running monitors path. It always
+ * allocates one monitor per PARTID * PMG.
+ */
+ WARN_ON_ONCE(num_mbwu_mon != resctrl_arch_system_num_rmid_idx());
+
+ new_array = __alloc_mbwu_array(class, num_mbwu_mon);
+ if (IS_ERR(new_array)) {
+ pr_debug("Failed to allocate MBWU array\n");
+ return;
+ }
+ mon->mbwu_idx_to_mon = new_array;
+
+ if (existing_array) {
+ pr_debug("Releasing previous class %u's monitors\n",
+ existing_class->level);
+ __free_mbwu_mon(existing_class, existing_array, num_mbwu_mon);
+ kfree(existing_array);
+ }
+ } else if (evt_id != QOS_L3_OCCUP_EVENT_ID) {
+ pr_debug("Not pre-allocating free-running counters\n");
+ }
+
+ mon->class = class;
}
static void mpam_resctrl_pick_counters(void)
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread* [PATCH v5 30/41] arm_mpam: resctrl: Allow resctrl to allocate monitors
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (28 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 29/41] arm_mpam: resctrl: Pre-allocate free running monitors Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 31/41] arm_mpam: resctrl: Add resctrl_arch_rmid_read() and resctrl_arch_reset_rmid() Ben Horgan
` (13 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
When resctrl wants to read a domain's 'QOS_L3_OCCUP', it needs to allocate
a monitor on the corresponding resource. Monitors are allocated by class
instead of component.
If there are enough MBM monitors, they will be pre-allocated and
free-running.
Add helpers to allocate a CSU monitor. These helpers return an out-of-range
value for MBM counters.
Allocating a monitor context is expected to block until hardware resources
become available. This only makes sense for QOS_L3_OCCUP, as unallocated MBM
counters lose data.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
USE_RMID_IDX -> USE_PRE_ALLOCATED in comment
Remove unnecessary arch_mon_ctx = NULL
Changes since v2:
Add include of resctrl_types.h as dropped from earlier patch
Changes since v3:
Don't mention ABMC in commit message
---
drivers/resctrl/mpam_internal.h | 14 ++++++-
drivers/resctrl/mpam_resctrl.c | 67 +++++++++++++++++++++++++++++++++
include/linux/arm_mpam.h | 5 +++
3 files changed, 85 insertions(+), 1 deletion(-)
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index f278fa7307af..5fac8fa115ff 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -29,6 +29,14 @@ struct platform_device;
#define PACKED_FOR_KUNIT
#endif
+/*
+ * These 'mon' values must not alias an actual monitor, so must be larger than
+ * U16_MAX, but not be confused with an errno value, so smaller than
+ * (u32)-SZ_4K.
+ * USE_PRE_ALLOCATED is used to avoid confusion with an actual monitor.
+ */
+#define USE_PRE_ALLOCATED (U16_MAX + 1)
+
static inline bool mpam_is_enabled(void)
{
return static_branch_likely(&mpam_enabled);
@@ -216,7 +224,11 @@ enum mon_filter_options {
};
struct mon_cfg {
- u16 mon;
+ /*
+ * mon must be large enough to hold out of range values like
+ * USE_PRE_ALLOCATED
+ */
+ u32 mon;
u8 pmg;
bool match_pmg;
bool csu_exclude_clean;
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index c07f0304fae6..ce261af2ca2c 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -22,6 +22,8 @@
#include "mpam_internal.h"
+DECLARE_WAIT_QUEUE_HEAD(resctrl_mon_ctx_waiters);
+
/*
* The classes we've picked to map to resctrl resources, wrapped
* in with their resctrl structure.
@@ -275,6 +277,71 @@ struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l)
return &mpam_resctrl_controls[l].resctrl_res;
}
+static int resctrl_arch_mon_ctx_alloc_no_wait(enum resctrl_event_id evtid)
+{
+ struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[evtid];
+
+ if (!mon->class)
+ return -EINVAL;
+
+ switch (evtid) {
+ case QOS_L3_OCCUP_EVENT_ID:
+ /* With CDP, one monitor gets used for both code/data reads */
+ return mpam_alloc_csu_mon(mon->class);
+ case QOS_L3_MBM_LOCAL_EVENT_ID:
+ case QOS_L3_MBM_TOTAL_EVENT_ID:
+ return USE_PRE_ALLOCATED;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+void *resctrl_arch_mon_ctx_alloc(struct rdt_resource *r,
+ enum resctrl_event_id evtid)
+{
+ DEFINE_WAIT(wait);
+ int *ret;
+
+ ret = kmalloc(sizeof(*ret), GFP_KERNEL);
+ if (!ret)
+ return ERR_PTR(-ENOMEM);
+
+ do {
+ prepare_to_wait(&resctrl_mon_ctx_waiters, &wait,
+ TASK_INTERRUPTIBLE);
+ *ret = resctrl_arch_mon_ctx_alloc_no_wait(evtid);
+ if (*ret == -ENOSPC)
+ schedule();
+ } while (*ret == -ENOSPC && !signal_pending(current));
+ finish_wait(&resctrl_mon_ctx_waiters, &wait);
+
+ return ret;
+}
+
+static void resctrl_arch_mon_ctx_free_no_wait(enum resctrl_event_id evtid,
+ u32 mon_idx)
+{
+ struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[evtid];
+
+ if (!mon->class)
+ return;
+
+ if (evtid == QOS_L3_OCCUP_EVENT_ID)
+ mpam_free_csu_mon(mon->class, mon_idx);
+
+ wake_up(&resctrl_mon_ctx_waiters);
+}
+
+void resctrl_arch_mon_ctx_free(struct rdt_resource *r,
+ enum resctrl_event_id evtid, void *arch_mon_ctx)
+{
+ u32 mon_idx = *(u32 *)arch_mon_ctx;
+
+ kfree(arch_mon_ctx);
+
+ resctrl_arch_mon_ctx_free_no_wait(evtid, mon_idx);
+}
+
static bool cache_has_usable_cpor(struct mpam_class *class)
{
struct mpam_props *cprops = &class->props;
diff --git a/include/linux/arm_mpam.h b/include/linux/arm_mpam.h
index 7d23c90f077d..e1461e32af75 100644
--- a/include/linux/arm_mpam.h
+++ b/include/linux/arm_mpam.h
@@ -5,6 +5,7 @@
#define __LINUX_ARM_MPAM_H
#include <linux/acpi.h>
+#include <linux/resctrl_types.h>
#include <linux/types.h>
struct mpam_msc;
@@ -62,6 +63,10 @@ u32 resctrl_arch_rmid_idx_encode(u32 closid, u32 rmid);
void resctrl_arch_rmid_idx_decode(u32 idx, u32 *closid, u32 *rmid);
u32 resctrl_arch_system_num_rmid_idx(void);
+struct rdt_resource;
+void *resctrl_arch_mon_ctx_alloc(struct rdt_resource *r, enum resctrl_event_id evtid);
+void resctrl_arch_mon_ctx_free(struct rdt_resource *r, enum resctrl_event_id evtid, void *ctx);
+
/**
* mpam_register_requestor() - Register a requestor with the MPAM driver
* @partid_max: The maximum PARTID value the requestor can generate.
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread
* [PATCH v5 31/41] arm_mpam: resctrl: Add resctrl_arch_rmid_read() and resctrl_arch_reset_rmid()
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (29 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 30/41] arm_mpam: resctrl: Allow resctrl to allocate monitors Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-03-07 9:29 ` Zeng Heng
2026-02-24 17:57 ` [PATCH v5 32/41] arm_mpam: resctrl: Update the rmid reallocation limit Ben Horgan
` (12 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
resctrl uses resctrl_arch_rmid_read() to read counters. CDP emulation means
the counter may need reading in three different ways. The same goes for
reset.
The helpers behind the resctrl_arch_ functions will be re-used for the ABMC
equivalent functions.
Add the rounding helper for checking monitor values while we're here.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
cfg initialisation style
code flow at end of read_mon_cdp_safe()
Changes since v2:
Whitespace changes
Changes since v3:
Update function signatures
Remove abmc check
---
drivers/resctrl/mpam_resctrl.c | 153 +++++++++++++++++++++++++++++++++
include/linux/arm_mpam.h | 5 ++
2 files changed, 158 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index ce261af2ca2c..99b6ad89f1ab 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -342,6 +342,159 @@ void resctrl_arch_mon_ctx_free(struct rdt_resource *r,
resctrl_arch_mon_ctx_free_no_wait(evtid, mon_idx);
}
+static int __read_mon(struct mpam_resctrl_mon *mon, struct mpam_component *mon_comp,
+ enum mpam_device_features mon_type,
+ int mon_idx,
+ enum resctrl_conf_type cdp_type, u32 closid, u32 rmid, u64 *val)
+{
+ struct mon_cfg cfg;
+
+ if (!mpam_is_enabled())
+ return -EINVAL;
+
+ /* Shift closid to account for CDP */
+ closid = resctrl_get_config_index(closid, cdp_type);
+
+ if (mon_idx == USE_PRE_ALLOCATED) {
+ int mbwu_idx = resctrl_arch_rmid_idx_encode(closid, rmid);
+
+ mon_idx = mon->mbwu_idx_to_mon[mbwu_idx];
+ if (mon_idx == -1)
+ return -EINVAL;
+ }
+
+ if (irqs_disabled()) {
+ /* Check if we can access this domain without an IPI */
+ return -EIO;
+ }
+
+ cfg = (struct mon_cfg) {
+ .mon = mon_idx,
+ .match_pmg = true,
+ .partid = closid,
+ .pmg = rmid,
+ };
+
+ return mpam_msmon_read(mon_comp, &cfg, mon_type, val);
+}
+
+static int read_mon_cdp_safe(struct mpam_resctrl_mon *mon, struct mpam_component *mon_comp,
+ enum mpam_device_features mon_type,
+ int mon_idx, u32 closid, u32 rmid, u64 *val)
+{
+ if (cdp_enabled) {
+ u64 code_val = 0, data_val = 0;
+ int err;
+
+ err = __read_mon(mon, mon_comp, mon_type, mon_idx,
+ CDP_CODE, closid, rmid, &code_val);
+ if (err)
+ return err;
+
+ err = __read_mon(mon, mon_comp, mon_type, mon_idx,
+ CDP_DATA, closid, rmid, &data_val);
+ if (err)
+ return err;
+
+ *val += code_val + data_val;
+ return 0;
+ }
+
+ return __read_mon(mon, mon_comp, mon_type, mon_idx,
+ CDP_NONE, closid, rmid, val);
+}
+
+/* MBWU when not in ABMC mode, and CSU counters. */
+int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *hdr,
+ u32 closid, u32 rmid, enum resctrl_event_id eventid,
+ void *arch_priv, u64 *val, void *arch_mon_ctx)
+{
+ struct mpam_resctrl_dom *l3_dom;
+ struct mpam_component *mon_comp;
+ u32 mon_idx = *(u32 *)arch_mon_ctx;
+ enum mpam_device_features mon_type;
+ struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[eventid];
+
+ resctrl_arch_rmid_read_context_check();
+
+ if (eventid >= QOS_NUM_EVENTS || !mon->class)
+ return -EINVAL;
+
+ l3_dom = container_of(hdr, struct mpam_resctrl_dom, resctrl_mon_dom.hdr);
+ mon_comp = l3_dom->mon_comp[eventid];
+
+ switch (eventid) {
+ case QOS_L3_OCCUP_EVENT_ID:
+ mon_type = mpam_feat_msmon_csu;
+ break;
+ case QOS_L3_MBM_LOCAL_EVENT_ID:
+ case QOS_L3_MBM_TOTAL_EVENT_ID:
+ mon_type = mpam_feat_msmon_mbwu;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return read_mon_cdp_safe(mon, mon_comp, mon_type, mon_idx,
+ closid, rmid, val);
+}
+
+static void __reset_mon(struct mpam_resctrl_mon *mon, struct mpam_component *mon_comp,
+ int mon_idx,
+ enum resctrl_conf_type cdp_type, u32 closid, u32 rmid)
+{
+ struct mon_cfg cfg = { };
+
+ if (!mpam_is_enabled())
+ return;
+
+ /* Shift closid to account for CDP */
+ closid = resctrl_get_config_index(closid, cdp_type);
+
+ if (mon_idx == USE_PRE_ALLOCATED) {
+ int mbwu_idx = resctrl_arch_rmid_idx_encode(closid, rmid);
+
+ mon_idx = mon->mbwu_idx_to_mon[mbwu_idx];
+ }
+
+ if (mon_idx == -1)
+ return;
+ cfg.mon = mon_idx;
+ mpam_msmon_reset_mbwu(mon_comp, &cfg);
+}
+
+static void reset_mon_cdp_safe(struct mpam_resctrl_mon *mon, struct mpam_component *mon_comp,
+ int mon_idx, u32 closid, u32 rmid)
+{
+ if (cdp_enabled) {
+ __reset_mon(mon, mon_comp, mon_idx, CDP_CODE, closid, rmid);
+ __reset_mon(mon, mon_comp, mon_idx, CDP_DATA, closid, rmid);
+ } else {
+ __reset_mon(mon, mon_comp, mon_idx, CDP_NONE, closid, rmid);
+ }
+}
+
+/* Called via IPI. Call with read_cpus_lock() held. */
+void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_l3_mon_domain *d,
+ u32 closid, u32 rmid, enum resctrl_event_id eventid)
+{
+ struct mpam_resctrl_dom *l3_dom;
+ struct mpam_component *mon_comp;
+ struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[eventid];
+
+ if (!mpam_is_enabled())
+ return;
+
+ /* Only MBWU counters are relevant, and for supported event types. */
+ if (eventid == QOS_L3_OCCUP_EVENT_ID || !mon->class)
+ return;
+
+ l3_dom = container_of(d, struct mpam_resctrl_dom, resctrl_mon_dom);
+ mon_comp = l3_dom->mon_comp[eventid];
+
+ reset_mon_cdp_safe(mon, mon_comp, USE_PRE_ALLOCATED, closid, rmid);
+}
+
static bool cache_has_usable_cpor(struct mpam_class *class)
{
struct mpam_props *cprops = &class->props;
diff --git a/include/linux/arm_mpam.h b/include/linux/arm_mpam.h
index e1461e32af75..86d5e326d2bd 100644
--- a/include/linux/arm_mpam.h
+++ b/include/linux/arm_mpam.h
@@ -67,6 +67,11 @@ struct rdt_resource;
void *resctrl_arch_mon_ctx_alloc(struct rdt_resource *r, enum resctrl_event_id evtid);
void resctrl_arch_mon_ctx_free(struct rdt_resource *r, enum resctrl_event_id evtid, void *ctx);
+static inline unsigned int resctrl_arch_round_mon_val(unsigned int val)
+{
+ return val;
+}
+
/**
* mpam_register_requestor() - Register a requestor with the MPAM driver
* @partid_max: The maximum PARTID value the requestor can generate.
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread
* Re: [PATCH v5 31/41] arm_mpam: resctrl: Add resctrl_arch_rmid_read() and resctrl_arch_reset_rmid()
2026-02-24 17:57 ` [PATCH v5 31/41] arm_mpam: resctrl: Add resctrl_arch_rmid_read() and resctrl_arch_reset_rmid() Ben Horgan
@ 2026-03-07 9:29 ` Zeng Heng
2026-03-09 16:30 ` Ben Horgan
0 siblings, 1 reply; 75+ messages in thread
From: Zeng Heng @ 2026-03-07 9:29 UTC (permalink / raw)
To: Ben Horgan, James Morse
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, linux-doc,
Shaopeng Tan, Kefeng Wang
Hi Ben,
On 2026/2/25 1:57, Ben Horgan wrote:
> From: James Morse <james.morse@arm.com>
>
> resctrl uses resctrl_arch_rmid_read() to read counters. CDP emulation means
> the counter may need reading in three different ways. The same goes for
> reset.
>
> The helpers behind the resctrl_arch_ functions will be re-used for the ABMC
> equivalent functions.
>
> Add the rounding helper for checking monitor values while we're here.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Tested-by: Peter Newman <peternewman@google.com>
> Tested-by: Zeng Heng <zengheng4@huawei.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
> ---
[...]
> +
> +static int read_mon_cdp_safe(struct mpam_resctrl_mon *mon, struct mpam_component *mon_comp,
> + enum mpam_device_features mon_type,
> + int mon_idx, u32 closid, u32 rmid, u64 *val)
> +{
> + if (cdp_enabled) {
While reviewing the resctrl limbo handling code, I noticed an issue in
__check_limbo() that could lead to premature RMID release when CDP is
enabled.
In __check_limbo(), RMIDs in limbo state undergo L3 occupancy checks
before being released. This check is performed via
resctrl_arch_rmid_read(), which on arm64 MPAM relies on the cdp_enabled
state to determine which PARTID to check.
The concern arises in the following scenario: Filesystem is mounted with
CDP enabled. During normal operation, some RMIDs enter limbo. On umount,
cdp_enabled is reset to false. __check_limbo() may then run and perform
L3 checks with cdp_enabled = false. This could cause RMIDs to be
incorrectly released from limbo while still effectively busy after
remount.
Apologies for not providing a ready-made fix in this email. However,
I would appreciate to hear the community's thoughts on this issue.
> + u64 code_val = 0, data_val = 0;
> + int err;
> +
> + err = __read_mon(mon, mon_comp, mon_type, mon_idx,
> + CDP_CODE, closid, rmid, &code_val);
> + if (err)
> + return err;
> +
> + err = __read_mon(mon, mon_comp, mon_type, mon_idx,
> + CDP_DATA, closid, rmid, &data_val);
> + if (err)
> + return err;
> +
> + *val += code_val + data_val;
> + return 0;
> + }
> +
> + return __read_mon(mon, mon_comp, mon_type, mon_idx,
> + CDP_NONE, closid, rmid, val);
> +}
> +
> +/* MBWU when not in ABMC mode, and CSU counters. */
> +int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *hdr,
> + u32 closid, u32 rmid, enum resctrl_event_id eventid,
> + void *arch_priv, u64 *val, void *arch_mon_ctx)
> +{
> + struct mpam_resctrl_dom *l3_dom;
> + struct mpam_component *mon_comp;
> + u32 mon_idx = *(u32 *)arch_mon_ctx;
> + enum mpam_device_features mon_type;
> + struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[eventid];
> +
> + resctrl_arch_rmid_read_context_check();
> +
> + if (eventid >= QOS_NUM_EVENTS || !mon->class)
> + return -EINVAL;
> +
> + l3_dom = container_of(hdr, struct mpam_resctrl_dom, resctrl_mon_dom.hdr);
> + mon_comp = l3_dom->mon_comp[eventid];
> +
> + switch (eventid) {
> + case QOS_L3_OCCUP_EVENT_ID:
> + mon_type = mpam_feat_msmon_csu;
> + break;
> + case QOS_L3_MBM_LOCAL_EVENT_ID:
> + case QOS_L3_MBM_TOTAL_EVENT_ID:
> + mon_type = mpam_feat_msmon_mbwu;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + return read_mon_cdp_safe(mon, mon_comp, mon_type, mon_idx,
> + closid, rmid, val);
> +}
> +
Best regards,
Zeng Heng
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [PATCH v5 31/41] arm_mpam: resctrl: Add resctrl_arch_rmid_read() and resctrl_arch_reset_rmid()
2026-03-07 9:29 ` Zeng Heng
@ 2026-03-09 16:30 ` Ben Horgan
2026-03-10 3:23 ` Zeng Heng
0 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-03-09 16:30 UTC (permalink / raw)
To: Zeng Heng, James Morse
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, jonathan.cameron, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, linux-doc, Shaopeng Tan,
Kefeng Wang
Hi Zeng,
On 3/7/26 09:29, Zeng Heng wrote:
> Hi Ben,
>
> On 2026/2/25 1:57, Ben Horgan wrote:
>> From: James Morse <james.morse@arm.com>
>>
>> resctrl uses resctrl_arch_rmid_read() to read counters. CDP emulation
>> means
>> the counter may need reading in three different ways. The same goes for
>> reset.
>>
>> The helpers behind the resctrl_arch_ functions will be re-used for the
>> ABMC
>> equivalent functions.
>>
>> Add the rounding helper for checking monitor values while we're here.
>>
>> Tested-by: Gavin Shan <gshan@redhat.com>
>> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>> Tested-by: Peter Newman <peternewman@google.com>
>> Tested-by: Zeng Heng <zengheng4@huawei.com>
>> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
>> Signed-off-by: James Morse <james.morse@arm.com>
>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>> ---
>
> [...]
>
>> +
>> +static int read_mon_cdp_safe(struct mpam_resctrl_mon *mon, struct
>> mpam_component *mon_comp,
>> + enum mpam_device_features mon_type,
>> + int mon_idx, u32 closid, u32 rmid, u64 *val)
>> +{
>> + if (cdp_enabled) {
>
> While reviewing the resctrl limbo handling code, I noticed a issue in
> __check_limbo() that could lead to premature RMID release when CDP is
> enabled.
>
> In __check_limbo(), RMIDs in limbo state undergo L3 occupancy checks
> before being released. This check is performed via
> resctrl_arch_rmid_read(), on arm64 MPAM, which relies on the cdp_enabled
> state to determine to check which PARTID.
>
> The concern arises in the following scenario: Filesystem is mounted with
> CDP enabled. During normal operation, some RMIDs enter limbo. On umount,
> cdp_enabled is reset to false. __check_limbo() may then run and perform
> L3 checks with cdp_enabled = false. This could cause RMIDs to be
> incorrectly released from limbo while still effectively busy after
> remount.
I think a stale limbo list causes more problems than that. If you mount
with cdp disabled, cause some rmids to be dirty, unmount and then
remount with cdp enabled, then you may have some of the entries in the
upper half marked as busy, but when the limbo code checks them it ends
up using an out-of-range partid and may trigger an mpam error interrupt.
To avoid a stale list we could disable the limbo checking at unmount and
at remount remake the bitmap. This would involve some resctrl changes
which I will have a further look into. For now, to avoid the dependency
without a lot of patch churn in this series I think we can hide the cdp
enablement behind CONFIG_EXPERT. Does that sound ok to you?
Thanks,
Ben
>
> Apologies for not providing a ready-made fix in this email. However,
> I would appreciate to hear the community's thoughts on this issue.
>
>
>> + u64 code_val = 0, data_val = 0;
>> + int err;
>> +
>> + err = __read_mon(mon, mon_comp, mon_type, mon_idx,
>> + CDP_CODE, closid, rmid, &code_val);
>> + if (err)
>> + return err;
>> +
>> + err = __read_mon(mon, mon_comp, mon_type, mon_idx,
>> + CDP_DATA, closid, rmid, &data_val);
>> + if (err)
>> + return err;
>> +
>> + *val += code_val + data_val;
>> + return 0;
>> + }
>> +
>> + return __read_mon(mon, mon_comp, mon_type, mon_idx,
>> + CDP_NONE, closid, rmid, val);
>> +}
>> +
>> +/* MBWU when not in ABMC mode, and CSU counters. */
>> +int resctrl_arch_rmid_read(struct rdt_resource *r, struct
>> rdt_domain_hdr *hdr,
>> + u32 closid, u32 rmid, enum resctrl_event_id eventid,
>> + void *arch_priv, u64 *val, void *arch_mon_ctx)
>> +{
>> + struct mpam_resctrl_dom *l3_dom;
>> + struct mpam_component *mon_comp;
>> + u32 mon_idx = *(u32 *)arch_mon_ctx;
>> + enum mpam_device_features mon_type;
>> + struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[eventid];
>> +
>> + resctrl_arch_rmid_read_context_check();
>> +
>> + if (eventid >= QOS_NUM_EVENTS || !mon->class)
>> + return -EINVAL;
>> +
>> + l3_dom = container_of(hdr, struct mpam_resctrl_dom,
>> resctrl_mon_dom.hdr);
>> + mon_comp = l3_dom->mon_comp[eventid];
>> +
>> + switch (eventid) {
>> + case QOS_L3_OCCUP_EVENT_ID:
>> + mon_type = mpam_feat_msmon_csu;
>> + break;
>> + case QOS_L3_MBM_LOCAL_EVENT_ID:
>> + case QOS_L3_MBM_TOTAL_EVENT_ID:
>> + mon_type = mpam_feat_msmon_mbwu;
>> + break;
>> + default:
>> + return -EINVAL;
>> + }
>> +
>> + return read_mon_cdp_safe(mon, mon_comp, mon_type, mon_idx,
>> + closid, rmid, val);
>> +}
>> +
>
>
> Best regards,
> Zeng Heng
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [PATCH v5 31/41] arm_mpam: resctrl: Add resctrl_arch_rmid_read() and resctrl_arch_reset_rmid()
2026-03-09 16:30 ` Ben Horgan
@ 2026-03-10 3:23 ` Zeng Heng
0 siblings, 0 replies; 75+ messages in thread
From: Zeng Heng @ 2026-03-10 3:23 UTC (permalink / raw)
To: Ben Horgan, James Morse
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, jonathan.cameron, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, linux-doc, Shaopeng Tan,
Kefeng Wang
Hi Ben,
On 2026/3/10 0:30, Ben Horgan wrote:
> Hi Zeng,
>
> On 3/7/26 09:29, Zeng Heng wrote:
>> Hi Ben,
>>
>> On 2026/2/25 1:57, Ben Horgan wrote:
>>> From: James Morse <james.morse@arm.com>
>>>
>>> resctrl uses resctrl_arch_rmid_read() to read counters. CDP emulation
>>> means
>>> the counter may need reading in three different ways. The same goes for
>>> reset.
>>>
>>> The helpers behind the resctrl_arch_ functions will be re-used for the
>>> ABMC
>>> equivalent functions.
>>>
>>> Add the rounding helper for checking monitor values while we're here.
>>>
>>> Tested-by: Gavin Shan <gshan@redhat.com>
>>> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>>> Tested-by: Peter Newman <peternewman@google.com>
>>> Tested-by: Zeng Heng <zengheng4@huawei.com>
>>> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>>> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
>>> Signed-off-by: James Morse <james.morse@arm.com>
>>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>>> ---
>>
>> [...]
>>
>>> +
>>> +static int read_mon_cdp_safe(struct mpam_resctrl_mon *mon, struct
>>> mpam_component *mon_comp,
>>> + enum mpam_device_features mon_type,
>>> + int mon_idx, u32 closid, u32 rmid, u64 *val)
>>> +{
>>> + if (cdp_enabled) {
>>
>> While reviewing the resctrl limbo handling code, I noticed a issue in
>> __check_limbo() that could lead to premature RMID release when CDP is
>> enabled.
>>
>> In __check_limbo(), RMIDs in limbo state undergo L3 occupancy checks
>> before being released. This check is performed via
>> resctrl_arch_rmid_read(), on arm64 MPAM, which relies on the cdp_enabled
>> state to determine to check which PARTID.
>>
>> The concern arises in the following scenario: Filesystem is mounted with
>> CDP enabled. During normal operation, some RMIDs enter limbo. On umount,
>> cdp_enabled is reset to false. __check_limbo() may then run and perform
>> L3 checks with cdp_enabled = false. This could cause RMIDs to be
>> incorrectly released from limbo while still effectively busy after
>> remount.
>
> I think a stale limbo list causes more problems than that. If you mount
> with cdp disabled, cause some rmids to be dirty, unmount and then
> remount with cdp enabled, then you may have some of the entries in the
> upper half marked as busy, but when the limbo code checks them it ends
> up using an out-of-range partid and may trigger an mpam error interrupt.
>
> To avoid a stale list we could disable the limbo checking at unmount and
> at remount remake the bitmap. This would involve some resctrl changes
> which I will have a further look into. For now, to avoid the dependency
> without a lot of patch churn in this series I think we can hide the cdp
> enablement behind CONFIG_EXPERT. Does that sound ok to you?
>
> Thanks,
>
> Ben
>
Confirmed. Toggling between non-CDP and CDP mount modes leads to
out-of-range PARTID hardware errors and memory access violations. This
can cause MPAM to halt by provoking mpam_broken_work.
I agree that properly fixing this will require resctrl modifications to
handle the limbo state across mount cycles. Hiding CDP behind
CONFIG_EXPERT is acceptable as a short-term mitigation to prevent users
from hitting this bug accidentally.
Best regards,
Zeng Heng
^ permalink raw reply [flat|nested] 75+ messages in thread
* [PATCH v5 32/41] arm_mpam: resctrl: Update the rmid reallocation limit
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (30 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 31/41] arm_mpam: resctrl: Add resctrl_arch_rmid_read() and resctrl_arch_reset_rmid() Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 33/41] arm_mpam: resctrl: Add empty definitions for assorted resctrl functions Ben Horgan
` (11 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
resctrl's limbo code needs to be told when the data left in a cache is
small enough for the partid+pmg value to be re-allocated.
x86 uses the cache size divided by the number of rmid users the cache may
have. Do the same, but for the smallest cache, and with the number of
partid-and-pmg users.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Move waiting for cache info into its own patch
Changes since v3:
Move check class is csu higher (just kept to document intent)
continue -> break
to squash update rmid limits
use raw_smp_processor_id()
---
drivers/resctrl/mpam_resctrl.c | 39 ++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 99b6ad89f1ab..d4fdad875d95 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -495,6 +495,42 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_l3_mon_domain *d
reset_mon_cdp_safe(mon, mon_comp, USE_PRE_ALLOCATED, closid, rmid);
}
+/*
+ * The rmid realloc threshold should be for the smallest cache exposed to
+ * resctrl.
+ */
+static int update_rmid_limits(struct mpam_class *class)
+{
+ u32 num_unique_pmg = resctrl_arch_system_num_rmid_idx();
+ struct mpam_props *cprops = &class->props;
+ struct cacheinfo *ci;
+
+ lockdep_assert_cpus_held();
+
+ if (!mpam_has_feature(mpam_feat_msmon_csu, cprops))
+ return 0;
+
+ /*
+ * Assume cache levels are the same size for all CPUs...
+ * The check just requires any online CPU and it can't go offline as we
+ * hold the cpu lock.
+ */
+ ci = get_cpu_cacheinfo_level(raw_smp_processor_id(), class->level);
+ if (!ci || ci->size == 0) {
+ pr_debug("Could not read cache size for class %u\n",
+ class->level);
+ return -EINVAL;
+ }
+
+ if (!resctrl_rmid_realloc_limit ||
+ ci->size < resctrl_rmid_realloc_limit) {
+ resctrl_rmid_realloc_limit = ci->size;
+ resctrl_rmid_realloc_threshold = ci->size / num_unique_pmg;
+ }
+
+ return 0;
+}
+
static bool cache_has_usable_cpor(struct mpam_class *class)
{
struct mpam_props *cprops = &class->props;
@@ -1000,6 +1036,9 @@ static void mpam_resctrl_pick_counters(void)
/* CSU counters only make sense on a cache. */
switch (class->type) {
case MPAM_CLASS_CACHE:
+ if (update_rmid_limits(class))
+ break;
+
counter_update_class(QOS_L3_OCCUP_EVENT_ID, class);
break;
default:
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread
* [PATCH v5 33/41] arm_mpam: resctrl: Add empty definitions for assorted resctrl functions
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (31 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 32/41] arm_mpam: resctrl: Update the rmid reallocation limit Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 34/41] arm64: mpam: Select ARCH_HAS_CPU_RESCTRL Ben Horgan
` (10 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
A few resctrl features and hooks need to be provided, but aren't needed or
supported on MPAM platforms.
resctrl has individual hooks to separately enable and disable the
closid/partid and rmid/pmg context switching code. For MPAM this is all the
same thing, as the value in struct task_struct is used to cache the value
that should be written to hardware. arm64's context switching code is
enabled once MPAM is usable, but doesn't touch the hardware unless the
value has changed.
For now event configuration is not supported, and can be turned off by
returning 'false' from resctrl_arch_is_evt_configurable().
The new io_alloc feature is not supported either, so always return failure
from the enable helper.
Add this, and empty definitions for the other hooks.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v3:
Add resctrl_arch_pre_mount() {}
resctrl_arch_reset_rmid_all() signature update
add stubs for abmc
keep empty definitions together
---
drivers/resctrl/mpam_resctrl.c | 60 ++++++++++++++++++++++++++++++++++
include/linux/arm_mpam.h | 9 +++++
2 files changed, 69 insertions(+)
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index d4fdad875d95..490e49ab730c 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -99,6 +99,66 @@ bool resctrl_arch_mon_capable(void)
return l3->mon_capable;
}
+bool resctrl_arch_is_evt_configurable(enum resctrl_event_id evt)
+{
+ return false;
+}
+
+void resctrl_arch_mon_event_config_read(void *info)
+{
+}
+
+void resctrl_arch_mon_event_config_write(void *info)
+{
+}
+
+void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_mon_domain *d)
+{
+}
+
+void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_l3_mon_domain *d,
+ u32 closid, u32 rmid, int cntr_id,
+ enum resctrl_event_id eventid)
+{
+}
+
+void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_l3_mon_domain *d,
+ enum resctrl_event_id evtid, u32 rmid, u32 closid,
+ u32 cntr_id, bool assign)
+{
+}
+
+int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_l3_mon_domain *d,
+ u32 unused, u32 rmid, int cntr_id,
+ enum resctrl_event_id eventid, u64 *val)
+{
+ return -EOPNOTSUPP;
+}
+
+bool resctrl_arch_mbm_cntr_assign_enabled(struct rdt_resource *r)
+{
+ return false;
+}
+
+int resctrl_arch_mbm_cntr_assign_set(struct rdt_resource *r, bool enable)
+{
+ return -EINVAL;
+}
+
+int resctrl_arch_io_alloc_enable(struct rdt_resource *r, bool enable)
+{
+ return -EOPNOTSUPP;
+}
+
+bool resctrl_arch_get_io_alloc_enabled(struct rdt_resource *r)
+{
+ return false;
+}
+
+void resctrl_arch_pre_mount(void)
+{
+}
+
bool resctrl_arch_get_cdp_enabled(enum resctrl_res_level rid)
{
return mpam_resctrl_controls[rid].cdp_enabled;
diff --git a/include/linux/arm_mpam.h b/include/linux/arm_mpam.h
index 86d5e326d2bd..f92a36187a52 100644
--- a/include/linux/arm_mpam.h
+++ b/include/linux/arm_mpam.h
@@ -67,6 +67,15 @@ struct rdt_resource;
void *resctrl_arch_mon_ctx_alloc(struct rdt_resource *r, enum resctrl_event_id evtid);
void resctrl_arch_mon_ctx_free(struct rdt_resource *r, enum resctrl_event_id evtid, void *ctx);
+/*
+ * The CPU configuration for MPAM is cheap to write, and is only written if it
+ * has changed. No need for fine grained enables.
+ */
+static inline void resctrl_arch_enable_mon(void) { }
+static inline void resctrl_arch_disable_mon(void) { }
+static inline void resctrl_arch_enable_alloc(void) { }
+static inline void resctrl_arch_disable_alloc(void) { }
+
static inline unsigned int resctrl_arch_round_mon_val(unsigned int val)
{
return val;
--
2.43.0
* [PATCH v5 34/41] arm64: mpam: Select ARCH_HAS_CPU_RESCTRL
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (32 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 33/41] arm_mpam: resctrl: Add empty definitions for assorted resctrl functions Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 35/41] arm_mpam: resctrl: Call resctrl_init() on platforms that can support resctrl Ben Horgan
` (9 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
Enough MPAM support is present to enable ARCH_HAS_CPU_RESCTRL. Let it
rip^Wlink!
ARCH_HAS_CPU_RESCTRL indicates resctrl can be enabled. It is enabled by the
arch code simply because it has 'arch' in its name.
This removes ARM_CPU_RESCTRL as a mimic of X86_CPU_RESCTRL. While here,
move the ACPI dependency to the driver's Kconfig file.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/resctrl.h | 2 ++
drivers/resctrl/Kconfig | 7 +++++++
drivers/resctrl/Makefile | 2 +-
4 files changed, 11 insertions(+), 2 deletions(-)
create mode 100644 arch/arm64/include/asm/resctrl.h
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3170c67464fb..41a5b4ef86b4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2017,7 +2017,7 @@ config ARM64_TLB_RANGE
config ARM64_MPAM
bool "Enable support for MPAM"
select ARM64_MPAM_DRIVER
- select ACPI_MPAM if ACPI
+ select ARCH_HAS_CPU_RESCTRL
help
Memory System Resource Partitioning and Monitoring (MPAM) is an
optional extension to the Arm architecture that allows each
diff --git a/arch/arm64/include/asm/resctrl.h b/arch/arm64/include/asm/resctrl.h
new file mode 100644
index 000000000000..b506e95cf6e3
--- /dev/null
+++ b/arch/arm64/include/asm/resctrl.h
@@ -0,0 +1,2 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/arm_mpam.h>
diff --git a/drivers/resctrl/Kconfig b/drivers/resctrl/Kconfig
index c34e059c6e41..672abea3b03c 100644
--- a/drivers/resctrl/Kconfig
+++ b/drivers/resctrl/Kconfig
@@ -1,6 +1,7 @@
menuconfig ARM64_MPAM_DRIVER
bool "MPAM driver"
depends on ARM64 && ARM64_MPAM
+ select ACPI_MPAM if ACPI
help
Memory System Resource Partitioning and Monitoring (MPAM) driver for
System IP, e.g. caches and memory controllers.
@@ -22,3 +23,9 @@ config MPAM_KUNIT_TEST
If unsure, say N.
endif
+
+config ARM64_MPAM_RESCTRL_FS
+ bool
+ default y if ARM64_MPAM_DRIVER && RESCTRL_FS
+ select RESCTRL_RMID_DEPENDS_ON_CLOSID
+ select RESCTRL_ASSIGN_FIXED
diff --git a/drivers/resctrl/Makefile b/drivers/resctrl/Makefile
index 40beaf999582..4f6d0e81f9b8 100644
--- a/drivers/resctrl/Makefile
+++ b/drivers/resctrl/Makefile
@@ -1,5 +1,5 @@
obj-$(CONFIG_ARM64_MPAM_DRIVER) += mpam.o
mpam-y += mpam_devices.o
-mpam-$(CONFIG_ARM_CPU_RESCTRL) += mpam_resctrl.o
+mpam-$(CONFIG_ARM64_MPAM_RESCTRL_FS) += mpam_resctrl.o
ccflags-$(CONFIG_ARM64_MPAM_DRIVER_DEBUG) += -DDEBUG
--
2.43.0
* [PATCH v5 35/41] arm_mpam: resctrl: Call resctrl_init() on platforms that can support resctrl
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (33 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 34/41] arm64: mpam: Select ARCH_HAS_CPU_RESCTRL Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 36/41] arm_mpam: Add quirk framework Ben Horgan
` (8 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: James Morse <james.morse@arm.com>
Now that MPAM links against resctrl, call resctrl_init() to register the
filesystem and setup resctrl's structures.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Peter Newman <peternewman@google.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v2:
Use for_each_mpam...
error path tidying
Changes since v3:
Don't consider abmc in teardown
---
drivers/resctrl/mpam_devices.c | 32 ++++++++++++--
drivers/resctrl/mpam_internal.h | 4 ++
drivers/resctrl/mpam_resctrl.c | 76 ++++++++++++++++++++++++++++++++-
3 files changed, 107 insertions(+), 5 deletions(-)
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index 90d69091e0b9..528936ececd9 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -73,6 +73,14 @@ static DECLARE_WORK(mpam_broken_work, &mpam_disable);
/* When mpam is disabled, the printed reason to aid debugging */
static char *mpam_disable_reason;
+/*
+ * Whether resctrl has been setup. Used by cpuhp in preference to
+ * mpam_is_enabled(). The disable call after an error interrupt makes
+ * mpam_is_enabled() false before the cpuhp callbacks are made.
+ * Reads/writes should hold mpam_cpuhp_state_lock, (or be cpuhp callbacks).
+ */
+static bool mpam_resctrl_enabled;
+
/*
* An MSC is a physical container for controls and monitors, each identified by
* their RIS index. These share a base-address, interrupts and some MMIO
@@ -1635,7 +1643,7 @@ static int mpam_cpu_online(unsigned int cpu)
mpam_reprogram_msc(msc);
}
- if (mpam_is_enabled())
+ if (mpam_resctrl_enabled)
return mpam_resctrl_online_cpu(cpu);
return 0;
@@ -1681,7 +1689,7 @@ static int mpam_cpu_offline(unsigned int cpu)
{
struct mpam_msc *msc;
- if (mpam_is_enabled())
+ if (mpam_resctrl_enabled)
mpam_resctrl_offline_cpu(cpu);
guard(srcu)(&mpam_srcu);
@@ -2542,6 +2550,7 @@ static void mpam_enable_once(void)
}
static_branch_enable(&mpam_enabled);
+ mpam_resctrl_enabled = true;
mpam_register_cpuhp_callbacks(mpam_cpu_online, mpam_cpu_offline,
"mpam:online");
@@ -2601,24 +2610,39 @@ static void mpam_reset_class(struct mpam_class *class)
void mpam_disable(struct work_struct *ignored)
{
int idx;
+ bool do_resctrl_exit;
struct mpam_class *class;
struct mpam_msc *msc, *tmp;
+ if (mpam_is_enabled())
+ static_branch_disable(&mpam_enabled);
+
mutex_lock(&mpam_cpuhp_state_lock);
if (mpam_cpuhp_state) {
cpuhp_remove_state(mpam_cpuhp_state);
mpam_cpuhp_state = 0;
}
+
+ /*
+ * Removing the cpuhp state called mpam_cpu_offline() and told resctrl
+ * all the CPUs are offline.
+ */
+ do_resctrl_exit = mpam_resctrl_enabled;
+ mpam_resctrl_enabled = false;
mutex_unlock(&mpam_cpuhp_state_lock);
- static_branch_disable(&mpam_enabled);
+ if (do_resctrl_exit)
+ mpam_resctrl_exit();
mpam_unregister_irqs();
idx = srcu_read_lock(&mpam_srcu);
list_for_each_entry_srcu(class, &mpam_classes, classes_list,
- srcu_read_lock_held(&mpam_srcu))
+ srcu_read_lock_held(&mpam_srcu)) {
mpam_reset_class(class);
+ if (do_resctrl_exit)
+ mpam_resctrl_teardown_class(class);
+ }
srcu_read_unlock(&mpam_srcu, idx);
mutex_lock(&mpam_list_lock);
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 5fac8fa115ff..a79c7670f7ae 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -436,12 +436,16 @@ int mpam_get_cpumask_from_cache_id(unsigned long cache_id, u32 cache_level,
#ifdef CONFIG_RESCTRL_FS
int mpam_resctrl_setup(void);
+void mpam_resctrl_exit(void);
int mpam_resctrl_online_cpu(unsigned int cpu);
void mpam_resctrl_offline_cpu(unsigned int cpu);
+void mpam_resctrl_teardown_class(struct mpam_class *class);
#else
static inline int mpam_resctrl_setup(void) { return 0; }
+static inline void mpam_resctrl_exit(void) { }
static inline int mpam_resctrl_online_cpu(unsigned int cpu) { return 0; }
static inline void mpam_resctrl_offline_cpu(unsigned int cpu) { }
+static inline void mpam_resctrl_teardown_class(struct mpam_class *class) { }
#endif /* CONFIG_RESCTRL_FS */
/*
diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 490e49ab730c..694ea8548a05 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -69,6 +69,12 @@ static bool cdp_enabled;
static bool cacheinfo_ready;
static DECLARE_WAIT_QUEUE_HEAD(wait_cacheinfo_ready);
+/*
+ * If resctrl_init() succeeded, resctrl_exit() can be used to remove support
+ * for the filesystem in the event of an error.
+ */
+static bool resctrl_enabled;
+
/* Whether this num_mbw_mon could result in a free_running system */
static int __mpam_monitors_free_running(u16 num_mbwu_mon)
{
@@ -341,6 +347,9 @@ static int resctrl_arch_mon_ctx_alloc_no_wait(enum resctrl_event_id evtid)
{
struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[evtid];
+ if (!mpam_is_enabled())
+ return -EINVAL;
+
if (!mon->class)
return -EINVAL;
@@ -383,6 +392,9 @@ static void resctrl_arch_mon_ctx_free_no_wait(enum resctrl_event_id evtid,
{
struct mpam_resctrl_mon *mon = &mpam_resctrl_counters[evtid];
+ if (!mpam_is_enabled())
+ return;
+
if (!mon->class)
return;
@@ -477,6 +489,9 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *hdr,
resctrl_arch_rmid_read_context_check();
+ if (!mpam_is_enabled())
+ return -EINVAL;
+
if (eventid >= QOS_NUM_EVENTS || !mon->class)
return -EINVAL;
@@ -1322,6 +1337,9 @@ int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
lockdep_assert_cpus_held();
lockdep_assert_irqs_enabled();
+ if (!mpam_is_enabled())
+ return -EINVAL;
+
/*
* No need to check the CPU as mpam_apply_config() doesn't care, and
* resctrl_arch_update_domains() relies on this.
@@ -1387,6 +1405,9 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
lockdep_assert_cpus_held();
lockdep_assert_irqs_enabled();
+ if (!mpam_is_enabled())
+ return -EINVAL;
+
list_for_each_entry_rcu(d, &r->ctrl_domains, hdr.list) {
for (enum resctrl_conf_type t = 0; t < CDP_NUM_TYPES; t++) {
struct resctrl_staged_config *cfg = &d->staged_config[t];
@@ -1777,7 +1798,11 @@ int mpam_resctrl_setup(void)
return -EOPNOTSUPP;
}
- /* TODO: call resctrl_init() */
+ err = resctrl_init();
+ if (err)
+ return err;
+
+ WRITE_ONCE(resctrl_enabled, true);
return 0;
@@ -1787,6 +1812,55 @@ int mpam_resctrl_setup(void)
return err;
}
+void mpam_resctrl_exit(void)
+{
+ if (!READ_ONCE(resctrl_enabled))
+ return;
+
+ WRITE_ONCE(resctrl_enabled, false);
+ resctrl_exit();
+}
+
+static void mpam_resctrl_teardown_mon(struct mpam_resctrl_mon *mon, struct mpam_class *class)
+{
+ u32 num_mbwu_mon = resctrl_arch_system_num_rmid_idx();
+
+ if (!mon->mbwu_idx_to_mon)
+ return;
+
+ __free_mbwu_mon(class, mon->mbwu_idx_to_mon, num_mbwu_mon);
+ mon->mbwu_idx_to_mon = NULL;
+}
+
+/*
+ * The driver is detaching an MSC from this class, if resctrl was using it,
+ * pull on resctrl_exit().
+ */
+void mpam_resctrl_teardown_class(struct mpam_class *class)
+{
+ struct mpam_resctrl_res *res;
+ enum resctrl_res_level rid;
+ struct mpam_resctrl_mon *mon;
+ enum resctrl_event_id eventid;
+
+ might_sleep();
+
+ for_each_mpam_resctrl_control(res, rid) {
+ if (res->class == class) {
+ res->class = NULL;
+ break;
+ }
+ }
+ for_each_mpam_resctrl_mon(mon, eventid) {
+ if (mon->class == class) {
+ mon->class = NULL;
+
+ mpam_resctrl_teardown_mon(mon, class);
+ break;
+ }
+ }
+}
+
static int __init __cacheinfo_ready(void)
{
cacheinfo_ready = true;
--
2.43.0
* [PATCH v5 36/41] arm_mpam: Add quirk framework
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (34 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 35/41] arm_mpam: resctrl: Call resctrl_init() on platforms that can support resctrl Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 37/41] arm_mpam: Add workaround for T241-MPAM-1 Ben Horgan
` (7 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: Shanker Donthineni <sdonthineni@nvidia.com>
The MPAM specification includes the MPAMF_IIDR register, which serves to uniquely
identify the MSC implementation through a combination of implementer
details, product ID, variant, and revision. Certain hardware issues/errata
can be resolved using software workarounds.
Introduce a quirk framework to allow workarounds to be enabled based on the
MPAMF_IIDR value.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Tested-by: Zeng Heng <zengheng4@huawei.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
Co-developed-by: James Morse <james.morse@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes by James:
Stash the IIDR so this doesn't need an IPI, enable quirks only
once, move the description to the callback so it can be pr_once()d, add an
enum of workarounds for popular errata. Add macros for making lists of
product/revision/vendor half readable
Changes since rfc:
remove trailing commas in last element of enums
Make mpam_enable_quirks() in charge of mpam_set_quirk() even if there
is an enable.
Changes since v3:
Brackets in macro
---
drivers/resctrl/mpam_devices.c | 32 ++++++++++++++++++++++++++++++++
drivers/resctrl/mpam_internal.h | 25 +++++++++++++++++++++++++
2 files changed, 57 insertions(+)
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index 528936ececd9..382dc5c9b885 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -630,6 +630,30 @@ static struct mpam_msc_ris *mpam_get_or_create_ris(struct mpam_msc *msc,
return ERR_PTR(-ENOENT);
}
+static const struct mpam_quirk mpam_quirks[] = {
+ { NULL } /* Sentinel */
+};
+
+static void mpam_enable_quirks(struct mpam_msc *msc)
+{
+ const struct mpam_quirk *quirk;
+
+ for (quirk = &mpam_quirks[0]; quirk->iidr_mask; quirk++) {
+ int err = 0;
+
+ if (quirk->iidr != (msc->iidr & quirk->iidr_mask))
+ continue;
+
+ if (quirk->init)
+ err = quirk->init(msc, quirk);
+
+ if (err)
+ continue;
+
+ mpam_set_quirk(quirk->workaround, msc);
+ }
+}
+
/*
* IHI009A.a has this nugget: "If a monitor does not support automatic behaviour
* of NRDY, software can use this bit for any purpose" - so hardware might not
@@ -864,8 +888,11 @@ static int mpam_msc_hw_probe(struct mpam_msc *msc)
/* Grab an IDR value to find out how many RIS there are */
mutex_lock(&msc->part_sel_lock);
idr = mpam_msc_read_idr(msc);
+ msc->iidr = mpam_read_partsel_reg(msc, IIDR);
mutex_unlock(&msc->part_sel_lock);
+ mpam_enable_quirks(msc);
+
msc->ris_max = FIELD_GET(MPAMF_IDR_RIS_MAX, idr);
/* Use these values so partid/pmg always starts with a valid value */
@@ -1988,6 +2015,7 @@ static bool mpam_has_cmax_wd_feature(struct mpam_props *props)
* resulting safe value must be compatible with both. When merging values in
* the tree, all the aliasing resources must be handled first.
* On mismatch, parent is modified.
+ * Quirks on an MSC will apply to all MSC in that class.
*/
static void __props_mismatch(struct mpam_props *parent,
struct mpam_props *child, bool alias)
@@ -2107,6 +2135,7 @@ static void __props_mismatch(struct mpam_props *parent,
* nobble the class feature, as we can't configure all the resources.
* e.g. The L3 cache is composed of two resources with 13 and 17 portion
* bitmaps respectively.
+ * Quirks on an MSC will apply to all MSC in that class.
*/
static void
__class_props_mismatch(struct mpam_class *class, struct mpam_vmsc *vmsc)
@@ -2120,6 +2149,9 @@ __class_props_mismatch(struct mpam_class *class, struct mpam_vmsc *vmsc)
dev_dbg(dev, "Merging features for class:0x%lx &= vmsc:0x%lx\n",
(long)cprops->features, (long)vprops->features);
+ /* Merge quirks */
+ class->quirks |= vmsc->msc->quirks;
+
/* Take the safe value for any common features */
__props_mismatch(cprops, vprops, false);
}
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index a79c7670f7ae..60e445e94ee6 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -85,6 +85,8 @@ struct mpam_msc {
u8 pmg_max;
unsigned long ris_idxs;
u32 ris_max;
+ u32 iidr;
+ u16 quirks;
/*
* error_irq_lock is taken when registering/unregistering the error
@@ -216,6 +218,28 @@ struct mpam_props {
#define mpam_set_feature(_feat, x) __set_bit(_feat, (x)->features)
#define mpam_clear_feature(_feat, x) __clear_bit(_feat, (x)->features)
+/* Workaround bits for msc->quirks */
+enum mpam_device_quirks {
+ MPAM_QUIRK_LAST
+};
+
+#define mpam_has_quirk(_quirk, x) ((1 << (_quirk) & (x)->quirks))
+#define mpam_set_quirk(_quirk, x) ((x)->quirks |= (1 << (_quirk)))
+
+struct mpam_quirk {
+ int (*init)(struct mpam_msc *msc, const struct mpam_quirk *quirk);
+
+ u32 iidr;
+ u32 iidr_mask;
+
+ enum mpam_device_quirks workaround;
+};
+
+#define MPAM_IIDR_MATCH_ONE (FIELD_PREP_CONST(MPAMF_IIDR_PRODUCTID, 0xfff) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_VARIANT, 0xf) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_REVISION, 0xf) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_IMPLEMENTER, 0xfff))
+
/* The values for MSMON_CFG_MBWU_FLT.RWBW */
enum mon_filter_options {
COUNT_BOTH = 0,
@@ -259,6 +283,7 @@ struct mpam_class {
struct mpam_props props;
u32 nrdy_usec;
+ u16 quirks;
u8 level;
enum mpam_class_types type;
--
2.43.0
* [PATCH v5 37/41] arm_mpam: Add workaround for T241-MPAM-1
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (35 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 36/41] arm_mpam: Add quirk framework Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 38/41] arm_mpam: Add workaround for T241-MPAM-4 Ben Horgan
` (6 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: Shanker Donthineni <sdonthineni@nvidia.com>
On the NVIDIA T241, the MPAM bandwidth partitioning controls are not
correctly configured: the hardware retains the default configuration
register values, which generally means that bandwidth remains unprovisioned.
To address the issue, perform the following steps after updating the MBW_MIN
and/or MBW_MAX registers.
- Perform 64-bit reads from all 12 bridge MPAM shadow registers at offsets
(0x360048 + slice*0x10000 + partid*8). These registers are read-only.
- Loop until all 12 shadow register values match; pr_warn_once() if the
values fail to match within 1000 iterations.
- Perform 64-bit writes of the value 0x0 to the two spare registers at
offsets 0x1b0000 and 0x1c0000.
In the hardware, writes to the MPAMCFG_MBW_MAX and MPAMCFG_MBW_MIN registers
are transformed into broadcast writes to the 12 shadow registers. The
final two writes to the spare registers cause a final rank of downstream
micro-architectural MPAM registers to be updated from the shadow copies.
The intervening loop to read the 12 shadow registers helps avoid a race
condition where writes to the spare registers occur before all shadow
registers have been updated.
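The bounded "wait for the shadow copies to agree" loop above can be sketched against a plain array instead of MMIO (read_slice() stands in for the readq_relaxed() of each shadow register; all names here are local to this sketch):

```c
#include <assert.h>
#include <stdint.h>

#define NSLICES		12
#define MAX_POLLS	1000

/*
 * Poll until all NSLICES shadow copies hold the same value, giving up
 * after MAX_POLLS passes. Returns 1 when consistent, 0 on timeout; the
 * caller warns on timeout but still writes the spare registers.
 */
static int shadow_regs_consistent(uint64_t (*read_slice)(void *ctx, int idx),
				  void *ctx)
{
	for (int poll = 0; poll < MAX_POLLS; poll++) {
		uint64_t val0 = read_slice(ctx, 0);
		int idx;

		for (idx = 1; idx < NSLICES; idx++)
			if (read_slice(ctx, idx) != val0)
				break;
		if (idx == NSLICES)
			return 1;	/* all twelve copies agree */
	}
	return 0;
}

/* Backing "registers" for the sketch: a plain array of 12 values. */
static uint64_t read_array(void *ctx, int idx)
{
	return ((uint64_t *)ctx)[idx];
}
```

In the driver the broadcast write eventually makes the copies converge, so the loop normally exits on an early pass; the 1000-iteration bound only limits damage if the hardware misbehaves.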
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes from James:
Merged the min/max update into a single
mpam_quirk_post_config_change() helper. Stashed the t241_id in the msc
instead of carrying the physical address around. Test the msc quirk bit
instead of a static key.
Changes since rfc:
MPAM_IIDR_NVIDIA_T421 -> MPAM_IIDR_NVIDIA_T241
return err from init
Be specific about the errata in the init name,
mpam_enable_quirk_nvidia_t241 -> mpam_enable_quirk_nvidia_t241_1
Changes since v3:
parentheses
---
Documentation/arch/arm64/silicon-errata.rst | 2 +
drivers/resctrl/mpam_devices.c | 88 +++++++++++++++++++++
drivers/resctrl/mpam_internal.h | 9 +++
3 files changed, 99 insertions(+)
diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
index 4c300caad901..a65620f98e3a 100644
--- a/Documentation/arch/arm64/silicon-errata.rst
+++ b/Documentation/arch/arm64/silicon-errata.rst
@@ -247,6 +247,8 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| NVIDIA | T241 GICv3/4.x | T241-FABRIC-4 | N/A |
+----------------+-----------------+-----------------+-----------------------------+
+| NVIDIA | T241 MPAM | T241-MPAM-1 | N/A |
++----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
| Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
+----------------+-----------------+-----------------+-----------------------------+
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index 382dc5c9b885..08cb080592d9 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -29,6 +29,16 @@
#include "mpam_internal.h"
+/* Values for the T241 errata workaround */
+#define T241_CHIPS_MAX 4
+#define T241_CHIP_NSLICES 12
+#define T241_SPARE_REG0_OFF 0x1b0000
+#define T241_SPARE_REG1_OFF 0x1c0000
+#define T241_CHIP_ID(phys) FIELD_GET(GENMASK_ULL(44, 43), phys)
+#define T241_SHADOW_REG_OFF(sidx, pid) (0x360048 + (sidx) * 0x10000 + (pid) * 8)
+#define SMCCC_SOC_ID_T241 0x036b0241
+static void __iomem *t241_scratch_regs[T241_CHIPS_MAX];
+
/*
* mpam_list_lock protects the SRCU lists when writing. Once the
* mpam_enabled key is enabled these lists are read-only,
@@ -630,7 +640,45 @@ static struct mpam_msc_ris *mpam_get_or_create_ris(struct mpam_msc *msc,
return ERR_PTR(-ENOENT);
}
+static int mpam_enable_quirk_nvidia_t241_1(struct mpam_msc *msc,
+ const struct mpam_quirk *quirk)
+{
+ s32 soc_id = arm_smccc_get_soc_id_version();
+ struct resource *r;
+ phys_addr_t phys;
+
+ /*
+ * A mapping to a device other than the MSC is needed, check
+ * SOC_ID is NVIDIA T241 chip (036b:0241)
+ */
+ if (soc_id < 0 || soc_id != SMCCC_SOC_ID_T241)
+ return -EINVAL;
+
+ r = platform_get_resource(msc->pdev, IORESOURCE_MEM, 0);
+ if (!r)
+ return -EINVAL;
+
+ /* Find the internal registers base addr from the CHIP ID */
+ msc->t241_id = T241_CHIP_ID(r->start);
+ phys = FIELD_PREP(GENMASK_ULL(45, 44), msc->t241_id) | 0x19000000ULL;
+
+ t241_scratch_regs[msc->t241_id] = ioremap(phys, SZ_8M);
+ if (WARN_ON_ONCE(!t241_scratch_regs[msc->t241_id]))
+ return -EINVAL;
+
+ pr_info_once("Enabled workaround for NVIDIA T241 erratum T241-MPAM-1\n");
+
+ return 0;
+}
+
static const struct mpam_quirk mpam_quirks[] = {
+ {
+ /* NVIDIA t241 erratum T241-MPAM-1 */
+ .init = mpam_enable_quirk_nvidia_t241_1,
+ .iidr = MPAM_IIDR_NVIDIA_T241,
+ .iidr_mask = MPAM_IIDR_MATCH_ONE,
+ .workaround = T241_SCRUB_SHADOW_REGS,
+ },
{ NULL } /* Sentinel */
};
@@ -1378,6 +1426,44 @@ static void mpam_reset_msc_bitmap(struct mpam_msc *msc, u16 reg, u16 wd)
__mpam_write_reg(msc, reg, bm);
}
+static void mpam_apply_t241_erratum(struct mpam_msc_ris *ris, u16 partid)
+{
+ int sidx, i, lcount = 1000;
+ void __iomem *regs;
+ u64 val0, val;
+
+ regs = t241_scratch_regs[ris->vmsc->msc->t241_id];
+
+ for (i = 0; i < lcount; i++) {
+ /* Read the shadow register at index 0 */
+ val0 = readq_relaxed(regs + T241_SHADOW_REG_OFF(0, partid));
+
+ /* Check if all the shadow registers have the same value */
+ for (sidx = 1; sidx < T241_CHIP_NSLICES; sidx++) {
+ val = readq_relaxed(regs +
+ T241_SHADOW_REG_OFF(sidx, partid));
+ if (val != val0)
+ break;
+ }
+ if (sidx == T241_CHIP_NSLICES)
+ break;
+ }
+
+ if (i == lcount)
+ pr_warn_once("t241: inconsistent values in shadow regs");
+
+ /* Write a value zero to spare registers to take effect of MBW conf */
+ writeq_relaxed(0, regs + T241_SPARE_REG0_OFF);
+ writeq_relaxed(0, regs + T241_SPARE_REG1_OFF);
+}
+
+static void mpam_quirk_post_config_change(struct mpam_msc_ris *ris, u16 partid,
+ struct mpam_config *cfg)
+{
+ if (mpam_has_quirk(T241_SCRUB_SHADOW_REGS, ris->vmsc->msc))
+ mpam_apply_t241_erratum(ris, partid);
+}
+
/* Called via IPI. Call while holding an SRCU reference */
static void mpam_reprogram_ris_partid(struct mpam_msc_ris *ris, u16 partid,
struct mpam_config *cfg)
@@ -1461,6 +1547,8 @@ static void mpam_reprogram_ris_partid(struct mpam_msc_ris *ris, u16 partid,
mpam_write_partsel_reg(msc, PRI, pri_val);
}
+ mpam_quirk_post_config_change(ris, partid, cfg);
+
mutex_unlock(&msc->part_sel_lock);
}
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 60e445e94ee6..508cc03d0453 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -130,6 +130,9 @@ struct mpam_msc {
void __iomem *mapped_hwpage;
size_t mapped_hwpage_sz;
+ /* Values only used on some platforms for quirks */
+ u32 t241_id;
+
struct mpam_garbage garbage;
};
@@ -220,6 +223,7 @@ struct mpam_props {
/* Workaround bits for msc->quirks */
enum mpam_device_quirks {
+ T241_SCRUB_SHADOW_REGS,
MPAM_QUIRK_LAST
};
@@ -240,6 +244,11 @@ struct mpam_quirk {
FIELD_PREP_CONST(MPAMF_IIDR_REVISION, 0xf) | \
FIELD_PREP_CONST(MPAMF_IIDR_IMPLEMENTER, 0xfff))
+#define MPAM_IIDR_NVIDIA_T241 (FIELD_PREP_CONST(MPAMF_IIDR_PRODUCTID, 0x241) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_VARIANT, 0) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_REVISION, 0) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_IMPLEMENTER, 0x36b))
+
/* The values for MSMON_CFG_MBWU_FLT.RWBW */
enum mon_filter_options {
COUNT_BOTH = 0,
--
2.43.0
* [PATCH v5 38/41] arm_mpam: Add workaround for T241-MPAM-4
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (36 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 37/41] arm_mpam: Add workaround for T241-MPAM-1 Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-03-01 17:28 ` Fenghua Yu
2026-02-24 17:57 ` [PATCH v5 39/41] arm_mpam: Add workaround for T241-MPAM-6 Ben Horgan
` (5 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: Shanker Donthineni <sdonthineni@nvidia.com>
In the T241 implementation of memory-bandwidth partitioning, in the absence
of contention for bandwidth, the minimum bandwidth setting can affect the
amount of achieved bandwidth. Specifically, the achieved bandwidth in the
absence of contention can settle to any value between the values of
MPAMCFG_MBW_MIN and MPAMCFG_MBW_MAX. Also, if MPAMCFG_MBW_MIN is set to
zero (below 0.78125%), once a core enters a throttled state, it will never
leave that state.
The first issue is not a concern if the MPAM software allows programming of
MPAMCFG_MBW_MIN through the sysfs interface. This patch ensures MBW_MIN=1
(0.78125%) is programmed whenever MPAMCFG_MBW_MIN=0 is requested.
In the scenario where resctrl doesn't support the MBW_MIN interface via
sysfs, to achieve bandwidth closer to MBW_MAX in the absence of contention,
software should configure a relatively narrow gap between MBW_MIN and
MBW_MAX. The recommendation is to use a 5% gap to mitigate the problem.
Clear the MBW_MIN feature from the class to ensure we don't
accidentally change behaviour when resctrl adds support for an MBW_MIN
interface.
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
[ morse: Added as second quirk, adapted to use the new intermediate values
in mpam_extend_config() ]
Changes since rfc:
MPAM_IIDR_NVIDIA_T421 -> MPAM_IIDR_NVIDIA_T241
Handling when reset_mbw_min is set
Changes since v3:
Move the 5% gap policy back here
Clear mbw_min feature in class
---
Documentation/arch/arm64/silicon-errata.rst | 2 +
drivers/resctrl/mpam_devices.c | 50 +++++++++++++++++++--
drivers/resctrl/mpam_internal.h | 1 +
3 files changed, 50 insertions(+), 3 deletions(-)
diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
index a65620f98e3a..a4b246655e37 100644
--- a/Documentation/arch/arm64/silicon-errata.rst
+++ b/Documentation/arch/arm64/silicon-errata.rst
@@ -249,6 +249,8 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| NVIDIA | T241 MPAM | T241-MPAM-1 | N/A |
+----------------+-----------------+-----------------+-----------------------------+
+| NVIDIA | T241 MPAM | T241-MPAM-4 | N/A |
++----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
| Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
+----------------+-----------------+-----------------+-----------------------------+
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index 08cb080592d9..8f44e9dee207 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -679,6 +679,12 @@ static const struct mpam_quirk mpam_quirks[] = {
.iidr_mask = MPAM_IIDR_MATCH_ONE,
.workaround = T241_SCRUB_SHADOW_REGS,
},
+ {
+ /* NVIDIA t241 erratum T241-MPAM-4 */
+ .iidr = MPAM_IIDR_NVIDIA_T241,
+ .iidr_mask = MPAM_IIDR_MATCH_ONE,
+ .workaround = T241_FORCE_MBW_MIN_TO_ONE,
+ },
{ NULL } /* Sentinel */
};
@@ -1464,6 +1470,31 @@ static void mpam_quirk_post_config_change(struct mpam_msc_ris *ris, u16 partid,
mpam_apply_t241_erratum(ris, partid);
}
+static u16 mpam_wa_t241_force_mbw_min_to_one(struct mpam_props *props)
+{
+ u16 max_hw_value, min_hw_granule, res0_bits;
+
+ res0_bits = 16 - props->bwa_wd;
+ max_hw_value = ((1 << props->bwa_wd) - 1) << res0_bits;
+ min_hw_granule = ~max_hw_value;
+
+ return min_hw_granule + 1;
+}
+
+static u16 mpam_wa_t241_calc_min_from_max(struct mpam_config *cfg)
+{
+ u16 val = 0;
+
+ if (mpam_has_feature(mpam_feat_mbw_max, cfg)) {
+ u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
+
+ if (cfg->mbw_max > delta)
+ val = cfg->mbw_max - delta;
+ }
+
+ return val;
+}
+
/* Called via IPI. Call while holding an SRCU reference */
static void mpam_reprogram_ris_partid(struct mpam_msc_ris *ris, u16 partid,
struct mpam_config *cfg)
@@ -1506,9 +1537,19 @@ static void mpam_reprogram_ris_partid(struct mpam_msc_ris *ris, u16 partid,
mpam_write_partsel_reg(msc, MBW_PBM, cfg->mbw_pbm);
}
- if (mpam_has_feature(mpam_feat_mbw_min, rprops) &&
- mpam_has_feature(mpam_feat_mbw_min, cfg))
- mpam_write_partsel_reg(msc, MBW_MIN, 0);
+ if (mpam_has_feature(mpam_feat_mbw_min, rprops)) {
+ u16 val = 0;
+
+ if (mpam_has_quirk(T241_FORCE_MBW_MIN_TO_ONE, msc)) {
+ u16 min = mpam_wa_t241_force_mbw_min_to_one(rprops);
+
+ val = mpam_wa_t241_calc_min_from_max(cfg);
+ if (val < min)
+ val = min;
+ }
+
+ mpam_write_partsel_reg(msc, MBW_MIN, val);
+ }
if (mpam_has_feature(mpam_feat_mbw_max, rprops) &&
mpam_has_feature(mpam_feat_mbw_max, cfg)) {
@@ -2304,6 +2345,9 @@ static void mpam_enable_merge_class_features(struct mpam_component *comp)
list_for_each_entry(vmsc, &comp->vmsc, comp_list)
__class_props_mismatch(class, vmsc);
+
+ if (mpam_has_quirk(T241_FORCE_MBW_MIN_TO_ONE, class))
+ mpam_clear_feature(mpam_feat_mbw_min, &class->props);
}
/*
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 508cc03d0453..9f92fd49a61c 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -224,6 +224,7 @@ struct mpam_props {
/* Workaround bits for msc->quirks */
enum mpam_device_quirks {
T241_SCRUB_SHADOW_REGS,
+ T241_FORCE_MBW_MIN_TO_ONE,
MPAM_QUIRK_LAST
};
--
2.43.0
* Re: [PATCH v5 38/41] arm_mpam: Add workaround for T241-MPAM-4
2026-02-24 17:57 ` [PATCH v5 38/41] arm_mpam: Add workaround for T241-MPAM-4 Ben Horgan
@ 2026-03-01 17:28 ` Fenghua Yu
2026-03-02 17:11 ` Ben Horgan
0 siblings, 1 reply; 75+ messages in thread
From: Fenghua Yu @ 2026-03-01 17:28 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, gshan, james.morse, jonathan.cameron, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
Hi, Ben,
On 2/24/26 09:57, Ben Horgan wrote:
> From: Shanker Donthineni <sdonthineni@nvidia.com>
>
> In the T241 implementation of memory-bandwidth partitioning, in the absence
> of contention for bandwidth, the minimum bandwidth setting can affect the
> amount of achieved bandwidth. Specifically, the achieved bandwidth in the
> absence of contention can settle to any value between the values of
> MPAMCFG_MBW_MIN and MPAMCFG_MBW_MAX. Also, if MPAMCFG_MBW_MIN is set
> zero (below 0.78125%), once a core enters a throttled state, it will never
> leave that state.
>
> The first issue is not a concern if the MPAM software allows to program
> MPAMCFG_MBW_MIN through the sysfs interface. This patch ensures program
> MBW_MIN=1 (0.78125%) whenever MPAMCFG_MBW_MIN=0 is programmed.
>
> In the scenario where the resctrl doesn't support the MBW_MIN interface via
> sysfs, to achieve bandwidth closer to MBW_MAX in the absence of contention,
> software should configure a relatively narrow gap between MBW_MIN and
> MBW_MAX. The recommendation is to use a 5% gap to mitigate the problem.
>
> Clear the feature MBW_MIN feature from the class to ensure we don't
> accidentally change behaviour when resctrl adds support for a MBW_MIN
> interface.
>
> Tested-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
Reviewed-by: Fenghua Yu <fenghuay@nvidia.com>
This patch itself is good.
Please check the following comments.
> ---
> [ morse: Added as second quirk, adapted to use the new intermediate values
> in mpam_extend_config() ]
>
> Changes since rfc:
> MPAM_IIDR_NVIDIA_T421 -> MPAM_IIDR_NVIDIA_T241
> Handling when reset_mbw_min is set
>
> Changes since v3:
> Move the 5% gap policy back here
> Clear mbw_min feature in class
> ---
> Documentation/arch/arm64/silicon-errata.rst | 2 +
> drivers/resctrl/mpam_devices.c | 50 +++++++++++++++++++--
> drivers/resctrl/mpam_internal.h | 1 +
> 3 files changed, 50 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
> index a65620f98e3a..a4b246655e37 100644
> --- a/Documentation/arch/arm64/silicon-errata.rst
> +++ b/Documentation/arch/arm64/silicon-errata.rst
> @@ -249,6 +249,8 @@ stable kernels.
> +----------------+-----------------+-----------------+-----------------------------+
> | NVIDIA | T241 MPAM | T241-MPAM-1 | N/A |
> +----------------+-----------------+-----------------+-----------------------------+
> +| NVIDIA | T241 MPAM | T241-MPAM-4 | N/A |
> ++----------------+-----------------+-----------------+-----------------------------+
> +----------------+-----------------+-----------------+-----------------------------+
> | Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
> +----------------+-----------------+-----------------+-----------------------------+
> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
> index 08cb080592d9..8f44e9dee207 100644
> --- a/drivers/resctrl/mpam_devices.c
> +++ b/drivers/resctrl/mpam_devices.c
> @@ -679,6 +679,12 @@ static const struct mpam_quirk mpam_quirks[] = {
> .iidr_mask = MPAM_IIDR_MATCH_ONE,
> .workaround = T241_SCRUB_SHADOW_REGS,
> },
> + {
> + /* NVIDIA t241 erratum T241-MPAM-4 */
> + .iidr = MPAM_IIDR_NVIDIA_T241,
> + .iidr_mask = MPAM_IIDR_MATCH_ONE,
> + .workaround = T241_FORCE_MBW_MIN_TO_ONE,
> + },
> { NULL } /* Sentinel */
> };
>
> @@ -1464,6 +1470,31 @@ static void mpam_quirk_post_config_change(struct mpam_msc_ris *ris, u16 partid,
> mpam_apply_t241_erratum(ris, partid);
> }
>
> +static u16 mpam_wa_t241_force_mbw_min_to_one(struct mpam_props *props)
> +{
> + u16 max_hw_value, min_hw_granule, res0_bits;
> +
> + res0_bits = 16 - props->bwa_wd;
> + max_hw_value = ((1 << props->bwa_wd) - 1) << res0_bits;
> + min_hw_granule = ~max_hw_value;
> +
> + return min_hw_granule + 1;
> +}
> +
> +static u16 mpam_wa_t241_calc_min_from_max(struct mpam_config *cfg)
> +{
> + u16 val = 0;
> +
> + if (mpam_has_feature(mpam_feat_mbw_max, cfg)) {
But the problem is mpam_feat_mbw_max feature is NOT set in cfg.
> + u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
> +
> + if (cfg->mbw_max > delta)
> + val = cfg->mbw_max - delta;
> + }
> +
> + return val;
So 0 is always returned.
The workaround will set mbw_min to 1%, which is too small and will cause
performance degradation, e.g. about 20% on some benchmarks.
This patch itself doesn't have any issue.
The issue is the mbw_max feature bit in cfg is not set.
This is a legacy issue, not introduced by this patch set.
Here is a fix patch for the issue:
https://lore.kernel.org/lkml/20260301171829.1357886-1-fenghuay@nvidia.com/T/#u
If the fix patch is good, could you please add it into the next version
of this series?
Thanks.
-Fenghua
* Re: [PATCH v5 38/41] arm_mpam: Add workaround for T241-MPAM-4
2026-03-01 17:28 ` Fenghua Yu
@ 2026-03-02 17:11 ` Ben Horgan
2026-03-09 17:39 ` Fenghua Yu
0 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-03-02 17:11 UTC (permalink / raw)
To: Fenghua Yu
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, gshan, james.morse, jonathan.cameron, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
Hi Fenghua,
On 3/1/26 17:28, Fenghua Yu wrote:
> Hi, Ben,
>
> On 2/24/26 09:57, Ben Horgan wrote:
>> From: Shanker Donthineni <sdonthineni@nvidia.com>
>>
>> In the T241 implementation of memory-bandwidth partitioning, in the
>> absence
>> of contention for bandwidth, the minimum bandwidth setting can affect the
>> amount of achieved bandwidth. Specifically, the achieved bandwidth in the
>> absence of contention can settle to any value between the values of
>> MPAMCFG_MBW_MIN and MPAMCFG_MBW_MAX. Also, if MPAMCFG_MBW_MIN is set
>> zero (below 0.78125%), once a core enters a throttled state, it will
>> never
>> leave that state.
>>
>> The first issue is not a concern if the MPAM software allows to program
>> MPAMCFG_MBW_MIN through the sysfs interface. This patch ensures program
>> MBW_MIN=1 (0.78125%) whenever MPAMCFG_MBW_MIN=0 is programmed.
>>
>> In the scenario where the resctrl doesn't support the MBW_MIN
>> interface via
>> sysfs, to achieve bandwidth closer to MBW_MAX in the absence of
>> contention,
>> software should configure a relatively narrow gap between MBW_MIN and
>> MBW_MAX. The recommendation is to use a 5% gap to mitigate the problem.
>>
>> Clear the feature MBW_MIN feature from the class to ensure we don't
>> accidentally change behaviour when resctrl adds support for a MBW_MIN
>> interface.
>>
>> Tested-by: Gavin Shan <gshan@redhat.com>
>> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
>> Signed-off-by: James Morse <james.morse@arm.com>
>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>
> Reviewed-by: Fenghua Yu <fenghuay@nvidia.com>
>
> This patch itself is good.
>
> Please check the following comments.
>
>> ---
>> [ morse: Added as second quirk, adapted to use the new intermediate
>> values
>> in mpam_extend_config() ]
>>
>> Changes since rfc:
>> MPAM_IIDR_NVIDIA_T421 -> MPAM_IIDR_NVIDIA_T241
>> Handling when reset_mbw_min is set
>>
>> Changes since v3:
>> Move the 5% gap policy back here
>> Clear mbw_min feature in class
>> ---
>> Documentation/arch/arm64/silicon-errata.rst | 2 +
>> drivers/resctrl/mpam_devices.c | 50 +++++++++++++++++++--
>> drivers/resctrl/mpam_internal.h | 1 +
>> 3 files changed, 50 insertions(+), 3 deletions(-)
>>
>> diff --git a/Documentation/arch/arm64/silicon-errata.rst b/
>> Documentation/arch/arm64/silicon-errata.rst
>> index a65620f98e3a..a4b246655e37 100644
>> --- a/Documentation/arch/arm64/silicon-errata.rst
>> +++ b/Documentation/arch/arm64/silicon-errata.rst
>> @@ -249,6 +249,8 @@ stable kernels.
>> +----------------+-----------------+-----------------
>> +-----------------------------+
>> | NVIDIA | T241 MPAM | T241-MPAM-1 | N/
>> A |
>> +----------------+-----------------+-----------------
>> +-----------------------------+
>> +| NVIDIA | T241 MPAM | T241-MPAM-4 | N/
>> A |
>> ++----------------+-----------------+-----------------
>> +-----------------------------+
>> +----------------+-----------------+-----------------
>> +-----------------------------+
>> | Freescale/NXP | LS2080A/LS1043A | A-008585 |
>> FSL_ERRATUM_A008585 |
>> +----------------+-----------------+-----------------
>> +-----------------------------+
>> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/
>> mpam_devices.c
>> index 08cb080592d9..8f44e9dee207 100644
>> --- a/drivers/resctrl/mpam_devices.c
>> +++ b/drivers/resctrl/mpam_devices.c
>> @@ -679,6 +679,12 @@ static const struct mpam_quirk mpam_quirks[] = {
>> .iidr_mask = MPAM_IIDR_MATCH_ONE,
>> .workaround = T241_SCRUB_SHADOW_REGS,
>> },
>> + {
>> + /* NVIDIA t241 erratum T241-MPAM-4 */
>> + .iidr = MPAM_IIDR_NVIDIA_T241,
>> + .iidr_mask = MPAM_IIDR_MATCH_ONE,
>> + .workaround = T241_FORCE_MBW_MIN_TO_ONE,
>> + },
>> { NULL } /* Sentinel */
>> };
>> @@ -1464,6 +1470,31 @@ static void
>> mpam_quirk_post_config_change(struct mpam_msc_ris *ris, u16 partid,
>> mpam_apply_t241_erratum(ris, partid);
>> }
>> +static u16 mpam_wa_t241_force_mbw_min_to_one(struct mpam_props *props)
>> +{
>> + u16 max_hw_value, min_hw_granule, res0_bits;
>> +
>> + res0_bits = 16 - props->bwa_wd;
>> + max_hw_value = ((1 << props->bwa_wd) - 1) << res0_bits;
>> + min_hw_granule = ~max_hw_value;
>> +
>> + return min_hw_granule + 1;
>> +}
>> +
>> +static u16 mpam_wa_t241_calc_min_from_max(struct mpam_config *cfg)
>> +{
>> + u16 val = 0;
>> +
>> + if (mpam_has_feature(mpam_feat_mbw_max, cfg)) {
>
> But the problem is mpam_feat_mbw_max feature is NOT set in cfg.
>
>> + u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
>> +
>> + if (cfg->mbw_max > delta)
>> + val = cfg->mbw_max - delta;
>> + }
>> +
>> + return val;
>
> So 0 is always returned.
>
> The workaround will set mbw_min as 1% which is too small and will cause
> performance degradation, e.g. about 20% degradation on some benchmarks.
>
> This patch itself doesn't have any issue.
>
> The issue is the mbw_max feature bit in cfg is not set.
This is intended behaviour as the reset is done independently
from the value set in the config. The value is there so that
resctrl can display the expected values.
> This is a legacy issue, not introduced by this patch set.
> Here is a fix patch for the issue:
> https://lore.kernel.org/lkml/20260301171829.1357886-1-
> fenghuay@nvidia.com/T/#u
I've commented on that patch. I think it's best to fix it in the context
of the erratum.
Does the below solve your performance problems?
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index 236f78ab9163..60d3d3e2193f 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -1515,16 +1515,20 @@ static u16 mpam_wa_t241_force_mbw_min_to_one(struct mpam_props *props)
return min_hw_granule + 1;
}
-static u16 mpam_wa_t241_calc_min_from_max(struct mpam_config *cfg)
+static u16 mpam_wa_t241_calc_min_from_max(struct mpam_props *props,
+ struct mpam_config *cfg)
{
u16 val = 0;
+ u16 max;
+ u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
- if (mpam_has_feature(mpam_feat_mbw_max, cfg)) {
- u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
+ if (mpam_has_feature(mpam_feat_mbw_max, cfg))
+ max = cfg->mbw_max;
+ else
+ max = GENMASK(15, 16 - props->bwa_wd);
- if (cfg->mbw_max > delta)
- val = cfg->mbw_max - delta;
- }
+ if (max > delta)
+ val = max - delta;
return val;
}
@@ -1577,9 +1581,8 @@ static void mpam_reprogram_ris_partid(struct mpam_msc_ris *ris, u16 partid,
if (mpam_has_quirk(T241_FORCE_MBW_MIN_TO_ONE, msc)) {
u16 min = mpam_wa_t241_force_mbw_min_to_one(rprops);
- val = mpam_wa_t241_calc_min_from_max(cfg);
- if (val < min)
- val = min;
+ val = mpam_wa_t241_calc_min_from_max(rprops, cfg);
+ val = max(val, min);
}
mpam_write_partsel_reg(msc, MBW_MIN, val);
>
> If the fix patch is good, could you please add it into the next version
> of this series?
>
> Thanks.
>
> -Fenghua
Thanks,
Ben
* Re: [PATCH v5 38/41] arm_mpam: Add workaround for T241-MPAM-4
2026-03-02 17:11 ` Ben Horgan
@ 2026-03-09 17:39 ` Fenghua Yu
2026-03-10 11:26 ` Ben Horgan
0 siblings, 1 reply; 75+ messages in thread
From: Fenghua Yu @ 2026-03-09 17:39 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, gshan, james.morse, jonathan.cameron, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
Hi, Ben,
On 3/2/26 09:11, Ben Horgan wrote:
> Hi Fenghua,
>
> On 3/1/26 17:28, Fenghua Yu wrote:
>> Hi, Ben,
>>
>> On 2/24/26 09:57, Ben Horgan wrote:
>>> From: Shanker Donthineni <sdonthineni@nvidia.com>
>>>
>>> In the T241 implementation of memory-bandwidth partitioning, in the
>>> absence
>>> of contention for bandwidth, the minimum bandwidth setting can affect the
>>> amount of achieved bandwidth. Specifically, the achieved bandwidth in the
>>> absence of contention can settle to any value between the values of
>>> MPAMCFG_MBW_MIN and MPAMCFG_MBW_MAX. Also, if MPAMCFG_MBW_MIN is set
>>> zero (below 0.78125%), once a core enters a throttled state, it will
>>> never
>>> leave that state.
>>>
>>> The first issue is not a concern if the MPAM software allows to program
>>> MPAMCFG_MBW_MIN through the sysfs interface. This patch ensures program
>>> MBW_MIN=1 (0.78125%) whenever MPAMCFG_MBW_MIN=0 is programmed.
>>>
>>> In the scenario where the resctrl doesn't support the MBW_MIN
>>> interface via
>>> sysfs, to achieve bandwidth closer to MBW_MAX in the absence of
>>> contention,
>>> software should configure a relatively narrow gap between MBW_MIN and
>>> MBW_MAX. The recommendation is to use a 5% gap to mitigate the problem.
>>>
>>> Clear the feature MBW_MIN feature from the class to ensure we don't
>>> accidentally change behaviour when resctrl adds support for a MBW_MIN
>>> interface.
>>>
>>> Tested-by: Gavin Shan <gshan@redhat.com>
>>> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>>> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>>> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
>>> Signed-off-by: James Morse <james.morse@arm.com>
>>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>>
>> Reviewed-by: Fenghua Yu <fenghuay@nvidia.com>
>>
>> This patch itself is good.
>>
>> Please check the following comments.
>>
>>> ---
>>> [ morse: Added as second quirk, adapted to use the new intermediate
>>> values
>>> in mpam_extend_config() ]
>>>
>>> Changes since rfc:
>>> MPAM_IIDR_NVIDIA_T421 -> MPAM_IIDR_NVIDIA_T241
>>> Handling when reset_mbw_min is set
>>>
>>> Changes since v3:
>>> Move the 5% gap policy back here
>>> Clear mbw_min feature in class
>>> ---
>>> Documentation/arch/arm64/silicon-errata.rst | 2 +
>>> drivers/resctrl/mpam_devices.c | 50 +++++++++++++++++++--
>>> drivers/resctrl/mpam_internal.h | 1 +
>>> 3 files changed, 50 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/Documentation/arch/arm64/silicon-errata.rst b/
>>> Documentation/arch/arm64/silicon-errata.rst
>>> index a65620f98e3a..a4b246655e37 100644
>>> --- a/Documentation/arch/arm64/silicon-errata.rst
>>> +++ b/Documentation/arch/arm64/silicon-errata.rst
>>> @@ -249,6 +249,8 @@ stable kernels.
>>> +----------------+-----------------+-----------------
>>> +-----------------------------+
>>> | NVIDIA | T241 MPAM | T241-MPAM-1 | N/
>>> A |
>>> +----------------+-----------------+-----------------
>>> +-----------------------------+
>>> +| NVIDIA | T241 MPAM | T241-MPAM-4 | N/
>>> A |
>>> ++----------------+-----------------+-----------------
>>> +-----------------------------+
>>> +----------------+-----------------+-----------------
>>> +-----------------------------+
>>> | Freescale/NXP | LS2080A/LS1043A | A-008585 |
>>> FSL_ERRATUM_A008585 |
>>> +----------------+-----------------+-----------------
>>> +-----------------------------+
>>> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/
>>> mpam_devices.c
>>> index 08cb080592d9..8f44e9dee207 100644
>>> --- a/drivers/resctrl/mpam_devices.c
>>> +++ b/drivers/resctrl/mpam_devices.c
>>> @@ -679,6 +679,12 @@ static const struct mpam_quirk mpam_quirks[] = {
>>> .iidr_mask = MPAM_IIDR_MATCH_ONE,
>>> .workaround = T241_SCRUB_SHADOW_REGS,
>>> },
>>> + {
>>> + /* NVIDIA t241 erratum T241-MPAM-4 */
>>> + .iidr = MPAM_IIDR_NVIDIA_T241,
>>> + .iidr_mask = MPAM_IIDR_MATCH_ONE,
>>> + .workaround = T241_FORCE_MBW_MIN_TO_ONE,
>>> + },
>>> { NULL } /* Sentinel */
>>> };
>>> @@ -1464,6 +1470,31 @@ static void
>>> mpam_quirk_post_config_change(struct mpam_msc_ris *ris, u16 partid,
>>> mpam_apply_t241_erratum(ris, partid);
>>> }
>>> +static u16 mpam_wa_t241_force_mbw_min_to_one(struct mpam_props *props)
>>> +{
>>> + u16 max_hw_value, min_hw_granule, res0_bits;
>>> +
>>> + res0_bits = 16 - props->bwa_wd;
>>> + max_hw_value = ((1 << props->bwa_wd) - 1) << res0_bits;
>>> + min_hw_granule = ~max_hw_value;
>>> +
>>> + return min_hw_granule + 1;
>>> +}
>>> +
>>> +static u16 mpam_wa_t241_calc_min_from_max(struct mpam_config *cfg)
>>> +{
>>> + u16 val = 0;
>>> +
>>> + if (mpam_has_feature(mpam_feat_mbw_max, cfg)) {
>>
>> But the problem is mpam_feat_mbw_max feature is NOT set in cfg.
>>
>>> + u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
>>> +
>>> + if (cfg->mbw_max > delta)
>>> + val = cfg->mbw_max - delta;
>>> + }
>>> +
>>> + return val;
>>
>> So 0 is always returned.
>>
>> The workaround will set mbw_min as 1% which is too small and will cause
>> performance degradation, e.g. about 20% degradation on some benchmarks.
>>
>> This patch itself doesn't have any issue.
>>
>> The issue is the mbw_max feature bit in cfg is not set.
>
> This is intended behaviour as the reset is done independently
> from the value set in the config. The value is there so that
> resctrl can display the expected values.
>
>> This is a legacy issue, not introduced by this patch set.
>>> Here is a fix patch for the issue:
>> https://lore.kernel.org/lkml/20260301171829.1357886-1-
>> fenghuay@nvidia.com/T/#u
>
> I've commented on that patch. I think it's best to fix it in the context
> of the erratum.
>
> Does the below solve your performance problems?
>
> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
> index 236f78ab9163..60d3d3e2193f 100644
> --- a/drivers/resctrl/mpam_devices.c
> +++ b/drivers/resctrl/mpam_devices.c
> @@ -1515,16 +1515,20 @@ static u16 mpam_wa_t241_force_mbw_min_to_one(struct mpam_props *props)
> return min_hw_granule + 1;
> }
>
> -static u16 mpam_wa_t241_calc_min_from_max(struct mpam_config *cfg)
> +static u16 mpam_wa_t241_calc_min_from_max(struct mpam_props *props,
> + struct mpam_config *cfg)
> {
> u16 val = 0;
> + u16 max;
> + u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
>
> - if (mpam_has_feature(mpam_feat_mbw_max, cfg)) {
> - u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
> + if (mpam_has_feature(mpam_feat_mbw_max, cfg))
> + max = cfg->mbw_max;
> + else
> + max = GENMASK(15, 16 - cprops->bwa_wd);
>
> - if (cfg->mbw_max > delta)
> - val = cfg->mbw_max - delta;
> - }
Could you please add some comments on this piece of code? It's worth
commenting on why there are different values in cfg and props.
> + if (max > delta)
> + val = max - delta;
>
> return val;
> }
> @@ -1577,9 +1581,8 @@ static void mpam_reprogram_ris_partid(struct mpam_msc_ris *ris, u16 partid,
> if (mpam_has_quirk(T241_FORCE_MBW_MIN_TO_ONE, msc)) {
> u16 min = mpam_wa_t241_force_mbw_min_to_one(rprops);
>
> - val = mpam_wa_t241_calc_min_from_max(cfg);
> - if (val < min)
> - val = min;
> + val = mpam_wa_t241_calc_min_from_max(rprops, cfg);
> + val = max(val, min);
> }
>
> mpam_write_partsel_reg(msc, MBW_MIN, val);
>
Otherwise, this change looks good to me.
Thanks.
-Fenghua
* Re: [PATCH v5 38/41] arm_mpam: Add workaround for T241-MPAM-4
2026-03-09 17:39 ` Fenghua Yu
@ 2026-03-10 11:26 ` Ben Horgan
0 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-03-10 11:26 UTC (permalink / raw)
To: Fenghua Yu
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, gshan, james.morse, jonathan.cameron, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
Hi Fenghua,
On 3/9/26 17:39, Fenghua Yu wrote:
> Hi, Ben,
>
> On 3/2/26 09:11, Ben Horgan wrote:
>> Hi Fenghua,
>>
>> On 3/1/26 17:28, Fenghua Yu wrote:
>>> Hi, Ben,
>>>
>>> On 2/24/26 09:57, Ben Horgan wrote:
>>>> From: Shanker Donthineni <sdonthineni@nvidia.com>
>>>>
>>>> In the T241 implementation of memory-bandwidth partitioning, in the
>>>> absence
>>>> of contention for bandwidth, the minimum bandwidth setting can
>>>> affect the
>>>> amount of achieved bandwidth. Specifically, the achieved bandwidth
>>>> in the
>>>> absence of contention can settle to any value between the values of
>>>> MPAMCFG_MBW_MIN and MPAMCFG_MBW_MAX. Also, if MPAMCFG_MBW_MIN is set
>>>> zero (below 0.78125%), once a core enters a throttled state, it will
>>>> never
>>>> leave that state.
>>>>
>>>> The first issue is not a concern if the MPAM software allows to program
>>>> MPAMCFG_MBW_MIN through the sysfs interface. This patch ensures program
>>>> MBW_MIN=1 (0.78125%) whenever MPAMCFG_MBW_MIN=0 is programmed.
>>>>
>>>> In the scenario where the resctrl doesn't support the MBW_MIN
>>>> interface via
>>>> sysfs, to achieve bandwidth closer to MBW_MAX in the absence of
>>>> contention,
>>>> software should configure a relatively narrow gap between MBW_MIN and
>>>> MBW_MAX. The recommendation is to use a 5% gap to mitigate the problem.
>>>>
>>>> Clear the feature MBW_MIN feature from the class to ensure we don't
>>>> accidentally change behaviour when resctrl adds support for a MBW_MIN
>>>> interface.
>>>>
>>>> Tested-by: Gavin Shan <gshan@redhat.com>
>>>> Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>>>> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
>>>> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
>>>> Signed-off-by: James Morse <james.morse@arm.com>
>>>> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
>>>
>>> Reviewed-by: Fenghua Yu <fenghuay@nvidia.com>
>>>
>>> This patch itself is good.
>>>
>>> Please check the following comments.
>>>
>>>> ---
>>>> [ morse: Added as second quirk, adapted to use the new intermediate
>>>> values
>>>> in mpam_extend_config() ]
>>>>
>>>> Changes since rfc:
>>>> MPAM_IIDR_NVIDIA_T421 -> MPAM_IIDR_NVIDIA_T241
>>>> Handling when reset_mbw_min is set
>>>>
>>>> Changes since v3:
>>>> Move the 5% gap policy back here
>>>> Clear mbw_min feature in class
>>>> ---
>>>> Documentation/arch/arm64/silicon-errata.rst | 2 +
>>>> drivers/resctrl/mpam_devices.c | 50 ++++++++++++++++
>>>> +++--
>>>> drivers/resctrl/mpam_internal.h | 1 +
>>>> 3 files changed, 50 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/Documentation/arch/arm64/silicon-errata.rst b/
>>>> Documentation/arch/arm64/silicon-errata.rst
>>>> index a65620f98e3a..a4b246655e37 100644
>>>> --- a/Documentation/arch/arm64/silicon-errata.rst
>>>> +++ b/Documentation/arch/arm64/silicon-errata.rst
>>>> @@ -249,6 +249,8 @@ stable kernels.
>>>> +----------------+-----------------+-----------------
>>>> +-----------------------------+
>>>> | NVIDIA | T241 MPAM | T241-MPAM-1 | N/
>>>> A |
>>>> +----------------+-----------------+-----------------
>>>> +-----------------------------+
>>>> +| NVIDIA | T241 MPAM | T241-MPAM-4 | N/
>>>> A |
>>>> ++----------------+-----------------+-----------------
>>>> +-----------------------------+
>>>> +----------------+-----------------+-----------------
>>>> +-----------------------------+
>>>> | Freescale/NXP | LS2080A/LS1043A | A-008585 |
>>>> FSL_ERRATUM_A008585 |
>>>> +----------------+-----------------+-----------------
>>>> +-----------------------------+
>>>> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/
>>>> mpam_devices.c
>>>> index 08cb080592d9..8f44e9dee207 100644
>>>> --- a/drivers/resctrl/mpam_devices.c
>>>> +++ b/drivers/resctrl/mpam_devices.c
>>>> @@ -679,6 +679,12 @@ static const struct mpam_quirk mpam_quirks[] = {
>>>> .iidr_mask = MPAM_IIDR_MATCH_ONE,
>>>> .workaround = T241_SCRUB_SHADOW_REGS,
>>>> },
>>>> + {
>>>> + /* NVIDIA t241 erratum T241-MPAM-4 */
>>>> + .iidr = MPAM_IIDR_NVIDIA_T241,
>>>> + .iidr_mask = MPAM_IIDR_MATCH_ONE,
>>>> + .workaround = T241_FORCE_MBW_MIN_TO_ONE,
>>>> + },
>>>> { NULL } /* Sentinel */
>>>> };
>>>> @@ -1464,6 +1470,31 @@ static void
>>>> mpam_quirk_post_config_change(struct mpam_msc_ris *ris, u16 partid,
>>>> mpam_apply_t241_erratum(ris, partid);
>>>> }
>>>> +static u16 mpam_wa_t241_force_mbw_min_to_one(struct mpam_props
>>>> *props)
>>>> +{
>>>> + u16 max_hw_value, min_hw_granule, res0_bits;
>>>> +
>>>> + res0_bits = 16 - props->bwa_wd;
>>>> + max_hw_value = ((1 << props->bwa_wd) - 1) << res0_bits;
>>>> + min_hw_granule = ~max_hw_value;
>>>> +
>>>> + return min_hw_granule + 1;
>>>> +}
>>>> +
>>>> +static u16 mpam_wa_t241_calc_min_from_max(struct mpam_config *cfg)
>>>> +{
>>>> + u16 val = 0;
>>>> +
>>>> + if (mpam_has_feature(mpam_feat_mbw_max, cfg)) {
>>>
>>> But the problem is that the mpam_feat_mbw_max feature is NOT set in cfg.
>>>
>>>> + u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
>>>> +
>>>> + if (cfg->mbw_max > delta)
>>>> + val = cfg->mbw_max - delta;
>>>> + }
>>>> +
>>>> + return val;
>>>
>>> So 0 is always returned.
>>>
>>> The workaround will set mbw_min as 1% which is too small and will cause
>>> performance degradation, e.g. about 20% degradation on some benchmarks.
>>>
>>> This patch itself doesn't have any issue.
>>>
>>> The issue is the mbw_max feature bit in cfg is not set.
>>
>> This is intended behaviour as the reset is done independently
>> from the value set in the config. The value is there so that
>> resctrl can display the expected values.
>>
>>> This is a legacy issue, not introduced by this patch set.
>>> Here is a fix patch for the issue:
>>> https://lore.kernel.org/lkml/20260301171829.1357886-1-fenghuay@nvidia.com/T/#u
>>
>> I've commented on that patch. I think it's best to fix it in the context
>> of the erratum.
>>
>> Does the below solve your performance problems?
>>
>> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
>> index 236f78ab9163..60d3d3e2193f 100644
>> --- a/drivers/resctrl/mpam_devices.c
>> +++ b/drivers/resctrl/mpam_devices.c
>> @@ -1515,16 +1515,20 @@ static u16 mpam_wa_t241_force_mbw_min_to_one(struct mpam_props *props)
>> return min_hw_granule + 1;
>> }
>> -static u16 mpam_wa_t241_calc_min_from_max(struct mpam_config *cfg)
>> +static u16 mpam_wa_t241_calc_min_from_max(struct mpam_props *props,
>> + struct mpam_config *cfg)
>> {
>> u16 val = 0;
>> + u16 max;
>> + u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
>> - if (mpam_has_feature(mpam_feat_mbw_max, cfg)) {
>> - u16 delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;
>> + if (mpam_has_feature(mpam_feat_mbw_max, cfg))
>> + max = cfg->mbw_max;
>> + else
>> + max = GENMASK(15, 16 - props->bwa_wd);
>> - if (cfg->mbw_max > delta)
>> - val = cfg->mbw_max - delta;
>> - }
>
> Could you please add some comments on this piece of code? It's worth
> commenting on why there are different values in cfg and props.
Sure, how about this?
	} else {
		/* Resetting so use the ris specific default. */
		max = GENMASK(15, 16 - props->bwa_wd);
	}
>> + if (max > delta)
>> + val = max - delta;
>> return val;
>> }
>> @@ -1577,9 +1581,8 @@ static void mpam_reprogram_ris_partid(struct mpam_msc_ris *ris, u16 partid,
>> if (mpam_has_quirk(T241_FORCE_MBW_MIN_TO_ONE, msc)) {
>> u16 min = mpam_wa_t241_force_mbw_min_to_one(rprops);
>> - val = mpam_wa_t241_calc_min_from_max(cfg);
>> - if (val < min)
>> - val = min;
>> + val = mpam_wa_t241_calc_min_from_max(rprops, cfg);
>> + val = max(val, min);
>> }
>> mpam_write_partsel_reg(msc, MBW_MIN, val);
>>
>
> Otherwise, this change looks good to me.
Did you get a chance to confirm if it behaves as expected on your hardware?
>
> Thanks.
>
> -Fenghua
Thanks,
Ben
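For reference, the MBW_MIN arithmetic discussed in this sub-thread can be sketched as a standalone snippet. This is an illustrative sketch, not the kernel code: the helper names mirror the driver's, and MPAMCFG_MBW_MAX_MAX is assumed here to be the full 16-bit fraction 0xffff.

```c
#include <assert.h>
#include <stdint.h>

/* Assumption: MBW values are 16-bit fixed-point fractions. */
#define MPAMCFG_MBW_MAX_MAX 0xffffu

/*
 * Mirrors mpam_wa_t241_force_mbw_min_to_one(): the smallest non-zero
 * MBW_MIN the hardware can represent when only bwa_wd bits of the
 * 16-bit field are implemented.
 */
static uint16_t t241_min_granule(unsigned int bwa_wd)
{
	uint16_t res0_bits = 16 - bwa_wd;
	uint16_t max_hw_value = ((1u << bwa_wd) - 1) << res0_bits;
	uint16_t min_hw_granule = (uint16_t)~max_hw_value;

	return min_hw_granule + 1;
}

/*
 * Mirrors the proposed mpam_wa_t241_calc_min_from_max(): derive
 * MBW_MIN as roughly 5% below MBW_MAX, clamping at zero.
 */
static uint16_t t241_min_from_max(uint16_t mbw_max)
{
	uint16_t delta = ((5 * MPAMCFG_MBW_MAX_MAX) / 100) - 1;

	return mbw_max > delta ? mbw_max - delta : 0;
}
```

With bwa_wd = 8, the granule is 0x0100; with the reset default max of GENMASK(15, 8) = 0xff00, the derived minimum sits about 5% below it, which is the behaviour the proposed fix aims for instead of the always-zero result from the unset feature bit.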
^ permalink raw reply [flat|nested] 75+ messages in thread
* [PATCH v5 39/41] arm_mpam: Add workaround for T241-MPAM-6
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (37 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 38/41] arm_mpam: Add workaround for T241-MPAM-4 Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 40/41] arm_mpam: Quirk CMN-650's CSU NRDY behaviour Ben Horgan
` (4 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
From: Shanker Donthineni <sdonthineni@nvidia.com>
The registers MSMON_MBWU_L and MSMON_MBWU return the number of requests
rather than the number of bytes transferred.
Bandwidth resource monitoring is performed at the last level cache, where
each request arrives at 64-byte granularity. The current implementation
returns the number of transactions received at the last level cache but
does not provide the value in bytes. Scaling by 64 gives an accurate byte
count to match the MPAM specification for the MSMON_MBWU and MSMON_MBWU_L
registers. This patch fixes the issue by reporting the actual number of
bytes instead of the number of transactions from __ris_msmon_read().
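The scaling described above can be illustrated with a standalone sketch (not the kernel code): both the live counter value and the overflow correction are multiplied by 64, with the 63-bit long counter exempt. The 2^31 wrap value used for the 31-bit counter is an assumption for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * T241-MPAM-6 sketch: MBWU counters report 64-byte transactions,
 * so raw counts are scaled to bytes. The 63-bit long counter is
 * exempt from the quirk.
 */
static uint64_t t241_scale(uint64_t raw, bool is_63bit_counter)
{
	return is_63bit_counter ? raw : raw * 64;
}

/*
 * Overflow correction for a 31-bit counter, assuming it wraps at
 * 2^31 transactions: the correction term must be scaled too.
 */
static uint64_t t241_overflow_correction_31(void)
{
	return t241_scale(1ull << 31, false);
}
```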
Tested-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since rfc:
MPAM_IIDR_NVIDIA_T421 -> MPAM_IIDR_NVIDIA_T241
Don't apply workaround to MSMON_MBWU_LWD
---
Documentation/arch/arm64/silicon-errata.rst | 2 ++
drivers/resctrl/mpam_devices.c | 26 +++++++++++++++++++--
drivers/resctrl/mpam_internal.h | 1 +
3 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
index a4b246655e37..1aa3326bb320 100644
--- a/Documentation/arch/arm64/silicon-errata.rst
+++ b/Documentation/arch/arm64/silicon-errata.rst
@@ -251,6 +251,8 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| NVIDIA | T241 MPAM | T241-MPAM-4 | N/A |
+----------------+-----------------+-----------------+-----------------------------+
+| NVIDIA | T241 MPAM | T241-MPAM-6 | N/A |
++----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
| Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
+----------------+-----------------+-----------------+-----------------------------+
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index 8f44e9dee207..48a233875a9a 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -685,6 +685,12 @@ static const struct mpam_quirk mpam_quirks[] = {
.iidr_mask = MPAM_IIDR_MATCH_ONE,
.workaround = T241_FORCE_MBW_MIN_TO_ONE,
},
+ {
+ /* NVIDIA t241 erratum T241-MPAM-6 */
+ .iidr = MPAM_IIDR_NVIDIA_T241,
+ .iidr_mask = MPAM_IIDR_MATCH_ONE,
+ .workaround = T241_MBW_COUNTER_SCALE_64,
+ },
{ NULL } /* Sentinel */
};
@@ -1146,7 +1152,7 @@ static void write_msmon_ctl_flt_vals(struct mon_read *m, u32 ctl_val,
}
}
-static u64 mpam_msmon_overflow_val(enum mpam_device_features type)
+static u64 __mpam_msmon_overflow_val(enum mpam_device_features type)
{
/* TODO: implement scaling counters */
switch (type) {
@@ -1161,6 +1167,18 @@ static u64 mpam_msmon_overflow_val(enum mpam_device_features type)
}
}
+static u64 mpam_msmon_overflow_val(enum mpam_device_features type,
+ struct mpam_msc *msc)
+{
+ u64 overflow_val = __mpam_msmon_overflow_val(type);
+
+ if (mpam_has_quirk(T241_MBW_COUNTER_SCALE_64, msc) &&
+ type != mpam_feat_msmon_mbwu_63counter)
+ overflow_val *= 64;
+
+ return overflow_val;
+}
+
static void __ris_msmon_read(void *arg)
{
u64 now;
@@ -1251,13 +1269,17 @@ static void __ris_msmon_read(void *arg)
now = FIELD_GET(MSMON___VALUE, now);
}
+ if (mpam_has_quirk(T241_MBW_COUNTER_SCALE_64, msc) &&
+ m->type != mpam_feat_msmon_mbwu_63counter)
+ now *= 64;
+
if (nrdy)
break;
mbwu_state = &ris->mbwu_state[ctx->mon];
if (overflow)
- mbwu_state->correction += mpam_msmon_overflow_val(m->type);
+ mbwu_state->correction += mpam_msmon_overflow_val(m->type, msc);
/*
* Include bandwidth consumed before the last hardware reset and
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 9f92fd49a61c..1443a1dd996e 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -225,6 +225,7 @@ struct mpam_props {
enum mpam_device_quirks {
T241_SCRUB_SHADOW_REGS,
T241_FORCE_MBW_MIN_TO_ONE,
+ T241_MBW_COUNTER_SCALE_64,
MPAM_QUIRK_LAST
};
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread
* [PATCH v5 40/41] arm_mpam: Quirk CMN-650's CSU NRDY behaviour
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (38 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 39/41] arm_mpam: Add workaround for T241-MPAM-6 Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-24 17:57 ` [PATCH v5 41/41] arm64: mpam: Add initial MPAM documentation Ben Horgan
` (3 subsequent siblings)
43 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc
From: James Morse <james.morse@arm.com>
CMN-650 is afflicted with an erratum where the CSU NRDY bit never clears.
This tells us the monitor never finishes scanning the cache. The erratum
document says to wait the maximum time, then ignore the field.
Add a flag to indicate whether this is the final attempt to read the
counter, and when this quirk is applied, ignore the NRDY field.
This means accesses to this counter will always retry, even if the counter
was previously programmed to the same values.
The counter value is not expected to be stable; it drifts up and down with
each allocation and eviction. The CSU register provides the value for a
point in time.
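The quirk logic described above can be sketched as a small standalone predicate (illustrative only; the real code folds this into __ris_msmon_read()): the hardware NRDY bit is trusted unless the quirk applies and the maximum wait has already elapsed.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * CMN-650 CSU erratum sketch: NRDY never clears on affected parts,
 * so once the maximum wait time has elapsed (waited_timeout) the
 * field is ignored and the counter value is used as-is.
 */
static bool csu_counter_not_ready(bool hw_nrdy, bool has_quirk,
				  bool waited_timeout)
{
	if (has_quirk && waited_timeout)
		return false;	/* erratum: ignore the stuck NRDY bit */
	return hw_nrdy;
}
```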
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes since v3:
parentheses in macro
---
Documentation/arch/arm64/silicon-errata.rst | 3 +++
drivers/resctrl/mpam_devices.c | 12 ++++++++++++
drivers/resctrl/mpam_internal.h | 6 ++++++
3 files changed, 21 insertions(+)
diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
index 1aa3326bb320..65ed6ea33751 100644
--- a/Documentation/arch/arm64/silicon-errata.rst
+++ b/Documentation/arch/arm64/silicon-errata.rst
@@ -214,6 +214,9 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| ARM | SI L1 | #4311569 | ARM64_ERRATUM_4311569 |
+----------------+-----------------+-----------------+-----------------------------+
+| ARM | CMN-650 | #3642720 | N/A |
++----------------+-----------------+-----------------+-----------------------------+
++----------------+-----------------+-----------------+-----------------------------+
| Broadcom | Brahma-B53 | N/A | ARM64_ERRATUM_845719 |
+----------------+-----------------+-----------------+-----------------------------+
| Broadcom | Brahma-B53 | N/A | ARM64_ERRATUM_843419 |
diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
index 48a233875a9a..9182c8fcf003 100644
--- a/drivers/resctrl/mpam_devices.c
+++ b/drivers/resctrl/mpam_devices.c
@@ -691,6 +691,12 @@ static const struct mpam_quirk mpam_quirks[] = {
.iidr_mask = MPAM_IIDR_MATCH_ONE,
.workaround = T241_MBW_COUNTER_SCALE_64,
},
+ {
+ /* ARM CMN-650 CSU erratum 3642720 */
+ .iidr = MPAM_IIDR_ARM_CMN_650,
+ .iidr_mask = MPAM_IIDR_MATCH_ONE,
+ .workaround = IGNORE_CSU_NRDY,
+ },
{ NULL } /* Sentinel */
};
@@ -1003,6 +1009,7 @@ struct mon_read {
enum mpam_device_features type;
u64 *val;
int err;
+ bool waited_timeout;
};
static bool mpam_ris_has_mbwu_long_counter(struct mpam_msc_ris *ris)
@@ -1249,6 +1256,10 @@ static void __ris_msmon_read(void *arg)
if (mpam_has_feature(mpam_feat_msmon_csu_hw_nrdy, rprops))
nrdy = now & MSMON___NRDY;
now = FIELD_GET(MSMON___VALUE, now);
+
+ if (mpam_has_quirk(IGNORE_CSU_NRDY, msc) && m->waited_timeout)
+ nrdy = false;
+
break;
case mpam_feat_msmon_mbwu_31counter:
case mpam_feat_msmon_mbwu_44counter:
@@ -1386,6 +1397,7 @@ int mpam_msmon_read(struct mpam_component *comp, struct mon_cfg *ctx,
.ctx = ctx,
.type = type,
.val = val,
+ .waited_timeout = true,
};
*val = 0;
diff --git a/drivers/resctrl/mpam_internal.h b/drivers/resctrl/mpam_internal.h
index 1443a1dd996e..195ab821cc52 100644
--- a/drivers/resctrl/mpam_internal.h
+++ b/drivers/resctrl/mpam_internal.h
@@ -226,6 +226,7 @@ enum mpam_device_quirks {
T241_SCRUB_SHADOW_REGS,
T241_FORCE_MBW_MIN_TO_ONE,
T241_MBW_COUNTER_SCALE_64,
+ IGNORE_CSU_NRDY,
MPAM_QUIRK_LAST
};
@@ -251,6 +252,11 @@ struct mpam_quirk {
FIELD_PREP_CONST(MPAMF_IIDR_REVISION, 0) | \
FIELD_PREP_CONST(MPAMF_IIDR_IMPLEMENTER, 0x36b))
+#define MPAM_IIDR_ARM_CMN_650 (FIELD_PREP_CONST(MPAMF_IIDR_PRODUCTID, 0) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_VARIANT, 0) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_REVISION, 0) | \
+ FIELD_PREP_CONST(MPAMF_IIDR_IMPLEMENTER, 0x43b))
+
/* The values for MSMON_CFG_MBWU_FLT.RWBW */
enum mon_filter_options {
COUNT_BOTH = 0,
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread
* [PATCH v5 41/41] arm64: mpam: Add initial MPAM documentation
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (39 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 40/41] arm_mpam: Quirk CMN-650's CSU NRDY behaviour Ben Horgan
@ 2026-02-24 17:57 ` Ben Horgan
2026-02-25 11:01 ` Jonathan Cameron
2026-02-25 21:10 ` [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (2 subsequent siblings)
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-24 17:57 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc, Shaopeng Tan
MPAM (Memory Partitioning and Monitoring) is now exposed to user-space via
resctrl. Add some documentation so the user knows what features to expect.
Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ben Horgan <ben.horgan@arm.com>
---
Changes by Ben:
Some tidying, update for current heuristics
Changes from v4:
Fix unusual indentation
---
Documentation/arch/arm64/index.rst | 1 +
Documentation/arch/arm64/mpam.rst | 94 ++++++++++++++++++++++++++++++
2 files changed, 95 insertions(+)
create mode 100644 Documentation/arch/arm64/mpam.rst
diff --git a/Documentation/arch/arm64/index.rst b/Documentation/arch/arm64/index.rst
index af52edc8c0ac..98052b4ef4a1 100644
--- a/Documentation/arch/arm64/index.rst
+++ b/Documentation/arch/arm64/index.rst
@@ -23,6 +23,7 @@ ARM64 Architecture
memory
memory-tagging-extension
mops
+ mpam
perf
pointer-authentication
ptdump
diff --git a/Documentation/arch/arm64/mpam.rst b/Documentation/arch/arm64/mpam.rst
new file mode 100644
index 000000000000..6dc3de54ec9a
--- /dev/null
+++ b/Documentation/arch/arm64/mpam.rst
@@ -0,0 +1,94 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====
+MPAM
+====
+
+What is MPAM
+============
+MPAM (Memory Partitioning and Monitoring) is a feature of the CPUs and memory
+system components, such as the caches or memory controllers, that allows memory
+traffic to be labelled, partitioned and monitored.
+
+Traffic is labelled by the CPU, based on the control or monitor group the
+current task is assigned to using resctrl. Partitioning policy can be set
+using the schemata file in resctrl, and monitor values read via resctrl.
+See Documentation/filesystems/resctrl.rst for more details.
+
+This allows tasks that share memory system resources, such as caches, to be
+isolated from each other according to the partitioning policy (mitigating
+so-called noisy neighbours).
+
+Supported Platforms
+===================
+Use of this feature requires CPU support, support in the memory system
+components, and a description from firmware of where the MPAM device controls
+are in the MMIO address space (e.g. the 'MPAM' ACPI table).
+
+The MMIO device that provides MPAM controls/monitors for a memory system
+component is called a memory system component (MSC).
+
+Because the user interface to MPAM is via resctrl, only MPAM features that are
+compatible with resctrl can be exposed to user-space.
+
+MSC are considered as a group based on the topology. MSC that correspond with
+the L3 cache are considered together; it is not possible to mix MSC between L2
+and L3 to 'cover' a resctrl schema.
+
+The supported features are:
+
+* Cache portion bitmap controls (CPOR) on the L2 or L3 caches. To expose
+ CPOR at L2 or L3, every CPU must have a corresponding CPU cache at this
+ level that also supports the feature. Mismatched big/little platforms are
+ not supported as resctrl's controls would then also depend on task
+ placement.
+
+* Memory bandwidth maximum controls (MBW_MAX) on or after the L3 cache.
+ resctrl uses the L3 cache-id to identify where the memory bandwidth
+ control is applied. For this reason the platform must have an L3 cache
+ with cache-id's supplied by firmware. (It doesn't need to support MPAM.)
+
+ To be exported as the 'MB' schema, the topology of the group of MSC chosen
+ must match the topology of the L3 cache so that the cache-id's can be
+ repainted. For example: Platforms with Memory bandwidth maximum controls
+ on CPU-less NUMA nodes cannot expose the 'MB' schema to resctrl as these
+ nodes do not have a corresponding L3 cache. If the memory bandwidth
+ control is on the memory rather than the L3 then there must be a single
+ global L3 as otherwise it is unknown which L3 the traffic came from.
+
+ When the MPAM driver finds multiple groups of MSC it can use for the 'MB'
+ schema, it prefers the group closest to the L3 cache.
+
+* Cache Storage Usage (CSU) counters can expose the 'llc_occupancy' provided
+ there is at least one CSU monitor on each MSC that makes up the L3 group.
+ Exposing CSU counters from other caches or devices is not supported.
+
+* Memory Bandwidth Usage (MBWU) on or after the L3 cache. resctrl uses the
+ L3 cache-id to identify where the memory bandwidth is measured. For this
+ reason the platform must have an L3 cache with cache-id's supplied by
+ firmware. (It doesn't need to support MPAM.)
+
+ Memory bandwidth monitoring makes use of MBWU monitors in each MSC that
+ makes up the L3 group. If there are more monitors than the maximum number
+ of control and monitor groups, these will be allocated and configured at
+ boot. Otherwise, the monitors will not be usable as they are required to
+ be free running. If the memory bandwidth monitoring is on the memory
+ rather than the L3 then there must be a single global L3 as otherwise it
+ is unknown which L3 the traffic came from.
+
+ To expose 'mbm_total_bytes', the topology of the group of MSC chosen must
+ match the topology of the L3 cache so that the cache-id's can be
+ repainted. For example: Platforms with Memory bandwidth monitors on
+ CPU-less NUMA nodes cannot expose 'mbm_total_bytes' as these nodes do not
+ have a corresponding L3 cache. 'mbm_local_bytes' is not exposed as MPAM
+ cannot distinguish local traffic from global traffic.
+
+Feature emulation
+=================
+MPAM will emulate the Code Data Prioritisation (CDP) feature on all platforms.
+
+Reporting Bugs
+==============
+If you are not seeing the counters or controls you expect please share the
+debug messages produced when enabling dynamic debug and booting with:
+dyndbg="file mpam_resctrl.c +pl"
--
2.43.0
^ permalink raw reply related [flat|nested] 75+ messages in thread
* Re: [PATCH v5 41/41] arm64: mpam: Add initial MPAM documentation
2026-02-24 17:57 ` [PATCH v5 41/41] arm64: mpam: Add initial MPAM documentation Ben Horgan
@ 2026-02-25 11:01 ` Jonathan Cameron
0 siblings, 0 replies; 75+ messages in thread
From: Jonathan Cameron @ 2026-02-25 11:01 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, kobak, lcherian,
linux-arm-kernel, linux-kernel, peternewman, punit.agrawal,
quic_jiles, reinette.chatre, rohit.mathew, scott, sdonthineni,
tan.shaopeng, xhao, catalin.marinas, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc,
Shaopeng Tan
On Tue, 24 Feb 2026 17:57:20 +0000
Ben Horgan <ben.horgan@arm.com> wrote:
> MPAM (Memory Partitioning and Monitoring) is now exposed to user-space via
> resctrl. Add some documentation so the user knows what features to expect.
>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@jp.fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Ben Horgan <ben.horgan@arm.com>
LGTM
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (40 preceding siblings ...)
2026-02-24 17:57 ` [PATCH v5 41/41] arm64: mpam: Add initial MPAM documentation Ben Horgan
@ 2026-02-25 21:10 ` Ben Horgan
2026-02-27 17:04 ` Catalin Marinas
2026-02-26 7:34 ` Zeng Heng
2026-03-03 20:18 ` Punit Agrawal
43 siblings, 1 reply; 75+ messages in thread
From: Ben Horgan @ 2026-02-25 21:10 UTC (permalink / raw)
To: ben.horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc
On 2/24/26 17:56, Ben Horgan wrote:
> The main change in this version of the mpam missing pieces series is to
> update the cdp emulation to match the resctrl interface. L2 and L3
> resources can now enable cdp separately. Cdp can't be hidden correctly for
> memory bandwidth allocation, as max per partid can't be emulated with more
> partids, and so we hide this completely when cdp is enabled. There is a little
> restructuring and a few smaller changes.
>
> Changelogs in patches
>
> It would be great to get this series merged this cycle. For that we'll need
> more testing and reviewing. Thanks!
>
There is a small build conflict with the resctrl abmc precursors [1]. The
last patch of that series applies on top of this series and, if the abmc
precursors go first, that patch should go with this series to fix the
build. Alternatively, if it's obvious ahead of time, it can be squashed
into patch 33 with the other empty resctrl arch hooks.
[1]
https://lore.kernel.org/lkml/20260225201905.3568624-5-ben.horgan@arm.com/
Thanks,
Ben
^ permalink raw reply [flat|nested] 75+ messages in thread* Re: [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code
2026-02-25 21:10 ` [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
@ 2026-02-27 17:04 ` Catalin Marinas
0 siblings, 0 replies; 75+ messages in thread
From: Catalin Marinas @ 2026-02-27 17:04 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, will, corbet, maz, oupton,
joey.gouly, suzuki.poulose, kvmarm, zengheng4, linux-doc
On Wed, Feb 25, 2026 at 09:10:08PM +0000, Ben Horgan wrote:
> On 2/24/26 17:56, Ben Horgan wrote:
> > The main change in this version of the mpam missing pieces series is to
> > update the cdp emulation to match the resctrl interface. L2 and L3
> > resources can now enable cdp separately. Cdp can't be hidden correctly for
> > memory bandwidth allocation, as max per partid can't be emulated with more
> > partids, and so we hide this completely when cdp is enabled. There is a little
> > restructuring and a few smaller changes.
> >
> > Changelogs in patches
> >
> > It would be great to get this series merged this cycle. For that we'll need
> > more testing and reviewing. Thanks!
> >
>
> There is a small build conflict with resctrl abmc precursors, [1]. The
> last patch of that series applies on top of this series and if the abmc
> precursors go first that patch should go with this series to fix the
> build. Alternatively, if it's obvious ahead of time it can be squashed
> into pathc 33 with the other empty resctrl arch hooks.
>
> [1]
> https://lore.kernel.org/lkml/20260225201905.3568624-5-ben.horgan@arm.com/
Typically we resolve these by having the first three patches in the
above series on a stable branch (could be tip as it touches x86) and we
can base the 41 patches here on top, together with the last one from the
abmc series.
Alternatively, happy to take them all via the arm64 tree as long as
Reinette/Tony are ok with this and ack the abmc patches.
--
Catalin
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (41 preceding siblings ...)
2026-02-25 21:10 ` [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
@ 2026-02-26 7:34 ` Zeng Heng
2026-03-03 20:18 ` Punit Agrawal
43 siblings, 0 replies; 75+ messages in thread
From: Zeng Heng @ 2026-02-26 7:34 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, linux-doc
Hi Ben,
On 2026/2/25 1:56, Ben Horgan wrote:
> The main change in this version of the mpam missing pieces series is to
> update the cdp emulation to match the resctrl interface. L2 and L3
> resources can now enable cdp separately. Cdp can't be hidden correctly for
> memory bandwidth allocation, as max per partid can't be emulated with more
> partids, and so we hide this completely when cdp is enabled. There is a little
> restructuring and a few smaller changes.
>
> Changelogs in patches
>
> It would be great to get this series merged this cycle. For that we'll need
> more testing and reviewing. Thanks!
>
> From James' cover letter:
>
> This is the missing piece to make MPAM usable resctrl in user-space. This has
> shed its debugfs code and the read/write 'event configuration' for the monitors
> to make the series smaller.
>
> This adds the arch code and KVM support first. I anticipate the whole thing
> going via arm64, but if it goes via tip instead, then an immutable branch with
> those patches should be easy to do.
>
> Generally the resctrl glue code works by picking what MPAM features it can expose
> from the MPAM driver, then configuring the structs that back the resctrl helpers.
> If your platform is sufficiently Xeon shaped, you should be able to get L2/L3 CPOR
> bitmaps exposed via resctrl. CSU counters work if they are on/after the L3. MBWU
> counters are considerably more hairy, and depend on heuristics around the topology,
> and a bunch of stuff trying to emulate ABMC.
> If it didn't pick what you wanted it to, please share the debug messages produced
> when enabling dynamic debug and booting with:
> | dyndbg="file mpam_resctrl.c +pl"
>
> I've not found a platform that can test all the behaviours around the monitors,
> so this is where I'd expect the most bugs.
>
> The MPAM spec that describes all the system and MMIO registers can be found here:
> https://developer.arm.com/documentation/ddi0598/db/?lang=en
> (Ignore the 'RETIRED' warning - that is just arm moving the documentation around.
> This document has the best overview)
>
I have completed retesting based on glue v5. The latest boot logs are
provided below:
# dmesg | grep -i mpam
[ 0.000000] ACPI: MPAM 0x000000007FF34018 003024 (v01 HISI HIP12
00000000 HISI 20151124)
[ 0.000000] Kernel command line:
BOOT_IMAGE=/vmlinuz-7.0.0-rc1-g4288ec146462
root=UUID=e0c69d2c-35e2-4ed0-9b5b-338fe4e689e8 ro cgroup_disable=files
apparmor=0 crashkernel=1024M,high smmu.bypassdev=0x1000:0x17
smmu.bypassdev=0x1000:0x15 arm64.nopauth console=ttyAMA0,115200
net.ifnames=0
modprobe.blacklist=hibmc_drm,ipmi_ssif,ipmi_devintf,ipmi_si selinux=0
arm64.mpam nokaslr "dyndbg=file mpam_resctrl.c +p"
[ 0.000000] Unknown kernel command line parameters "apparmor=0
selinux=0 dyndbg=file mpam_resctrl.c +p", will be passed to user space.
[ 17.707273] mpam_msc mpam_msc.254: Merging features for
vmsc:0xffff08009b3aaba0 |= ris:0xffff0800a1d52c98
[ 17.707277] mpam_msc mpam_msc.252: Merging features for
vmsc:0xffff08009b3aac20 |= ris:0xffff0800a1d53098
[ 17.707279] mpam_msc mpam_msc.250: Merging features for
vmsc:0xffff08009b3aaca0 |= ris:0xffff0800a1d53498
[ 17.707280] mpam_msc mpam_msc.248: Merging features for
vmsc:0xffff08009b3aad20 |= ris:0xffff0800a1d53898
[ 17.707281] mpam_msc mpam_msc.246: Merging features for
vmsc:0xffff08009b3aada0 |= ris:0xffff0800a1d53c98
[ 17.707282] mpam_msc mpam_msc.244: Merging features for
vmsc:0xffff08009b3aae20 |= ris:0xffff0800a1d3d098
[ 17.707283] mpam_msc mpam_msc.242: Merging features for
vmsc:0xffff08009b3aaea0 |= ris:0xffff0800a1d3d498
[ 17.707284] mpam_msc mpam_msc.240: Merging features for
vmsc:0xffff08009b3aaf20 |= ris:0xffff0800a1d3d898
[ 17.707285] mpam_msc mpam_msc.238: Merging features for
vmsc:0xffff08009b3aafa0 |= ris:0xffff0800a1d3dc98
[ 17.707286] mpam_msc mpam_msc.236: Merging features for
vmsc:0xffff08009b3ab020 |= ris:0xffff0800a1d3e098
[ 17.707287] mpam_msc mpam_msc.234: Merging features for
vmsc:0xffff08009b3ab0a0 |= ris:0xffff0800a1d3e498
[ 17.707287] mpam_msc mpam_msc.232: Merging features for
vmsc:0xffff08009b3ab120 |= ris:0xffff0800a1d3e898
[ 17.707288] mpam_msc mpam_msc.230: Merging features for
vmsc:0xffff08009b3ab1a0 |= ris:0xffff0800a1d3ec98
[ 17.707289] mpam_msc mpam_msc.228: Merging features for
vmsc:0xffff08009b3ab220 |= ris:0xffff0800a1d3f098
[ 17.707290] mpam_msc mpam_msc.226: Merging features for
vmsc:0xffff08009b3ab2a0 |= ris:0xffff0800a1d3f498
[ 17.707291] mpam_msc mpam_msc.224: Merging features for
vmsc:0xffff08009b3ab320 |= ris:0xffff0800a1d3f898
[ 17.707292] mpam_msc mpam_msc.222: Merging features for
vmsc:0xffff08009b3ab3a0 |= ris:0xffff0800a1d3fc98
[ 17.707293] mpam_msc mpam_msc.220: Merging features for
vmsc:0xffff08009b3ab420 |= ris:0xffff0800a1d50098
[ 17.707294] mpam_msc mpam_msc.218: Merging features for
vmsc:0xffff08009b3ab4a0 |= ris:0xffff0800a1d50498
[ 17.707294] mpam_msc mpam_msc.216: Merging features for
vmsc:0xffff08009b3ab520 |= ris:0xffff0800a1d39898
[ 17.707295] mpam_msc mpam_msc.214: Merging features for
vmsc:0xffff08009b3ab5a0 |= ris:0xffff0800a1d39c98
[ 17.707296] mpam_msc mpam_msc.212: Merging features for
vmsc:0xffff08009b3ab620 |= ris:0xffff0800a1d3a098
[ 17.707297] mpam_msc mpam_msc.210: Merging features for
vmsc:0xffff08009b3ab6a0 |= ris:0xffff0800a1d3a498
[ 17.707298] mpam_msc mpam_msc.208: Merging features for
vmsc:0xffff08009b3ab720 |= ris:0xffff0800a1d3a898
[ 17.707299] mpam_msc mpam_msc.206: Merging features for
vmsc:0xffff08009b3ab7a0 |= ris:0xffff0800a1d3ac98
[ 17.707300] mpam_msc mpam_msc.204: Merging features for
vmsc:0xffff08009b3ab820 |= ris:0xffff0800a1d3b098
[ 17.707301] mpam_msc mpam_msc.202: Merging features for
vmsc:0xffff08009b3ab8a0 |= ris:0xffff0800a1d3b498
[ 17.707302] mpam_msc mpam_msc.200: Merging features for
vmsc:0xffff08009b3ab920 |= ris:0xffff0800a1d3b898
[ 17.707303] mpam_msc mpam_msc.198: Merging features for
vmsc:0xffff08009b3ab9a0 |= ris:0xffff0800a1d3bc98
[ 17.707304] mpam_msc mpam_msc.196: Merging features for
vmsc:0xffff08009b3aba20 |= ris:0xffff0800a1d3c098
[ 17.707305] mpam_msc mpam_msc.194: Merging features for
vmsc:0xffff08009b3abaa0 |= ris:0xffff0800a1d3c498
[ 17.707305] mpam_msc mpam_msc.192: Merging features for
vmsc:0xffff08009b3abb20 |= ris:0xffff0800a1d3c898
[ 17.707306] mpam_msc mpam_msc.190: Merging features for
vmsc:0xffff08009b3abba0 |= ris:0xffff0800a1d3cc98
[ 17.707307] mpam_msc mpam_msc.188: Merging features for
vmsc:0xffff08009b3abc20 |= ris:0xffff0800a1d2e098
[ 17.707308] mpam_msc mpam_msc.186: Merging features for
vmsc:0xffff08009b3abca0 |= ris:0xffff0800a1d2e498
[ 17.707309] mpam_msc mpam_msc.184: Merging features for
vmsc:0xffff08009b3abd20 |= ris:0xffff0800a1d2e898
[ 17.707310] mpam_msc mpam_msc.182: Merging features for
vmsc:0xffff08009b3abda0 |= ris:0xffff0800a1d2ec98
[ 17.707311] mpam_msc mpam_msc.180: Merging features for
vmsc:0xffff08009b3abe20 |= ris:0xffff0800a1d2f098
[ 17.707312] mpam_msc mpam_msc.178: Merging features for
vmsc:0xffff08009b3abea0 |= ris:0xffff0800a1d2f498
[ 17.707313] mpam_msc mpam_msc.176: Merging features for
vmsc:0xffff08009b3abf20 |= ris:0xffff0800a1d2f898
[ 17.707314] mpam_msc mpam_msc.174: Merging features for
vmsc:0xffff08009b3abfa0 |= ris:0xffff0800a1d2fc98
[ 17.707315] mpam_msc mpam_msc.172: Merging features for
vmsc:0xffff08009b318420 |= ris:0xffff0800a1d38098
[ 17.707316] mpam_msc mpam_msc.170: Merging features for
vmsc:0xffff08009b3184a0 |= ris:0xffff0800a1d38498
[ 17.707317] mpam_msc mpam_msc.168: Merging features for
vmsc:0xffff08009b318520 |= ris:0xffff0800a1d38898
[ 17.707318] mpam_msc mpam_msc.166: Merging features for
vmsc:0xffff08009b3185a0 |= ris:0xffff0800a1d38c98
[ 17.707318] mpam_msc mpam_msc.164: Merging features for
vmsc:0xffff08009b318620 |= ris:0xffff0800a1d39098
[ 17.707319] mpam_msc mpam_msc.162: Merging features for
vmsc:0xffff08009b3186a0 |= ris:0xffff0800a1d39498
[ 17.707320] mpam_msc mpam_msc.160: Merging features for
vmsc:0xffff08009b318720 |= ris:0xffff0800a1d2a898
[ 17.707321] mpam_msc mpam_msc.158: Merging features for
vmsc:0xffff08009b3187a0 |= ris:0xffff0800a1d2ac98
[ 17.707322] mpam_msc mpam_msc.156: Merging features for
vmsc:0xffff08009b318820 |= ris:0xffff0800a1d2b098
[ 17.707323] mpam_msc mpam_msc.154: Merging features for
vmsc:0xffff08009b3188a0 |= ris:0xffff0800a1d2b498
[ 17.707324] mpam_msc mpam_msc.152: Merging features for
vmsc:0xffff08009b318920 |= ris:0xffff0800a1d2b898
[ 17.707325] mpam_msc mpam_msc.150: Merging features for
vmsc:0xffff08009b3189a0 |= ris:0xffff0800a1d2bc98
[ 17.707326] mpam_msc mpam_msc.148: Merging features for
vmsc:0xffff08009b318a20 |= ris:0xffff0800a1d2c098
[ 17.707327] mpam_msc mpam_msc.146: Merging features for
vmsc:0xffff08009b318aa0 |= ris:0xffff0800a1d2c498
[ 17.707327] mpam_msc mpam_msc.144: Merging features for
vmsc:0xffff08009b318b20 |= ris:0xffff0800a1d2c898
[ 17.707328] mpam_msc mpam_msc.142: Merging features for
vmsc:0xffff08009b318ba0 |= ris:0xffff0800a1d2cc98
[ 17.707329] mpam_msc mpam_msc.140: Merging features for
vmsc:0xffff08009b318c20 |= ris:0xffff0800a1d2d098
[ 17.707330] mpam_msc mpam_msc.138: Merging features for
vmsc:0xffff08009b318ca0 |= ris:0xffff0800a1d2d498
[ 17.707331] mpam_msc mpam_msc.136: Merging features for
vmsc:0xffff08009b318d20 |= ris:0xffff0800a1d2d898
[ 17.707332] mpam_msc mpam_msc.134: Merging features for
vmsc:0xffff08009b318da0 |= ris:0xffff0800a1d2dc98
[ 17.707332] mpam_msc mpam_msc.132: Merging features for
vmsc:0xffff08009b318e20 |= ris:0xffff0800a1cd7098
[ 17.707333] mpam_msc mpam_msc.130: Merging features for
vmsc:0xffff08009b318ea0 |= ris:0xffff0800a1cd7498
[ 17.707334] mpam_msc mpam_msc.128: Merging features for
vmsc:0xffff08009b318f20 |= ris:0xffff0800a1cd7898
[ 17.707335] mpam_msc mpam_msc.126: Merging features for
vmsc:0xffff08009b318fa0 |= ris:0xffff0800a1cd7c98
[ 17.707336] mpam_msc mpam_msc.124: Merging features for
vmsc:0xffff08009b319020 |= ris:0xffff0800a1d28098
[ 17.707337] mpam_msc mpam_msc.122: Merging features for
vmsc:0xffff08009b3190a0 |= ris:0xffff0800a1d28498
[ 17.707338] mpam_msc mpam_msc.120: Merging features for
vmsc:0xffff08009b319120 |= ris:0xffff0800a1d28898
[ 17.707339] mpam_msc mpam_msc.118: Merging features for
vmsc:0xffff08009b319220 |= ris:0xffff0800a1d28c98
[ 17.707340] mpam_msc mpam_msc.116: Merging features for
vmsc:0xffff08009b3192a0 |= ris:0xffff0800a1d29098
[ 17.707340] mpam_msc mpam_msc.114: Merging features for
vmsc:0xffff08009b319320 |= ris:0xffff0800a1d29498
[ 17.707341] mpam_msc mpam_msc.112: Merging features for
vmsc:0xffff08009b3193a0 |= ris:0xffff0800a1d29898
[ 17.707342] mpam_msc mpam_msc.110: Merging features for
vmsc:0xffff08009b319420 |= ris:0xffff0800a1d29c98
[ 17.707343] mpam_msc mpam_msc.108: Merging features for
vmsc:0xffff08009b3194a0 |= ris:0xffff0800a1d2a098
[ 17.707344] mpam_msc mpam_msc.106: Merging features for
vmsc:0xffff08009b319520 |= ris:0xffff0800a1d2a498
[ 17.707345] mpam_msc mpam_msc.104: Merging features for
vmsc:0xffff08009b3195a0 |= ris:0xffff0800a1cd3898
[ 17.707346] mpam_msc mpam_msc.102: Merging features for
vmsc:0xffff08009b319620 |= ris:0xffff0800a1cd3c98
[ 17.707346] mpam_msc mpam_msc.100: Merging features for
vmsc:0xffff08009b3196a0 |= ris:0xffff0800a1cd4098
[ 17.707347] mpam_msc mpam_msc.98: Merging features for
vmsc:0xffff08009b319720 |= ris:0xffff0800a1cd4498
[ 17.707348] mpam_msc mpam_msc.96: Merging features for
vmsc:0xffff08009b3197a0 |= ris:0xffff0800a1cd4898
[ 17.707349] mpam_msc mpam_msc.94: Merging features for
vmsc:0xffff08009b319820 |= ris:0xffff0800a1cd4c98
[ 17.707350] mpam_msc mpam_msc.92: Merging features for
vmsc:0xffff08009b3198a0 |= ris:0xffff0800a1cd5098
[ 17.707351] mpam_msc mpam_msc.90: Merging features for
vmsc:0xffff08009b319920 |= ris:0xffff0800a1cd5498
[ 17.707352] mpam_msc mpam_msc.88: Merging features for
vmsc:0xffff08009b3199a0 |= ris:0xffff0800a1cd5898
[ 17.707353] mpam_msc mpam_msc.86: Merging features for
vmsc:0xffff08009b319a20 |= ris:0xffff0800a1cd5c98
[ 17.707354] mpam_msc mpam_msc.84: Merging features for
vmsc:0xffff08009b319aa0 |= ris:0xffff0800a1cd6098
[ 17.707354] mpam_msc mpam_msc.82: Merging features for
vmsc:0xffff08009b319b20 |= ris:0xffff0800a1cd6498
[ 17.707355] mpam_msc mpam_msc.80: Merging features for
vmsc:0xffff08009b319ba0 |= ris:0xffff0800a1cd6898
[ 17.707356] mpam_msc mpam_msc.78: Merging features for
vmsc:0xffff08009b319c20 |= ris:0xffff0800a1cd6c98
[ 17.707357] mpam_msc mpam_msc.76: Merging features for
vmsc:0xffff08009b319ca0 |= ris:0xffff0800a1cd0098
[ 17.707358] mpam_msc mpam_msc.74: Merging features for
vmsc:0xffff08009b319d20 |= ris:0xffff0800a1cd0498
[ 17.707359] mpam_msc mpam_msc.72: Merging features for
vmsc:0xffff08009b319da0 |= ris:0xffff0800a1cd0898
[ 17.707359] mpam_msc mpam_msc.70: Merging features for
vmsc:0xffff08009b319e20 |= ris:0xffff0800a1cd0c98
[ 17.707361] mpam_msc mpam_msc.68: Merging features for
vmsc:0xffff08009b319ea0 |= ris:0xffff0800a1cd1098
[ 17.707361] mpam_msc mpam_msc.66: Merging features for
vmsc:0xffff08009b319f20 |= ris:0xffff0800a1cd1498
[ 17.707362] mpam_msc mpam_msc.64: Merging features for
vmsc:0xffff08009b319fa0 |= ris:0xffff0800a1cd1898
[ 17.707363] mpam_msc mpam_msc.254: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aaba0
[ 17.707364] mpam_msc mpam_msc.252: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aac20
[ 17.707365] mpam_msc mpam_msc.250: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aaca0
[ 17.707366] mpam_msc mpam_msc.248: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aad20
[ 17.707367] mpam_msc mpam_msc.246: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aada0
[ 17.707367] mpam_msc mpam_msc.244: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aae20
[ 17.707368] mpam_msc mpam_msc.242: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aaea0
[ 17.707369] mpam_msc mpam_msc.240: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aaf20
[ 17.707370] mpam_msc mpam_msc.238: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aafa0
[ 17.707370] mpam_msc mpam_msc.236: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab020
[ 17.707371] mpam_msc mpam_msc.234: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab0a0
[ 17.707372] mpam_msc mpam_msc.232: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab120
[ 17.707373] mpam_msc mpam_msc.230: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab1a0
[ 17.707373] mpam_msc mpam_msc.228: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab220
[ 17.707374] mpam_msc mpam_msc.226: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab2a0
[ 17.707375] mpam_msc mpam_msc.224: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab320
[ 17.707376] mpam_msc mpam_msc.222: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab3a0
[ 17.707376] mpam_msc mpam_msc.220: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab420
[ 17.707377] mpam_msc mpam_msc.218: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab4a0
[ 17.707378] mpam_msc mpam_msc.216: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab520
[ 17.707379] mpam_msc mpam_msc.214: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab5a0
[ 17.707379] mpam_msc mpam_msc.212: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab620
[ 17.707380] mpam_msc mpam_msc.210: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab6a0
[ 17.707381] mpam_msc mpam_msc.208: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab720
[ 17.707382] mpam_msc mpam_msc.206: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab7a0
[ 17.707384] mpam_msc mpam_msc.204: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab820
[ 17.707385] mpam_msc mpam_msc.202: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab8a0
[ 17.707385] mpam_msc mpam_msc.200: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab920
[ 17.707386] mpam_msc mpam_msc.198: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3ab9a0
[ 17.707387] mpam_msc mpam_msc.196: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aba20
[ 17.707388] mpam_msc mpam_msc.194: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abaa0
[ 17.707388] mpam_msc mpam_msc.192: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abb20
[ 17.707389] mpam_msc mpam_msc.190: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abba0
[ 17.707390] mpam_msc mpam_msc.188: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abc20
[ 17.707391] mpam_msc mpam_msc.186: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abca0
[ 17.707391] mpam_msc mpam_msc.184: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abd20
[ 17.707392] mpam_msc mpam_msc.182: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abda0
[ 17.707393] mpam_msc mpam_msc.180: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abe20
[ 17.707394] mpam_msc mpam_msc.178: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abea0
[ 17.707394] mpam_msc mpam_msc.176: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abf20
[ 17.707395] mpam_msc mpam_msc.174: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3abfa0
[ 17.707396] mpam_msc mpam_msc.172: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318420
[ 17.707397] mpam_msc mpam_msc.170: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3184a0
[ 17.707398] mpam_msc mpam_msc.168: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318520
[ 17.707398] mpam_msc mpam_msc.166: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3185a0
[ 17.707399] mpam_msc mpam_msc.164: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318620
[ 17.707400] mpam_msc mpam_msc.162: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3186a0
[ 17.707401] mpam_msc mpam_msc.160: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318720
[ 17.707401] mpam_msc mpam_msc.158: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3187a0
[ 17.707402] mpam_msc mpam_msc.156: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318820
[ 17.707403] mpam_msc mpam_msc.154: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3188a0
[ 17.707404] mpam_msc mpam_msc.152: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318920
[ 17.707404] mpam_msc mpam_msc.150: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3189a0
[ 17.707405] mpam_msc mpam_msc.148: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318a20
[ 17.707406] mpam_msc mpam_msc.146: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318aa0
[ 17.707407] mpam_msc mpam_msc.144: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318b20
[ 17.707407] mpam_msc mpam_msc.142: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318ba0
[ 17.707408] mpam_msc mpam_msc.140: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318c20
[ 17.707409] mpam_msc mpam_msc.138: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318ca0
[ 17.707410] mpam_msc mpam_msc.136: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318d20
[ 17.707410] mpam_msc mpam_msc.134: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318da0
[ 17.707411] mpam_msc mpam_msc.132: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318e20
[ 17.707412] mpam_msc mpam_msc.130: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318ea0
[ 17.707412] mpam_msc mpam_msc.128: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318f20
[ 17.707413] mpam_msc mpam_msc.126: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b318fa0
[ 17.707414] mpam_msc mpam_msc.124: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319020
[ 17.707415] mpam_msc mpam_msc.122: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3190a0
[ 17.707416] mpam_msc mpam_msc.120: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319120
[ 17.707416] mpam_msc mpam_msc.118: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319220
[ 17.707417] mpam_msc mpam_msc.116: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3192a0
[ 17.707418] mpam_msc mpam_msc.114: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319320
[ 17.707418] mpam_msc mpam_msc.112: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3193a0
[ 17.707419] mpam_msc mpam_msc.110: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319420
[ 17.707420] mpam_msc mpam_msc.108: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3194a0
[ 17.707421] mpam_msc mpam_msc.106: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319520
[ 17.707422] mpam_msc mpam_msc.104: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3195a0
[ 17.707422] mpam_msc mpam_msc.102: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319620
[ 17.707423] mpam_msc mpam_msc.100: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3196a0
[ 17.707424] mpam_msc mpam_msc.98: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319720
[ 17.707424] mpam_msc mpam_msc.96: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3197a0
[ 17.707425] mpam_msc mpam_msc.94: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319820
[ 17.707426] mpam_msc mpam_msc.92: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3198a0
[ 17.707427] mpam_msc mpam_msc.90: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319920
[ 17.707427] mpam_msc mpam_msc.88: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b3199a0
[ 17.707428] mpam_msc mpam_msc.86: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319a20
[ 17.707429] mpam_msc mpam_msc.84: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319aa0
[ 17.707430] mpam_msc mpam_msc.82: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319b20
[ 17.707430] mpam_msc mpam_msc.80: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319ba0
[ 17.707431] mpam_msc mpam_msc.78: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319c20
[ 17.707432] mpam_msc mpam_msc.76: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319ca0
[ 17.707433] mpam_msc mpam_msc.74: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319d20
[ 17.707433] mpam_msc mpam_msc.72: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319da0
[ 17.707434] mpam_msc mpam_msc.70: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319e20
[ 17.707435] mpam_msc mpam_msc.68: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319ea0
[ 17.707436] mpam_msc mpam_msc.66: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319f20
[ 17.707436] mpam_msc mpam_msc.64: Merging features for
class:0xffff08009b233e50 &= vmsc:0xffff08009b319fa0
[ 17.707437] mpam_msc mpam_msc.62: Merging features for
vmsc:0xffff08009b3aa020 |= ris:0xffff0800a1cd1c98
[ 17.707438] mpam_msc mpam_msc.60: Merging features for
vmsc:0xffff08009b3aa0a0 |= ris:0xffff0800a1cd2098
[ 17.707439] mpam_msc mpam_msc.58: Merging features for
vmsc:0xffff08009b3aa120 |= ris:0xffff0800a1cd2498
[ 17.707440] mpam_msc mpam_msc.56: Merging features for
vmsc:0xffff08009b3aa1a0 |= ris:0xffff0800a1cd2898
[ 17.707441] mpam_msc mpam_msc.54: Merging features for
vmsc:0xffff08009aeb6620 |= ris:0xffff0800a1cd2c98
[ 17.707442] mpam_msc mpam_msc.52: Merging features for
vmsc:0xffff08009aeb66a0 |= ris:0xffff0800a1cd3098
[ 17.707443] mpam_msc mpam_msc.50: Merging features for
vmsc:0xffff08009aeb6720 |= ris:0xffff0800a1cd3498
[ 17.707444] mpam_msc mpam_msc.48: Merging features for
vmsc:0xffff08009aeb67a0 |= ris:0xffff0800a1bb4898
[ 17.707444] mpam_msc mpam_msc.46: Merging features for
vmsc:0xffff08009aeb6820 |= ris:0xffff0800a1bb4c98
[ 17.707445] mpam_msc mpam_msc.44: Merging features for
vmsc:0xffff08009aeb68a0 |= ris:0xffff0800a1bb5098
[ 17.707446] mpam_msc mpam_msc.42: Merging features for
vmsc:0xffff08009aeb6920 |= ris:0xffff0800a1bb5498
[ 17.707447] mpam_msc mpam_msc.40: Merging features for
vmsc:0xffff08009aeb69a0 |= ris:0xffff0800a1bb5898
[ 17.707448] mpam_msc mpam_msc.38: Merging features for
vmsc:0xffff08009aeb6a20 |= ris:0xffff0800a1bb5c98
[ 17.707449] mpam_msc mpam_msc.36: Merging features for
vmsc:0xffff08009aeb6aa0 |= ris:0xffff0800a1bb6098
[ 17.707449] mpam_msc mpam_msc.34: Merging features for
vmsc:0xffff08009aeb6b20 |= ris:0xffff0800a1bb6498
[ 17.707450] mpam_msc mpam_msc.32: Merging features for
vmsc:0xffff08009aeb6ba0 |= ris:0xffff0800a1bb6898
[ 17.707451] mpam_msc mpam_msc.30: Merging features for
vmsc:0xffff08009aeb6c20 |= ris:0xffff0800a1bb6c98
[ 17.707452] mpam_msc mpam_msc.28: Merging features for
vmsc:0xffff08009aeb6ca0 |= ris:0xffff0800a1bb7098
[ 17.707453] mpam_msc mpam_msc.26: Merging features for
vmsc:0xffff08009aeb6d20 |= ris:0xffff0800a1bb7498
[ 17.707454] mpam_msc mpam_msc.24: Merging features for
vmsc:0xffff08009aeb6da0 |= ris:0xffff0800a1bb7898
[ 17.707454] mpam_msc mpam_msc.22: Merging features for
vmsc:0xffff08009aeb6e20 |= ris:0xffff0800a1bb7c98
[ 17.707455] mpam_msc mpam_msc.20: Merging features for
vmsc:0xffff08009aeb6ea0 |= ris:0xffff0800a1bb1098
[ 17.707456] mpam_msc mpam_msc.18: Merging features for
vmsc:0xffff08009aeb6f20 |= ris:0xffff0800a1bb1498
[ 17.707457] mpam_msc mpam_msc.16: Merging features for
vmsc:0xffff08009aeb6fa0 |= ris:0xffff0800a1bb1898
[ 17.707457] mpam_msc mpam_msc.62: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009b3aa020
[ 17.707458] mpam_msc mpam_msc.60: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009b3aa0a0
[ 17.707459] mpam_msc mpam_msc.58: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009b3aa120
[ 17.707460] mpam_msc mpam_msc.56: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009b3aa1a0
[ 17.707461] mpam_msc mpam_msc.54: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6620
[ 17.707461] mpam_msc mpam_msc.52: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb66a0
[ 17.707462] mpam_msc mpam_msc.50: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6720
[ 17.707463] mpam_msc mpam_msc.48: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb67a0
[ 17.707463] mpam_msc mpam_msc.46: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6820
[ 17.707464] mpam_msc mpam_msc.44: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb68a0
[ 17.707465] mpam_msc mpam_msc.42: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6920
[ 17.707466] mpam_msc mpam_msc.40: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb69a0
[ 17.707466] mpam_msc mpam_msc.38: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6a20
[ 17.707467] mpam_msc mpam_msc.36: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6aa0
[ 17.707468] mpam_msc mpam_msc.34: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6b20
[ 17.707469] mpam_msc mpam_msc.32: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6ba0
[ 17.707469] mpam_msc mpam_msc.30: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6c20
[ 17.707470] mpam_msc mpam_msc.28: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6ca0
[ 17.707471] mpam_msc mpam_msc.26: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6d20
[ 17.707472] mpam_msc mpam_msc.24: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6da0
[ 17.707472] mpam_msc mpam_msc.22: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6e20
[ 17.707473] mpam_msc mpam_msc.20: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6ea0
[ 17.707474] mpam_msc mpam_msc.18: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6f20
[ 17.707475] mpam_msc mpam_msc.16: Merging features for
class:0xffff08009b230050 &= vmsc:0xffff08009aeb6fa0
[ 17.707475] mpam_msc mpam_msc.14: Merging features for
vmsc:0xffff08009aeb7020 |= ris:0xffff0800a1bb1c98
[ 17.707476] mpam_msc mpam_msc.12: Merging features for
vmsc:0xffff08009aeb70a0 |= ris:0xffff0800a1bb2098
[ 17.707477] mpam_msc mpam_msc.10: Merging features for
vmsc:0xffff08009aeb7120 |= ris:0xffff0800a1bb2498
[ 17.707478] mpam_msc mpam_msc.8: Merging features for
vmsc:0xffff08009aeb71a0 |= ris:0xffff0800a1bb2898
[ 17.707479] mpam_msc mpam_msc.6: Merging features for
vmsc:0xffff08009aeb7220 |= ris:0xffff0800a1bb2c98
[ 17.707480] mpam_msc mpam_msc.4: Merging features for
vmsc:0xffff08009aeb72a0 |= ris:0xffff0800a1bb3098
[ 17.707480] mpam_msc mpam_msc.2: Merging features for
vmsc:0xffff08009aeb7320 |= ris:0xffff0800a1bb3498
[ 17.707481] mpam_msc mpam_msc.0: Merging features for
vmsc:0xffff08009aeb73a0 |= ris:0xffff0800a1bb3898
[ 17.707482] mpam_msc mpam_msc.14: Merging features for
class:0xffff08009b231150 &= vmsc:0xffff08009aeb7020
[ 17.707483] mpam_msc mpam_msc.12: Merging features for
class:0xffff08009b231150 &= vmsc:0xffff08009aeb70a0
[ 17.707483] mpam_msc mpam_msc.10: Merging features for
class:0xffff08009b231150 &= vmsc:0xffff08009aeb7120
[ 17.707484] mpam_msc mpam_msc.8: Merging features for
class:0xffff08009b231150 &= vmsc:0xffff08009aeb71a0
[ 17.707485] mpam_msc mpam_msc.6: Merging features for
class:0xffff08009b231150 &= vmsc:0xffff08009aeb7220
[ 17.707485] mpam_msc mpam_msc.4: Merging features for
class:0xffff08009b231150 &= vmsc:0xffff08009aeb72a0
[ 17.707486] mpam_msc mpam_msc.2: Merging features for
class:0xffff08009b231150 &= vmsc:0xffff08009aeb7320
[ 17.707487] mpam_msc mpam_msc.0: Merging features for
class:0xffff08009b231150 &= vmsc:0xffff08009aeb73a0
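The pattern above has two phases: per-RIS features are OR'd into each vmsc, and each vmsc is then AND'ed into its class. When skimming a long log like this, the two phases can be tallied with a couple of greps. This is a sketch only; the inline sample stands in for real `dmesg` output on a system booted with the debug prints enabled:

```shell
# Tally the two merge phases from mpam_msc debug output.
# 'sample' below stands in for `dmesg` output on a real system.
sample='vmsc:0xffff08009b3aaea0 |= ris:0xffff0800a1d3d498
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aac20
class:0xffff08009b233e50 &= vmsc:0xffff08009b3aaca0'
# Phase 1: RIS features OR'd into a vmsc
printf 'ris -> vmsc merges:   %s\n' "$(printf '%s\n' "$sample" | grep -c '|= ris:')"
# Phase 2: vmsc features AND'ed into the class
printf 'vmsc -> class merges: %s\n' "$(printf '%s\n' "$sample" | grep -c '&= vmsc:')"
```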
[ 22.876035] mpam:mpam_resctrl_pick_caches: class 255 is not a cache
[ 22.876039] mpam:mpam_resctrl_pick_mba: class 2 is a cache but not the L3
[ 22.876040] mpam:mpam_resctrl_pick_mba: class 3 has no bandwidth control
[ 22.878500] mpam:topology_matches_l3: class 255 component 0 has
Mismatched CPU mask with L3 equivalent
[ 22.878503] mpam:mpam_resctrl_pick_mba: class 255 topology doesn't
match L3
[ 22.878505] mpam:mpam_resctrl_pick_counters: class 2 is a cache but
not the L3
[ 22.878505] mpam:mpam_resctrl_pick_counters: class 3 has usable CSU
[ 22.878506] mpam:counter_update_class: Updating event 1 to use class 3
[ 22.878508] mpam:class_has_usable_mbwu: monitors usable in
free-running mode
[ 22.880995] mpam:topology_matches_l3: class 255 component 0 has
Mismatched CPU mask with L3 equivalent
[ 22.900111] WARNING: drivers/resctrl/mpam_resctrl.c:1495 at
mpam_resctrl_domain_insert+0x74/0x80, CPU#2: cpuhp/2/25
[ 29.755844] pc : mpam_resctrl_domain_insert+0x74/0x80
[ 29.760886] lr : mpam_resctrl_domain_insert+0x34/0x80
[ 29.842897] mpam_resctrl_domain_insert+0x74/0x80 (P)
[ 29.847938] mpam_resctrl_online_cpu+0x2b4/0x428
[ 29.852544] mpam_cpu_online+0x274/0x298
[ 29.941348] MPAM enabled with 32 PARTIDs and 4 PMGs
[ 29.977840] dyndbg=file mpam_resctrl.c +p
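For anyone reproducing this, the `dyndbg=file mpam_resctrl.c +p` output logged above can also be enabled after boot through the dynamic debug control file. A sketch, assuming CONFIG_DYNAMIC_DEBUG=y and debugfs mounted at its usual path:

```shell
# Enable the mpam_resctrl.c debug prints at runtime via dynamic debug.
# Falls back to suggesting the boot-time parameter when the control
# file is absent or read-only.
CTRL=/sys/kernel/debug/dynamic_debug/control
if [ -w "$CTRL" ]; then
    echo 'file mpam_resctrl.c +p' > "$CTRL"
    echo 'enabled debug prints for mpam_resctrl.c'
else
    echo 'control file not writable; boot with dyndbg="file mpam_resctrl.c +p"'
fi
```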
Apart from the issue previously raised in patch 26, everything else works
as expected. Once that patch 26 issue is addressed, please apply my
Reviewed-by tag to this patch series.
+ Reviewed-by: Zeng Heng <zengheng4@huawei.com>
Thanks,
Zeng Heng
* Re: [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code
2026-02-24 17:56 [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code Ben Horgan
` (42 preceding siblings ...)
2026-02-26 7:34 ` Zeng Heng
@ 2026-03-03 20:18 ` Punit Agrawal
2026-03-04 9:42 ` Ben Horgan
43 siblings, 1 reply; 75+ messages in thread
From: Punit Agrawal @ 2026-03-03 20:18 UTC (permalink / raw)
To: Ben Horgan
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman,
punit.agrawal, quic_jiles, reinette.chatre, rohit.mathew, scott,
sdonthineni, tan.shaopeng, xhao, catalin.marinas, will, corbet,
maz, oupton, joey.gouly, suzuki.poulose, kvmarm, zengheng4,
linux-doc
Hi Ben,
Ben Horgan <ben.horgan@arm.com> writes:
> The main change in this version of the mpam missing pieces series is to
> update the cdp emulation to match the resctrl interface. L2 and L3
> resources can now enable cdp separately. Cdp can't be hidden correctly for
> memory bandwidth allocation, as max per partid can't be emulated with more
> partids, and so we hide this completely when cdp is enabled. There is a little
> restructuring and a few smaller changes.
>
> Changelogs in patches
>
> It would be great to get this series merged this cycle. For that we'll need
> more testing and reviewing. Thanks!
>
> From James' cover letter:
>
> This is the missing piece to make MPAM usable via resctrl in user-space. This has
> shed its debugfs code and the read/write 'event configuration' for the monitors
> to make the series smaller.
>
> This adds the arch code and KVM support first. I anticipate the whole thing
> going via arm64, but if it goes via tip instead, an immutable branch with those
> patches should be easy to do.
>
> Generally the resctrl glue code works by picking what MPAM features it can expose
> from the MPAM driver, then configuring the structs that back the resctrl helpers.
> If your platform is sufficiently Xeon shaped, you should be able to get L2/L3 CPOR
> bitmaps exposed via resctrl. CSU counters work if they are on/after the L3. MBWU
> counters are considerably more hairy, and depend on heuristics around the topology,
> and a bunch of stuff trying to emulate ABMC.
> If it didn't pick what you wanted it to, please share the debug messages produced
> when enabling dynamic debug and booting with:
> | dyndbg="file mpam_resctrl.c +pl"
>
> I've not found a platform that can test all the behaviours around the monitors,
> so this is where I'd expect the most bugs.
>
> The MPAM spec that describes all the system and MMIO registers can be found here:
> https://developer.arm.com/documentation/ddi0598/db/?lang=en
> (Ignore the 'RETIRED' warning - that is just Arm moving the documentation around.
> This document has the best overview)
>
>
> Based on v7.0-rc1
>
> The series can be retrieved from:
> https://gitlab.arm.com/linux-arm/linux-bh.git mpam_resctrl_glue_v5
I booted with the series applied on an MPAM-capable platform. The driver
is able to probe the L2-attached MSCs.
In terms of features, bitmap-based cache portion partitioning works
as expected. The platform also supports additional controls (cache
capacity and priority partitioning) and monitors (memory bandwidth and
cache storage). The ones supported by the MPAM driver probe OK but don't
seem to be exposed, e.g.:
mpam:mpam_resctrl_pick_counters: class 2 is a cache but not the L3
It looks like some of it is due to an impedance mismatch with resctrl
expectations, but hopefully we can get to it with the basics in place.
Feel free to add
Tested-by: Punit Agrawal <punit.agrawal@oss.qualcomm.com>
Thanks,
Punit
* Re: [PATCH v5 00/41] arm_mpam: Add KVM/arm64 and resctrl glue code
2026-03-03 20:18 ` Punit Agrawal
@ 2026-03-04 9:42 ` Ben Horgan
0 siblings, 0 replies; 75+ messages in thread
From: Ben Horgan @ 2026-03-04 9:42 UTC (permalink / raw)
To: Punit Agrawal
Cc: amitsinght, baisheng.gao, baolin.wang, carl, dave.martin, david,
dfustini, fenghuay, gshan, james.morse, jonathan.cameron, kobak,
lcherian, linux-arm-kernel, linux-kernel, peternewman, quic_jiles,
reinette.chatre, rohit.mathew, scott, sdonthineni, tan.shaopeng,
xhao, catalin.marinas, will, corbet, maz, oupton, joey.gouly,
suzuki.poulose, kvmarm, zengheng4, linux-doc
Hi Punit,
On 3/3/26 20:18, Punit Agrawal wrote:
> Hi Ben,
>
> Ben Horgan <ben.horgan@arm.com> writes:
>
>> The main change in this version of the mpam missing pieces series is to
>> update the cdp emulation to match the resctrl interface. L2 and L3
>> resources can now enable cdp separately. Cdp can't be hidden correctly for
>> memory bandwidth allocation, as max per partid can't be emulated with more
>> partids, and so we hide this completely when cdp is enabled. There is a little
>> restructuring and a few smaller changes.
>>
>> Changelogs in patches
>>
>> It would be great to get this series merged this cycle. For that we'll need
>> more testing and reviewing. Thanks!
>>
>> From James' cover letter:
>>
>> This is the missing piece to make MPAM usable via resctrl in user-space. This has
>> shed its debugfs code and the read/write 'event configuration' for the monitors
>> to make the series smaller.
>>
>> This adds the arch code and KVM support first. I anticipate the whole thing
>> going via arm64, but if it goes via tip instead, an immutable branch with those
>> patches should be easy to do.
>>
>> Generally the resctrl glue code works by picking what MPAM features it can
>> expose from the MPAM driver, then configuring the structs that back the
>> resctrl helpers. If your platform is sufficiently Xeon shaped, you should be
>> able to get L2/L3 CPOR bitmaps exposed via resctrl. CSU counters work if they
>> are on/after the L3. MBWU counters are considerably more hairy, and depend on
>> heuristics around the topology, and a bunch of stuff trying to emulate ABMC.
>> If it didn't pick what you wanted it to, please share the debug messages produced
>> when enabling dynamic debug and booting with:
>> | dyndbg="file mpam_resctrl.c +pl"
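(A runtime alternative for anyone testing this: assuming CONFIG_DYNAMIC_DEBUG=y
and debugfs mounted in the usual place, the same messages can be enabled
without rebooting by writing the query to the dynamic debug control file.
A sketch, with the privileged write left commented out:)

```shell
# Enable the mpam_resctrl.c debug output at runtime rather than at boot.
# Assumes CONFIG_DYNAMIC_DEBUG=y and debugfs mounted at /sys/kernel/debug.
QUERY='file mpam_resctrl.c +pl'

# Same string as the dyndbg= boot parameter above:
echo "$QUERY"

# The actual enable needs root; uncomment on the target machine:
# echo "$QUERY" > /sys/kernel/debug/dynamic_debug/control
```

(The '+pl' flags turn on the pr_debug() call sites in that file and prefix
each message with its line number, so the output can be matched back to the
probe logic.)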
>>
>> I've not found a platform that can test all the behaviours around the monitors,
>> so this is where I'd expect the most bugs.
>>
>> The MPAM spec that describes all the system and MMIO registers can be found here:
>> https://developer.arm.com/documentation/ddi0598/db/?lang=en
>> (Ignore the 'RETIRED' warning - that is just Arm moving the documentation
>> around. This document has the best overview.)
>>
>>
>> Based on v7.0-rc1
>>
>> The series can be retrieved from:
>> https://gitlab.arm.com/linux-arm/linux-bh.git mpam_resctrl_glue_v5
>
> I booted with the series applied on an MPAM capable platform. The driver
> is able to probe the L2 attached MSCs.
>
> In terms of features, bit-mapped based cache portion partitioning works
> as expected. The platform also supports additional controls (cache
> capacity and priority partitioning) and monitors (memory bandwidth and
> cache storage). The ones supported in MPAM driver probe OK but don't
> seem to be exposed. E.g.,
>
> mpam:mpam_resctrl_pick_counters: class 2 is a cache but not the L3
>
>
> It looks like some of it is due to an impedance mismatch with resctrl
Yes, what you describe is expected behaviour. There is no support yet
for cache capacity (CMAX) or bandwidth priority partitioning, and
monitors are only exposed on the L3.
> expectations but hopefully we can get to it with the basics in-place.
I hope so. CMAX and bandwidth priority partitioning should be easy to
add once there is a generic way of adding new schemata. There is a
plan/discussion here [1], and I don't expect adding monitoring on the
L2 to be hard.
[1] https://lore.kernel.org/lkml/aPtfMFfLV1l%2FRB0L@e133380.arm.com/
>
> Feel free to add
>
> Tested-by: Punit Agrawal <punit.agrawal@oss.qualcomm.com>
Thanks for testing!
>
> Thanks,
> Punit
>
Thanks,
Ben
^ permalink raw reply [flat|nested] 75+ messages in thread