public inbox for linux-arm-kernel@lists.infradead.org
* [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995)
@ 2026-03-23 16:24 Catalin Marinas
  2026-03-23 16:24 ` [PATCH v3 1/5] arm64: tlb: Introduce __tlbi_sync_s1ish_{kernel,batch}() for TLB maintenance Catalin Marinas
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Catalin Marinas @ 2026-03-23 16:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Will Deacon, Marc Zyngier, Oliver Upton, Lorenzo Pieralisi,
	Sudeep Holla, James Morse, Mark Rutland, Mark Brown, kvmarm

Here's version 3 of the workaround for C1-Pro erratum 4193714. Version 2 was
posted here:

https://lore.kernel.org/r/20260318191918.2653160-1-catalin.marinas@arm.com

Main changes since v2:

- Renamed the config option, cpucap and SMCCC macro to include the
  '4193714' suffix instead of 'SME_DVMSYNC'

- Pushed smp_processor_id() into sme_{set,clear}_active()

- Updated silicon-errata.rst

- Moved the CPU part definition to a separate patch

Erratum description:

Arm C1-Pro prior to r1p3 has an erratum (4193714) where a TLBI+DSB
sequence might fail to ensure the completion of all outstanding SME
(Scalable Matrix Extension) memory accesses. The DVMSync message is
acknowledged before the SME accesses have fully completed, potentially
allowing pages to be reused before all in-flight accesses are done.

The workaround consists of executing a DSB locally (via IPI)
on all affected CPUs running with SME enabled, after the TLB
invalidation. This ensures the SME accesses have completed before the
IPI is acknowledged.
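
As a rough sketch of the resulting flush sequence (simplified
pseudocode; mm_needs_sme_dvmsync() and empty_fn() are illustrative
names, see patch 4 for the actual implementation):

	/* CPU performing the unmap */
	clear_pte(ptep);
	__tlbi(vale1is, addr);		/* broadcast invalidation */
	dsb(ish);			/* DVMSync acked too early on C1-Pro */
	if (mm_needs_sme_dvmsync(mm))
		smp_call_function_many(affected_cpus, empty_fn, NULL, true);
	/*
	 * Exception entry on the affected CPUs completes their in-flight
	 * SME accesses; only now may the page be reused.
	 */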

This has been assigned CVE-2026-0995:

https://developer.arm.com/documentation/111823/latest/

Catalin Marinas (4):
  arm64: tlb: Introduce __tlbi_sync_s1ish_{kernel,batch}() for TLB
    maintenance
  arm64: tlb: Pass the corresponding mm to __tlbi_sync_s1ish()
  arm64: cputype: Add C1-Pro definitions
  arm64: errata: Work around early CME DVMSync acknowledgement

James Morse (1):
  KVM: arm64: Add SMC hook for SME dvmsync erratum

 Documentation/arch/arm64/silicon-errata.rst |  2 +
 arch/arm64/Kconfig                          | 12 +++
 arch/arm64/include/asm/cpucaps.h            |  2 +
 arch/arm64/include/asm/cputype.h            |  2 +
 arch/arm64/include/asm/fpsimd.h             | 21 +++++
 arch/arm64/include/asm/mmu.h                |  1 +
 arch/arm64/include/asm/tlbflush.h           | 50 ++++++++++--
 arch/arm64/kernel/cpu_errata.c              | 30 +++++++
 arch/arm64/kernel/entry-common.c            |  3 +
 arch/arm64/kernel/fpsimd.c                  | 89 +++++++++++++++++++++
 arch/arm64/kernel/process.c                 |  7 ++
 arch/arm64/kernel/sys_compat.c              |  2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c       | 17 ++++
 arch/arm64/tools/cpucaps                    |  1 +
 include/linux/arm-smccc.h                   |  6 ++
 15 files changed, 236 insertions(+), 9 deletions(-)



^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v3 1/5] arm64: tlb: Introduce __tlbi_sync_s1ish_{kernel,batch}() for TLB maintenance
  2026-03-23 16:24 [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Catalin Marinas
@ 2026-03-23 16:24 ` Catalin Marinas
  2026-03-23 16:24 ` [PATCH v3 2/5] arm64: tlb: Pass the corresponding mm to __tlbi_sync_s1ish() Catalin Marinas
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2026-03-23 16:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Will Deacon, Marc Zyngier, Oliver Upton, Lorenzo Pieralisi,
	Sudeep Holla, James Morse, Mark Rutland, Mark Brown, kvmarm

Add __tlbi_sync_s1ish_kernel(), similar to __tlbi_sync_s1ish(), and use
it for kernel TLB maintenance. Also use this function in
flush_tlb_all(), which is only used in relation to kernel mappings.
Subsequent patches can differentiate between workarounds that apply to
user only and those that apply to both user and kernel.

A subsequent patch will add mm_struct to __tlbi_sync_s1ish(). Since
arch_tlbbatch_flush() is not specific to an mm, add a corresponding
__tlbi_sync_s1ish_batch() helper.
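
As a sketch of where this ends up after the full series (illustrative
only; the mm argument is added in the next patch and the erratum hook
in patch 4), only the user-facing variants grow a workaround callback:

	static inline void __tlbi_sync_s1ish(struct mm_struct *mm)
	{
		dsb(ish);
		__repeat_tlbi_sync(vale1is, 0);
		sme_dvmsync(mm);	/* erratum hook, user mappings */
	}

	static inline void __tlbi_sync_s1ish_kernel(void)
	{
		dsb(ish);
		__repeat_tlbi_sync(vale1is, 0);
		/*
		 * No erratum hook: the series only applies the
		 * workaround to user TLB maintenance.
		 */
	}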

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/tlbflush.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 1416e652612b..f41eebf00990 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -191,6 +191,18 @@ static inline void __tlbi_sync_s1ish(void)
 	__repeat_tlbi_sync(vale1is, 0);
 }
 
+static inline void __tlbi_sync_s1ish_batch(void)
+{
+	dsb(ish);
+	__repeat_tlbi_sync(vale1is, 0);
+}
+
+static inline void __tlbi_sync_s1ish_kernel(void)
+{
+	dsb(ish);
+	__repeat_tlbi_sync(vale1is, 0);
+}
+
 /*
  * Complete broadcast TLB maintenance issued by hyp code which invalidates
  * stage 1 translation information in any translation regime.
@@ -299,7 +311,7 @@ static inline void flush_tlb_all(void)
 {
 	dsb(ishst);
 	__tlbi(vmalle1is);
-	__tlbi_sync_s1ish();
+	__tlbi_sync_s1ish_kernel();
 	isb();
 }
 
@@ -385,7 +397,7 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
  */
 static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
-	__tlbi_sync_s1ish();
+	__tlbi_sync_s1ish_batch();
 }
 
 /*
@@ -568,7 +580,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 	dsb(ishst);
 	__flush_tlb_range_op(vaale1is, start, pages, stride, 0,
 			     TLBI_TTL_UNKNOWN, false, lpa2_is_enabled());
-	__tlbi_sync_s1ish();
+	__tlbi_sync_s1ish_kernel();
 	isb();
 }
 
@@ -582,7 +594,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 
 	dsb(ishst);
 	__tlbi(vaae1is, addr);
-	__tlbi_sync_s1ish();
+	__tlbi_sync_s1ish_kernel();
 	isb();
 }
 


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v3 2/5] arm64: tlb: Pass the corresponding mm to __tlbi_sync_s1ish()
  2026-03-23 16:24 [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Catalin Marinas
  2026-03-23 16:24 ` [PATCH v3 1/5] arm64: tlb: Introduce __tlbi_sync_s1ish_{kernel,batch}() for TLB maintenance Catalin Marinas
@ 2026-03-23 16:24 ` Catalin Marinas
  2026-03-23 16:24 ` [PATCH v3 3/5] arm64: cputype: Add C1-Pro definitions Catalin Marinas
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2026-03-23 16:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Will Deacon, Marc Zyngier, Oliver Upton, Lorenzo Pieralisi,
	Sudeep Holla, James Morse, Mark Rutland, Mark Brown, kvmarm

The mm structure will be used for workarounds that need limiting to
specific tasks.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/tlbflush.h | 8 ++++----
 arch/arm64/kernel/sys_compat.c    | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index f41eebf00990..262791191935 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -185,7 +185,7 @@ do {										\
  * Complete broadcast TLB maintenance issued by the host which invalidates
  * stage 1 information in the host's own translation regime.
  */
-static inline void __tlbi_sync_s1ish(void)
+static inline void __tlbi_sync_s1ish(struct mm_struct *mm)
 {
 	dsb(ish);
 	__repeat_tlbi_sync(vale1is, 0);
@@ -323,7 +323,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	asid = __TLBI_VADDR(0, ASID(mm));
 	__tlbi(aside1is, asid);
 	__tlbi_user(aside1is, asid);
-	__tlbi_sync_s1ish();
+	__tlbi_sync_s1ish(mm);
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
 }
 
@@ -377,7 +377,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 				  unsigned long uaddr)
 {
 	flush_tlb_page_nosync(vma, uaddr);
-	__tlbi_sync_s1ish();
+	__tlbi_sync_s1ish(vma->vm_mm);
 }
 
 static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
@@ -532,7 +532,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 {
 	__flush_tlb_range_nosync(vma->vm_mm, start, end, stride,
 				 last_level, tlb_level);
-	__tlbi_sync_s1ish();
+	__tlbi_sync_s1ish(vma->vm_mm);
 }
 
 static inline void local_flush_tlb_contpte(struct vm_area_struct *vma,
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index b9d4998c97ef..03fde2677d5b 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -37,7 +37,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
 			 * We pick the reserved-ASID to minimise the impact.
 			 */
 			__tlbi(aside1is, __TLBI_VADDR(0, 0));
-			__tlbi_sync_s1ish();
+			__tlbi_sync_s1ish(current->mm);
 		}
 
 		ret = caches_clean_inval_user_pou(start, start + chunk);


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v3 3/5] arm64: cputype: Add C1-Pro definitions
  2026-03-23 16:24 [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Catalin Marinas
  2026-03-23 16:24 ` [PATCH v3 1/5] arm64: tlb: Introduce __tlbi_sync_s1ish_{kernel,batch}() for TLB maintenance Catalin Marinas
  2026-03-23 16:24 ` [PATCH v3 2/5] arm64: tlb: Pass the corresponding mm to __tlbi_sync_s1ish() Catalin Marinas
@ 2026-03-23 16:24 ` Catalin Marinas
  2026-03-23 16:24 ` [PATCH v3 4/5] arm64: errata: Work around early CME DVMSync acknowledgement Catalin Marinas
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2026-03-23 16:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Will Deacon, Marc Zyngier, Oliver Upton, Lorenzo Pieralisi,
	Sudeep Holla, James Morse, Mark Rutland, Mark Brown, kvmarm

Add cputype definitions for C1-Pro. These will be used for errata
detection in subsequent patches.

These values can be found in "Table A-303: MIDR_EL1 bit descriptions" in
issue 07 of the C1-Pro TRM:

  https://documentation-service.arm.com/static/6930126730f8f55a656570af

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
---
 arch/arm64/include/asm/cputype.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 08860d482e60..7b518e81dd15 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -98,6 +98,7 @@
 #define ARM_CPU_PART_CORTEX_A725	0xD87
 #define ARM_CPU_PART_CORTEX_A720AE	0xD89
 #define ARM_CPU_PART_NEOVERSE_N3	0xD8E
+#define ARM_CPU_PART_C1_PRO		0xD8B
 
 #define APM_CPU_PART_XGENE		0x000
 #define APM_CPU_VAR_POTENZA		0x00
@@ -189,6 +190,7 @@
 #define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725)
 #define MIDR_CORTEX_A720AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A720AE)
 #define MIDR_NEOVERSE_N3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N3)
+#define MIDR_C1_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_C1_PRO)
 #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v3 4/5] arm64: errata: Work around early CME DVMSync acknowledgement
  2026-03-23 16:24 [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Catalin Marinas
                   ` (2 preceding siblings ...)
  2026-03-23 16:24 ` [PATCH v3 3/5] arm64: cputype: Add C1-Pro definitions Catalin Marinas
@ 2026-03-23 16:24 ` Catalin Marinas
  2026-03-27 19:15   ` Catalin Marinas
  2026-03-23 16:24 ` [PATCH v3 5/5] KVM: arm64: Add SMC hook for SME dvmsync erratum Catalin Marinas
  2026-03-23 17:53 ` [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Mark Rutland
  5 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2026-03-23 16:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Will Deacon, Marc Zyngier, Oliver Upton, Lorenzo Pieralisi,
	Sudeep Holla, James Morse, Mark Rutland, Mark Brown, kvmarm

C1-Pro acknowledges DVMSync messages before completing the SME/CME
memory accesses. Work around this by issuing an IPI to the affected CPUs
if they are running at EL0 with SME enabled.

Note that we avoid the local DSB in the IPI handler as the kernel runs
with SCTLR_EL1.IESB=1. This is sufficient to complete SME memory
accesses at EL0 on taking an exception to EL1. On the return-to-user
path, no barrier is necessary either. See the comment in
sme_set_active() and the more detailed explanation in the link below.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/ablEXwhfKyJW1i7l@J2N7QTR9R3
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
---
 Documentation/arch/arm64/silicon-errata.rst |  2 +
 arch/arm64/Kconfig                          | 12 +++
 arch/arm64/include/asm/cpucaps.h            |  2 +
 arch/arm64/include/asm/fpsimd.h             | 21 +++++
 arch/arm64/include/asm/mmu.h                |  1 +
 arch/arm64/include/asm/tlbflush.h           | 22 +++++
 arch/arm64/kernel/cpu_errata.c              | 30 +++++++
 arch/arm64/kernel/entry-common.c            |  3 +
 arch/arm64/kernel/fpsimd.c                  | 89 +++++++++++++++++++++
 arch/arm64/kernel/process.c                 |  7 ++
 arch/arm64/tools/cpucaps                    |  1 +
 11 files changed, 190 insertions(+)

diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
index 4c300caad901..282ad4257983 100644
--- a/Documentation/arch/arm64/silicon-errata.rst
+++ b/Documentation/arch/arm64/silicon-errata.rst
@@ -202,6 +202,8 @@ stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Neoverse-V3AE   | #3312417        | ARM64_ERRATUM_3194386       |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | C1-Pro          | #4193714        | ARM64_ERRATUM_4193714       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | MMU-500         | #841119,826419  | ARM_SMMU_MMU_500_CPRE_ERRATA|
 |                |                 | #562869,1047329 |                             |
 +----------------+-----------------+-----------------+-----------------------------+
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 38dba5f7e4d2..9b419f1a9ae6 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1175,6 +1175,18 @@ config ARM64_ERRATUM_4311569
 
 	  If unsure, say Y.
 
+config ARM64_ERRATUM_4193714
+	bool "C1-Pro: 4193714: SME DVMSync early acknowledgement"
+	depends on ARM64_SME
+	default y
+	help
+	  Enable workaround for C1-Pro acknowledging the DVMSync before
+	  the SME memory accesses are complete. This will cause TLB
+	  maintenance for processes using SME to also issue an IPI to
+	  the affected CPUs.
+
+	  If unsure, say Y.
+
 config CAVIUM_ERRATUM_22375
 	bool "Cavium erratum 22375, 24313"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 177c691914f8..0b1b78a4c03e 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -64,6 +64,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_ARM64_WORKAROUND_REPEAT_TLBI);
 	case ARM64_WORKAROUND_SPECULATIVE_SSBS:
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_3194386);
+	case ARM64_WORKAROUND_4193714:
+		return IS_ENABLED(CONFIG_ARM64_ERRATUM_4193714);
 	case ARM64_MPAM:
 		/*
 		 * KVM MPAM support doesn't rely on the host kernel supporting MPAM.
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 1d2e33559bd5..d9d00b45ab11 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -428,6 +428,24 @@ static inline size_t sme_state_size(struct task_struct const *task)
 	return __sme_state_size(task_get_sme_vl(task));
 }
 
+void sme_enable_dvmsync(void);
+void sme_set_active(void);
+void sme_clear_active(void);
+
+static inline void sme_enter_from_user_mode(void)
+{
+	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714) &&
+	    test_thread_flag(TIF_SME))
+		sme_clear_active();
+}
+
+static inline void sme_exit_to_user_mode(void)
+{
+	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714) &&
+	    test_thread_flag(TIF_SME))
+		sme_set_active();
+}
+
 #else
 
 static inline void sme_user_disable(void) { BUILD_BUG(); }
@@ -456,6 +474,9 @@ static inline size_t sme_state_size(struct task_struct const *task)
 	return 0;
 }
 
+static inline void sme_enter_from_user_mode(void) { }
+static inline void sme_exit_to_user_mode(void) { }
+
 #endif /* ! CONFIG_ARM64_SME */
 
 /* For use by EFI runtime services calls only */
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 137a173df1ff..ec6003db4d20 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -8,6 +8,7 @@
 #include <asm/cputype.h>
 
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
+#define MMCF_SME_DVMSYNC 0x2	/* force DVMSync via IPI for SME completion */
 #define USER_ASID_BIT	48
 #define USER_ASID_FLAG	(UL(1) << USER_ASID_BIT)
 #define TTBR_ASID_MASK	(UL(0xffff) << 48)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 262791191935..80a623f9684a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -80,6 +80,26 @@ static inline unsigned long get_trans_granule(void)
 	}
 }
 
+void sme_do_dvmsync(void);
+
+static inline void sme_dvmsync(struct mm_struct *mm)
+{
+	if (!alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714))
+		return;
+	if (!test_bit(ilog2(MMCF_SME_DVMSYNC), &mm->context.flags))
+		return;
+
+	sme_do_dvmsync();
+}
+
+static inline void sme_dvmsync_batch(void)
+{
+	if (!alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714))
+		return;
+
+	sme_do_dvmsync();
+}
+
 /*
  * Level-based TLBI operations.
  *
@@ -189,12 +209,14 @@ static inline void __tlbi_sync_s1ish(struct mm_struct *mm)
 {
 	dsb(ish);
 	__repeat_tlbi_sync(vale1is, 0);
+	sme_dvmsync(mm);
 }
 
 static inline void __tlbi_sync_s1ish_batch(void)
 {
 	dsb(ish);
 	__repeat_tlbi_sync(vale1is, 0);
+	sme_dvmsync_batch();
 }
 
 static inline void __tlbi_sync_s1ish_kernel(void)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 5c0ab6bfd44a..5377e4c2eba2 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -11,6 +11,7 @@
 #include <asm/cpu.h>
 #include <asm/cputype.h>
 #include <asm/cpufeature.h>
+#include <asm/fpsimd.h>
 #include <asm/kvm_asm.h>
 #include <asm/smp_plat.h>
 
@@ -575,6 +576,23 @@ static const struct midr_range erratum_spec_ssbs_list[] = {
 };
 #endif
 
+#ifdef CONFIG_ARM64_ERRATUM_4193714
+static bool has_sme_dvmsync_erratum(const struct arm64_cpu_capabilities *entry,
+				    int scope)
+{
+	if (!id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1)))
+		return false;
+
+	return is_affected_midr_range(entry, scope);
+}
+
+static void cpu_enable_sme_dvmsync(const struct arm64_cpu_capabilities *__unused)
+{
+	if (this_cpu_has_cap(ARM64_WORKAROUND_4193714))
+		sme_enable_dvmsync();
+}
+#endif
+
 #ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
 static const struct midr_range erratum_ac03_cpu_38_list[] = {
 	MIDR_ALL_VERSIONS(MIDR_AMPERE1),
@@ -901,6 +919,18 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.matches = need_arm_si_l1_workaround_4311569,
 	},
 #endif
+#ifdef CONFIG_ARM64_ERRATUM_4193714
+	{
+		.desc = "C1-Pro SME DVMSync early acknowledgement",
+		.capability = ARM64_WORKAROUND_4193714,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = has_sme_dvmsync_erratum,
+		.cpu_enable = cpu_enable_sme_dvmsync,
+		/* C1-Pro r0p0 - r1p2 (the latter only when REVIDR_EL1[0]==0) */
+		.midr_range = MIDR_RANGE(MIDR_C1_PRO, 0, 0, 1, 2),
+		MIDR_FIXED(MIDR_CPU_VAR_REV(1, 2), BIT(0)),
+	},
+#endif
 #ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
 	{
 		.desc = "ARM errata 2966298, 3117295",
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 3625797e9ee8..fb1e374af622 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -21,6 +21,7 @@
 #include <asm/daifflags.h>
 #include <asm/esr.h>
 #include <asm/exception.h>
+#include <asm/fpsimd.h>
 #include <asm/irq_regs.h>
 #include <asm/kprobes.h>
 #include <asm/mmu.h>
@@ -67,6 +68,7 @@ static __always_inline void arm64_enter_from_user_mode(struct pt_regs *regs)
 {
 	enter_from_user_mode(regs);
 	mte_disable_tco_entry(current);
+	sme_enter_from_user_mode();
 }
 
 /*
@@ -80,6 +82,7 @@ static __always_inline void arm64_exit_to_user_mode(struct pt_regs *regs)
 	local_irq_disable();
 	exit_to_user_mode_prepare_legacy(regs);
 	local_daif_mask();
+	sme_exit_to_user_mode();
 	mte_check_tfsr_exit();
 	exit_to_user_mode();
 }
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 9de1d8a604cb..f22c1b9b1c91 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -15,6 +15,7 @@
 #include <linux/compiler.h>
 #include <linux/cpu.h>
 #include <linux/cpu_pm.h>
+#include <linux/cpumask.h>
 #include <linux/ctype.h>
 #include <linux/kernel.h>
 #include <linux/linkage.h>
@@ -28,6 +29,7 @@
 #include <linux/sched/task_stack.h>
 #include <linux/signal.h>
 #include <linux/slab.h>
+#include <linux/smp.h>
 #include <linux/stddef.h>
 #include <linux/sysctl.h>
 #include <linux/swab.h>
@@ -1358,6 +1360,93 @@ void do_sve_acc(unsigned long esr, struct pt_regs *regs)
 	put_cpu_fpsimd_context();
 }
 
+#ifdef CONFIG_ARM64_ERRATUM_4193714
+
+/*
+ * SME/CME erratum handling
+ */
+static cpumask_var_t sme_dvmsync_cpus;
+static cpumask_var_t sme_active_cpus;
+
+/*
+ * These helpers are only called from non-preemptible contexts, so
+ * smp_processor_id() is safe here.
+ */
+void sme_set_active(void)
+{
+	unsigned int cpu = smp_processor_id();
+
+	if (!cpumask_test_cpu(cpu, sme_dvmsync_cpus))
+		return;
+
+	if (!test_bit(ilog2(MMCF_SME_DVMSYNC), &current->mm->context.flags))
+		set_bit(ilog2(MMCF_SME_DVMSYNC), &current->mm->context.flags);
+
+	cpumask_set_cpu(cpu, sme_active_cpus);
+
+	/*
+	 * A subsequent (post ERET) SME access may use a stale address
+	 * translation. On C1-Pro, a TLBI+DSB on a different CPU will wait for
+	 * the completion of set_bit() and cpumask_set_cpu() above as they
+	 * appear in program order before the SME access. The post-TLBI+DSB
+	 * read of the flag and cpumask will lead to the IPI being issued.
+	 *
+	 * https://lore.kernel.org/r/ablEXwhfKyJW1i7l@J2N7QTR9R3
+	 */
+}
+
+void sme_clear_active(void)
+{
+	unsigned int cpu = smp_processor_id();
+
+	if (!cpumask_test_cpu(cpu, sme_dvmsync_cpus))
+		return;
+
+	/*
+	 * With SCTLR_EL1.IESB enabled, the SME memory transactions are
+	 * completed on entering EL1.
+	 */
+	cpumask_clear_cpu(cpu, sme_active_cpus);
+}
+
+static void sme_dvmsync_ipi(void *unused)
+{
+	/*
+	 * With SCTLR_EL1.IESB on, taking an exception is sufficient to ensure
+	 * the completion of the SME memory accesses, so no need for an
+	 * explicit DSB.
+	 */
+}
+
+void sme_do_dvmsync(void)
+{
+	/*
+	 * This is called from the TLB maintenance functions after the DSB ISH
+	 * that sends the hardware DVMSync message. If this CPU sees the mask
+	 * as empty, the remote CPU executing sme_set_active() would have seen
+	 * the DVMSync and no IPI is required.
+	 */
+	if (cpumask_empty(sme_active_cpus))
+		return;
+
+	preempt_disable();
+	smp_call_function_many(sme_active_cpus, sme_dvmsync_ipi, NULL, true);
+	preempt_enable();
+}
+
+void sme_enable_dvmsync(void)
+{
+	if ((!cpumask_available(sme_dvmsync_cpus) &&
+	     !zalloc_cpumask_var(&sme_dvmsync_cpus, GFP_ATOMIC)) ||
+	    (!cpumask_available(sme_active_cpus) &&
+	     !zalloc_cpumask_var(&sme_active_cpus, GFP_ATOMIC)))
+		panic("Unable to allocate the cpumasks for SME DVMSync erratum");
+
+	cpumask_set_cpu(smp_processor_id(), sme_dvmsync_cpus);
+}
+
+#endif /* CONFIG_ARM64_ERRATUM_4193714 */
+
 /*
  * Trapped SME access
  *
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 489554931231..0b44e0710971 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -471,6 +471,13 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 				ret = copy_thread_za(p, current);
 				if (ret)
 					return ret;
+				/*
+				 * Disable the SME DVMSync workaround for the
+				 * new process, it will be enabled on return
+				 * to user if TIF_SME is set.
+				 */
+				if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714))
+					p->mm->context.flags &= ~MMCF_SME_DVMSYNC;
 			} else {
 				p->thread.tpidr2_el0 = 0;
 				WARN_ON_ONCE(p->thread.svcr & SVCR_ZA_MASK);
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 7261553b644b..8946be60a409 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -105,6 +105,7 @@ WORKAROUND_2077057
 WORKAROUND_2457168
 WORKAROUND_2645198
 WORKAROUND_2658417
+WORKAROUND_4193714
 WORKAROUND_4311569
 WORKAROUND_AMPERE_AC03_CPU_38
 WORKAROUND_AMPERE_AC04_CPU_23


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v3 5/5] KVM: arm64: Add SMC hook for SME dvmsync erratum
  2026-03-23 16:24 [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Catalin Marinas
                   ` (3 preceding siblings ...)
  2026-03-23 16:24 ` [PATCH v3 4/5] arm64: errata: Work around early CME DVMSync acknowledgement Catalin Marinas
@ 2026-03-23 16:24 ` Catalin Marinas
  2026-03-24 10:14   ` Vincent Donnefort
  2026-03-23 17:53 ` [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Mark Rutland
  5 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2026-03-23 16:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Will Deacon, Marc Zyngier, Oliver Upton, Lorenzo Pieralisi,
	Sudeep Holla, James Morse, Mark Rutland, Mark Brown, kvmarm

From: James Morse <james.morse@arm.com>

C1-Pro cores with SME have an erratum where TLBI+DSB does not complete
all outstanding SME accesses. Instead, a DSB needs to be executed on the
affected CPUs. The implication is that pages cannot be unmapped from the
host stage2 and then provided to the guest, as host SME accesses may
still occur after this point.

This erratum breaks pKVM's guarantees, and the workaround is hard to
implement as EL2 and EL1 share a security state, meaning EL1 can mask
IPIs sent by EL2, leading to interrupt blackouts.

Instead, do this in EL3. This has the advantage of a separate security
state, meaning a lower EL cannot mask the IPI. It is also simpler for
EL3 to know about CPUs that are off or in PSCI's CPU_SUSPEND.

Add the needed hook.
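
The EL3 side is firmware's responsibility and is not part of this
series; conceptually (hypothetical pseudocode only, not a real
implementation) the EL3 handler would do something like:

	/* EL3 SMC dispatch, hypothetical */
	case ARM_SMCCC_CPU_WORKAROUND_4193714:
		for_each_online_affected_cpu(cpu)
			send_sgi(cpu);	/* SGI handler executes a DSB */
		wait_for_sgi_acks();
		return SMCCC_RET_SUCCESS;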

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oupton@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lorenzo Pieralisi <lpieralisi@kernel.org>
Cc: Sudeep Holla <sudeep.holla@kernel.org>
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 17 +++++++++++++++++
 include/linux/arm-smccc.h             |  6 ++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 38f66a56a766..ef8afbdd421b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -5,6 +5,8 @@
  */
 
 #include <linux/kvm_host.h>
+#include <linux/arm-smccc.h>
+
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
@@ -28,6 +30,15 @@ static struct hyp_pool host_s2_pool;
 static DEFINE_PER_CPU(struct pkvm_hyp_vm *, __current_vm);
 #define current_vm (*this_cpu_ptr(&__current_vm))
 
+static void pkvm_sme_dvmsync_fw_call(void)
+{
+	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714)) {
+		struct arm_smccc_res res;
+
+		arm_smccc_1_1_smc(ARM_SMCCC_CPU_WORKAROUND_4193714, &res);
+	}
+}
+
 static void guest_lock_component(struct pkvm_hyp_vm *vm)
 {
 	hyp_spin_lock(&vm->lock);
@@ -553,6 +564,12 @@ int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
 	if (ret)
 		return ret;
 
+	/*
+	 * After stage2 maintenance has happened, but before the page owner has
+	 * changed.
+	 */
+	pkvm_sme_dvmsync_fw_call();
+
 	/* Don't forget to update the vmemmap tracking for the host */
 	if (owner_id == PKVM_ID_HOST)
 		__host_update_page_state(addr, size, PKVM_PAGE_OWNED);
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 50b47eba7d01..e7195750d21b 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -105,6 +105,12 @@
 			   ARM_SMCCC_SMC_32,				\
 			   0, 0x3fff)
 
+/* C1-Pro erratum 4193714: SME DVMSync early acknowledgement */
+#define ARM_SMCCC_CPU_WORKAROUND_4193714				\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_32,				\
+			   ARM_SMCCC_OWNER_CPU, 0x10)
+
 #define ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID				\
 	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
 			   ARM_SMCCC_SMC_32,				\


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995)
  2026-03-23 16:24 [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Catalin Marinas
                   ` (4 preceding siblings ...)
  2026-03-23 16:24 ` [PATCH v3 5/5] KVM: arm64: Add SMC hook for SME dvmsync erratum Catalin Marinas
@ 2026-03-23 17:53 ` Mark Rutland
  5 siblings, 0 replies; 10+ messages in thread
From: Mark Rutland @ 2026-03-23 17:53 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-arm-kernel, Will Deacon, Marc Zyngier, Oliver Upton,
	Lorenzo Pieralisi, Sudeep Holla, James Morse, Mark Brown, kvmarm

On Mon, Mar 23, 2026 at 04:24:00PM +0000, Catalin Marinas wrote:
> Here's version 3 of the workaround for C1-Pro erratum 4193714. Version 2 was
> posted here:
> 
> https://lore.kernel.org/r/20260318191918.2653160-1-catalin.marinas@arm.com
> 
> Catalin Marinas (4):
>   arm64: tlb: Introduce __tlbi_sync_s1ish_{kernel,batch}() for TLB
>     maintenance
>   arm64: tlb: Pass the corresponding mm to __tlbi_sync_s1ish()
>   arm64: cputype: Add C1-Pro definitions
>   arm64: errata: Work around early CME DVMSync acknowledgement
> 
> James Morse (1):
>   KVM: arm64: Add SMC hook for SME dvmsync erratum

These all look good to me.

FWIW, for the series:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3 5/5] KVM: arm64: Add SMC hook for SME dvmsync erratum
  2026-03-23 16:24 ` [PATCH v3 5/5] KVM: arm64: Add SMC hook for SME dvmsync erratum Catalin Marinas
@ 2026-03-24 10:14   ` Vincent Donnefort
  2026-03-24 12:56     ` Catalin Marinas
  0 siblings, 1 reply; 10+ messages in thread
From: Vincent Donnefort @ 2026-03-24 10:14 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-arm-kernel, Will Deacon, Marc Zyngier, Oliver Upton,
	Lorenzo Pieralisi, Sudeep Holla, James Morse, Mark Rutland,
	Mark Brown, kvmarm

On Mon, Mar 23, 2026 at 04:24:05PM +0000, Catalin Marinas wrote:
> From: James Morse <james.morse@arm.com>
> 
> C1-Pro cores with SME have an erratum where TLBI+DSB does not complete
> all outstanding SME accesses. Instead, a DSB needs to be executed on the
> affected CPUs. The implication is that pages cannot be unmapped from the
> host stage2 and then provided to the guest, as host SME accesses may
> still occur after this point.
> 
> This erratum breaks pKVM's guarantees, and the workaround is hard to
> implement as EL2 and EL1 share a security state, meaning EL1 can mask
> IPIs sent by EL2, leading to interrupt blackouts.
> 
> Instead, do this in EL3. This has the advantage of a separate security
> state, meaning a lower EL cannot mask the IPI. It is also simpler for
> EL3 to know about CPUs that are off or in PSCI's CPU_SUSPEND.
> 
> Add the needed hook.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Oliver Upton <oupton@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Lorenzo Pieralisi <lpieralisi@kernel.org>
> Cc: Sudeep Holla <sudeep.holla@kernel.org>

In case this goes in before Will's p-guest series and with just a small comment
below:

Reviewed-by: Vincent Donnefort <vdonnefort@google.com>

> ---
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c | 17 +++++++++++++++++
>  include/linux/arm-smccc.h             |  6 ++++++
>  2 files changed, 23 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 38f66a56a766..ef8afbdd421b 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -5,6 +5,8 @@
>   */
>  
>  #include <linux/kvm_host.h>
> +#include <linux/arm-smccc.h>
> +
>  #include <asm/kvm_emulate.h>
>  #include <asm/kvm_hyp.h>
>  #include <asm/kvm_mmu.h>
> @@ -28,6 +30,15 @@ static struct hyp_pool host_s2_pool;
>  static DEFINE_PER_CPU(struct pkvm_hyp_vm *, __current_vm);
>  #define current_vm (*this_cpu_ptr(&__current_vm))
>  
> +static void pkvm_sme_dvmsync_fw_call(void)
> +{
> +	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714)) {
> +		struct arm_smccc_res res;
> +
> +		arm_smccc_1_1_smc(ARM_SMCCC_CPU_WORKAROUND_4193714, &res);

With hyp tracing in kvmarm/next, this should be hyp_smccc_1_1_smc().
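
i.e. something like the line below, assuming hyp_smccc_1_1_smc() keeps
the arm_smccc_1_1_smc() calling convention:

		hyp_smccc_1_1_smc(ARM_SMCCC_CPU_WORKAROUND_4193714, &res);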

> +	}
> +}
> +
>  static void guest_lock_component(struct pkvm_hyp_vm *vm)
>  {
>  	hyp_spin_lock(&vm->lock);
> @@ -553,6 +564,12 @@ int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
>  	if (ret)
>  		return ret;
>  
> +	/*
> +	 * After stage2 maintenance has happened, but before the page owner has
> +	 * changed.
> +	 */
> +	pkvm_sme_dvmsync_fw_call();
> +
>  	/* Don't forget to update the vmemmap tracking for the host */
>  	if (owner_id == PKVM_ID_HOST)
>  		__host_update_page_state(addr, size, PKVM_PAGE_OWNED);
> diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
> index 50b47eba7d01..e7195750d21b 100644
> --- a/include/linux/arm-smccc.h
> +++ b/include/linux/arm-smccc.h
> @@ -105,6 +105,12 @@
>  			   ARM_SMCCC_SMC_32,				\
>  			   0, 0x3fff)
>  
> +/* C1-Pro erratum 4193714: SME DVMSync early acknowledgement */
> +#define ARM_SMCCC_CPU_WORKAROUND_4193714				\
> +	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
> +			   ARM_SMCCC_SMC_32,				\
> +			   ARM_SMCCC_OWNER_CPU, 0x10)
> +
>  #define ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID				\
>  	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
>  			   ARM_SMCCC_SMC_32,				\
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3 5/5] KVM: arm64: Add SMC hook for SME dvmsync erratum
  2026-03-24 10:14   ` Vincent Donnefort
@ 2026-03-24 12:56     ` Catalin Marinas
  0 siblings, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2026-03-24 12:56 UTC (permalink / raw)
  To: Vincent Donnefort
  Cc: linux-arm-kernel, Will Deacon, Marc Zyngier, Oliver Upton,
	Lorenzo Pieralisi, Sudeep Holla, James Morse, Mark Rutland,
	Mark Brown, kvmarm

On Tue, Mar 24, 2026 at 10:14:40AM +0000, Vincent Donnefort wrote:
> On Mon, Mar 23, 2026 at 04:24:05PM +0000, Catalin Marinas wrote:
> > From: James Morse <james.morse@arm.com>
> > 
> > C1-Pro cores with SME have an erratum where TLBI+DSB does not complete
> > all outstanding SME accesses. Instead, a DSB needs to be executed on the
> > affected CPUs. The implication is that pages cannot be unmapped from the
> > host stage2 and then provided to the guest, as host SME accesses may
> > still occur after this point.
> > 
> > This erratum breaks pKVM's guarantees, and the workaround is hard to
> > implement as EL2 and EL1 share a security state, meaning EL1 can mask
> > IPIs sent by EL2, leading to interrupt blackouts.
> > 
> > Instead, do this in EL3. This has the advantage of a separate security
> > state, meaning a lower EL cannot mask the IPI. It is also simpler for
> > EL3 to know about CPUs that are off or in PSCI's CPU_SUSPEND.
> > 
> > Add the needed hook.
> > 
> > Signed-off-by: James Morse <james.morse@arm.com>
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Marc Zyngier <maz@kernel.org>
> > Cc: Oliver Upton <oupton@kernel.org>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: Lorenzo Pieralisi <lpieralisi@kernel.org>
> > Cc: Sudeep Holla <sudeep.holla@kernel.org>
> 
> In case this goes in before Will's p-guest series and with just a small comment
> below:
> 
> Reviewed-by: Vincent Donnefort <vdonnefort@google.com>

Thanks.

I can leave this patch for later, maybe merge it after -rc1.

> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index 38f66a56a766..ef8afbdd421b 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -5,6 +5,8 @@
> >   */
> >  
> >  #include <linux/kvm_host.h>
> > +#include <linux/arm-smccc.h>
> > +
> >  #include <asm/kvm_emulate.h>
> >  #include <asm/kvm_hyp.h>
> >  #include <asm/kvm_mmu.h>
> > @@ -28,6 +30,15 @@ static struct hyp_pool host_s2_pool;
> >  static DEFINE_PER_CPU(struct pkvm_hyp_vm *, __current_vm);
> >  #define current_vm (*this_cpu_ptr(&__current_vm))
> >  
> > +static void pkvm_sme_dvmsync_fw_call(void)
> > +{
> > +	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_4193714)) {
> > +		struct arm_smccc_res res;
> > +
> > +		arm_smccc_1_1_smc(ARM_SMCCC_CPU_WORKAROUND_4193714, &res);
> 
> With hyp tracing in kvmarm/next, this should be hyp_smccc_1_1_smc().

One more reason to leave it after -rc1.

-- 
Catalin


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3 4/5] arm64: errata: Work around early CME DVMSync acknowledgement
  2026-03-23 16:24 ` [PATCH v3 4/5] arm64: errata: Work around early CME DVMSync acknowledgement Catalin Marinas
@ 2026-03-27 19:15   ` Catalin Marinas
  0 siblings, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2026-03-27 19:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Will Deacon, Marc Zyngier, Oliver Upton, Lorenzo Pieralisi,
	Sudeep Holla, James Morse, Mark Rutland, Mark Brown, kvmarm

On Mon, Mar 23, 2026 at 04:24:04PM +0000, Catalin Marinas wrote:
> +void sme_enable_dvmsync(void)
> +{
> +	if ((!cpumask_available(sme_dvmsync_cpus) &&
> +	     !zalloc_cpumask_var(&sme_dvmsync_cpus, GFP_ATOMIC)) ||
> +	    (!cpumask_available(sme_active_cpus) &&
> +	     !zalloc_cpumask_var(&sme_active_cpus, GFP_ATOMIC)))
> +		panic("Unable to allocate the cpumasks for SME DVMSync erratum");
> +
> +	cpumask_set_cpu(smp_processor_id(), sme_dvmsync_cpus);
> +}

Sashiko (correctly) highlighted a race here. This function, even though
the erratum is a local CPU feature, is still called via stop_machine()
on all active CPUs when the non-boot features are initialised. It only
matters if CPUMASK_OFFSTACK is enabled, as that's the only case where
zalloc_cpumask_var() actually allocates. I'll add a fix, most likely a
lock around this to serialise the cpumask initialisation.
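
Something like the sketch below, perhaps (untested; it assumes taking a
raw spinlock and a GFP_ATOMIC allocation are acceptable in this
context):

	static DEFINE_RAW_SPINLOCK(sme_dvmsync_lock);

	void sme_enable_dvmsync(void)
	{
		/* serialise the one-off cpumask allocation */
		raw_spin_lock(&sme_dvmsync_lock);
		if ((!cpumask_available(sme_dvmsync_cpus) &&
		     !zalloc_cpumask_var(&sme_dvmsync_cpus, GFP_ATOMIC)) ||
		    (!cpumask_available(sme_active_cpus) &&
		     !zalloc_cpumask_var(&sme_active_cpus, GFP_ATOMIC)))
			panic("Unable to allocate the cpumasks for SME DVMSync erratum");
		raw_spin_unlock(&sme_dvmsync_lock);

		cpumask_set_cpu(smp_processor_id(), sme_dvmsync_cpus);
	}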

-- 
Catalin


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2026-03-27 19:15 UTC | newest]

Thread overview: 10+ messages
2026-03-23 16:24 [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Catalin Marinas
2026-03-23 16:24 ` [PATCH v3 1/5] arm64: tlb: Introduce __tlbi_sync_s1ish_{kernel,batch}() for TLB maintenance Catalin Marinas
2026-03-23 16:24 ` [PATCH v3 2/5] arm64: tlb: Pass the corresponding mm to __tlbi_sync_s1ish() Catalin Marinas
2026-03-23 16:24 ` [PATCH v3 3/5] arm64: cputype: Add C1-Pro definitions Catalin Marinas
2026-03-23 16:24 ` [PATCH v3 4/5] arm64: errata: Work around early CME DVMSync acknowledgement Catalin Marinas
2026-03-27 19:15   ` Catalin Marinas
2026-03-23 16:24 ` [PATCH v3 5/5] KVM: arm64: Add SMC hook for SME dvmsync erratum Catalin Marinas
2026-03-24 10:14   ` Vincent Donnefort
2026-03-24 12:56     ` Catalin Marinas
2026-03-23 17:53 ` [PATCH v3 0/5] arm64: Work around C1-Pro erratum 4193714 (CVE-2026-0995) Mark Rutland
