* [RFC v3 01/21] hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-10-12 15:06 ` [RFC v3 02/21] hw/arm/smmuv3: Correct SMMUEN field name in CR0 Tao Tang
` (19 subsequent siblings)
20 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
The current definition of the SMMU_CR0_RESERVED mask is incorrect.
It mistakenly treats bit 10 (DPT_WALK_EN) as a reserved bit while
treating bit 9 (RES0) as an implemented bit.
According to the SMMU architecture specification, the layout for CR0 is:
| 31:11 | RES0        |
| 10    | DPT_WALK_EN |
| 9     | RES0        |
| 8:6   | VMW         |
| 5     | RES0        |
| 4     | ATSCHK      |
| 3     | CMDQEN      |
| 2     | EVENTQEN    |
| 1     | PRIQEN      |
| 0     | SMMUEN      |
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lists.gnu.org/archive/html/qemu-arm/2025-06/msg00088.html
---
hw/arm/smmuv3-internal.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index b6b7399347..42ac77e654 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -120,7 +120,7 @@ REG32(CR0, 0x20)
FIELD(CR0, EVENTQEN, 2, 1)
FIELD(CR0, CMDQEN, 3, 1)
-#define SMMU_CR0_RESERVED 0xFFFFFC20
+#define SMMU_CR0_RESERVED 0xFFFFFA20
REG32(CR0ACK, 0x24)
REG32(CR1, 0x28)
--
2.34.1
* [RFC v3 02/21] hw/arm/smmuv3: Correct SMMUEN field name in CR0
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
2025-10-12 15:06 ` [RFC v3 01/21] hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-10-12 15:06 ` [RFC v3 03/21] hw/arm/smmuv3: Introduce secure registers Tao Tang
` (18 subsequent siblings)
20 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
The FIELD macro for the SMMU enable bit in the CR0 register was
incorrectly named SMMU_ENABLE.
The ARM SMMUv3 Architecture Specification (both older IHI 0070.E.a and
newer IHI 0070.G.b) consistently refers to the SMMU enable bit as SMMUEN.
This change makes our implementation consistent with the manual.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lists.nongnu.org/archive/html/qemu-arm/2025-09/msg01270.html
---
hw/arm/smmuv3-internal.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 42ac77e654..8d631ecf27 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -116,7 +116,7 @@ REG32(IDR5, 0x14)
REG32(IIDR, 0x18)
REG32(AIDR, 0x1c)
REG32(CR0, 0x20)
- FIELD(CR0, SMMU_ENABLE, 0, 1)
+ FIELD(CR0, SMMUEN, 0, 1)
FIELD(CR0, EVENTQEN, 2, 1)
FIELD(CR0, CMDQEN, 3, 1)
@@ -181,7 +181,7 @@ REG32(EVENTQ_IRQ_CFG2, 0xbc)
static inline int smmu_enabled(SMMUv3State *s)
{
- return FIELD_EX32(s->cr[0], CR0, SMMU_ENABLE);
+ return FIELD_EX32(s->cr[0], CR0, SMMUEN);
}
/* Command Queue Entry */
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread* [RFC v3 03/21] hw/arm/smmuv3: Introduce secure registers
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
2025-10-12 15:06 ` [RFC v3 01/21] hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register Tao Tang
2025-10-12 15:06 ` [RFC v3 02/21] hw/arm/smmuv3: Correct SMMUEN field name in CR0 Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-11-21 12:47 ` Eric Auger
2025-10-12 15:06 ` [RFC v3 04/21] refactor: Move ARMSecuritySpace to a common header Tao Tang
` (17 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
The Arm SMMUv3 architecture defines a set of registers for managing
secure transactions and context.
This patch introduces the definitions for these secure registers within
the SMMUv3 device model internal header.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3-internal.h | 69 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 68 insertions(+), 1 deletion(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 8d631ecf27..e420c5dc72 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -38,7 +38,7 @@ typedef enum SMMUTranslationClass {
SMMU_CLASS_IN,
} SMMUTranslationClass;
-/* MMIO Registers */
+/* MMIO Registers. Shared by Non-secure/Realm/Root states. */
REG32(IDR0, 0x0)
FIELD(IDR0, S2P, 0 , 1)
@@ -121,6 +121,7 @@ REG32(CR0, 0x20)
FIELD(CR0, CMDQEN, 3, 1)
#define SMMU_CR0_RESERVED 0xFFFFFA20
+#define SMMU_S_CR0_RESERVED 0xFFFFFC12
REG32(CR0ACK, 0x24)
REG32(CR1, 0x28)
@@ -179,6 +180,72 @@ REG32(EVENTQ_IRQ_CFG2, 0xbc)
#define A_IDREGS 0xfd0
+#define SMMU_SECURE_REG_START 0x8000 /* Start of secure-only registers */
+
+REG32(S_IDR0, 0x8000)
+REG32(S_IDR1, 0x8004)
+ FIELD(S_IDR1, S_SIDSIZE, 0 , 6)
+ FIELD(S_IDR1, SEL2, 29, 1)
+ FIELD(S_IDR1, SECURE_IMPL, 31, 1)
+
+REG32(S_IDR2, 0x8008)
+REG32(S_IDR3, 0x800c)
+REG32(S_IDR4, 0x8010)
+
+REG32(S_CR0, 0x8020)
+ FIELD(S_CR0, SMMUEN, 0, 1)
+ FIELD(S_CR0, EVENTQEN, 2, 1)
+ FIELD(S_CR0, CMDQEN, 3, 1)
+
+REG32(S_CR0ACK, 0x8024)
+REG32(S_CR1, 0x8028)
+REG32(S_CR2, 0x802c)
+
+REG32(S_INIT, 0x803c)
+ FIELD(S_INIT, INV_ALL, 0, 1)
+
+REG32(S_GBPA, 0x8044)
+ FIELD(S_GBPA, ABORT, 20, 1)
+ FIELD(S_GBPA, UPDATE, 31, 1)
+
+REG32(S_IRQ_CTRL, 0x8050)
+ FIELD(S_IRQ_CTRL, GERROR_IRQEN, 0, 1)
+ FIELD(S_IRQ_CTRL, EVENTQ_IRQEN, 2, 1)
+
+REG32(S_IRQ_CTRLACK, 0x8054)
+
+REG32(S_GERROR, 0x8060)
+ FIELD(S_GERROR, CMDQ_ERR, 0, 1)
+
+#define SMMU_GERROR_IRQ_CFG0_RESERVED 0x00FFFFFFFFFFFFFC
+#define SMMU_GERROR_IRQ_CFG2_RESERVED 0x000000000000003F
+
+#define SMMU_STRTAB_BASE_RESERVED 0x40FFFFFFFFFFFFC0
+#define SMMU_QUEUE_BASE_RESERVED 0x40FFFFFFFFFFFFFF
+#define SMMU_EVENTQ_IRQ_CFG0_RESERVED 0x00FFFFFFFFFFFFFC
+
+REG32(S_GERRORN, 0x8064)
+REG64(S_GERROR_IRQ_CFG0, 0x8068)
+REG32(S_GERROR_IRQ_CFG1, 0x8070)
+REG32(S_GERROR_IRQ_CFG2, 0x8074)
+REG64(S_STRTAB_BASE, 0x8080)
+REG32(S_STRTAB_BASE_CFG, 0x8088)
+ FIELD(S_STRTAB_BASE_CFG, LOG2SIZE, 0, 6)
+ FIELD(S_STRTAB_BASE_CFG, SPLIT, 6, 5)
+ FIELD(S_STRTAB_BASE_CFG, FMT, 16, 2)
+
+REG64(S_CMDQ_BASE, 0x8090)
+REG32(S_CMDQ_PROD, 0x8098)
+REG32(S_CMDQ_CONS, 0x809c)
+ FIELD(S_CMDQ_CONS, ERR, 24, 7)
+
+REG64(S_EVENTQ_BASE, 0x80a0)
+REG32(S_EVENTQ_PROD, 0x80a8)
+REG32(S_EVENTQ_CONS, 0x80ac)
+REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
+REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
+REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
+
static inline int smmu_enabled(SMMUv3State *s)
{
return FIELD_EX32(s->cr[0], CR0, SMMUEN);
--
2.34.1
* Re: [RFC v3 03/21] hw/arm/smmuv3: Introduce secure registers
2025-10-12 15:06 ` [RFC v3 03/21] hw/arm/smmuv3: Introduce secure registers Tao Tang
@ 2025-11-21 12:47 ` Eric Auger
0 siblings, 0 replies; 67+ messages in thread
From: Eric Auger @ 2025-11-21 12:47 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 5:06 PM, Tao Tang wrote:
> The Arm SMMUv3 architecture defines a set of registers for managing
> secure transactions and context.
>
> This patch introduces the definitions for these secure registers within
> the SMMUv3 device model internal header.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Eric
* [RFC v3 04/21] refactor: Move ARMSecuritySpace to a common header
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (2 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 03/21] hw/arm/smmuv3: Introduce secure registers Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-11-21 12:49 ` Eric Auger
2025-10-12 15:06 ` [RFC v3 05/21] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state Tao Tang
` (16 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
The ARMSecuritySpace enum and its related helpers were defined in the
target-specific header target/arm/cpu.h. This prevented common,
target-agnostic code like the SMMU model from using these definitions
without triggering "cpu.h included from common code" errors.
To resolve this, this commit introduces a new, lightweight header,
include/hw/arm/arm-security.h, which is safe for inclusion by common
code.
The following change was made:
- The ARMSecuritySpace enum and the arm_space_is_secure() and
arm_secure_to_space() helpers have been moved from target/arm/cpu.h
to the new hw/arm/arm-security.h header.
This refactoring decouples the security state definitions from the core
CPU implementation, allowing common hardware models to correctly handle
security states without pulling in heavyweight, target-specific headers.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lists.nongnu.org/archive/html/qemu-arm/2025-09/msg01288.html
---
include/hw/arm/arm-security.h | 54 +++++++++++++++++++++++++++++++++++
target/arm/cpu.h | 25 +---------------
2 files changed, 55 insertions(+), 24 deletions(-)
create mode 100644 include/hw/arm/arm-security.h
diff --git a/include/hw/arm/arm-security.h b/include/hw/arm/arm-security.h
new file mode 100644
index 0000000000..9664c0f95e
--- /dev/null
+++ b/include/hw/arm/arm-security.h
@@ -0,0 +1,54 @@
+/*
+ * ARM security space helpers
+ *
+ * Provide ARMSecuritySpace and helpers for code that is not tied to CPU.
+ *
+ * Copyright (c) 2003 Fabrice Bellard
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef HW_ARM_ARM_SECURITY_H
+#define HW_ARM_ARM_SECURITY_H
+
+#include <stdbool.h>
+
+/*
+ * ARM v9 security states.
+ * The ordering of the enumeration corresponds to the low 2 bits
+ * of the GPI value, and (except for Root) the concat of NSE:NS.
+ */
+
+typedef enum ARMSecuritySpace {
+    ARMSS_Secure = 0,
+    ARMSS_NonSecure = 1,
+    ARMSS_Root = 2,
+    ARMSS_Realm = 3,
+} ARMSecuritySpace;
+
+/* Return true if @space is secure, in the pre-v9 sense. */
+static inline bool arm_space_is_secure(ARMSecuritySpace space)
+{
+ return space == ARMSS_Secure || space == ARMSS_Root;
+}
+
+/* Return the ARMSecuritySpace for @secure, assuming !RME or EL[0-2]. */
+static inline ARMSecuritySpace arm_secure_to_space(bool secure)
+{
+ return secure ? ARMSS_Secure : ARMSS_NonSecure;
+}
+
+#endif /* HW_ARM_ARM_SECURITY_H */
+
+
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 1d4e13320c..3336d95c6a 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -31,6 +31,7 @@
#include "exec/page-protection.h"
#include "qapi/qapi-types-common.h"
#include "target/arm/multiprocessing.h"
+#include "hw/arm/arm-security.h"
#include "target/arm/gtimer.h"
#include "target/arm/cpu-sysregs.h"
#include "target/arm/mmuidx.h"
@@ -2098,30 +2099,6 @@ static inline int arm_feature(CPUARMState *env, int feature)
void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp);
-/*
- * ARM v9 security states.
- * The ordering of the enumeration corresponds to the low 2 bits
- * of the GPI value, and (except for Root) the concat of NSE:NS.
- */
-
-typedef enum ARMSecuritySpace {
- ARMSS_Secure = 0,
- ARMSS_NonSecure = 1,
- ARMSS_Root = 2,
- ARMSS_Realm = 3,
-} ARMSecuritySpace;
-
-/* Return true if @space is secure, in the pre-v9 sense. */
-static inline bool arm_space_is_secure(ARMSecuritySpace space)
-{
- return space == ARMSS_Secure || space == ARMSS_Root;
-}
-
-/* Return the ARMSecuritySpace for @secure, assuming !RME or EL[0-2]. */
-static inline ARMSecuritySpace arm_secure_to_space(bool secure)
-{
- return secure ? ARMSS_Secure : ARMSS_NonSecure;
-}
#if !defined(CONFIG_USER_ONLY)
/**
--
2.34.1
* Re: [RFC v3 04/21] refactor: Move ARMSecuritySpace to a common header
2025-10-12 15:06 ` [RFC v3 04/21] refactor: Move ARMSecuritySpace to a common header Tao Tang
@ 2025-11-21 12:49 ` Eric Auger
0 siblings, 0 replies; 67+ messages in thread
From: Eric Auger @ 2025-11-21 12:49 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:06 PM, Tao Tang wrote:
> The ARMSecuritySpace enum and its related helpers were defined in the
> target-specific header target/arm/cpu.h. This prevented common,
> target-agnostic code like the SMMU model from using these definitions
> without triggering "cpu.h included from common code" errors.
>
> To resolve this, this commit introduces a new, lightweight header,
> include/hw/arm/arm-security.h, which is safe for inclusion by common
> code.
>
> The following change was made:
>
> - The ARMSecuritySpace enum and the arm_space_is_secure() and
> arm_secure_to_space() helpers have been moved from target/arm/cpu.h
> to the new hw/arm/arm-security.h header.
>
> This refactoring decouples the security state definitions from the core
> CPU implementation, allowing common hardware models to correctly handle
> security states without pulling in heavyweight, target-specific headers.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
nit: the commit msg prefix is unusual (refactor). I would rename into
target/arm: Move ARMSecuritySpace to a common header
Eric
* [RFC v3 05/21] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (3 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 04/21] refactor: Move ARMSecuritySpace to a common header Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-11-21 13:02 ` Eric Auger
2025-10-12 15:06 ` [RFC v3 06/21] hw/arm/smmuv3: Thread SEC_SID through helper APIs Tao Tang
` (15 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Rework the SMMUv3 state management by introducing a banked register
structure. This is a purely mechanical refactoring with no functional
changes.
To support multiple security states, a new enum, SMMUSecSID, is
introduced to identify each state, sticking to the spec terminology.
A new structure, SMMUv3RegBank, is then defined to hold the state
for a single security context. The main SMMUv3State now contains an
array of these banks, indexed by SMMUSecSID. This avoids the need for
separate fields for non-secure and future secure registers.
All existing code, which handles only the Non-secure state, is updated
to access its state via s->bank[SMMU_SEC_SID_NS]. A local bank helper
pointer is used where it improves readability.
Function signatures and logic remain untouched in this commit to
isolate the structural changes and simplify review. This is the
foundational step for building multi-security-state support.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3-internal.h | 24 ++-
hw/arm/smmuv3.c | 344 +++++++++++++++++++----------------
include/hw/arm/smmu-common.h | 6 +
include/hw/arm/smmuv3.h | 38 +++-
4 files changed, 239 insertions(+), 173 deletions(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index e420c5dc72..858bc206a2 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -248,7 +248,9 @@ REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
static inline int smmu_enabled(SMMUv3State *s)
{
- return FIELD_EX32(s->cr[0], CR0, SMMUEN);
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ return FIELD_EX32(bank->cr[0], CR0, SMMUEN);
}
/* Command Queue Entry */
@@ -276,12 +278,16 @@ static inline uint32_t smmuv3_idreg(int regoffset)
static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
{
- return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
}
static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
{
- return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
}
/* Queue Handling */
@@ -326,17 +332,23 @@ static inline void queue_cons_incr(SMMUQueue *q)
static inline bool smmuv3_cmdq_enabled(SMMUv3State *s)
{
- return FIELD_EX32(s->cr[0], CR0, CMDQEN);
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ return FIELD_EX32(bank->cr[0], CR0, CMDQEN);
}
static inline bool smmuv3_eventq_enabled(SMMUv3State *s)
{
- return FIELD_EX32(s->cr[0], CR0, EVENTQEN);
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ return FIELD_EX32(bank->cr[0], CR0, EVENTQEN);
}
static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
{
- s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ bank->cmdq.cons = FIELD_DP32(bank->cmdq.cons, CMDQ_CONS, ERR, err_type);
}
/* Commands */
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index bcf8af8dc7..9c085ac678 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -50,6 +50,8 @@
static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
uint32_t gerror_mask)
{
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
bool pulse = false;
@@ -65,15 +67,15 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
break;
case SMMU_IRQ_GERROR:
{
- uint32_t pending = s->gerror ^ s->gerrorn;
+ uint32_t pending = bank->gerror ^ bank->gerrorn;
uint32_t new_gerrors = ~pending & gerror_mask;
if (!new_gerrors) {
/* only toggle non pending errors */
return;
}
- s->gerror ^= new_gerrors;
- trace_smmuv3_write_gerror(new_gerrors, s->gerror);
+ bank->gerror ^= new_gerrors;
+ trace_smmuv3_write_gerror(new_gerrors, bank->gerror);
pulse = smmuv3_gerror_irq_enabled(s);
break;
@@ -87,8 +89,10 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
{
- uint32_t pending = s->gerror ^ s->gerrorn;
- uint32_t toggled = s->gerrorn ^ new_gerrorn;
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ uint32_t pending = bank->gerror ^ bank->gerrorn;
+ uint32_t toggled = bank->gerrorn ^ new_gerrorn;
if (toggled & ~pending) {
qemu_log_mask(LOG_GUEST_ERROR,
@@ -100,9 +104,9 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
* We do not raise any error in case guest toggles bits corresponding
* to not active IRQs (CONSTRAINED UNPREDICTABLE)
*/
- s->gerrorn = new_gerrorn;
+ bank->gerrorn = new_gerrorn;
- trace_smmuv3_write_gerrorn(toggled & pending, s->gerrorn);
+ trace_smmuv3_write_gerrorn(toggled & pending, bank->gerrorn);
}
static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
@@ -144,7 +148,9 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
{
- SMMUQueue *q = &s->eventq;
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ SMMUQueue *q = &bank->eventq;
MemTxResult r;
if (!smmuv3_eventq_enabled(s)) {
@@ -259,64 +265,66 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
static void smmuv3_init_regs(SMMUv3State *s)
{
+ SMMUv3RegBank *bk = smmuv3_bank_ns(s);
+
/* Based on sys property, the stages supported in smmu will be advertised.*/
if (s->stage && !strcmp("2", s->stage)) {
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, S2P, 1);
} else if (s->stage && !strcmp("nested", s->stage)) {
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, S1P, 1);
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, S2P, 1);
} else {
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, S1P, 1);
}
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, COHACC, 1); /* IO coherent */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTENDIAN, 2); /* little endian */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, COHACC, 1); /* IO coherent */
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, TTENDIAN, 2); /* little endian */
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
/* terminated transaction will always be aborted/error returned */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TERM_MODEL, 1);
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, TERM_MODEL, 1);
/* 2-level stream table supported */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STLEVEL, 1);
+ bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, STLEVEL, 1);
- s->idr[1] = FIELD_DP32(s->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
- s->idr[1] = FIELD_DP32(s->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
- s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS, SMMU_CMDQS);
+ bk->idr[1] = FIELD_DP32(bk->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
+ bk->idr[1] = FIELD_DP32(bk->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
+ bk->idr[1] = FIELD_DP32(bk->idr[1], IDR1, CMDQS, SMMU_CMDQS);
- s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
- if (FIELD_EX32(s->idr[0], IDR0, S2P)) {
+ bk->idr[3] = FIELD_DP32(bk->idr[3], IDR3, HAD, 1);
+ if (FIELD_EX32(bk->idr[0], IDR0, S2P)) {
/* XNX is a stage-2-specific feature */
- s->idr[3] = FIELD_DP32(s->idr[3], IDR3, XNX, 1);
+ bk->idr[3] = FIELD_DP32(bk->idr[3], IDR3, XNX, 1);
}
- s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
- s->idr[3] = FIELD_DP32(s->idr[3], IDR3, BBML, 2);
+ bk->idr[3] = FIELD_DP32(bk->idr[3], IDR3, RIL, 1);
+ bk->idr[3] = FIELD_DP32(bk->idr[3], IDR3, BBML, 2);
- s->idr[5] = FIELD_DP32(s->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
+ bk->idr[5] = FIELD_DP32(bk->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
/* 4K, 16K and 64K granule support */
- s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
- s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN16K, 1);
- s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
-
- s->cmdq.base = deposit64(s->cmdq.base, 0, 5, SMMU_CMDQS);
- s->cmdq.prod = 0;
- s->cmdq.cons = 0;
- s->cmdq.entry_size = sizeof(struct Cmd);
- s->eventq.base = deposit64(s->eventq.base, 0, 5, SMMU_EVENTQS);
- s->eventq.prod = 0;
- s->eventq.cons = 0;
- s->eventq.entry_size = sizeof(struct Evt);
-
- s->features = 0;
- s->sid_split = 0;
+ bk->idr[5] = FIELD_DP32(bk->idr[5], IDR5, GRAN4K, 1);
+ bk->idr[5] = FIELD_DP32(bk->idr[5], IDR5, GRAN16K, 1);
+ bk->idr[5] = FIELD_DP32(bk->idr[5], IDR5, GRAN64K, 1);
+
+ bk->cmdq.base = deposit64(bk->cmdq.base, 0, 5, SMMU_CMDQS);
+ bk->cmdq.prod = 0;
+ bk->cmdq.cons = 0;
+ bk->cmdq.entry_size = sizeof(struct Cmd);
+ bk->eventq.base = deposit64(bk->eventq.base, 0, 5, SMMU_EVENTQS);
+ bk->eventq.prod = 0;
+ bk->eventq.cons = 0;
+ bk->eventq.entry_size = sizeof(struct Evt);
+
+ bk->features = 0;
+ bk->sid_split = 0;
s->aidr = 0x1;
- s->cr[0] = 0;
- s->cr0ack = 0;
- s->irq_ctrl = 0;
- s->gerror = 0;
- s->gerrorn = 0;
+ bk->cr[0] = 0;
+ bk->cr0ack = 0;
+ bk->irq_ctrl = 0;
+ bk->gerror = 0;
+ bk->gerrorn = 0;
s->statusr = 0;
- s->gbpa = SMMU_GBPA_RESET_VAL;
+ bk->gbpa = SMMU_GBPA_RESET_VAL;
}
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
@@ -430,7 +438,7 @@ static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
static int decode_ste_s2_cfg(SMMUv3State *s, SMMUTransCfg *cfg,
STE *ste)
{
- uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
+ uint8_t oas = FIELD_EX32(smmuv3_bank_ns(s)->idr[5], IDR5, OAS);
if (STE_S2AA64(ste) == 0x0) {
qemu_log_mask(LOG_UNIMP,
@@ -548,7 +556,8 @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
STE *ste, SMMUEventInfo *event)
{
uint32_t config;
- uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
+ /* OAS is shared between S and NS and only present on NS-IDR5 */
+ uint8_t oas = FIELD_EX32(smmuv3_bank_ns(s)->idr[5], IDR5, OAS);
int ret;
if (!STE_VALID(ste)) {
@@ -636,9 +645,11 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
uint32_t log2size;
int strtab_size_shift;
int ret;
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
- trace_smmuv3_find_ste(sid, s->features, s->sid_split);
- log2size = FIELD_EX32(s->strtab_base_cfg, STRTAB_BASE_CFG, LOG2SIZE);
+ trace_smmuv3_find_ste(sid, bank->features, bank->sid_split);
+ log2size = FIELD_EX32(bank->strtab_base_cfg, STRTAB_BASE_CFG, LOG2SIZE);
/*
* Check SID range against both guest-configured and implementation limits
*/
@@ -646,7 +657,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
event->type = SMMU_EVT_C_BAD_STREAMID;
return -EINVAL;
}
- if (s->features & SMMU_FEATURE_2LVL_STE) {
+ if (bank->features & SMMU_FEATURE_2LVL_STE) {
int l1_ste_offset, l2_ste_offset, max_l2_ste, span, i;
dma_addr_t l1ptr, l2ptr;
STEDesc l1std;
@@ -655,11 +666,11 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
* Align strtab base address to table size. For this purpose, assume it
* is not bounded by SMMU_IDR1_SIDSIZE.
*/
- strtab_size_shift = MAX(5, (int)log2size - s->sid_split - 1 + 3);
- strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
+ strtab_size_shift = MAX(5, (int)log2size - bank->sid_split - 1 + 3);
+ strtab_base = bank->strtab_base & SMMU_BASE_ADDR_MASK &
~MAKE_64BIT_MASK(0, strtab_size_shift);
- l1_ste_offset = sid >> s->sid_split;
- l2_ste_offset = sid & ((1 << s->sid_split) - 1);
+ l1_ste_offset = sid >> bank->sid_split;
+ l2_ste_offset = sid & ((1 << bank->sid_split) - 1);
l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
/* TODO: guarantee 64-bit single-copy atomicity */
ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
@@ -688,7 +699,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
}
max_l2_ste = (1 << span) - 1;
l2ptr = l1std_l2ptr(&l1std);
- trace_smmuv3_find_ste_2lvl(s->strtab_base, l1ptr, l1_ste_offset,
+ trace_smmuv3_find_ste_2lvl(bank->strtab_base, l1ptr, l1_ste_offset,
l2ptr, l2_ste_offset, max_l2_ste);
if (l2_ste_offset > max_l2_ste) {
qemu_log_mask(LOG_GUEST_ERROR,
@@ -700,7 +711,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
addr = l2ptr + l2_ste_offset * sizeof(*ste);
} else {
strtab_size_shift = log2size + 5;
- strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
+ strtab_base = bank->strtab_base & SMMU_BASE_ADDR_MASK &
~MAKE_64BIT_MASK(0, strtab_size_shift);
addr = strtab_base + sid * sizeof(*ste);
}
@@ -719,7 +730,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
int i;
SMMUTranslationStatus status;
SMMUTLBEntry *entry;
- uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
+ uint8_t oas = FIELD_EX32(smmuv3_bank_ns(s)->idr[5], IDR5, OAS);
if (!CD_VALID(cd) || !CD_AARCH64(cd)) {
goto bad_cd;
@@ -1041,6 +1052,8 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
SMMUv3State *s = sdev->smmu;
uint32_t sid = smmu_get_sid(sdev);
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
SMMUEventInfo event = {.type = SMMU_EVT_NONE,
.sid = sid,
.inval_ste_allowed = false};
@@ -1058,7 +1071,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
qemu_mutex_lock(&s->mutex);
if (!smmu_enabled(s)) {
- if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
+ if (FIELD_EX32(bank->gbpa, GBPA, ABORT)) {
status = SMMU_TRANS_ABORT;
} else {
status = SMMU_TRANS_DISABLE;
@@ -1282,7 +1295,9 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
{
SMMUState *bs = ARM_SMMU(s);
SMMUCmdError cmd_error = SMMU_CERROR_NONE;
- SMMUQueue *q = &s->cmdq;
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
+ SMMUQueue *q = &bank->cmdq;
SMMUCommandType type = 0;
if (!smmuv3_cmdq_enabled(s)) {
@@ -1296,7 +1311,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
*/
while (!smmuv3_q_empty(q)) {
- uint32_t pending = s->gerror ^ s->gerrorn;
+ uint32_t pending = bank->gerror ^ bank->gerrorn;
Cmd cmd;
trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
@@ -1511,29 +1526,32 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
uint64_t data, MemTxAttrs attrs)
{
+ SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
+
switch (offset) {
case A_GERROR_IRQ_CFG0:
- s->gerror_irq_cfg0 = data;
+ bank->gerror_irq_cfg0 = data;
return MEMTX_OK;
case A_STRTAB_BASE:
- s->strtab_base = data;
+ bank->strtab_base = data;
return MEMTX_OK;
case A_CMDQ_BASE:
- s->cmdq.base = data;
- s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
- if (s->cmdq.log2size > SMMU_CMDQS) {
- s->cmdq.log2size = SMMU_CMDQS;
+ bank->cmdq.base = data;
+ bank->cmdq.log2size = extract64(bank->cmdq.base, 0, 5);
+ if (bank->cmdq.log2size > SMMU_CMDQS) {
+ bank->cmdq.log2size = SMMU_CMDQS;
}
return MEMTX_OK;
case A_EVENTQ_BASE:
- s->eventq.base = data;
- s->eventq.log2size = extract64(s->eventq.base, 0, 5);
- if (s->eventq.log2size > SMMU_EVENTQS) {
- s->eventq.log2size = SMMU_EVENTQS;
+ bank->eventq.base = data;
+ bank->eventq.log2size = extract64(bank->eventq.base, 0, 5);
+ if (bank->eventq.log2size > SMMU_EVENTQS) {
+ bank->eventq.log2size = SMMU_EVENTQS;
}
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0:
- s->eventq_irq_cfg0 = data;
+ bank->eventq_irq_cfg0 = data;
return MEMTX_OK;
default:
qemu_log_mask(LOG_UNIMP,
@@ -1546,21 +1564,24 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
uint64_t data, MemTxAttrs attrs)
{
+ SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
+
switch (offset) {
case A_CR0:
- s->cr[0] = data;
- s->cr0ack = data & ~SMMU_CR0_RESERVED;
+ bank->cr[0] = data;
+ bank->cr0ack = data & ~SMMU_CR0_RESERVED;
/* in case the command queue has been enabled */
smmuv3_cmdq_consume(s);
return MEMTX_OK;
case A_CR1:
- s->cr[1] = data;
+ bank->cr[1] = data;
return MEMTX_OK;
case A_CR2:
- s->cr[2] = data;
+ bank->cr[2] = data;
return MEMTX_OK;
case A_IRQ_CTRL:
- s->irq_ctrl = data;
+ bank->irq_ctrl = data;
return MEMTX_OK;
case A_GERRORN:
smmuv3_write_gerrorn(s, data);
@@ -1571,16 +1592,16 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
smmuv3_cmdq_consume(s);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
- s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 0, 32, data);
+ bank->gerror_irq_cfg0 = deposit64(bank->gerror_irq_cfg0, 0, 32, data);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0 + 4:
- s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 32, 32, data);
+ bank->gerror_irq_cfg0 = deposit64(bank->gerror_irq_cfg0, 32, 32, data);
return MEMTX_OK;
case A_GERROR_IRQ_CFG1:
- s->gerror_irq_cfg1 = data;
+ bank->gerror_irq_cfg1 = data;
return MEMTX_OK;
case A_GERROR_IRQ_CFG2:
- s->gerror_irq_cfg2 = data;
+ bank->gerror_irq_cfg2 = data;
return MEMTX_OK;
case A_GBPA:
/*
@@ -1589,66 +1610,66 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
*/
if (data & R_GBPA_UPDATE_MASK) {
/* Ignore update bit as write is synchronous. */
- s->gbpa = data & ~R_GBPA_UPDATE_MASK;
+ bank->gbpa = data & ~R_GBPA_UPDATE_MASK;
}
return MEMTX_OK;
case A_STRTAB_BASE: /* 64b */
- s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
+ bank->strtab_base = deposit64(bank->strtab_base, 0, 32, data);
return MEMTX_OK;
case A_STRTAB_BASE + 4:
- s->strtab_base = deposit64(s->strtab_base, 32, 32, data);
+ bank->strtab_base = deposit64(bank->strtab_base, 32, 32, data);
return MEMTX_OK;
case A_STRTAB_BASE_CFG:
- s->strtab_base_cfg = data;
+ bank->strtab_base_cfg = data;
if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
- s->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
- s->features |= SMMU_FEATURE_2LVL_STE;
+ bank->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
+ bank->features |= SMMU_FEATURE_2LVL_STE;
}
return MEMTX_OK;
case A_CMDQ_BASE: /* 64b */
- s->cmdq.base = deposit64(s->cmdq.base, 0, 32, data);
- s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
- if (s->cmdq.log2size > SMMU_CMDQS) {
- s->cmdq.log2size = SMMU_CMDQS;
+ bank->cmdq.base = deposit64(bank->cmdq.base, 0, 32, data);
+ bank->cmdq.log2size = extract64(bank->cmdq.base, 0, 5);
+ if (bank->cmdq.log2size > SMMU_CMDQS) {
+ bank->cmdq.log2size = SMMU_CMDQS;
}
return MEMTX_OK;
case A_CMDQ_BASE + 4: /* 64b */
- s->cmdq.base = deposit64(s->cmdq.base, 32, 32, data);
+ bank->cmdq.base = deposit64(bank->cmdq.base, 32, 32, data);
return MEMTX_OK;
case A_CMDQ_PROD:
- s->cmdq.prod = data;
+ bank->cmdq.prod = data;
smmuv3_cmdq_consume(s);
return MEMTX_OK;
case A_CMDQ_CONS:
- s->cmdq.cons = data;
+ bank->cmdq.cons = data;
return MEMTX_OK;
case A_EVENTQ_BASE: /* 64b */
- s->eventq.base = deposit64(s->eventq.base, 0, 32, data);
- s->eventq.log2size = extract64(s->eventq.base, 0, 5);
- if (s->eventq.log2size > SMMU_EVENTQS) {
- s->eventq.log2size = SMMU_EVENTQS;
+ bank->eventq.base = deposit64(bank->eventq.base, 0, 32, data);
+ bank->eventq.log2size = extract64(bank->eventq.base, 0, 5);
+ if (bank->eventq.log2size > SMMU_EVENTQS) {
+ bank->eventq.log2size = SMMU_EVENTQS;
}
return MEMTX_OK;
case A_EVENTQ_BASE + 4:
- s->eventq.base = deposit64(s->eventq.base, 32, 32, data);
+ bank->eventq.base = deposit64(bank->eventq.base, 32, 32, data);
return MEMTX_OK;
case A_EVENTQ_PROD:
- s->eventq.prod = data;
+ bank->eventq.prod = data;
return MEMTX_OK;
case A_EVENTQ_CONS:
- s->eventq.cons = data;
+ bank->eventq.cons = data;
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0: /* 64b */
- s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 0, 32, data);
+ bank->eventq_irq_cfg0 = deposit64(bank->eventq_irq_cfg0, 0, 32, data);
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0 + 4:
- s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 32, 32, data);
+ bank->eventq_irq_cfg0 = deposit64(bank->eventq_irq_cfg0, 32, 32, data);
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG1:
- s->eventq_irq_cfg1 = data;
+ bank->eventq_irq_cfg1 = data;
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG2:
- s->eventq_irq_cfg2 = data;
+ bank->eventq_irq_cfg2 = data;
return MEMTX_OK;
default:
qemu_log_mask(LOG_UNIMP,
@@ -1687,18 +1708,21 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
uint64_t *data, MemTxAttrs attrs)
{
+ SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
+
switch (offset) {
case A_GERROR_IRQ_CFG0:
- *data = s->gerror_irq_cfg0;
+ *data = bank->gerror_irq_cfg0;
return MEMTX_OK;
case A_STRTAB_BASE:
- *data = s->strtab_base;
+ *data = bank->strtab_base;
return MEMTX_OK;
case A_CMDQ_BASE:
- *data = s->cmdq.base;
+ *data = bank->cmdq.base;
return MEMTX_OK;
case A_EVENTQ_BASE:
- *data = s->eventq.base;
+ *data = bank->eventq.base;
return MEMTX_OK;
default:
*data = 0;
@@ -1712,12 +1736,15 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
uint64_t *data, MemTxAttrs attrs)
{
+ SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
+ SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
+
switch (offset) {
case A_IDREGS ... A_IDREGS + 0x2f:
*data = smmuv3_idreg(offset - A_IDREGS);
return MEMTX_OK;
case A_IDR0 ... A_IDR5:
- *data = s->idr[(offset - A_IDR0) / 4];
+ *data = bank->idr[(offset - A_IDR0) / 4];
return MEMTX_OK;
case A_IIDR:
*data = s->iidr;
@@ -1726,77 +1753,77 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
*data = s->aidr;
return MEMTX_OK;
case A_CR0:
- *data = s->cr[0];
+ *data = bank->cr[0];
return MEMTX_OK;
case A_CR0ACK:
- *data = s->cr0ack;
+ *data = bank->cr0ack;
return MEMTX_OK;
case A_CR1:
- *data = s->cr[1];
+ *data = bank->cr[1];
return MEMTX_OK;
case A_CR2:
- *data = s->cr[2];
+ *data = bank->cr[2];
return MEMTX_OK;
case A_STATUSR:
*data = s->statusr;
return MEMTX_OK;
case A_GBPA:
- *data = s->gbpa;
+ *data = bank->gbpa;
return MEMTX_OK;
case A_IRQ_CTRL:
case A_IRQ_CTRL_ACK:
- *data = s->irq_ctrl;
+ *data = bank->irq_ctrl;
return MEMTX_OK;
case A_GERROR:
- *data = s->gerror;
+ *data = bank->gerror;
return MEMTX_OK;
case A_GERRORN:
- *data = s->gerrorn;
+ *data = bank->gerrorn;
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
- *data = extract64(s->gerror_irq_cfg0, 0, 32);
+ *data = extract64(bank->gerror_irq_cfg0, 0, 32);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0 + 4:
- *data = extract64(s->gerror_irq_cfg0, 32, 32);
+ *data = extract64(bank->gerror_irq_cfg0, 32, 32);
return MEMTX_OK;
case A_GERROR_IRQ_CFG1:
- *data = s->gerror_irq_cfg1;
+ *data = bank->gerror_irq_cfg1;
return MEMTX_OK;
case A_GERROR_IRQ_CFG2:
- *data = s->gerror_irq_cfg2;
+ *data = bank->gerror_irq_cfg2;
return MEMTX_OK;
case A_STRTAB_BASE: /* 64b */
- *data = extract64(s->strtab_base, 0, 32);
+ *data = extract64(bank->strtab_base, 0, 32);
return MEMTX_OK;
case A_STRTAB_BASE + 4: /* 64b */
- *data = extract64(s->strtab_base, 32, 32);
+ *data = extract64(bank->strtab_base, 32, 32);
return MEMTX_OK;
case A_STRTAB_BASE_CFG:
- *data = s->strtab_base_cfg;
+ *data = bank->strtab_base_cfg;
return MEMTX_OK;
case A_CMDQ_BASE: /* 64b */
- *data = extract64(s->cmdq.base, 0, 32);
+ *data = extract64(bank->cmdq.base, 0, 32);
return MEMTX_OK;
case A_CMDQ_BASE + 4:
- *data = extract64(s->cmdq.base, 32, 32);
+ *data = extract64(bank->cmdq.base, 32, 32);
return MEMTX_OK;
case A_CMDQ_PROD:
- *data = s->cmdq.prod;
+ *data = bank->cmdq.prod;
return MEMTX_OK;
case A_CMDQ_CONS:
- *data = s->cmdq.cons;
+ *data = bank->cmdq.cons;
return MEMTX_OK;
case A_EVENTQ_BASE: /* 64b */
- *data = extract64(s->eventq.base, 0, 32);
+ *data = extract64(bank->eventq.base, 0, 32);
return MEMTX_OK;
case A_EVENTQ_BASE + 4: /* 64b */
- *data = extract64(s->eventq.base, 32, 32);
+ *data = extract64(bank->eventq.base, 32, 32);
return MEMTX_OK;
case A_EVENTQ_PROD:
- *data = s->eventq.prod;
+ *data = bank->eventq.prod;
return MEMTX_OK;
case A_EVENTQ_CONS:
- *data = s->eventq.cons;
+ *data = bank->eventq.cons;
return MEMTX_OK;
default:
*data = 0;
@@ -1916,9 +1943,10 @@ static const VMStateDescription vmstate_smmuv3_queue = {
static bool smmuv3_gbpa_needed(void *opaque)
{
SMMUv3State *s = opaque;
+ SMMUv3RegBank *bank = smmuv3_bank_ns(s);
/* Only migrate GBPA if it has different reset value. */
- return s->gbpa != SMMU_GBPA_RESET_VAL;
+ return bank->gbpa != SMMU_GBPA_RESET_VAL;
}
static const VMStateDescription vmstate_gbpa = {
@@ -1927,7 +1955,7 @@ static const VMStateDescription vmstate_gbpa = {
.minimum_version_id = 1,
.needed = smmuv3_gbpa_needed,
.fields = (const VMStateField[]) {
- VMSTATE_UINT32(gbpa, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gbpa, SMMUv3State),
VMSTATE_END_OF_LIST()
}
};
@@ -1938,27 +1966,29 @@ static const VMStateDescription vmstate_smmuv3 = {
.minimum_version_id = 1,
.priority = MIG_PRI_IOMMU,
.fields = (const VMStateField[]) {
- VMSTATE_UINT32(features, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].features, SMMUv3State),
VMSTATE_UINT8(sid_size, SMMUv3State),
- VMSTATE_UINT8(sid_split, SMMUv3State),
+ VMSTATE_UINT8(bank[SMMU_SEC_SID_NS].sid_split, SMMUv3State),
- VMSTATE_UINT32_ARRAY(cr, SMMUv3State, 3),
- VMSTATE_UINT32(cr0ack, SMMUv3State),
+ VMSTATE_UINT32_ARRAY(bank[SMMU_SEC_SID_NS].cr, SMMUv3State, 3),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].cr0ack, SMMUv3State),
VMSTATE_UINT32(statusr, SMMUv3State),
- VMSTATE_UINT32(irq_ctrl, SMMUv3State),
- VMSTATE_UINT32(gerror, SMMUv3State),
- VMSTATE_UINT32(gerrorn, SMMUv3State),
- VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3State),
- VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3State),
- VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3State),
- VMSTATE_UINT64(strtab_base, SMMUv3State),
- VMSTATE_UINT32(strtab_base_cfg, SMMUv3State),
- VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3State),
- VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3State),
- VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3State),
-
- VMSTATE_STRUCT(cmdq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
- VMSTATE_STRUCT(eventq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].irq_ctrl, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gerror, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gerrorn, SMMUv3State),
+ VMSTATE_UINT64(bank[SMMU_SEC_SID_NS].gerror_irq_cfg0, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gerror_irq_cfg1, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gerror_irq_cfg2, SMMUv3State),
+ VMSTATE_UINT64(bank[SMMU_SEC_SID_NS].strtab_base, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].strtab_base_cfg, SMMUv3State),
+ VMSTATE_UINT64(bank[SMMU_SEC_SID_NS].eventq_irq_cfg0, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].eventq_irq_cfg1, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].eventq_irq_cfg2, SMMUv3State),
+
+ VMSTATE_STRUCT(bank[SMMU_SEC_SID_NS].cmdq, SMMUv3State, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_STRUCT(bank[SMMU_SEC_SID_NS].eventq, SMMUv3State, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
VMSTATE_END_OF_LIST(),
},
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index 80d0fecfde..2dd6cfa895 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -40,6 +40,12 @@
#define CACHED_ENTRY_TO_ADDR(ent, addr) ((ent)->entry.translated_addr + \
((addr) & (ent)->entry.addr_mask))
+/* StreamID Security state */
+typedef enum SMMUSecSID {
+ SMMU_SEC_SID_NS = 0,
+ SMMU_SEC_SID_NUM,
+} SMMUSecSID;
+
/*
* Page table walk error types
*/
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index d183a62766..e9012fcdb0 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -32,19 +32,13 @@ typedef struct SMMUQueue {
uint8_t log2size;
} SMMUQueue;
-struct SMMUv3State {
- SMMUState smmu_state;
-
+typedef struct SMMUv3RegBank {
uint32_t features;
- uint8_t sid_size;
uint8_t sid_split;
uint32_t idr[6];
- uint32_t iidr;
- uint32_t aidr;
uint32_t cr[3];
uint32_t cr0ack;
- uint32_t statusr;
uint32_t gbpa;
uint32_t irq_ctrl;
uint32_t gerror;
@@ -58,7 +52,19 @@ struct SMMUv3State {
uint32_t eventq_irq_cfg1;
uint32_t eventq_irq_cfg2;
- SMMUQueue eventq, cmdq;
+ SMMUQueue eventq;
+ SMMUQueue cmdq;
+} SMMUv3RegBank;
+
+struct SMMUv3State {
+ SMMUState smmu_state;
+
+ uint8_t sid_size;
+ uint32_t iidr;
+ uint32_t aidr;
+ uint32_t statusr;
+
+ SMMUv3RegBank bank[SMMU_SEC_SID_NUM];
qemu_irq irq[4];
QemuMutex mutex;
@@ -84,7 +90,19 @@ struct SMMUv3Class {
#define TYPE_ARM_SMMUV3 "arm-smmuv3"
OBJECT_DECLARE_TYPE(SMMUv3State, SMMUv3Class, ARM_SMMUV3)
-#define STAGE1_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S1P)
-#define STAGE2_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S2P)
+#define STAGE1_SUPPORTED(s) \
+ FIELD_EX32((s)->bank[SMMU_SEC_SID_NS].idr[0], IDR0, S1P)
+#define STAGE2_SUPPORTED(s) \
+ FIELD_EX32((s)->bank[SMMU_SEC_SID_NS].idr[0], IDR0, S2P)
+
+static inline SMMUv3RegBank *smmuv3_bank(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ return &s->bank[sec_sid];
+}
+
+static inline SMMUv3RegBank *smmuv3_bank_ns(SMMUv3State *s)
+{
+ return smmuv3_bank(s, SMMU_SEC_SID_NS);
+}
#endif
--
2.34.1
* Re: [RFC v3 05/21] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
2025-10-12 15:06 ` [RFC v3 05/21] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state Tao Tang
@ 2025-11-21 13:02 ` Eric Auger
2025-11-23 9:28 ` [RESEND RFC " Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-11-21 13:02 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 5:06 PM, Tao Tang wrote:
> Rework the SMMUv3 state management by introducing a banked register
> structure. This is a purely mechanical refactoring with no functional
> changes.
>
> To support multiple security states, a new enum, SMMUSecSID, is
> introduced to identify each state, sticking to the spec terminology.
>
> A new structure, SMMUv3RegBank, is then defined to hold the state
> for a single security context. The main SMMUv3State now contains an
> array of these banks, indexed by SMMUSecSID. This avoids the need for
> separate fields for non-secure and future secure registers.
>
> All existing code, which handles only the Non-secure state, is updated
> to access its state via s->bank[SMMU_SEC_SID_NS]. A local bank helper
> pointer is used where it improves readability.
>
> Function signatures and logic remain untouched in this commit to
> isolate the structural changes and simplify review. This is the
> foundational step for building multi-security-state support.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3-internal.h | 24 ++-
> hw/arm/smmuv3.c | 344 +++++++++++++++++++----------------
> include/hw/arm/smmu-common.h | 6 +
> include/hw/arm/smmuv3.h | 38 +++-
> 4 files changed, 239 insertions(+), 173 deletions(-)
>
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index e420c5dc72..858bc206a2 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -248,7 +248,9 @@ REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
>
> static inline int smmu_enabled(SMMUv3State *s)
> {
> - return FIELD_EX32(s->cr[0], CR0, SMMUEN);
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> + return FIELD_EX32(bank->cr[0], CR0, SMMUEN);
> }
>
> /* Command Queue Entry */
> @@ -276,12 +278,16 @@ static inline uint32_t smmuv3_idreg(int regoffset)
>
> static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
> {
> - return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
Why aren't you using smmuv3_bank_ns(s) here and elsewhere? Some other
functions already use it.
Is it to reduce the diffstat in subsequent patches?
> + return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
> }
>
> static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
> {
> - return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> + return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
> }
>
> /* Queue Handling */
> @@ -326,17 +332,23 @@ static inline void queue_cons_incr(SMMUQueue *q)
>
> static inline bool smmuv3_cmdq_enabled(SMMUv3State *s)
> {
> - return FIELD_EX32(s->cr[0], CR0, CMDQEN);
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> + return FIELD_EX32(bank->cr[0], CR0, CMDQEN);
> }
>
> static inline bool smmuv3_eventq_enabled(SMMUv3State *s)
> {
> - return FIELD_EX32(s->cr[0], CR0, EVENTQEN);
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> + return FIELD_EX32(bank->cr[0], CR0, EVENTQEN);
> }
>
> static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
> {
> - s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> + bank->cmdq.cons = FIELD_DP32(bank->cmdq.cons, CMDQ_CONS, ERR, err_type);
> }
>
> /* Commands */
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index bcf8af8dc7..9c085ac678 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -50,6 +50,8 @@
> static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
> uint32_t gerror_mask)
> {
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>
> bool pulse = false;
>
> @@ -65,15 +67,15 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
> break;
> case SMMU_IRQ_GERROR:
> {
> - uint32_t pending = s->gerror ^ s->gerrorn;
> + uint32_t pending = bank->gerror ^ bank->gerrorn;
> uint32_t new_gerrors = ~pending & gerror_mask;
>
> if (!new_gerrors) {
> /* only toggle non pending errors */
> return;
> }
> - s->gerror ^= new_gerrors;
> - trace_smmuv3_write_gerror(new_gerrors, s->gerror);
> + bank->gerror ^= new_gerrors;
> + trace_smmuv3_write_gerror(new_gerrors, bank->gerror);
>
> pulse = smmuv3_gerror_irq_enabled(s);
> break;
> @@ -87,8 +89,10 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
>
> static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
> {
> - uint32_t pending = s->gerror ^ s->gerrorn;
> - uint32_t toggled = s->gerrorn ^ new_gerrorn;
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> + uint32_t pending = bank->gerror ^ bank->gerrorn;
> + uint32_t toggled = bank->gerrorn ^ new_gerrorn;
>
> if (toggled & ~pending) {
> qemu_log_mask(LOG_GUEST_ERROR,
> @@ -100,9 +104,9 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
> * We do not raise any error in case guest toggles bits corresponding
> * to not active IRQs (CONSTRAINED UNPREDICTABLE)
> */
> - s->gerrorn = new_gerrorn;
> + bank->gerrorn = new_gerrorn;
>
> - trace_smmuv3_write_gerrorn(toggled & pending, s->gerrorn);
> + trace_smmuv3_write_gerrorn(toggled & pending, bank->gerrorn);
> }
>
> static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
> @@ -144,7 +148,9 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
>
> static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
> {
> - SMMUQueue *q = &s->eventq;
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> + SMMUQueue *q = &bank->eventq;
> MemTxResult r;
>
> if (!smmuv3_eventq_enabled(s)) {
> @@ -259,64 +265,66 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
>
> static void smmuv3_init_regs(SMMUv3State *s)
> {
> + SMMUv3RegBank *bk = smmuv3_bank_ns(s);
> +
> /* Based on sys property, the stages supported in smmu will be advertised.*/
> if (s->stage && !strcmp("2", s->stage)) {
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, S2P, 1);
> } else if (s->stage && !strcmp("nested", s->stage)) {
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, S1P, 1);
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, S2P, 1);
> } else {
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, S1P, 1);
> }
>
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, COHACC, 1); /* IO coherent */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTENDIAN, 2); /* little endian */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, COHACC, 1); /* IO coherent */
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, TTENDIAN, 2); /* little endian */
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
> /* terminated transaction will always be aborted/error returned */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TERM_MODEL, 1);
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, TERM_MODEL, 1);
> /* 2-level stream table supported */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STLEVEL, 1);
> + bk->idr[0] = FIELD_DP32(bk->idr[0], IDR0, STLEVEL, 1);
>
> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS, SMMU_CMDQS);
> + bk->idr[1] = FIELD_DP32(bk->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
> + bk->idr[1] = FIELD_DP32(bk->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
> + bk->idr[1] = FIELD_DP32(bk->idr[1], IDR1, CMDQS, SMMU_CMDQS);
>
> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
> - if (FIELD_EX32(s->idr[0], IDR0, S2P)) {
> + bk->idr[3] = FIELD_DP32(bk->idr[3], IDR3, HAD, 1);
> + if (FIELD_EX32(bk->idr[0], IDR0, S2P)) {
> /* XNX is a stage-2-specific feature */
> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, XNX, 1);
> + bk->idr[3] = FIELD_DP32(bk->idr[3], IDR3, XNX, 1);
> }
> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, BBML, 2);
> + bk->idr[3] = FIELD_DP32(bk->idr[3], IDR3, RIL, 1);
> + bk->idr[3] = FIELD_DP32(bk->idr[3], IDR3, BBML, 2);
>
> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
> + bk->idr[5] = FIELD_DP32(bk->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
> /* 4K, 16K and 64K granule support */
> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN16K, 1);
> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
> -
> - s->cmdq.base = deposit64(s->cmdq.base, 0, 5, SMMU_CMDQS);
> - s->cmdq.prod = 0;
> - s->cmdq.cons = 0;
> - s->cmdq.entry_size = sizeof(struct Cmd);
> - s->eventq.base = deposit64(s->eventq.base, 0, 5, SMMU_EVENTQS);
> - s->eventq.prod = 0;
> - s->eventq.cons = 0;
> - s->eventq.entry_size = sizeof(struct Evt);
> -
> - s->features = 0;
> - s->sid_split = 0;
> + bk->idr[5] = FIELD_DP32(bk->idr[5], IDR5, GRAN4K, 1);
> + bk->idr[5] = FIELD_DP32(bk->idr[5], IDR5, GRAN16K, 1);
> + bk->idr[5] = FIELD_DP32(bk->idr[5], IDR5, GRAN64K, 1);
> +
> + bk->cmdq.base = deposit64(bk->cmdq.base, 0, 5, SMMU_CMDQS);
> + bk->cmdq.prod = 0;
> + bk->cmdq.cons = 0;
> + bk->cmdq.entry_size = sizeof(struct Cmd);
> + bk->eventq.base = deposit64(bk->eventq.base, 0, 5, SMMU_EVENTQS);
> + bk->eventq.prod = 0;
> + bk->eventq.cons = 0;
> + bk->eventq.entry_size = sizeof(struct Evt);
> +
> + bk->features = 0;
> + bk->sid_split = 0;
> s->aidr = 0x1;
Maybe put the non-banked regs at the end to have a clear separation.
There is no ordering concern, I think.
> - s->cr[0] = 0;
> - s->cr0ack = 0;
> - s->irq_ctrl = 0;
> - s->gerror = 0;
> - s->gerrorn = 0;
> + bk->cr[0] = 0;
> + bk->cr0ack = 0;
> + bk->irq_ctrl = 0;
> + bk->gerror = 0;
> + bk->gerrorn = 0;
> s->statusr = 0;
> - s->gbpa = SMMU_GBPA_RESET_VAL;
> + bk->gbpa = SMMU_GBPA_RESET_VAL;
> }
>
> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
> @@ -430,7 +438,7 @@ static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
> static int decode_ste_s2_cfg(SMMUv3State *s, SMMUTransCfg *cfg,
> STE *ste)
> {
> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
> + uint8_t oas = FIELD_EX32(smmuv3_bank_ns(s)->idr[5], IDR5, OAS);
>
> if (STE_S2AA64(ste) == 0x0) {
> qemu_log_mask(LOG_UNIMP,
> @@ -548,7 +556,8 @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
> STE *ste, SMMUEventInfo *event)
> {
> uint32_t config;
> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
> + /* OAS is shared between S and NS and only present on NS-IDR5 */
I am not sure the comment belongs to this patch, as up to now we are just
converting the existing code.
> + uint8_t oas = FIELD_EX32(smmuv3_bank_ns(s)->idr[5], IDR5, OAS);
> int ret;
>
> if (!STE_VALID(ste)) {
> @@ -636,9 +645,11 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> uint32_t log2size;
> int strtab_size_shift;
> int ret;
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>
> - trace_smmuv3_find_ste(sid, s->features, s->sid_split);
> - log2size = FIELD_EX32(s->strtab_base_cfg, STRTAB_BASE_CFG, LOG2SIZE);
> + trace_smmuv3_find_ste(sid, bank->features, bank->sid_split);
> + log2size = FIELD_EX32(bank->strtab_base_cfg, STRTAB_BASE_CFG, LOG2SIZE);
> /*
> * Check SID range against both guest-configured and implementation limits
> */
> @@ -646,7 +657,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> event->type = SMMU_EVT_C_BAD_STREAMID;
> return -EINVAL;
> }
> - if (s->features & SMMU_FEATURE_2LVL_STE) {
> + if (bank->features & SMMU_FEATURE_2LVL_STE) {
> int l1_ste_offset, l2_ste_offset, max_l2_ste, span, i;
> dma_addr_t l1ptr, l2ptr;
> STEDesc l1std;
> @@ -655,11 +666,11 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> * Align strtab base address to table size. For this purpose, assume it
> * is not bounded by SMMU_IDR1_SIDSIZE.
> */
> - strtab_size_shift = MAX(5, (int)log2size - s->sid_split - 1 + 3);
> - strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
> + strtab_size_shift = MAX(5, (int)log2size - bank->sid_split - 1 + 3);
> + strtab_base = bank->strtab_base & SMMU_BASE_ADDR_MASK &
> ~MAKE_64BIT_MASK(0, strtab_size_shift);
> - l1_ste_offset = sid >> s->sid_split;
> - l2_ste_offset = sid & ((1 << s->sid_split) - 1);
> + l1_ste_offset = sid >> bank->sid_split;
> + l2_ste_offset = sid & ((1 << bank->sid_split) - 1);
> l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
> /* TODO: guarantee 64-bit single-copy atomicity */
> ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
> @@ -688,7 +699,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> }
> max_l2_ste = (1 << span) - 1;
> l2ptr = l1std_l2ptr(&l1std);
> - trace_smmuv3_find_ste_2lvl(s->strtab_base, l1ptr, l1_ste_offset,
> + trace_smmuv3_find_ste_2lvl(bank->strtab_base, l1ptr, l1_ste_offset,
> l2ptr, l2_ste_offset, max_l2_ste);
> if (l2_ste_offset > max_l2_ste) {
> qemu_log_mask(LOG_GUEST_ERROR,
> @@ -700,7 +711,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> addr = l2ptr + l2_ste_offset * sizeof(*ste);
> } else {
> strtab_size_shift = log2size + 5;
> - strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
> + strtab_base = bank->strtab_base & SMMU_BASE_ADDR_MASK &
> ~MAKE_64BIT_MASK(0, strtab_size_shift);
> addr = strtab_base + sid * sizeof(*ste);
> }
> @@ -719,7 +730,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
> int i;
> SMMUTranslationStatus status;
> SMMUTLBEntry *entry;
> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
> + uint8_t oas = FIELD_EX32(smmuv3_bank_ns(s)->idr[5], IDR5, OAS);
>
> if (!CD_VALID(cd) || !CD_AARCH64(cd)) {
> goto bad_cd;
> @@ -1041,6 +1052,8 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
> SMMUv3State *s = sdev->smmu;
> uint32_t sid = smmu_get_sid(sdev);
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
> .sid = sid,
> .inval_ste_allowed = false};
> @@ -1058,7 +1071,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> qemu_mutex_lock(&s->mutex);
>
> if (!smmu_enabled(s)) {
> - if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
> + if (FIELD_EX32(bank->gbpa, GBPA, ABORT)) {
> status = SMMU_TRANS_ABORT;
> } else {
> status = SMMU_TRANS_DISABLE;
> @@ -1282,7 +1295,9 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> {
> SMMUState *bs = ARM_SMMU(s);
> SMMUCmdError cmd_error = SMMU_CERROR_NONE;
> - SMMUQueue *q = &s->cmdq;
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> + SMMUQueue *q = &bank->cmdq;
> SMMUCommandType type = 0;
>
> if (!smmuv3_cmdq_enabled(s)) {
> @@ -1296,7 +1311,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> */
>
> while (!smmuv3_q_empty(q)) {
> - uint32_t pending = s->gerror ^ s->gerrorn;
> + uint32_t pending = bank->gerror ^ bank->gerrorn;
> Cmd cmd;
>
> trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
> @@ -1511,29 +1526,32 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> uint64_t data, MemTxAttrs attrs)
> {
> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> +
> switch (offset) {
> case A_GERROR_IRQ_CFG0:
> - s->gerror_irq_cfg0 = data;
> + bank->gerror_irq_cfg0 = data;
> return MEMTX_OK;
> case A_STRTAB_BASE:
> - s->strtab_base = data;
> + bank->strtab_base = data;
> return MEMTX_OK;
> case A_CMDQ_BASE:
> - s->cmdq.base = data;
> - s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
> - if (s->cmdq.log2size > SMMU_CMDQS) {
> - s->cmdq.log2size = SMMU_CMDQS;
> + bank->cmdq.base = data;
> + bank->cmdq.log2size = extract64(bank->cmdq.base, 0, 5);
> + if (bank->cmdq.log2size > SMMU_CMDQS) {
> + bank->cmdq.log2size = SMMU_CMDQS;
> }
> return MEMTX_OK;
> case A_EVENTQ_BASE:
> - s->eventq.base = data;
> - s->eventq.log2size = extract64(s->eventq.base, 0, 5);
> - if (s->eventq.log2size > SMMU_EVENTQS) {
> - s->eventq.log2size = SMMU_EVENTQS;
> + bank->eventq.base = data;
> + bank->eventq.log2size = extract64(bank->eventq.base, 0, 5);
> + if (bank->eventq.log2size > SMMU_EVENTQS) {
> + bank->eventq.log2size = SMMU_EVENTQS;
> }
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0:
> - s->eventq_irq_cfg0 = data;
> + bank->eventq_irq_cfg0 = data;
> return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> @@ -1546,21 +1564,24 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> uint64_t data, MemTxAttrs attrs)
> {
> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> +
> switch (offset) {
> case A_CR0:
> - s->cr[0] = data;
> - s->cr0ack = data & ~SMMU_CR0_RESERVED;
> + bank->cr[0] = data;
> + bank->cr0ack = data & ~SMMU_CR0_RESERVED;
> /* in case the command queue has been enabled */
> smmuv3_cmdq_consume(s);
> return MEMTX_OK;
> case A_CR1:
> - s->cr[1] = data;
> + bank->cr[1] = data;
> return MEMTX_OK;
> case A_CR2:
> - s->cr[2] = data;
> + bank->cr[2] = data;
> return MEMTX_OK;
> case A_IRQ_CTRL:
> - s->irq_ctrl = data;
> + bank->irq_ctrl = data;
> return MEMTX_OK;
> case A_GERRORN:
> smmuv3_write_gerrorn(s, data);
> @@ -1571,16 +1592,16 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> smmuv3_cmdq_consume(s);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> - s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 0, 32, data);
> + bank->gerror_irq_cfg0 = deposit64(bank->gerror_irq_cfg0, 0, 32, data);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> - s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 32, 32, data);
> + bank->gerror_irq_cfg0 = deposit64(bank->gerror_irq_cfg0, 32, 32, data);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> - s->gerror_irq_cfg1 = data;
> + bank->gerror_irq_cfg1 = data;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> - s->gerror_irq_cfg2 = data;
> + bank->gerror_irq_cfg2 = data;
> return MEMTX_OK;
> case A_GBPA:
> /*
> @@ -1589,66 +1610,66 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> */
> if (data & R_GBPA_UPDATE_MASK) {
> /* Ignore update bit as write is synchronous. */
> - s->gbpa = data & ~R_GBPA_UPDATE_MASK;
> + bank->gbpa = data & ~R_GBPA_UPDATE_MASK;
> }
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
> - s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
> + bank->strtab_base = deposit64(bank->strtab_base, 0, 32, data);
> return MEMTX_OK;
> case A_STRTAB_BASE + 4:
> - s->strtab_base = deposit64(s->strtab_base, 32, 32, data);
> + bank->strtab_base = deposit64(bank->strtab_base, 32, 32, data);
> return MEMTX_OK;
> case A_STRTAB_BASE_CFG:
> - s->strtab_base_cfg = data;
> + bank->strtab_base_cfg = data;
> if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
> - s->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
> - s->features |= SMMU_FEATURE_2LVL_STE;
> + bank->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
> + bank->features |= SMMU_FEATURE_2LVL_STE;
> }
> return MEMTX_OK;
> case A_CMDQ_BASE: /* 64b */
> - s->cmdq.base = deposit64(s->cmdq.base, 0, 32, data);
> - s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
> - if (s->cmdq.log2size > SMMU_CMDQS) {
> - s->cmdq.log2size = SMMU_CMDQS;
> + bank->cmdq.base = deposit64(bank->cmdq.base, 0, 32, data);
> + bank->cmdq.log2size = extract64(bank->cmdq.base, 0, 5);
> + if (bank->cmdq.log2size > SMMU_CMDQS) {
> + bank->cmdq.log2size = SMMU_CMDQS;
> }
> return MEMTX_OK;
> case A_CMDQ_BASE + 4: /* 64b */
> - s->cmdq.base = deposit64(s->cmdq.base, 32, 32, data);
> + bank->cmdq.base = deposit64(bank->cmdq.base, 32, 32, data);
> return MEMTX_OK;
> case A_CMDQ_PROD:
> - s->cmdq.prod = data;
> + bank->cmdq.prod = data;
> smmuv3_cmdq_consume(s);
> return MEMTX_OK;
> case A_CMDQ_CONS:
> - s->cmdq.cons = data;
> + bank->cmdq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_BASE: /* 64b */
> - s->eventq.base = deposit64(s->eventq.base, 0, 32, data);
> - s->eventq.log2size = extract64(s->eventq.base, 0, 5);
> - if (s->eventq.log2size > SMMU_EVENTQS) {
> - s->eventq.log2size = SMMU_EVENTQS;
> + bank->eventq.base = deposit64(bank->eventq.base, 0, 32, data);
> + bank->eventq.log2size = extract64(bank->eventq.base, 0, 5);
> + if (bank->eventq.log2size > SMMU_EVENTQS) {
> + bank->eventq.log2size = SMMU_EVENTQS;
> }
> return MEMTX_OK;
> case A_EVENTQ_BASE + 4:
> - s->eventq.base = deposit64(s->eventq.base, 32, 32, data);
> + bank->eventq.base = deposit64(bank->eventq.base, 32, 32, data);
> return MEMTX_OK;
> case A_EVENTQ_PROD:
> - s->eventq.prod = data;
> + bank->eventq.prod = data;
> return MEMTX_OK;
> case A_EVENTQ_CONS:
> - s->eventq.cons = data;
> + bank->eventq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0: /* 64b */
> - s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 0, 32, data);
> + bank->eventq_irq_cfg0 = deposit64(bank->eventq_irq_cfg0, 0, 32, data);
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0 + 4:
> - s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 32, 32, data);
> + bank->eventq_irq_cfg0 = deposit64(bank->eventq_irq_cfg0, 32, 32, data);
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG1:
> - s->eventq_irq_cfg1 = data;
> + bank->eventq_irq_cfg1 = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG2:
> - s->eventq_irq_cfg2 = data;
> + bank->eventq_irq_cfg2 = data;
> return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> @@ -1687,18 +1708,21 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
> static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> uint64_t *data, MemTxAttrs attrs)
> {
> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> +
> switch (offset) {
> case A_GERROR_IRQ_CFG0:
> - *data = s->gerror_irq_cfg0;
> + *data = bank->gerror_irq_cfg0;
> return MEMTX_OK;
> case A_STRTAB_BASE:
> - *data = s->strtab_base;
> + *data = bank->strtab_base;
> return MEMTX_OK;
> case A_CMDQ_BASE:
> - *data = s->cmdq.base;
> + *data = bank->cmdq.base;
> return MEMTX_OK;
> case A_EVENTQ_BASE:
> - *data = s->eventq.base;
> + *data = bank->eventq.base;
> return MEMTX_OK;
> default:
> *data = 0;
> @@ -1712,12 +1736,15 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> uint64_t *data, MemTxAttrs attrs)
> {
> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> + SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> +
> switch (offset) {
> case A_IDREGS ... A_IDREGS + 0x2f:
> *data = smmuv3_idreg(offset - A_IDREGS);
> return MEMTX_OK;
> case A_IDR0 ... A_IDR5:
> - *data = s->idr[(offset - A_IDR0) / 4];
> + *data = bank->idr[(offset - A_IDR0) / 4];
> return MEMTX_OK;
> case A_IIDR:
> *data = s->iidr;
> @@ -1726,77 +1753,77 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> *data = s->aidr;
> return MEMTX_OK;
> case A_CR0:
> - *data = s->cr[0];
> + *data = bank->cr[0];
> return MEMTX_OK;
> case A_CR0ACK:
> - *data = s->cr0ack;
> + *data = bank->cr0ack;
> return MEMTX_OK;
> case A_CR1:
> - *data = s->cr[1];
> + *data = bank->cr[1];
> return MEMTX_OK;
> case A_CR2:
> - *data = s->cr[2];
> + *data = bank->cr[2];
> return MEMTX_OK;
> case A_STATUSR:
> *data = s->statusr;
> return MEMTX_OK;
> case A_GBPA:
> - *data = s->gbpa;
> + *data = bank->gbpa;
> return MEMTX_OK;
> case A_IRQ_CTRL:
> case A_IRQ_CTRL_ACK:
> - *data = s->irq_ctrl;
> + *data = bank->irq_ctrl;
> return MEMTX_OK;
> case A_GERROR:
> - *data = s->gerror;
> + *data = bank->gerror;
> return MEMTX_OK;
> case A_GERRORN:
> - *data = s->gerrorn;
> + *data = bank->gerrorn;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> - *data = extract64(s->gerror_irq_cfg0, 0, 32);
> + *data = extract64(bank->gerror_irq_cfg0, 0, 32);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> - *data = extract64(s->gerror_irq_cfg0, 32, 32);
> + *data = extract64(bank->gerror_irq_cfg0, 32, 32);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> - *data = s->gerror_irq_cfg1;
> + *data = bank->gerror_irq_cfg1;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> - *data = s->gerror_irq_cfg2;
> + *data = bank->gerror_irq_cfg2;
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
> - *data = extract64(s->strtab_base, 0, 32);
> + *data = extract64(bank->strtab_base, 0, 32);
> return MEMTX_OK;
> case A_STRTAB_BASE + 4: /* 64b */
> - *data = extract64(s->strtab_base, 32, 32);
> + *data = extract64(bank->strtab_base, 32, 32);
> return MEMTX_OK;
> case A_STRTAB_BASE_CFG:
> - *data = s->strtab_base_cfg;
> + *data = bank->strtab_base_cfg;
> return MEMTX_OK;
> case A_CMDQ_BASE: /* 64b */
> - *data = extract64(s->cmdq.base, 0, 32);
> + *data = extract64(bank->cmdq.base, 0, 32);
> return MEMTX_OK;
> case A_CMDQ_BASE + 4:
> - *data = extract64(s->cmdq.base, 32, 32);
> + *data = extract64(bank->cmdq.base, 32, 32);
> return MEMTX_OK;
> case A_CMDQ_PROD:
> - *data = s->cmdq.prod;
> + *data = bank->cmdq.prod;
> return MEMTX_OK;
> case A_CMDQ_CONS:
> - *data = s->cmdq.cons;
> + *data = bank->cmdq.cons;
> return MEMTX_OK;
> case A_EVENTQ_BASE: /* 64b */
> - *data = extract64(s->eventq.base, 0, 32);
> + *data = extract64(bank->eventq.base, 0, 32);
> return MEMTX_OK;
> case A_EVENTQ_BASE + 4: /* 64b */
> - *data = extract64(s->eventq.base, 32, 32);
> + *data = extract64(bank->eventq.base, 32, 32);
> return MEMTX_OK;
> case A_EVENTQ_PROD:
> - *data = s->eventq.prod;
> + *data = bank->eventq.prod;
> return MEMTX_OK;
> case A_EVENTQ_CONS:
> - *data = s->eventq.cons;
> + *data = bank->eventq.cons;
> return MEMTX_OK;
> default:
> *data = 0;
> @@ -1916,9 +1943,10 @@ static const VMStateDescription vmstate_smmuv3_queue = {
> static bool smmuv3_gbpa_needed(void *opaque)
> {
> SMMUv3State *s = opaque;
> + SMMUv3RegBank *bank = smmuv3_bank_ns(s);
>
> /* Only migrate GBPA if it has different reset value. */
> - return s->gbpa != SMMU_GBPA_RESET_VAL;
> + return bank->gbpa != SMMU_GBPA_RESET_VAL;
> }
>
> static const VMStateDescription vmstate_gbpa = {
> @@ -1927,7 +1955,7 @@ static const VMStateDescription vmstate_gbpa = {
> .minimum_version_id = 1,
> .needed = smmuv3_gbpa_needed,
> .fields = (const VMStateField[]) {
> - VMSTATE_UINT32(gbpa, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gbpa, SMMUv3State),
> VMSTATE_END_OF_LIST()
> }
> };
> @@ -1938,27 +1966,29 @@ static const VMStateDescription vmstate_smmuv3 = {
> .minimum_version_id = 1,
> .priority = MIG_PRI_IOMMU,
> .fields = (const VMStateField[]) {
> - VMSTATE_UINT32(features, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].features, SMMUv3State),
> VMSTATE_UINT8(sid_size, SMMUv3State),
> - VMSTATE_UINT8(sid_split, SMMUv3State),
> + VMSTATE_UINT8(bank[SMMU_SEC_SID_NS].sid_split, SMMUv3State),
>
> - VMSTATE_UINT32_ARRAY(cr, SMMUv3State, 3),
> - VMSTATE_UINT32(cr0ack, SMMUv3State),
> + VMSTATE_UINT32_ARRAY(bank[SMMU_SEC_SID_NS].cr, SMMUv3State, 3),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].cr0ack, SMMUv3State),
> VMSTATE_UINT32(statusr, SMMUv3State),
> - VMSTATE_UINT32(irq_ctrl, SMMUv3State),
> - VMSTATE_UINT32(gerror, SMMUv3State),
> - VMSTATE_UINT32(gerrorn, SMMUv3State),
> - VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3State),
> - VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3State),
> - VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3State),
> - VMSTATE_UINT64(strtab_base, SMMUv3State),
> - VMSTATE_UINT32(strtab_base_cfg, SMMUv3State),
> - VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3State),
> - VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3State),
> - VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3State),
> -
> - VMSTATE_STRUCT(cmdq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
> - VMSTATE_STRUCT(eventq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].irq_ctrl, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gerror, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gerrorn, SMMUv3State),
> + VMSTATE_UINT64(bank[SMMU_SEC_SID_NS].gerror_irq_cfg0, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gerror_irq_cfg1, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].gerror_irq_cfg2, SMMUv3State),
> + VMSTATE_UINT64(bank[SMMU_SEC_SID_NS].strtab_base, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].strtab_base_cfg, SMMUv3State),
> + VMSTATE_UINT64(bank[SMMU_SEC_SID_NS].eventq_irq_cfg0, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].eventq_irq_cfg1, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_NS].eventq_irq_cfg2, SMMUv3State),
> +
> + VMSTATE_STRUCT(bank[SMMU_SEC_SID_NS].cmdq, SMMUv3State, 0,
> + vmstate_smmuv3_queue, SMMUQueue),
> + VMSTATE_STRUCT(bank[SMMU_SEC_SID_NS].eventq, SMMUv3State, 0,
> + vmstate_smmuv3_queue, SMMUQueue),
>
> VMSTATE_END_OF_LIST(),
> },
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index 80d0fecfde..2dd6cfa895 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -40,6 +40,12 @@
> #define CACHED_ENTRY_TO_ADDR(ent, addr) ((ent)->entry.translated_addr + \
> ((addr) & (ent)->entry.addr_mask))
>
> +/* StreamID Security state */
> +typedef enum SMMUSecSID {
> + SMMU_SEC_SID_NS = 0,
> + SMMU_SEC_SID_NUM,
> +} SMMUSecSID;
> +
> /*
> * Page table walk error types
> */
> diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
> index d183a62766..e9012fcdb0 100644
> --- a/include/hw/arm/smmuv3.h
> +++ b/include/hw/arm/smmuv3.h
> @@ -32,19 +32,13 @@ typedef struct SMMUQueue {
> uint8_t log2size;
> } SMMUQueue;
>
> -struct SMMUv3State {
> - SMMUState smmu_state;
> -
> +typedef struct SMMUv3RegBank {
> uint32_t features;
> - uint8_t sid_size;
> uint8_t sid_split;
>
> uint32_t idr[6];
> - uint32_t iidr;
> - uint32_t aidr;
> uint32_t cr[3];
> uint32_t cr0ack;
> - uint32_t statusr;
> uint32_t gbpa;
> uint32_t irq_ctrl;
> uint32_t gerror;
> @@ -58,7 +52,19 @@ struct SMMUv3State {
> uint32_t eventq_irq_cfg1;
> uint32_t eventq_irq_cfg2;
>
> - SMMUQueue eventq, cmdq;
> + SMMUQueue eventq;
> + SMMUQueue cmdq;
> +} SMMUv3RegBank;
> +
> +struct SMMUv3State {
> + SMMUState smmu_state;
> +
> + uint8_t sid_size;
> + uint32_t iidr;
> + uint32_t aidr;
> + uint32_t statusr;
> +
> + SMMUv3RegBank bank[SMMU_SEC_SID_NUM];
>
> qemu_irq irq[4];
> QemuMutex mutex;
> @@ -84,7 +90,19 @@ struct SMMUv3Class {
> #define TYPE_ARM_SMMUV3 "arm-smmuv3"
> OBJECT_DECLARE_TYPE(SMMUv3State, SMMUv3Class, ARM_SMMUV3)
>
> -#define STAGE1_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S1P)
> -#define STAGE2_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S2P)
> +#define STAGE1_SUPPORTED(s) \
> + FIELD_EX32((s)->bank[SMMU_SEC_SID_NS].idr[0], IDR0, S1P)
> +#define STAGE2_SUPPORTED(s) \
> + FIELD_EX32((s)->bank[SMMU_SEC_SID_NS].idr[0], IDR0, S2P)
> +
> +static inline SMMUv3RegBank *smmuv3_bank(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + return &s->bank[sec_sid];
> +}
> +
> +static inline SMMUv3RegBank *smmuv3_bank_ns(SMMUv3State *s)
> +{
> + return smmuv3_bank(s, SMMU_SEC_SID_NS);
> +}
>
> #endif
Besides, looks good to me.
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Eric
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [RESEND RFC v3 05/21] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
2025-11-21 13:02 ` Eric Auger
@ 2025-11-23 9:28 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-11-23 9:28 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Note: Resending due to a delivery failure to the qemu-devel mailing list.
I'm not sure everyone received the original email (at least qemu-devel
did not), so please disregard if this is a duplicate.
On 2025/11/21 21:02, Eric Auger wrote:
> Hi Tao,
>
> On 10/12/25 5:06 PM, Tao Tang wrote:
>> Rework the SMMUv3 state management by introducing a banked register
>> structure. This is a purely mechanical refactoring with no functional
>> changes.
>>
>> To support multiple security states, a new enum, SMMUSecSID, is
>> introduced to identify each state, sticking to the spec terminology.
>>
>> A new structure, SMMUv3RegBank, is then defined to hold the state
>> for a single security context. The main SMMUv3State now contains an
>> array of these banks, indexed by SMMUSecSID. This avoids the need for
>> separate fields for non-secure and future secure registers.
>>
>> All existing code, which handles only the Non-secure state, is updated
>> to access its state via s->bank[SMMU_SEC_SID_NS]. A local bank helper
>> pointer is used where it improves readability.
>>
>> Function signatures and logic remain untouched in this commit to
>> isolate the structural changes and simplify review. This is the
>> foundational step for building multi-security-state support.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3-internal.h | 24 ++-
>> hw/arm/smmuv3.c | 344 +++++++++++++++++++----------------
>> include/hw/arm/smmu-common.h | 6 +
>> include/hw/arm/smmuv3.h | 38 +++-
>> 4 files changed, 239 insertions(+), 173 deletions(-)
>>
>> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
>> index e420c5dc72..858bc206a2 100644
>> --- a/hw/arm/smmuv3-internal.h
>> +++ b/hw/arm/smmuv3-internal.h
>> @@ -248,7 +248,9 @@ REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
>> static inline int smmu_enabled(SMMUv3State *s)
>> {
>> - return FIELD_EX32(s->cr[0], CR0, SMMUEN);
>> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>> + return FIELD_EX32(bank->cr[0], CR0, SMMUEN);
>> }
>> /* Command Queue Entry */
>> @@ -276,12 +278,16 @@ static inline uint32_t smmuv3_idreg(int regoffset)
>> static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
>> {
>> - return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
>> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> Why aren't you using smmuv3_bank_ns(s) here and elsewhere? Some other
> functions are already using it.
> Is it to reduce the diffstat in subsequent patches?
Hi Eric,
Yes, you guessed correctly: my original intention was indeed to reduce
the diffstat in subsequent patches.
The smmuv3_bank_ns helper was originally kept mainly to handle specific
edge cases like IDR5.OAS, which is unique because it only exists in the
Non-Secure bank. But it seems that IDR5.OAS is the ONLY case
using smmuv3_bank_ns.
Therefore, for the sake of consistency, I plan to remove the
smmuv3_bank_ns helper entirely and use smmuv3_bank(s, SMMU_SEC_SID_NS)
uniformly in V4.
>> + return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
>> }
>> static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
>> {
>> - return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
>> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>> + SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>> + return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
>> ------------------------------<snip>------------------------------
>>
>>
>>
>> ------------------------------<snip>------------------------------
>> +
>> + bk->features = 0;
>> + bk->sid_split = 0;
>> s->aidr = 0x1;
> Maybe put the non-banked regs at the end to have a clear separation.
> There is no ordering concern, I think.
>> - s->cr[0] = 0;
>> - s->cr0ack = 0;
>> - s->irq_ctrl = 0;
>> - s->gerror = 0;
>> - s->gerrorn = 0;
>> + bk->cr[0] = 0;
>> + bk->cr0ack = 0;
>> + bk->irq_ctrl = 0;
>> + bk->gerror = 0;
>> + bk->gerrorn = 0;
>> s->statusr = 0;
>> - s->gbpa = SMMU_GBPA_RESET_VAL;
>> + bk->gbpa = SMMU_GBPA_RESET_VAL;
>> }
>> ------------------------------<snip>------------------------------
>>
>>
>>
>> ------------------------------<snip>------------------------------
>> @@ -548,7 +556,8 @@ static int decode_ste(SMMUv3State *s,
>> SMMUTransCfg *cfg,
>> STE *ste, SMMUEventInfo *event)
>> {
>> uint32_t config;
>> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
>> + /* OAS is shared between S and NS and only present on NS-IDR5 */
> I am not sure the comment belongs to this patch as up to now we are just
> converting the existing code
I will also remove the premature comment regarding OAS and move the
non-banked register initialization to the end in V4.
Thanks again for your suggestion.
Regards,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 06/21] hw/arm/smmuv3: Thread SEC_SID through helper APIs
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (4 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 05/21] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-11-21 13:13 ` Eric Auger
2025-10-12 15:06 ` [RFC v3 07/21] hw/arm/smmuv3: Track SEC_SID in configs and events Tao Tang
` (14 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Extend the register and queue helper routines to accept an explicit
SEC_SID argument instead of hard-coding the non-secure bank.
All existing callers are updated to pass SMMU_SEC_SID_NS, so the
behavior remains identical. This prepares the code for handling
additional security state banks in the future; for now, the
Non-secure state remains the only supported bank.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3-internal.h | 21 +++++++++------------
hw/arm/smmuv3.c | 15 ++++++++-------
2 files changed, 17 insertions(+), 19 deletions(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 858bc206a2..af0e0b32b3 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -246,9 +246,8 @@ REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
-static inline int smmu_enabled(SMMUv3State *s)
+static inline int smmu_enabled(SMMUv3State *s, SMMUSecSID sec_sid)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
return FIELD_EX32(bank->cr[0], CR0, SMMUEN);
}
@@ -276,16 +275,16 @@ static inline uint32_t smmuv3_idreg(int regoffset)
return smmuv3_ids[regoffset / 4];
}
-static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
+static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s,
+ SMMUSecSID sec_sid)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
}
-static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
+static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s,
+ SMMUSecSID sec_sid)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
}
@@ -330,23 +329,21 @@ static inline void queue_cons_incr(SMMUQueue *q)
q->cons = deposit32(q->cons, 0, q->log2size + 1, q->cons + 1);
}
-static inline bool smmuv3_cmdq_enabled(SMMUv3State *s)
+static inline bool smmuv3_cmdq_enabled(SMMUv3State *s, SMMUSecSID sec_sid)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
return FIELD_EX32(bank->cr[0], CR0, CMDQEN);
}
-static inline bool smmuv3_eventq_enabled(SMMUv3State *s)
+static inline bool smmuv3_eventq_enabled(SMMUv3State *s, SMMUSecSID sec_sid)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
return FIELD_EX32(bank->cr[0], CR0, EVENTQEN);
}
-static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
+static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type,
+ SMMUSecSID sec_sid)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
bank->cmdq.cons = FIELD_DP32(bank->cmdq.cons, CMDQ_CONS, ERR, err_type);
}
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 9c085ac678..6d05bb1310 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -57,7 +57,7 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
switch (irq) {
case SMMU_IRQ_EVTQ:
- pulse = smmuv3_eventq_irq_enabled(s);
+ pulse = smmuv3_eventq_irq_enabled(s, sec_sid);
break;
case SMMU_IRQ_PRIQ:
qemu_log_mask(LOG_UNIMP, "PRI not yet supported\n");
@@ -77,7 +77,7 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
bank->gerror ^= new_gerrors;
trace_smmuv3_write_gerror(new_gerrors, bank->gerror);
- pulse = smmuv3_gerror_irq_enabled(s);
+ pulse = smmuv3_gerror_irq_enabled(s, sec_sid);
break;
}
}
@@ -153,7 +153,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
SMMUQueue *q = &bank->eventq;
MemTxResult r;
- if (!smmuv3_eventq_enabled(s)) {
+ if (!smmuv3_eventq_enabled(s, sec_sid)) {
return MEMTX_ERROR;
}
@@ -176,8 +176,9 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
{
Evt evt = {};
MemTxResult r;
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
- if (!smmuv3_eventq_enabled(s)) {
+ if (!smmuv3_eventq_enabled(s, sec_sid)) {
return;
}
@@ -1070,7 +1071,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
qemu_mutex_lock(&s->mutex);
- if (!smmu_enabled(s)) {
+ if (!smmu_enabled(s, sec_sid)) {
if (FIELD_EX32(bank->gbpa, GBPA, ABORT)) {
status = SMMU_TRANS_ABORT;
} else {
@@ -1300,7 +1301,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
SMMUQueue *q = &bank->cmdq;
SMMUCommandType type = 0;
- if (!smmuv3_cmdq_enabled(s)) {
+ if (!smmuv3_cmdq_enabled(s, sec_sid)) {
return 0;
}
/*
@@ -1513,7 +1514,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
if (cmd_error) {
trace_smmuv3_cmdq_consume_error(smmu_cmd_string(type), cmd_error);
- smmu_write_cmdq_err(s, cmd_error);
+ smmu_write_cmdq_err(s, cmd_error, sec_sid);
smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* Re: [RFC v3 06/21] hw/arm/smmuv3: Thread SEC_SID through helper APIs
2025-10-12 15:06 ` [RFC v3 06/21] hw/arm/smmuv3: Thread SEC_SID through helper APIs Tao Tang
@ 2025-11-21 13:13 ` Eric Auger
0 siblings, 0 replies; 67+ messages in thread
From: Eric Auger @ 2025-11-21 13:13 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:06 PM, Tao Tang wrote:
> Extend the register and queue helper routines to accept an explicit
> SEC_SID argument instead of hard-coding the non-secure bank.
>
> All existing callers are updated to pass SMMU_SEC_SID_NS, so the
> behavior remains identical. This prepares the code for handling
> additional security state banks in the future; for now, the
> Non-secure state remains the only supported bank.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Eric
> ---
> hw/arm/smmuv3-internal.h | 21 +++++++++------------
> hw/arm/smmuv3.c | 15 ++++++++-------
> 2 files changed, 17 insertions(+), 19 deletions(-)
>
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index 858bc206a2..af0e0b32b3 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -246,9 +246,8 @@ REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
> REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
> REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
>
> -static inline int smmu_enabled(SMMUv3State *s)
> +static inline int smmu_enabled(SMMUv3State *s, SMMUSecSID sec_sid)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> return FIELD_EX32(bank->cr[0], CR0, SMMUEN);
> }
> @@ -276,16 +275,16 @@ static inline uint32_t smmuv3_idreg(int regoffset)
> return smmuv3_ids[regoffset / 4];
> }
>
> -static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
> +static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s,
> + SMMUSecSID sec_sid)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
> }
>
> -static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
> +static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s,
> + SMMUSecSID sec_sid)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> return FIELD_EX32(bank->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
> }
> @@ -330,23 +329,21 @@ static inline void queue_cons_incr(SMMUQueue *q)
> q->cons = deposit32(q->cons, 0, q->log2size + 1, q->cons + 1);
> }
>
> -static inline bool smmuv3_cmdq_enabled(SMMUv3State *s)
> +static inline bool smmuv3_cmdq_enabled(SMMUv3State *s, SMMUSecSID sec_sid)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> return FIELD_EX32(bank->cr[0], CR0, CMDQEN);
> }
>
> -static inline bool smmuv3_eventq_enabled(SMMUv3State *s)
> +static inline bool smmuv3_eventq_enabled(SMMUv3State *s, SMMUSecSID sec_sid)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> return FIELD_EX32(bank->cr[0], CR0, EVENTQEN);
> }
>
> -static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
> +static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type,
> + SMMUSecSID sec_sid)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> bank->cmdq.cons = FIELD_DP32(bank->cmdq.cons, CMDQ_CONS, ERR, err_type);
> }
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 9c085ac678..6d05bb1310 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -57,7 +57,7 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
>
> switch (irq) {
> case SMMU_IRQ_EVTQ:
> - pulse = smmuv3_eventq_irq_enabled(s);
> + pulse = smmuv3_eventq_irq_enabled(s, sec_sid);
> break;
> case SMMU_IRQ_PRIQ:
> qemu_log_mask(LOG_UNIMP, "PRI not yet supported\n");
> @@ -77,7 +77,7 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
> bank->gerror ^= new_gerrors;
> trace_smmuv3_write_gerror(new_gerrors, bank->gerror);
>
> - pulse = smmuv3_gerror_irq_enabled(s);
> + pulse = smmuv3_gerror_irq_enabled(s, sec_sid);
> break;
> }
> }
> @@ -153,7 +153,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
> SMMUQueue *q = &bank->eventq;
> MemTxResult r;
>
> - if (!smmuv3_eventq_enabled(s)) {
> + if (!smmuv3_eventq_enabled(s, sec_sid)) {
> return MEMTX_ERROR;
> }
>
> @@ -176,8 +176,9 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
> {
> Evt evt = {};
> MemTxResult r;
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>
> - if (!smmuv3_eventq_enabled(s)) {
> + if (!smmuv3_eventq_enabled(s, sec_sid)) {
> return;
> }
>
> @@ -1070,7 +1071,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>
> qemu_mutex_lock(&s->mutex);
>
> - if (!smmu_enabled(s)) {
> + if (!smmu_enabled(s, sec_sid)) {
> if (FIELD_EX32(bank->gbpa, GBPA, ABORT)) {
> status = SMMU_TRANS_ABORT;
> } else {
> @@ -1300,7 +1301,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> SMMUQueue *q = &bank->cmdq;
> SMMUCommandType type = 0;
>
> - if (!smmuv3_cmdq_enabled(s)) {
> + if (!smmuv3_cmdq_enabled(s, sec_sid)) {
> return 0;
> }
> /*
> @@ -1513,7 +1514,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>
> if (cmd_error) {
> trace_smmuv3_cmdq_consume_error(smmu_cmd_string(type), cmd_error);
> - smmu_write_cmdq_err(s, cmd_error);
> + smmu_write_cmdq_err(s, cmd_error, sec_sid);
> smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK);
> }
>
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 07/21] hw/arm/smmuv3: Track SEC_SID in configs and events
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (5 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 06/21] hw/arm/smmuv3: Thread SEC_SID through helper APIs Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-12-02 11:05 ` Eric Auger
2025-10-12 15:06 ` [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses Tao Tang
` (13 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Cache the SEC_SID inside SMMUTransCfg to keep configuration lookups
tied to the correct register bank.
Plumb the SEC_SID through tracepoints and queue helpers so diagnostics
and event logs always show which security interface emitted the record.
To support this, the SEC_SID is placed in SMMUEventInfo so the bank is
identified as soon as an event record is built.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3-internal.h | 1 +
hw/arm/smmuv3.c | 22 +++++++++++++++-------
hw/arm/trace-events | 2 +-
include/hw/arm/smmu-common.h | 1 +
4 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index af0e0b32b3..99fdbcf3f5 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -512,6 +512,7 @@ static inline const char *smmu_event_string(SMMUEventType type)
/* Encode an event record */
typedef struct SMMUEventInfo {
+ SMMUSecSID sec_sid;
SMMUEventType type;
uint32_t sid;
bool recorded;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 6d05bb1310..a87ae36e8b 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -146,9 +146,9 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
return MEMTX_OK;
}
-static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
+static MemTxResult smmuv3_write_eventq(SMMUv3State *s, SMMUSecSID sec_sid,
+ Evt *evt)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
SMMUQueue *q = &bank->eventq;
MemTxResult r;
@@ -176,7 +176,10 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
{
Evt evt = {};
MemTxResult r;
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUSecSID sec_sid = info->sec_sid;
+ if (sec_sid >= SMMU_SEC_SID_NUM) {
+ g_assert_not_reached();
+ }
if (!smmuv3_eventq_enabled(s, sec_sid)) {
return;
@@ -256,8 +259,9 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
g_assert_not_reached();
}
- trace_smmuv3_record_event(smmu_event_string(info->type), info->sid);
- r = smmuv3_write_eventq(s, &evt);
+ trace_smmuv3_record_event(sec_sid, smmu_event_string(info->type),
+ info->sid);
+ r = smmuv3_write_eventq(s, sec_sid, &evt);
if (r != MEMTX_OK) {
smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
}
@@ -900,6 +904,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
100 * sdev->cfg_cache_hits /
(sdev->cfg_cache_hits + sdev->cfg_cache_misses));
cfg = g_new0(SMMUTransCfg, 1);
+ cfg->sec_sid = SMMU_SEC_SID_NS;
if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
g_hash_table_insert(bc->configs, sdev, cfg);
@@ -1057,7 +1062,8 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
SMMUEventInfo event = {.type = SMMU_EVT_NONE,
.sid = sid,
- .inval_ste_allowed = false};
+ .inval_ste_allowed = false,
+ .sec_sid = sec_sid};
SMMUTranslationStatus status;
SMMUTransCfg *cfg = NULL;
IOMMUTLBEntry entry = {
@@ -1159,7 +1165,9 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
uint64_t num_pages, int stage)
{
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
- SMMUEventInfo eventinfo = {.inval_ste_allowed = true};
+ SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUEventInfo eventinfo = {.sec_sid = sec_sid,
+ .inval_ste_allowed = true};
SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo);
IOMMUTLBEvent event;
uint8_t granule;
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index f3386bd7ae..96ebd1b11b 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -40,7 +40,7 @@ smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
-smmuv3_record_event(const char *type, uint32_t sid) "%s sid=0x%x"
+smmuv3_record_event(int sec_sid, const char *type, uint32_t sid) "sec_sid=%d %s sid=0x%x"
smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "sid=0x%x features:0x%x, sid_split:0x%x"
smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%"PRIx64" l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index 2dd6cfa895..b0dae18a62 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -107,6 +107,7 @@ typedef struct SMMUS2Cfg {
typedef struct SMMUTransCfg {
/* Shared fields between stage-1 and stage-2. */
SMMUStage stage; /* translation stage */
+ SMMUSecSID sec_sid; /* cached sec sid */
bool disabled; /* smmu is disabled */
bool bypassed; /* translation is bypassed */
bool aborted; /* translation is aborted */
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* Re: [RFC v3 07/21] hw/arm/smmuv3: Track SEC_SID in configs and events
2025-10-12 15:06 ` [RFC v3 07/21] hw/arm/smmuv3: Track SEC_SID in configs and events Tao Tang
@ 2025-12-02 11:05 ` Eric Auger
0 siblings, 0 replies; 67+ messages in thread
From: Eric Auger @ 2025-12-02 11:05 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:06 PM, Tao Tang wrote:
> Cache the SEC_SID inside SMMUTransCfg to keep configuration lookups
> tied to the correct register bank.
>
> Plumb the SEC_SID through tracepoints and queue helpers so diagnostics
> and event logs always show which security interface emitted the record.
> To support this, the SEC_SID is placed in SMMUEventInfo so the bank is
> identified as soon as an event record is built.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3-internal.h | 1 +
> hw/arm/smmuv3.c | 22 +++++++++++++++-------
> hw/arm/trace-events | 2 +-
> include/hw/arm/smmu-common.h | 1 +
> 4 files changed, 18 insertions(+), 8 deletions(-)
>
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index af0e0b32b3..99fdbcf3f5 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -512,6 +512,7 @@ static inline const char *smmu_event_string(SMMUEventType type)
>
> /* Encode an event record */
> typedef struct SMMUEventInfo {
> + SMMUSecSID sec_sid;
> SMMUEventType type;
> uint32_t sid;
> bool recorded;
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 6d05bb1310..a87ae36e8b 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -146,9 +146,9 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
> return MEMTX_OK;
> }
>
> -static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
> +static MemTxResult smmuv3_write_eventq(SMMUv3State *s, SMMUSecSID sec_sid,
> + Evt *evt)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> SMMUQueue *q = &bank->eventq;
> MemTxResult r;
> @@ -176,7 +176,10 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
> {
> Evt evt = {};
> MemTxResult r;
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUSecSID sec_sid = info->sec_sid;
> + if (sec_sid >= SMMU_SEC_SID_NUM) {
> + g_assert_not_reached();
simply use g_assert(cond)
> + }
>
> if (!smmuv3_eventq_enabled(s, sec_sid)) {
> return;
> @@ -256,8 +259,9 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
> g_assert_not_reached();
> }
>
> - trace_smmuv3_record_event(smmu_event_string(info->type), info->sid);
> - r = smmuv3_write_eventq(s, &evt);
> + trace_smmuv3_record_event(sec_sid, smmu_event_string(info->type),
> + info->sid);
> + r = smmuv3_write_eventq(s, sec_sid, &evt);
> if (r != MEMTX_OK) {
> smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
> }
> @@ -900,6 +904,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
> 100 * sdev->cfg_cache_hits /
> (sdev->cfg_cache_hits + sdev->cfg_cache_misses));
> cfg = g_new0(SMMUTransCfg, 1);
> + cfg->sec_sid = SMMU_SEC_SID_NS;
>
> if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
> g_hash_table_insert(bc->configs, sdev, cfg);
> @@ -1057,7 +1062,8 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
> .sid = sid,
> - .inval_ste_allowed = false};
> + .inval_ste_allowed = false,
> + .sec_sid = sec_sid};
> SMMUTranslationStatus status;
> SMMUTransCfg *cfg = NULL;
> IOMMUTLBEntry entry = {
> @@ -1159,7 +1165,9 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> uint64_t num_pages, int stage)
> {
> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
> - SMMUEventInfo eventinfo = {.inval_ste_allowed = true};
> + SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUEventInfo eventinfo = {.sec_sid = sec_sid,
> + .inval_ste_allowed = true};
> SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo);
> IOMMUTLBEvent event;
> uint8_t granule;
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index f3386bd7ae..96ebd1b11b 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -40,7 +40,7 @@ smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
> smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
> smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
> smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
> -smmuv3_record_event(const char *type, uint32_t sid) "%s sid=0x%x"
> +smmuv3_record_event(int sec_sid, const char *type, uint32_t sid) "sec_sid=%d %s sid=0x%x"
> smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "sid=0x%x features:0x%x, sid_split:0x%x"
> smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%"PRIx64" l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
> smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index 2dd6cfa895..b0dae18a62 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -107,6 +107,7 @@ typedef struct SMMUS2Cfg {
> typedef struct SMMUTransCfg {
> /* Shared fields between stage-1 and stage-2. */
> SMMUStage stage; /* translation stage */
> + SMMUSecSID sec_sid; /* cached sec sid */
> bool disabled; /* smmu is disabled */
> bool bypassed; /* translation is bypassed */
> bool aborted; /* translation is aborted */
Besides
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Eric
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (6 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 07/21] hw/arm/smmuv3: Track SEC_SID in configs and events Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-12-02 13:53 ` Eric Auger
2025-12-11 22:12 ` Pierrick Bouvier
2025-10-12 15:06 ` [RFC v3 09/21] hw/arm/smmuv3: Plumb transaction attributes into config helpers Tao Tang
` (12 subsequent siblings)
20 siblings, 2 replies; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
According to the Arm architecture, SMMU-originated memory accesses,
such as fetching commands or writing events for a secure stream, must
target the Secure Physical Address (PA) space. The existing model sends
all DMA to the global non-secure address_space_memory.
This patch introduces the infrastructure to differentiate between secure
and non-secure memory accesses. Firstly, SMMU_SEC_SID_S is added in
SMMUSecSID enum to represent the secure context. Then a weak global
symbol, arm_secure_address_space, is added, which can be provided by the
machine model to represent the Secure PA space.
A new helper, smmu_get_address_space(), selects the target address
space based on SEC_SID. All internal DMA calls
(dma_memory_read/write) will be updated to use this helper in follow-up
patches.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 8 ++++++++
hw/arm/virt.c | 5 +++++
include/hw/arm/smmu-common.h | 27 +++++++++++++++++++++++++++
3 files changed, 40 insertions(+)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 62a7612184..24db448683 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -30,6 +30,14 @@
#include "hw/arm/smmu-common.h"
#include "smmu-internal.h"
+/* Global state for secure address space availability */
+bool arm_secure_as_available;
+
+void smmu_enable_secure_address_space(void)
+{
+ arm_secure_as_available = true;
+}
+
/* IOTLB Management */
static guint smmu_iotlb_key_hash(gconstpointer v)
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 175023897a..83dc62a095 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -92,6 +92,8 @@
#include "hw/cxl/cxl_host.h"
#include "qemu/guest-random.h"
+AddressSpace arm_secure_address_space;
+
static GlobalProperty arm_virt_compat[] = {
{ TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
};
@@ -2257,6 +2259,9 @@ static void machvirt_init(MachineState *machine)
memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
UINT64_MAX);
memory_region_add_subregion_overlap(secure_sysmem, 0, sysmem, -1);
+ address_space_init(&arm_secure_address_space, secure_sysmem,
+ "secure-memory-space");
+ smmu_enable_secure_address_space();
}
firmware_loaded = virt_firmware_init(vms, sysmem,
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index b0dae18a62..d54558f94b 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -43,9 +43,36 @@
/* StreamID Security state */
typedef enum SMMUSecSID {
SMMU_SEC_SID_NS = 0,
+ SMMU_SEC_SID_S,
SMMU_SEC_SID_NUM,
} SMMUSecSID;
+extern AddressSpace __attribute__((weak)) arm_secure_address_space;
+extern bool arm_secure_as_available;
+void smmu_enable_secure_address_space(void);
+
+/*
+ * Return the address space corresponding to the SEC_SID.
+ * If SEC_SID is Secure, but secure address space is not available,
+ * return NULL and print a warning message.
+ */
+static inline AddressSpace *smmu_get_address_space(SMMUSecSID sec_sid)
+{
+ switch (sec_sid) {
+ case SMMU_SEC_SID_NS:
+ return &address_space_memory;
+ case SMMU_SEC_SID_S:
+ if (!arm_secure_as_available || arm_secure_address_space.root == NULL) {
+ printf("Secure address space requested but not available");
+ return NULL;
+ }
+ return &arm_secure_address_space;
+ default:
+ printf("Unknown SEC_SID value %d", sec_sid);
+ return NULL;
+ }
+}
+
/*
* Page table walk error types
*/
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* Re: [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-10-12 15:06 ` [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses Tao Tang
@ 2025-12-02 13:53 ` Eric Auger
2025-12-03 13:50 ` Tao Tang
2025-12-11 22:12 ` Pierrick Bouvier
1 sibling, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-02 13:53 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 5:06 PM, Tao Tang wrote:
> According to the Arm architecture, SMMU-originated memory accesses,
> such as fetching commands or writing events for a secure stream, must
> target the Secure Physical Address (PA) space. The existing model sends
> all DMA to the global non-secure address_space_memory.
>
> This patch introduces the infrastructure to differentiate between secure
> and non-secure memory accesses. Firstly, SMMU_SEC_SID_S is added in
> SMMUSecSID enum to represent the secure context. Then a weak global
> symbol, arm_secure_address_space, is added, which can be provided by the
> machine model to represent the Secure PA space.
>
> A new helper, smmu_get_address_space(), selects the target address
> space based on SEC_SID. All internal DMA calls
> (dma_memory_read/write) will be updated to use this helper in follow-up
> patches.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 8 ++++++++
> hw/arm/virt.c | 5 +++++
> include/hw/arm/smmu-common.h | 27 +++++++++++++++++++++++++++
> 3 files changed, 40 insertions(+)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 62a7612184..24db448683 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -30,6 +30,14 @@
> #include "hw/arm/smmu-common.h"
> #include "smmu-internal.h"
>
> +/* Global state for secure address space availability */
> +bool arm_secure_as_available;
don't you need to initialize it?
why is it local to the SMMU? To me the secure address space sounds
global, like address_space_memory, usable by IPs other than the SMMU
and the CPUs.
> +
> +void smmu_enable_secure_address_space(void)
> +{
> + arm_secure_as_available = true;
> +}
> +
> /* IOTLB Management */
>
> static guint smmu_iotlb_key_hash(gconstpointer v)
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 175023897a..83dc62a095 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -92,6 +92,8 @@
> #include "hw/cxl/cxl_host.h"
> #include "qemu/guest-random.h"
>
> +AddressSpace arm_secure_address_space;
> +
> static GlobalProperty arm_virt_compat[] = {
> { TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
> };
> @@ -2257,6 +2259,9 @@ static void machvirt_init(MachineState *machine)
> memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
> UINT64_MAX);
> memory_region_add_subregion_overlap(secure_sysmem, 0, sysmem, -1);
> + address_space_init(&arm_secure_address_space, secure_sysmem,
> + "secure-memory-space");
besides, using dynamic allocation like in cpu_address_space_init()
would allow you to get rid of arm_secure_as_available
> + smmu_enable_secure_address_space();
> }
>
> firmware_loaded = virt_firmware_init(vms, sysmem,
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index b0dae18a62..d54558f94b 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -43,9 +43,36 @@
> /* StreamID Security state */
> typedef enum SMMUSecSID {
> SMMU_SEC_SID_NS = 0,
> + SMMU_SEC_SID_S,
> SMMU_SEC_SID_NUM,
> } SMMUSecSID;
>
> +extern AddressSpace __attribute__((weak)) arm_secure_address_space;
> +extern bool arm_secure_as_available;
> +void smmu_enable_secure_address_space(void);
> +
> +/*
> + * Return the address space corresponding to the SEC_SID.
> + * If SEC_SID is Secure, but secure address space is not available,
> + * return NULL and print a warning message.
> + */
> +static inline AddressSpace *smmu_get_address_space(SMMUSecSID sec_sid)
> +{
> + switch (sec_sid) {
> + case SMMU_SEC_SID_NS:
> + return &address_space_memory;
> + case SMMU_SEC_SID_S:
> + if (!arm_secure_as_available || arm_secure_address_space.root == NULL) {
> + printf("Secure address space requested but not available");
> + return NULL;
> + }
> + return &arm_secure_address_space;
> + default:
> + printf("Unknown SEC_SID value %d", sec_sid);
> + return NULL;
> + }
> +}
> +
> /*
> * Page table walk error types
> */
Thanks
Eric
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-12-02 13:53 ` Eric Auger
@ 2025-12-03 13:50 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-03 13:50 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/2 21:53, Eric Auger wrote:
> Hi Tao,
>
> On 10/12/25 5:06 PM, Tao Tang wrote:
>> According to the Arm architecture, SMMU-originated memory accesses,
>> such as fetching commands or writing events for a secure stream, must
>> target the Secure Physical Address (PA) space. The existing model sends
>> all DMA to the global non-secure address_space_memory.
>>
>> This patch introduces the infrastructure to differentiate between secure
>> and non-secure memory accesses. Firstly, SMMU_SEC_SID_S is added in
>> SMMUSecSID enum to represent the secure context. Then a weak global
>> symbol, arm_secure_address_space, is added, which can be provided by the
>> machine model to represent the Secure PA space.
>>
>> A new helper, smmu_get_address_space(), selects the target address
>> space based on SEC_SID. All internal DMA calls
>> (dma_memory_read/write) will be updated to use this helper in follow-up
>> patches.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmu-common.c | 8 ++++++++
>> hw/arm/virt.c | 5 +++++
>> include/hw/arm/smmu-common.h | 27 +++++++++++++++++++++++++++
>> 3 files changed, 40 insertions(+)
>>
>> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
>> index 62a7612184..24db448683 100644
>> --- a/hw/arm/smmu-common.c
>> +++ b/hw/arm/smmu-common.c
>> @@ -30,6 +30,14 @@
>> #include "hw/arm/smmu-common.h"
>> #include "smmu-internal.h"
>>
>> +/* Global state for secure address space availability */
>> +bool arm_secure_as_available;
> don't you need to initialize it?
>
> why is it local to the SMMU. To me the secure address space sounds
> global like address_space_memory usable by other IPs than the SMMU and
> the CPUs.
>> +
>> +void smmu_enable_secure_address_space(void)
>> +{
>> + arm_secure_as_available = true;
>> +}
>> +
>> /* IOTLB Management */
>>
>> static guint smmu_iotlb_key_hash(gconstpointer v)
>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>> index 175023897a..83dc62a095 100644
>> --- a/hw/arm/virt.c
>> +++ b/hw/arm/virt.c
>> @@ -92,6 +92,8 @@
>> #include "hw/cxl/cxl_host.h"
>> #include "qemu/guest-random.h"
>>
>> +AddressSpace arm_secure_address_space;
>> +
>> static GlobalProperty arm_virt_compat[] = {
>> { TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
>> };
>> @@ -2257,6 +2259,9 @@ static void machvirt_init(MachineState *machine)
>> memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
>> UINT64_MAX);
>> memory_region_add_subregion_overlap(secure_sysmem, 0, sysmem, -1);
>> + address_space_init(&arm_secure_address_space, secure_sysmem,
>> + "secure-memory-space");
> besides using dynamic allocation like in cpu_address_space_init() would
> allow to get rid of arm_secure_as_available
Thanks for the feedback.
I also think the extra arm_secure_as_available flag is unnecessary
after reading the cpu_address_space_init code.
For the next version I plan to:
- Drop the arm_secure_as_available concept entirely.
- Allocate the secure address space dynamically in the machine
code: secure_address_space = g_new0(AddressSpace, 1);
- Have smmu_get_address_space() simply check whether the secure
AddressSpace * is non-NULL, instead of relying on a separate
availability flag.
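The plan above can be sketched as follows. This is a minimal,
self-contained illustration using stand-in types: the real code would
use QEMU's AddressSpace and a board-side g_new0() allocation, and the
symbol names nonsecure_as/secure_as here are illustrative, not the
actual QEMU globals.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for QEMU's AddressSpace, just for this sketch. */
typedef struct AddressSpace {
    const char *name;
} AddressSpace;

typedef enum SMMUSecSID {
    SMMU_SEC_SID_NS = 0,
    SMMU_SEC_SID_S,
    SMMU_SEC_SID_NUM,
} SMMUSecSID;

/* The Non-secure address space always exists. */
static AddressSpace nonsecure_as = { "memory" };

/*
 * Remains NULL until the machine allocates it (e.g. with
 * g_new0(AddressSpace, 1)), so the pointer itself encodes
 * availability and no separate flag is needed.
 */
static AddressSpace *secure_as;

static AddressSpace *smmu_get_address_space(SMMUSecSID sec_sid)
{
    switch (sec_sid) {
    case SMMU_SEC_SID_NS:
        return &nonsecure_as;
    case SMMU_SEC_SID_S:
        /* NULL when the board provides no secure memory. */
        return secure_as;
    default:
        return NULL;
    }
}
```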
Thanks again for the review and suggestions.

Best regards,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-10-12 15:06 ` [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses Tao Tang
2025-12-02 13:53 ` Eric Auger
@ 2025-12-11 22:12 ` Pierrick Bouvier
2025-12-11 22:19 ` Pierrick Bouvier
1 sibling, 1 reply; 67+ messages in thread
From: Pierrick Bouvier @ 2025-12-11 22:12 UTC (permalink / raw)
To: Tao Tang, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Richard Henderson,
Philippe Mathieu-Daudé
[-- Attachment #1: Type: text/plain, Size: 4857 bytes --]
Hi Tao,
On 10/12/25 8:06 AM, Tao Tang wrote:
> According to the Arm architecture, SMMU-originated memory accesses,
> such as fetching commands or writing events for a secure stream, must
> target the Secure Physical Address (PA) space. The existing model sends
> all DMA to the global non-secure address_space_memory.
>
> This patch introduces the infrastructure to differentiate between secure
> and non-secure memory accesses. Firstly, SMMU_SEC_SID_S is added in
> SMMUSecSID enum to represent the secure context. Then a weak global
> symbol, arm_secure_address_space, is added, which can be provided by the
> machine model to represent the Secure PA space.
>
> A new helper, smmu_get_address_space(), selects the target address
> space based on SEC_SID. All internal DMA calls
> (dma_memory_read/write) will be updated to use this helper in follow-up
> patches.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 8 ++++++++
> hw/arm/virt.c | 5 +++++
> include/hw/arm/smmu-common.h | 27 +++++++++++++++++++++++++++
> 3 files changed, 40 insertions(+)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 62a7612184..24db448683 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -30,6 +30,14 @@
> #include "hw/arm/smmu-common.h"
> #include "smmu-internal.h"
>
> +/* Global state for secure address space availability */
> +bool arm_secure_as_available;
> +
> +void smmu_enable_secure_address_space(void)
> +{
> + arm_secure_as_available = true;
> +}
> +
> /* IOTLB Management */
>
> static guint smmu_iotlb_key_hash(gconstpointer v)
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 175023897a..83dc62a095 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -92,6 +92,8 @@
> #include "hw/cxl/cxl_host.h"
> #include "qemu/guest-random.h"
>
> +AddressSpace arm_secure_address_space;
> +
> static GlobalProperty arm_virt_compat[] = {
> { TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
> };
> @@ -2257,6 +2259,9 @@ static void machvirt_init(MachineState *machine)
> memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
> UINT64_MAX);
> memory_region_add_subregion_overlap(secure_sysmem, 0, sysmem, -1);
> + address_space_init(&arm_secure_address_space, secure_sysmem,
> + "secure-memory-space");
> + smmu_enable_secure_address_space();
> }
>
> firmware_loaded = virt_firmware_init(vms, sysmem,
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index b0dae18a62..d54558f94b 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -43,9 +43,36 @@
> /* StreamID Security state */
> typedef enum SMMUSecSID {
> SMMU_SEC_SID_NS = 0,
> + SMMU_SEC_SID_S,
> SMMU_SEC_SID_NUM,
> } SMMUSecSID;
>
> +extern AddressSpace __attribute__((weak)) arm_secure_address_space;
> +extern bool arm_secure_as_available;
> +void smmu_enable_secure_address_space(void);
> +
> +/*
> + * Return the address space corresponding to the SEC_SID.
> + * If SEC_SID is Secure, but secure address space is not available,
> + * return NULL and print a warning message.
> + */
> +static inline AddressSpace *smmu_get_address_space(SMMUSecSID sec_sid)
> +{
> + switch (sec_sid) {
> + case SMMU_SEC_SID_NS:
> + return &address_space_memory;
> + case SMMU_SEC_SID_S:
> + if (!arm_secure_as_available || arm_secure_address_space.root == NULL) {
> + printf("Secure address space requested but not available");
> + return NULL;
> + }
> + return &arm_secure_address_space;
> + default:
> + printf("Unknown SEC_SID value %d", sec_sid);
> + return NULL;
> + }
> +}
> +
> /*
> * Page table walk error types
> */
I ran into the same issue when adding Granule Protection Check to the
SMMU, for RME support. It requires access to secure memory, where the
Granule Protection Table is kept, and thus access to the secure address
space.

After talking with Richard and Philippe, I was pointed to a better way.
Similar to how Arm CPUs handle this, the boards (virt & sbsa-ref) simply
pass pointers to the MemoryRegions for global and secure memory. The
SMMU can then create its own address spaces based on those regions.
It's clean, does not require any weak variable, and mimics what is
already done for CPUs. Please see the two patches attached.

The first one defines the properties and passes the memory regions from
the boards to the SMMU. The second one replaces the global address
spaces with the SMMU ones.

I'll send patch 1 as its own series, and you can take inspiration from
patch 2 for this series. The SMMU unit tests will also need to be
modified to be passed the memory regions.
Regards,
Pierrick
[-- Attachment #2: 0001-hw-arm-smmu-add-memory-regions-as-property-for-an-SM.patch --]
[-- Type: text/x-patch, Size: 7751 bytes --]
From 918a003547e8f31b572726123bb8bf4f8466db0c Mon Sep 17 00:00:00 2001
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Date: Thu, 11 Dec 2025 12:33:43 -0800
Subject: [PATCH 1/2] hw/arm/smmu: add memory regions as property for an SMMU
instance
This will be used to access non-secure and secure memory. Secure support
and Granule Protection Check (for RME) for SMMU need to access secure
memory.
It also allows removing the usage of the global address_space_memory,
letting different SMMU instances have their own specific view of memory.
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
include/hw/arm/smmu-common.h | 4 ++++
hw/arm/sbsa-ref.c | 16 ++++++++++++----
hw/arm/smmu-common.c | 24 ++++++++++++++++++++++++
hw/arm/virt.c | 16 +++++++++++-----
4 files changed, 51 insertions(+), 9 deletions(-)
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index a6bdb67a983..0f08ae080c9 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -227,6 +227,10 @@ struct SMMUState {
uint8_t bus_num;
PCIBus *primary_bus;
bool smmu_per_bus; /* SMMU is specific to the primary_bus */
+ MemoryRegion *memory;
+ AddressSpace as_memory;
+ MemoryRegion *secure_memory;
+ AddressSpace as_secure_memory;
};
struct SMMUBaseClass {
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index 45d2e3e946d..840b1a216f4 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -616,7 +616,9 @@ static void create_xhci(const SBSAMachineState *sms)
sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, qdev_get_gpio_in(sms->gic, irq));
}
-static void create_smmu(const SBSAMachineState *sms, PCIBus *bus)
+static void create_smmu(const SBSAMachineState *sms, PCIBus *bus,
+ MemoryRegion *sysmem,
+ MemoryRegion *secure_sysmem)
{
hwaddr base = sbsa_ref_memmap[SBSA_SMMU].base;
int irq = sbsa_ref_irqmap[SBSA_SMMU];
@@ -628,6 +630,10 @@ static void create_smmu(const SBSAMachineState *sms, PCIBus *bus)
object_property_set_str(OBJECT(dev), "stage", "nested", &error_abort);
object_property_set_link(OBJECT(dev), "primary-bus", OBJECT(bus),
&error_abort);
+ object_property_set_link(OBJECT(dev), "memory", OBJECT(sysmem),
+ &error_abort);
+ object_property_set_link(OBJECT(dev), "secure-memory", OBJECT(secure_sysmem),
+ &error_abort);
sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
for (i = 0; i < NUM_SMMU_IRQS; i++) {
@@ -636,7 +642,9 @@ static void create_smmu(const SBSAMachineState *sms, PCIBus *bus)
}
}
-static void create_pcie(SBSAMachineState *sms)
+static void create_pcie(SBSAMachineState *sms,
+ MemoryRegion *sysmem,
+ MemoryRegion *secure_sysmem)
{
hwaddr base_ecam = sbsa_ref_memmap[SBSA_PCIE_ECAM].base;
hwaddr size_ecam = sbsa_ref_memmap[SBSA_PCIE_ECAM].size;
@@ -692,7 +700,7 @@ static void create_pcie(SBSAMachineState *sms)
pci_create_simple(pci->bus, -1, "bochs-display");
- create_smmu(sms, pci->bus);
+ create_smmu(sms, pci->bus, sysmem, secure_sysmem);
}
static void *sbsa_ref_dtb(const struct arm_boot_info *binfo, int *fdt_size)
@@ -831,7 +839,7 @@ static void sbsa_ref_init(MachineState *machine)
create_xhci(sms);
- create_pcie(sms);
+ create_pcie(sms, sysmem, secure_sysmem);
create_secure_ec(secure_sysmem);
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 66367adc2a4..5fbfe825fd0 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -1171,6 +1171,12 @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
return;
}
+ g_assert(s->memory);
+ address_space_init(&s->as_memory, s->memory, "memory");
+ if (s->secure_memory) {
+ address_space_init(&s->as_secure_memory, s->secure_memory, "secure-memory");
+ }
+
/*
* We only allow default PCIe Root Complex(pcie.0) or pxb-pcie based extra
* root complexes to be associated with SMMU.
@@ -1235,10 +1241,28 @@ static void smmu_base_class_init(ObjectClass *klass, const void *data)
rc->phases.exit = smmu_base_reset_exit;
}
+static void smmu_base_instance_init(Object *obj)
+{
+ SMMUState *s = ARM_SMMU(obj);
+
+ object_property_add_link(obj, "memory",
+ TYPE_MEMORY_REGION,
+ (Object **)&s->memory,
+ qdev_prop_allow_set_link_before_realize,
+ OBJ_PROP_LINK_STRONG);
+
+ object_property_add_link(obj, "secure-memory",
+ TYPE_MEMORY_REGION,
+ (Object **)&s->secure_memory,
+ qdev_prop_allow_set_link_before_realize,
+ OBJ_PROP_LINK_STRONG);
+}
+
static const TypeInfo smmu_base_info = {
.name = TYPE_ARM_SMMU,
.parent = TYPE_SYS_BUS_DEVICE,
.instance_size = sizeof(SMMUState),
+ .instance_init = smmu_base_instance_init,
.class_data = NULL,
.class_size = sizeof(SMMUBaseClass),
.class_init = smmu_base_class_init,
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 5d205eff3a1..d446c3349e9 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -1514,8 +1514,9 @@ static void create_smmuv3_dev_dtb(VirtMachineState *vms,
0x0, vms->iommu_phandle, 0x0, 0x10000);
}
-static void create_smmu(const VirtMachineState *vms,
- PCIBus *bus)
+static void create_smmu(const VirtMachineState *vms, PCIBus *bus,
+ MemoryRegion *sysmem,
+ MemoryRegion *secure_sysmem)
{
VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(vms);
int irq = vms->irqmap[VIRT_SMMU];
@@ -1549,6 +1550,10 @@ static void create_smmu(const VirtMachineState *vms,
object_property_set_str(OBJECT(dev), "stage", stage, &error_fatal);
object_property_set_link(OBJECT(dev), "primary-bus", OBJECT(bus),
&error_abort);
+ object_property_set_link(OBJECT(dev), "memory", OBJECT(sysmem),
+ &error_abort);
+ object_property_set_link(OBJECT(dev), "secure-memory", OBJECT(secure_sysmem),
+ &error_abort);
sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
for (i = 0; i < NUM_SMMU_IRQS; i++) {
@@ -1587,7 +1592,8 @@ static void create_virtio_iommu_dt_bindings(VirtMachineState *vms)
}
}
-static void create_pcie(VirtMachineState *vms)
+static void create_pcie(VirtMachineState *vms,
+ MemoryRegion *sysmem, MemoryRegion *secure_sysmem)
{
hwaddr base_mmio = vms->memmap[VIRT_PCIE_MMIO].base;
hwaddr size_mmio = vms->memmap[VIRT_PCIE_MMIO].size;
@@ -1706,7 +1712,7 @@ static void create_pcie(VirtMachineState *vms)
switch (vms->iommu) {
case VIRT_IOMMU_SMMUV3:
- create_smmu(vms, vms->bus);
+ create_smmu(vms, vms->bus, sysmem, secure_sysmem);
if (!vms->default_bus_bypass_iommu) {
qemu_fdt_setprop_cells(ms->fdt, nodename, "iommu-map",
0x0, vms->iommu_phandle, 0x0, 0x10000);
@@ -2520,7 +2526,7 @@ static void machvirt_init(MachineState *machine)
create_rtc(vms);
- create_pcie(vms);
+ create_pcie(vms, sysmem, secure_sysmem);
create_cxl_host_reg_region(vms);
if (aarch64 && firmware_loaded && virt_is_acpi_enabled(vms)) {
--
2.47.3
[-- Attachment #3: 0002-hw-arm-smmu-use-SMMU-address-spaces-to-access-memory.patch --]
[-- Type: text/x-patch, Size: 10813 bytes --]
From f6d8e41c02caaf0b9af73dc54de48d7b97ae1354 Mon Sep 17 00:00:00 2001
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Date: Thu, 11 Dec 2025 13:04:25 -0800
Subject: [PATCH 2/2] hw/arm/smmu: use SMMU address spaces to access memory
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
include/hw/arm/smmu-common.h | 49 ++++++++++++++++++------------------
hw/arm/smmu-common.c | 21 ++++++++--------
hw/arm/smmuv3.c | 23 +++++++++--------
3 files changed, 49 insertions(+), 44 deletions(-)
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index 0f08ae080c9..3ee853ccdd9 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -69,30 +69,6 @@ extern AddressSpace __attribute__((weak)) arm_secure_address_space;
extern bool arm_secure_as_available;
void smmu_enable_secure_address_space(void);
-/*
- * Return the address space corresponding to the SEC_SID.
- * If SEC_SID is Secure, but secure address space is not available,
- * return NULL and print a warning message.
- */
-static inline AddressSpace *smmu_get_address_space(SMMUSecSID sec_sid)
-{
- switch (sec_sid) {
- case SMMU_SEC_SID_NS:
- return &address_space_memory;
- case SMMU_SEC_SID_R:
- return &address_space_memory;
- case SMMU_SEC_SID_S:
- if (!arm_secure_as_available || arm_secure_address_space.root == NULL) {
- printf("Secure address space requested but not available\n");
- return NULL;
- }
- return &arm_secure_address_space;
- default:
- printf("Unknown SEC_SID value %d\n", sec_sid);
- return NULL;
- }
-}
-
/*
* Page table walk error types
*/
@@ -243,6 +219,31 @@ struct SMMUBaseClass {
};
+/*
+ * Return the address space corresponding to the SEC_SID.
+ * If SEC_SID is Secure, but secure address space is not available,
+ * return NULL and print a warning message.
+ */
+static inline AddressSpace *smmu_get_address_space(struct SMMUState *bs,
+ SMMUSecSID sec_sid)
+{
+ switch (sec_sid) {
+ case SMMU_SEC_SID_NS:
+ return &bs->as_memory;
+ case SMMU_SEC_SID_R:
+ return &bs->as_memory;
+ case SMMU_SEC_SID_S:
+ if (!bs->secure_memory) {
+ printf("Secure address space requested but not available\n");
+ return NULL;
+ }
+ return &bs->as_secure_memory;
+ default:
+ printf("Unknown SEC_SID value %d\n", sec_sid);
+ return NULL;
+ }
+}
+
#define TYPE_ARM_SMMU "arm-smmu"
OBJECT_DECLARE_TYPE(SMMUState, SMMUBaseClass, ARM_SMMU)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 5fbfe825fd0..d6aba95cfd9 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -405,13 +405,13 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
* get_pte - Get the content of a page table entry located at
* @base_addr[@index]
*/
-static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
- SMMUPTWEventInfo *info, SMMUSecSID sec_sid)
+static int get_pte(SMMUState *bs, dma_addr_t baseaddr, uint32_t index,
+ uint64_t *pte, SMMUPTWEventInfo *info, SMMUSecSID sec_sid)
{
int ret;
dma_addr_t addr = baseaddr + index * sizeof(*pte);
MemTxAttrs attrs = smmu_get_txattrs(sec_sid);
- AddressSpace *as = smmu_get_address_space(sec_sid);
+ AddressSpace *as = smmu_get_address_space(bs, sec_sid);
if (!as) {
info->type = SMMU_PTW_ERR_WALK_EABT;
info->addr = addr;
@@ -570,7 +570,7 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
/* Use NS if forced by previous NSTable=1 or current nscfg */
int current_ns = forced_ns || nscfg;
SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
- if (get_pte(baseaddr, offset, &pte, info, sec_sid)) {
+ if (get_pte(bs, baseaddr, offset, &pte, info, sec_sid)) {
goto error;
}
trace_smmu_ptw_level(stage, level, iova, subpage_size,
@@ -658,7 +658,7 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
}
tlbe->sec_sid = SMMU_SEC_SID_NS;
- tlbe->entry.target_as = smmu_get_address_space(tlbe->sec_sid);
+ tlbe->entry.target_as = smmu_get_address_space(bs, tlbe->sec_sid);
if (!tlbe->entry.target_as) {
info->type = SMMU_PTW_ERR_WALK_EABT;
info->addr = gpa;
@@ -720,6 +720,7 @@ static int AArch64_S2StartLevel(int sl0 , int granule_sz)
/**
* smmu_ptw_64_s2 - VMSAv8-64 Walk of the page tables for a given ipa
* for stage-2.
+ * @bs: SMMU base state
* @cfg: translation config
* @ipa: ipa to translate
* @perm: access type
@@ -731,7 +732,7 @@ static int AArch64_S2StartLevel(int sl0 , int granule_sz)
* Upon success, @tlbe is filled with translated_addr and entry
* permission rights.
*/
-static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
+static int smmu_ptw_64_s2(SMMUState *bs, SMMUTransCfg *cfg,
dma_addr_t ipa, IOMMUAccessFlags perm,
SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
{
@@ -834,7 +835,7 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
uint8_t s2ap;
/* Use NS as Secure Stage 2 is not implemented (SMMU_S_IDR1.SEL2 == 0)*/
- if (get_pte(baseaddr, offset, &pte, info, SMMU_SEC_SID_NS)) {
+ if (get_pte(bs, baseaddr, offset, &pte, info, SMMU_SEC_SID_NS)) {
goto error;
}
trace_smmu_ptw_level(stage, level, ipa, subpage_size,
@@ -888,7 +889,7 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
}
tlbe->sec_sid = SMMU_SEC_SID_NS;
- tlbe->entry.target_as = &address_space_memory;
+ tlbe->entry.target_as = &bs->as_memory;
tlbe->entry.translated_addr = gpa;
tlbe->entry.iova = ipa & ~mask;
tlbe->entry.addr_mask = mask;
@@ -964,7 +965,7 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
return -EINVAL;
}
- return smmu_ptw_64_s2(cfg, iova, perm, tlbe, info);
+ return smmu_ptw_64_s2(bs, cfg, iova, perm, tlbe, info);
}
/* SMMU_NESTED. */
@@ -985,7 +986,7 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
}
ipa = CACHED_ENTRY_TO_ADDR(tlbe, iova);
- ret = smmu_ptw_64_s2(cfg, ipa, perm, &tlbe_s2, info);
+ ret = smmu_ptw_64_s2(bs, cfg, ipa, perm, &tlbe_s2, info);
if (ret) {
return ret;
}
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 885dae6f50e..a4a03c064d5 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -110,13 +110,14 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
trace_smmuv3_write_gerrorn(toggled & pending, bank->gerrorn);
}
-static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd, SMMUSecSID sec_sid)
+static inline MemTxResult queue_read(SMMUState *bs, SMMUQueue *q,
+ Cmd *cmd, SMMUSecSID sec_sid)
{
dma_addr_t addr = Q_CONS_ENTRY(q);
MemTxResult ret;
int i;
- ret = dma_memory_read(&address_space_memory, addr, cmd, sizeof(Cmd),
+ ret = dma_memory_read(&bs->as_memory, addr, cmd, sizeof(Cmd),
MEMTXATTRS_UNSPECIFIED);
if (ret != MEMTX_OK) {
return ret;
@@ -127,7 +128,7 @@ static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd, SMMUSecSID sec_sid)
return ret;
}
-static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
+static MemTxResult queue_write(SMMUState *bs, SMMUQueue *q, Evt *evt_in)
{
dma_addr_t addr = Q_PROD_ENTRY(q);
MemTxResult ret;
@@ -137,7 +138,7 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
for (i = 0; i < ARRAY_SIZE(evt.word); i++) {
cpu_to_le32s(&evt.word[i]);
}
- ret = dma_memory_write(&address_space_memory, addr, &evt, sizeof(Evt),
+ ret = dma_memory_write(&bs->as_memory, addr, &evt, sizeof(Evt),
MEMTXATTRS_UNSPECIFIED);
if (ret != MEMTX_OK) {
return ret;
@@ -162,7 +163,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, SMMUSecSID sec_sid,
return MEMTX_ERROR;
}
- r = queue_write(q, evt);
+ r = queue_write(&s->smmu_state, q, evt);
if (r != MEMTX_OK) {
return r;
}
@@ -993,7 +994,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
cfg = g_new0(SMMUTransCfg, 1);
cfg->sec_sid = sec_sid;
cfg->txattrs = smmu_get_txattrs(sec_sid);
- cfg->as = smmu_get_address_space(sec_sid);
+ cfg->as = smmu_get_address_space(bc, sec_sid);
if (!cfg->as) {
/* Can't get AddressSpace, free cfg and return. */
g_free(cfg);
@@ -1221,7 +1222,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
SMMUTranslationStatus status;
SMMUTransCfg *cfg = NULL;
IOMMUTLBEntry entry = {
- .target_as = &address_space_memory,
+ .target_as = &s->smmu_state.as_memory,
.iova = addr,
.translated_addr = addr,
.addr_mask = ~(hwaddr)0,
@@ -1322,6 +1323,8 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
SMMUSecSID sec_sid)
{
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
+ SMMUv3State *s3 = sdev->smmu;
+ SMMUState *bs = &(s3->smmu_state);
SMMUEventInfo eventinfo = {.sec_sid = sec_sid,
.inval_ste_allowed = true};
SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, sec_sid);
@@ -1369,7 +1372,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
}
event.type = IOMMU_NOTIFIER_UNMAP;
- event.entry.target_as = smmu_get_address_space(sec_sid);
+ event.entry.target_as = smmu_get_address_space(bs, sec_sid);
event.entry.iova = iova;
event.entry.addr_mask = num_pages * (1 << granule) - 1;
event.entry.perm = IOMMU_NONE;
@@ -1618,7 +1621,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
break;
}
- if (queue_read(q, &cmd, sec_sid) != MEMTX_OK) {
+ if (queue_read(&s->smmu_state, q, &cmd, sec_sid) != MEMTX_OK) {
cmd_error = SMMU_CERROR_ABT;
break;
}
@@ -1649,7 +1652,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
SMMUTransCfg *cfg = g_new0(SMMUTransCfg, 1);
cfg->sec_sid = sec_sid;
cfg->txattrs = smmu_get_txattrs(sec_sid);
- cfg->as = smmu_get_address_space(sec_sid);
+ cfg->as = smmu_get_address_space(bs, sec_sid);
if (!cfg->as) {
g_free(cfg);
qemu_log_mask(LOG_GUEST_ERROR, "SMMUv3 Can't get address space\n");
--
2.47.3
^ permalink raw reply related [flat|nested] 67+ messages in thread

* Re: [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-12-11 22:12 ` Pierrick Bouvier
@ 2025-12-11 22:19 ` Pierrick Bouvier
0 siblings, 0 replies; 67+ messages in thread
From: Pierrick Bouvier @ 2025-12-11 22:19 UTC (permalink / raw)
To: Tao Tang, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Richard Henderson
On 12/11/25 2:12 PM, Pierrick Bouvier wrote:
> Hi Tao,
>
> On 10/12/25 8:06 AM, Tao Tang wrote:
>> According to the Arm architecture, SMMU-originated memory accesses,
>> such as fetching commands or writing events for a secure stream, must
>> target the Secure Physical Address (PA) space. The existing model sends
>> all DMA to the global non-secure address_space_memory.
>>
>> This patch introduces the infrastructure to differentiate between secure
>> and non-secure memory accesses. Firstly, SMMU_SEC_SID_S is added in
>> SMMUSecSID enum to represent the secure context. Then a weak global
>> symbol, arm_secure_address_space, is added, which can be provided by the
>> machine model to represent the Secure PA space.
>>
>> A new helper, smmu_get_address_space(), selects the target address
>> space based on SEC_SID. All internal DMA calls
>> (dma_memory_read/write) will be updated to use this helper in follow-up
>> patches.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmu-common.c | 8 ++++++++
>> hw/arm/virt.c | 5 +++++
>> include/hw/arm/smmu-common.h | 27 +++++++++++++++++++++++++++
>> 3 files changed, 40 insertions(+)
>>
>> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
>> index 62a7612184..24db448683 100644
>> --- a/hw/arm/smmu-common.c
>> +++ b/hw/arm/smmu-common.c
>> @@ -30,6 +30,14 @@
>> #include "hw/arm/smmu-common.h"
>> #include "smmu-internal.h"
>>
>> +/* Global state for secure address space availability */
>> +bool arm_secure_as_available;
>> +
>> +void smmu_enable_secure_address_space(void)
>> +{
>> + arm_secure_as_available = true;
>> +}
>> +
>> /* IOTLB Management */
>>
>> static guint smmu_iotlb_key_hash(gconstpointer v)
>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>> index 175023897a..83dc62a095 100644
>> --- a/hw/arm/virt.c
>> +++ b/hw/arm/virt.c
>> @@ -92,6 +92,8 @@
>> #include "hw/cxl/cxl_host.h"
>> #include "qemu/guest-random.h"
>>
>> +AddressSpace arm_secure_address_space;
>> +
>> static GlobalProperty arm_virt_compat[] = {
>> { TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
>> };
>> @@ -2257,6 +2259,9 @@ static void machvirt_init(MachineState *machine)
>> memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
>> UINT64_MAX);
>> memory_region_add_subregion_overlap(secure_sysmem, 0, sysmem, -1);
>> + address_space_init(&arm_secure_address_space, secure_sysmem,
>> + "secure-memory-space");
>> + smmu_enable_secure_address_space();
>> }
>>
>> firmware_loaded = virt_firmware_init(vms, sysmem,
>> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
>> index b0dae18a62..d54558f94b 100644
>> --- a/include/hw/arm/smmu-common.h
>> +++ b/include/hw/arm/smmu-common.h
>> @@ -43,9 +43,36 @@
>> /* StreamID Security state */
>> typedef enum SMMUSecSID {
>> SMMU_SEC_SID_NS = 0,
>> + SMMU_SEC_SID_S,
>> SMMU_SEC_SID_NUM,
>> } SMMUSecSID;
>>
>> +extern AddressSpace __attribute__((weak)) arm_secure_address_space;
>> +extern bool arm_secure_as_available;
>> +void smmu_enable_secure_address_space(void);
>> +
>> +/*
>> + * Return the address space corresponding to the SEC_SID.
>> + * If SEC_SID is Secure, but secure address space is not available,
>> + * return NULL and print a warning message.
>> + */
>> +static inline AddressSpace *smmu_get_address_space(SMMUSecSID sec_sid)
>> +{
>> + switch (sec_sid) {
>> + case SMMU_SEC_SID_NS:
>> + return &address_space_memory;
>> + case SMMU_SEC_SID_S:
>> + if (!arm_secure_as_available || arm_secure_address_space.root == NULL) {
>> + printf("Secure address space requested but not available");
>> + return NULL;
>> + }
>> + return &arm_secure_address_space;
>> + default:
>> + printf("Unknown SEC_SID value %d", sec_sid);
>> + return NULL;
>> + }
>> +}
>> +
>> /*
>> * Page table walk error types
>> */
>
> I ran into the same issue when adding Granule Protection Check to the
> SMMU, for RME support. It requires access to secure memory, where the
> Granule Protection Table is kept, and thus access to the secure address
> space.
>
> After talking with Richard and Philippe, I was pointed to a better way.
> Similar to how Arm CPUs handle this, the boards (virt & sbsa-ref) simply
> pass pointers to the MemoryRegions for global and secure memory. The
> SMMU can then create its own address spaces based on those regions.
>
> It's clean, does not require any weak variable, and mimics what is
> already done for CPUs. Please see the two patches attached.
>
> The first one defines the properties and passes the memory regions from
> the boards to the SMMU. The second one replaces the global address
> spaces with the SMMU ones.
>
> I'll send patch 1 as its own series, and you can take inspiration from
> patch 2 for this series. The SMMU unit tests will also need to be
> modified to be passed the memory regions.
>
> Regards,
> Pierrick
Sent patch 1 here:
https://lore.kernel.org/qemu-devel/20251211221715.2206662-1-pierrick.bouvier@linaro.org/T/#u
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 09/21] hw/arm/smmuv3: Plumb transaction attributes into config helpers
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (7 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 08/21] hw/arm/smmuv3: Add separate address space for secure SMMU accesses Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-12-02 14:03 ` Eric Auger
2025-10-12 15:06 ` [RFC v3 10/21] hw/arm/smmu-common: Key configuration cache on SMMUDevice and SEC_SID Tao Tang
` (11 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
As a preliminary step towards a multi-security-state configuration
cache, introduce MemTxAttrs and AddressSpace * members to the
SMMUTransCfg struct. The goal is to cache these attributes so that
internal DMA calls (dma_memory_read/write) can use them directly.
To facilitate this, hw/arm/arm-security.h is now included in
smmu-common.h. This is a notable change, as it marks the first time
these Arm CPU-specific security space definitions are used outside of
cpu.h, making them more generally available for device models.
The decode helpers (smmu_get_ste, smmu_get_cd, smmu_find_ste,
smmuv3_get_config) are updated to use these new attributes for memory
accesses. This ensures that reads of SMMU structures from memory, such
as the Stream Table, use the correct security context.
For now, the configuration cache lookup key remains unchanged and is
still based solely on the SMMUDevice pointer. The new attributes are
populated during a cache miss in smmuv3_get_config.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 19 ++++++++++++++++++
hw/arm/smmuv3.c | 38 ++++++++++++++++++++++--------------
include/hw/arm/smmu-common.h | 6 ++++++
3 files changed, 48 insertions(+), 15 deletions(-)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 24db448683..82308f0e33 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -30,6 +30,25 @@
#include "hw/arm/smmu-common.h"
#include "smmu-internal.h"
+ARMSecuritySpace smmu_get_security_space(SMMUSecSID sec_sid)
+{
+ switch (sec_sid) {
+ case SMMU_SEC_SID_S:
+ return ARMSS_Secure;
+ case SMMU_SEC_SID_NS:
+ default:
+ return ARMSS_NonSecure;
+ }
+}
+
+MemTxAttrs smmu_get_txattrs(SMMUSecSID sec_sid)
+{
+ return (MemTxAttrs) {
+ .secure = sec_sid > SMMU_SEC_SID_NS ? 1 : 0,
+ .space = smmu_get_security_space(sec_sid),
+ };
+}
+
/* Global state for secure address space availability */
bool arm_secure_as_available;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index a87ae36e8b..351bbf1ae9 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -333,14 +333,13 @@ static void smmuv3_init_regs(SMMUv3State *s)
}
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
- SMMUEventInfo *event)
+ SMMUEventInfo *event, SMMUTransCfg *cfg)
{
int ret, i;
trace_smmuv3_get_ste(addr);
/* TODO: guarantee 64-bit single-copy atomicity */
- ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
- MEMTXATTRS_UNSPECIFIED);
+ ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
if (ret != MEMTX_OK) {
qemu_log_mask(LOG_GUEST_ERROR,
"Cannot fetch pte at address=0x%"PRIx64"\n", addr);
@@ -385,8 +384,7 @@ static int smmu_get_cd(SMMUv3State *s, STE *ste, SMMUTransCfg *cfg,
}
/* TODO: guarantee 64-bit single-copy atomicity */
- ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
- MEMTXATTRS_UNSPECIFIED);
+ ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
if (ret != MEMTX_OK) {
qemu_log_mask(LOG_GUEST_ERROR,
"Cannot fetch pte at address=0x%"PRIx64"\n", addr);
@@ -639,18 +637,19 @@ bad_ste:
* @sid: stream ID
* @ste: returned stream table entry
* @event: handle to an event info
+ * @cfg: translation configuration cache
*
* Supports linear and 2-level stream table
* Return 0 on success, -EINVAL otherwise
*/
static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
- SMMUEventInfo *event)
+ SMMUEventInfo *event, SMMUTransCfg *cfg)
{
dma_addr_t addr, strtab_base;
uint32_t log2size;
int strtab_size_shift;
int ret;
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUSecSID sec_sid = cfg->sec_sid;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
trace_smmuv3_find_ste(sid, bank->features, bank->sid_split);
@@ -678,8 +677,8 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
l2_ste_offset = sid & ((1 << bank->sid_split) - 1);
l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
/* TODO: guarantee 64-bit single-copy atomicity */
- ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
- sizeof(l1std), MEMTXATTRS_UNSPECIFIED);
+ ret = dma_memory_read(cfg->as, l1ptr, &l1std, sizeof(l1std),
+ cfg->txattrs);
if (ret != MEMTX_OK) {
qemu_log_mask(LOG_GUEST_ERROR,
"Could not read L1PTR at 0X%"PRIx64"\n", l1ptr);
@@ -721,7 +720,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
addr = strtab_base + sid * sizeof(*ste);
}
- if (smmu_get_ste(s, addr, ste, event)) {
+ if (smmu_get_ste(s, addr, ste, event, cfg)) {
return -EINVAL;
}
@@ -850,7 +849,7 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
/* ASID defaults to -1 (if s1 is not supported). */
cfg->asid = -1;
- ret = smmu_find_ste(s, sid, &ste, event);
+ ret = smmu_find_ste(s, sid, &ste, event, cfg);
if (ret) {
return ret;
}
@@ -884,7 +883,8 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
* decoding under the form of an SMMUTransCfg struct. The hash table is indexed
* by the SMMUDevice handle.
*/
-static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
+static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
+ SMMUSecSID sec_sid)
{
SMMUv3State *s = sdev->smmu;
SMMUState *bc = &s->smmu_state;
@@ -904,7 +904,15 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
100 * sdev->cfg_cache_hits /
(sdev->cfg_cache_hits + sdev->cfg_cache_misses));
cfg = g_new0(SMMUTransCfg, 1);
- cfg->sec_sid = SMMU_SEC_SID_NS;
+ cfg->sec_sid = sec_sid;
+ cfg->txattrs = smmu_get_txattrs(sec_sid);
+ cfg->as = smmu_get_address_space(sec_sid);
+ if (!cfg->as) {
+ /* Can't get AddressSpace, free cfg and return. */
+ g_free(cfg);
+ cfg = NULL;
+ return cfg;
+ }
if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
g_hash_table_insert(bc->configs, sdev, cfg);
@@ -1086,7 +1094,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
goto epilogue;
}
- cfg = smmuv3_get_config(sdev, &event);
+ cfg = smmuv3_get_config(sdev, &event, sec_sid);
if (!cfg) {
status = SMMU_TRANS_ERROR;
goto epilogue;
@@ -1168,7 +1176,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUEventInfo eventinfo = {.sec_sid = sec_sid,
.inval_ste_allowed = true};
- SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo);
+ SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, sec_sid);
IOMMUTLBEvent event;
uint8_t granule;
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index d54558f94b..c17c7db6e5 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -22,6 +22,7 @@
#include "hw/sysbus.h"
#include "hw/pci/pci.h"
#include "qom/object.h"
+#include "hw/arm/arm-security.h"
#define SMMU_PCI_BUS_MAX 256
#define SMMU_PCI_DEVFN_MAX 256
@@ -47,6 +48,9 @@ typedef enum SMMUSecSID {
SMMU_SEC_SID_NUM,
} SMMUSecSID;
+MemTxAttrs smmu_get_txattrs(SMMUSecSID sec_sid);
+ARMSecuritySpace smmu_get_security_space(SMMUSecSID sec_sid);
+
extern AddressSpace __attribute__((weak)) arm_secure_address_space;
extern bool arm_secure_as_available;
void smmu_enable_secure_address_space(void);
@@ -150,6 +154,8 @@ typedef struct SMMUTransCfg {
SMMUTransTableInfo tt[2];
/* Used by stage-2 only. */
struct SMMUS2Cfg s2cfg;
+ MemTxAttrs txattrs; /* cached transaction attributes */
+ AddressSpace *as; /* cached address space */
} SMMUTransCfg;
typedef struct SMMUDevice {
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread

* Re: [RFC v3 09/21] hw/arm/smmuv3: Plumb transaction attributes into config helpers
2025-10-12 15:06 ` [RFC v3 09/21] hw/arm/smmuv3: Plumb transaction attributes into config helpers Tao Tang
@ 2025-12-02 14:03 ` Eric Auger
2025-12-03 14:03 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-02 14:03 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:06 PM, Tao Tang wrote:
> As a preliminary step towards a multi-security-state configuration
> cache, introduce MemTxAttrs and AddressSpace * members to the
> SMMUTransCfg struct. The goal is to cache these attributes so that
> internal DMA calls (dma_memory_read/write) can use them directly.
>
> To facilitate this, hw/arm/arm-security.h is now included in
> smmu-common.h. This is a notable change, as it marks the first time
> these Arm CPU-specific security space definitions are used outside of
> cpu.h, making them more generally available for device models.
>
> The decode helpers (smmu_get_ste, smmu_get_cd, smmu_find_ste,
> smmuv3_get_config) are updated to use these new attributes for memory
> accesses. This ensures that reads of SMMU structures from memory, such
> as the Stream Table, use the correct security context.
>
> For now, the configuration cache lookup key remains unchanged and is
> still based solely on the SMMUDevice pointer. The new attributes are
> populated during a cache miss in smmuv3_get_config.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 19 ++++++++++++++++++
> hw/arm/smmuv3.c | 38 ++++++++++++++++++++++--------------
> include/hw/arm/smmu-common.h | 6 ++++++
> 3 files changed, 48 insertions(+), 15 deletions(-)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 24db448683..82308f0e33 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -30,6 +30,25 @@
> #include "hw/arm/smmu-common.h"
> #include "smmu-internal.h"
>
> +ARMSecuritySpace smmu_get_security_space(SMMUSecSID sec_sid)
> +{
> + switch (sec_sid) {
> + case SMMU_SEC_SID_S:
> + return ARMSS_Secure;
> + case SMMU_SEC_SID_NS:
> + default:
> + return ARMSS_NonSecure;
> + }
> +}
> +
> +MemTxAttrs smmu_get_txattrs(SMMUSecSID sec_sid)
> +{
> + return (MemTxAttrs) {
> + .secure = sec_sid > SMMU_SEC_SID_NS ? 1 : 0,
> + .space = smmu_get_security_space(sec_sid),
> + };
> +}
> +
> /* Global state for secure address space availability */
> bool arm_secure_as_available;
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index a87ae36e8b..351bbf1ae9 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -333,14 +333,13 @@ static void smmuv3_init_regs(SMMUv3State *s)
> }
>
> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
> - SMMUEventInfo *event)
> + SMMUEventInfo *event, SMMUTransCfg *cfg)
> {
> int ret, i;
>
> trace_smmuv3_get_ste(addr);
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
> - MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
> if (ret != MEMTX_OK) {
> qemu_log_mask(LOG_GUEST_ERROR,
> "Cannot fetch pte at address=0x%"PRIx64"\n", addr);
> @@ -385,8 +384,7 @@ static int smmu_get_cd(SMMUv3State *s, STE *ste, SMMUTransCfg *cfg,
> }
>
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
> - MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
> if (ret != MEMTX_OK) {
> qemu_log_mask(LOG_GUEST_ERROR,
> "Cannot fetch pte at address=0x%"PRIx64"\n", addr);
> @@ -639,18 +637,19 @@ bad_ste:
> * @sid: stream ID
> * @ste: returned stream table entry
> * @event: handle to an event info
> + * @cfg: translation configuration cache
> *
> * Supports linear and 2-level stream table
> * Return 0 on success, -EINVAL otherwise
> */
> static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> - SMMUEventInfo *event)
> + SMMUEventInfo *event, SMMUTransCfg *cfg)
> {
> dma_addr_t addr, strtab_base;
> uint32_t log2size;
> int strtab_size_shift;
> int ret;
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUSecSID sec_sid = cfg->sec_sid;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>
> trace_smmuv3_find_ste(sid, bank->features, bank->sid_split);
> @@ -678,8 +677,8 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> l2_ste_offset = sid & ((1 << bank->sid_split) - 1);
> l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
> - sizeof(l1std), MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_read(cfg->as, l1ptr, &l1std, sizeof(l1std),
> + cfg->txattrs);
> if (ret != MEMTX_OK) {
> qemu_log_mask(LOG_GUEST_ERROR,
> "Could not read L1PTR at 0X%"PRIx64"\n", l1ptr);
> @@ -721,7 +720,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> addr = strtab_base + sid * sizeof(*ste);
> }
>
> - if (smmu_get_ste(s, addr, ste, event)) {
> + if (smmu_get_ste(s, addr, ste, event, cfg)) {
> return -EINVAL;
> }
>
> @@ -850,7 +849,7 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
> /* ASID defaults to -1 (if s1 is not supported). */
> cfg->asid = -1;
>
> - ret = smmu_find_ste(s, sid, &ste, event);
> + ret = smmu_find_ste(s, sid, &ste, event, cfg);
> if (ret) {
> return ret;
> }
> @@ -884,7 +883,8 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
> * decoding under the form of an SMMUTransCfg struct. The hash table is indexed
> * by the SMMUDevice handle.
> */
> -static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
> +static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
> + SMMUSecSID sec_sid)
> {
> SMMUv3State *s = sdev->smmu;
> SMMUState *bc = &s->smmu_state;
> @@ -904,7 +904,15 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
> 100 * sdev->cfg_cache_hits /
> (sdev->cfg_cache_hits + sdev->cfg_cache_misses));
> cfg = g_new0(SMMUTransCfg, 1);
> - cfg->sec_sid = SMMU_SEC_SID_NS;
> + cfg->sec_sid = sec_sid;
> + cfg->txattrs = smmu_get_txattrs(sec_sid);
> + cfg->as = smmu_get_address_space(sec_sid);
> + if (!cfg->as) {
> + /* Can't get AddressSpace, free cfg and return. */
> + g_free(cfg);
> + cfg = NULL;
> + return cfg;
Don't you want to report an error in that case? Which type?
> + }
>
> if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
> g_hash_table_insert(bc->configs, sdev, cfg);
> @@ -1086,7 +1094,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> goto epilogue;
> }
>
> - cfg = smmuv3_get_config(sdev, &event);
> + cfg = smmuv3_get_config(sdev, &event, sec_sid);
> if (!cfg) {
> status = SMMU_TRANS_ERROR;
> goto epilogue;
> @@ -1168,7 +1176,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUEventInfo eventinfo = {.sec_sid = sec_sid,
> .inval_ste_allowed = true};
> - SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo);
> + SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, sec_sid);
> IOMMUTLBEvent event;
> uint8_t granule;
>
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index d54558f94b..c17c7db6e5 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -22,6 +22,7 @@
> #include "hw/sysbus.h"
> #include "hw/pci/pci.h"
> #include "qom/object.h"
> +#include "hw/arm/arm-security.h"
>
> #define SMMU_PCI_BUS_MAX 256
> #define SMMU_PCI_DEVFN_MAX 256
> @@ -47,6 +48,9 @@ typedef enum SMMUSecSID {
> SMMU_SEC_SID_NUM,
> } SMMUSecSID;
>
> +MemTxAttrs smmu_get_txattrs(SMMUSecSID sec_sid);
> +ARMSecuritySpace smmu_get_security_space(SMMUSecSID sec_sid);
> +
> extern AddressSpace __attribute__((weak)) arm_secure_address_space;
> extern bool arm_secure_as_available;
> void smmu_enable_secure_address_space(void);
> @@ -150,6 +154,8 @@ typedef struct SMMUTransCfg {
> SMMUTransTableInfo tt[2];
> /* Used by stage-2 only. */
> struct SMMUS2Cfg s2cfg;
> + MemTxAttrs txattrs; /* cached transaction attributes */
> + AddressSpace *as; /* cached address space */
> } SMMUTransCfg;
>
> typedef struct SMMUDevice {
Besides that, looks good to me
Eric
^ permalink raw reply [flat|nested] 67+ messages in thread

* Re: [RFC v3 09/21] hw/arm/smmuv3: Plumb transaction attributes into config helpers
2025-12-02 14:03 ` Eric Auger
@ 2025-12-03 14:03 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-03 14:03 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/2 22:03, Eric Auger wrote:
>
> On 10/12/25 5:06 PM, Tao Tang wrote:
>> As a preliminary step towards a multi-security-state configuration
>> cache, introduce MemTxAttrs and AddressSpace * members to the
>> SMMUTransCfg struct. The goal is to cache these attributes so that
>> internal DMA calls (dma_memory_read/write) can use them directly.
>>
>> To facilitate this, hw/arm/arm-security.h is now included in
>> smmu-common.h. This is a notable change, as it marks the first time
>> these Arm CPU-specific security space definitions are used outside of
>> cpu.h, making them more generally available for device models.
>>
>> The decode helpers (smmu_get_ste, smmu_get_cd, smmu_find_ste,
>> smmuv3_get_config) are updated to use these new attributes for memory
>> accesses. This ensures that reads of SMMU structures from memory, such
>> as the Stream Table, use the correct security context.
>>
>> For now, the configuration cache lookup key remains unchanged and is
>> still based solely on the SMMUDevice pointer. The new attributes are
>> populated during a cache miss in smmuv3_get_config.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmu-common.c | 19 ++++++++++++++++++
>> hw/arm/smmuv3.c | 38 ++++++++++++++++++++++--------------
>> include/hw/arm/smmu-common.h | 6 ++++++
>> 3 files changed, 48 insertions(+), 15 deletions(-)
>>
>> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
>> index 24db448683..82308f0e33 100644
>> --- a/hw/arm/smmu-common.c
>> +++ b/hw/arm/smmu-common.c
>> @@ -30,6 +30,25 @@
>> #include "hw/arm/smmu-common.h"
>> #include "smmu-internal.h"
>>
>> +ARMSecuritySpace smmu_get_security_space(SMMUSecSID sec_sid)
>> +{
>> + switch (sec_sid) {
>> + case SMMU_SEC_SID_S:
>> + return ARMSS_Secure;
>> + case SMMU_SEC_SID_NS:
>> + default:
>> + return ARMSS_NonSecure;
>> + }
>> +}
>> +
>> +MemTxAttrs smmu_get_txattrs(SMMUSecSID sec_sid)
>> +{
>> + return (MemTxAttrs) {
>> + .secure = sec_sid > SMMU_SEC_SID_NS ? 1 : 0,
>> + .space = smmu_get_security_space(sec_sid),
>> + };
>> +}
>> +
>> /* Global state for secure address space availability */
>> bool arm_secure_as_available;
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index a87ae36e8b..351bbf1ae9 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -333,14 +333,13 @@ static void smmuv3_init_regs(SMMUv3State *s)
>> }
>>
>> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
>> - SMMUEventInfo *event)
>> + SMMUEventInfo *event, SMMUTransCfg *cfg)
>> {
>> int ret, i;
>>
>> trace_smmuv3_get_ste(addr);
>> /* TODO: guarantee 64-bit single-copy atomicity */
>> - ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
>> - MEMTXATTRS_UNSPECIFIED);
>> + ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
>> if (ret != MEMTX_OK) {
>> qemu_log_mask(LOG_GUEST_ERROR,
>> "Cannot fetch pte at address=0x%"PRIx64"\n", addr);
>> @@ -385,8 +384,7 @@ static int smmu_get_cd(SMMUv3State *s, STE *ste, SMMUTransCfg *cfg,
>> }
>>
>> /* TODO: guarantee 64-bit single-copy atomicity */
>> - ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
>> - MEMTXATTRS_UNSPECIFIED);
>> + ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
>> if (ret != MEMTX_OK) {
>> qemu_log_mask(LOG_GUEST_ERROR,
>> "Cannot fetch pte at address=0x%"PRIx64"\n", addr);
>> @@ -639,18 +637,19 @@ bad_ste:
>> * @sid: stream ID
>> * @ste: returned stream table entry
>> * @event: handle to an event info
>> + * @cfg: translation configuration cache
>> *
>> * Supports linear and 2-level stream table
>> * Return 0 on success, -EINVAL otherwise
>> */
>> static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
>> - SMMUEventInfo *event)
>> + SMMUEventInfo *event, SMMUTransCfg *cfg)
>> {
>> dma_addr_t addr, strtab_base;
>> uint32_t log2size;
>> int strtab_size_shift;
>> int ret;
>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>> + SMMUSecSID sec_sid = cfg->sec_sid;
>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>
>> trace_smmuv3_find_ste(sid, bank->features, bank->sid_split);
>> @@ -678,8 +677,8 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
>> l2_ste_offset = sid & ((1 << bank->sid_split) - 1);
>> l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
>> /* TODO: guarantee 64-bit single-copy atomicity */
>> - ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
>> - sizeof(l1std), MEMTXATTRS_UNSPECIFIED);
>> + ret = dma_memory_read(cfg->as, l1ptr, &l1std, sizeof(l1std),
>> + cfg->txattrs);
>> if (ret != MEMTX_OK) {
>> qemu_log_mask(LOG_GUEST_ERROR,
>> "Could not read L1PTR at 0X%"PRIx64"\n", l1ptr);
>> @@ -721,7 +720,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
>> addr = strtab_base + sid * sizeof(*ste);
>> }
>>
>> - if (smmu_get_ste(s, addr, ste, event)) {
>> + if (smmu_get_ste(s, addr, ste, event, cfg)) {
>> return -EINVAL;
>> }
>>
>> @@ -850,7 +849,7 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
>> /* ASID defaults to -1 (if s1 is not supported). */
>> cfg->asid = -1;
>>
>> - ret = smmu_find_ste(s, sid, &ste, event);
>> + ret = smmu_find_ste(s, sid, &ste, event, cfg);
>> if (ret) {
>> return ret;
>> }
>> @@ -884,7 +883,8 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
>> * decoding under the form of an SMMUTransCfg struct. The hash table is indexed
>> * by the SMMUDevice handle.
>> */
>> -static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
>> +static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
>> + SMMUSecSID sec_sid)
>> {
>> SMMUv3State *s = sdev->smmu;
>> SMMUState *bc = &s->smmu_state;
>> @@ -904,7 +904,15 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
>> 100 * sdev->cfg_cache_hits /
>> (sdev->cfg_cache_hits + sdev->cfg_cache_misses));
>> cfg = g_new0(SMMUTransCfg, 1);
>> - cfg->sec_sid = SMMU_SEC_SID_NS;
>> + cfg->sec_sid = sec_sid;
>> + cfg->txattrs = smmu_get_txattrs(sec_sid);
>> + cfg->as = smmu_get_address_space(sec_sid);
>> + if (!cfg->as) {
>> + /* Can't get AddressSpace, free cfg and return. */
>> + g_free(cfg);
>> + cfg = NULL;
>> + return cfg;
> don't you want to report an error in that case. Which type?
Thanks for your review!
Honestly, I’m not sure there is an architecturally correct event
type for this case.
Here smmu_get_address_space(sec_sid) returning NULL means the machine
didn’t provide an AddressSpace for SMMU-originated DMA in that security
context, which feels like a QEMU board/integration bug rather than
something the SMMU spec defines an event for (the existing events are
all driven by STE/CD and transaction attributes). Synthesizing something
like F_STREAM_DISABLED would therefore be misleading from the guest’s
point of view.
Maybe we could use a g_assert() hard failure here? But I’m also open
to different opinions.
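For illustration, the difference between the two behaviors can be sketched
in a standalone way (all types and the address-space lookup are stubbed
here; this is not the actual QEMU code):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for the QEMU types; not the real definitions. */
typedef struct AddressSpace { int unused; } AddressSpace;
typedef enum { SMMU_SEC_SID_NS, SMMU_SEC_SID_S } SMMUSecSID;
typedef struct { SMMUSecSID sec_sid; AddressSpace *as; } SMMUTransCfg;

static AddressSpace ns_as;

/* Stub board wiring: only the Non-secure address space is provided. */
static AddressSpace *smmu_get_address_space(SMMUSecSID sec_sid)
{
    return sec_sid == SMMU_SEC_SID_NS ? &ns_as : NULL;
}

/*
 * Option A (as in the patch): a missing AddressSpace fails the config
 * lookup, so the caller ends the translation with SMMU_TRANS_ERROR.
 * Option B would be g_assert(cfg->as), turning a mis-wired board into
 * a hard QEMU abort instead of a per-transaction error.
 */
static SMMUTransCfg *get_config(SMMUSecSID sec_sid)
{
    SMMUTransCfg *cfg = calloc(1, sizeof(*cfg));

    cfg->sec_sid = sec_sid;
    cfg->as = smmu_get_address_space(sec_sid);
    if (!cfg->as) {
        free(cfg);
        return NULL;
    }
    return cfg;
}
```

With Option B, the first secure transaction on a board that never wired up
the secure address space would abort QEMU, which is arguably the right
outcome for an integration bug but invisible to the guest.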
Yours,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 10/21] hw/arm/smmu-common: Key configuration cache on SMMUDevice and SEC_SID
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (8 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 09/21] hw/arm/smmuv3: Plumb transaction attributes into config helpers Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-12-02 14:18 ` Eric Auger
2025-10-12 15:06 ` [RFC v3 11/21] hw/arm/smmuv3: Decode security attributes from descriptors Tao Tang
` (10 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Adapt the configuration cache to support multiple security states by
introducing a composite key, SMMUConfigKey. This key combines the
SMMUDevice with the SEC_SID, preventing aliasing between Secure and
Non-secure configurations for the same device, as well as with the
future Realm and Root configurations.
The cache lookup, insertion, and invalidation mechanisms are updated
to use this new keying infrastructure. This change is critical for
ensuring correct translation when a device is active in more than one
security world.
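To illustrate why the composite key prevents aliasing, here is a standalone
sketch of the equality check (it mirrors smmu_config_key_equal() from this
patch; the surrounding types are simplified stand-ins, not the QEMU ones):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins: only pointer identity of the device matters. */
typedef struct SMMUDevice { int unused; } SMMUDevice;
typedef enum { SMMU_SEC_SID_NS, SMMU_SEC_SID_S } SMMUSecSID;

typedef struct SMMUConfigKey {
    SMMUDevice *sdev;
    SMMUSecSID sec_sid;
} SMMUConfigKey;

/*
 * Mirrors smmu_config_key_equal(): two cached configs for the same
 * device no longer collide when they belong to different security
 * states. With the old pointer-only key, both would have mapped to
 * the same hash-table slot.
 */
static bool key_equal(const SMMUConfigKey *a, const SMMUConfigKey *b)
{
    return a->sdev == b->sdev && a->sec_sid == b->sec_sid;
}
```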
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 45 ++++++++++++++++++++++++++++++++++--
hw/arm/smmuv3.c | 13 +++++++----
include/hw/arm/smmu-common.h | 7 ++++++
3 files changed, 58 insertions(+), 7 deletions(-)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 82308f0e33..5fabe30c75 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -30,6 +30,26 @@
#include "hw/arm/smmu-common.h"
#include "smmu-internal.h"
+/* Configuration Cache Management */
+static guint smmu_config_key_hash(gconstpointer key)
+{
+ const SMMUConfigKey *k = key;
+ return g_direct_hash(k->sdev) ^ (guint)k->sec_sid;
+}
+
+static gboolean smmu_config_key_equal(gconstpointer a, gconstpointer b)
+{
+ const SMMUConfigKey *ka = a;
+ const SMMUConfigKey *kb = b;
+ return ka->sdev == kb->sdev && ka->sec_sid == kb->sec_sid;
+}
+
+SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecSID sec_sid)
+{
+ SMMUConfigKey key = {.sdev = sdev, .sec_sid = sec_sid};
+ return key;
+}
+
ARMSecuritySpace smmu_get_security_space(SMMUSecSID sec_sid)
{
switch (sec_sid) {
@@ -256,7 +276,8 @@ static gboolean smmu_hash_remove_by_vmid_ipa(gpointer key, gpointer value,
static gboolean
smmu_hash_remove_by_sid_range(gpointer key, gpointer value, gpointer user_data)
{
- SMMUDevice *sdev = (SMMUDevice *)key;
+ SMMUConfigKey *config_key = (SMMUConfigKey *)key;
+ SMMUDevice *sdev = config_key->sdev;
uint32_t sid = smmu_get_sid(sdev);
SMMUSIDRange *sid_range = (SMMUSIDRange *)user_data;
@@ -274,6 +295,24 @@ void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range)
&sid_range);
}
+static gboolean smmu_hash_remove_by_sdev(gpointer key, gpointer value,
+ gpointer user_data)
+{
+ SMMUConfigKey *config_key = (SMMUConfigKey *)key;
+ SMMUDevice *target = (SMMUDevice *)user_data;
+
+ if (config_key->sdev != target) {
+ return false;
+ }
+ trace_smmu_config_cache_inv(smmu_get_sid(target));
+ return true;
+}
+
+void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
+{
+ g_hash_table_foreach_remove(s->configs, smmu_hash_remove_by_sdev, sdev);
+}
+
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages, uint8_t ttl)
{
@@ -961,7 +1000,9 @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
error_propagate(errp, local_err);
return;
}
- s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
+ s->configs = g_hash_table_new_full(smmu_config_key_hash,
+ smmu_config_key_equal,
+ g_free, g_free);
s->iotlb = g_hash_table_new_full(smmu_iotlb_key_hash, smmu_iotlb_key_equal,
g_free, g_free);
s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 351bbf1ae9..55f4ad1757 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -878,10 +878,11 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
*
* @sdev: SMMUDevice handle
* @event: output event info
+ * @sec_sid: StreamID Security state
*
* The configuration cache contains data resulting from both STE and CD
* decoding under the form of an SMMUTransCfg struct. The hash table is indexed
- * by the SMMUDevice handle.
+ * by a composite key of the SMMUDevice and the sec_sid.
*/
static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
SMMUSecSID sec_sid)
@@ -889,8 +890,9 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
SMMUv3State *s = sdev->smmu;
SMMUState *bc = &s->smmu_state;
SMMUTransCfg *cfg;
+ SMMUConfigKey lookup_key = smmu_get_config_key(sdev, sec_sid);
- cfg = g_hash_table_lookup(bc->configs, sdev);
+ cfg = g_hash_table_lookup(bc->configs, &lookup_key);
if (cfg) {
sdev->cfg_cache_hits++;
trace_smmuv3_config_cache_hit(smmu_get_sid(sdev),
@@ -915,7 +917,9 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
}
if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
- g_hash_table_insert(bc->configs, sdev, cfg);
+ SMMUConfigKey *persistent_key = g_new(SMMUConfigKey, 1);
+ *persistent_key = lookup_key;
+ g_hash_table_insert(bc->configs, persistent_key, cfg);
} else {
g_free(cfg);
cfg = NULL;
@@ -929,8 +933,7 @@ static void smmuv3_flush_config(SMMUDevice *sdev)
SMMUv3State *s = sdev->smmu;
SMMUState *bc = &s->smmu_state;
- trace_smmu_config_cache_inv(smmu_get_sid(sdev));
- g_hash_table_remove(bc->configs, sdev);
+ smmu_configs_inv_sdev(bc, sdev);
}
/* Do translation with TLB lookup. */
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index c17c7db6e5..bccbbe0115 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -182,6 +182,11 @@ typedef struct SMMUIOTLBKey {
uint8_t level;
} SMMUIOTLBKey;
+typedef struct SMMUConfigKey {
+ SMMUDevice *sdev;
+ SMMUSecSID sec_sid;
+} SMMUConfigKey;
+
typedef struct SMMUSIDRange {
uint32_t start;
uint32_t end;
@@ -257,6 +262,7 @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
uint8_t tg, uint8_t level);
+SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecSID sec_sid);
void smmu_iotlb_inv_all(SMMUState *s);
void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
@@ -266,6 +272,7 @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
uint64_t num_pages, uint8_t ttl);
void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range);
+void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev);
/* Unmap the range of all the notifiers registered to any IOMMU mr */
void smmu_inv_notifiers_all(SMMUState *s);
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread

* Re: [RFC v3 10/21] hw/arm/smmu-common: Key configuration cache on SMMUDevice and SEC_SID
2025-10-12 15:06 ` [RFC v3 10/21] hw/arm/smmu-common: Key configuration cache on SMMUDevice and SEC_SID Tao Tang
@ 2025-12-02 14:18 ` Eric Auger
0 siblings, 0 replies; 67+ messages in thread
From: Eric Auger @ 2025-12-02 14:18 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 5:06 PM, Tao Tang wrote:
> Adapt the configuration cache to support multiple security states by
> introducing a composite key, SMMUConfigKey. This key combines the
> SMMUDevice with the SEC_SID, preventing aliasing between Secure and
> Non-secure configurations for the same device, as well as with the
> future Realm and Root configurations.
>
> The cache lookup, insertion, and invalidation mechanisms are updated
> to use this new keying infrastructure. This change is critical for
> ensuring correct translation when a device is active in more than one
> security world.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Eric
> ---
> hw/arm/smmu-common.c | 45 ++++++++++++++++++++++++++++++++++--
> hw/arm/smmuv3.c | 13 +++++++----
> include/hw/arm/smmu-common.h | 7 ++++++
> 3 files changed, 58 insertions(+), 7 deletions(-)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 82308f0e33..5fabe30c75 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -30,6 +30,26 @@
> #include "hw/arm/smmu-common.h"
> #include "smmu-internal.h"
>
> +/* Configuration Cache Management */
> +static guint smmu_config_key_hash(gconstpointer key)
> +{
> + const SMMUConfigKey *k = key;
> + return g_direct_hash(k->sdev) ^ (guint)k->sec_sid;
> +}
> +
> +static gboolean smmu_config_key_equal(gconstpointer a, gconstpointer b)
> +{
> + const SMMUConfigKey *ka = a;
> + const SMMUConfigKey *kb = b;
> + return ka->sdev == kb->sdev && ka->sec_sid == kb->sec_sid;
> +}
> +
> +SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecSID sec_sid)
> +{
> + SMMUConfigKey key = {.sdev = sdev, .sec_sid = sec_sid};
> + return key;
> +}
> +
> ARMSecuritySpace smmu_get_security_space(SMMUSecSID sec_sid)
> {
> switch (sec_sid) {
> @@ -256,7 +276,8 @@ static gboolean smmu_hash_remove_by_vmid_ipa(gpointer key, gpointer value,
> static gboolean
> smmu_hash_remove_by_sid_range(gpointer key, gpointer value, gpointer user_data)
> {
> - SMMUDevice *sdev = (SMMUDevice *)key;
> + SMMUConfigKey *config_key = (SMMUConfigKey *)key;
> + SMMUDevice *sdev = config_key->sdev;
> uint32_t sid = smmu_get_sid(sdev);
> SMMUSIDRange *sid_range = (SMMUSIDRange *)user_data;
>
> @@ -274,6 +295,24 @@ void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range)
> &sid_range);
> }
>
> +static gboolean smmu_hash_remove_by_sdev(gpointer key, gpointer value,
> + gpointer user_data)
> +{
> + SMMUConfigKey *config_key = (SMMUConfigKey *)key;
> + SMMUDevice *target = (SMMUDevice *)user_data;
> +
> + if (config_key->sdev != target) {
> + return false;
> + }
> + trace_smmu_config_cache_inv(smmu_get_sid(target));
> + return true;
> +}
> +
> +void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
> +{
> + g_hash_table_foreach_remove(s->configs, smmu_hash_remove_by_sdev, sdev);
> +}
> +
> void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> uint8_t tg, uint64_t num_pages, uint8_t ttl)
> {
> @@ -961,7 +1000,9 @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
> error_propagate(errp, local_err);
> return;
> }
> - s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
> + s->configs = g_hash_table_new_full(smmu_config_key_hash,
> + smmu_config_key_equal,
> + g_free, g_free);
> s->iotlb = g_hash_table_new_full(smmu_iotlb_key_hash, smmu_iotlb_key_equal,
> g_free, g_free);
> s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 351bbf1ae9..55f4ad1757 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -878,10 +878,11 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
> *
> * @sdev: SMMUDevice handle
> * @event: output event info
> + * @sec_sid: StreamID Security state
> *
> * The configuration cache contains data resulting from both STE and CD
> * decoding under the form of an SMMUTransCfg struct. The hash table is indexed
> - * by the SMMUDevice handle.
> + * by a composite key of the SMMUDevice and the sec_sid.
> */
> static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
> SMMUSecSID sec_sid)
> @@ -889,8 +890,9 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
> SMMUv3State *s = sdev->smmu;
> SMMUState *bc = &s->smmu_state;
> SMMUTransCfg *cfg;
> + SMMUConfigKey lookup_key = smmu_get_config_key(sdev, sec_sid);
>
> - cfg = g_hash_table_lookup(bc->configs, sdev);
> + cfg = g_hash_table_lookup(bc->configs, &lookup_key);
> if (cfg) {
> sdev->cfg_cache_hits++;
> trace_smmuv3_config_cache_hit(smmu_get_sid(sdev),
> @@ -915,7 +917,9 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
> }
>
> if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
> - g_hash_table_insert(bc->configs, sdev, cfg);
> + SMMUConfigKey *persistent_key = g_new(SMMUConfigKey, 1);
> + *persistent_key = lookup_key;
> + g_hash_table_insert(bc->configs, persistent_key, cfg);
> } else {
> g_free(cfg);
> cfg = NULL;
> @@ -929,8 +933,7 @@ static void smmuv3_flush_config(SMMUDevice *sdev)
> SMMUv3State *s = sdev->smmu;
> SMMUState *bc = &s->smmu_state;
>
> - trace_smmu_config_cache_inv(smmu_get_sid(sdev));
> - g_hash_table_remove(bc->configs, sdev);
> + smmu_configs_inv_sdev(bc, sdev);
> }
>
> /* Do translation with TLB lookup. */
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index c17c7db6e5..bccbbe0115 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -182,6 +182,11 @@ typedef struct SMMUIOTLBKey {
> uint8_t level;
> } SMMUIOTLBKey;
>
> +typedef struct SMMUConfigKey {
> + SMMUDevice *sdev;
> + SMMUSecSID sec_sid;
> +} SMMUConfigKey;
> +
> typedef struct SMMUSIDRange {
> uint32_t start;
> uint32_t end;
> @@ -257,6 +262,7 @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
> void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
> SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
> uint8_t tg, uint8_t level);
> +SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecSID sec_sid);
> void smmu_iotlb_inv_all(SMMUState *s);
> void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
> void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
> @@ -266,6 +272,7 @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
> uint64_t num_pages, uint8_t ttl);
> void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range);
> +void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev);
> /* Unmap the range of all the notifiers registered to any IOMMU mr */
> void smmu_inv_notifiers_all(SMMUState *s);
>
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 11/21] hw/arm/smmuv3: Decode security attributes from descriptors
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (9 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 10/21] hw/arm/smmu-common: Key configuration cache on SMMUDevice and SEC_SID Tao Tang
@ 2025-10-12 15:06 ` Tao Tang
2025-12-02 15:19 ` Eric Auger
2025-10-12 15:12 ` [RFC v3 12/21] hw/arm/smmu-common: Implement secure state handling in ptw Tao Tang
` (9 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:06 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
As the first step in implementing secure page table walks, this patch
introduces the logic to decode security-related attributes from various
SMMU structures.
The NSCFG bits from the Context Descriptor are now decoded and stored.
These bits control the security attribute of the starting-level
translation table, which is crucial for managing secure and non-secure
memory accesses.
The SMMU_S_IDR1.SEL2 bit is read to determine if Secure stage 2
translations are supported. This capability is cached in the
SMMUTransCfg structure for the page table walker's use.
Finally, new macros (PTE_NS, PTE_NSTABLE) are added to prepare for
extracting attributes from page and table descriptors. To improve
clarity, these different attribute bits are organized into distinct
subsections in the header file.
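As a standalone illustration of the new macros, the sketch below
re-implements a minimal extract64() locally (the real helper lives in
QEMU's bitops header) and exercises the bit positions used by PTE_NS
(bit 5 of a block/page descriptor) and PTE_NSTABLE (bit 63 of a table
descriptor):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal local re-implementation of QEMU's extract64() helper. */
static inline uint64_t extract64(uint64_t value, int start, int length)
{
    return (value >> start) & (~0ULL >> (64 - length));
}

/* The attribute macros as added by this patch. */
#define PTE_NS(pte)      (extract64(pte, 5, 1))   /* block/page NS bit */
#define PTE_NSTABLE(pte) (extract64(pte, 63, 1))  /* table NSTable bit */
```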
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-internal.h | 16 ++++++++++++++--
hw/arm/smmuv3-internal.h | 2 ++
hw/arm/smmuv3.c | 2 ++
include/hw/arm/smmu-common.h | 3 +++
4 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
index d143d296f3..a0454f720d 100644
--- a/hw/arm/smmu-internal.h
+++ b/hw/arm/smmu-internal.h
@@ -58,16 +58,28 @@
((level == 3) && \
((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_L3_PTE_TYPE_PAGE))
+/* Block & page descriptor attributes */
+/* Non-secure bit */
+#define PTE_NS(pte) \
+ (extract64(pte, 5, 1))
+
/* access permissions */
#define PTE_AP(pte) \
(extract64(pte, 6, 2))
+/* access flag */
+#define PTE_AF(pte) \
+ (extract64(pte, 10, 1))
+
+
+/* Table descriptor attributes */
#define PTE_APTABLE(pte) \
(extract64(pte, 61, 2))
-#define PTE_AF(pte) \
- (extract64(pte, 10, 1))
+#define PTE_NSTABLE(pte) \
+ (extract64(pte, 63, 1))
+
/*
* TODO: At the moment all transactions are considered as privileged (EL1)
* as IOMMU translation callback does not pass user/priv attributes.
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 99fdbcf3f5..1e757af459 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -703,6 +703,8 @@ static inline int oas2bits(int oas_field)
#define CD_R(x) extract32((x)->word[1], 13, 1)
#define CD_A(x) extract32((x)->word[1], 14, 1)
#define CD_AARCH64(x) extract32((x)->word[1], 9 , 1)
+#define CD_NSCFG0(x) extract32((x)->word[2], 0, 1)
+#define CD_NSCFG1(x) extract32((x)->word[4], 0, 1)
/**
* tg2granule - Decodes the CD translation granule size field according
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 55f4ad1757..3686056d8e 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -812,6 +812,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
tt->ttb = CACHED_ENTRY_TO_ADDR(entry, tt->ttb);
}
+ tt->nscfg = i ? CD_NSCFG1(cd) : CD_NSCFG0(cd);
tt->had = CD_HAD(cd, i);
trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
}
@@ -915,6 +916,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
cfg = NULL;
return cfg;
}
+ cfg->sel2 = FIELD_EX32(s->bank[SMMU_SEC_SID_S].idr[1], S_IDR1, SEL2);
if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
SMMUConfigKey *persistent_key = g_new(SMMUConfigKey, 1);
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index bccbbe0115..90a37fe32d 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -109,6 +109,7 @@ typedef struct SMMUTransTableInfo {
uint8_t tsz; /* input range, ie. 2^(64 -tsz)*/
uint8_t granule_sz; /* granule page shift */
bool had; /* hierarchical attribute disable */
+ int nscfg; /* Non-secure attribute of Starting-level TT */
} SMMUTransTableInfo;
typedef struct SMMUTLBEntry {
@@ -116,6 +117,7 @@ typedef struct SMMUTLBEntry {
uint8_t level;
uint8_t granule;
IOMMUAccessFlags parent_perm;
+ SMMUSecSID sec_sid;
} SMMUTLBEntry;
/* Stage-2 configuration. */
@@ -156,6 +158,7 @@ typedef struct SMMUTransCfg {
struct SMMUS2Cfg s2cfg;
MemTxAttrs txattrs; /* cached transaction attributes */
AddressSpace *as; /* cached address space */
+ int sel2; /* Secure EL2 and Secure stage 2 support */
} SMMUTransCfg;
typedef struct SMMUDevice {
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* Re: [RFC v3 11/21] hw/arm/smmuv3: Decode security attributes from descriptors
2025-10-12 15:06 ` [RFC v3 11/21] hw/arm/smmuv3: Decode security attributes from descriptors Tao Tang
@ 2025-12-02 15:19 ` Eric Auger
2025-12-03 14:30 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-02 15:19 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 5:06 PM, Tao Tang wrote:
> As the first step in implementing secure page table walks, this patch
> introduces the logic to decode security-related attributes from various
> SMMU structures.
>
> The NSCFG bits from the Context Descriptor are now decoded and stored.
> These bits control the security attribute of the starting-level
> translation table, which is crucial for managing secure and non-secure
> memory accesses.
>
> The SMMU_S_IDR1.SEL2 bit is read to determine if Secure stage 2
> translations are supported. This capability is cached in the
> SMMUTransCfg structure for the page table walker's use.
>
> Finally, new macros (PTE_NS, PTE_NSTABLE) are added to prepare for
> extracting attributes from page and table descriptors. To improve
> clarity, these different attribute bits are organized into distinct
> subsections in the header file.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-internal.h | 16 ++++++++++++++--
> hw/arm/smmuv3-internal.h | 2 ++
> hw/arm/smmuv3.c | 2 ++
> include/hw/arm/smmu-common.h | 3 +++
> 4 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
> index d143d296f3..a0454f720d 100644
> --- a/hw/arm/smmu-internal.h
> +++ b/hw/arm/smmu-internal.h
> @@ -58,16 +58,28 @@
> ((level == 3) && \
> ((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_L3_PTE_TYPE_PAGE))
>
> +/* Block & page descriptor attributes */
> +/* Non-secure bit */
> +#define PTE_NS(pte) \
> + (extract64(pte, 5, 1))
> +
> /* access permissions */
>
> #define PTE_AP(pte) \
> (extract64(pte, 6, 2))
>
> +/* access flag */
> +#define PTE_AF(pte) \
> + (extract64(pte, 10, 1))
> +
> +
> +/* Table descriptor attributes */
> #define PTE_APTABLE(pte) \
> (extract64(pte, 61, 2))
>
> -#define PTE_AF(pte) \
> - (extract64(pte, 10, 1))
> +#define PTE_NSTABLE(pte) \
> + (extract64(pte, 63, 1))
> +
> /*
> * TODO: At the moment all transactions are considered as privileged (EL1)
> * as IOMMU translation callback does not pass user/priv attributes.
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index 99fdbcf3f5..1e757af459 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -703,6 +703,8 @@ static inline int oas2bits(int oas_field)
> #define CD_R(x) extract32((x)->word[1], 13, 1)
> #define CD_A(x) extract32((x)->word[1], 14, 1)
> #define CD_AARCH64(x) extract32((x)->word[1], 9 , 1)
> +#define CD_NSCFG0(x) extract32((x)->word[2], 0, 1)
> +#define CD_NSCFG1(x) extract32((x)->word[4], 0, 1)
>
> /**
> * tg2granule - Decodes the CD translation granule size field according
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 55f4ad1757..3686056d8e 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -812,6 +812,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
> tt->ttb = CACHED_ENTRY_TO_ADDR(entry, tt->ttb);
> }
>
> + tt->nscfg = i ? CD_NSCFG1(cd) : CD_NSCFG0(cd);
> tt->had = CD_HAD(cd, i);
> trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
> }
> @@ -915,6 +916,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
> cfg = NULL;
> return cfg;
> }
> + cfg->sel2 = FIELD_EX32(s->bank[SMMU_SEC_SID_S].idr[1], S_IDR1, SEL2);
I don't get why we store sel2 in the cfg as it does not vary.
Thanks
Eric
>
> if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
> SMMUConfigKey *persistent_key = g_new(SMMUConfigKey, 1);
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index bccbbe0115..90a37fe32d 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -109,6 +109,7 @@ typedef struct SMMUTransTableInfo {
> uint8_t tsz; /* input range, ie. 2^(64 -tsz)*/
> uint8_t granule_sz; /* granule page shift */
> bool had; /* hierarchical attribute disable */
> + int nscfg; /* Non-secure attribute of Starting-level TT */
> } SMMUTransTableInfo;
>
> typedef struct SMMUTLBEntry {
> @@ -116,6 +117,7 @@ typedef struct SMMUTLBEntry {
> uint8_t level;
> uint8_t granule;
> IOMMUAccessFlags parent_perm;
> + SMMUSecSID sec_sid;
> } SMMUTLBEntry;
>
> /* Stage-2 configuration. */
> @@ -156,6 +158,7 @@ typedef struct SMMUTransCfg {
> struct SMMUS2Cfg s2cfg;
> MemTxAttrs txattrs; /* cached transaction attributes */
> AddressSpace *as; /* cached address space */
> + int sel2; /* Secure EL2 and Secure stage 2 support */
> } SMMUTransCfg;
>
> typedef struct SMMUDevice {
^ permalink raw reply [flat|nested] 67+ messages in thread* Re: [RFC v3 11/21] hw/arm/smmuv3: Decode security attributes from descriptors
2025-12-02 15:19 ` Eric Auger
@ 2025-12-03 14:30 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-03 14:30 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/2 23:19, Eric Auger wrote:
> Hi Tao,
>
> On 10/12/25 5:06 PM, Tao Tang wrote:
>> As the first step in implementing secure page table walks, this patch
>> introduces the logic to decode security-related attributes from various
>> SMMU structures.
>>
>> The NSCFG bits from the Context Descriptor are now decoded and stored.
>> These bits control the security attribute of the starting-level
>> translation table, which is crucial for managing secure and non-secure
>> memory accesses.
>>
>> The SMMU_S_IDR1.SEL2 bit is read to determine if Secure stage 2
>> translations are supported. This capability is cached in the
>> SMMUTransCfg structure for the page table walker's use.
>>
>> Finally, new macros (PTE_NS, PTE_NSTABLE) are added to prepare for
>> extracting attributes from page and table descriptors. To improve
>> clarity, these different attribute bits are organized into distinct
>> subsections in the header file.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmu-internal.h | 16 ++++++++++++++--
>> hw/arm/smmuv3-internal.h | 2 ++
>> hw/arm/smmuv3.c | 2 ++
>> include/hw/arm/smmu-common.h | 3 +++
>> 4 files changed, 21 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
>> index d143d296f3..a0454f720d 100644
>> --- a/hw/arm/smmu-internal.h
>> +++ b/hw/arm/smmu-internal.h
>> @@ -58,16 +58,28 @@
>> ((level == 3) && \
>> ((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_L3_PTE_TYPE_PAGE))
>>
>> +/* Block & page descriptor attributes */
>> +/* Non-secure bit */
>> +#define PTE_NS(pte) \
>> + (extract64(pte, 5, 1))
>> +
>> /* access permissions */
>>
>> #define PTE_AP(pte) \
>> (extract64(pte, 6, 2))
>>
>> +/* access flag */
>> +#define PTE_AF(pte) \
>> + (extract64(pte, 10, 1))
>> +
>> +
>> +/* Table descriptor attributes */
>> #define PTE_APTABLE(pte) \
>> (extract64(pte, 61, 2))
>>
>> -#define PTE_AF(pte) \
>> - (extract64(pte, 10, 1))
>> +#define PTE_NSTABLE(pte) \
>> + (extract64(pte, 63, 1))
>> +
>> /*
>> * TODO: At the moment all transactions are considered as privileged (EL1)
>> * as IOMMU translation callback does not pass user/priv attributes.
>> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
>> index 99fdbcf3f5..1e757af459 100644
>> --- a/hw/arm/smmuv3-internal.h
>> +++ b/hw/arm/smmuv3-internal.h
>> @@ -703,6 +703,8 @@ static inline int oas2bits(int oas_field)
>> #define CD_R(x) extract32((x)->word[1], 13, 1)
>> #define CD_A(x) extract32((x)->word[1], 14, 1)
>> #define CD_AARCH64(x) extract32((x)->word[1], 9 , 1)
>> +#define CD_NSCFG0(x) extract32((x)->word[2], 0, 1)
>> +#define CD_NSCFG1(x) extract32((x)->word[4], 0, 1)
>>
>> /**
>> * tg2granule - Decodes the CD translation granule size field according
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index 55f4ad1757..3686056d8e 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -812,6 +812,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
>> tt->ttb = CACHED_ENTRY_TO_ADDR(entry, tt->ttb);
>> }
>>
>> + tt->nscfg = i ? CD_NSCFG1(cd) : CD_NSCFG0(cd);
>> tt->had = CD_HAD(cd, i);
>> trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
>> }
>> @@ -915,6 +916,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
>> cfg = NULL;
>> return cfg;
>> }
>> + cfg->sel2 = FIELD_EX32(s->bank[SMMU_SEC_SID_S].idr[1], S_IDR1, SEL2);
> I don't get why we store sel2 in the cfg as it does not vary.
>
> Thanks
>
> Eric
You're absolutely right: caching SEL2 in SMMUTransCfg was unnecessary,
as the value never varies. I didn't think it through carefully at the
time. I'll drop that change in the next revision.
Thanks,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 12/21] hw/arm/smmu-common: Implement secure state handling in ptw
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (10 preceding siblings ...)
2025-10-12 15:06 ` [RFC v3 11/21] hw/arm/smmuv3: Decode security attributes from descriptors Tao Tang
@ 2025-10-12 15:12 ` Tao Tang
2025-12-02 15:53 ` Eric Auger
2025-10-12 15:12 ` [RFC v3 13/21] hw/arm/smmuv3: Tag IOTLB cache keys with SEC_SID Tao Tang
` (8 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:12 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Enhance the page table walker to correctly handle secure and non-secure
memory accesses. This change introduces logic to select the appropriate
address space and enforce architectural security policies during walks.
The page table walker now correctly processes Secure Stage 1
translations. Key changes include:
- The get_pte function now uses the security context to fetch table
entries from either the Secure or Non-secure address space.
- The stage 1 walker tracks the security state, respecting the NSCFG
and NSTable attributes. It correctly handles the hierarchical security
model: if a table descriptor in a secure walk has NSTable=1, all
subsequent lookups for that walk are forced into the Non-secure space.
This is a one-way transition, as specified by the architecture.
- A check is added to fault nested translations that produce a Secure
IPA when Secure stage 2 is not supported (SMMU_S_IDR1.SEL2 == 0).
- The final TLB entry is tagged with the correct output address space,
ensuring proper memory isolation.
Stage 2 translations are currently limited to Non-secure lookups. Full
support for Secure Stage 2 translation will be added in a future series.
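The one-way Secure-to-Non-secure transition described above can be sketched in isolation as follows. The type and function names here are illustrative, not the patch's code; the update rule is the same as the forced_ns/nscfg handling in the walker below:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool forced_ns;  /* latched once NSTable=1 is seen in a Secure walk */
    int  nscfg;      /* NS attribute used for the current table lookup */
} WalkSecState;

/* Update the walk's security state from the NSTable bit of a table PTE */
static void walk_update_ns(WalkSecState *w, int nstable_bit)
{
    if (w->forced_ns) {
        return;              /* NSTable is ignored after the transition */
    }
    if (!w->nscfg && nstable_bit) {
        w->forced_ns = true; /* first Secure -> Non-secure transition */
        w->nscfg = 1;
    } else {
        w->nscfg = nstable_bit;
    }
}
```

Once forced_ns latches, later descriptors cannot switch the walk back to Secure, which is the one-way behaviour required by IHI 0070 13.4.1 and DDI 0487 D8.4.2.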
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 64 +++++++++++++++++++++++++++++++++++++++-----
hw/arm/trace-events | 2 +-
2 files changed, 59 insertions(+), 7 deletions(-)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 5fabe30c75..a092bb5a8d 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -399,20 +399,26 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
* @base_addr[@index]
*/
static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
- SMMUPTWEventInfo *info)
+ SMMUPTWEventInfo *info, SMMUSecSID sec_sid)
{
int ret;
dma_addr_t addr = baseaddr + index * sizeof(*pte);
-
+ MemTxAttrs attrs = smmu_get_txattrs(sec_sid);
+ AddressSpace *as = smmu_get_address_space(sec_sid);
+ if (!as) {
+ info->type = SMMU_PTW_ERR_WALK_EABT;
+ info->addr = addr;
+ return -EINVAL;
+ }
/* TODO: guarantee 64-bit single-copy atomicity */
- ret = ldq_le_dma(&address_space_memory, addr, pte, MEMTXATTRS_UNSPECIFIED);
+ ret = ldq_le_dma(as, addr, pte, attrs);
if (ret != MEMTX_OK) {
info->type = SMMU_PTW_ERR_WALK_EABT;
info->addr = addr;
return -EINVAL;
}
- trace_smmu_get_pte(baseaddr, index, addr, *pte);
+ trace_smmu_get_pte(sec_sid, baseaddr, index, addr, *pte);
return 0;
}
@@ -543,6 +549,8 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
baseaddr = extract64(tt->ttb, 0, cfg->oas);
baseaddr &= ~indexmask;
+ int nscfg = tt->nscfg;
+ bool forced_ns = false; /* Track if NSTable=1 forced NS mode */
while (level < VMSA_LEVELS) {
uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
@@ -552,7 +560,10 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
uint8_t ap;
- if (get_pte(baseaddr, offset, &pte, info)) {
+ /* Use NS if forced by previous NSTable=1 or current nscfg */
+ int current_ns = forced_ns || nscfg;
+ SMMUSecSID sec_sid = current_ns ? SMMU_SEC_SID_NS : SMMU_SEC_SID_S;
+ if (get_pte(baseaddr, offset, &pte, info, sec_sid)) {
goto error;
}
trace_smmu_ptw_level(stage, level, iova, subpage_size,
@@ -577,6 +588,26 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
goto error;
}
}
+
+ /*
+ * Hierarchical control of Secure/Non-secure accesses:
+ * If NSTable=1 from Secure space, force all subsequent lookups to
+ * Non-secure space and ignore future NSTable according to
+ * (IHI 0070G.b)13.4.1 Stage 1 page permissions and
+ * (DDI 0487H.a)D8.4.2 Control of Secure or Non-secure memory access
+ */
+ if (!forced_ns) {
+ int new_nstable = PTE_NSTABLE(pte);
+ if (!current_ns && new_nstable) {
+ /* First transition from Secure to Non-secure */
+ forced_ns = true;
+ nscfg = 1;
+ } else if (!forced_ns) {
+ /* Still in original mode, update nscfg normally */
+ nscfg = new_nstable;
+ }
+ /* If forced_ns is already true, ignore NSTable bit */
+ }
level++;
continue;
} else if (is_page_pte(pte, level)) {
@@ -619,6 +650,13 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
goto error;
}
+ tlbe->sec_sid = PTE_NS(pte) ? SMMU_SEC_SID_NS : SMMU_SEC_SID_S;
+ tlbe->entry.target_as = smmu_get_address_space(tlbe->sec_sid);
+ if (!tlbe->entry.target_as) {
+ info->type = SMMU_PTW_ERR_WALK_EABT;
+ info->addr = gpa;
+ goto error;
+ }
tlbe->entry.translated_addr = gpa;
tlbe->entry.iova = iova & ~mask;
tlbe->entry.addr_mask = mask;
@@ -688,7 +726,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
uint8_t s2ap;
- if (get_pte(baseaddr, offset, &pte, info)) {
+ /* Use NS as Secure Stage 2 is not implemented (SMMU_S_IDR1.SEL2 == 0)*/
+ if (get_pte(baseaddr, offset, &pte, info, SMMU_SEC_SID_NS)) {
goto error;
}
trace_smmu_ptw_level(stage, level, ipa, subpage_size,
@@ -741,6 +780,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
goto error_ipa;
}
+ tlbe->sec_sid = SMMU_SEC_SID_NS;
+ tlbe->entry.target_as = &address_space_memory;
tlbe->entry.translated_addr = gpa;
tlbe->entry.iova = ipa & ~mask;
tlbe->entry.addr_mask = mask;
@@ -825,6 +866,17 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
return ret;
}
+ if (!cfg->sel2 && tlbe->sec_sid > SMMU_SEC_SID_NS) {
+ /*
+ * Nested translation with Secure IPA output is not supported if
+ * Secure Stage 2 is not implemented.
+ */
+ info->type = SMMU_PTW_ERR_TRANSLATION;
+ info->stage = SMMU_STAGE_1;
+ tlbe->entry.perm = IOMMU_NONE;
+ return -EINVAL;
+ }
+
ipa = CACHED_ENTRY_TO_ADDR(tlbe, iova);
ret = smmu_ptw_64_s2(cfg, ipa, perm, &tlbe_s2, info);
if (ret) {
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 96ebd1b11b..a37e894766 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -16,7 +16,7 @@ smmu_ptw_level(int stage, int level, uint64_t iova, size_t subpage_size, uint64_
smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint32_t offset, uint64_t pte) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" offset=%d pte=0x%"PRIx64
smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
-smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
+smmu_get_pte(int sec_sid, uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "sec_sid=%d baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64""
smmu_iotlb_inv_all(void) "IOTLB invalidate all"
smmu_iotlb_inv_asid_vmid(int asid, int vmid) "IOTLB invalidate asid=%d vmid=%d"
smmu_iotlb_inv_vmid(int vmid) "IOTLB invalidate vmid=%d"
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* Re: [RFC v3 12/21] hw/arm/smmu-common: Implement secure state handling in ptw
2025-10-12 15:12 ` [RFC v3 12/21] hw/arm/smmu-common: Implement secure state handling in ptw Tao Tang
@ 2025-12-02 15:53 ` Eric Auger
2025-12-03 15:10 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-02 15:53 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:12 PM, Tao Tang wrote:
> Enhance the page table walker to correctly handle secure and non-secure
> memory accesses. This change introduces logic to select the appropriate
> address space and enforce architectural security policies during walks.
>
> The page table walker now correctly processes Secure Stage 1
> translations. Key changes include:
>
> - The get_pte function now uses the security context to fetch table
> entries from either the Secure or Non-secure address space.
>
> - The stage 1 walker tracks the security state, respecting the NSCFG
> and NSTable attributes. It correctly handles the hierarchical security
> model: if a table descriptor in a secure walk has NSTable=1, all
> subsequent lookups for that walk are forced into the Non-secure space.
> This is a one-way transition, as specified by the architecture.
>
> - A check is added to fault nested translations that produce a Secure
> IPA when Secure stage 2 is not supported (SMMU_S_IDR1.SEL2 == 0).
>
> - The final TLB entry is tagged with the correct output address space,
> ensuring proper memory isolation.
>
> Stage 2 translations are currently limited to Non-secure lookups. Full
> support for Secure Stage 2 translation will be added in a future series.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 64 +++++++++++++++++++++++++++++++++++++++-----
> hw/arm/trace-events | 2 +-
> 2 files changed, 59 insertions(+), 7 deletions(-)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 5fabe30c75..a092bb5a8d 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -399,20 +399,26 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
> * @base_addr[@index]
> */
> static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
> - SMMUPTWEventInfo *info)
> + SMMUPTWEventInfo *info, SMMUSecSID sec_sid)
> {
> int ret;
> dma_addr_t addr = baseaddr + index * sizeof(*pte);
> -
> + MemTxAttrs attrs = smmu_get_txattrs(sec_sid);
> + AddressSpace *as = smmu_get_address_space(sec_sid);
> + if (!as) {
> + info->type = SMMU_PTW_ERR_WALK_EABT;
is it WALK_EABT or PERMISSION in that case? I fail to find where it is
specified in the spec. Add a reference once?
> + info->addr = addr;
> + return -EINVAL;
> + }
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = ldq_le_dma(&address_space_memory, addr, pte, MEMTXATTRS_UNSPECIFIED);
> + ret = ldq_le_dma(as, addr, pte, attrs);
>
> if (ret != MEMTX_OK) {
> info->type = SMMU_PTW_ERR_WALK_EABT;
> info->addr = addr;
> return -EINVAL;
> }
> - trace_smmu_get_pte(baseaddr, index, addr, *pte);
> + trace_smmu_get_pte(sec_sid, baseaddr, index, addr, *pte);
> return 0;
> }
>
> @@ -543,6 +549,8 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
>
> baseaddr = extract64(tt->ttb, 0, cfg->oas);
> baseaddr &= ~indexmask;
> + int nscfg = tt->nscfg;
> + bool forced_ns = false; /* Track if NSTable=1 forced NS mode */
>
> while (level < VMSA_LEVELS) {
> uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
> @@ -552,7 +560,10 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
> uint8_t ap;
>
> - if (get_pte(baseaddr, offset, &pte, info)) {
> + /* Use NS if forced by previous NSTable=1 or current nscfg */
> + int current_ns = forced_ns || nscfg;
> + SMMUSecSID sec_sid = current_ns ? SMMU_SEC_SID_NS : SMMU_SEC_SID_S;
> + if (get_pte(baseaddr, offset, &pte, info, sec_sid)) {
> goto error;
> }
> trace_smmu_ptw_level(stage, level, iova, subpage_size,
> @@ -577,6 +588,26 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> goto error;
> }
> }
> +
> + /*
> + * Hierarchical control of Secure/Non-secure accesses:
> + * If NSTable=1 from Secure space, force all subsequent lookups to
> + * Non-secure space and ignore future NSTable according to
> + * (IHI 0070G.b)13.4.1 Stage 1 page permissions and
> + * (DDI 0487H.a)D8.4.2 Control of Secure or Non-secure memory access
> + */
> + if (!forced_ns) {
> + int new_nstable = PTE_NSTABLE(pte);
> + if (!current_ns && new_nstable) {
> + /* First transition from Secure to Non-secure */
> + forced_ns = true;
> + nscfg = 1;
> + } else if (!forced_ns) {
> + /* Still in original mode, update nscfg normally */
> + nscfg = new_nstable;
> + }
> + /* If forced_ns is already true, ignore NSTable bit */
> + }
> level++;
> continue;
> } else if (is_page_pte(pte, level)) {
> @@ -619,6 +650,13 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> goto error;
> }
>
> + tlbe->sec_sid = PTE_NS(pte) ? SMMU_SEC_SID_NS : SMMU_SEC_SID_S;
> + tlbe->entry.target_as = smmu_get_address_space(tlbe->sec_sid);
> + if (!tlbe->entry.target_as) {
> + info->type = SMMU_PTW_ERR_WALK_EABT;
> + info->addr = gpa;
> + goto error;
> + }
> tlbe->entry.translated_addr = gpa;
> tlbe->entry.iova = iova & ~mask;
> tlbe->entry.addr_mask = mask;
> @@ -688,7 +726,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
> dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
> uint8_t s2ap;
>
> - if (get_pte(baseaddr, offset, &pte, info)) {
> + /* Use NS as Secure Stage 2 is not implemented (SMMU_S_IDR1.SEL2 == 0)*/
I don't really get this as you passed the sel2 in the cfg?
> + if (get_pte(baseaddr, offset, &pte, info, SMMU_SEC_SID_NS)) {
> goto error;
> }
> trace_smmu_ptw_level(stage, level, ipa, subpage_size,
> @@ -741,6 +780,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
> goto error_ipa;
> }
>
> + tlbe->sec_sid = SMMU_SEC_SID_NS;
> + tlbe->entry.target_as = &address_space_memory;
> tlbe->entry.translated_addr = gpa;
> tlbe->entry.iova = ipa & ~mask;
> tlbe->entry.addr_mask = mask;
> @@ -825,6 +866,17 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
> return ret;
> }
>
> + if (!cfg->sel2 && tlbe->sec_sid > SMMU_SEC_SID_NS) {
> + /*
> + * Nested translation with Secure IPA output is not supported if
> + * Secure Stage 2 is not implemented.
> + */
> + info->type = SMMU_PTW_ERR_TRANSLATION;
pointer to the spec for TRANSLATION error?
Otherwise looks good
Eric
> + info->stage = SMMU_STAGE_1;
> + tlbe->entry.perm = IOMMU_NONE;
> + return -EINVAL;
> + }
> +
> ipa = CACHED_ENTRY_TO_ADDR(tlbe, iova);
> ret = smmu_ptw_64_s2(cfg, ipa, perm, &tlbe_s2, info);
> if (ret) {
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index 96ebd1b11b..a37e894766 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -16,7 +16,7 @@ smmu_ptw_level(int stage, int level, uint64_t iova, size_t subpage_size, uint64_
> smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint32_t offset, uint64_t pte) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" offset=%d pte=0x%"PRIx64
> smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
> smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
> -smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
> +smmu_get_pte(int sec_sid, uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "sec_sid=%d baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64""
> smmu_iotlb_inv_all(void) "IOTLB invalidate all"
> smmu_iotlb_inv_asid_vmid(int asid, int vmid) "IOTLB invalidate asid=%d vmid=%d"
> smmu_iotlb_inv_vmid(int vmid) "IOTLB invalidate vmid=%d"
^ permalink raw reply [flat|nested] 67+ messages in thread* Re: [RFC v3 12/21] hw/arm/smmu-common: Implement secure state handling in ptw
2025-12-02 15:53 ` Eric Auger
@ 2025-12-03 15:10 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-03 15:10 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/2 23:53, Eric Auger wrote:
>
> On 10/12/25 5:12 PM, Tao Tang wrote:
>> Enhance the page table walker to correctly handle secure and non-secure
>> memory accesses. This change introduces logic to select the appropriate
>> address space and enforce architectural security policies during walks.
>>
>> The page table walker now correctly processes Secure Stage 1
>> translations. Key changes include:
>>
>> - The get_pte function now uses the security context to fetch table
>> entries from either the Secure or Non-secure address space.
>>
>> - The stage 1 walker tracks the security state, respecting the NSCFG
>> and NSTable attributes. It correctly handles the hierarchical security
>> model: if a table descriptor in a secure walk has NSTable=1, all
>> subsequent lookups for that walk are forced into the Non-secure space.
>> This is a one-way transition, as specified by the architecture.
>>
>> - A check is added to fault nested translations that produce a Secure
>> IPA when Secure stage 2 is not supported (SMMU_S_IDR1.SEL2 == 0).
>>
>> - The final TLB entry is tagged with the correct output address space,
>> ensuring proper memory isolation.
>>
>> Stage 2 translations are currently limited to Non-secure lookups. Full
>> support for Secure Stage 2 translation will be added in a future series.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmu-common.c | 64 +++++++++++++++++++++++++++++++++++++++-----
>> hw/arm/trace-events | 2 +-
>> 2 files changed, 59 insertions(+), 7 deletions(-)
>>
>> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
>> index 5fabe30c75..a092bb5a8d 100644
>> --- a/hw/arm/smmu-common.c
>> +++ b/hw/arm/smmu-common.c
>> @@ -399,20 +399,26 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
>> * @base_addr[@index]
>> */
>> static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
>> - SMMUPTWEventInfo *info)
>> + SMMUPTWEventInfo *info, SMMUSecSID sec_sid)
>> {
>> int ret;
>> dma_addr_t addr = baseaddr + index * sizeof(*pte);
>> -
>> + MemTxAttrs attrs = smmu_get_txattrs(sec_sid);
>> + AddressSpace *as = smmu_get_address_space(sec_sid);
>> + if (!as) {
>> + info->type = SMMU_PTW_ERR_WALK_EABT;
> is it WALK_EABT or PERMISSION in that case? I fail to find where it is
> specified in the spec. Add a reference once?
This may be the same situation I described earlier in the previous
thread [1]. I'm still not confident the spec gives a clear architected
mapping from this condition to a specific PTW event type. Rather than
arbitrarily picking WALK_EABT or PERMISSION, I'm leaning towards
treating it as a pure model bug:
I'll switch this to a g_assert(as) so we don't report an architected
event for something that should never happen on a correctly wired
machine model.
[1]
https://lore.kernel.org/qemu-devel/e80c6fbc-47a4-490a-8615-be2ee122eb94@phytium.com.cn/
>> + info->addr = addr;
>> + return -EINVAL;
>> + }
>> /* TODO: guarantee 64-bit single-copy atomicity */
>> - ret = ldq_le_dma(&address_space_memory, addr, pte, MEMTXATTRS_UNSPECIFIED);
>> + ret = ldq_le_dma(as, addr, pte, attrs);
>>
>> ------------------------------<snip>------------------------------
>>
>>
>>
>> ------------------------------<snip>------------------------------
>> tlbe->entry.translated_addr = gpa;
>> tlbe->entry.iova = iova & ~mask;
>> tlbe->entry.addr_mask = mask;
>> @@ -688,7 +726,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
>> dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
>> uint8_t s2ap;
>>
>> - if (get_pte(baseaddr, offset, &pte, info)) {
>> + /* Use NS as Secure Stage 2 is not implemented (SMMU_S_IDR1.SEL2 == 0)*/
> I don't really get this as you passed the sel2 in the cfg?
In the next revision I’ll simplify the story. SMMUTransCfg will no
longer carry a sel2 field, and this series will explicitly not support
Secure Stage 2. In that context, the Stage-2 PTW will be hard-coded to
use SMMU_SEC_SID_NS. If/when we add SEL2 support in a follow-up series,
we can then make this configuration-driven instead of hard-coded.
>> + if (get_pte(baseaddr, offset, &pte, info, SMMU_SEC_SID_NS)) {
>> goto error;
>> }
>> trace_smmu_ptw_level(stage, level, ipa, subpage_size,
>> @@ -741,6 +780,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
>> goto error_ipa;
>> }
>>
>> + tlbe->sec_sid = SMMU_SEC_SID_NS;
>> + tlbe->entry.target_as = &address_space_memory;
>> tlbe->entry.translated_addr = gpa;
>> tlbe->entry.iova = ipa & ~mask;
>> tlbe->entry.addr_mask = mask;
>> @@ -825,6 +866,17 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
>> return ret;
>> }
>>
>> + if (!cfg->sel2 && tlbe->sec_sid > SMMU_SEC_SID_NS) {
>> + /*
>> + * Nested translation with Secure IPA output is not supported if
>> + * Secure Stage 2 is not implemented.
>> + */
>> + info->type = SMMU_PTW_ERR_TRANSLATION;
> pointer to the spec for TRANSLATION error?
>
> Otherwise looks good
>
> Eric
After re-reading the spec, I think we should move the check earlier,
when decoding the STE/CD, and use the combination of SMMU_S_IDR1.SEL2,
Config == 0b11x, and the Secure Stream table context to detect an
architecturally illegal nested configuration.
In that case I’ll report a C_BAD_STE-style configuration error and bail
out before running any Secure Stage-1 page walk. That both matches the
spec more closely and avoids doing extra work in this unsupported
configuration. What do you think about this?
Thanks again for your review.
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 13/21] hw/arm/smmuv3: Tag IOTLB cache keys with SEC_SID
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (11 preceding siblings ...)
2025-10-12 15:12 ` [RFC v3 12/21] hw/arm/smmu-common: Implement secure state handling in ptw Tao Tang
@ 2025-10-12 15:12 ` Tao Tang
2025-12-02 16:08 ` Eric Auger
2025-10-12 15:13 ` [RFC v3 14/21] hw/arm/smmuv3: Add access checks for MMIO registers Tao Tang
` (7 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:12 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
To prevent aliasing between secure and non-secure translations for the
same address space, the IOTLB lookup key must incorporate the security
state of the transaction. This commit expands SMMUIOTLBKey with the
SEC_SID, plumbs the new argument through common helpers, and ensures
that secure and non-secure TLB entries are treated as distinct
entities within the cache.
As a final step, this patch ensures the target address space
(target_as) from a cached IOTLB entry is correctly propagated to the
final translation result. Previously, the result defaulted to the
non-secure address space, nullifying the benefits of the
security-aware cache key.
This change provides robust management for secure TLB entries,
preventing TLB pollution between security worlds and allowing for proper
initialization by secure software.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 25 ++++++++++++---------
hw/arm/smmuv3.c | 42 +++++++++++++++++++++---------------
hw/arm/trace-events | 2 +-
include/hw/arm/smmu-common.h | 9 +++++---
4 files changed, 47 insertions(+), 31 deletions(-)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index a092bb5a8d..4131a31ae0 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -86,7 +86,7 @@ static guint smmu_iotlb_key_hash(gconstpointer v)
/* Jenkins hash */
a = b = c = JHASH_INITVAL + sizeof(*key);
- a += key->asid + key->vmid + key->level + key->tg;
+ a += key->asid + key->vmid + key->level + key->tg + key->sec_sid;
b += extract64(key->iova, 0, 32);
c += extract64(key->iova, 32, 32);
@@ -102,14 +102,15 @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
return (k1->asid == k2->asid) && (k1->iova == k2->iova) &&
(k1->level == k2->level) && (k1->tg == k2->tg) &&
- (k1->vmid == k2->vmid);
+ (k1->vmid == k2->vmid) && (k1->sec_sid == k2->sec_sid);
}
SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
- uint8_t tg, uint8_t level)
+ uint8_t tg, uint8_t level,
+ SMMUSecSID sec_sid)
{
SMMUIOTLBKey key = {.asid = asid, .vmid = vmid, .iova = iova,
- .tg = tg, .level = level};
+ .tg = tg, .level = level, .sec_sid = sec_sid};
return key;
}
@@ -131,7 +132,7 @@ static SMMUTLBEntry *smmu_iotlb_lookup_all_levels(SMMUState *bs,
SMMUIOTLBKey key;
key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid,
- iova & ~mask, tg, level);
+ iova & ~mask, tg, level, cfg->sec_sid);
entry = g_hash_table_lookup(bs->iotlb, &key);
if (entry) {
break;
@@ -195,7 +196,7 @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
}
*key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
- tg, new->level);
+ tg, new->level, cfg->sec_sid);
trace_smmu_iotlb_insert(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
tg, new->level);
g_hash_table_insert(bs->iotlb, key, new);
@@ -314,13 +315,15 @@ void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
}
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
- uint8_t tg, uint64_t num_pages, uint8_t ttl)
+ uint8_t tg, uint64_t num_pages, uint8_t ttl,
+ SMMUSecSID sec_sid)
{
/* if tg is not set we use 4KB range invalidation */
uint8_t granule = tg ? tg * 2 + 10 : 12;
if (ttl && (num_pages == 1) && (asid >= 0)) {
- SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova, tg, ttl);
+ SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova,
+ tg, ttl, sec_sid);
if (g_hash_table_remove(s->iotlb, &key)) {
return;
@@ -346,13 +349,15 @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
* in Stage-1 invalidation ASID = -1, means don't care.
*/
void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
- uint64_t num_pages, uint8_t ttl)
+ uint64_t num_pages, uint8_t ttl,
+ SMMUSecSID sec_sid)
{
uint8_t granule = tg ? tg * 2 + 10 : 12;
int asid = -1;
if (ttl && (num_pages == 1)) {
- SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, ipa, tg, ttl);
+ SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, ipa,
+ tg, ttl, sec_sid);
if (g_hash_table_remove(s->iotlb, &key)) {
return;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 3686056d8e..f9395c3821 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1125,6 +1125,7 @@ epilogue:
entry.perm = cached_entry->entry.perm;
entry.translated_addr = CACHED_ENTRY_TO_ADDR(cached_entry, addr);
entry.addr_mask = cached_entry->entry.addr_mask;
+ entry.target_as = cached_entry->entry.target_as;
trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
entry.translated_addr, entry.perm,
cfg->stage);
@@ -1170,15 +1171,16 @@ epilogue:
* @tg: translation granule (if communicated through range invalidation)
* @num_pages: number of @granule sized pages (if tg != 0), otherwise 1
* @stage: Which stage(1 or 2) is used
+ * @sec_sid: security stream ID
*/
static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
IOMMUNotifier *n,
int asid, int vmid,
dma_addr_t iova, uint8_t tg,
- uint64_t num_pages, int stage)
+ uint64_t num_pages, int stage,
+ SMMUSecSID sec_sid)
{
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUEventInfo eventinfo = {.sec_sid = sec_sid,
.inval_ste_allowed = true};
SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, sec_sid);
@@ -1226,7 +1228,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
}
event.type = IOMMU_NOTIFIER_UNMAP;
- event.entry.target_as = &address_space_memory;
+ event.entry.target_as = smmu_get_address_space(sec_sid);
event.entry.iova = iova;
event.entry.addr_mask = num_pages * (1 << granule) - 1;
event.entry.perm = IOMMU_NONE;
@@ -1237,7 +1239,8 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
/* invalidate an asid/vmid/iova range tuple in all mr's */
static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
dma_addr_t iova, uint8_t tg,
- uint64_t num_pages, int stage)
+ uint64_t num_pages, int stage,
+ SMMUSecSID sec_sid)
{
SMMUDevice *sdev;
@@ -1249,12 +1252,14 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
iova, tg, num_pages, stage);
IOMMU_NOTIFIER_FOREACH(n, mr) {
- smmuv3_notify_iova(mr, n, asid, vmid, iova, tg, num_pages, stage);
+ smmuv3_notify_iova(mr, n, asid, vmid, iova, tg,
+ num_pages, stage, sec_sid);
}
}
}
-static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
+static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage,
+ SMMUSecSID sec_sid)
{
dma_addr_t end, addr = CMD_ADDR(cmd);
uint8_t type = CMD_TYPE(cmd);
@@ -1279,12 +1284,13 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
}
if (!tg) {
- trace_smmuv3_range_inval(vmid, asid, addr, tg, 1, ttl, leaf, stage);
- smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage);
+ trace_smmuv3_range_inval(sec_sid, vmid, asid, addr,
+ tg, 1, ttl, leaf, stage);
+ smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage, sec_sid);
if (stage == SMMU_STAGE_1) {
- smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl);
+ smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl, sec_sid);
} else {
- smmu_iotlb_inv_ipa(s, vmid, addr, tg, 1, ttl);
+ smmu_iotlb_inv_ipa(s, vmid, addr, tg, 1, ttl, sec_sid);
}
return;
}
@@ -1301,13 +1307,15 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
uint64_t mask = dma_aligned_pow2_mask(addr, end, 64);
num_pages = (mask + 1) >> granule;
- trace_smmuv3_range_inval(vmid, asid, addr, tg, num_pages,
- ttl, leaf, stage);
- smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, num_pages, stage);
+ trace_smmuv3_range_inval(sec_sid, vmid, asid, addr, tg,
+ num_pages, ttl, leaf, stage);
+ smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg,
+ num_pages, stage, sec_sid);
if (stage == SMMU_STAGE_1) {
- smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, num_pages, ttl);
+ smmu_iotlb_inv_iova(s, asid, vmid, addr, tg,
+ num_pages, ttl, sec_sid);
} else {
- smmu_iotlb_inv_ipa(s, vmid, addr, tg, num_pages, ttl);
+ smmu_iotlb_inv_ipa(s, vmid, addr, tg, num_pages, ttl, sec_sid);
}
addr += mask + 1;
}
@@ -1474,7 +1482,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
cmd_error = SMMU_CERROR_ILL;
break;
}
- smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1);
+ smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, SMMU_SEC_SID_NS);
break;
case SMMU_CMD_TLBI_S12_VMALL:
{
@@ -1499,7 +1507,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
* As currently only either s1 or s2 are supported
* we can reuse same function for s2.
*/
- smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2);
+ smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, SMMU_SEC_SID_NS);
break;
case SMMU_CMD_TLBI_EL3_ALL:
case SMMU_CMD_TLBI_EL3_VA:
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index a37e894766..434d6abfc2 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -56,7 +56,7 @@ smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%x - end=0x%x"
smmuv3_cmdq_cfgi_cd(uint32_t sid) "sid=0x%x"
smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
-smmuv3_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf, int stage) "vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d stage=%d"
+smmuv3_range_inval(int sec_sid, int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf, int stage) "sec_sid=%d vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d stage=%d"
smmuv3_cmdq_tlbi_nh(int vmid) "vmid=%d"
smmuv3_cmdq_tlbi_nsnh(void) ""
smmuv3_cmdq_tlbi_nh_asid(int asid) "asid=%d"
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index 90a37fe32d..211fc7c2d0 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -183,6 +183,7 @@ typedef struct SMMUIOTLBKey {
int vmid;
uint8_t tg;
uint8_t level;
+ SMMUSecSID sec_sid;
} SMMUIOTLBKey;
typedef struct SMMUConfigKey {
@@ -264,16 +265,18 @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
SMMUTransTableInfo *tt, hwaddr iova);
void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
- uint8_t tg, uint8_t level);
+ uint8_t tg, uint8_t level, SMMUSecSID sec_sid);
SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecSID sec_sid);
void smmu_iotlb_inv_all(SMMUState *s);
void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid);
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
- uint8_t tg, uint64_t num_pages, uint8_t ttl);
+ uint8_t tg, uint64_t num_pages, uint8_t ttl,
+ SMMUSecSID sec_sid);
void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
- uint64_t num_pages, uint8_t ttl);
+ uint64_t num_pages, uint8_t ttl,
+ SMMUSecSID sec_sid);
void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range);
void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev);
/* Unmap the range of all the notifiers registered to any IOMMU mr */
--
2.34.1
* Re: [RFC v3 13/21] hw/arm/smmuv3: Tag IOTLB cache keys with SEC_SID
2025-10-12 15:12 ` [RFC v3 13/21] hw/arm/smmuv3: Tag IOTLB cache keys with SEC_SID Tao Tang
@ 2025-12-02 16:08 ` Eric Auger
2025-12-03 15:28 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-02 16:08 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 5:12 PM, Tao Tang wrote:
> To prevent aliasing between secure and non-secure translations for the
> same address space, the IOTLB lookup key must incorporate the security
> state of the transaction. This commit expands SMMUIOTLBKey with the
> SEC_SID, plumbs the new argument through common helpers, and ensures
> that secure and non-secure TLB entries are treated as distinct
> entities within the cache.
>
> As a final step, this patch ensures the target address space
> (target_as) from a cached IOTLB entry is correctly propagated to the
> final translation result. Previously, the result defaulted to the
> non-secure address space, nullifying the benefits of the
> security-aware cache key.
>
> This change provides robust management for secure TLB entries,
> preventing TLB pollution between security worlds and allowing for proper
> initialization by secure software.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 25 ++++++++++++---------
> hw/arm/smmuv3.c | 42 +++++++++++++++++++++---------------
> hw/arm/trace-events | 2 +-
> include/hw/arm/smmu-common.h | 9 +++++---
> 4 files changed, 47 insertions(+), 31 deletions(-)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index a092bb5a8d..4131a31ae0 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -86,7 +86,7 @@ static guint smmu_iotlb_key_hash(gconstpointer v)
>
> /* Jenkins hash */
> a = b = c = JHASH_INITVAL + sizeof(*key);
> - a += key->asid + key->vmid + key->level + key->tg;
> + a += key->asid + key->vmid + key->level + key->tg + key->sec_sid;
> b += extract64(key->iova, 0, 32);
> c += extract64(key->iova, 32, 32);
>
> @@ -102,14 +102,15 @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
>
> return (k1->asid == k2->asid) && (k1->iova == k2->iova) &&
> (k1->level == k2->level) && (k1->tg == k2->tg) &&
> - (k1->vmid == k2->vmid);
> + (k1->vmid == k2->vmid) && (k1->sec_sid == k2->sec_sid);
> }
>
> SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
> - uint8_t tg, uint8_t level)
> + uint8_t tg, uint8_t level,
> + SMMUSecSID sec_sid)
> {
> SMMUIOTLBKey key = {.asid = asid, .vmid = vmid, .iova = iova,
> - .tg = tg, .level = level};
> + .tg = tg, .level = level, .sec_sid = sec_sid};
>
> return key;
> }
> @@ -131,7 +132,7 @@ static SMMUTLBEntry *smmu_iotlb_lookup_all_levels(SMMUState *bs,
> SMMUIOTLBKey key;
>
> key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid,
> - iova & ~mask, tg, level);
> + iova & ~mask, tg, level, cfg->sec_sid);
> entry = g_hash_table_lookup(bs->iotlb, &key);
> if (entry) {
> break;
> @@ -195,7 +196,7 @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
> }
>
> *key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
> - tg, new->level);
> + tg, new->level, cfg->sec_sid);
> trace_smmu_iotlb_insert(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
> tg, new->level);
> g_hash_table_insert(bs->iotlb, key, new);
> @@ -314,13 +315,15 @@ void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
> }
>
> void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> - uint8_t tg, uint64_t num_pages, uint8_t ttl)
> + uint8_t tg, uint64_t num_pages, uint8_t ttl,
> + SMMUSecSID sec_sid)
> {
> /* if tg is not set we use 4KB range invalidation */
> uint8_t granule = tg ? tg * 2 + 10 : 12;
>
> if (ttl && (num_pages == 1) && (asid >= 0)) {
> - SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova, tg, ttl);
> + SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova,
> + tg, ttl, sec_sid);
what about the other invalidation commands?
I see CMD_TLBI_NH_ASID(VMID, ASID), NH_ALL are selective depending on
the cmd queue it is issued from?
Thanks
Eric
>
> if (g_hash_table_remove(s->iotlb, &key)) {
> return;
> @@ -346,13 +349,15 @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> * in Stage-1 invalidation ASID = -1, means don't care.
> */
> void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
> - uint64_t num_pages, uint8_t ttl)
> + uint64_t num_pages, uint8_t ttl,
> + SMMUSecSID sec_sid)
> {
> uint8_t granule = tg ? tg * 2 + 10 : 12;
> int asid = -1;
>
> if (ttl && (num_pages == 1)) {
> - SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, ipa, tg, ttl);
> + SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, ipa,
> + tg, ttl, sec_sid);
>
> if (g_hash_table_remove(s->iotlb, &key)) {
> return;
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 3686056d8e..f9395c3821 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -1125,6 +1125,7 @@ epilogue:
> entry.perm = cached_entry->entry.perm;
> entry.translated_addr = CACHED_ENTRY_TO_ADDR(cached_entry, addr);
> entry.addr_mask = cached_entry->entry.addr_mask;
> + entry.target_as = cached_entry->entry.target_as;
> trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
> entry.translated_addr, entry.perm,
> cfg->stage);
> @@ -1170,15 +1171,16 @@ epilogue:
> * @tg: translation granule (if communicated through range invalidation)
> * @num_pages: number of @granule sized pages (if tg != 0), otherwise 1
> * @stage: Which stage(1 or 2) is used
> + * @sec_sid: security stream ID
> */
> static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> IOMMUNotifier *n,
> int asid, int vmid,
> dma_addr_t iova, uint8_t tg,
> - uint64_t num_pages, int stage)
> + uint64_t num_pages, int stage,
> + SMMUSecSID sec_sid)
> {
> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUEventInfo eventinfo = {.sec_sid = sec_sid,
> .inval_ste_allowed = true};
> SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, sec_sid);
> @@ -1226,7 +1228,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> }
>
> event.type = IOMMU_NOTIFIER_UNMAP;
> - event.entry.target_as = &address_space_memory;
> + event.entry.target_as = smmu_get_address_space(sec_sid);
> event.entry.iova = iova;
> event.entry.addr_mask = num_pages * (1 << granule) - 1;
> event.entry.perm = IOMMU_NONE;
> @@ -1237,7 +1239,8 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> /* invalidate an asid/vmid/iova range tuple in all mr's */
> static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
> dma_addr_t iova, uint8_t tg,
> - uint64_t num_pages, int stage)
> + uint64_t num_pages, int stage,
> + SMMUSecSID sec_sid)
> {
> SMMUDevice *sdev;
>
> @@ -1249,12 +1252,14 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
> iova, tg, num_pages, stage);
>
> IOMMU_NOTIFIER_FOREACH(n, mr) {
> - smmuv3_notify_iova(mr, n, asid, vmid, iova, tg, num_pages, stage);
> + smmuv3_notify_iova(mr, n, asid, vmid, iova, tg,
> + num_pages, stage, sec_sid);
> }
> }
> }
>
> -static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
> +static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage,
> + SMMUSecSID sec_sid)
> {
> dma_addr_t end, addr = CMD_ADDR(cmd);
> uint8_t type = CMD_TYPE(cmd);
> @@ -1279,12 +1284,13 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
> }
>
> if (!tg) {
> - trace_smmuv3_range_inval(vmid, asid, addr, tg, 1, ttl, leaf, stage);
> - smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage);
> + trace_smmuv3_range_inval(sec_sid, vmid, asid, addr,
> + tg, 1, ttl, leaf, stage);
> + smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage, sec_sid);
> if (stage == SMMU_STAGE_1) {
> - smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl);
> + smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl, sec_sid);
> } else {
> - smmu_iotlb_inv_ipa(s, vmid, addr, tg, 1, ttl);
> + smmu_iotlb_inv_ipa(s, vmid, addr, tg, 1, ttl, sec_sid);
> }
> return;
> }
> @@ -1301,13 +1307,15 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
> uint64_t mask = dma_aligned_pow2_mask(addr, end, 64);
>
> num_pages = (mask + 1) >> granule;
> - trace_smmuv3_range_inval(vmid, asid, addr, tg, num_pages,
> - ttl, leaf, stage);
> - smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, num_pages, stage);
> + trace_smmuv3_range_inval(sec_sid, vmid, asid, addr, tg,
> + num_pages, ttl, leaf, stage);
> + smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg,
> + num_pages, stage, sec_sid);
> if (stage == SMMU_STAGE_1) {
> - smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, num_pages, ttl);
> + smmu_iotlb_inv_iova(s, asid, vmid, addr, tg,
> + num_pages, ttl, sec_sid);
> } else {
> - smmu_iotlb_inv_ipa(s, vmid, addr, tg, num_pages, ttl);
> + smmu_iotlb_inv_ipa(s, vmid, addr, tg, num_pages, ttl, sec_sid);
> }
> addr += mask + 1;
> }
> @@ -1474,7 +1482,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> cmd_error = SMMU_CERROR_ILL;
> break;
> }
> - smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1);
> + smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, SMMU_SEC_SID_NS);
> break;
> case SMMU_CMD_TLBI_S12_VMALL:
> {
> @@ -1499,7 +1507,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> * As currently only either s1 or s2 are supported
> * we can reuse same function for s2.
> */
> - smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2);
> + smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, SMMU_SEC_SID_NS);
> break;
> case SMMU_CMD_TLBI_EL3_ALL:
> case SMMU_CMD_TLBI_EL3_VA:
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index a37e894766..434d6abfc2 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -56,7 +56,7 @@ smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%x - end=0x%x"
> smmuv3_cmdq_cfgi_cd(uint32_t sid) "sid=0x%x"
> smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
> smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
> -smmuv3_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf, int stage) "vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d stage=%d"
> +smmuv3_range_inval(int sec_sid, int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf, int stage) "sec_sid=%d vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d stage=%d"
> smmuv3_cmdq_tlbi_nh(int vmid) "vmid=%d"
> smmuv3_cmdq_tlbi_nsnh(void) ""
> smmuv3_cmdq_tlbi_nh_asid(int asid) "asid=%d"
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index 90a37fe32d..211fc7c2d0 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -183,6 +183,7 @@ typedef struct SMMUIOTLBKey {
> int vmid;
> uint8_t tg;
> uint8_t level;
> + SMMUSecSID sec_sid;
> } SMMUIOTLBKey;
>
> typedef struct SMMUConfigKey {
> @@ -264,16 +265,18 @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
> SMMUTransTableInfo *tt, hwaddr iova);
> void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
> SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
> - uint8_t tg, uint8_t level);
> + uint8_t tg, uint8_t level, SMMUSecSID sec_sid);
> SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecSID sec_sid);
> void smmu_iotlb_inv_all(SMMUState *s);
> void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
> void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
> void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid);
> void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> - uint8_t tg, uint64_t num_pages, uint8_t ttl);
> + uint8_t tg, uint64_t num_pages, uint8_t ttl,
> + SMMUSecSID sec_sid);
> void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
> - uint64_t num_pages, uint8_t ttl);
> + uint64_t num_pages, uint8_t ttl,
> + SMMUSecSID sec_sid);
> void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range);
> void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev);
> /* Unmap the range of all the notifiers registered to any IOMMU mr */
* Re: [RFC v3 13/21] hw/arm/smmuv3: Tag IOTLB cache keys with SEC_SID
2025-12-02 16:08 ` Eric Auger
@ 2025-12-03 15:28 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-03 15:28 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/3 00:08, Eric Auger wrote:
> Hi Tao,
>
> On 10/12/25 5:12 PM, Tao Tang wrote:
>> To prevent aliasing between secure and non-secure translations for the
>> same address space, the IOTLB lookup key must incorporate the security
>> state of the transaction. This commit expands SMMUIOTLBKey with the
>> SEC_SID, plumbs the new argument through common helpers, and ensures
>> that secure and non-secure TLB entries are treated as distinct
>> entities within the cache.
>>
>> As a final step, this patch ensures the target address space
>> (target_as) from a cached IOTLB entry is correctly propagated to the
>> final translation result. Previously, the result defaulted to the
>> non-secure address space, nullifying the benefits of the
>> security-aware cache key.
>>
>> This change provides robust management for secure TLB entries,
>> preventing TLB pollution between security worlds and allowing for proper
>> initialization by secure software.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmu-common.c | 25 ++++++++++++---------
>> hw/arm/smmuv3.c | 42 +++++++++++++++++++++---------------
>> hw/arm/trace-events | 2 +-
>> include/hw/arm/smmu-common.h | 9 +++++---
>> 4 files changed, 47 insertions(+), 31 deletions(-)
>>
>> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
>> index a092bb5a8d..4131a31ae0 100644
>> --- a/hw/arm/smmu-common.c
>> +++ b/hw/arm/smmu-common.c
>> @@ -86,7 +86,7 @@ static guint smmu_iotlb_key_hash(gconstpointer v)
>>
>> /* Jenkins hash */
>> a = b = c = JHASH_INITVAL + sizeof(*key);
>> - a += key->asid + key->vmid + key->level + key->tg;
>> + a += key->asid + key->vmid + key->level + key->tg + key->sec_sid;
>> b += extract64(key->iova, 0, 32);
>> c += extract64(key->iova, 32, 32);
>>
>> @@ -102,14 +102,15 @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
>>
>> return (k1->asid == k2->asid) && (k1->iova == k2->iova) &&
>> (k1->level == k2->level) && (k1->tg == k2->tg) &&
>> - (k1->vmid == k2->vmid);
>> + (k1->vmid == k2->vmid) && (k1->sec_sid == k2->sec_sid);
>> }
>>
>> SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
>> - uint8_t tg, uint8_t level)
>> + uint8_t tg, uint8_t level,
>> + SMMUSecSID sec_sid)
>> {
>> SMMUIOTLBKey key = {.asid = asid, .vmid = vmid, .iova = iova,
>> - .tg = tg, .level = level};
>> + .tg = tg, .level = level, .sec_sid = sec_sid};
>>
>> return key;
>> }
>> @@ -131,7 +132,7 @@ static SMMUTLBEntry *smmu_iotlb_lookup_all_levels(SMMUState *bs,
>> SMMUIOTLBKey key;
>>
>> key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid,
>> - iova & ~mask, tg, level);
>> + iova & ~mask, tg, level, cfg->sec_sid);
>> entry = g_hash_table_lookup(bs->iotlb, &key);
>> if (entry) {
>> break;
>> @@ -195,7 +196,7 @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
>> }
>>
>> *key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
>> - tg, new->level);
>> + tg, new->level, cfg->sec_sid);
>> trace_smmu_iotlb_insert(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
>> tg, new->level);
>> g_hash_table_insert(bs->iotlb, key, new);
>> @@ -314,13 +315,15 @@ void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
>> }
>>
>> void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
>> - uint8_t tg, uint64_t num_pages, uint8_t ttl)
>> + uint8_t tg, uint64_t num_pages, uint8_t ttl,
>> + SMMUSecSID sec_sid)
>> {
>> /* if tg is not set we use 4KB range invalidation */
>> uint8_t granule = tg ? tg * 2 + 10 : 12;
>>
>> if (ttl && (num_pages == 1) && (asid >= 0)) {
>> - SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova, tg, ttl);
>> + SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova,
>> + tg, ttl, sec_sid);
> what about the other invalidation commands?
>
> I see CMD_TLBI_NH_ASID(VMID, ASID), NH_ALL are selective depending on
> the cmd queue it is issued from?
>
> Thanks
>
> Eric
>
Thanks for the pointer. In the next version I'll extend the
security-state plumbing you highlighted so that all TLBI commands honor
the security domain of the queue they were issued from, refactoring the
related smmu_iotlb_inv_* and smmu_hash_* helpers accordingly.
Thanks,
Tao
* [RFC v3 14/21] hw/arm/smmuv3: Add access checks for MMIO registers
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (12 preceding siblings ...)
2025-10-12 15:12 ` [RFC v3 13/21] hw/arm/smmuv3: Tag IOTLB cache keys with SEC_SID Tao Tang
@ 2025-10-12 15:13 ` Tao Tang
2025-12-02 16:31 ` Eric Auger
2025-10-12 15:13 ` [RFC v3 15/21] hw/arm/smmuv3: Determine register bank from MMIO offset Tao Tang
` (6 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:13 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
The SMMUv3 model was missing checks for register accessibility under
certain conditions. This allowed guest software to write to registers
like STRTAB_BASE when they should be read-only, or read from
GERROR_IRQ_CFG registers when they should be RES0.
This patch fixes this by introducing helper functions following the
smmu_(reg_name)_writable naming pattern to encapsulate the architectural
access rules. In addition, writes to registers such as STRTAB_BASE, the
queue base registers, and the IRQ configuration registers are now masked
to correctly handle reserved bits.
The MMIO handlers are updated to call these functions before accessing
registers. To purely fix the missing checks without introducing new
functionality, the security context in the MMIO handlers is explicitly
fixed to Non-secure. This ensures that the scope of this patch is
limited to fixing existing Non-secure logic.
If a register is not accessible, the access is now correctly handled
and a guest error is logged, bringing the model's behavior in line with
the specification.
Fixes: fae4be38b35d ("hw/arm/smmuv3: Implement MMIO write operations")
Fixes: 10a83cb9887e ("hw/arm/smmuv3: Skeleton")
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 304 +++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 298 insertions(+), 6 deletions(-)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index f9395c3821..f161dd3eff 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1321,6 +1321,127 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage,
}
}
+static inline int smmuv3_get_cr0ack_smmuen(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ return FIELD_EX32(s->bank[sec_sid].cr0ack, CR0, SMMUEN);
+}
+
+static inline bool smmuv3_is_smmu_enabled(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ int cr0_smmuen = smmu_enabled(s, sec_sid);
+ int cr0ack_smmuen = smmuv3_get_cr0ack_smmuen(s, sec_sid);
+ return (cr0_smmuen == 0 && cr0ack_smmuen == 0);
+}
+
+/* Check if STRTAB_BASE register is writable */
+static bool smmu_strtab_base_writable(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ /* Check TABLES_PRESET - use NS bank as it's the global setting */
+ if (FIELD_EX32(s->bank[sec_sid].idr[1], IDR1, TABLES_PRESET)) {
+ return false;
+ }
+
+ /* Check SMMUEN conditions for the specific security domain */
+ return smmuv3_is_smmu_enabled(s, sec_sid);
+}
+
+static inline int smmuv3_get_cr0_cmdqen(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ return FIELD_EX32(s->bank[sec_sid].cr[0], CR0, CMDQEN);
+}
+
+static inline int smmuv3_get_cr0ack_cmdqen(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ return FIELD_EX32(s->bank[sec_sid].cr0ack, CR0, CMDQEN);
+}
+
+static inline int smmuv3_get_cr0_eventqen(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ return FIELD_EX32(s->bank[sec_sid].cr[0], CR0, EVENTQEN);
+}
+
+static inline int smmuv3_get_cr0ack_eventqen(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ return FIELD_EX32(s->bank[sec_sid].cr0ack, CR0, EVENTQEN);
+}
+
+/* Check if MSI is supported */
+static inline bool smmu_msi_supported(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ return FIELD_EX32(s->bank[sec_sid].idr[0], IDR0, MSI);
+}
+
+/* Check if secure GERROR_IRQ_CFGx registers are writable */
+static inline bool smmu_gerror_irq_cfg_writable(SMMUv3State *s,
+ SMMUSecSID sec_sid)
+{
+ if (!smmu_msi_supported(s, sec_sid)) {
+ return false;
+ }
+
+ bool ctrl_en = FIELD_EX32(s->bank[sec_sid].irq_ctrl,
+ IRQ_CTRL, GERROR_IRQEN);
+ return !ctrl_en;
+}
+
+/* Check if CMDQEN is disabled */
+static bool smmu_cmdqen_disabled(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ int cr0_cmdqen = smmuv3_get_cr0_cmdqen(s, sec_sid);
+ int cr0ack_cmdqen = smmuv3_get_cr0ack_cmdqen(s, sec_sid);
+ return (cr0_cmdqen == 0 && cr0ack_cmdqen == 0);
+}
+
+/* Check if CMDQ_BASE register is writable */
+static bool smmu_cmdq_base_writable(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+    /* Check QUEUES_PRESET - use NS bank as it's the global setting */
+ if (FIELD_EX32(s->bank[sec_sid].idr[1], IDR1, QUEUES_PRESET)) {
+ return false;
+ }
+
+ return smmu_cmdqen_disabled(s, sec_sid);
+}
+
+/* Check if EVENTQEN is disabled */
+static bool smmu_eventqen_disabled(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ int cr0_eventqen = smmuv3_get_cr0_eventqen(s, sec_sid);
+ int cr0ack_eventqen = smmuv3_get_cr0ack_eventqen(s, sec_sid);
+ return (cr0_eventqen == 0 && cr0ack_eventqen == 0);
+}
+
+static bool smmu_idr1_queue_preset(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+ return FIELD_EX32(s->bank[sec_sid].idr[1], IDR1, QUEUES_PRESET);
+}
+
+/* Check if EVENTQ_BASE register is writable */
+static bool smmu_eventq_base_writable(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+    /* Check QUEUES_PRESET - use NS bank as it's the global setting */
+ if (smmu_idr1_queue_preset(s, sec_sid)) {
+ return false;
+ }
+
+ return smmu_eventqen_disabled(s, sec_sid);
+}
+
+static bool smmu_irq_ctl_evtq_irqen_disabled(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+    return !FIELD_EX32(s->bank[sec_sid].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
+}
+
+/* Check if EVENTQ_IRQ_CFGx is writable */
+static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
+{
+    if (!smmu_msi_supported(s, sec_sid)) {
+ return false;
+ }
+
+ return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
+}
+
static int smmuv3_cmdq_consume(SMMUv3State *s)
{
SMMUState *bs = ARM_SMMU(s);
@@ -1561,27 +1682,59 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
switch (offset) {
case A_GERROR_IRQ_CFG0:
- bank->gerror_irq_cfg0 = data;
+ if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
+            /* SMMU_(*_)IRQ_CTRL.GERROR_IRQEN == 1: this write is IGNORED */
+ qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
+ "register is RO when IRQ enabled\n");
+ return MEMTX_OK;
+ }
+
+ bank->gerror_irq_cfg0 = data & SMMU_GERROR_IRQ_CFG0_RESERVED;
return MEMTX_OK;
case A_STRTAB_BASE:
- bank->strtab_base = data;
+ if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "STRTAB_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ /* Clear reserved bits according to spec */
+ bank->strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
return MEMTX_OK;
case A_CMDQ_BASE:
- bank->cmdq.base = data;
+ if (!smmu_cmdq_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CMDQ_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ bank->cmdq.base = data & SMMU_QUEUE_BASE_RESERVED;
bank->cmdq.log2size = extract64(bank->cmdq.base, 0, 5);
if (bank->cmdq.log2size > SMMU_CMDQS) {
bank->cmdq.log2size = SMMU_CMDQS;
}
return MEMTX_OK;
case A_EVENTQ_BASE:
- bank->eventq.base = data;
+ if (!smmu_eventq_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ bank->eventq.base = data & SMMU_QUEUE_BASE_RESERVED;
bank->eventq.log2size = extract64(bank->eventq.base, 0, 5);
if (bank->eventq.log2size > SMMU_EVENTQS) {
bank->eventq.log2size = SMMU_EVENTQS;
}
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0:
- bank->eventq_irq_cfg0 = data;
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ bank->eventq_irq_cfg0 = data & SMMU_EVENTQ_IRQ_CFG0_RESERVED;
return MEMTX_OK;
default:
qemu_log_mask(LOG_UNIMP,
@@ -1608,7 +1761,15 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
bank->cr[1] = data;
return MEMTX_OK;
case A_CR2:
- bank->cr[2] = data;
+ if (smmuv3_is_smmu_enabled(s, reg_sec_sid)) {
+ /* Allow write: SMMUEN is 0 in both CR0 and CR0ACK */
+ bank->cr[2] = data;
+ } else {
+ /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CR2 write ignored: register is read-only when "
+ "CR0.SMMUEN or CR0ACK.SMMUEN is set\n");
+ }
return MEMTX_OK;
case A_IRQ_CTRL:
bank->irq_ctrl = data;
@@ -1622,12 +1783,31 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
smmuv3_cmdq_consume(s);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
+ if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
+ "register is RO when IRQ enabled\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_GERROR_IRQ_CFG0_RESERVED;
bank->gerror_irq_cfg0 = deposit64(bank->gerror_irq_cfg0, 0, 32, data);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0 + 4:
+ if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 + 4 write ignored: "
+ "register is RO when IRQ enabled\n");
+ return MEMTX_OK;
+ }
+
bank->gerror_irq_cfg0 = deposit64(bank->gerror_irq_cfg0, 32, 32, data);
return MEMTX_OK;
case A_GERROR_IRQ_CFG1:
+ if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG1 write ignored: "
+ "register is RO when IRQ enabled\n");
+ return MEMTX_OK;
+ }
+
bank->gerror_irq_cfg1 = data;
return MEMTX_OK;
case A_GERROR_IRQ_CFG2:
@@ -1644,12 +1824,32 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
}
return MEMTX_OK;
case A_STRTAB_BASE: /* 64b */
+ if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "STRTAB_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_STRTAB_BASE_RESERVED;
bank->strtab_base = deposit64(bank->strtab_base, 0, 32, data);
return MEMTX_OK;
case A_STRTAB_BASE + 4:
+ if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "STRTAB_BASE + 4 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_STRTAB_BASE_RESERVED;
bank->strtab_base = deposit64(bank->strtab_base, 32, 32, data);
return MEMTX_OK;
case A_STRTAB_BASE_CFG:
+ if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "STRTAB_BASE_CFG write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
bank->strtab_base_cfg = data;
if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
bank->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
@@ -1657,6 +1857,13 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
}
return MEMTX_OK;
case A_CMDQ_BASE: /* 64b */
+ if (!smmu_cmdq_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CMDQ_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_QUEUE_BASE_RESERVED;
bank->cmdq.base = deposit64(bank->cmdq.base, 0, 32, data);
bank->cmdq.log2size = extract64(bank->cmdq.base, 0, 5);
if (bank->cmdq.log2size > SMMU_CMDQS) {
@@ -1664,6 +1871,13 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
}
return MEMTX_OK;
case A_CMDQ_BASE + 4: /* 64b */
+ if (!smmu_cmdq_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CMDQ_BASE + 4 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_QUEUE_BASE_RESERVED;
bank->cmdq.base = deposit64(bank->cmdq.base, 32, 32, data);
return MEMTX_OK;
case A_CMDQ_PROD:
@@ -1671,9 +1885,22 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
smmuv3_cmdq_consume(s);
return MEMTX_OK;
case A_CMDQ_CONS:
+ if (!smmu_cmdqen_disabled(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CMDQ_CONS write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
bank->cmdq.cons = data;
return MEMTX_OK;
case A_EVENTQ_BASE: /* 64b */
+ if (!smmu_eventq_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_QUEUE_BASE_RESERVED;
bank->eventq.base = deposit64(bank->eventq.base, 0, 32, data);
bank->eventq.log2size = extract64(bank->eventq.base, 0, 5);
if (bank->eventq.log2size > SMMU_EVENTQS) {
@@ -1681,24 +1908,63 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
}
return MEMTX_OK;
case A_EVENTQ_BASE + 4:
+ if (!smmu_eventq_base_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_BASE + 4 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_QUEUE_BASE_RESERVED;
bank->eventq.base = deposit64(bank->eventq.base, 32, 32, data);
return MEMTX_OK;
case A_EVENTQ_PROD:
+ if (!smmu_eventqen_disabled(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_PROD write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
bank->eventq.prod = data;
return MEMTX_OK;
case A_EVENTQ_CONS:
bank->eventq.cons = data;
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0: /* 64b */
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
bank->eventq_irq_cfg0 = deposit64(bank->eventq_irq_cfg0, 0, 32, data);
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0 + 4:
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG0+4 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
bank->eventq_irq_cfg0 = deposit64(bank->eventq_irq_cfg0, 32, 32, data);
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG1:
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG1 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
bank->eventq_irq_cfg1 = data;
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG2:
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG2 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
bank->eventq_irq_cfg2 = data;
return MEMTX_OK;
default:
@@ -1743,6 +2009,12 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
switch (offset) {
case A_GERROR_IRQ_CFG0:
+        /* SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 both depend on SMMU_IDR0.MSI */
+ if (!smmu_msi_supported(s, reg_sec_sid)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
+
*data = bank->gerror_irq_cfg0;
return MEMTX_OK;
case A_STRTAB_BASE:
@@ -1811,15 +2083,35 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
*data = bank->gerrorn;
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
+ if (!smmu_msi_supported(s, reg_sec_sid)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
+
*data = extract64(bank->gerror_irq_cfg0, 0, 32);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0 + 4:
+ if (!smmu_msi_supported(s, reg_sec_sid)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
+
*data = extract64(bank->gerror_irq_cfg0, 32, 32);
return MEMTX_OK;
case A_GERROR_IRQ_CFG1:
+ if (!smmu_msi_supported(s, reg_sec_sid)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
+
*data = bank->gerror_irq_cfg1;
return MEMTX_OK;
case A_GERROR_IRQ_CFG2:
+ if (!smmu_msi_supported(s, reg_sec_sid)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
+
*data = bank->gerror_irq_cfg2;
return MEMTX_OK;
case A_STRTAB_BASE: /* 64b */
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread

* Re: [RFC v3 14/21] hw/arm/smmuv3: Add access checks for MMIO registers
2025-10-12 15:13 ` [RFC v3 14/21] hw/arm/smmuv3: Add access checks for MMIO registers Tao Tang
@ 2025-12-02 16:31 ` Eric Auger
2025-12-03 15:32 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-02 16:31 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 5:13 PM, Tao Tang wrote:
> The SMMUv3 model was missing checks for register accessibility under
> certain conditions. This allowed guest software to write to registers
> like STRTAB_BASE when they should be read-only, or read from
> GERROR_IRQ_CFG registers when they should be RES0.
>
> This patch fixes this by introducing helper functions, such as the
> smmu_(reg_name)_writable pattern, to encapsulate the architectural
> access rules. In addition, writes to registers like STRTAB_BASE,
> queue bases, and IRQ configurations are now masked to correctly
> handle reserved bits.
>
> The MMIO handlers are updated to call these functions before accessing
> registers. To purely fix the missing checks without introducing new
> functionality, the security context in the MMIO handlers is explicitly
> fixed to Non-secure. This ensures that the scope of this patch is
> limited to fixing existing Non-secure logic.
>
> If a register is not accessible, the access is now correctly handled
> and a guest error is logged, bringing the model's behavior in line with
> the specification.
>
> Fixes: fae4be38b35d ("hw/arm/smmuv3: Implement MMIO write operations")
> Fixes: 10a83cb9887e ("hw/arm/smmuv3: Skeleton")
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 304 +++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 298 insertions(+), 6 deletions(-)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index f9395c3821..f161dd3eff 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -1321,6 +1321,127 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage,
> }
> }
>
> +static inline int smmuv3_get_cr0ack_smmuen(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + return FIELD_EX32(s->bank[sec_sid].cr0ack, CR0, SMMUEN);
> +}
> +
> +static inline bool smmuv3_is_smmu_enabled(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + int cr0_smmuen = smmu_enabled(s, sec_sid);
> + int cr0ack_smmuen = smmuv3_get_cr0ack_smmuen(s, sec_sid);
> + return (cr0_smmuen == 0 && cr0ack_smmuen == 0);
> +}
> +
> +/* Check if STRTAB_BASE register is writable */
> +static bool smmu_strtab_base_writable(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + /* Check TABLES_PRESET - use NS bank as it's the global setting */
> + if (FIELD_EX32(s->bank[sec_sid].idr[1], IDR1, TABLES_PRESET)) {
> + return false;
> + }
> +
> + /* Check SMMUEN conditions for the specific security domain */
> + return smmuv3_is_smmu_enabled(s, sec_sid);
> +}
> +
> +static inline int smmuv3_get_cr0_cmdqen(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + return FIELD_EX32(s->bank[sec_sid].cr[0], CR0, CMDQEN);
> +}
> +
> +static inline int smmuv3_get_cr0ack_cmdqen(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + return FIELD_EX32(s->bank[sec_sid].cr0ack, CR0, CMDQEN);
> +}
> +
> +static inline int smmuv3_get_cr0_eventqen(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + return FIELD_EX32(s->bank[sec_sid].cr[0], CR0, EVENTQEN);
> +}
> +
> +static inline int smmuv3_get_cr0ack_eventqen(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + return FIELD_EX32(s->bank[sec_sid].cr0ack, CR0, EVENTQEN);
> +}
> +
> +/* Check if MSI is supported */
> +static inline bool smmu_msi_supported(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + return FIELD_EX32(s->bank[sec_sid].idr[0], IDR0, MSI);
> +}
> +
> +/* Check if secure GERROR_IRQ_CFGx registers are writable */
> +static inline bool smmu_gerror_irq_cfg_writable(SMMUv3State *s,
> + SMMUSecSID sec_sid)
> +{
> + if (!smmu_msi_supported(s, sec_sid)) {
> + return false;
> + }
> +
> + bool ctrl_en = FIELD_EX32(s->bank[sec_sid].irq_ctrl,
> + IRQ_CTRL, GERROR_IRQEN);
> + return !ctrl_en;
> +}
> +
> +/* Check if CMDQEN is disabled */
> +static bool smmu_cmdqen_disabled(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + int cr0_cmdqen = smmuv3_get_cr0_cmdqen(s, sec_sid);
> + int cr0ack_cmdqen = smmuv3_get_cr0ack_cmdqen(s, sec_sid);
> + return (cr0_cmdqen == 0 && cr0ack_cmdqen == 0);
> +}
> +
> +/* Check if CMDQ_BASE register is writable */
> +static bool smmu_cmdq_base_writable(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> +    /* Check QUEUES_PRESET - use NS bank as it's the global setting */
> + if (FIELD_EX32(s->bank[sec_sid].idr[1], IDR1, QUEUES_PRESET)) {
> + return false;
> + }
> +
> + return smmu_cmdqen_disabled(s, sec_sid);
> +}
> +
> +/* Check if EVENTQEN is disabled */
> +static bool smmu_eventqen_disabled(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + int cr0_eventqen = smmuv3_get_cr0_eventqen(s, sec_sid);
> + int cr0ack_eventqen = smmuv3_get_cr0ack_eventqen(s, sec_sid);
> + return (cr0_eventqen == 0 && cr0ack_eventqen == 0);
> +}
> +
> +static bool smmu_idr1_queue_preset(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> + return FIELD_EX32(s->bank[sec_sid].idr[1], IDR1, QUEUES_PRESET);
> +}
> +
> +/* Check if EVENTQ_BASE register is writable */
> +static bool smmu_eventq_base_writable(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> +    /* Check QUEUES_PRESET - use NS bank as it's the global setting */
> + if (smmu_idr1_queue_preset(s, sec_sid)) {
> + return false;
> + }
> +
> + return smmu_eventqen_disabled(s, sec_sid);
> +}
> +
> +static bool smmu_irq_ctl_evtq_irqen_disabled(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> +    return !FIELD_EX32(s->bank[sec_sid].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
> +}
> +
> +/* Check if EVENTQ_IRQ_CFGx is writable */
> +static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
> +{
> +    if (!smmu_msi_supported(s, sec_sid)) {
> + return false;
> + }
> +
> + return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
> +}
> +
> static int smmuv3_cmdq_consume(SMMUv3State *s)
> {
> SMMUState *bs = ARM_SMMU(s);
> @@ -1561,27 +1682,59 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>
> switch (offset) {
> case A_GERROR_IRQ_CFG0:
> - bank->gerror_irq_cfg0 = data;
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
> + /* SMMU_(*_)_IRQ_CTRL.GERROR_IRQEN == 1: IGNORED this write */
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> +
> + bank->gerror_irq_cfg0 = data & SMMU_GERROR_IRQ_CFG0_RESERVED;
> return MEMTX_OK;
> case A_STRTAB_BASE:
> - bank->strtab_base = data;
> + if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "STRTAB_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + /* Clear reserved bits according to spec */
> + bank->strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
> return MEMTX_OK;
> case A_CMDQ_BASE:
> - bank->cmdq.base = data;
> + if (!smmu_cmdq_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + bank->cmdq.base = data & SMMU_QUEUE_BASE_RESERVED;
> bank->cmdq.log2size = extract64(bank->cmdq.base, 0, 5);
> if (bank->cmdq.log2size > SMMU_CMDQS) {
> bank->cmdq.log2size = SMMU_CMDQS;
> }
> return MEMTX_OK;
> case A_EVENTQ_BASE:
> - bank->eventq.base = data;
> + if (!smmu_eventq_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + bank->eventq.base = data & SMMU_QUEUE_BASE_RESERVED;
> bank->eventq.log2size = extract64(bank->eventq.base, 0, 5);
> if (bank->eventq.log2size > SMMU_EVENTQS) {
> bank->eventq.log2size = SMMU_EVENTQS;
> }
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0:
> - bank->eventq_irq_cfg0 = data;
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + bank->eventq_irq_cfg0 = data & SMMU_EVENTQ_IRQ_CFG0_RESERVED;
> return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> @@ -1608,7 +1761,15 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> bank->cr[1] = data;
> return MEMTX_OK;
> case A_CR2:
> - bank->cr[2] = data;
> + if (smmuv3_is_smmu_enabled(s, reg_sec_sid)) {
> + /* Allow write: SMMUEN is 0 in both CR0 and CR0ACK */
> + bank->cr[2] = data;
> + } else {
> + /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CR2 write ignored: register is read-only when "
> + "CR0.SMMUEN or CR0ACK.SMMUEN is set\n");
> + }
> return MEMTX_OK;
> case A_IRQ_CTRL:
> bank->irq_ctrl = data;
> @@ -1622,12 +1783,31 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> smmuv3_cmdq_consume(s);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_GERROR_IRQ_CFG0_RESERVED;
> bank->gerror_irq_cfg0 = deposit64(bank->gerror_irq_cfg0, 0, 32, data);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 + 4 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> +
> bank->gerror_irq_cfg0 = deposit64(bank->gerror_irq_cfg0, 32, 32, data);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG1 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> +
> bank->gerror_irq_cfg1 = data;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> @@ -1644,12 +1824,32 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
> + if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
would you mind splitting this patch into 2: changes related to
smmu_gerror_irq_cfg_writable and changes related to smmu_strtab_base_writable, if it is confirmed they are effectively independent of each other?
Eric
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "STRTAB_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_STRTAB_BASE_RESERVED;
> bank->strtab_base = deposit64(bank->strtab_base, 0, 32, data);
> return MEMTX_OK;
> case A_STRTAB_BASE + 4:
> + if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "STRTAB_BASE + 4 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_STRTAB_BASE_RESERVED;
> bank->strtab_base = deposit64(bank->strtab_base, 32, 32, data);
> return MEMTX_OK;
> case A_STRTAB_BASE_CFG:
> + if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "STRTAB_BASE_CFG write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> bank->strtab_base_cfg = data;
> if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
> bank->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
> @@ -1657,6 +1857,13 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_CMDQ_BASE: /* 64b */
> + if (!smmu_cmdq_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> bank->cmdq.base = deposit64(bank->cmdq.base, 0, 32, data);
> bank->cmdq.log2size = extract64(bank->cmdq.base, 0, 5);
> if (bank->cmdq.log2size > SMMU_CMDQS) {
> @@ -1664,6 +1871,13 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_CMDQ_BASE + 4: /* 64b */
> + if (!smmu_cmdq_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_BASE + 4 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> bank->cmdq.base = deposit64(bank->cmdq.base, 32, 32, data);
> return MEMTX_OK;
> case A_CMDQ_PROD:
> @@ -1671,9 +1885,22 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> smmuv3_cmdq_consume(s);
> return MEMTX_OK;
> case A_CMDQ_CONS:
> + if (!smmu_cmdqen_disabled(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_CONS write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> bank->cmdq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_BASE: /* 64b */
> + if (!smmu_eventq_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> bank->eventq.base = deposit64(bank->eventq.base, 0, 32, data);
> bank->eventq.log2size = extract64(bank->eventq.base, 0, 5);
> if (bank->eventq.log2size > SMMU_EVENTQS) {
> @@ -1681,24 +1908,63 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_EVENTQ_BASE + 4:
> + if (!smmu_eventq_base_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_BASE + 4 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> bank->eventq.base = deposit64(bank->eventq.base, 32, 32, data);
> return MEMTX_OK;
> case A_EVENTQ_PROD:
> + if (!smmu_eventqen_disabled(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_PROD write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> bank->eventq.prod = data;
> return MEMTX_OK;
> case A_EVENTQ_CONS:
> bank->eventq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0: /* 64b */
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
> bank->eventq_irq_cfg0 = deposit64(bank->eventq_irq_cfg0, 0, 32, data);
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0 + 4:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG0+4 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
> bank->eventq_irq_cfg0 = deposit64(bank->eventq_irq_cfg0, 32, 32, data);
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG1:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG1 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> bank->eventq_irq_cfg1 = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG2:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_sid)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG2 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> bank->eventq_irq_cfg2 = data;
> return MEMTX_OK;
> default:
> @@ -1743,6 +2009,12 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
>
> switch (offset) {
> case A_GERROR_IRQ_CFG0:
> + /* SMMU_(*_)GERROR_IRQ_CFG0 BOTH check SMMU_IDR0.MSI */
> + if (!smmu_msi_supported(s, reg_sec_sid)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> +
> *data = bank->gerror_irq_cfg0;
> return MEMTX_OK;
> case A_STRTAB_BASE:
> @@ -1811,15 +2083,35 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> *data = bank->gerrorn;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> + if (!smmu_msi_supported(s, reg_sec_sid)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> +
> *data = extract64(bank->gerror_irq_cfg0, 0, 32);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> + if (!smmu_msi_supported(s, reg_sec_sid)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> +
> *data = extract64(bank->gerror_irq_cfg0, 32, 32);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> + if (!smmu_msi_supported(s, reg_sec_sid)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> +
> *data = bank->gerror_irq_cfg1;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> + if (!smmu_msi_supported(s, reg_sec_sid)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> +
> *data = bank->gerror_irq_cfg2;
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
^ permalink raw reply [flat|nested] 67+ messages in thread

* Re: [RFC v3 14/21] hw/arm/smmuv3: Add access checks for MMIO registers
2025-12-02 16:31 ` Eric Auger
@ 2025-12-03 15:32 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-03 15:32 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 2025/12/3 00:31, Eric Auger wrote:
> Hi Tao,
>
> On 10/12/25 5:13 PM, Tao Tang wrote:
>> The SMMUv3 model was missing checks for register accessibility under
>> certain conditions. This allowed guest software to write to registers
>> like STRTAB_BASE when they should be read-only, or read from
>> GERROR_IRQ_CFG registers when they should be RES0.
>>
>> This patch fixes this by introducing helper functions, such as the
>> smmu_(reg_name)_writable pattern, to encapsulate the architectural
>> access rules. In addition, writes to registers like STRTAB_BASE,
>> queue bases, and IRQ configurations are now masked to correctly
>> handle reserved bits.
>>
>> The MMIO handlers are updated to call these functions before accessing
>> registers. To purely fix the missing checks without introducing new
>> functionality, the security context in the MMIO handlers is explicitly
>> fixed to Non-secure. This ensures that the scope of this patch is
>> limited to fixing existing Non-secure logic.
>>
>> If a register is not accessible, the access is now correctly handled
>> and a guest error is logged, bringing the model's behavior in line with
>> the specification.
>>
>> Fixes: fae4be38b35d ("hw/arm/smmuv3: Implement MMIO write operations")
>> Fixes: 10a83cb9887e ("hw/arm/smmuv3: Skeleton")
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 304 +++++++++++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 298 insertions(+), 6 deletions(-)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index f9395c3821..f161dd3eff 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -1321,6 +1321,127 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage,
>> }
>> ------------------------------<snip>------------------------------
>>
>>
>>
>> ------------------------------<snip>------------------------------
>> +
>> bank->gerror_irq_cfg1 = data;
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG2:
>> @@ -1644,12 +1824,32 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
>> }
>> return MEMTX_OK;
>> case A_STRTAB_BASE: /* 64b */
>> + if (!smmu_strtab_base_writable(s, reg_sec_sid)) {
> would you mind splitting this patch into 2, changes related to
>
> smmu_gerror_irq_cfg_writable and changes related to smmu_strtab_base_writable, if confirmed they are effectively independent of each other
>
> Eric
Sure. I'll split it in V4. Thanks for your suggestion.
Tao
* [RFC v3 15/21] hw/arm/smmuv3: Determine register bank from MMIO offset
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (13 preceding siblings ...)
2025-10-12 15:13 ` [RFC v3 14/21] hw/arm/smmuv3: Add access checks for MMIO registers Tao Tang
@ 2025-10-12 15:13 ` Tao Tang
2025-10-14 23:31 ` Pierrick Bouvier
2025-12-04 14:21 ` Eric Auger
2025-10-12 15:13 ` [RFC v3 16/21] hw/arm/smmuv3: Implement SMMU_S_INIT register Tao Tang
` (5 subsequent siblings)
20 siblings, 2 replies; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:13 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Modify the main MMIO handlers (smmu_write_mmio, smmu_read_mmio)
to determine the security state of the target register based on its
memory-mapped offset.
By checking if the offset is within the secure register space (>=
SMMU_SECURE_REG_START), the handlers can deduce the register's
SEC_SID (reg_sec_sid). This SID is then passed down to the register
access helper functions (smmu_writel, smmu_readl, etc.).
Inside these helpers, the switch statement now operates on a masked,
relative offset:
uint32_t reg_offset = offset & 0xfff;
switch (reg_offset) {
...
}
This design leverages a key feature of the SMMU specification: registers
with the same function across different security states
(Non-secure, Realm, Root) share the same relative offset. This avoids
significant code duplication. The reg_sec_sid passed from the MMIO
handler determines which security bank to operate on, while the masked
offset identifies the specific register within that bank.
It is important to distinguish between the security state of the
register itself and the security state of the access. A
higher-privilege security state is permitted to access registers
belonging to a lower-privilege state, but the reverse is not allowed.
This patch lays the groundwork for enforcing such rules.
For future compatibility with Realm and Root states, the logic in the
else block corresponding to the secure offset check:
if (offset >= SMMU_SECURE_REG_START) {
reg_sec_sid = SMMU_SEC_SID_S;
} else {
/* Future Realm/Root handling */
}
will need to be expanded. This will be necessary to differentiate
between the Root Control Page and Realm Register Pages, especially since
the Root Control Page is IMPLEMENTATION DEFINED.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 61 ++++++++++++++++++++++++++++++++++++-------------
1 file changed, 45 insertions(+), 16 deletions(-)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index f161dd3eff..100caeeb35 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1675,12 +1675,13 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
}
static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
- uint64_t data, MemTxAttrs attrs)
+ uint64_t data, MemTxAttrs attrs,
+ SMMUSecSID reg_sec_sid)
{
- SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
+ uint32_t reg_offset = offset & 0xfff;
- switch (offset) {
+ switch (reg_offset) {
case A_GERROR_IRQ_CFG0:
if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
/* SMMU_(*_)_IRQ_CTRL.GERROR_IRQEN == 1: IGNORED this write */
@@ -1745,12 +1746,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
}
static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
- uint64_t data, MemTxAttrs attrs)
+ uint64_t data, MemTxAttrs attrs,
+ SMMUSecSID reg_sec_sid)
{
- SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
+ uint32_t reg_offset = offset & 0xfff;
- switch (offset) {
+ switch (reg_offset) {
case A_CR0:
bank->cr[0] = data;
bank->cr0ack = data & ~SMMU_CR0_RESERVED;
@@ -1985,12 +1987,33 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
/* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
offset &= ~0x10000;
+ SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
+ /*
+ * The security state of the access (MemTxAttrs attrs) may differ from the
+ * security state to which the register belongs. Future support will include
+ * Realm and Root security states.
+ *
+ * The SMMU architecture specifies that Realm, Root, and Non-secure
+ * registers share the same offset layout within the last 4 hexadecimal
+ * digits (16 bits) of the address. Only Secure state registers are
+ * mapped to a higher address space, starting from
+ * SMMU_SECURE_REG_START (0x8000).
+ *
+ * Therefore, we can directly use the offset to distinguish Secure
+ * registers. When Realm and Root support is needed in the future, we
+ * only need to enhance the 'else' block of the corresponding 'if'
+ * statement to handle those specific security states.
+ */
+ if (offset >= SMMU_SECURE_REG_START) {
+ reg_sec_sid = SMMU_SEC_SID_S;
+ }
+
switch (size) {
case 8:
- r = smmu_writell(s, offset, data, attrs);
+ r = smmu_writell(s, offset, data, attrs, reg_sec_sid);
break;
case 4:
- r = smmu_writel(s, offset, data, attrs);
+ r = smmu_writel(s, offset, data, attrs, reg_sec_sid);
break;
default:
r = MEMTX_ERROR;
@@ -2002,12 +2025,13 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
}
static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
- uint64_t *data, MemTxAttrs attrs)
+ uint64_t *data, MemTxAttrs attrs,
+ SMMUSecSID reg_sec_sid)
{
- SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
+ uint32_t reg_offset = offset & 0xfff;
- switch (offset) {
+ switch (reg_offset) {
case A_GERROR_IRQ_CFG0:
/* SMMU_(*_)GERROR_IRQ_CFG0 BOTH check SMMU_IDR0.MSI */
if (!smmu_msi_supported(s, reg_sec_sid)) {
@@ -2036,12 +2060,13 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
}
static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
- uint64_t *data, MemTxAttrs attrs)
+ uint64_t *data, MemTxAttrs attrs,
+ SMMUSecSID reg_sec_sid)
{
- SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
+ uint32_t reg_offset = offset & 0xfff;
- switch (offset) {
+ switch (reg_offset) {
case A_IDREGS ... A_IDREGS + 0x2f:
*data = smmuv3_idreg(offset - A_IDREGS);
return MEMTX_OK;
@@ -2165,13 +2190,17 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
/* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
offset &= ~0x10000;
+ SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
+ if (offset >= SMMU_SECURE_REG_START) {
+ reg_sec_sid = SMMU_SEC_SID_S;
+ }
switch (size) {
case 8:
- r = smmu_readll(s, offset, data, attrs);
+ r = smmu_readll(s, offset, data, attrs, reg_sec_sid);
break;
case 4:
- r = smmu_readl(s, offset, data, attrs);
+ r = smmu_readl(s, offset, data, attrs, reg_sec_sid);
break;
default:
r = MEMTX_ERROR;
--
2.34.1
* Re: [RFC v3 15/21] hw/arm/smmuv3: Determine register bank from MMIO offset
2025-10-12 15:13 ` [RFC v3 15/21] hw/arm/smmuv3: Determine register bank from MMIO offset Tao Tang
@ 2025-10-14 23:31 ` Pierrick Bouvier
2025-12-04 14:21 ` Eric Auger
1 sibling, 0 replies; 67+ messages in thread
From: Pierrick Bouvier @ 2025-10-14 23:31 UTC (permalink / raw)
To: Tao Tang, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 8:13 AM, Tao Tang wrote:
> Modify the main MMIO handlers (smmu_write_mmio, smmu_read_mmio)
> to determine the security state of the target register based on its
> memory-mapped offset.
>
> By checking if the offset is within the secure register space (>=
> SMMU_SECURE_REG_START), the handlers can deduce the register's
> SEC_SID (reg_sec_sid). This SID is then passed down to the register
> access helper functions (smmu_writel, smmu_readl, etc.).
>
> Inside these helpers, the switch statement now operates on a masked,
> relative offset:
>
> uint32_t reg_offset = offset & 0xfff;
> switch (reg_offset) {
> ...
> }
>
> This design leverages a key feature of the SMMU specification: registers
> with the same function across different security states
> (Non-secure, Realm, Root) share the same relative offset. This avoids
> significant code duplication. The reg_sec_sid passed from the MMIO
> handler determines which security bank to operate on, while the masked
> offset identifies the specific register within that bank.
>
> It is important to distinguish between the security state of the
> register itself and the security state of the access. A
> higher-privilege security state is permitted to access registers
> belonging to a lower-privilege state, but the reverse is not allowed.
> This patch lays the groundwork for enforcing such rules.
>
> For future compatibility with Realm and Root states, the logic in the
> else block corresponding to the secure offset check:
>
> if (offset >= SMMU_SECURE_REG_START) {
> reg_sec_sid = SMMU_SEC_SID_S;
> } else {
> /* Future Realm/Root handling */
> }
>
> will need to be expanded. This will be necessary to differentiate
> between the Root Control Page and Realm Register Pages, especially since
> the Root Control Page is IMPLEMENTATION DEFINED.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 61 ++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 45 insertions(+), 16 deletions(-)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index f161dd3eff..100caeeb35 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -1675,12 +1675,13 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> }
>
> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> - uint64_t data, MemTxAttrs attrs)
> + uint64_t data, MemTxAttrs attrs,
> + SMMUSecSID reg_sec_sid)
> {
> - SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> + uint32_t reg_offset = offset & 0xfff;
>
> - switch (offset) {
> + switch (reg_offset) {
> case A_GERROR_IRQ_CFG0:
> if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
> /* SMMU_(*_)_IRQ_CTRL.GERROR_IRQEN == 1: IGNORED this write */
> @@ -1745,12 +1746,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> }
>
> static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> - uint64_t data, MemTxAttrs attrs)
> + uint64_t data, MemTxAttrs attrs,
> + SMMUSecSID reg_sec_sid)
> {
> - SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> + uint32_t reg_offset = offset & 0xfff;
>
> - switch (offset) {
> + switch (reg_offset) {
> case A_CR0:
> bank->cr[0] = data;
> bank->cr0ack = data & ~SMMU_CR0_RESERVED;
> @@ -1985,12 +1987,33 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
> offset &= ~0x10000;
>
> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> + /*
> + * The security state of the access (MemTxAttrs attrs) may differ from the
> + * security state to which the register belongs. Future support will include
> + * Realm and Root security states.
> + *
> + * The SMMU architecture specifies that Realm, Root, and Non-secure
> + * registers share the same offset layout within the last 4 hexadecimal
> + * digits (16 bits) of the address. Only Secure state registers are
> + * mapped to a higher address space, starting from
> + * SMMU_SECURE_REG_START (0x8000).
> + *
This is not exact, as Root registers have their own subset and offsets.
Thus, they will need a specific bank and adapted read/write functions to
deal with that.
It's not a big deal, but better to edit the comment.
...
* Re: [RFC v3 15/21] hw/arm/smmuv3: Determine register bank from MMIO offset
2025-10-12 15:13 ` [RFC v3 15/21] hw/arm/smmuv3: Determine register bank from MMIO offset Tao Tang
2025-10-14 23:31 ` Pierrick Bouvier
@ 2025-12-04 14:21 ` Eric Auger
2025-12-05 6:31 ` Tao Tang
1 sibling, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-04 14:21 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:13 PM, Tao Tang wrote:
> Modify the main MMIO handlers (smmu_write_mmio, smmu_read_mmio)
> to determine the security state of the target register based on its
> memory-mapped offset.
>
> By checking if the offset is within the secure register space (>=
> SMMU_SECURE_REG_START), the handlers can deduce the register's
> SEC_SID (reg_sec_sid). This SID is then passed down to the register
> access helper functions (smmu_writel, smmu_readl, etc.).
>
> Inside these helpers, the switch statement now operates on a masked,
> relative offset:
>
> uint32_t reg_offset = offset & 0xfff;
> switch (reg_offset) {
> ...
> }
>
> This design leverages a key feature of the SMMU specification: registers
> with the same function across different security states
> (Non-secure, Realm, Root) share the same relative offset. This avoids
> significant code duplication. The reg_sec_sid passed from the MMIO
> handler determines which security bank to operate on, while the masked
> offset identifies the specific register within that bank.
>
> It is important to distinguish between the security state of the
> register itself and the security state of the access. A
> higher-privilege security state is permitted to access registers
> belonging to a lower-privilege state, but the reverse is not allowed.
> This patch lays the groundwork for enforcing such rules.
>
> For future compatibility with Realm and Root states, the logic in the
> else block corresponding to the secure offset check:
>
> if (offset >= SMMU_SECURE_REG_START) {
> reg_sec_sid = SMMU_SEC_SID_S;
> } else {
> /* Future Realm/Root handling */
> }
>
> will need to be expanded. This will be necessary to differentiate
> between the Root Control Page and Realm Register Pages, especially since
> the Root Control Page is IMPLEMENTATION DEFINED.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 61 ++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 45 insertions(+), 16 deletions(-)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index f161dd3eff..100caeeb35 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -1675,12 +1675,13 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> }
>
> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> - uint64_t data, MemTxAttrs attrs)
> + uint64_t data, MemTxAttrs attrs,
> + SMMUSecSID reg_sec_sid)
> {
> - SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> + uint32_t reg_offset = offset & 0xfff;
>
> - switch (offset) {
> + switch (reg_offset) {
> case A_GERROR_IRQ_CFG0:
> if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
> /* SMMU_(*_)_IRQ_CTRL.GERROR_IRQEN == 1: IGNORED this write */
> @@ -1745,12 +1746,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> }
>
> static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> - uint64_t data, MemTxAttrs attrs)
> + uint64_t data, MemTxAttrs attrs,
> + SMMUSecSID reg_sec_sid)
> {
> - SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> + uint32_t reg_offset = offset & 0xfff;
>
> - switch (offset) {
> + switch (reg_offset) {
> case A_CR0:
> bank->cr[0] = data;
> bank->cr0ack = data & ~SMMU_CR0_RESERVED;
> @@ -1985,12 +1987,33 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
> offset &= ~0x10000;
>
> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> + /*
> + * The security state of the access (MemTxAttrs attrs) may differ from the
> + * security state to which the register belongs. Future support will include
> + * Realm and Root security states.
> + *
> + * The SMMU architecture specifies that Realm, Root, and Non-secure
> + * registers share the same offset layout within the last 4 hexadecimal
> + * digits (16 bits) of the address. Only Secure state registers are
> + * mapped to a higher address space, starting from
> + * SMMU_SECURE_REG_START (0x8000).
> + *
> + * Therefore, we can directly use the offset to distinguish Secure
> + * registers. When Realm and Root support is needed in the future, we
> + * only need to enhance the 'else' block of the corresponding 'if'
> + * statement to handle those specific security states.
I wouldn't add that many references to the Realm and Root support in the
very code. Maybe move that in the commit desc.
> + */
> + if (offset >= SMMU_SECURE_REG_START) {
> + reg_sec_sid = SMMU_SEC_SID_S;
> + }
> +
> switch (size) {
> case 8:
> - r = smmu_writell(s, offset, data, attrs);
> + r = smmu_writell(s, offset, data, attrs, reg_sec_sid);
> break;
> case 4:
> - r = smmu_writel(s, offset, data, attrs);
> + r = smmu_writel(s, offset, data, attrs, reg_sec_sid);
> break;
> default:
> r = MEMTX_ERROR;
> @@ -2002,12 +2025,13 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
> }
>
> static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> - uint64_t *data, MemTxAttrs attrs)
> + uint64_t *data, MemTxAttrs attrs,
> + SMMUSecSID reg_sec_sid)
> {
> - SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> + uint32_t reg_offset = offset & 0xfff;
>
> - switch (offset) {
> + switch (reg_offset) {
> case A_GERROR_IRQ_CFG0:
> /* SMMU_(*_)GERROR_IRQ_CFG0 BOTH check SMMU_IDR0.MSI */
> if (!smmu_msi_supported(s, reg_sec_sid)) {
> @@ -2036,12 +2060,13 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> }
>
> static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> - uint64_t *data, MemTxAttrs attrs)
> + uint64_t *data, MemTxAttrs attrs,
> + SMMUSecSID reg_sec_sid)
> {
> - SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
> + uint32_t reg_offset = offset & 0xfff;
>
> - switch (offset) {
> + switch (reg_offset) {
> case A_IDREGS ... A_IDREGS + 0x2f:
> *data = smmuv3_idreg(offset - A_IDREGS);
> return MEMTX_OK;
> @@ -2165,13 +2190,17 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
>
> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
> offset &= ~0x10000;
> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
declaration should be at the beginning of the function, ie before the
offset setting
> + if (offset >= SMMU_SECURE_REG_START) {
> + reg_sec_sid = SMMU_SEC_SID_S;
> + }
>
> switch (size) {
> case 8:
> - r = smmu_readll(s, offset, data, attrs);
> + r = smmu_readll(s, offset, data, attrs, reg_sec_sid);
> break;
> case 4:
> - r = smmu_readl(s, offset, data, attrs);
> + r = smmu_readl(s, offset, data, attrs, reg_sec_sid);
> break;
> default:
> r = MEMTX_ERROR;
Thanks
Eric
* Re: [RFC v3 15/21] hw/arm/smmuv3: Determine register bank from MMIO offset
2025-12-04 14:21 ` Eric Auger
@ 2025-12-05 6:31 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-05 6:31 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/4 22:21, Eric Auger wrote:
>
> On 10/12/25 5:13 PM, Tao Tang wrote:
>> Modify the main MMIO handlers (smmu_write_mmio, smmu_read_mmio)
>> to determine the security state of the target register based on its
>> memory-mapped offset.
>>
>> By checking if the offset is within the secure register space (>=
>> SMMU_SECURE_REG_START), the handlers can deduce the register's
>> SEC_SID (reg_sec_sid). This SID is then passed down to the register
>> access helper functions (smmu_writel, smmu_readl, etc.).
>>
>> Inside these helpers, the switch statement now operates on a masked,
>> relative offset:
>>
>> uint32_t reg_offset = offset & 0xfff;
>> switch (reg_offset) {
>> ...
>> }
>>
>> This design leverages a key feature of the SMMU specification: registers
>> with the same function across different security states
>> (Non-secure, Realm, Root) share the same relative offset. This avoids
>> significant code duplication. The reg_sec_sid passed from the MMIO
>> handler determines which security bank to operate on, while the masked
>> offset identifies the specific register within that bank.
>>
>> It is important to distinguish between the security state of the
>> register itself and the security state of the access. A
>> higher-privilege security state is permitted to access registers
>> belonging to a lower-privilege state, but the reverse is not allowed.
>> This patch lays the groundwork for enforcing such rules.
>>
>> For future compatibility with Realm and Root states, the logic in the
>> else block corresponding to the secure offset check:
>>
>> if (offset >= SMMU_SECURE_REG_START) {
>> reg_sec_sid = SMMU_SEC_SID_S;
>> } else {
>> /* Future Realm/Root handling */
>> }
>>
>> will need to be expanded. This will be necessary to differentiate
>> between the Root Control Page and Realm Register Pages, especially since
>> the Root Control Page is IMPLEMENTATION DEFINED.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 61 ++++++++++++++++++++++++++++++++++++-------------
>> 1 file changed, 45 insertions(+), 16 deletions(-)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index f161dd3eff..100caeeb35 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> ------------------------------<snip>------------------------------
>>
>>
>>
>> ------------------------------<snip>------------------------------
>> bank->cr[0] = data;
>> bank->cr0ack = data & ~SMMU_CR0_RESERVED;
>> @@ -1985,12 +1987,33 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
>> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
>> offset &= ~0x10000;
>>
>> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
>> + /*
>> + * The security state of the access (MemTxAttrs attrs) may differ from the
>> + * security state to which the register belongs. Future support will include
>> + * Realm and Root security states.
>> + *
>> + * The SMMU architecture specifies that Realm, Root, and Non-secure
>> + * registers share the same offset layout within the last 4 hexadecimal
>> + * digits (16 bits) of the address. Only Secure state registers are
>> + * mapped to a higher address space, starting from
>> + * SMMU_SECURE_REG_START (0x8000).
>> + *
>> + * Therefore, we can directly use the offset to distinguish Secure
>> + * registers. When Realm and Root support is needed in the future, we
>> + * only need to enhance the 'else' block of the corresponding 'if'
>> + * statement to handle those specific security states.
> I wouldn't add that many references to the Realm and Root support in the
> very code. Maybe move that in the commit desc.
>> ------------------------------<snip>------------------------------
>>
>>
>>
>> ------------------------------<snip>------------------------------
>>
>> static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
>> - uint64_t *data, MemTxAttrs attrs)
>> + uint64_t *data, MemTxAttrs attrs,
>> + SMMUSecSID reg_sec_sid)
>> {
>> - SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
>> SMMUv3RegBank *bank = smmuv3_bank(s, reg_sec_sid);
>> + uint32_t reg_offset = offset & 0xfff;
>>
>> - switch (offset) {
>> + switch (reg_offset) {
>> case A_IDREGS ... A_IDREGS + 0x2f:
>> *data = smmuv3_idreg(offset - A_IDREGS);
>> return MEMTX_OK;
>> @@ -2165,13 +2190,17 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
>>
>> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
>> offset &= ~0x10000;
>> + SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> declaration should be at the beginning of the function, ie before the
> offset setting
Thanks for the review. I'll keep only a short comment in the code and
move the roadmap description into commit msg. I'll also move these
declarations to the beginning (I've found this issue in a few other places).
Best regards,
Tao
* [RFC v3 16/21] hw/arm/smmuv3: Implement SMMU_S_INIT register
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (14 preceding siblings ...)
2025-10-12 15:13 ` [RFC v3 15/21] hw/arm/smmuv3: Determine register bank from MMIO offset Tao Tang
@ 2025-10-12 15:13 ` Tao Tang
2025-12-04 14:33 ` Eric Auger
2025-10-12 15:14 ` [RFC v3 17/21] hw/arm/smmuv3: Pass security state to command queue and IRQ logic Tao Tang
` (4 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:13 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Implement read/write handlers for the SMMU_S_INIT secure-only
register.
Writing to this register provides a mechanism for software to perform a
global invalidation of ALL caches within the SMMU. This includes the
IOTLBs and Configuration Caches across all security states.
This feature is critical for secure hypervisors like Hafnium, which
use it as a final step in their SMMU initialization sequence. It
provides a reliable, architecturally defined method to ensure a clean
and known-good cache state before enabling translations.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 33 +++++++++++++++++++++++++++++++++
hw/arm/trace-events | 1 +
2 files changed, 34 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 100caeeb35..432de88610 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -354,6 +354,21 @@ static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
}
+static void smmuv3_invalidate_all_caches(SMMUv3State *s)
+{
+ trace_smmuv3_invalidate_all_caches();
+ SMMUState *bs = &s->smmu_state;
+
+ /* Clear all cached configs including STE and CD */
+ if (bs->configs) {
+ g_hash_table_remove_all(bs->configs);
+ }
+
+ /* Invalidate all SMMU IOTLB entries */
+ smmu_inv_notifiers_all(&s->smmu_state);
+ smmu_iotlb_inv_all(bs);
+}
+
static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
SMMUTransCfg *cfg,
SMMUEventInfo *event,
@@ -1969,6 +1984,21 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
bank->eventq_irq_cfg2 = data;
return MEMTX_OK;
+ case (A_S_INIT & 0xfff):
+ if (data & R_S_INIT_INV_ALL_MASK) {
+ int cr0_smmuen = smmu_enabled(s, reg_sec_sid);
+ int s_cr0_smmuen = smmuv3_get_cr0ack_smmuen(s, reg_sec_sid);
+ if (cr0_smmuen || s_cr0_smmuen) {
+ /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
+ qemu_log_mask(LOG_GUEST_ERROR, "S_INIT write ignored: "
+ "CR0.SMMUEN=%d or S_CR0.SMMUEN=%d is set\n",
+ cr0_smmuen, s_cr0_smmuen);
+ return MEMTX_OK;
+ }
+ smmuv3_invalidate_all_caches(s);
+ }
+ /* Synchronous emulation: invalidation completed instantly. */
+ return MEMTX_OK;
default:
qemu_log_mask(LOG_UNIMP,
"%s Unexpected 32-bit access to 0x%"PRIx64" (WI)\n",
@@ -2172,6 +2202,9 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
case A_EVENTQ_CONS:
*data = bank->eventq.cons;
return MEMTX_OK;
+ case (A_S_INIT & 0xfff):
+ *data = 0;
+ return MEMTX_OK;
default:
*data = 0;
qemu_log_mask(LOG_UNIMP,
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 434d6abfc2..0e7ad8fee3 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -64,6 +64,7 @@ smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
smmuv3_inv_notifiers_iova(const char *name, int asid, int vmid, uint64_t iova, uint8_t tg, uint64_t num_pages, int stage) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" stage=%d"
+smmuv3_invalidate_all_caches(void) "Invalidate all SMMU caches and TLBs"
smmu_reset_exit(void) ""
# strongarm.c
--
2.34.1
* Re: [RFC v3 16/21] hw/arm/smmuv3: Implement SMMU_S_INIT register
2025-10-12 15:13 ` [RFC v3 16/21] hw/arm/smmuv3: Implement SMMU_S_INIT register Tao Tang
@ 2025-12-04 14:33 ` Eric Auger
2025-12-05 8:23 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-04 14:33 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:13 PM, Tao Tang wrote:
> Implement read/write handlers for the SMMU_S_INIT secure-only
> register.
>
> Writing to this register provides a mechanism for software to perform a
> global invalidation of ALL caches within the SMMU. This includes the
> IOTLBs and Configuration Caches across all security states.
>
> This feature is critical for secure hypervisors like Hafnium, which
> use it as a final step in their SMMU initialization sequence. It
> provides a reliable, architecturally defined method to ensure a clean
> and known-good cache state before enabling translations.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 33 +++++++++++++++++++++++++++++++++
> hw/arm/trace-events | 1 +
> 2 files changed, 34 insertions(+)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 100caeeb35..432de88610 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -354,6 +354,21 @@ static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
>
> }
>
> +static void smmuv3_invalidate_all_caches(SMMUv3State *s)
> +{
> + trace_smmuv3_invalidate_all_caches();
> + SMMUState *bs = &s->smmu_state;
> +
> + /* Clear all cached configs including STE and CD */
> + if (bs->configs) {
> + g_hash_table_remove_all(bs->configs);
> + }
> +
> + /* Invalidate all SMMU IOTLB entries */
> + smmu_inv_notifiers_all(&s->smmu_state);
> + smmu_iotlb_inv_all(bs);
> +}
> +
> static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
> SMMUTransCfg *cfg,
> SMMUEventInfo *event,
> @@ -1969,6 +1984,21 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
>
> bank->eventq_irq_cfg2 = data;
> return MEMTX_OK;
> + case (A_S_INIT & 0xfff):
why do we apply & 0xfff ?
> + if (data & R_S_INIT_INV_ALL_MASK) {
> + int cr0_smmuen = smmu_enabled(s, reg_sec_sid);
> + int s_cr0_smmuen = smmuv3_get_cr0ack_smmuen(s, reg_sec_sid);
> + if (cr0_smmuen || s_cr0_smmuen) {
use smmuv3_is_smmu_enabled()?
> + /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
> + qemu_log_mask(LOG_GUEST_ERROR, "S_INIT write ignored: "
> + "CR0.SMMUEN=%d or S_CR0.SMMUEN=%d is set\n",
> + cr0_smmuen, s_cr0_smmuen);
> + return MEMTX_OK;
> + }
> + smmuv3_invalidate_all_caches(s);
> + }
> + /* Synchronous emulation: invalidation completed instantly. */
> + return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> "%s Unexpected 32-bit access to 0x%"PRIx64" (WI)\n",
> @@ -2172,6 +2202,9 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> case A_EVENTQ_CONS:
> *data = bank->eventq.cons;
> return MEMTX_OK;
> + case (A_S_INIT & 0xfff):
> + *data = 0;
> + return MEMTX_OK;
> default:
> *data = 0;
> qemu_log_mask(LOG_UNIMP,
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index 434d6abfc2..0e7ad8fee3 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -64,6 +64,7 @@ smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
> smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
> smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
> smmuv3_inv_notifiers_iova(const char *name, int asid, int vmid, uint64_t iova, uint8_t tg, uint64_t num_pages, int stage) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" stage=%d"
> +smmuv3_invalidate_all_caches(void) "Invalidate all SMMU caches and TLBs"
> smmu_reset_exit(void) ""
>
> # strongarm.c
Thanks
Eric
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [RFC v3 16/21] hw/arm/smmuv3: Implement SMMU_S_INIT register
2025-12-04 14:33 ` Eric Auger
@ 2025-12-05 8:23 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-05 8:23 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/4 22:33, Eric Auger wrote:
>
> On 10/12/25 5:13 PM, Tao Tang wrote:
>> Implement read/write handlers for the SMMU_S_INIT secure-only
>> register.
>>
>> Writing to this register provides a mechanism for software to perform a
>> global invalidation of ALL caches within the SMMU. This includes the
>> IOTLBs and Configuration Caches across all security states.
>>
>> This feature is critical for secure hypervisors like Hafnium, which
>> use it as a final step in their SMMU initialization sequence. It
>> provides a reliable, architecturally defined method to ensure a clean
>> and known-good cache state before enabling translations.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 33 +++++++++++++++++++++++++++++++++
>> hw/arm/trace-events | 1 +
>> 2 files changed, 34 insertions(+)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index 100caeeb35..432de88610 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -354,6 +354,21 @@ static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
>>
>> }
>>
>> +static void smmuv3_invalidate_all_caches(SMMUv3State *s)
>> +{
>> + trace_smmuv3_invalidate_all_caches();
>> + SMMUState *bs = &s->smmu_state;
>> +
>> + /* Clear all cached configs including STE and CD */
>> + if (bs->configs) {
>> + g_hash_table_remove_all(bs->configs);
>> + }
>> +
>> + /* Invalidate all SMMU IOTLB entries */
>> + smmu_inv_notifiers_all(&s->smmu_state);
>> + smmu_iotlb_inv_all(bs);
>> +}
>> +
>> static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
>> SMMUTransCfg *cfg,
>> SMMUEventInfo *event,
>> @@ -1969,6 +1984,21 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
>>
>> bank->eventq_irq_cfg2 = data;
>> return MEMTX_OK;
>> + case (A_S_INIT & 0xfff):
> why do we apply & 0xfff ?
Let me clarify what I was trying to do with the `(A_S_INIT & 0xfff)`
case, and then ask your opinion on how to clean this up.
According to the SMMUv3 spec, registers with the same function across
different security states (Non-secure, Secure, Realm, Root) share the
same *relative* offset within their 4KB bank window. In the MMIO
handlers we first decode the bank from the high bits of the offset (to
get `reg_sec_sid`), and then do:
uint32_t reg_offset = offset & 0xfff;
switch (reg_offset) {
...
}
So the `switch` is really operating on the per-bank 4KB-relative offset,
and the banked registers can share the same `case` logic while
`reg_sec_sid` selects which bank structure we actually touch.
Most of the `A_*` macros (e.g. `A_CR0`) are already defined as these
relative offsets (just the low 12 bits), so they fit naturally in this
scheme and we can simply write `case A_CR0:`.
`A_S_INIT`, however, is still defined as an *absolute* offset that
already includes the secure-bank base. Since the `switch` is matching on
`reg_offset = offset & 0xfff`, I ended up writing:
case (A_S_INIT & 0xfff):
as a shortcut to adapt an absolute macro to the “relative” decode.
S_INIT is also a secure-only register, so there is no NS twin that I
could reuse as the shared low-12-bit macro. In practice that makes
S_INIT a one-off special case in the current code, which is why the `&
0xfff` sticks out.
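The decode scheme described above can be sketched as a standalone
harness. Everything here is illustrative rather than the series' actual
code: the 0x8000 secure-bank base, the A_CR0 value, and the decode()
helper are assumptions made for the example, but the `offset & 0xfff`
matching and the `(A_S_INIT & 0xfff)` case label mirror the text.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative constants: NS CR0 lives at 0x20 in its bank; S_INIT is
 * an absolute offset that already includes the secure-bank base. */
#define A_CR0    0x20
#define A_S_INIT 0x803c

enum { SMMU_SEC_SID_NS = 0, SMMU_SEC_SID_S = 1 };

/* Decode the bank from the high bits, then match on the 4KB-relative
 * offset so banked registers share one case label; the secure-only
 * S_INIT needs its absolute macro masked down to the low 12 bits. */
static int decode(uint64_t offset, int *sec_sid)
{
    *sec_sid = (offset & 0x8000) ? SMMU_SEC_SID_S : SMMU_SEC_SID_NS;
    uint32_t reg_offset = offset & 0xfff;

    switch (reg_offset) {
    case A_CR0:                  /* banked: same case for NS and S */
        return 0;
    case (A_S_INIT & 0xfff):     /* secure-only: the one-off hack */
        return 1;
    default:
        return -1;
    }
}
```
One consequence the sketch makes visible: because the case label is
purely relative, a Non-secure access to the same low-12-bit offset also
matches, so the handler must still consult `sec_sid` to reject it.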
I agree this looks more like a hack than a clean design, so I’d like to
rework it in the next version. I see a couple of possible directions and
would appreciate your view on which one is preferable:
1) Stop using NS macros as the canonical case labels and list all
bank-specific variants explicitly.
Instead of relying on a single “NS” macro for the shared low-12-bit
offset, we could have:
switch (reg_offset) {
case A_STRTAB_BASE_CFG:
case A_S_STRTAB_BASE_CFG:
...
break;
}
This makes the banked nature explicit in the `switch` and avoids
relying on NS as the “reference” definition. The bank selection would
still be driven by `reg_sec_sid`, but the case labels themselves would
be per-bank macros where that makes sense.
2) Go back to the v2 [1] idea and introduce a dedicated S_INIT-relative
alias macro.
As I had in v2, we could define a separate macro that encodes only
the relative offset, e.g. something like:
REG32(S_INIT, 0x803c)
FIELD(S_INIT, INV_ALL, 0, 1)
/* Alias for the S_INIT offset to match in the dispatcher switch */
#define A_S_INIT_ALIAS 0x3c
and then the decode would simply use:
switch (reg_offset) {
case A_S_INIT_ALIAS:
...
break;
}
This keeps the “absolute” constant for the secure MMIO layout (if we
still find that useful elsewhere), but makes the decode logic work
entirely in terms of relative offsets and removes the inline `& 0xfff`
masking from the case label.
3) Keep the current implementation and add some comment about it.
Which option would you prefer? Any other thoughts or suggestions would
be greatly appreciated.
[1]
https://lore.kernel.org/qemu-devel/7161c00c-c519-4e90-9dca-99bcf7518d40@redhat.com/
>> + if (data & R_S_INIT_INV_ALL_MASK) {
>> + int cr0_smmuen = smmu_enabled(s, reg_sec_sid);
>> + int s_cr0_smmuen = smmuv3_get_cr0ack_smmuen(s, reg_sec_sid);
>> + if (cr0_smmuen || s_cr0_smmuen) {
> use smmuv3_is_smmu_enabled()?
Yes, using smmuv3_is_smmu_enabled() is a better choice, and I'll check
both CR0.SMMUEN and S_CR0.SMMUEN in v4, since I found the *any
SMMU_(*_)CR0.SMMUEN == 1* wording in 6.3.62 SMMU_S_INIT of ARM IHI
0070 G.b:
> If SMMU_ROOT_CR0.GPCEN == 0, a write of 1 to INV_ALL when any
SMMU_(*_)CR0.SMMUEN == 1,
> or an Update of any SMMUEN to 1 is in progress ........ , is
CONSTRAINED UNPREDICTABLE......
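As a hedged sketch of what the v4 check could look like: the register
layout and helper below are stand-ins invented for the example (the real
series uses smmuv3_is_smmu_enabled() over its banked state), but the
rule implemented is the one quoted from 6.3.62 — ignore the INV_ALL
write if any bank's CR0.SMMUEN is set.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CR0_SMMUEN_MASK 0x1u

/* Stand-in register file: one CR0 per security-state bank. */
typedef struct {
    uint32_t cr0[2];   /* [0] = NS bank, [1] = Secure bank */
} FakeSMMU;

/* Per 6.3.62, a write of 1 to S_INIT.INV_ALL while any
 * SMMU_(*_)CR0.SMMUEN == 1 is CONSTRAINED UNPREDICTABLE, so the
 * emulation treats the write as ignored in that case. */
static bool s_init_write_allowed(const FakeSMMU *s)
{
    for (int i = 0; i < 2; i++) {
        if (s->cr0[i] & CR0_SMMUEN_MASK) {
            return false;
        }
    }
    return true;
}
```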
Thanks for your review.
Yours,
Tao
>> + /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
>> + qemu_log_mask(LOG_GUEST_ERROR, "S_INIT write ignored: "
>> + "CR0.SMMUEN=%d or S_CR0.SMMUEN=%d is set\n",
>> + cr0_smmuen, s_cr0_smmuen);
>> + return MEMTX_OK;
>> + }
>> + smmuv3_invalidate_all_caches(s);
>> + }
>> + /* Synchronous emulation: invalidation completed instantly. */
>> + return MEMTX_OK;
>> default:
>> qemu_log_mask(LOG_UNIMP,
>> "%s Unexpected 32-bit access to 0x%"PRIx64" (WI)\n",
>> @@ -2172,6 +2202,9 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
>> case A_EVENTQ_CONS:
>> *data = bank->eventq.cons;
>> return MEMTX_OK;
>> + case (A_S_INIT & 0xfff):
>> + *data = 0;
>> + return MEMTX_OK;
>> default:
>> *data = 0;
>> qemu_log_mask(LOG_UNIMP,
>> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
>> index 434d6abfc2..0e7ad8fee3 100644
>> --- a/hw/arm/trace-events
>> +++ b/hw/arm/trace-events
>> @@ -64,6 +64,7 @@ smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
>> smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
>> smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
>> smmuv3_inv_notifiers_iova(const char *name, int asid, int vmid, uint64_t iova, uint8_t tg, uint64_t num_pages, int stage) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" stage=%d"
>> +smmuv3_invalidate_all_caches(void) "Invalidate all SMMU caches and TLBs"
>> smmu_reset_exit(void) ""
>>
>> # strongarm.c
> Thanks
>
> Eric
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 17/21] hw/arm/smmuv3: Pass security state to command queue and IRQ logic
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (15 preceding siblings ...)
2025-10-12 15:13 ` [RFC v3 16/21] hw/arm/smmuv3: Implement SMMU_S_INIT register Tao Tang
@ 2025-10-12 15:14 ` Tao Tang
2025-12-04 14:46 ` Eric Auger
2025-10-12 15:14 ` [RFC v3 18/21] hw/arm/smmuv3: Harden security checks in MMIO handlers Tao Tang
` (3 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:14 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
The command queue and interrupt logic must operate within the correct
security context. Handling a command queue in one security state can
have side effects, such as interrupts or errors, that need to be
processed in another. This requires the IRQ and GERROR logic to be
fully aware of the multi-security-state environment.
This patch refactors the command queue processing and interrupt handling
to be security-state aware. Besides, unlike the command queue, the
event queue logic was already updated to be security-state aware in a
previous change. The SMMUSecSID is now passed through the relevant
functions to ensure that:
- Command queue operations are performed on the correct register bank.
- Interrupts are triggered and checked against the correct security
state's configuration.
- Errors from command processing are reported in the correct GERROR
register bank.
- Architectural access controls, like preventing secure commands from a
non-secure queue, are correctly enforced.
- As Secure Stage 2 is not yet implemented, commands that target it
are now correctly aborted during command queue processing.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 61 +++++++++++++++++++++++++++++++--------------
hw/arm/trace-events | 2 +-
2 files changed, 43 insertions(+), 20 deletions(-)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 432de88610..4ac7a2f3c7 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -46,11 +46,11 @@
*
* @irq: irq type
* @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
+ * @sec_sid: SEC_SID of the bank
*/
static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
- uint32_t gerror_mask)
+ uint32_t gerror_mask, SMMUSecSID sec_sid)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
bool pulse = false;
@@ -87,9 +87,9 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
}
}
-static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
+static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
+ SMMUSecSID sec_sid)
{
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
uint32_t pending = bank->gerror ^ bank->gerrorn;
uint32_t toggled = bank->gerrorn ^ new_gerrorn;
@@ -109,7 +109,7 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
trace_smmuv3_write_gerrorn(toggled & pending, bank->gerrorn);
}
-static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
+static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd, SMMUSecSID sec_sid)
{
dma_addr_t addr = Q_CONS_ENTRY(q);
MemTxResult ret;
@@ -167,7 +167,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, SMMUSecSID sec_sid,
}
if (!smmuv3_q_empty(q)) {
- smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
+ smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0, sec_sid);
}
return MEMTX_OK;
}
@@ -263,7 +263,8 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
info->sid);
r = smmuv3_write_eventq(s, sec_sid, &evt);
if (r != MEMTX_OK) {
- smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
+ smmuv3_trigger_irq(s, SMMU_IRQ_GERROR,
+ R_GERROR_EVENTQ_ABT_ERR_MASK, sec_sid);
}
info->recorded = true;
}
@@ -1457,11 +1458,10 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
}
-static int smmuv3_cmdq_consume(SMMUv3State *s)
+static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
{
SMMUState *bs = ARM_SMMU(s);
SMMUCmdError cmd_error = SMMU_CERROR_NONE;
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
SMMUQueue *q = &bank->cmdq;
SMMUCommandType type = 0;
@@ -1480,14 +1480,14 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
uint32_t pending = bank->gerror ^ bank->gerrorn;
Cmd cmd;
- trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
+ trace_smmuv3_cmdq_consume(sec_sid, Q_PROD(q), Q_CONS(q),
Q_PROD_WRAP(q), Q_CONS_WRAP(q));
if (FIELD_EX32(pending, GERROR, CMDQ_ERR)) {
break;
}
- if (queue_read(q, &cmd) != MEMTX_OK) {
+ if (queue_read(q, &cmd, sec_sid) != MEMTX_OK) {
cmd_error = SMMU_CERROR_ABT;
break;
}
@@ -1500,7 +1500,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
switch (type) {
case SMMU_CMD_SYNC:
if (CMD_SYNC_CS(&cmd) & CMD_SYNC_SIG_IRQ) {
- smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0);
+ smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0, sec_sid);
}
break;
case SMMU_CMD_PREFETCH_CONFIG:
@@ -1512,6 +1512,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
SMMUDevice *sdev = smmu_find_sdev(bs, sid);
if (CMD_SSEC(&cmd)) {
+ if (sec_sid != SMMU_SEC_SID_S) {
+ /* Secure Stream with Non-Secure command */
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
cmd_error = SMMU_CERROR_ILL;
break;
}
@@ -1532,6 +1537,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
SMMUSIDRange sid_range;
if (CMD_SSEC(&cmd)) {
+ if (sec_sid != SMMU_SEC_SID_S) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
cmd_error = SMMU_CERROR_ILL;
break;
}
@@ -1551,6 +1560,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
SMMUDevice *sdev = smmu_find_sdev(bs, sid);
if (CMD_SSEC(&cmd)) {
+ if (sec_sid != SMMU_SEC_SID_S) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
cmd_error = SMMU_CERROR_ILL;
break;
}
@@ -1618,7 +1631,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
cmd_error = SMMU_CERROR_ILL;
break;
}
- smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, SMMU_SEC_SID_NS);
+ smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, sec_sid);
break;
case SMMU_CMD_TLBI_S12_VMALL:
{
@@ -1628,6 +1641,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
cmd_error = SMMU_CERROR_ILL;
break;
}
+ /* Secure Stage 2 isn't supported for now */
+ if (sec_sid != SMMU_SEC_SID_NS) {
+ cmd_error = SMMU_CERROR_ABT;
+ break;
+ }
trace_smmuv3_cmdq_tlbi_s12_vmid(vmid);
smmu_inv_notifiers_all(&s->smmu_state);
@@ -1639,11 +1657,16 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
cmd_error = SMMU_CERROR_ILL;
break;
}
+
+ if (sec_sid != SMMU_SEC_SID_NS) {
+ cmd_error = SMMU_CERROR_ABT;
+ break;
+ }
/*
* As currently only either s1 or s2 are supported
* we can reuse same function for s2.
*/
- smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, SMMU_SEC_SID_NS);
+ smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, sec_sid);
break;
case SMMU_CMD_TLBI_EL3_ALL:
case SMMU_CMD_TLBI_EL3_VA:
@@ -1680,7 +1703,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
if (cmd_error) {
trace_smmuv3_cmdq_consume_error(smmu_cmd_string(type), cmd_error);
smmu_write_cmdq_err(s, cmd_error, sec_sid);
- smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK);
+ smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK, sec_sid);
}
trace_smmuv3_cmdq_consume_out(Q_PROD(q), Q_CONS(q),
@@ -1772,7 +1795,7 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
bank->cr[0] = data;
bank->cr0ack = data & ~SMMU_CR0_RESERVED;
/* in case the command queue has been enabled */
- smmuv3_cmdq_consume(s);
+ smmuv3_cmdq_consume(s, reg_sec_sid);
return MEMTX_OK;
case A_CR1:
bank->cr[1] = data;
@@ -1792,12 +1815,12 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
bank->irq_ctrl = data;
return MEMTX_OK;
case A_GERRORN:
- smmuv3_write_gerrorn(s, data);
+ smmuv3_write_gerrorn(s, data, reg_sec_sid);
/*
* By acknowledging the CMDQ_ERR, SW may notify cmds can
* be processed again
*/
- smmuv3_cmdq_consume(s);
+ smmuv3_cmdq_consume(s, reg_sec_sid);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
@@ -1899,7 +1922,7 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
return MEMTX_OK;
case A_CMDQ_PROD:
bank->cmdq.prod = data;
- smmuv3_cmdq_consume(s);
+ smmuv3_cmdq_consume(s, reg_sec_sid);
return MEMTX_OK;
case A_CMDQ_CONS:
if (!smmu_cmdqen_disabled(s, reg_sec_sid)) {
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 0e7ad8fee3..697e0d84f3 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -35,7 +35,7 @@ smmuv3_trigger_irq(int irq) "irq=%d"
smmuv3_write_gerror(uint32_t toggled, uint32_t gerror) "toggled=0x%x, new GERROR=0x%x"
smmuv3_write_gerrorn(uint32_t acked, uint32_t gerrorn) "acked=0x%x, new GERRORN=0x%x"
smmuv3_unhandled_cmd(uint32_t type) "Unhandled command type=%d"
-smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d"
+smmuv3_cmdq_consume(int sec_sid, uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "sec_sid=%d prod=%d cons=%d prod.wrap=%d cons.wrap=%d"
smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* Re: [RFC v3 17/21] hw/arm/smmuv3: Pass security state to command queue and IRQ logic
2025-10-12 15:14 ` [RFC v3 17/21] hw/arm/smmuv3: Pass security state to command queue and IRQ logic Tao Tang
@ 2025-12-04 14:46 ` Eric Auger
2025-12-05 9:42 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-04 14:46 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:14 PM, Tao Tang wrote:
> The command queue and interrupt logic must operate within the correct
> security context. Handling a command queue in one security state can
> have side effects, such as interrupts or errors, that need to be
> processed in another. This requires the IRQ and GERROR logic to be
> fully aware of the multi-security-state environment.
>
> This patch refactors the command queue processing and interrupt handling
> to be security-state aware. Besides, unlike the command queue, the
> event queue logic was already updated to be security-state aware in a
> previous change. The SMMUSecSID is now passed through the relevant
> functions to ensure that:
>
> - Command queue operations are performed on the correct register bank.
>
> - Interrupts are triggered and checked against the correct security
> state's configuration.
>
> - Errors from command processing are reported in the correct GERROR
> register bank.
>
> - Architectural access controls, like preventing secure commands from a
> non-secure queue, are correctly enforced.
>
> - As Secure Stage 2 is not yet implemented, commands that target it
> are now correctly aborted during command queue processing.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 61 +++++++++++++++++++++++++++++++--------------
> hw/arm/trace-events | 2 +-
> 2 files changed, 43 insertions(+), 20 deletions(-)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 432de88610..4ac7a2f3c7 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -46,11 +46,11 @@
> *
> * @irq: irq type
> * @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
> + * @sec_sid: SEC_SID of the bank
> */
> static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
> - uint32_t gerror_mask)
> + uint32_t gerror_mask, SMMUSecSID sec_sid)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>
> bool pulse = false;
> @@ -87,9 +87,9 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
> }
> }
>
> -static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
> +static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
> + SMMUSecSID sec_sid)
> {
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> uint32_t pending = bank->gerror ^ bank->gerrorn;
> uint32_t toggled = bank->gerrorn ^ new_gerrorn;
> @@ -109,7 +109,7 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
> trace_smmuv3_write_gerrorn(toggled & pending, bank->gerrorn);
> }
>
> -static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
> +static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd, SMMUSecSID sec_sid)
why this change, as sec_sid is not yet used?
Besides, the q is already specialized for a given sec_sid.
> {
> dma_addr_t addr = Q_CONS_ENTRY(q);
> MemTxResult ret;
> @@ -167,7 +167,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, SMMUSecSID sec_sid,
> }
>
> if (!smmuv3_q_empty(q)) {
> - smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
> + smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0, sec_sid);
> }
> return MEMTX_OK;
> }
> @@ -263,7 +263,8 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
> info->sid);
> r = smmuv3_write_eventq(s, sec_sid, &evt);
> if (r != MEMTX_OK) {
> - smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
> + smmuv3_trigger_irq(s, SMMU_IRQ_GERROR,
> + R_GERROR_EVENTQ_ABT_ERR_MASK, sec_sid);
> }
> info->recorded = true;
> }
> @@ -1457,11 +1458,10 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
> return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
> }
>
> -static int smmuv3_cmdq_consume(SMMUv3State *s)
> +static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
> {
> SMMUState *bs = ARM_SMMU(s);
> SMMUCmdError cmd_error = SMMU_CERROR_NONE;
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> SMMUQueue *q = &bank->cmdq;
> SMMUCommandType type = 0;
> @@ -1480,14 +1480,14 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> uint32_t pending = bank->gerror ^ bank->gerrorn;
> Cmd cmd;
>
> - trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
> + trace_smmuv3_cmdq_consume(sec_sid, Q_PROD(q), Q_CONS(q),
> Q_PROD_WRAP(q), Q_CONS_WRAP(q));
>
> if (FIELD_EX32(pending, GERROR, CMDQ_ERR)) {
> break;
> }
>
> - if (queue_read(q, &cmd) != MEMTX_OK) {
> + if (queue_read(q, &cmd, sec_sid) != MEMTX_OK) {
> cmd_error = SMMU_CERROR_ABT;
> break;
> }
> @@ -1500,7 +1500,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> switch (type) {
> case SMMU_CMD_SYNC:
> if (CMD_SYNC_CS(&cmd) & CMD_SYNC_SIG_IRQ) {
> - smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0);
> + smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0, sec_sid);
> }
> break;
> case SMMU_CMD_PREFETCH_CONFIG:
> @@ -1512,6 +1512,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>
> if (CMD_SSEC(&cmd)) {
when reading the spec I have the impression SSEC is common to all
commands (4.1.6 Common command fields).
Can't you factorize that check?
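One possible shape for that factorization, sketched here with a
hypothetical helper name (check_ssec is not in the series): since SSEC
is a common command field, the per-security-state legality check can be
hoisted out of the individual command cases. Note the series as posted
additionally rejects SSEC commands outright even on the Secure queue,
since secure streams are not yet supported; the helper below captures
only the architectural NS-queue check.

```c
#include <assert.h>
#include <stdbool.h>

enum { SMMU_CERROR_NONE = 0, SMMU_CERROR_ILL = 1 };
enum { SMMU_SEC_SID_NS = 0, SMMU_SEC_SID_S = 1 };

/* SSEC is a common command field (spec 4.1.6): a command with SSEC set
 * is illegal when consumed from a Non-secure command queue, so the
 * check can run once before the per-command switch. */
static int check_ssec(bool ssec, int sec_sid)
{
    if (ssec && sec_sid != SMMU_SEC_SID_S) {
        return SMMU_CERROR_ILL;
    }
    return SMMU_CERROR_NONE;
}
```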
> + if (sec_sid != SMMU_SEC_SID_S) {
> + /* Secure Stream with Non-Secure command */
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> cmd_error = SMMU_CERROR_ILL;
> break;
> }
> @@ -1532,6 +1537,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> SMMUSIDRange sid_range;
>
> if (CMD_SSEC(&cmd)) {
> + if (sec_sid != SMMU_SEC_SID_S) {
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> cmd_error = SMMU_CERROR_ILL;
> break;
> }
> @@ -1551,6 +1560,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>
> if (CMD_SSEC(&cmd)) {
> + if (sec_sid != SMMU_SEC_SID_S) {
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> cmd_error = SMMU_CERROR_ILL;
> break;
> }
> @@ -1618,7 +1631,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> cmd_error = SMMU_CERROR_ILL;
> break;
> }
> - smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, SMMU_SEC_SID_NS);
> + smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, sec_sid);
> break;
> case SMMU_CMD_TLBI_S12_VMALL:
> {
> @@ -1628,6 +1641,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> cmd_error = SMMU_CERROR_ILL;
> break;
> }
> + /* Secure Stage 2 isn't supported for now */
> + if (sec_sid != SMMU_SEC_SID_NS) {
> + cmd_error = SMMU_CERROR_ABT;
> + break;
> + }
>
> trace_smmuv3_cmdq_tlbi_s12_vmid(vmid);
> smmu_inv_notifiers_all(&s->smmu_state);
> @@ -1639,11 +1657,16 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> cmd_error = SMMU_CERROR_ILL;
> break;
> }
> +
> + if (sec_sid != SMMU_SEC_SID_NS) {
> + cmd_error = SMMU_CERROR_ABT;
> + break;
> + }
> /*
> * As currently only either s1 or s2 are supported
> * we can reuse same function for s2.
> */
> - smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, SMMU_SEC_SID_NS);
> + smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, sec_sid);
> break;
> case SMMU_CMD_TLBI_EL3_ALL:
> case SMMU_CMD_TLBI_EL3_VA:
> @@ -1680,7 +1703,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> if (cmd_error) {
> trace_smmuv3_cmdq_consume_error(smmu_cmd_string(type), cmd_error);
> smmu_write_cmdq_err(s, cmd_error, sec_sid);
> - smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK);
> + smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK, sec_sid);
> }
>
> trace_smmuv3_cmdq_consume_out(Q_PROD(q), Q_CONS(q),
> @@ -1772,7 +1795,7 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> bank->cr[0] = data;
> bank->cr0ack = data & ~SMMU_CR0_RESERVED;
> /* in case the command queue has been enabled */
> - smmuv3_cmdq_consume(s);
> + smmuv3_cmdq_consume(s, reg_sec_sid);
> return MEMTX_OK;
> case A_CR1:
> bank->cr[1] = data;
> @@ -1792,12 +1815,12 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> bank->irq_ctrl = data;
> return MEMTX_OK;
> case A_GERRORN:
> - smmuv3_write_gerrorn(s, data);
> + smmuv3_write_gerrorn(s, data, reg_sec_sid);
> /*
> * By acknowledging the CMDQ_ERR, SW may notify cmds can
> * be processed again
> */
> - smmuv3_cmdq_consume(s);
> + smmuv3_cmdq_consume(s, reg_sec_sid);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> if (!smmu_gerror_irq_cfg_writable(s, reg_sec_sid)) {
> @@ -1899,7 +1922,7 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> return MEMTX_OK;
> case A_CMDQ_PROD:
> bank->cmdq.prod = data;
> - smmuv3_cmdq_consume(s);
> + smmuv3_cmdq_consume(s, reg_sec_sid);
> return MEMTX_OK;
> case A_CMDQ_CONS:
> if (!smmu_cmdqen_disabled(s, reg_sec_sid)) {
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index 0e7ad8fee3..697e0d84f3 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -35,7 +35,7 @@ smmuv3_trigger_irq(int irq) "irq=%d"
> smmuv3_write_gerror(uint32_t toggled, uint32_t gerror) "toggled=0x%x, new GERROR=0x%x"
> smmuv3_write_gerrorn(uint32_t acked, uint32_t gerrorn) "acked=0x%x, new GERRORN=0x%x"
> smmuv3_unhandled_cmd(uint32_t type) "Unhandled command type=%d"
> -smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d"
> +smmuv3_cmdq_consume(int sec_sid, uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "sec_sid=%d prod=%d cons=%d prod.wrap=%d cons.wrap=%d"
> smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
> smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
> smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
Thanks
Eric
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [RFC v3 17/21] hw/arm/smmuv3: Pass security state to command queue and IRQ logic
2025-12-04 14:46 ` Eric Auger
@ 2025-12-05 9:42 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-05 9:42 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/4 22:46, Eric Auger wrote:
>
> On 10/12/25 5:14 PM, Tao Tang wrote:
>> The command queue and interrupt logic must operate within the correct
>> security context. Handling a command queue in one security state can
>> have side effects, such as interrupts or errors, that need to be
>> processed in another. This requires the IRQ and GERROR logic to be
>> fully aware of the multi-security-state environment.
>>
>> This patch refactors the command queue processing and interrupt handling
>> to be security-state aware. Besides, unlike the command queue, the
>> event queue logic was already updated to be security-state aware in a
>> previous change. The SMMUSecSID is now passed through the relevant
>> functions to ensure that:
>>
>> - Command queue operations are performed on the correct register bank.
>>
>> - Interrupts are triggered and checked against the correct security
>> state's configuration.
>>
>> - Errors from command processing are reported in the correct GERROR
>> register bank.
>>
>> - Architectural access controls, like preventing secure commands from a
>> non-secure queue, are correctly enforced.
>>
>> - As Secure Stage 2 is not yet implemented, commands that target it
>> are now correctly aborted during command queue processing.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 61 +++++++++++++++++++++++++++++++--------------
>> hw/arm/trace-events | 2 +-
>> 2 files changed, 43 insertions(+), 20 deletions(-)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index 432de88610..4ac7a2f3c7 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -46,11 +46,11 @@
>> *
>> * @irq: irq type
>> * @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
>> + * @sec_sid: SEC_SID of the bank
>> */
>> static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
>> - uint32_t gerror_mask)
>> + uint32_t gerror_mask, SMMUSecSID sec_sid)
>> {
>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>
>> bool pulse = false;
>> @@ -87,9 +87,9 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
>> }
>> }
>>
>> -static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
>> +static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
>> + SMMUSecSID sec_sid)
>> {
>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>> uint32_t pending = bank->gerror ^ bank->gerrorn;
>> uint32_t toggled = bank->gerrorn ^ new_gerrorn;
>> @@ -109,7 +109,7 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
>> trace_smmuv3_write_gerrorn(toggled & pending, bank->gerrorn);
>> }
>>
>> -static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
>> +static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd, SMMUSecSID sec_sid)
> why this change as sec_sid is not yet used?
> besides the q is already specialized for a given sec_sid
>> {
>> dma_addr_t addr = Q_CONS_ENTRY(q);
>> MemTxResult ret;
>> @@ -167,7 +167,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, SMMUSecSID sec_sid,
>> }
>>
>> if (!smmuv3_q_empty(q)) {
>> - smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
>> + smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0, sec_sid);
>> }
>> return MEMTX_OK;
>> }
>> @@ -263,7 +263,8 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
>> info->sid);
>> r = smmuv3_write_eventq(s, sec_sid, &evt);
>> if (r != MEMTX_OK) {
>> - smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
>> + smmuv3_trigger_irq(s, SMMU_IRQ_GERROR,
>> + R_GERROR_EVENTQ_ABT_ERR_MASK, sec_sid);
>> }
>> info->recorded = true;
>> }
>> @@ -1457,11 +1458,10 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
>> return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
>> }
>>
>> -static int smmuv3_cmdq_consume(SMMUv3State *s)
>> +static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
>> {
>> SMMUState *bs = ARM_SMMU(s);
>> SMMUCmdError cmd_error = SMMU_CERROR_NONE;
>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>> SMMUQueue *q = &bank->cmdq;
>> SMMUCommandType type = 0;
>> @@ -1480,14 +1480,14 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> uint32_t pending = bank->gerror ^ bank->gerrorn;
>> Cmd cmd;
>>
>> - trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
>> + trace_smmuv3_cmdq_consume(sec_sid, Q_PROD(q), Q_CONS(q),
>> Q_PROD_WRAP(q), Q_CONS_WRAP(q));
>>
>> if (FIELD_EX32(pending, GERROR, CMDQ_ERR)) {
>> break;
>> }
>>
>> - if (queue_read(q, &cmd) != MEMTX_OK) {
>> + if (queue_read(q, &cmd, sec_sid) != MEMTX_OK) {
>> cmd_error = SMMU_CERROR_ABT;
>> break;
>> }
>> @@ -1500,7 +1500,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> switch (type) {
>> case SMMU_CMD_SYNC:
>> if (CMD_SYNC_CS(&cmd) & CMD_SYNC_SIG_IRQ) {
>> - smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0);
>> + smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0, sec_sid);
>> }
>> break;
>> case SMMU_CMD_PREFETCH_CONFIG:
>> @@ -1512,6 +1512,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>>
>> if (CMD_SSEC(&cmd)) {
> when reading the spec I have the impression SSEC is common to all
> commands (4.1.6 Common command fields).
> Can't you factor out that check?
>> + if (sec_sid != SMMU_SEC_SID_S) {
>> + /* Secure Stream with Non-Secure command */
>> + cmd_error = SMMU_CERROR_ILL;
>> + break;
>> + }
>> cmd_error = SMMU_CERROR_ILL;
>> break;
>> }
>> @@ -1532,6 +1537,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> SMMUSIDRange sid_range;
>>
>> if (CMD_SSEC(&cmd)) {
>> + if (sec_sid != SMMU_SEC_SID_S) {
>> + cmd_error = SMMU_CERROR_ILL;
>> + break;
>> + }
>> cmd_error = SMMU_CERROR_ILL;
>> break;
>> }
>> @@ -1551,6 +1560,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>>
>> if (CMD_SSEC(&cmd)) {
>> + if (sec_sid != SMMU_SEC_SID_S) {
>> + cmd_error = SMMU_CERROR_ILL;
>> + break;
>> + }
>> cmd_error = SMMU_CERROR_ILL;
>> break;
>> }
>> @@ -1618,7 +1631,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> cmd_error = SMMU_CERROR_ILL;
>> break;
>> }
>> - smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, SMMU_SEC_SID_NS);
>> + smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, sec_sid);
>> break;
>> case SMMU_CMD_TLBI_S12_VMALL:
>> {
>> @@ -1628,6 +1641,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> cmd_error = SMMU_CERROR_ILL;
>> break;
>> }
>> + /* Secure Stage 2 isn't supported for now */
>> + if (sec_sid != SMMU_SEC_SID_NS) {
>> + cmd_error = SMMU_CERROR_ABT;
>> + break;
>> + }
>>
>> trace_smmuv3_cmdq_tlbi_s12_vmid(vmid);
>> smmu_inv_notifiers_all(&s->smmu_state);
>> @@ -1639,11 +1657,16 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> cmd_error = SMMU_CERROR_ILL;
>> break;
>> }
>> +
>> + if (sec_sid != SMMU_SEC_SID_NS) {
>> + cmd_error = SMMU_CERROR_ABT;
>> + break;
>> + }
>> /*
>> * As currently only either s1 or s2 are supported
>> * we can reuse same function for s2.
>> */
>> - smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, SMMU_SEC_SID_NS);
>> + smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, sec_sid);
>> break;
>> case SMMU_CMD_TLBI_EL3_ALL:
>> case SMMU_CMD_TLBI_EL3_VA:
Thanks for the pointers to 4.1.6. After re-reading the spec, I realize I
did not clearly separate two related concepts in my initial implementation:
- the security context of the command queue / register bank, and
- the security state of the target stream, described by the SSEC field
  in the command payload, that the command is meant to operate on.
I plan to split the command queue's security state from the stream's
security state. The command queue bank (NS/S) still drives which
register bank sees the GERROR/CMDQ state, while a small helper now
interprets SSEC so that every opcode automatically targets the right
stream security state, returned through the SMMUSecSID *stream_sid
out-parameter (NS-only for the NS queue; Secure vs Non-secure selected
per SSEC on the Secure queue).
static bool smmuv3_cmd_resolve_stream_sid(SMMUSecSID cmdq_sid,
                                          const Cmd *cmd,
                                          SMMUSecSID *stream_sid,
                                          SMMUCmdError *err)
{
    /*
     * *stream_sid is the output SEC_SID that points to the target
     * stream and is used by the subsequent command handling.
     */
    uint32_t ssec = CMD_SSEC(cmd);

    switch (cmdq_sid) {
    case SMMU_SEC_SID_NS:
        if (ssec) {
            *err = SMMU_CERROR_ILL;
            return false;
        }
        *stream_sid = SMMU_SEC_SID_NS;
        return true;
    case SMMU_SEC_SID_S:
        *stream_sid = ssec ? SMMU_SEC_SID_S : SMMU_SEC_SID_NS;
        return true;
    default:
        *err = SMMU_CERROR_ILL;
        return false;
    }
}
static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
{
    .......
        if (!smmuv3_cmd_resolve_stream_sid(sec_sid, &cmd,
                                           &stream_sid, &cmd_error)) {
            break;
        }

        qemu_mutex_lock(&s->mutex);
        switch (type) {
        .......
        case SMMU_CMD_TLBI_NH_VAA:
        case SMMU_CMD_TLBI_NH_VA:
            if (!STAGE1_SUPPORTED(s)) {
                cmd_error = SMMU_CERROR_ILL;
                break;
            }
            /* Use the SEC_SID returned by smmuv3_cmd_resolve_stream_sid */
            smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, stream_sid);
            break;
        }
Could you please take a quick look at the helper before I fold it into v4?
I'll also remove the `SMMUSecSID sec_sid` parameter from queue_read.
Thanks a lot for your time!
Best regards,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 18/21] hw/arm/smmuv3: Harden security checks in MMIO handlers
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (16 preceding siblings ...)
2025-10-12 15:14 ` [RFC v3 17/21] hw/arm/smmuv3: Pass security state to command queue and IRQ logic Tao Tang
@ 2025-10-12 15:14 ` Tao Tang
2025-12-04 14:59 ` Eric Auger
2025-10-12 15:15 ` [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context Tao Tang
` (2 subsequent siblings)
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:14 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
This patch hardens the security validation within the main MMIO
dispatcher functions (smmu_read_mmio and smmu_write_mmio).
First, accesses to the secure register space are now correctly gated by
whether the SECURE_IMPL feature is enabled in the model. This prevents
guest software from accessing the secure programming interface when it is
disabled, though some registers are exempt from this check as per the
architecture.
Second, the check for the input stream's security is made more robust.
It now validates not only the legacy MemTxAttrs.secure bit, but also
the .space field. This brings the SMMU's handling of security spaces
into full alignment with the PE.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 64 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 4ac7a2f3c7..c9c742c80b 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1458,6 +1458,12 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
}
+/* Check if the SMMU hardware itself implements secure state features */
+static inline bool smmu_hw_secure_implemented(SMMUv3State *s)
+{
+ return FIELD_EX32(s->bank[SMMU_SEC_SID_S].idr[1], S_IDR1, SECURE_IMPL);
+}
+
static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
{
SMMUState *bs = ARM_SMMU(s);
@@ -1712,6 +1718,55 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
return 0;
}
+/*
+ * Check if a register is exempt from the secure implementation check.
+ *
+ * The SMMU architecture specifies that certain secure registers, such as
+ * the secure Event Queue IRQ configuration registers, must be accessible
+ * even if the full secure hardware is not implemented. This function
+ * identifies those registers.
+ *
+ * Returns true if the register is exempt, false otherwise.
+ */
+static bool is_secure_impl_exempt_reg(hwaddr offset)
+{
+ switch (offset) {
+ case A_S_EVENTQ_IRQ_CFG0:
+ case A_S_EVENTQ_IRQ_CFG1:
+ case A_S_EVENTQ_IRQ_CFG2:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/* Helper function for Secure register access validation */
+static bool smmu_check_secure_access(SMMUv3State *s, MemTxAttrs attrs,
+ hwaddr offset, bool is_read)
+{ /* Check if the access is secure */
+ if (!(attrs.space == ARMSS_Secure ||
+ attrs.secure == 1)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: Non-secure %s attempt at offset 0x%" PRIx64 " (%s)\n",
+ __func__, is_read ? "read" : "write", offset,
+ is_read ? "RAZ" : "WI");
+ return false;
+ }
+
+ /*
+ * Check if the secure state is implemented. Some registers are exempted
+ * from this check.
+ */
+ if (!is_secure_impl_exempt_reg(offset) && !smmu_hw_secure_implemented(s)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: Secure %s attempt at offset 0x%" PRIx64 ". But Secure state "
+ "is not implemented (RES0)\n",
+ __func__, is_read ? "read" : "write", offset);
+ return false;
+ }
+ return true;
+}
+
static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
uint64_t data, MemTxAttrs attrs,
SMMUSecSID reg_sec_sid)
@@ -2058,6 +2113,10 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
* statement to handle those specific security states.
*/
if (offset >= SMMU_SECURE_REG_START) {
+ if (!smmu_check_secure_access(s, attrs, offset, false)) {
+ trace_smmuv3_write_mmio(offset, data, size, MEMTX_OK);
+ return MEMTX_OK;
+ }
reg_sec_sid = SMMU_SEC_SID_S;
}
@@ -2248,6 +2307,11 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
offset &= ~0x10000;
SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
if (offset >= SMMU_SECURE_REG_START) {
+ if (!smmu_check_secure_access(s, attrs, offset, true)) {
+ *data = 0;
+ trace_smmuv3_read_mmio(offset, *data, size, MEMTX_OK);
+ return MEMTX_OK;
+ }
reg_sec_sid = SMMU_SEC_SID_S;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread* Re: [RFC v3 18/21] hw/arm/smmuv3: Harden security checks in MMIO handlers
2025-10-12 15:14 ` [RFC v3 18/21] hw/arm/smmuv3: Harden security checks in MMIO handlers Tao Tang
@ 2025-12-04 14:59 ` Eric Auger
2025-12-05 10:36 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-04 14:59 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:14 PM, Tao Tang wrote:
> This patch hardens the security validation within the main MMIO
> dispatcher functions (smmu_read_mmio and smmu_write_mmio).
>
> First, accesses to the secure register space are now correctly gated by
> whether the SECURE_IMPL feature is enabled in the model. This prevents
> guest software from accessing the secure programming interface when it is
> disabled, though some registers are exempt from this check as per the
> architecture.
>
> Second, the check for the input stream's security is made more robust.
> It now validates not only the legacy MemTxAttrs.secure bit, but also
> the .space field. This brings the SMMU's handling of security spaces
> into full alignment with the PE.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 64 insertions(+)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 4ac7a2f3c7..c9c742c80b 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -1458,6 +1458,12 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
> return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
> }
>
> +/* Check if the SMMU hardware itself implements secure state features */
> +static inline bool smmu_hw_secure_implemented(SMMUv3State *s)
> +{
> + return FIELD_EX32(s->bank[SMMU_SEC_SID_S].idr[1], S_IDR1, SECURE_IMPL);
> +}
> +
> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
> {
> SMMUState *bs = ARM_SMMU(s);
> @@ -1712,6 +1718,55 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
> return 0;
> }
>
> +/*
> + * Check if a register is exempt from the secure implementation check.
> + *
> + * The SMMU architecture specifies that certain secure registers, such as
> + * the secure Event Queue IRQ configuration registers, must be accessible
> + * even if the full secure hardware is not implemented. This function
> + * identifies those registers.
> + *
> + * Returns true if the register is exempt, false otherwise.
> + */
> +static bool is_secure_impl_exempt_reg(hwaddr offset)
> +{
> + switch (offset) {
> + case A_S_EVENTQ_IRQ_CFG0:
> + case A_S_EVENTQ_IRQ_CFG1:
> + case A_S_EVENTQ_IRQ_CFG2:
> + return true;
> + default:
> + return false;
> + }
> +}
> +
> +/* Helper function for Secure register access validation */
I think we shall improve the doc comment for the function. I understand
@offset is a secure register offset and the function returns whether the
access to the secure register is possible. This requires a) the access
to be secure and b) in general secure state support, except for a few regs?
> +static bool smmu_check_secure_access(SMMUv3State *s, MemTxAttrs attrs,
> + hwaddr offset, bool is_read)
> +{ /* Check if the access is secure */
> + if (!(attrs.space == ARMSS_Secure ||
> + attrs.secure == 1)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "%s: Non-secure %s attempt at offset 0x%" PRIx64 " (%s)\n",
> + __func__, is_read ? "read" : "write", offset,
> + is_read ? "RAZ" : "WI");
> + return false;
> + }
> +
> + /*
> + * Check if the secure state is implemented. Some registers are exempted
> + * from this check.
> + */
> + if (!is_secure_impl_exempt_reg(offset) && !smmu_hw_secure_implemented(s)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "%s: Secure %s attempt at offset 0x%" PRIx64 ". But Secure state "
> + "is not implemented (RES0)\n",
> + __func__, is_read ? "read" : "write", offset);
> + return false;
> + }
> + return true;
> +}
> +
> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> uint64_t data, MemTxAttrs attrs,
> SMMUSecSID reg_sec_sid)
> @@ -2058,6 +2113,10 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
> * statement to handle those specific security states.
> */
> if (offset >= SMMU_SECURE_REG_START) {
> + if (!smmu_check_secure_access(s, attrs, offset, false)) {
> + trace_smmuv3_write_mmio(offset, data, size, MEMTX_OK);
> + return MEMTX_OK;
so the access to @offset is not permitted and we return MEMTX_OK? I am
confused
> + }
> reg_sec_sid = SMMU_SEC_SID_S;
> }
>
> @@ -2248,6 +2307,11 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
> offset &= ~0x10000;
> SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
> if (offset >= SMMU_SECURE_REG_START) {
> + if (!smmu_check_secure_access(s, attrs, offset, true)) {
> + *data = 0;
> + trace_smmuv3_read_mmio(offset, *data, size, MEMTX_OK);
> + return MEMTX_OK;
same here?
> + }
> reg_sec_sid = SMMU_SEC_SID_S;
> }
>
Thanks
Eric
^ permalink raw reply [flat|nested] 67+ messages in thread* Re: [RFC v3 18/21] hw/arm/smmuv3: Harden security checks in MMIO handlers
2025-12-04 14:59 ` Eric Auger
@ 2025-12-05 10:36 ` Tao Tang
2025-12-05 17:23 ` Pierrick Bouvier
0 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-12-05 10:36 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/4 22:59, Eric Auger wrote:
>
> On 10/12/25 5:14 PM, Tao Tang wrote:
>> This patch hardens the security validation within the main MMIO
>> dispatcher functions (smmu_read_mmio and smmu_write_mmio).
>>
>> First, accesses to the secure register space are now correctly gated by
>> whether the SECURE_IMPL feature is enabled in the model. This prevents
>> guest software from accessing the secure programming interface when it is
>> disabled, though some registers are exempt from this check as per the
>> architecture.
>>
>> Second, the check for the input stream's security is made more robust.
>> It now validates not only the legacy MemTxAttrs.secure bit, but also
>> the .space field. This brings the SMMU's handling of security spaces
>> into full alignment with the PE.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 64 insertions(+)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index 4ac7a2f3c7..c9c742c80b 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -1458,6 +1458,12 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
>> return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
>> }
>>
>> +/* Check if the SMMU hardware itself implements secure state features */
>> +static inline bool smmu_hw_secure_implemented(SMMUv3State *s)
>> +{
>> + return FIELD_EX32(s->bank[SMMU_SEC_SID_S].idr[1], S_IDR1, SECURE_IMPL);
>> +}
>> +
>> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
>> {
>> SMMUState *bs = ARM_SMMU(s);
>> @@ -1712,6 +1718,55 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
>> return 0;
>> }
>>
>> +/*
>> + * Check if a register is exempt from the secure implementation check.
>> + *
>> + * The SMMU architecture specifies that certain secure registers, such as
>> + * the secure Event Queue IRQ configuration registers, must be accessible
>> + * even if the full secure hardware is not implemented. This function
>> + * identifies those registers.
>> + *
>> + * Returns true if the register is exempt, false otherwise.
>> + */
>> +static bool is_secure_impl_exempt_reg(hwaddr offset)
>> +{
>> + switch (offset) {
>> + case A_S_EVENTQ_IRQ_CFG0:
>> + case A_S_EVENTQ_IRQ_CFG1:
>> + case A_S_EVENTQ_IRQ_CFG2:
>> + return true;
>> + default:
>> + return false;
>> + }
>> +}
>> +
>> +/* Helper function for Secure register access validation */
> I think we shall improve the doc comment for the function. I understand
> @offset is a secure register offset and the function returns whether the
> access to the secure register is possible. This requires a) the access
> to be secure and b) in general secure state support, except for a few regs?
>> +static bool smmu_check_secure_access(SMMUv3State *s, MemTxAttrs attrs,
>> + hwaddr offset, bool is_read)
>> +{ /* Check if the access is secure */
>> + if (!(attrs.space == ARMSS_Secure ||
>> + attrs.secure == 1)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "%s: Non-secure %s attempt at offset 0x%" PRIx64 " (%s)\n",
>> + __func__, is_read ? "read" : "write", offset,
>> + is_read ? "RAZ" : "WI");
>> + return false;
>> + }
>> +
>> + /*
>> + * Check if the secure state is implemented. Some registers are exempted
>> + * from this check.
>> + */
>> + if (!is_secure_impl_exempt_reg(offset) && !smmu_hw_secure_implemented(s)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "%s: Secure %s attempt at offset 0x%" PRIx64 ". But Secure state "
>> + "is not implemented (RES0)\n",
>> + __func__, is_read ? "read" : "write", offset);
>> + return false;
>> + }
>> + return true;
>> +}
>> +
>> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>> uint64_t data, MemTxAttrs attrs,
>> SMMUSecSID reg_sec_sid)
>> @@ -2058,6 +2113,10 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
>> * statement to handle those specific security states.
>> */
>> if (offset >= SMMU_SECURE_REG_START) {
>> + if (!smmu_check_secure_access(s, attrs, offset, false)) {
>> + trace_smmuv3_write_mmio(offset, data, size, MEMTX_OK);
>> + return MEMTX_OK;
> so the access to @offset is not permitted and we return MEMTX_OK? I am
> confused
>> + }
>> reg_sec_sid = SMMU_SEC_SID_S;
>> }
>>
>> @@ -2248,6 +2307,11 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
>> offset &= ~0x10000;
>> SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
>> if (offset >= SMMU_SECURE_REG_START) {
>> + if (!smmu_check_secure_access(s, attrs, offset, true)) {
>> + *data = 0;
>> + trace_smmuv3_read_mmio(offset, *data, size, MEMTX_OK);
>> + return MEMTX_OK;
> same here?
>> + }
>> reg_sec_sid = SMMU_SEC_SID_S;
>> }
>>
> Thanks
>
> Eric
Thanks for the review and for calling out the confusion around this helper.
The `smmu_check_secure_access` helper, and the `return MEMTX_OK` taken
when it returns false, are meant to follow the SECURE_IMPL rules from
the architecture spec:
ARM IHI 0070 G.b , 6.2 Register overview
- When SMMU_S_IDR1.SECURE_IMPL == 1, SMMU_S_* registers are RAZ/WI to
Non-secure access. See section 3.11 Reset, Enable and initialization
regarding Non-secure access to SMMU_S_INIT. All other registers are
accessible to both Secure and Non-secure accesses.
- When SMMU_S_IDR1.SECURE_IMPL == 0, SMMU_S_* registers are RES0.
So the MEMTX_OK in the MMIO handlers is deliberate: we are acknowledging
the bus transaction while applying the architectural RAZ/WI/RES0
semantics at the register level, rather than modelling a bus abort. There
was also an earlier discussion of this issue [1] in the v1 series.
[1]
https://lore.kernel.org/qemu-devel/a5154459-a632-42b0-b599-d5dff85b5dd2@phytium.com.cn/
I'll add these details, along with a description of the parameters, as
comments on `smmu_check_secure_access` and the `return MEMTX_OK` paths.
What do you think?
Thanks again for the feedback,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread* Re: [RFC v3 18/21] hw/arm/smmuv3: Harden security checks in MMIO handlers
2025-12-05 10:36 ` Tao Tang
@ 2025-12-05 17:23 ` Pierrick Bouvier
0 siblings, 0 replies; 67+ messages in thread
From: Pierrick Bouvier @ 2025-12-05 17:23 UTC (permalink / raw)
To: Tao Tang, eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh
On 12/5/25 2:36 AM, Tao Tang wrote:
> Hi Eric,
>
> On 2025/12/4 22:59, Eric Auger wrote:
>>
>> On 10/12/25 5:14 PM, Tao Tang wrote:
>>> This patch hardens the security validation within the main MMIO
>>> dispatcher functions (smmu_read_mmio and smmu_write_mmio).
>>>
>>> First, accesses to the secure register space are now correctly gated by
>>> whether the SECURE_IMPL feature is enabled in the model. This prevents
>>> guest software from accessing the secure programming interface when it is
>>> disabled, though some registers are exempt from this check as per the
>>> architecture.
>>>
>>> Second, the check for the input stream's security is made more robust.
>>> It now validates not only the legacy MemTxAttrs.secure bit, but also
>>> the .space field. This brings the SMMU's handling of security spaces
>>> into full alignment with the PE.
>>>
>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>> ---
>>> hw/arm/smmuv3.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 64 insertions(+)
>>>
>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>> index 4ac7a2f3c7..c9c742c80b 100644
>>> --- a/hw/arm/smmuv3.c
>>> +++ b/hw/arm/smmuv3.c
>>> @@ -1458,6 +1458,12 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s, SMMUSecSID sec_sid)
>>> return smmu_irq_ctl_evtq_irqen_disabled(s, sec_sid);
>>> }
>>>
>>> +/* Check if the SMMU hardware itself implements secure state features */
>>> +static inline bool smmu_hw_secure_implemented(SMMUv3State *s)
>>> +{
>>> + return FIELD_EX32(s->bank[SMMU_SEC_SID_S].idr[1], S_IDR1, SECURE_IMPL);
>>> +}
>>> +
>>> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
>>> {
>>> SMMUState *bs = ARM_SMMU(s);
>>> @@ -1712,6 +1718,55 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecSID sec_sid)
>>> return 0;
>>> }
>>>
>>> +/*
>>> + * Check if a register is exempt from the secure implementation check.
>>> + *
>>> + * The SMMU architecture specifies that certain secure registers, such as
>>> + * the secure Event Queue IRQ configuration registers, must be accessible
>>> + * even if the full secure hardware is not implemented. This function
>>> + * identifies those registers.
>>> + *
>>> + * Returns true if the register is exempt, false otherwise.
>>> + */
>>> +static bool is_secure_impl_exempt_reg(hwaddr offset)
>>> +{
>>> + switch (offset) {
>>> + case A_S_EVENTQ_IRQ_CFG0:
>>> + case A_S_EVENTQ_IRQ_CFG1:
>>> + case A_S_EVENTQ_IRQ_CFG2:
>>> + return true;
>>> + default:
>>> + return false;
>>> + }
>>> +}
>>> +
>>> +/* Helper function for Secure register access validation */
>> I think we shall improve the doc comment for the function. I understand
>> @offset is a secure register offset and the function returns whether the
>> access to the secure register is possible. This requires a) the access
>> to be secure and b) in general secure state support, except for a few regs?
>>> +static bool smmu_check_secure_access(SMMUv3State *s, MemTxAttrs attrs,
>>> + hwaddr offset, bool is_read)
>>> +{ /* Check if the access is secure */
>>> + if (!(attrs.space == ARMSS_Secure ||
>>> + attrs.secure == 1)) {
>>> + qemu_log_mask(LOG_GUEST_ERROR,
>>> + "%s: Non-secure %s attempt at offset 0x%" PRIx64 " (%s)\n",
>>> + __func__, is_read ? "read" : "write", offset,
>>> + is_read ? "RAZ" : "WI");
>>> + return false;
>>> + }
>>> +
>>> + /*
>>> + * Check if the secure state is implemented. Some registers are exempted
>>> + * from this check.
>>> + */
>>> + if (!is_secure_impl_exempt_reg(offset) && !smmu_hw_secure_implemented(s)) {
>>> + qemu_log_mask(LOG_GUEST_ERROR,
>>> + "%s: Secure %s attempt at offset 0x%" PRIx64 ". But Secure state "
>>> + "is not implemented (RES0)\n",
>>> + __func__, is_read ? "read" : "write", offset);
>>> + return false;
>>> + }
>>> + return true;
>>> +}
>>> +
>>> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>>> uint64_t data, MemTxAttrs attrs,
>>> SMMUSecSID reg_sec_sid)
>>> @@ -2058,6 +2113,10 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
>>> * statement to handle those specific security states.
>>> */
>>> if (offset >= SMMU_SECURE_REG_START) {
>>> + if (!smmu_check_secure_access(s, attrs, offset, false)) {
>>> + trace_smmuv3_write_mmio(offset, data, size, MEMTX_OK);
>>> + return MEMTX_OK;
>> so the access to @offset is not permitted and we return MEMTX_OK? I am
>> confused
>>> + }
>>> reg_sec_sid = SMMU_SEC_SID_S;
>>> }
>>>
>>> @@ -2248,6 +2307,11 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
>>> offset &= ~0x10000;
>>> SMMUSecSID reg_sec_sid = SMMU_SEC_SID_NS;
>>> if (offset >= SMMU_SECURE_REG_START) {
>>> + if (!smmu_check_secure_access(s, attrs, offset, true)) {
>>> + *data = 0;
>>> + trace_smmuv3_read_mmio(offset, *data, size, MEMTX_OK);
>>> + return MEMTX_OK;
>> same here?
>>> + }
>>> reg_sec_sid = SMMU_SEC_SID_S;
>>> }
>>>
>> Thanks
>>
>> Eric
>
>
> Thanks for the review and for calling out the confusion around this helper.
>
>
> The `smmu_check_secure_access` helper, and the `return MEMTX_OK` taken
> when it returns false, are meant to follow the SECURE_IMPL rules from
> the architecture spec:
>
> ARM IHI 0070 G.b , 6.2 Register overview
>
> - When SMMU_S_IDR1.SECURE_IMPL == 1, SMMU_S_* registers are RAZ/WI to
> Non-secure access. See section 3.11 Reset, Enable and initialization
> regarding Non-secure access to SMMU_S_INIT. All other registers are
> accessible to both Secure and Non-secure accesses.
> - When SMMU_S_IDR1.SECURE_IMPL == 0, SMMU_S_* registers are RES0.
>
It may be worth adding a comment quoting the spec; two of us have now
asked the same question. It definitely looks like a mistake when reading
it for the first time.
>
> So the MEMTX_OK in the MMIO handlers is deliberate: we are acknowledging
> the bus transaction while applying the architectural RAZ/WI/RES0
> semantics at the register level, rather than modelling a bus abort. Also
> there was another discussion about this issue [1] in V1 series.
>
> [1]
> https://lore.kernel.org/qemu-devel/a5154459-a632-42b0-b599-d5dff85b5dd2@phytium.com.cn/
>
>
> I'll add these details, along with a description of the parameters, as
> comments on `smmu_check_secure_access` and the `return MEMTX_OK` paths.
> What do you think?
>
>
> Thanks again for the feedback,
>
> Tao
>
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (17 preceding siblings ...)
2025-10-12 15:14 ` [RFC v3 18/21] hw/arm/smmuv3: Harden security checks in MMIO handlers Tao Tang
@ 2025-10-12 15:15 ` Tao Tang
2025-10-15 0:02 ` Pierrick Bouvier
2025-10-12 15:15 ` [RFC v3 20/21] hw/arm/smmuv3: Initialize the secure register bank Tao Tang
2025-10-12 15:16 ` [RFC v3 21/21] hw/arm/smmuv3: Add secure migration and enable secure state Tao Tang
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:15 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
the programming interface. To support future extensions like RME, which
defines four security states (Non-secure, Secure, Realm, and Root), the
QEMU model must cleanly separate these contexts for all operations.
This commit leverages the generic iommu_index to represent this
security context. The core IOMMU layer now uses the SMMU's .attrs_to_index
callback to map a transaction's ARMSecuritySpace attribute to the
corresponding iommu_index.
This index is then passed down to smmuv3_translate and used throughout
the model to select the correct register bank and processing logic. This
makes the iommu_index the clear QEMU equivalent of the architectural
SEC_SID, cleanly separating the contexts for all subsequent lookups.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index c9c742c80b..b44859540f 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo *event, hwaddr iova)
}
}
+static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
+{
+ switch (attrs.space) {
+ case ARMSS_Secure:
+ return SMMU_SEC_SID_S;
+ case ARMSS_NonSecure:
+ default:
+ return SMMU_SEC_SID_NS;
+ }
+}
+
+/*
+ * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
+ * iommu_idx = 0: Non-secure transactions
+ * iommu_idx = 1: Secure transactions
+ *
+ * The iommu_idx parameter effectively implements the SEC_SID
+ * (Security Stream ID) attribute from the ARM SMMU architecture specification,
+ * which allows the SMMU to differentiate between different security state
+ * transactions at the hardware level.
+ */
+static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
+{
+ return (int)smmuv3_attrs_to_sec_sid(attrs);
+}
+
+static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
+{
+ /* Support 2 IOMMU indexes for now: NS/S */
+ return SMMU_SEC_SID_NUM;
+}
+
/* Entry point to SMMU, does everything. */
static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
IOMMUAccessFlags flag, int iommu_idx)
@@ -1087,7 +1119,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
SMMUv3State *s = sdev->smmu;
uint32_t sid = smmu_get_sid(sdev);
- SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
+ SMMUSecSID sec_sid = iommu_idx;
SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
SMMUEventInfo event = {.type = SMMU_EVT_NONE,
.sid = sid,
@@ -2540,6 +2572,8 @@ static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
imrc->translate = smmuv3_translate;
imrc->notify_flag_changed = smmuv3_notify_flag_changed;
+ imrc->attrs_to_index = smmuv3_attrs_to_index;
+ imrc->num_indexes = smmuv3_num_indexes;
}
static const TypeInfo smmuv3_type_info = {
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread

* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-12 15:15 ` [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context Tao Tang
@ 2025-10-15 0:02 ` Pierrick Bouvier
2025-10-16 6:37 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Pierrick Bouvier @ 2025-10-15 0:02 UTC (permalink / raw)
To: Tao Tang, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
Hi Tao,
On 10/12/25 8:15 AM, Tao Tang wrote:
> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
> the programming interface. To support future extensions like RME, which
> defines four security states (Non-secure, Secure, Realm, and Root), the
> QEMU model must cleanly separate these contexts for all operations.
>
> This commit leverages the generic iommu_index to represent this
> security context. The core IOMMU layer now uses the SMMU's .attrs_to_index
> callback to map a transaction's ARMSecuritySpace attribute to the
> corresponding iommu_index.
>
> This index is then passed down to smmuv3_translate and used throughout
> the model to select the correct register bank and processing logic. This
> makes the iommu_index the clear QEMU equivalent of the architectural
> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
> 1 file changed, 35 insertions(+), 1 deletion(-)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index c9c742c80b..b44859540f 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo *event, hwaddr iova)
> }
> }
>
> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
> +{
> + switch (attrs.space) {
> + case ARMSS_Secure:
> + return SMMU_SEC_SID_S;
> + case ARMSS_NonSecure:
> + default:
> + return SMMU_SEC_SID_NS;
> + }
> +}
> +
> +/*
> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
> + * iommu_idx = 0: Non-secure transactions
> + * iommu_idx = 1: Secure transactions
> + *
> + * The iommu_idx parameter effectively implements the SEC_SID
> + * (Security Stream ID) attribute from the ARM SMMU architecture specification,
> + * which allows the SMMU to differentiate between different security state
> + * transactions at the hardware level.
> + */
> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
> +{
> + return (int)smmuv3_attrs_to_sec_sid(attrs);
> +}
> +
> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
> +{
> + /* Support 2 IOMMU indexes for now: NS/S */
> + return SMMU_SEC_SID_NUM;
> +}
> +
> /* Entry point to SMMU, does everything. */
> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> IOMMUAccessFlags flag, int iommu_idx)
> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
> SMMUv3State *s = sdev->smmu;
> uint32_t sid = smmu_get_sid(sdev);
> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
> + SMMUSecSID sec_sid = iommu_idx;
> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
> .sid = sid,
> @@ -2540,6 +2572,8 @@ static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>
> imrc->translate = smmuv3_translate;
> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
> + imrc->attrs_to_index = smmuv3_attrs_to_index;
> + imrc->num_indexes = smmuv3_num_indexes;
> }
>
> static const TypeInfo smmuv3_type_info = {
I noticed that this commit breaks boot of a simple Linux kernel. It was
already the case with v2, and it seems there is a deeper issue.
Virtio drive initialization hangs up with:
[ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
blocks (10.7 GB/10.0 GiB)
smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18 bypass
(smmu disabled) iova:0xfffff040 is_write=1
You can reproduce that with any kernel/rootfs, but if you want a simple
recipe (you need podman and qemu-user-static):
$ git clone https://github.com/pbo-linaro/qemu-linux-stack
$ cd qemu-linux-stack
$ ./build_kernel.sh
$ ./build_rootfs.sh
$ /path/to/qemu-system-aarch64 \
-nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
-append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
Looking more closely,
we reach SMMU_TRANS_DISABLE, because the associated iommu_idx is 1.
This value comes from smmuv3_attrs_to_sec_sid, by reading attrs.space,
which is ARMSS_Secure.
The problem is that it's impossible to have anything Secure given that
all the code above runs in NonSecure world.
After investigation, the original value read from attrs.space has not
been set anywhere, and is just the default zero-initialized value coming
from pci_msi_trigger. It happens that it defaults to SEC_SID_S, which
probably matches your use case with Hafnium, but it's a happy accident.
Looking at the SMMU spec, I understand that SEC_SID is configured for
each stream, and can change dynamically.
On the opposite, a StreamID is fixed and derived from PCI bus and slot
for a given device.
Thus, I think we are missing some logic here.
I'm still trying to understand where the SEC_SID should come from initially.
"The association between a device and the Security state of the
programming interface is a system-defined property."
Does it mean we should be able to set a QEMU property for any device?
Does anyone familiar with this have some idea?
We should also check the SEC_SID found against SMMU_S_IDR1.SECURE_IMPL.
3.10.1 StreamID Security state (SEC_SID)
If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
StreamID, and either:
• A SEC_SID identifier with a value of 0.
• No SEC_SID identifier, and SEC_SID is implicitly treated as 0.
If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a StreamID,
and a SEC_SID identifier.
Regards,
Pierrick
* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-15 0:02 ` Pierrick Bouvier
@ 2025-10-16 6:37 ` Tao Tang
2025-10-16 7:04 ` Pierrick Bouvier
0 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-16 6:37 UTC (permalink / raw)
To: Pierrick Bouvier, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
Hi Pierrick:
On 2025/10/15 08:02, Pierrick Bouvier wrote:
> Hi Tao,
>
> On 10/12/25 8:15 AM, Tao Tang wrote:
>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
>> the programming interface. To support future extensions like RME, which
>> defines four security states (Non-secure, Secure, Realm, and Root), the
>> QEMU model must cleanly separate these contexts for all operations.
>>
>> This commit leverages the generic iommu_index to represent this
>> security context. The core IOMMU layer now uses the SMMU's
>> .attrs_to_index
>> callback to map a transaction's ARMSecuritySpace attribute to the
>> corresponding iommu_index.
>>
>> This index is then passed down to smmuv3_translate and used throughout
>> the model to select the correct register bank and processing logic. This
>> makes the iommu_index the clear QEMU equivalent of the architectural
>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index c9c742c80b..b44859540f 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo
>> *event, hwaddr iova)
>> }
>> }
>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>> +{
>> + switch (attrs.space) {
>> + case ARMSS_Secure:
>> + return SMMU_SEC_SID_S;
>> + case ARMSS_NonSecure:
>> + default:
>> + return SMMU_SEC_SID_NS;
>> + }
>> +}
>> +
>> +/*
>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>> + * iommu_idx = 0: Non-secure transactions
>> + * iommu_idx = 1: Secure transactions
>> + *
>> + * The iommu_idx parameter effectively implements the SEC_SID
>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>> specification,
>> + * which allows the SMMU to differentiate between different security
>> state
>> + * transactions at the hardware level.
>> + */
>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>> MemTxAttrs attrs)
>> +{
>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>> +}
>> +
>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>> +{
>> + /* Support 2 IOMMU indexes for now: NS/S */
>> + return SMMU_SEC_SID_NUM;
>> +}
>> +
>> /* Entry point to SMMU, does everything. */
>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr
>> addr,
>> IOMMUAccessFlags flag, int
>> iommu_idx)
>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>> SMMUv3State *s = sdev->smmu;
>> uint32_t sid = smmu_get_sid(sdev);
>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>> + SMMUSecSID sec_sid = iommu_idx;
>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>> .sid = sid,
>> @@ -2540,6 +2572,8 @@ static void
>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>> imrc->translate = smmuv3_translate;
>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>> + imrc->num_indexes = smmuv3_num_indexes;
>> }
>> static const TypeInfo smmuv3_type_info = {
>
> I noticed that this commit breaks boot of a simple Linux kernel. It
> was already the case with v2, and it seems there is a deeper issue.
>
> Virtio drive initialization hangs up with:
> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
> blocks (10.7 GB/10.0 GiB)
> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
> bypass (smmu disabled) iova:0xfffff040 is_write=1
>
> You can reproduce that with any kernel/rootfs, but if you want a
> simple recipe (you need podman and qemu-user-static):
> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
> $ cd qemu-linux-stack
> $ ./build_kernel.sh
> $ ./build_rootfs.sh
> $ /path/to/qemu-system-aarch64 \
> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>
> Looking more closely,
> we reach SMMU_TRANS_DISABLE, because iommu_idx associated is 1.
> This value comes from smmuv3_attrs_to_sec_sid, by reading
> attrs.space, which is ARMSS_Secure.
>
> The problem is that it's impossible to have anything Secure given that
> all the code above runs in NonSecure world.
> After investigation, the original value read from attrs.space has not
> been set anywhere, and is just the default zero-initialized value
> coming from pci_msi_trigger. It happens that it defaults to SEC_SID_S,
> which probably matches your use case with hafnium, but it's an happy
> accident.
>
> Looking at the SMMU spec, I understand that SEC_SID is configured for
> each stream, and can change dynamically.
> On the opposite, a StreamID is fixed and derived from PCI bus and slot
> for a given device.
>
> Thus, I think we are missing some logic here.
> I'm still trying to understand where the SEC_SID should come from
> initially.
> "The association between a device and the Security state of the
> programming interface is a system-defined property."
> Does it mean we should be able to set a QEMU property for any device?
>
> Does anyone familiar with this have some idea?
>
> As well, we should check the SEC_SID found based on
> SMMU_S_IDR1.SECURE_IMPL.
> 3.10.1 StreamID Security state (SEC_SID)
> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
> StreamID, and either:
> • A SEC_SID identifier with a value of 0.
> • No SEC_SID identifier, and SEC_SID is implicitly treated as 0.
> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
> StreamID, and a SEC_SID identifier.
>
> Regards,
> Pierrick
Thank you very much for your detailed review and in-depth analysis, and
for pointing out this critical issue that breaks the Linux boot.
To be transparent, my initial approach was indeed tailored to my
specific test case, where I was effectively hardcoding the device's
StreamID to mark it as a so-called Secure device in my own testing.
This clearly isn't a general solution.
You've raised a crucial architectural point that I hadn't fully
considered: how a standard "Normal World" PCIe device should be properly
associated with the "Secure World". To be honest, I didn't have a clear
answer for this, so your feedback is a perfect opportunity for me to dig
in and understand this area correctly.
I'd be very interested to hear ideas from others on the list who are
more familiar with this topic.
Best,
Tao
* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-16 6:37 ` Tao Tang
@ 2025-10-16 7:04 ` Pierrick Bouvier
2025-10-20 8:44 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Pierrick Bouvier @ 2025-10-16 7:04 UTC (permalink / raw)
To: Tao Tang, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
On 10/15/25 11:37 PM, Tao Tang wrote:
> Hi Pierrick:
>
> On 2025/10/15 08:02, Pierrick Bouvier wrote:
>> Hi Tao,
>>
>> On 10/12/25 8:15 AM, Tao Tang wrote:
>>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
>>> the programming interface. To support future extensions like RME, which
>>> defines four security states (Non-secure, Secure, Realm, and Root), the
>>> QEMU model must cleanly separate these contexts for all operations.
>>>
>>> This commit leverages the generic iommu_index to represent this
>>> security context. The core IOMMU layer now uses the SMMU's
>>> .attrs_to_index
>>> callback to map a transaction's ARMSecuritySpace attribute to the
>>> corresponding iommu_index.
>>>
>>> This index is then passed down to smmuv3_translate and used throughout
>>> the model to select the correct register bank and processing logic. This
>>> makes the iommu_index the clear QEMU equivalent of the architectural
>>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>>
>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>> ---
>>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>> index c9c742c80b..b44859540f 100644
>>> --- a/hw/arm/smmuv3.c
>>> +++ b/hw/arm/smmuv3.c
>>> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo
>>> *event, hwaddr iova)
>>> }
>>> }
>>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>>> +{
>>> + switch (attrs.space) {
>>> + case ARMSS_Secure:
>>> + return SMMU_SEC_SID_S;
>>> + case ARMSS_NonSecure:
>>> + default:
>>> + return SMMU_SEC_SID_NS;
>>> + }
>>> +}
>>> +
>>> +/*
>>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>>> + * iommu_idx = 0: Non-secure transactions
>>> + * iommu_idx = 1: Secure transactions
>>> + *
>>> + * The iommu_idx parameter effectively implements the SEC_SID
>>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>>> specification,
>>> + * which allows the SMMU to differentiate between different security
>>> state
>>> + * transactions at the hardware level.
>>> + */
>>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>>> MemTxAttrs attrs)
>>> +{
>>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>>> +}
>>> +
>>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>>> +{
>>> + /* Support 2 IOMMU indexes for now: NS/S */
>>> + return SMMU_SEC_SID_NUM;
>>> +}
>>> +
>>> /* Entry point to SMMU, does everything. */
>>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr
>>> addr,
>>> IOMMUAccessFlags flag, int
>>> iommu_idx)
>>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>>> SMMUv3State *s = sdev->smmu;
>>> uint32_t sid = smmu_get_sid(sdev);
>>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>>> + SMMUSecSID sec_sid = iommu_idx;
>>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>>> .sid = sid,
>>> @@ -2540,6 +2572,8 @@ static void
>>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>> imrc->translate = smmuv3_translate;
>>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>>> + imrc->num_indexes = smmuv3_num_indexes;
>>> }
>>> static const TypeInfo smmuv3_type_info = {
>>
>> I noticed that this commit breaks boot of a simple Linux kernel. It
>> was already the case with v2, and it seems there is a deeper issue.
>>
>> Virtio drive initialization hangs up with:
>> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
>> blocks (10.7 GB/10.0 GiB)
>> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
>> bypass (smmu disabled) iova:0xfffff040 is_write=1
>>
>> You can reproduce that with any kernel/rootfs, but if you want a
>> simple recipe (you need podman and qemu-user-static):
>> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
>> $ cd qemu-linux-stack
>> $ ./build_kernel.sh
>> $ ./build_rootfs.sh
>> $ /path/to/qemu-system-aarch64 \
>> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
>> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>>
>> Looking more closely,
>> we reach SMMU_TRANS_DISABLE, because iommu_idx associated is 1.
>> This value comes from smmuv3_attrs_to_sec_sid, by reading
>> attrs.space, which is ARMSS_Secure.
>>
>> The problem is that it's impossible to have anything Secure given that
>> all the code above runs in NonSecure world.
>> After investigation, the original value read from attrs.space has not
>> been set anywhere, and is just the default zero-initialized value
>> coming from pci_msi_trigger. It happens that it defaults to SEC_SID_S,
>> which probably matches your use case with Hafnium, but it's a happy
>> accident.
>>
>> Looking at the SMMU spec, I understand that SEC_SID is configured for
>> each stream, and can change dynamically.
>> On the opposite, a StreamID is fixed and derived from PCI bus and slot
>> for a given device.
>>
>> Thus, I think we are missing some logic here.
>> I'm still trying to understand where the SEC_SID should come from
>> initially.
>> "The association between a device and the Security state of the
>> programming interface is a system-defined property."
>> Does it mean we should be able to set a QEMU property for any device?
>>
>> Does anyone familiar with this have some idea?
>>
>> As well, we should check the SEC_SID found based on
>> SMMU_S_IDR1.SECURE_IMPL.
>> 3.10.1 StreamID Security state (SEC_SID)
>> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
>> StreamID, and either:
>> • A SEC_SID identifier with a value of 0.
>> • No SEC_SID identifier, and SEC_SID is implicitly treated as 0.
>> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
>> StreamID, and a SEC_SID identifier.
>>
>> Regards,
>> Pierrick
>
> Thank you very much for your detailed review and in-depth analysis, and
> for pointing out this critical issue that breaks the Linux boot.
>
>
> To be transparent, my initial approach was indeed tailored to my
> specific test case, where I was effectively hardcoding the device's
> StreamID to mark it as a so-called Secure device in my own testing.
> This clearly isn't a general solution.
>
It's definitely not a bad approach, and it's a good way to exercise the
secure path. It would have been caught by some of QEMU's functional tests
anyway, so it's not a big deal.
A solution would be to define the secure attribute as a property of the
PCI device, and query that to identify sec_sid accordingly.
As you'll see in 3.10.1 StreamID Security state (SEC_SID), "Whether a
stream is under Secure control or not is a different property to the
target PA space of a transaction.", so we definitely should *not* do any
funky stuff depending on which address is accessed.
By curiosity, which kind of secure device are you using? Is it one of
the device available upstream, or a specific one you have in your fork?
>
> You've raised a crucial architectural point that I hadn't fully
> considered: how a standard "Normal World" PCIe device should be properly
> associated with the "Secure World". To be honest, I didn't have a clear
> answer for this, so your feedback is a perfect opportunity for me to dig
> in and understand this area correctly.
>
It took us time to reach that question as well.
Our current understanding is that SEC_SID == Realm is identified by bits
on the PCI side (part of the TDISP protocol), and that secure devices are indeed
hardcoded somewhere.
We asked this question to some Arm folks working on this area, to
confirm Secure devices are supposed to be defined this way.
>
> I'd be very interested to hear ideas from others on the list who are
> more familiar with this topic.
>
>
> Best,
>
> Tao
>
>
* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-16 7:04 ` Pierrick Bouvier
@ 2025-10-20 8:44 ` Tao Tang
2025-10-20 22:55 ` Pierrick Bouvier
2025-12-04 15:05 ` Eric Auger
0 siblings, 2 replies; 67+ messages in thread
From: Tao Tang @ 2025-10-20 8:44 UTC (permalink / raw)
To: Pierrick Bouvier, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
Hi Pierrick,
On 2025/10/16 15:04, Pierrick Bouvier wrote:
> On 10/15/25 11:37 PM, Tao Tang wrote:
>> Hi Pierrick:
>>
>> On 2025/10/15 08:02, Pierrick Bouvier wrote:
>>> Hi Tao,
>>>
>>> On 10/12/25 8:15 AM, Tao Tang wrote:
>>>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
>>>> the programming interface. To support future extensions like RME,
>>>> which
>>>> defines four security states (Non-secure, Secure, Realm, and Root),
>>>> the
>>>> QEMU model must cleanly separate these contexts for all operations.
>>>>
>>>> This commit leverages the generic iommu_index to represent this
>>>> security context. The core IOMMU layer now uses the SMMU's
>>>> .attrs_to_index
>>>> callback to map a transaction's ARMSecuritySpace attribute to the
>>>> corresponding iommu_index.
>>>>
>>>> This index is then passed down to smmuv3_translate and used throughout
>>>> the model to select the correct register bank and processing logic.
>>>> This
>>>> makes the iommu_index the clear QEMU equivalent of the architectural
>>>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>>>
>>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>>> ---
>>>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>>>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>>> index c9c742c80b..b44859540f 100644
>>>> --- a/hw/arm/smmuv3.c
>>>> +++ b/hw/arm/smmuv3.c
>>>> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo
>>>> *event, hwaddr iova)
>>>> }
>>>> }
>>>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>>>> +{
>>>> + switch (attrs.space) {
>>>> + case ARMSS_Secure:
>>>> + return SMMU_SEC_SID_S;
>>>> + case ARMSS_NonSecure:
>>>> + default:
>>>> + return SMMU_SEC_SID_NS;
>>>> + }
>>>> +}
>>>> +
>>>> +/*
>>>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>>>> + * iommu_idx = 0: Non-secure transactions
>>>> + * iommu_idx = 1: Secure transactions
>>>> + *
>>>> + * The iommu_idx parameter effectively implements the SEC_SID
>>>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>>>> specification,
>>>> + * which allows the SMMU to differentiate between different security
>>>> state
>>>> + * transactions at the hardware level.
>>>> + */
>>>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>>>> MemTxAttrs attrs)
>>>> +{
>>>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>>>> +}
>>>> +
>>>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>>>> +{
>>>> + /* Support 2 IOMMU indexes for now: NS/S */
>>>> + return SMMU_SEC_SID_NUM;
>>>> +}
>>>> +
>>>> /* Entry point to SMMU, does everything. */
>>>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr
>>>> addr,
>>>> IOMMUAccessFlags flag, int
>>>> iommu_idx)
>>>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>>>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>>>> SMMUv3State *s = sdev->smmu;
>>>> uint32_t sid = smmu_get_sid(sdev);
>>>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>>>> + SMMUSecSID sec_sid = iommu_idx;
>>>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>>>> .sid = sid,
>>>> @@ -2540,6 +2572,8 @@ static void
>>>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>>> imrc->translate = smmuv3_translate;
>>>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>>>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>>>> + imrc->num_indexes = smmuv3_num_indexes;
>>>> }
>>>> static const TypeInfo smmuv3_type_info = {
>>>
>>> I noticed that this commit breaks boot of a simple Linux kernel. It
>>> was already the case with v2, and it seems there is a deeper issue.
>>>
>>> Virtio drive initialization hangs up with:
>>> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
>>> blocks (10.7 GB/10.0 GiB)
>>> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
>>> bypass (smmu disabled) iova:0xfffff040 is_write=1
>>>
>>> You can reproduce that with any kernel/rootfs, but if you want a
>>> simple recipe (you need podman and qemu-user-static):
>>> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
>>> $ cd qemu-linux-stack
>>> $ ./build_kernel.sh
>>> $ ./build_rootfs.sh
>>> $ /path/to/qemu-system-aarch64 \
>>> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
>>> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>>>
>>> Looking more closely,
>>> we reach SMMU_TRANS_DISABLE, because iommu_idx associated is 1.
>>> This value comes from smmuv3_attrs_to_sec_sid, by reading
>>> attrs.space, which is ARMSS_Secure.
>>>
>>> The problem is that it's impossible to have anything Secure given that
>>> all the code above runs in NonSecure world.
>>> After investigation, the original value read from attrs.space has not
>>> been set anywhere, and is just the default zero-initialized value
>>> coming from pci_msi_trigger. It happens that it defaults to SEC_SID_S,
>>> which probably matches your use case with Hafnium, but it's a happy
>>> accident.
>>>
>>> Looking at the SMMU spec, I understand that SEC_SID is configured for
>>> each stream, and can change dynamically.
>>> On the opposite, a StreamID is fixed and derived from PCI bus and slot
>>> for a given device.
>>>
>>> Thus, I think we are missing some logic here.
>>> I'm still trying to understand where the SEC_SID should come from
>>> initially.
>>> "The association between a device and the Security state of the
>>> programming interface is a system-defined property."
>>> Does it mean we should be able to set a QEMU property for any device?
>>>
>>> Does anyone familiar with this have some idea?
>>>
>>> As well, we should check the SEC_SID found based on
>>> SMMU_S_IDR1.SECURE_IMPL.
>>> 3.10.1 StreamID Security state (SEC_SID)
>>> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
>>> StreamID, and either:
>>> • A SEC_SID identifier with a value of 0.
>>> • No SEC_SID identifier, and SEC_SID is implicitly treated as 0.
>>> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
>>> StreamID, and a SEC_SID identifier.
>>>
>>> Regards,
>>> Pierrick
>>
>> Thank you very much for your detailed review and in-depth analysis, and
>> for pointing out this critical issue that breaks the Linux boot.
>>
>>
>> To be transparent, my initial approach was indeed tailored to my
>> specific test case, where I was effectively hardcoding the device's
>> StreamID to mark it as a so-called Secure device in my own testing.
>> This clearly isn't a general solution.
>>
>
> It's definitely not a bad approach, and it's a good way to exercise
> the secure path. It would have been caught by some of QEMU's functional
> tests anyway, so it's not a big deal.
>
> A solution would be to define the secure attribute as a property of
> the PCI device, and query that to identify sec_sid accordingly.
> As you'll see in 3.10.1 StreamID Security state (SEC_SID), "Whether a
> stream is under Secure control or not is a different property to the
> target PA space of a transaction.", so we definitely should *not* do
> any funky stuff depending on which address is accessed.
Thank you for the encouraging and very constructive feedback.
Your proposed solution, defining the security attribute as a property on
the PCIDevice, is the perfect way forward to resolve the Secure device issue.
Perhaps we can implement this functionality in V4 as shown in the
following code snippet?
1) define sec_sid in include/hw/pci/pci_device.h:
struct PCIDevice {
    DeviceState qdev;
    ......
    /* Add SEC_SID property for SMMU security context */
    uint8_t sec_sid; /* 0 = Non-secure, 1 = Secure */
    ......
};
2) then add a sec-sid field to the PCI property list in hw/pci/pci.c:
static const Property pci_props[] = {
    ......
    /* SEC_SID property: 0=NS, 1=S */
    DEFINE_PROP_UINT8("sec-sid", PCIDevice, sec_sid, 0),
    ......
};
3) get sec-sid in smmu_find_add_as(hw/arm/smmu-common.c):
static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
{
    SMMUState *s = opaque;
    SMMUPciBus *sbus = g_hash_table_lookup(s->smmu_pcibus_by_busptr, bus);
    SMMUDevice *sdev;
    static unsigned int index;
    ......
    sdev = sbus->pbdev[devfn];
    if (!sdev) {
        PCIDevice *pcidev;
        pcidev = pci_find_device(bus, pci_bus_num(bus), devfn);
        if (pcidev) {
            /*
             * Get sec_sid, which originally comes from the QEMU options.
             * For example:
             *   qemu-system-aarch64 \
             *     -drive if=none,file=/nvme.img,format=raw,id=nvme0 \
             *     -device nvme,drive=nvme0,serial=deadbeef,sec-sid=1
             *
             * This NVMe device will have sec_sid = 1.
             */
            sdev->sec_sid = pcidev->sec_sid;
        } else {
            /* Default to Non-secure if device not found */
            sdev->sec_sid = 0;
        }
        ......
    }
The SEC_SID of the device will be passed from the QEMU options to the
PCIDevice and then to the SMMUDevice. This would allow the SMMU model to
perform the necessary checks against both the security context of the DMA
access and the SMMU_S_IDR1.SECURE_IMPL capability bit.
Is this a reasonable implementation approach? I would greatly appreciate
any feedback.
>
> By curiosity, which kind of secure device are you using? Is it one of
> the device available upstream, or a specific one you have in your fork?
I just use the IGB NIC for testing with the Hafnium + OP-TEE software stack.
>
>>
>> You've raised a crucial architectural point that I hadn't fully
>> considered: how a standard "Normal World" PCIe device should be properly
>> associated with the "Secure World". To be honest, I didn't have a clear
>> answer for this, so your feedback is a perfect opportunity for me to dig
>> in and understand this area correctly.
>>
> It took time for us to reach that question also.
> Our current understanding is that SEC_SID == Realm is identified by
> bits on pci side (part of TDISP protocol), and that secure devices are
> indeed hardcoded somewhere.
>
> We asked this question to some Arm folks working on this area, to
> confirm Secure devices are supposed to be defined this way.
Thank you also for sharing the invaluable context from your team's
internal discussions and your outreach to the Arm experts. This
clarification directly inspired my new proposal as described above.
I will proceed with this plan for the v4 patch set. Thanks again for
your mentorship and for helping to clarify the correct path forward.
Best regards,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread

* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-20 8:44 ` Tao Tang
@ 2025-10-20 22:55 ` Pierrick Bouvier
2025-10-21 3:51 ` Tao Tang
2025-12-04 15:05 ` Eric Auger
1 sibling, 1 reply; 67+ messages in thread
From: Pierrick Bouvier @ 2025-10-20 22:55 UTC (permalink / raw)
To: Tao Tang, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
On 2025-10-20 01:44, Tao Tang wrote:
> Hi Pierrick,
>
> On 2025/10/16 15:04, Pierrick Bouvier wrote:
>> On 10/15/25 11:37 PM, Tao Tang wrote:
>>> Hi Pierrick:
>>>
>>> On 2025/10/15 08:02, Pierrick Bouvier wrote:
>>>> Hi Tao,
>>>>
>>>> On 10/12/25 8:15 AM, Tao Tang wrote:
>>>>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
>>>>> the programming interface. To support future extensions like RME,
>>>>> which
>>>>> defines four security states (Non-secure, Secure, Realm, and Root),
>>>>> the
>>>>> QEMU model must cleanly separate these contexts for all operations.
>>>>>
>>>>> This commit leverages the generic iommu_index to represent this
>>>>> security context. The core IOMMU layer now uses the SMMU's
>>>>> .attrs_to_index
>>>>> callback to map a transaction's ARMSecuritySpace attribute to the
>>>>> corresponding iommu_index.
>>>>>
>>>>> This index is then passed down to smmuv3_translate and used throughout
>>>>> the model to select the correct register bank and processing logic.
>>>>> This
>>>>> makes the iommu_index the clear QEMU equivalent of the architectural
>>>>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>>>>
>>>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>>>> ---
>>>>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>>>>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>>>> index c9c742c80b..b44859540f 100644
>>>>> --- a/hw/arm/smmuv3.c
>>>>> +++ b/hw/arm/smmuv3.c
>>>>> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo
>>>>> *event, hwaddr iova)
>>>>> }
>>>>> }
>>>>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>>>>> +{
>>>>> + switch (attrs.space) {
>>>>> + case ARMSS_Secure:
>>>>> + return SMMU_SEC_SID_S;
>>>>> + case ARMSS_NonSecure:
>>>>> + default:
>>>>> + return SMMU_SEC_SID_NS;
>>>>> + }
>>>>> +}
>>>>> +
>>>>> +/*
>>>>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>>>>> + * iommu_idx = 0: Non-secure transactions
>>>>> + * iommu_idx = 1: Secure transactions
>>>>> + *
>>>>> + * The iommu_idx parameter effectively implements the SEC_SID
>>>>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>>>>> specification,
>>>>> + * which allows the SMMU to differentiate between different security
>>>>> state
>>>>> + * transactions at the hardware level.
>>>>> + */
>>>>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>>>>> MemTxAttrs attrs)
>>>>> +{
>>>>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>>>>> +}
>>>>> +
>>>>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>>>>> +{
>>>>> + /* Support 2 IOMMU indexes for now: NS/S */
>>>>> + return SMMU_SEC_SID_NUM;
>>>>> +}
>>>>> +
>>>>> /* Entry point to SMMU, does everything. */
>>>>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr
>>>>> addr,
>>>>> IOMMUAccessFlags flag, int
>>>>> iommu_idx)
>>>>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>>>>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>>>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>>>>> SMMUv3State *s = sdev->smmu;
>>>>> uint32_t sid = smmu_get_sid(sdev);
>>>>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>>>>> + SMMUSecSID sec_sid = iommu_idx;
>>>>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>>>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>>>>> .sid = sid,
>>>>> @@ -2540,6 +2572,8 @@ static void
>>>>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>>>> imrc->translate = smmuv3_translate;
>>>>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>>>>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>>>>> + imrc->num_indexes = smmuv3_num_indexes;
>>>>> }
>>>>> static const TypeInfo smmuv3_type_info = {
>>>>
>>>> I noticed that this commit breaks boot of a simple Linux kernel. It
>>>> was already the case with v2, and it seems there is a deeper issue.
>>>>
>>>> Virtio drive initialization hangs up with:
>>>> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
>>>> blocks (10.7 GB/10.0 GiB)
>>>> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
>>>> bypass (smmu disabled) iova:0xfffff040 is_write=1
>>>>
>>>> You can reproduce that with any kernel/rootfs, but if you want a
>>>> simple recipe (you need podman and qemu-user-static):
>>>> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
>>>> $ cd qemu-linux-stack
>>>> $ ./build_kernel.sh
>>>> $ ./build_rootfs.sh
>>>> $ /path/to/qemu-system-aarch64 \
>>>> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
>>>> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>>>>
>>>> Looking more closely,
>>>> we reach SMMU_TRANS_DISABLE, because iommu_idx associated is 1.
>>>> This values comes from smmuv3_attrs_to_sec_sid, by reading
>>>> attrs.space, which is ArmSS_Secure.
>>>>
>>>> The problem is that it's impossible to have anything Secure given that
>>>> all the code above runs in NonSecure world.
>>>> After investigation, the original value read from attrs.space has not
>>>> been set anywhere, and is just the default zero-initialized value
>>>> coming from pci_msi_trigger. It happens that it defaults to SEC_SID_S,
>>>> which probably matches your use case with hafnium, but it's an happy
>>>> accident.
>>>>
>>>> Looking at the SMMU spec, I understand that SEC_SID is configured for
>>>> each stream, and can change dynamically.
>>>> On the opposite, a StreamID is fixed and derived from PCI bus and slot
>>>> for a given device.
>>>>
>>>> Thus, I think we are missing some logic here.
>>>> I'm still trying to understand where the SEC_SID should come from
>>>> initially.
>>>> "The association between a device and the Security state of the
>>>> programming interface is a system-defined property."
>>>> Does it mean we should be able to set a QEMU property for any device?
>>>>
>>>> Does anyone familiar with this has some idea?
>>>>
>>>> As well, we should check the SEC_SID found based on
>>>> SMMU_S_IDR1.SECURE_IMPL.
>>>> 3.10.1 StreamID Security state (SEC_SID)
>>>> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
>>>> StreamID, and either:
>>>> • A SEC_SID identifier with a value of 0.
>>>> • No SEC_SID identifer, and SEC_SID is implicitly treated as 0.
>>>> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
>>>> StreamID, and a SEC_SID identifier.
>>>>
>>>> Regards,
>>>> Pierrick
>>>
>>> Thank you very much for your detailed review and in-depth analysis, and
>>> for pointing out this critical issue that breaks the Linux boot.
>>>
>>>
>>> To be transparent, my initial approach was indeed tailored to my
>>> specific test case, where I was effectively hardcoding the device's
>>> StreamID to represent it's a so-called Secure device in my self testing.
>>> This clearly isn't a general solution.
>>>
>>
>> It's definitely not a bad approach, and it's a good way to exercise
>> the secure path. It would have been caught by some of QEMU functional
>> tests anyway, so it's not a big deal.
>>
>> A solution would be to define the secure attribute as a property of
>> the PCI device, and query that to identify sec_sid accordingly.
>> As you'll see in 3.10.1 StreamID Security state (SEC_SID), "Whether a
>> stream is under Secure control or not is a different property to the
>> target PA space of a transaction.", so we definitely should *not* do
>> any funky stuff depending on which address is accessed.
>
>
> Thank you for the encouraging and very constructive feedback.
>
>
> Your proposed solution—to define the security attribute as a property on
> the PCIDevice—is the perfect way forward to resolve Secure device issue.
> Perhaps we can implement this functionality in V4 as shown in the
> following code snippet?
>
> 1) define sec_sid in include/hw/pci/pci_device.h:
>
> struct PCIDevice {
> DeviceState qdev;
> ......
> /* Add SEC_SID property for SMMU security context */
> uint8_t sec_sid; /* 0 = Non-secure, 1 = Secure*/
> ......
>
> }
>
>
> 2) then add sec-sid field in the Property of PCI in hw/pci/pci.c:
>
> static const Property pci_props[] = {
> ......
> /* SEC_SID property: 0=NS, 1=S */
> DEFINE_PROP_UINT8("sec-sid", PCIDevice, sec_sid, 0),
>
> ......
>
> };
>
>
> 3) get sec-sid in smmu_find_add_as(hw/arm/smmu-common.c):
>
> static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
> {
> SMMUState *s = opaque;
> SMMUPciBus *sbus = g_hash_table_lookup(s->smmu_pcibus_by_busptr, bus);
> SMMUDevice *sdev;
> static unsigned int index;
> ......
> sdev = sbus->pbdev[devfn];
> if (!sdev) {
>
> PCIDevice *pcidev;
> pcidev = pci_find_device(bus, pci_bus_num(bus), devfn);
> if (pcidev) {
> /* Get sec_sid which is originally from QEMU options.
> * For example:
> * qemu-system-aarch64 \
> * -drive if=none,file=/nvme.img,format=raw,id=nvme0 \
> * -device nvme,drive=nvme0,serial=deadbeef,sec-sid=1
> *
> * This NVMe device will have sec_sid = 1.
> */
> sdev->sec_sid = pcidev->sec_sid;
> } else {
> /* Default to Non-secure if device not found */
> sdev->sec_sid = 0;
> }
>
> ......
>
> }
>
> The SEC_SID of device will be passed from QEMU options to PCIDevice and
> then SMMUDevice. This would allow the SMMU model to perform the
> necessary checks against both the security context of the DMA access and
> the SMMU_S_IDR1.SECURE_IMPL capability bit.
>
>
> Is this a reasonable implementation approach? I would greatly appreciate
> any feedback.
>
Yes, this looks reasonable.
However, for Realm support, the sec_sid is not static and can be
changed dynamically by the device itself, after interaction with the RMM
firmware, following the TDISP protocol (the T bit is set in PCI
transactions, which we don't model in QEMU).
See 3.9.4 SMMU interactions with the PCIe fields T, TE and XT.
This T bit state is currently stored outside of QEMU, as we use the
external program spdm-emu for all that. So we implemented a very hacky
solution that detects when the device is set in "Realm" mode based on a
config prefetch, using this new sec_sid:
https://github.com/pbo-linaro/qemu/commit/c4db6f72c26ac52739814621ce018e65869f934b
It uses a dictionary simply because of a lifetime issue, as the config
seems to be emitted before the first access of the device in our case. I
didn't dig further. In all cases, it's ugly, not a reference, and just a
work in progress to show you how we will need to update it.
All that is to say that even though we can provide this static property
for devices that are always secure, the design will have to support
dynamic changes as well. Not a big deal, and you can keep this out of
scope for now; we'll change that later when adding Realm support.
As long as we have something that does not break the non-secure use case
while allowing secure devices, I think we're good!
>
>>
>> By curiosity, which kind of secure device are you using? Is it one of
>> the device available upstream, or a specific one you have in your fork?
>
>
> I just use IGB NIC for test with Hafnium + OP-TEE software stack.
>
>
>>
>>>
>>> You've raised a crucial architectural point that I hadn't fully
>>> considered: how a standard "Normal World" PCIe device should be properly
>>> associated with the "Secure World". To be honest, I didn't have a clear
>>> answer for this, so your feedback is a perfect opportunity for me to dig
>>> in and understand this area correctly.
>>>
>> It took time for us to reach that question also.
>> Our current understanding is that SEC_SID == Realm is identified by
>> bits on pci side (part of TDISP protocol), and that secure devices are
>> indeed hardcoded somewhere.
>>
>> We asked this question to some Arm folks working on this area, to
>> confirm Secure devices are supposed to be defined this way.
>
>
> Thank you also for sharing the invaluable context from your team's
> internal discussions and your outreach to the Arm experts. This
> clarification directly inspired my new proposal as described above.
>
We didn't receive an answer, but after looking more at how the secure
world is modelled (with a separate address space), it makes sense to
have this description built into the firmware or the platform itself.
I'm not familiar with Hafnium, but I don't expect any device to
transition from the Non-secure to the Secure world in a way similar to
the Realm approach.
>
> I will proceed with this plan for the v4 patch set. Thanks again for
> your mentorship and for helping to clarify the correct path forward.
>
Thanks for your series, it's definitely a great base to work on Realm
support, and we'll be glad to publish this later, after secure support
is merged. It will be your turn to review and give feedback if you want :)
> Best regards,
>
> Tao
>
Regards,
Pierrick
* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-20 22:55 ` Pierrick Bouvier
@ 2025-10-21 3:51 ` Tao Tang
2025-10-22 21:23 ` Pierrick Bouvier
0 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-21 3:51 UTC (permalink / raw)
To: Pierrick Bouvier, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
Hi Pierrick,
On 2025/10/21 06:55, Pierrick Bouvier wrote:
> On 2025-10-20 01:44, Tao Tang wrote:
>> Hi Pierrick,
>>
>> On 2025/10/16 15:04, Pierrick Bouvier wrote:
>>> On 10/15/25 11:37 PM, Tao Tang wrote:
>>>> Hi Pierrick:
>>>>
>>>> On 2025/10/15 08:02, Pierrick Bouvier wrote:
>>>>> Hi Tao,
>>>>>
>>>>> On 10/12/25 8:15 AM, Tao Tang wrote:
>>>>>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to
>>>>>> select
>>>>>> the programming interface. To support future extensions like RME,
>>>>>> which
>>>>>> defines four security states (Non-secure, Secure, Realm, and Root),
>>>>>> the
>>>>>> QEMU model must cleanly separate these contexts for all operations.
>>>>>>
>>>>>> This commit leverages the generic iommu_index to represent this
>>>>>> security context. The core IOMMU layer now uses the SMMU's
>>>>>> .attrs_to_index
>>>>>> callback to map a transaction's ARMSecuritySpace attribute to the
>>>>>> corresponding iommu_index.
>>>>>>
>>>>>> This index is then passed down to smmuv3_translate and used
>>>>>> throughout
>>>>>> the model to select the correct register bank and processing logic.
>>>>>> This
>>>>>> makes the iommu_index the clear QEMU equivalent of the architectural
>>>>>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>>>>>
>>>>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>>>>> ---
>>>>>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>>>>>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>>>>> index c9c742c80b..b44859540f 100644
>>>>>> --- a/hw/arm/smmuv3.c
>>>>>> +++ b/hw/arm/smmuv3.c
>>>>>> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo
>>>>>> *event, hwaddr iova)
>>>>>> }
>>>>>> }
>>>>>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>>>>>> +{
>>>>>> + switch (attrs.space) {
>>>>>> + case ARMSS_Secure:
>>>>>> + return SMMU_SEC_SID_S;
>>>>>> + case ARMSS_NonSecure:
>>>>>> + default:
>>>>>> + return SMMU_SEC_SID_NS;
>>>>>> + }
>>>>>> +}
>>>>>> +
>>>>>> +/*
>>>>>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>>>>>> + * iommu_idx = 0: Non-secure transactions
>>>>>> + * iommu_idx = 1: Secure transactions
>>>>>> + *
>>>>>> + * The iommu_idx parameter effectively implements the SEC_SID
>>>>>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>>>>>> specification,
>>>>>> + * which allows the SMMU to differentiate between different
>>>>>> security
>>>>>> state
>>>>>> + * transactions at the hardware level.
>>>>>> + */
>>>>>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>>>>>> MemTxAttrs attrs)
>>>>>> +{
>>>>>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>>>>>> +}
>>>>>> +
>>>>>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>>>>>> +{
>>>>>> + /* Support 2 IOMMU indexes for now: NS/S */
>>>>>> + return SMMU_SEC_SID_NUM;
>>>>>> +}
>>>>>> +
>>>>>> /* Entry point to SMMU, does everything. */
>>>>>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr,
>>>>>> hwaddr
>>>>>> addr,
>>>>>> IOMMUAccessFlags flag, int
>>>>>> iommu_idx)
>>>>>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>>>>>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>>>>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>>>>>> SMMUv3State *s = sdev->smmu;
>>>>>> uint32_t sid = smmu_get_sid(sdev);
>>>>>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>>>>>> + SMMUSecSID sec_sid = iommu_idx;
>>>>>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>>>>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>>>>>> .sid = sid,
>>>>>> @@ -2540,6 +2572,8 @@ static void
>>>>>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>>>>> imrc->translate = smmuv3_translate;
>>>>>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>>>>>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>>>>>> + imrc->num_indexes = smmuv3_num_indexes;
>>>>>> }
>>>>>> static const TypeInfo smmuv3_type_info = {
>>>>>
>>>>> I noticed that this commit breaks boot of a simple Linux kernel. It
>>>>> was already the case with v2, and it seems there is a deeper issue.
>>>>>
>>>>> Virtio drive initialization hangs up with:
>>>>> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
>>>>> blocks (10.7 GB/10.0 GiB)
>>>>> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
>>>>> bypass (smmu disabled) iova:0xfffff040 is_write=1
>>>>>
>>>>> You can reproduce that with any kernel/rootfs, but if you want a
>>>>> simple recipe (you need podman and qemu-user-static):
>>>>> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
>>>>> $ cd qemu-linux-stack
>>>>> $ ./build_kernel.sh
>>>>> $ ./build_rootfs.sh
>>>>> $ /path/to/qemu-system-aarch64 \
>>>>> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
>>>>> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>>>>>
>>>>> Looking more closely,
>>>>> we reach SMMU_TRANS_DISABLE, because iommu_idx associated is 1.
>>>>> This values comes from smmuv3_attrs_to_sec_sid, by reading
>>>>> attrs.space, which is ArmSS_Secure.
>>>>>
>>>>> The problem is that it's impossible to have anything Secure given
>>>>> that
>>>>> all the code above runs in NonSecure world.
>>>>> After investigation, the original value read from attrs.space has not
>>>>> been set anywhere, and is just the default zero-initialized value
>>>>> coming from pci_msi_trigger. It happens that it defaults to
>>>>> SEC_SID_S,
>>>>> which probably matches your use case with hafnium, but it's an happy
>>>>> accident.
>>>>>
>>>>> Looking at the SMMU spec, I understand that SEC_SID is configured for
>>>>> each stream, and can change dynamically.
>>>>> On the opposite, a StreamID is fixed and derived from PCI bus and
>>>>> slot
>>>>> for a given device.
>>>>>
>>>>> Thus, I think we are missing some logic here.
>>>>> I'm still trying to understand where the SEC_SID should come from
>>>>> initially.
>>>>> "The association between a device and the Security state of the
>>>>> programming interface is a system-defined property."
>>>>> Does it mean we should be able to set a QEMU property for any device?
>>>>>
>>>>> Does anyone familiar with this has some idea?
>>>>>
>>>>> As well, we should check the SEC_SID found based on
>>>>> SMMU_S_IDR1.SECURE_IMPL.
>>>>> 3.10.1 StreamID Security state (SEC_SID)
>>>>> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
>>>>> StreamID, and either:
>>>>> • A SEC_SID identifier with a value of 0.
>>>>> • No SEC_SID identifer, and SEC_SID is implicitly treated as 0.
>>>>> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
>>>>> StreamID, and a SEC_SID identifier.
>>>>>
>>>>> Regards,
>>>>> Pierrick
>>>>
>>>> Thank you very much for your detailed review and in-depth analysis,
>>>> and
>>>> for pointing out this critical issue that breaks the Linux boot.
>>>>
>>>>
>>>> To be transparent, my initial approach was indeed tailored to my
>>>> specific test case, where I was effectively hardcoding the device's
>>>> StreamID to represent it's a so-called Secure device in my self
>>>> testing.
>>>> This clearly isn't a general solution.
>>>>
>>>
>>> It's definitely not a bad approach, and it's a good way to exercise
>>> the secure path. It would have been caught by some of QEMU functional
>>> tests anyway, so it's not a big deal.
>>>
>>> A solution would be to define the secure attribute as a property of
>>> the PCI device, and query that to identify sec_sid accordingly.
>>> As you'll see in 3.10.1 StreamID Security state (SEC_SID), "Whether a
>>> stream is under Secure control or not is a different property to the
>>> target PA space of a transaction.", so we definitely should *not* do
>>> any funky stuff depending on which address is accessed.
>>
>>
>> Thank you for the encouraging and very constructive feedback.
>>
>>
>> Your proposed solution—to define the security attribute as a property on
>> the PCIDevice—is the perfect way forward to resolve Secure device issue.
>> Perhaps we can implement this functionality in V4 as shown in the
>> following code snippet?
>>
>> 1) define sec_sid in include/hw/pci/pci_device.h:
>>
>> struct PCIDevice {
>> DeviceState qdev;
>> ......
>> /* Add SEC_SID property for SMMU security context */
>> uint8_t sec_sid; /* 0 = Non-secure, 1 = Secure*/
>> ......
>>
>> }
>>
>>
>> 2) then add sec-sid field in the Property of PCI in hw/pci/pci.c:
>>
>> static const Property pci_props[] = {
>> ......
>> /* SEC_SID property: 0=NS, 1=S */
>> DEFINE_PROP_UINT8("sec-sid", PCIDevice, sec_sid, 0),
>>
>> ......
>>
>> };
>>
>>
>> 3) get sec-sid in smmu_find_add_as(hw/arm/smmu-common.c):
>>
>> static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int
>> devfn)
>> {
>> SMMUState *s = opaque;
>> SMMUPciBus *sbus =
>> g_hash_table_lookup(s->smmu_pcibus_by_busptr, bus);
>> SMMUDevice *sdev;
>> static unsigned int index;
>> ......
>> sdev = sbus->pbdev[devfn];
>> if (!sdev) {
>>
>> PCIDevice *pcidev;
>> pcidev = pci_find_device(bus, pci_bus_num(bus), devfn);
>> if (pcidev) {
>> /* Get sec_sid which is originally from QEMU options.
>> * For example:
>> * qemu-system-aarch64 \
>> * -drive if=none,file=/nvme.img,format=raw,id=nvme0 \
>> * -device nvme,drive=nvme0,serial=deadbeef,sec-sid=1
>> *
>> * This NVMe device will have sec_sid = 1.
>> */
>> sdev->sec_sid = pcidev->sec_sid;
>> } else {
>> /* Default to Non-secure if device not found */
>> sdev->sec_sid = 0;
>> }
>>
>> ......
>>
>> }
>>
>> The SEC_SID of device will be passed from QEMU options to PCIDevice and
>> then SMMUDevice. This would allow the SMMU model to perform the
>> necessary checks against both the security context of the DMA access and
>> the SMMU_S_IDR1.SECURE_IMPL capability bit.
>>
>>
>> Is this a reasonable implementation approach? I would greatly appreciate
>> any feedback.
>>
>
> Yes, this looks reasonable.
> However, for Realm support, the sec_sid is not static, and can be
> changed dynamically by the device itself, after interaction with RMM
> firmware, following TDISP protocol (T bit is set in PCI transactions,
> which we don't model in QEMU).
>
> See 3.9.4 SMMU interactions with the PCIe fields T, TE and XT.
>
> This T bit state is currently stored out of QEMU, as we use the
> external program spdm-emu for all that. So, we implemented a very
> hacky solution detecting when this device it set in "Realm" mode based
> on config prefetch with this new sec_sid:
> https://github.com/pbo-linaro/qemu/commit/c4db6f72c26ac52739814621ce018e65869f934b
>
> It uses a dictionnary simply because of lifetime issue, as the config
> seems to be emitted before the first access of the device in our case.
> I didn't dig further. It all cases, it's ugly, not a reference, and
> just a work in progress to show you how we need to update it.
Thank you for the detailed feedback and for approving the new direction.
I'm glad we are aligned on the path forward.
It's interesting that you mention the dynamic update mechanism for
Realm. In my early testing, before submitting the RFC patches, I
actually experimented with a similar dynamic approach. I defined a
custom QMP interface to directly modify a bitmap structure inside the
SMMU model, which was used to dynamically mark or unmark a StreamID as
secure. The lookup logic was something like this:
bool smmuv3_sid_is_secure(uint32_t sid)
{
    uint32_t chunk_idx;
    uint32_t bit_idx;
    SidBitmapChunk *chunk;

    chunk_idx = SID_INDEX(sid);
    bit_idx = SID_BIT(sid);

    /* Check if we have a bitmap for this chunk */
    chunk = sid_manager.chunks[chunk_idx];
    if (!chunk) {
        return false;
    }

    /* Fast bitmap lookup */
    return test_bit(bit_idx, chunk->bitmap);
}
Ultimately, I didn't include this in the patch series because managing
the security context completely separately from the device felt a bit
strange and wasn't a clean architectural fit.
>
> All that to said that even though we can provide this static property
> for devices that are always secure, the design will have to support
> dynamic changes as well. Not a big deal, and you can keep this out of
> scope for now, we'll change that later when adding Realms support.
> As long as we have something that does not break non secure use case
> while allowing secure devices, I think we're good!
I completely agree. I will proceed with an initial version based on the
static property approach we discussed, ensuring that the Non-secure
regression tests pass. The behavior will be as follows, as you suggested
in the previous thread:
- For a Non-secure device (sec_sid=0), all accesses will be treated as
Non-secure.
- For a Secure device (sec_sid=1), if MemTxAttrs.space is Secure and
SMMU_S_IDR1.SECURE_IMPL == 1, the access will be Secure.
- For a Secure device (sec_sid=1), if MemTxAttrs.space is Non-secure,
the access will remain Non-secure.
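In code, the three rules above could look roughly like this (a sketch
only; resolve_sec_sid and its parameter names are hypothetical, not the
actual v4 implementation):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum { SMMU_SEC_SID_NS = 0, SMMU_SEC_SID_S = 1 } SMMUSecSID;

/*
 * Sketch of the rules above:
 *  - dev_sec_sid:  the static "sec-sid" PCI property (0 = NS, 1 = S)
 *  - attrs_secure: whether MemTxAttrs.space of the DMA access is Secure
 *  - secure_impl:  SMMU_S_IDR1.SECURE_IMPL
 */
static SMMUSecSID resolve_sec_sid(uint8_t dev_sec_sid, bool attrs_secure,
                                  bool secure_impl)
{
    /* Rule 1: a Non-secure device is always treated as Non-secure. */
    if (!dev_sec_sid) {
        return SMMU_SEC_SID_NS;
    }
    /* Rule 2: Secure device, Secure attrs, and SECURE_IMPL == 1. */
    if (attrs_secure && secure_impl) {
        return SMMU_SEC_SID_S;
    }
    /* Rule 3: otherwise the access remains Non-secure. */
    return SMMU_SEC_SID_NS;
}
```

The result would then be used as the iommu_idx, so the existing
attrs_to_index path keeps working unchanged for Non-secure devices.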
>>
>>>
>>> By curiosity, which kind of secure device are you using? Is it one of
>>> the device available upstream, or a specific one you have in your fork?
>>
>>
>> I just use IGB NIC for test with Hafnium + OP-TEE software stack.
>>
>>
>>>
>>>>
>>>> You've raised a crucial architectural point that I hadn't fully
>>>> considered: how a standard "Normal World" PCIe device should be
>>>> properly
>>>> associated with the "Secure World". To be honest, I didn't have a
>>>> clear
>>>> answer for this, so your feedback is a perfect opportunity for me
>>>> to dig
>>>> in and understand this area correctly.
>>>>
>>> It took time for us to reach that question also.
>>> Our current understanding is that SEC_SID == Realm is identified by
>>> bits on pci side (part of TDISP protocol), and that secure devices are
>>> indeed hardcoded somewhere.
>>>
>>> We asked this question to some Arm folks working on this area, to
>>> confirm Secure devices are supposed to be defined this way.
>>
>>
>> Thank you also for sharing the invaluable context from your team's
>> internal discussions and your outreach to the Arm experts. This
>> clarification directly inspired my new proposal as described above.
>>
>
> We didn't receive an answer, but after looking more at how secure
> world is modelled (with separate address space), it makes sense to
> have this description built in in the firmware or the platform itself.
>
> I'm not familiar with Hafnium, but I don't expect any device to
> transition from Non secure to Secure world similar to Realm approach.
This has been a long-standing question of mine as well. Your intuition
makes perfect sense to me: if a device can switch between Secure and
Non-secure states at will, it seems physically insecure. A device could
be compromised while in the Non-secure state and then carry that
compromised state into the Secure world, which would undermine the very
protections the SMMU aims to provide. As for the Realm state, I am not
very familiar with it either, but I will definitely study the PCIe and
SMMU specifications to better understand the mechanisms that ensure
security during dynamic transitions between the Realm and Non-Realm states.
>
>>
>> I will proceed with this plan for the v4 patch set. Thanks again for
>> your mentorship and for helping to clarify the correct path forward.
>>
>
> Thanks for your series, it's definitely a great base to work on Realm
> support, and we'll be glad to publish this later, after secure support
> is merged. It will be your turn to review and give feedback if you
> want :)
Thank you for the kind offer! I would be absolutely thrilled to
contribute in any way I can to the future Realm support work and I look
forward to it.
Yours,
Tao
* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-21 3:51 ` Tao Tang
@ 2025-10-22 21:23 ` Pierrick Bouvier
2025-10-23 9:02 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Pierrick Bouvier @ 2025-10-22 21:23 UTC (permalink / raw)
To: Tao Tang, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
On 2025-10-20 20:51, Tao Tang wrote:
> Hi Pierrick,
>
> On 2025/10/21 06:55, Pierrick Bouvier wrote:
>> On 2025-10-20 01:44, Tao Tang wrote:
>>> Hi Pierrick,
>>>
>>> On 2025/10/16 15:04, Pierrick Bouvier wrote:
>>>> On 10/15/25 11:37 PM, Tao Tang wrote:
>>>>> Hi Pierrick:
>>>>>
>>>>> On 2025/10/15 08:02, Pierrick Bouvier wrote:
>>>>>> Hi Tao,
>>>>>>
>>>>>> On 10/12/25 8:15 AM, Tao Tang wrote:
>>>>>>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to
>>>>>>> select
>>>>>>> the programming interface. To support future extensions like RME,
>>>>>>> which
>>>>>>> defines four security states (Non-secure, Secure, Realm, and Root),
>>>>>>> the
>>>>>>> QEMU model must cleanly separate these contexts for all operations.
>>>>>>>
>>>>>>> This commit leverages the generic iommu_index to represent this
>>>>>>> security context. The core IOMMU layer now uses the SMMU's
>>>>>>> .attrs_to_index
>>>>>>> callback to map a transaction's ARMSecuritySpace attribute to the
>>>>>>> corresponding iommu_index.
>>>>>>>
>>>>>>> This index is then passed down to smmuv3_translate and used
>>>>>>> throughout
>>>>>>> the model to select the correct register bank and processing logic.
>>>>>>> This
>>>>>>> makes the iommu_index the clear QEMU equivalent of the architectural
>>>>>>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>>>>>>
>>>>>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>>>>>> ---
>>>>>>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>>>>>>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>>>>>> index c9c742c80b..b44859540f 100644
>>>>>>> --- a/hw/arm/smmuv3.c
>>>>>>> +++ b/hw/arm/smmuv3.c
>>>>>>> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo
>>>>>>> *event, hwaddr iova)
>>>>>>> }
>>>>>>> }
>>>>>>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>>>>>>> +{
>>>>>>> + switch (attrs.space) {
>>>>>>> + case ARMSS_Secure:
>>>>>>> + return SMMU_SEC_SID_S;
>>>>>>> + case ARMSS_NonSecure:
>>>>>>> + default:
>>>>>>> + return SMMU_SEC_SID_NS;
>>>>>>> + }
>>>>>>> +}
>>>>>>> +
>>>>>>> +/*
>>>>>>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>>>>>>> + * iommu_idx = 0: Non-secure transactions
>>>>>>> + * iommu_idx = 1: Secure transactions
>>>>>>> + *
>>>>>>> + * The iommu_idx parameter effectively implements the SEC_SID
>>>>>>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>>>>>>> specification,
>>>>>>> + * which allows the SMMU to differentiate between different
>>>>>>> security
>>>>>>> state
>>>>>>> + * transactions at the hardware level.
>>>>>>> + */
>>>>>>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>>>>>>> MemTxAttrs attrs)
>>>>>>> +{
>>>>>>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>>>>>>> +{
>>>>>>> + /* Support 2 IOMMU indexes for now: NS/S */
>>>>>>> + return SMMU_SEC_SID_NUM;
>>>>>>> +}
>>>>>>> +
>>>>>>> /* Entry point to SMMU, does everything. */
>>>>>>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr,
>>>>>>> hwaddr
>>>>>>> addr,
>>>>>>> IOMMUAccessFlags flag, int
>>>>>>> iommu_idx)
>>>>>>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>>>>>>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>>>>>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>>>>>>> SMMUv3State *s = sdev->smmu;
>>>>>>> uint32_t sid = smmu_get_sid(sdev);
>>>>>>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>>>>>>> + SMMUSecSID sec_sid = iommu_idx;
>>>>>>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>>>>>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>>>>>>> .sid = sid,
>>>>>>> @@ -2540,6 +2572,8 @@ static void
>>>>>>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>>>>>> imrc->translate = smmuv3_translate;
>>>>>>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>>>>>>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>>>>>>> + imrc->num_indexes = smmuv3_num_indexes;
>>>>>>> }
>>>>>>> static const TypeInfo smmuv3_type_info = {
>>>>>>
>>>>>> I noticed that this commit breaks boot of a simple Linux kernel. It
>>>>>> was already the case with v2, and it seems there is a deeper issue.
>>>>>>
>>>>>> Virtio drive initialization hangs up with:
>>>>>> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
>>>>>> blocks (10.7 GB/10.0 GiB)
>>>>>> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
>>>>>> bypass (smmu disabled) iova:0xfffff040 is_write=1
>>>>>>
>>>>>> You can reproduce that with any kernel/rootfs, but if you want a
>>>>>> simple recipe (you need podman and qemu-user-static):
>>>>>> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
>>>>>> $ cd qemu-linux-stack
>>>>>> $ ./build_kernel.sh
>>>>>> $ ./build_rootfs.sh
>>>>>> $ /path/to/qemu-system-aarch64 \
>>>>>> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
>>>>>> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>>>>>>
>>>>>> Looking more closely,
>>>>>> we reach SMMU_TRANS_DISABLE, because the associated iommu_idx is 1.
>>>>>> This value comes from smmuv3_attrs_to_sec_sid, by reading
>>>>>> attrs.space, which is ARMSS_Secure.
>>>>>>
>>>>>> The problem is that it's impossible to have anything Secure given
>>>>>> that
>>>>>> all the code above runs in NonSecure world.
>>>>>> After investigation, the original value read from attrs.space has not
>>>>>> been set anywhere, and is just the default zero-initialized value
>>>>>> coming from pci_msi_trigger. It happens that it defaults to
>>>>>> SEC_SID_S,
>>>>>> which probably matches your use case with Hafnium, but it's a
>>>>>> happy accident.
>>>>>>
>>>>>> Looking at the SMMU spec, I understand that SEC_SID is configured for
>>>>>> each stream, and can change dynamically.
>>>>>> On the opposite, a StreamID is fixed and derived from PCI bus and
>>>>>> slot
>>>>>> for a given device.
>>>>>>
>>>>>> Thus, I think we are missing some logic here.
>>>>>> I'm still trying to understand where the SEC_SID should come from
>>>>>> initially.
>>>>>> "The association between a device and the Security state of the
>>>>>> programming interface is a system-defined property."
>>>>>> Does it mean we should be able to set a QEMU property for any device?
>>>>>>
>>>>>> Does anyone familiar with this have some idea?
>>>>>>
>>>>>> As well, we should check the SEC_SID found based on
>>>>>> SMMU_S_IDR1.SECURE_IMPL.
>>>>>> 3.10.1 StreamID Security state (SEC_SID)
>>>>>> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
>>>>>> StreamID, and either:
>>>>>> • A SEC_SID identifier with a value of 0.
>>>>>> • No SEC_SID identifier, and SEC_SID is implicitly treated as 0.
>>>>>> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
>>>>>> StreamID, and a SEC_SID identifier.
>>>>>>
>>>>>> Regards,
>>>>>> Pierrick
>>>>>
>>>>> Thank you very much for your detailed review and in-depth analysis,
>>>>> and
>>>>> for pointing out this critical issue that breaks the Linux boot.
>>>>>
>>>>>
>>>>> To be transparent, my initial approach was indeed tailored to my
>>>>> specific test case, where I was effectively hardcoding the device's
>>>>> StreamID to mark it as a so-called Secure device in my own
>>>>> testing.
>>>>> This clearly isn't a general solution.
>>>>>
>>>>
>>>> It's definitely not a bad approach, and it's a good way to exercise
>>>> the secure path. It would have been caught by some of QEMU functional
>>>> tests anyway, so it's not a big deal.
>>>>
>>>> A solution would be to define the secure attribute as a property of
>>>> the PCI device, and query that to identify sec_sid accordingly.
>>>> As you'll see in 3.10.1 StreamID Security state (SEC_SID), "Whether a
>>>> stream is under Secure control or not is a different property to the
>>>> target PA space of a transaction.", so we definitely should *not* do
>>>> any funky stuff depending on which address is accessed.
>>>
>>>
>>> Thank you for the encouraging and very constructive feedback.
>>>
>>>
>>> Your proposed solution—to define the security attribute as a property on
>>> the PCIDevice—is the perfect way forward to resolve the Secure device issue.
>>> Perhaps we can implement this functionality in V4 as shown in the
>>> following code snippet?
>>>
>>> 1) define sec_sid in include/hw/pci/pci_device.h:
>>>
>>> struct PCIDevice {
>>> DeviceState qdev;
>>> ......
>>> /* Add SEC_SID property for SMMU security context */
>>> uint8_t sec_sid; /* 0 = Non-secure, 1 = Secure*/
>>> ......
>>>
>>> }
>>>
>>>
>>> 2) then add sec-sid field in the Property of PCI in hw/pci/pci.c:
>>>
>>> static const Property pci_props[] = {
>>> ......
>>> /* SEC_SID property: 0=NS, 1=S */
>>> DEFINE_PROP_UINT8("sec-sid", PCIDevice, sec_sid, 0),
>>>
>>> ......
>>>
>>> };
>>>
>>>
>>> 3) get sec-sid in smmu_find_add_as(hw/arm/smmu-common.c):
>>>
>>> static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int
>>> devfn)
>>> {
>>> SMMUState *s = opaque;
>>> SMMUPciBus *sbus =
>>> g_hash_table_lookup(s->smmu_pcibus_by_busptr, bus);
>>> SMMUDevice *sdev;
>>> static unsigned int index;
>>> ......
>>> sdev = sbus->pbdev[devfn];
>>> if (!sdev) {
>>>
>>> PCIDevice *pcidev;
>>> pcidev = pci_find_device(bus, pci_bus_num(bus), devfn);
>>> if (pcidev) {
>>> /* Get sec_sid which is originally from QEMU options.
>>> * For example:
>>> * qemu-system-aarch64 \
>>> * -drive if=none,file=/nvme.img,format=raw,id=nvme0 \
>>> * -device nvme,drive=nvme0,serial=deadbeef,sec-sid=1
>>> *
>>> * This NVMe device will have sec_sid = 1.
>>> */
>>> sdev->sec_sid = pcidev->sec_sid;
>>> } else {
>>> /* Default to Non-secure if device not found */
>>> sdev->sec_sid = 0;
>>> }
>>>
>>> ......
>>>
>>> }
>>>
>>> The SEC_SID of device will be passed from QEMU options to PCIDevice and
>>> then SMMUDevice. This would allow the SMMU model to perform the
>>> necessary checks against both the security context of the DMA access and
>>> the SMMU_S_IDR1.SECURE_IMPL capability bit.
>>>
>>>
>>> Is this a reasonable implementation approach? I would greatly appreciate
>>> any feedback.
>>>
>>
>> Yes, this looks reasonable.
>> However, for Realm support, the sec_sid is not static, and can be
>> changed dynamically by the device itself, after interaction with RMM
>> firmware, following TDISP protocol (T bit is set in PCI transactions,
>> which we don't model in QEMU).
>>
>> See 3.9.4 SMMU interactions with the PCIe fields T, TE and XT.
>>
>> This T bit state is currently stored out of QEMU, as we use the
>> external program spdm-emu for all that. So, we implemented a very
>> hacky solution detecting when this device is set in "Realm" mode based
>> on config prefetch with this new sec_sid:
>> https://github.com/pbo-linaro/qemu/commit/c4db6f72c26ac52739814621ce018e65869f934b
>>
>> It uses a dictionary simply because of a lifetime issue, as the config
>> seems to be emitted before the first access of the device in our case.
>> I didn't dig further. In all cases, it's ugly, not a reference, and
>> just a work in progress to show you how we need to update it.
>
>
> Thank you for the detailed feedback and for approving the new direction.
> I'm glad we are aligned on the path forward.
>
>
> It's interesting that you mention the dynamic update mechanism for
> Realm. In my early testing, before submitting the RFC patches, I
> actually experimented with a similar dynamic approach. I defined a
> custom QMP interface to directly modify a bitmap structure inside the
> SMMU model, which was used to dynamically mark or unmark a StreamID as
> secure. The lookup logic was something like this:
>
> bool smmuv3_sid_is_secure(uint32_t sid)
> {
> uint32_t chunk_idx;
> uint32_t bit_idx;
> SidBitmapChunk *chunk;
>
> chunk_idx = SID_INDEX(sid);
> bit_idx = SID_BIT(sid);
>
> /* Check if we have a bitmap for this chunk */
> chunk = sid_manager.chunks[chunk_idx];
> if (!chunk) {
> return false;
> }
> /* Fast bitmap lookup */
> return test_bit(bit_idx, chunk->bitmap);
> }
>
>
> Ultimately, I didn't include this in the patch series because managing
> the security context completely separately from the device felt a bit
> strange and wasn't a clean architectural fit.
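The SID_INDEX/SID_BIT helpers used in the lookup above are not shown in the snippet; a plausible definition, assuming each chunk's bitmap covers a 2^16-StreamID range (purely illustrative, not taken from the experiment), would be:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: split a 32-bit StreamID into a chunk index and a
 * bit offset within that chunk's bitmap. The real chunk granularity
 * used in the experiment above is not shown. */
#define SID_CHUNK_SHIFT 16
#define SID_CHUNK_SIZE  (1u << SID_CHUNK_SHIFT)
#define SID_INDEX(sid)  ((uint32_t)(sid) >> SID_CHUNK_SHIFT)
#define SID_BIT(sid)    ((uint32_t)(sid) & (SID_CHUNK_SIZE - 1))
```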
>
>>
>> All that to say: even though we can provide this static property
>> for devices that are always secure, the design will have to support
>> dynamic changes as well. Not a big deal, and you can keep this out of
>> scope for now, we'll change that later when adding Realms support.
>> As long as we have something that does not break non secure use case
>> while allowing secure devices, I think we're good!
>
>
> I completely agree. I will proceed with an initial version based on the
> static property approach we discussed, ensuring that Non-secure
> regression tests pass. The behavior will be as follows, as you suggested
> in the previous thread:
>
> - For a Non-secure device (sec_sid=0), all accesses will be treated as
> Non-secure.
>
> - For a Secure device (sec_sid=1), if MemTxAttrs.space is Secure and
> SMMU_S_IDR1.SECURE_IMPL == 1, the access will be Secure.
>
> - For a Secure device (sec_sid=1), if MemTxAttrs.space is Non-secure,
> the access will remain Non-secure.
>
That's good.
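To make the agreed behavior concrete, here is a minimal sketch of how the effective SEC_SID could be resolved from the static device property, the transaction's target space, and SMMU_S_IDR1.SECURE_IMPL. The helper name and exact wiring are hypothetical; the series routes this through smmuv3_attrs_to_index and the per-device property instead:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative names, not taken verbatim from the series. */
enum { SMMU_SEC_SID_NS = 0, SMMU_SEC_SID_S = 1 };

/*
 * dev_sec_sid:  static "sec-sid" property of the device (0 = NS, 1 = S)
 * space_secure: whether MemTxAttrs.space of the DMA access is Secure
 * secure_impl:  value of SMMU_S_IDR1.SECURE_IMPL in the model
 */
static int resolve_sec_sid(int dev_sec_sid, bool space_secure,
                           bool secure_impl)
{
    /* A Non-secure device always translates through the NS interface. */
    if (dev_sec_sid != SMMU_SEC_SID_S) {
        return SMMU_SEC_SID_NS;
    }
    /* A Secure device is treated as Secure only when the access itself
     * targets the Secure space and the Secure interface is implemented;
     * otherwise it falls back to Non-secure. */
    return (space_secure && secure_impl) ? SMMU_SEC_SID_S : SMMU_SEC_SID_NS;
}
```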
>
>>>
>>>>
>>>> By curiosity, which kind of secure device are you using? Is it one of
>>>> the device available upstream, or a specific one you have in your fork?
>>>
>>>
>>> I just use IGB NIC for test with Hafnium + OP-TEE software stack.
>>>
>>>
>>>>
>>>>>
>>>>> You've raised a crucial architectural point that I hadn't fully
>>>>> considered: how a standard "Normal World" PCIe device should be
>>>>> properly
>>>>> associated with the "Secure World". To be honest, I didn't have a
>>>>> clear
>>>>> answer for this, so your feedback is a perfect opportunity for me
>>>>> to dig
>>>>> in and understand this area correctly.
>>>>>
>>>> It took time for us to reach that question also.
>>>> Our current understanding is that SEC_SID == Realm is identified by
>>>> bits on pci side (part of TDISP protocol), and that secure devices are
>>>> indeed hardcoded somewhere.
>>>>
>>>> We asked this question to some Arm folks working on this area, to
>>>> confirm Secure devices are supposed to be defined this way.
>>>
>>>
>>> Thank you also for sharing the invaluable context from your team's
>>> internal discussions and your outreach to the Arm experts. This
>>> clarification directly inspired my new proposal as described above.
>>>
>>
>> We didn't receive an answer, but after looking more at how secure
>> world is modelled (with separate address space), it makes sense to
>> have this description built in in the firmware or the platform itself.
>>
>> I'm not familiar with Hafnium, but I don't expect any device to
>> transition from the Non-secure to the Secure world, similar to the Realm approach.
>
>
> This has been a long-standing question of mine as well. Your intuition
> makes perfect sense to me: if a device can switch between Secure and
> Non-secure states at will, it seems physically insecure. A device could
> be compromised while in the Non-secure state and then carry that
> compromised state into the Secure World, which would undermine the very
> protections the SMMU aims to provide. For the Realm state, I am not very
> familiar with Realm either, but I will definitely study the PCIe and
> SMMU specifications to better understand the mechanisms that ensure
> security during these dynamic transitions between Realm and Non-Realm state.
>
For completeness, we received an answer about this topic, and it
confirms what we said on this thread.
"Prior to RME and RME-DA, PCIe devices were only assignable to
Non-secure state (SEC_SID=NS) as there was no architected method to
allow them to be assigned to Secure state.
RME-DA and TDISP allow a PCIe device to have SEC_SID that is either
Realm or Non-secure."
> How SEC_SID == Secure is identified?
"This would be set by the system, usually by statically marking certain
on-chip devices with SEC_SID=Secure."
>
>>
>>>
>>> I will proceed with this plan for the v4 patch set. Thanks again for
>>> your mentorship and for helping to clarify the correct path forward.
>>>
>>
>> Thanks for your series, it's definitely a great base to work on Realm
>> support, and we'll be glad to publish this later, after secure support
>> is merged. It will be your turn to review and give feedback if you
>> want :)
>
>
> Thank you for the kind offer! I would be absolutely thrilled to
> contribute in any way I can to the future Realm support work and I look
> forward to it.
>
I will ping you when the time comes!
>
> Yours,
>
> Tao
>
Regards,
Pierrick
^ permalink raw reply [flat|nested] 67+ messages in thread

* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-22 21:23 ` Pierrick Bouvier
@ 2025-10-23 9:02 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-10-23 9:02 UTC (permalink / raw)
To: Pierrick Bouvier, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
Hi Pierrick,
On 2025/10/23 05:23, Pierrick Bouvier wrote:
> On 2025-10-20 20:51, Tao Tang wrote:
>> Hi Pierrick,
>>
>> On 2025/10/21 06:55, Pierrick Bouvier wrote:
>>> On 2025-10-20 01:44, Tao Tang wrote:
>>>> Hi Pierrick,
>>>>
>>>> On 2025/10/16 15:04, Pierrick Bouvier wrote:
>>>>> On 10/15/25 11:37 PM, Tao Tang wrote:
>>>>>> Hi Pierrick:
>>>>>>
>>>>>> On 2025/10/15 08:02, Pierrick Bouvier wrote:
>>>>>>> Hi Tao,
>>>>>>>
>>>>>>> On 10/12/25 8:15 AM, Tao Tang wrote:
>>>>>>>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to
>>>>>>>> select
>>>>>>>> the programming interface. To support future extensions like RME,
>>>>>>>> which
>>>>>>>> defines four security states (Non-secure, Secure, Realm, and
>>>>>>>> Root),
>>>>>>>> the
>>>>>>>> QEMU model must cleanly separate these contexts for all
>>>>>>>> operations.
>>>>>>>>
>>>>>>>> This commit leverages the generic iommu_index to represent this
>>>>>>>> security context. The core IOMMU layer now uses the SMMU's
>>>>>>>> .attrs_to_index
>>>>>>>> callback to map a transaction's ARMSecuritySpace attribute to the
>>>>>>>> corresponding iommu_index.
>>>>>>>>
>>>>>>>> This index is then passed down to smmuv3_translate and used
>>>>>>>> throughout
>>>>>>>> the model to select the correct register bank and processing
>>>>>>>> logic.
>>>>>>>> This
>>>>>>>> makes the iommu_index the clear QEMU equivalent of the
>>>>>>>> architectural
>>>>>>>> SEC_SID, cleanly separating the contexts for all subsequent
>>>>>>>> lookups.
>>>>>>>>
>>>>>>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>>>>>>> ---
>>>>>>>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>>>>>>>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>>>>>>> index c9c742c80b..b44859540f 100644
>>>>>>>> --- a/hw/arm/smmuv3.c
>>>>>>>> +++ b/hw/arm/smmuv3.c
>>>>>>>> @@ -1080,6 +1080,38 @@ static void
>>>>>>>> smmuv3_fixup_event(SMMUEventInfo
>>>>>>>> *event, hwaddr iova)
>>>>>>>> }
>>>>>>>> }
>>>>>>>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>>>>>>>> +{
>>>>>>>> + switch (attrs.space) {
>>>>>>>> + case ARMSS_Secure:
>>>>>>>> + return SMMU_SEC_SID_S;
>>>>>>>> + case ARMSS_NonSecure:
>>>>>>>> + default:
>>>>>>>> + return SMMU_SEC_SID_NS;
>>>>>>>> + }
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +/*
>>>>>>>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>>>>>>>> + * iommu_idx = 0: Non-secure transactions
>>>>>>>> + * iommu_idx = 1: Secure transactions
>>>>>>>> + *
>>>>>>>> + * The iommu_idx parameter effectively implements the SEC_SID
>>>>>>>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>>>>>>>> specification,
>>>>>>>> + * which allows the SMMU to differentiate between different
>>>>>>>> security
>>>>>>>> state
>>>>>>>> + * transactions at the hardware level.
>>>>>>>> + */
>>>>>>>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>>>>>>>> MemTxAttrs attrs)
>>>>>>>> +{
>>>>>>>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>>>>>>>> +{
>>>>>>>> + /* Support 2 IOMMU indexes for now: NS/S */
>>>>>>>> + return SMMU_SEC_SID_NUM;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> /* Entry point to SMMU, does everything. */
>>>>>>>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr,
>>>>>>>> hwaddr
>>>>>>>> addr,
>>>>>>>> IOMMUAccessFlags flag, int
>>>>>>>> iommu_idx)
>>>>>>>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>>>>>>>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>>>>>>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>>>>>>>> SMMUv3State *s = sdev->smmu;
>>>>>>>> uint32_t sid = smmu_get_sid(sdev);
>>>>>>>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>>>>>>>> + SMMUSecSID sec_sid = iommu_idx;
>>>>>>>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>>>>>>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>>>>>>>> .sid = sid,
>>>>>>>> @@ -2540,6 +2572,8 @@ static void
>>>>>>>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>>>>>>> imrc->translate = smmuv3_translate;
>>>>>>>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>>>>>>>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>>>>>>>> + imrc->num_indexes = smmuv3_num_indexes;
>>>>>>>> }
>>>>>>>> static const TypeInfo smmuv3_type_info = {
>>>>>>>
>>>>>>> I noticed that this commit breaks boot of a simple Linux kernel. It
>>>>>>> was already the case with v2, and it seems there is a deeper issue.
>>>>>>>
>>>>>>> Virtio drive initialization hangs up with:
>>>>>>> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
>>>>>>> blocks (10.7 GB/10.0 GiB)
>>>>>>> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
>>>>>>> bypass (smmu disabled) iova:0xfffff040 is_write=1
>>>>>>>
>>>>>>> You can reproduce that with any kernel/rootfs, but if you want a
>>>>>>> simple recipe (you need podman and qemu-user-static):
>>>>>>> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
>>>>>>> $ cd qemu-linux-stack
>>>>>>> $ ./build_kernel.sh
>>>>>>> $ ./build_rootfs.sh
>>>>>>> $ /path/to/qemu-system-aarch64 \
>>>>>>> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
>>>>>>> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>>>>>>>
>>>>>>> Looking more closely,
>>>>>>> we reach SMMU_TRANS_DISABLE, because the associated iommu_idx is 1.
>>>>>>> This value comes from smmuv3_attrs_to_sec_sid, by reading
>>>>>>> attrs.space, which is ARMSS_Secure.
>>>>>>>
>>>>>>> The problem is that it's impossible to have anything Secure given
>>>>>>> that
>>>>>>> all the code above runs in NonSecure world.
>>>>>>> After investigation, the original value read from attrs.space
>>>>>>> has not
>>>>>>> been set anywhere, and is just the default zero-initialized value
>>>>>>> coming from pci_msi_trigger. It happens that it defaults to
>>>>>>> SEC_SID_S,
>>>>>>> which probably matches your use case with Hafnium, but it's a
>>>>>>> happy accident.
>>>>>>>
>>>>>>> Looking at the SMMU spec, I understand that SEC_SID is
>>>>>>> configured for
>>>>>>> each stream, and can change dynamically.
>>>>>>> On the opposite, a StreamID is fixed and derived from PCI bus and
>>>>>>> slot
>>>>>>> for a given device.
>>>>>>>
>>>>>>> Thus, I think we are missing some logic here.
>>>>>>> I'm still trying to understand where the SEC_SID should come from
>>>>>>> initially.
>>>>>>> "The association between a device and the Security state of the
>>>>>>> programming interface is a system-defined property."
>>>>>>> Does it mean we should be able to set a QEMU property for any
>>>>>>> device?
>>>>>>>
>>>>>>> Does anyone familiar with this have some idea?
>>>>>>>
>>>>>>> As well, we should check the SEC_SID found based on
>>>>>>> SMMU_S_IDR1.SECURE_IMPL.
>>>>>>> 3.10.1 StreamID Security state (SEC_SID)
>>>>>>> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
>>>>>>> StreamID, and either:
>>>>>>> • A SEC_SID identifier with a value of 0.
>>>>>>> • No SEC_SID identifier, and SEC_SID is implicitly treated as 0.
>>>>>>> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
>>>>>>> StreamID, and a SEC_SID identifier.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Pierrick
>>>>>>
>>>>>> Thank you very much for your detailed review and in-depth analysis,
>>>>>> and
>>>>>> for pointing out this critical issue that breaks the Linux boot.
>>>>>>
>>>>>>
>>>>>> To be transparent, my initial approach was indeed tailored to my
>>>>>> specific test case, where I was effectively hardcoding the device's
>>>>>> StreamID to mark it as a so-called Secure device in my own
>>>>>> testing.
>>>>>> This clearly isn't a general solution.
>>>>>>
>>>>>
>>>>> It's definitely not a bad approach, and it's a good way to exercise
>>>>> the secure path. It would have been caught by some of QEMU functional
>>>>> tests anyway, so it's not a big deal.
>>>>>
>>>>> A solution would be to define the secure attribute as a property of
>>>>> the PCI device, and query that to identify sec_sid accordingly.
>>>>> As you'll see in 3.10.1 StreamID Security state (SEC_SID), "Whether a
>>>>> stream is under Secure control or not is a different property to the
>>>>> target PA space of a transaction.", so we definitely should *not* do
>>>>> any funky stuff depending on which address is accessed.
>>>>
>>>>
>>>> Thank you for the encouraging and very constructive feedback.
>>>>
>>>>
>>>> Your proposed solution—to define the security attribute as a
>>>> property on
>>>> the PCIDevice—is the perfect way forward to resolve the Secure device
>>>> issue.
>>>> Perhaps we can implement this functionality in V4 as shown in the
>>>> following code snippet?
>>>>
>>>> 1) define sec_sid in include/hw/pci/pci_device.h:
>>>>
>>>> struct PCIDevice {
>>>> DeviceState qdev;
>>>> ......
>>>> /* Add SEC_SID property for SMMU security context */
>>>> uint8_t sec_sid; /* 0 = Non-secure, 1 = Secure*/
>>>> ......
>>>>
>>>> }
>>>>
>>>>
>>>> 2) then add sec-sid field in the Property of PCI in hw/pci/pci.c:
>>>>
>>>> static const Property pci_props[] = {
>>>> ......
>>>> /* SEC_SID property: 0=NS, 1=S */
>>>> DEFINE_PROP_UINT8("sec-sid", PCIDevice, sec_sid, 0),
>>>>
>>>> ......
>>>>
>>>> };
>>>>
>>>>
>>>> 3) get sec-sid in smmu_find_add_as(hw/arm/smmu-common.c):
>>>>
>>>> static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int
>>>> devfn)
>>>> {
>>>> SMMUState *s = opaque;
>>>> SMMUPciBus *sbus =
>>>> g_hash_table_lookup(s->smmu_pcibus_by_busptr, bus);
>>>> SMMUDevice *sdev;
>>>> static unsigned int index;
>>>> ......
>>>> sdev = sbus->pbdev[devfn];
>>>> if (!sdev) {
>>>>
>>>> PCIDevice *pcidev;
>>>> pcidev = pci_find_device(bus, pci_bus_num(bus), devfn);
>>>> if (pcidev) {
>>>> /* Get sec_sid which is originally from QEMU options.
>>>> * For example:
>>>> * qemu-system-aarch64 \
>>>> * -drive if=none,file=/nvme.img,format=raw,id=nvme0 \
>>>> * -device nvme,drive=nvme0,serial=deadbeef,sec-sid=1
>>>> *
>>>> * This NVMe device will have sec_sid = 1.
>>>> */
>>>> sdev->sec_sid = pcidev->sec_sid;
>>>> } else {
>>>> /* Default to Non-secure if device not found */
>>>> sdev->sec_sid = 0;
>>>> }
>>>>
>>>> ......
>>>>
>>>> }
>>>>
>>>> The SEC_SID of device will be passed from QEMU options to PCIDevice
>>>> and
>>>> then SMMUDevice. This would allow the SMMU model to perform the
>>>> necessary checks against both the security context of the DMA
>>>> access and
>>>> the SMMU_S_IDR1.SECURE_IMPL capability bit.
>>>>
>>>>
>>>> Is this a reasonable implementation approach? I would greatly
>>>> appreciate
>>>> any feedback.
>>>>
>>>
>>> Yes, this looks reasonable.
>>> However, for Realm support, the sec_sid is not static, and can be
>>> changed dynamically by the device itself, after interaction with RMM
>>> firmware, following TDISP protocol (T bit is set in PCI transactions,
>>> which we don't model in QEMU).
>>>
>>> See 3.9.4 SMMU interactions with the PCIe fields T, TE and XT.
>>>
>>> This T bit state is currently stored out of QEMU, as we use the
>>> external program spdm-emu for all that. So, we implemented a very
>>> hacky solution detecting when this device is set in "Realm" mode based
>>> on config prefetch with this new sec_sid:
>>> https://github.com/pbo-linaro/qemu/commit/c4db6f72c26ac52739814621ce018e65869f934b
>>>
>>>
>>> It uses a dictionary simply because of a lifetime issue, as the config
>>> seems to be emitted before the first access of the device in our case.
>>> I didn't dig further. In all cases, it's ugly, not a reference, and
>>> just a work in progress to show you how we need to update it.
>>
>>
>> Thank you for the detailed feedback and for approving the new direction.
>> I'm glad we are aligned on the path forward.
>>
>>
>> It's interesting that you mention the dynamic update mechanism for
>> Realm. In my early testing, before submitting the RFC patches, I
>> actually experimented with a similar dynamic approach. I defined a
>> custom QMP interface to directly modify a bitmap structure inside the
>> SMMU model, which was used to dynamically mark or unmark a StreamID as
>> secure. The lookup logic was something like this:
>>
>> bool smmuv3_sid_is_secure(uint32_t sid)
>> {
>> uint32_t chunk_idx;
>> uint32_t bit_idx;
>> SidBitmapChunk *chunk;
>>
>> chunk_idx = SID_INDEX(sid);
>> bit_idx = SID_BIT(sid);
>>
>> /* Check if we have a bitmap for this chunk */
>> chunk = sid_manager.chunks[chunk_idx];
>> if (!chunk) {
>> return false;
>> }
>> /* Fast bitmap lookup */
>> return test_bit(bit_idx, chunk->bitmap);
>> }
>>
>>
>> Ultimately, I didn't include this in the patch series because managing
>> the security context completely separately from the device felt a bit
>> strange and wasn't a clean architectural fit.
>>
>>>
>>> All that to say: even though we can provide this static property
>>> for devices that are always secure, the design will have to support
>>> dynamic changes as well. Not a big deal, and you can keep this out of
>>> scope for now, we'll change that later when adding Realms support.
>>> As long as we have something that does not break non secure use case
>>> while allowing secure devices, I think we're good!
>>
>>
>> I completely agree. I will proceed with an initial version based on the
>> static property approach we discussed, ensuring that Non-secure
>> regression tests pass. The behavior will be as follows, as you suggested
>> in the previous thread:
>>
>> - For a Non-secure device (sec_sid=0), all accesses will be treated as
>> Non-secure.
>>
>> - For a Secure device (sec_sid=1), if MemTxAttrs.space is Secure and
>> SMMU_S_IDR1.SECURE_IMPL == 1, the access will be Secure.
>>
>> - For a Secure device (sec_sid=1), if MemTxAttrs.space is Non-secure,
>> the access will remain Non-secure.
>>
>
> That's good.
>
>>
>>>>
>>>>>
>>>>> By curiosity, which kind of secure device are you using? Is it one of
>>>>> the device available upstream, or a specific one you have in your
>>>>> fork?
>>>>
>>>>
>>>> I just use IGB NIC for test with Hafnium + OP-TEE software stack.
>>>>
>>>>
>>>>>
>>>>>>
>>>>>> You've raised a crucial architectural point that I hadn't fully
>>>>>> considered: how a standard "Normal World" PCIe device should be
>>>>>> properly
>>>>>> associated with the "Secure World". To be honest, I didn't have a
>>>>>> clear
>>>>>> answer for this, so your feedback is a perfect opportunity for me
>>>>>> to dig
>>>>>> in and understand this area correctly.
>>>>>>
>>>>> It took time for us to reach that question also.
>>>>> Our current understanding is that SEC_SID == Realm is identified by
>>>>> bits on pci side (part of TDISP protocol), and that secure devices
>>>>> are
>>>>> indeed hardcoded somewhere.
>>>>>
>>>>> We asked this question to some Arm folks working on this area, to
>>>>> confirm Secure devices are supposed to be defined this way.
>>>>
>>>>
>>>> Thank you also for sharing the invaluable context from your team's
>>>> internal discussions and your outreach to the Arm experts. This
>>>> clarification directly inspired my new proposal as described above.
>>>>
>>>
>>> We didn't receive an answer, but after looking more at how secure
>>> world is modelled (with separate address space), it makes sense to
>>> have this description built into the firmware or the platform itself.
>>>
>>> I'm not familiar with Hafnium, but I don't expect any device to
>>> transition from the Non-secure to the Secure world, similar to the Realm approach.
>>
>>
>> This has been a long-standing question of mine as well. Your intuition
>> makes perfect sense to me: if a device can switch between Secure and
>> Non-secure states at will, it seems physically insecure. A device could
>> be compromised while in the Non-secure state and then carry that
>> compromised state into the Secure World, which would undermine the very
>> protections the SMMU aims to provide. For the Realm state, I am not very
>> familiar with Realm either, but I will definitely study the PCIe and
>> SMMU specifications to better understand the mechanisms that ensure
>> security during these dynamic transitions between Realm and Non-Realm
>> state.
>>
>
> For completeness, we received an answer about this topic, and it
> confirms what we said on this thread.
>
> "Prior to RME and RME-DA, PCIe devices were only assignable to
> Non-secure state (SEC_SID=NS) as there was no architected method to
> allow them to be assigned to Secure state.
> RME-DA and TDISP allow a PCIe device to have SEC_SID that is either
> Realm or Non-secure."
>
> > How SEC_SID == Secure is identified?
>
> "This would be set by the system, usually by statically marking
> certain on-chip devices with SEC_SID=Secure."
Thank you for sharing the official confirmation from Arm. It's great to
have this final validation for our approach.
With the architectural path now clear, I will proceed with implementing
the v4 patch series as we've discussed.
Thanks again for your guidance and collaboration.
Best regards,
Tao
>
>>
>>>
>>>>
>>>> I will proceed with this plan for the v4 patch set. Thanks again for
>>>> your mentorship and for helping to clarify the correct path forward.
>>>>
>>>
>>> Thanks for your series, it's definitely a great base to work on Realm
>>> support, and we'll be glad to publish this later, after secure support
>>> is merged. It will be your turn to review and give feedback if you
>>> want :)
>>
>>
>> Thank you for the kind offer! I would be absolutely thrilled to
>> contribute in any way I can to the future Realm support work and I look
>> forward to it.
>>
>
> I will ping you on time for this!
>
>>
>> Yours,
>>
>> Tao
>>
> Regards,
> Pierrick
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-10-20 8:44 ` Tao Tang
2025-10-20 22:55 ` Pierrick Bouvier
@ 2025-12-04 15:05 ` Eric Auger
2025-12-05 10:54 ` Tao Tang
1 sibling, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-04 15:05 UTC (permalink / raw)
To: Tao Tang, Pierrick Bouvier, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
On 10/20/25 10:44 AM, Tao Tang wrote:
> Hi Pierrick,
>
> On 2025/10/16 15:04, Pierrick Bouvier wrote:
>> On 10/15/25 11:37 PM, Tao Tang wrote:
>>> Hi Pierrick:
>>>
>>> On 2025/10/15 08:02, Pierrick Bouvier wrote:
>>>> Hi Tao,
>>>>
>>>> On 10/12/25 8:15 AM, Tao Tang wrote:
>>>>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to
>>>>> select
>>>>> the programming interface. To support future extensions like RME,
>>>>> which
>>>>> defines four security states (Non-secure, Secure, Realm, and
>>>>> Root), the
>>>>> QEMU model must cleanly separate these contexts for all operations.
>>>>>
>>>>> This commit leverages the generic iommu_index to represent this
>>>>> security context. The core IOMMU layer now uses the SMMU's
>>>>> .attrs_to_index
>>>>> callback to map a transaction's ARMSecuritySpace attribute to the
>>>>> corresponding iommu_index.
>>>>>
>>>>> This index is then passed down to smmuv3_translate and used
>>>>> throughout
>>>>> the model to select the correct register bank and processing
>>>>> logic. This
>>>>> makes the iommu_index the clear QEMU equivalent of the architectural
>>>>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>>>>
>>>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>>>> ---
>>>>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>>>>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>>>> index c9c742c80b..b44859540f 100644
>>>>> --- a/hw/arm/smmuv3.c
>>>>> +++ b/hw/arm/smmuv3.c
>>>>> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo
>>>>> *event, hwaddr iova)
>>>>> }
>>>>> }
>>>>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>>>>> +{
>>>>> + switch (attrs.space) {
>>>>> + case ARMSS_Secure:
>>>>> + return SMMU_SEC_SID_S;
>>>>> + case ARMSS_NonSecure:
>>>>> + default:
>>>>> + return SMMU_SEC_SID_NS;
>>>>> + }
>>>>> +}
>>>>> +
>>>>> +/*
>>>>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>>>>> + * iommu_idx = 0: Non-secure transactions
>>>>> + * iommu_idx = 1: Secure transactions
>>>>> + *
>>>>> + * The iommu_idx parameter effectively implements the SEC_SID
>>>>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>>>>> specification,
>>>>> + * which allows the SMMU to differentiate between different security
>>>>> state
>>>>> + * transactions at the hardware level.
>>>>> + */
>>>>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>>>>> MemTxAttrs attrs)
>>>>> +{
>>>>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>>>>> +}
>>>>> +
>>>>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>>>>> +{
>>>>> + /* Support 2 IOMMU indexes for now: NS/S */
>>>>> + return SMMU_SEC_SID_NUM;
>>>>> +}
>>>>> +
>>>>> /* Entry point to SMMU, does everything. */
>>>>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr,
>>>>> hwaddr
>>>>> addr,
>>>>> IOMMUAccessFlags flag, int
>>>>> iommu_idx)
>>>>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>>>>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>>>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>>>>> SMMUv3State *s = sdev->smmu;
>>>>> uint32_t sid = smmu_get_sid(sdev);
>>>>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>>>>> + SMMUSecSID sec_sid = iommu_idx;
>>>>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>>>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>>>>> .sid = sid,
>>>>> @@ -2540,6 +2572,8 @@ static void
>>>>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>>>> imrc->translate = smmuv3_translate;
>>>>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>>>>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>>>>> + imrc->num_indexes = smmuv3_num_indexes;
>>>>> }
>>>>> static const TypeInfo smmuv3_type_info = {
>>>>
>>>> I noticed that this commit breaks boot of a simple Linux kernel. It
>>>> was already the case with v2, and it seems there is a deeper issue.
>>>>
>>>> Virtio drive initialization hangs up with:
>>>> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
>>>> blocks (10.7 GB/10.0 GiB)
>>>> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
>>>> bypass (smmu disabled) iova:0xfffff040 is_write=1
>>>>
>>>> You can reproduce that with any kernel/rootfs, but if you want a
>>>> simple recipe (you need podman and qemu-user-static):
>>>> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
>>>> $ cd qemu-linux-stack
>>>> $ ./build_kernel.sh
>>>> $ ./build_rootfs.sh
>>>> $ /path/to/qemu-system-aarch64 \
>>>> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
>>>> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>>>>
>>>> Looking more closely,
>>>> we reach SMMU_TRANS_DISABLE, because the associated iommu_idx is 1.
>>>> This value comes from smmuv3_attrs_to_sec_sid, by reading
>>>> attrs.space, which is ARMSS_Secure.
>>>>
>>>> The problem is that it's impossible to have anything Secure given that
>>>> all the code above runs in the Non-secure world.
>>>> After investigation, the original value read from attrs.space has not
>>>> been set anywhere, and is just the default zero-initialized value
>>>> coming from pci_msi_trigger. It happens that it defaults to SEC_SID_S,
>>>> which probably matches your use case with Hafnium, but it's a happy
>>>> accident.
>>>>
>>>> Looking at the SMMU spec, I understand that SEC_SID is configured for
>>>> each stream, and can change dynamically.
>>>> By contrast, a StreamID is fixed and derived from the PCI bus and slot
>>>> for a given device.
>>>>
>>>> Thus, I think we are missing some logic here.
>>>> I'm still trying to understand where the SEC_SID should come from
>>>> initially.
>>>> "The association between a device and the Security state of the
>>>> programming interface is a system-defined property."
>>>> Does it mean we should be able to set a QEMU property for any device?
>>>>
>>>> Does anyone familiar with this have some idea?
>>>>
>>>> As well, we should check the SEC_SID found based on
>>>> SMMU_S_IDR1.SECURE_IMPL.
>>>> 3.10.1 StreamID Security state (SEC_SID)
>>>> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
>>>> StreamID, and either:
>>>> • A SEC_SID identifier with a value of 0.
>>>> • No SEC_SID identifier, and SEC_SID is implicitly treated as 0.
>>>> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
>>>> StreamID, and a SEC_SID identifier.
>>>>
>>>> Regards,
>>>> Pierrick
>>>
>>> Thank you very much for your detailed review and in-depth analysis, and
>>> for pointing out this critical issue that breaks the Linux boot.
>>>
>>>
>>> To be transparent, my initial approach was indeed tailored to my
>>> specific test case, where I was effectively hardcoding the device's
>>> StreamID to mark it as a so-called Secure device in my own testing.
>>> This clearly isn't a general solution.
>>>
>>
>> It's definitely not a bad approach, and it's a good way to exercise
>> the secure path. It would have been caught by some of QEMU's functional
>> tests anyway, so it's not a big deal.
>>
>> A solution would be to define the secure attribute as a property of
>> the PCI device, and query that to identify sec_sid accordingly.
>> As you'll see in 3.10.1 StreamID Security state (SEC_SID), "Whether a
>> stream is under Secure control or not is a different property to the
>> target PA space of a transaction.", so we definitely should *not* do
>> any funky stuff depending on which address is accessed.
>
>
> Thank you for the encouraging and very constructive feedback.
>
>
> Your proposed solution—to define the security attribute as a property
> on the PCIDevice—is the perfect way forward to resolve the Secure
> device issue. Perhaps we can implement this functionality in V4 as
> shown in the following code snippet?
>
> 1) define sec_sid in include/hw/pci/pci_device.h:
>
> struct PCIDevice {
> DeviceState qdev;
> ......
> /* Add SEC_SID property for SMMU security context */
> uint8_t sec_sid; /* 0 = Non-secure, 1 = Secure*/
> ......
>
> }
>
>
> 2) then add sec-sid field in the Property of PCI in hw/pci/pci.c:
>
> static const Property pci_props[] = {
> ......
> /* SEC_SID property: 0=NS, 1=S */
> DEFINE_PROP_UINT8("sec-sid", PCIDevice, sec_sid, 0),
>
> ......
>
> };
As this impacts the PCIe subsystem, I would encourage you to submit
that change in a separate pre-requisite series. This needs to be
reviewed by Michael and other PCIe specialists.
Thanks
Eric
>
>
> 3) get sec-sid in smmu_find_add_as(hw/arm/smmu-common.c):
>
> static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int
> devfn)
> {
> SMMUState *s = opaque;
> SMMUPciBus *sbus = g_hash_table_lookup(s->smmu_pcibus_by_busptr,
> bus);
> SMMUDevice *sdev;
> static unsigned int index;
> ......
> sdev = sbus->pbdev[devfn];
> if (!sdev) {
>
> PCIDevice *pcidev;
> pcidev = pci_find_device(bus, pci_bus_num(bus), devfn);
> if (pcidev) {
> /* Get sec_sid which is originally from QEMU options.
> * For example:
> * qemu-system-aarch64 \
> * -drive if=none,file=/nvme.img,format=raw,id=nvme0 \
> * -device nvme,drive=nvme0,serial=deadbeef,sec-sid=1
> *
> * This NVMe device will have sec_sid = 1.
> */
> sdev->sec_sid = pcidev->sec_sid;
> } else {
> /* Default to Non-secure if device not found */
> sdev->sec_sid = 0;
> }
>
> ......
>
> }
>
> The SEC_SID of device will be passed from QEMU options to PCIDevice
> and then SMMUDevice. This would allow the SMMU model to perform the
> necessary checks against both the security context of the DMA access
> and the SMMU_S_IDR1.SECURE_IMPL capability bit.
>
>
> Is this a reasonable implementation approach? I would greatly
> appreciate any feedback.
>
>
>>
>> By curiosity, which kind of secure device are you using? Is it one of
>> the device available upstream, or a specific one you have in your fork?
>
>
> I just use IGB NIC for test with Hafnium + OP-TEE software stack.
>
>
>>
>>>
>>> You've raised a crucial architectural point that I hadn't fully
>>> considered: how a standard "Normal World" PCIe device should be
>>> properly associated with the "Secure World". To be honest, I didn't
>>> have a clear answer for this, so your feedback is a perfect
>>> opportunity for me to dig in and understand this area correctly.
>>>
>> It took time for us to reach that question also.
>> Our current understanding is that SEC_SID == Realm is identified by
>> bits on the PCI side (part of the TDISP protocol), and that secure devices
>> are indeed hardcoded somewhere.
>>
>> We asked this question to some Arm folks working on this area, to
>> confirm Secure devices are supposed to be defined this way.
>
>
> Thank you also for sharing the invaluable context from your team's
> internal discussions and your outreach to the Arm experts. This
> clarification directly inspired my new proposal as described above.
>
>
> I will proceed with this plan for the v4 patch set. Thanks again for
> your mentorship and for helping to clarify the correct path forward.
>
> Best regards,
>
> Tao
>
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-12-04 15:05 ` Eric Auger
@ 2025-12-05 10:54 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-05 10:54 UTC (permalink / raw)
To: eric.auger, Pierrick Bouvier, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Philippe Mathieu-Daudé,
Jean-Philippe Brucker, Mostafa Saleh, Mathieu Poirier
Hi Eric,
On 2025/12/4 23:05, Eric Auger wrote:
>
> On 10/20/25 10:44 AM, Tao Tang wrote:
>> Hi Pierrick,
>>
>> On 2025/10/16 15:04, Pierrick Bouvier wrote:
>>> On 10/15/25 11:37 PM, Tao Tang wrote:
>>>> Hi Pierrick:
>>>>
>>>> On 2025/10/15 08:02, Pierrick Bouvier wrote:
>>>>> Hi Tao,
>>>>>
>>>>> On 10/12/25 8:15 AM, Tao Tang wrote:
>>>>>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to
>>>>>> select
>>>>>> the programming interface. To support future extensions like RME,
>>>>>> which
>>>>>> defines four security states (Non-secure, Secure, Realm, and
>>>>>> Root), the
>>>>>> QEMU model must cleanly separate these contexts for all operations.
>>>>>>
>>>>>> This commit leverages the generic iommu_index to represent this
>>>>>> security context. The core IOMMU layer now uses the SMMU's
>>>>>> .attrs_to_index
>>>>>> callback to map a transaction's ARMSecuritySpace attribute to the
>>>>>> corresponding iommu_index.
>>>>>>
>>>>>> This index is then passed down to smmuv3_translate and used
>>>>>> throughout
>>>>>> the model to select the correct register bank and processing
>>>>>> logic. This
>>>>>> makes the iommu_index the clear QEMU equivalent of the architectural
>>>>>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>>>>>
>>>>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>>>>> ---
>>>>>> hw/arm/smmuv3.c | 36 +++++++++++++++++++++++++++++++++++-
>>>>>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>>>>> index c9c742c80b..b44859540f 100644
>>>>>> --- a/hw/arm/smmuv3.c
>>>>>> +++ b/hw/arm/smmuv3.c
>>>>>> @@ -1080,6 +1080,38 @@ static void smmuv3_fixup_event(SMMUEventInfo
>>>>>> *event, hwaddr iova)
>>>>>> }
>>>>>> }
>>>>>> +static SMMUSecSID smmuv3_attrs_to_sec_sid(MemTxAttrs attrs)
>>>>>> +{
>>>>>> + switch (attrs.space) {
>>>>>> + case ARMSS_Secure:
>>>>>> + return SMMU_SEC_SID_S;
>>>>>> + case ARMSS_NonSecure:
>>>>>> + default:
>>>>>> + return SMMU_SEC_SID_NS;
>>>>>> + }
>>>>>> +}
>>>>>> +
>>>>>> +/*
>>>>>> + * ARM IOMMU index mapping (implements SEC_SID from ARM SMMU):
>>>>>> + * iommu_idx = 0: Non-secure transactions
>>>>>> + * iommu_idx = 1: Secure transactions
>>>>>> + *
>>>>>> + * The iommu_idx parameter effectively implements the SEC_SID
>>>>>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>>>>>> specification,
>>>>>> + * which allows the SMMU to differentiate between different security
>>>>>> state
>>>>>> + * transactions at the hardware level.
>>>>>> + */
>>>>>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu,
>>>>>> MemTxAttrs attrs)
>>>>>> +{
>>>>>> + return (int)smmuv3_attrs_to_sec_sid(attrs);
>>>>>> +}
>>>>>> +
>>>>>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>>>>>> +{
>>>>>> + /* Support 2 IOMMU indexes for now: NS/S */
>>>>>> + return SMMU_SEC_SID_NUM;
>>>>>> +}
>>>>>> +
>>>>>> /* Entry point to SMMU, does everything. */
>>>>>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr,
>>>>>> hwaddr
>>>>>> addr,
>>>>>> IOMMUAccessFlags flag, int
>>>>>> iommu_idx)
>>>>>> @@ -1087,7 +1119,7 @@ static IOMMUTLBEntry
>>>>>> smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>>>>> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
>>>>>> SMMUv3State *s = sdev->smmu;
>>>>>> uint32_t sid = smmu_get_sid(sdev);
>>>>>> - SMMUSecSID sec_sid = SMMU_SEC_SID_NS;
>>>>>> + SMMUSecSID sec_sid = iommu_idx;
>>>>>> SMMUv3RegBank *bank = smmuv3_bank(s, sec_sid);
>>>>>> SMMUEventInfo event = {.type = SMMU_EVT_NONE,
>>>>>> .sid = sid,
>>>>>> @@ -2540,6 +2572,8 @@ static void
>>>>>> smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>>>>> imrc->translate = smmuv3_translate;
>>>>>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>>>>>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>>>>>> + imrc->num_indexes = smmuv3_num_indexes;
>>>>>> }
>>>>>> static const TypeInfo smmuv3_type_info = {
>>>>> I noticed that this commit breaks boot of a simple Linux kernel. It
>>>>> was already the case with v2, and it seems there is a deeper issue.
>>>>>
>>>>> Virtio drive initialization hangs up with:
>>>>> [ 9.421906] virtio_blk virtio2: [vda] 20971520 512-byte logical
>>>>> blocks (10.7 GB/10.0 GiB)
>>>>> smmuv3_translate_disable smmuv3-iommu-memory-region-24-3 sid=0x18
>>>>> bypass (smmu disabled) iova:0xfffff040 is_write=1
>>>>>
>>>>> You can reproduce that with any kernel/rootfs, but if you want a
>>>>> simple recipe (you need podman and qemu-user-static):
>>>>> $ git clone https://github.com/pbo-linaro/qemu-linux-stack
>>>>> $ cd qemu-linux-stack
>>>>> $ ./build_kernel.sh
>>>>> $ ./build_rootfs.sh
>>>>> $ /path/to/qemu-system-aarch64 \
>>>>> -nographic -M virt,iommu=smmuv3 -cpu max -kernel out/Image.gz \
>>>>> -append "root=/dev/vda rw" out/host.ext4 -trace 'smmuv3*'
>>>>>
>>>>> Looking more closely,
>>>>> we reach SMMU_TRANS_DISABLE, because the associated iommu_idx is 1.
>>>>> This value comes from smmuv3_attrs_to_sec_sid, by reading
>>>>> attrs.space, which is ARMSS_Secure.
>>>>>
>>>>> The problem is that it's impossible to have anything Secure given that
>>>>> all the code above runs in the Non-secure world.
>>>>> After investigation, the original value read from attrs.space has not
>>>>> been set anywhere, and is just the default zero-initialized value
>>>>> coming from pci_msi_trigger. It happens that it defaults to SEC_SID_S,
>>>>> which probably matches your use case with Hafnium, but it's a happy
>>>>> accident.
>>>>>
>>>>> Looking at the SMMU spec, I understand that SEC_SID is configured for
>>>>> each stream, and can change dynamically.
>>>>> By contrast, a StreamID is fixed and derived from the PCI bus and slot
>>>>> for a given device.
>>>>>
>>>>> Thus, I think we are missing some logic here.
>>>>> I'm still trying to understand where the SEC_SID should come from
>>>>> initially.
>>>>> "The association between a device and the Security state of the
>>>>> programming interface is a system-defined property."
>>>>> Does it mean we should be able to set a QEMU property for any device?
>>>>>
>>>>> Does anyone familiar with this have some idea?
>>>>>
>>>>> As well, we should check the SEC_SID found based on
>>>>> SMMU_S_IDR1.SECURE_IMPL.
>>>>> 3.10.1 StreamID Security state (SEC_SID)
>>>>> If SMMU_S_IDR1.SECURE_IMPL == 0, then incoming transactions have a
>>>>> StreamID, and either:
>>>>> • A SEC_SID identifier with a value of 0.
>>>>> • No SEC_SID identifier, and SEC_SID is implicitly treated as 0.
>>>>> If SMMU_S_IDR1.SECURE_IMPL == 1, incoming transactions have a
>>>>> StreamID, and a SEC_SID identifier.
>>>>>
>>>>> Regards,
>>>>> Pierrick
>>>> Thank you very much for your detailed review and in-depth analysis, and
>>>> for pointing out this critical issue that breaks the Linux boot.
>>>>
>>>>
>>>> To be transparent, my initial approach was indeed tailored to my
>>>> specific test case, where I was effectively hardcoding the device's
>>>> StreamID to mark it as a so-called Secure device in my own testing.
>>>> This clearly isn't a general solution.
>>>>
>>> It's definitely not a bad approach, and it's a good way to exercise
>>> the secure path. It would have been caught by some of QEMU's functional
>>> tests anyway, so it's not a big deal.
>>>
>>> A solution would be to define the secure attribute as a property of
>>> the PCI device, and query that to identify sec_sid accordingly.
>>> As you'll see in 3.10.1 StreamID Security state (SEC_SID), "Whether a
>>> stream is under Secure control or not is a different property to the
>>> target PA space of a transaction.", so we definitely should *not* do
>>> any funky stuff depending on which address is accessed.
>>
>> Thank you for the encouraging and very constructive feedback.
>>
>>
>> Your proposed solution—to define the security attribute as a property
>> on the PCIDevice—is the perfect way forward to resolve the Secure
>> device issue. Perhaps we can implement this functionality in V4 as
>> shown in the following code snippet?
>>
>> 1) define sec_sid in include/hw/pci/pci_device.h:
>>
>> struct PCIDevice {
>> DeviceState qdev;
>> ......
>> /* Add SEC_SID property for SMMU security context */
>> uint8_t sec_sid; /* 0 = Non-secure, 1 = Secure*/
>> ......
>>
>> }
>>
>>
>> 2) then add sec-sid field in the Property of PCI in hw/pci/pci.c:
>>
>> static const Property pci_props[] = {
>> ......
>> /* SEC_SID property: 0=NS, 1=S */
>> DEFINE_PROP_UINT8("sec-sid", PCIDevice, sec_sid, 0),
>>
>> ......
>>
>> };
> As this impacts the PCIe subsystem, I would encourage you to submit
> that change in a separate pre-requisite series. This needs to be
> reviewed by Michael and other PCIe specialists.
>
> Thanks
>
> Eric
OK. I'll split this feature into another patch. Thanks for the guidance!
Best regards,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 20/21] hw/arm/smmuv3: Initialize the secure register bank
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (18 preceding siblings ...)
2025-10-12 15:15 ` [RFC v3 19/21] hw/arm/smmuv3: Use iommu_index to represent the security context Tao Tang
@ 2025-10-12 15:15 ` Tao Tang
2025-12-02 16:36 ` Eric Auger
2025-10-12 15:16 ` [RFC v3 21/21] hw/arm/smmuv3: Add secure migration and enable secure state Tao Tang
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:15 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Initialize the secure register bank (SMMU_SEC_SID_S) with sane default
values during the SMMU's reset sequence.
This change ensures that key fields, such as the secure ID registers,
GBPA reset value, and queue entry sizes, are set to a known-good state.
The SECURE_IMPL attribute of the S_IDR1 register will be introduced
later via device properties.
This is a necessary step to prevent undefined behavior when secure SMMU
features are subsequently enabled and used by software.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index b44859540f..0b366895ec 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -331,6 +331,15 @@ static void smmuv3_init_regs(SMMUv3State *s)
bk->gerrorn = 0;
s->statusr = 0;
bk->gbpa = SMMU_GBPA_RESET_VAL;
+
+ /* Initialize Secure bank */
+ SMMUv3RegBank *sbk = &s->bank[SMMU_SEC_SID_S];
+
+ memset(sbk->idr, 0, sizeof(sbk->idr));
+ sbk->idr[1] = FIELD_DP32(sbk->idr[1], S_IDR1, S_SIDSIZE, SMMU_IDR1_SIDSIZE);
+ sbk->gbpa = SMMU_GBPA_RESET_VAL;
+ sbk->cmdq.entry_size = sizeof(struct Cmd);
+ sbk->eventq.entry_size = sizeof(struct Evt);
}
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
--
2.34.1
^ permalink raw reply related [flat|nested] 67+ messages in thread
* Re: [RFC v3 20/21] hw/arm/smmuv3: Initialize the secure register bank
2025-10-12 15:15 ` [RFC v3 20/21] hw/arm/smmuv3: Initialize the secure register bank Tao Tang
@ 2025-12-02 16:36 ` Eric Auger
2025-12-03 15:48 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-02 16:36 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 10/12/25 5:15 PM, Tao Tang wrote:
> Initialize the secure register bank (SMMU_SEC_SID_S) with sane default
> values during the SMMU's reset sequence.
>
> This change ensures that key fields, such as the secure ID registers,
> GBPA reset value, and queue entry sizes, are set to a known-good state.
> The SECURE_IMPL attribute of the S_IDR1 register will be introduced
> later via device properties.
>
> This is a necessary step to prevent undefined behavior when secure SMMU
> features are subsequently enabled and used by software.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index b44859540f..0b366895ec 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -331,6 +331,15 @@ static void smmuv3_init_regs(SMMUv3State *s)
> bk->gerrorn = 0;
> s->statusr = 0;
> bk->gbpa = SMMU_GBPA_RESET_VAL;
> +
> + /* Initialize Secure bank */
> + SMMUv3RegBank *sbk = &s->bank[SMMU_SEC_SID_S];
> +
> + memset(sbk->idr, 0, sizeof(sbk->idr));
> + sbk->idr[1] = FIELD_DP32(sbk->idr[1], S_IDR1, S_SIDSIZE, SMMU_IDR1_SIDSIZE);
> + sbk->gbpa = SMMU_GBPA_RESET_VAL;
> + sbk->cmdq.entry_size = sizeof(struct Cmd);
> + sbk->eventq.entry_size = sizeof(struct Evt);
What about prod, cons, base? Don't they need to be initialized, as for NS?
Also I am surprised only one IDR field is set. No need for some others?
Eric
> }
>
> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [RFC v3 20/21] hw/arm/smmuv3: Initialize the secure register bank
2025-12-02 16:36 ` Eric Auger
@ 2025-12-03 15:48 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-03 15:48 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/12/3 00:36, Eric Auger wrote:
> Hi Tao,
>
> On 10/12/25 5:15 PM, Tao Tang wrote:
>> Initialize the secure register bank (SMMU_SEC_SID_S) with sane default
>> values during the SMMU's reset sequence.
>>
>> This change ensures that key fields, such as the secure ID registers,
>> GBPA reset value, and queue entry sizes, are set to a known-good state.
>> The SECURE_IMPL attribute of the S_IDR1 register will be introduced
>> later via device properties.
>>
>> This is a necessary step to prevent undefined behavior when secure SMMU
>> features are subsequently enabled and used by software.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 9 +++++++++
>> 1 file changed, 9 insertions(+)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index b44859540f..0b366895ec 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -331,6 +331,15 @@ static void smmuv3_init_regs(SMMUv3State *s)
>> bk->gerrorn = 0;
>> s->statusr = 0;
>> bk->gbpa = SMMU_GBPA_RESET_VAL;
>> +
>> + /* Initialize Secure bank */
>> + SMMUv3RegBank *sbk = &s->bank[SMMU_SEC_SID_S];
>> +
>> + memset(sbk->idr, 0, sizeof(sbk->idr));
>> + sbk->idr[1] = FIELD_DP32(sbk->idr[1], S_IDR1, S_SIDSIZE, SMMU_IDR1_SIDSIZE);
>> + sbk->gbpa = SMMU_GBPA_RESET_VAL;
>> + sbk->cmdq.entry_size = sizeof(struct Cmd);
>> + sbk->eventq.entry_size = sizeof(struct Evt);
> What about prod, cons, base? Don't they need to be initialized, as for NS?
>
> Also I am surprised only one IDR field is set. No need for some others?
>
> Eric
I’ll add the missing initializations in the next version, and will try
to keep the secure and non-secure banks as closely mirrored as
possible, diverging only where a field doesn’t exist on one side.
Thanks,
Tao
^ permalink raw reply [flat|nested] 67+ messages in thread
* [RFC v3 21/21] hw/arm/smmuv3: Add secure migration and enable secure state
2025-10-12 15:06 [RFC v3 00/21] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (19 preceding siblings ...)
2025-10-12 15:15 ` [RFC v3 20/21] hw/arm/smmuv3: Initialize the secure register bank Tao Tang
@ 2025-10-12 15:16 ` Tao Tang
2025-12-02 16:39 ` Eric Auger
20 siblings, 1 reply; 67+ messages in thread
From: Tao Tang @ 2025-10-12 15:16 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Introduce a bool secure_impl field in SMMUv3State, exposed as the
secure-impl device property, and add live migration support for the
SMMUv3 secure register bank.
To correctly migrate the secure state, the migration logic must know
whether the secure functionality is enabled, so the property is
introduced here, at the point it is first required. Its introduction is
also the culminating step that activates the entire secure access data
path, tying together all previously merged logic to provide full
support for secure state accesses.
Usage:
-global arm-smmuv3,secure-impl=true
When this property is enabled, the capability is advertised to the
guest via the S_IDR1.SECURE_IMPL bit.
The migration is implemented as follows:
- A new vmstate_smmuv3_secure_bank, referenced by the smmuv3/bank_s
subsection, serializes the secure bank's registers and queues.
- A companion smmuv3/gbpa_secure subsection mirrors the non-secure
GBPA handling, migrating the register only if its value diverges
from the reset default.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 75 +++++++++++++++++++++++++++++++++++++++++
include/hw/arm/smmuv3.h | 1 +
2 files changed, 76 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 0b366895ec..ce41a12a36 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -337,6 +337,7 @@ static void smmuv3_init_regs(SMMUv3State *s)
memset(sbk->idr, 0, sizeof(sbk->idr));
sbk->idr[1] = FIELD_DP32(sbk->idr[1], S_IDR1, S_SIDSIZE, SMMU_IDR1_SIDSIZE);
+ sbk->idr[1] = FIELD_DP32(sbk->idr[1], S_IDR1, SECURE_IMPL, s->secure_impl);
sbk->gbpa = SMMU_GBPA_RESET_VAL;
sbk->cmdq.entry_size = sizeof(struct Cmd);
sbk->eventq.entry_size = sizeof(struct Evt);
@@ -2452,6 +2453,53 @@ static const VMStateDescription vmstate_smmuv3_queue = {
},
};
+static const VMStateDescription vmstate_smmuv3_secure_bank = {
+ .name = "smmuv3_secure_bank",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (const VMStateField[]) {
+ VMSTATE_UINT32(features, SMMUv3RegBank),
+ VMSTATE_UINT8(sid_split, SMMUv3RegBank),
+ VMSTATE_UINT32_ARRAY(cr, SMMUv3RegBank, 3),
+ VMSTATE_UINT32(cr0ack, SMMUv3RegBank),
+ VMSTATE_UINT32(irq_ctrl, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror, SMMUv3RegBank),
+ VMSTATE_UINT32(gerrorn, SMMUv3RegBank),
+ VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3RegBank),
+ VMSTATE_UINT64(strtab_base, SMMUv3RegBank),
+ VMSTATE_UINT32(strtab_base_cfg, SMMUv3RegBank),
+ VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3RegBank),
+ VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3RegBank),
+ VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3RegBank),
+ VMSTATE_STRUCT(cmdq, SMMUv3RegBank, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_STRUCT(eventq, SMMUv3RegBank, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_END_OF_LIST(),
+ },
+};
+
+static bool smmuv3_secure_bank_needed(void *opaque)
+{
+ SMMUv3State *s = opaque;
+
+ return s->secure_impl;
+}
+
+static const VMStateDescription vmstate_smmuv3_bank_s = {
+ .name = "smmuv3/bank_s",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = smmuv3_secure_bank_needed,
+ .fields = (const VMStateField[]) {
+ VMSTATE_STRUCT(bank[SMMU_SEC_SID_S], SMMUv3State, 0,
+ vmstate_smmuv3_secure_bank, SMMUv3RegBank),
+ VMSTATE_END_OF_LIST(),
+ },
+};
+
static bool smmuv3_gbpa_needed(void *opaque)
{
SMMUv3State *s = opaque;
@@ -2472,6 +2520,25 @@ static const VMStateDescription vmstate_gbpa = {
}
};
+static bool smmuv3_gbpa_secure_needed(void *opaque)
+{
+ SMMUv3State *s = opaque;
+
+ return s->secure_impl &&
+ s->bank[SMMU_SEC_SID_S].gbpa != SMMU_GBPA_RESET_VAL;
+}
+
+static const VMStateDescription vmstate_gbpa_secure = {
+ .name = "smmuv3/gbpa_secure",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = smmuv3_gbpa_secure_needed,
+ .fields = (const VMStateField[]) {
+ VMSTATE_UINT32(bank[SMMU_SEC_SID_S].gbpa, SMMUv3State),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
static const VMStateDescription vmstate_smmuv3 = {
.name = "smmuv3",
.version_id = 1,
@@ -2506,6 +2573,8 @@ static const VMStateDescription vmstate_smmuv3 = {
},
.subsections = (const VMStateDescription * const []) {
&vmstate_gbpa,
+ &vmstate_smmuv3_bank_s,
+ &vmstate_gbpa_secure,
NULL
}
};
@@ -2519,6 +2588,12 @@ static const Property smmuv3_properties[] = {
* Defaults to stage 1
*/
DEFINE_PROP_STRING("stage", SMMUv3State, stage),
+ /*
+ * SECURE_IMPL field in S_IDR1 register.
+ * Indicates whether secure state is implemented.
+ * Defaults to false (0)
+ */
+ DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
};
static void smmuv3_instance_init(Object *obj)
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index e9012fcdb0..8fec3f8edb 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -69,6 +69,7 @@ struct SMMUv3State {
qemu_irq irq[4];
QemuMutex mutex;
char *stage;
+ bool secure_impl;
};
typedef enum {
--
2.34.1
* Re: [RFC v3 21/21] hw/arm/smmuv3: Add secure migration and enable secure state
2025-10-12 15:16 ` [RFC v3 21/21] hw/arm/smmuv3: Add secure migration and enable secure state Tao Tang
@ 2025-12-02 16:39 ` Eric Auger
2025-12-03 15:54 ` Tao Tang
0 siblings, 1 reply; 67+ messages in thread
From: Eric Auger @ 2025-12-02 16:39 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 10/12/25 5:16 PM, Tao Tang wrote:
> Introduce a bool secure_impl field to SMMUv3State and expose it as
> a secure-impl device property. The introduction of this property is the
> culminating step that activates the entire secure access data path,
> tying together all previously merged logic to provide full support for
> secure state accesses.
>
> Add live migration support for the SMMUv3 secure register bank.
>
> To correctly migrate the secure state, the migration logic must know
> if the secure functionality is enabled. To facilitate this, a bool
> secure_impl field is introduced and exposed as the secure-impl device
> property. This property is introduced at the point it is first
> required—for migration—and serves as the final piece of the series.
>
> The introduction of this property also completes and activates the
> entire secure access data path, tying together all previously merged
> logic to provide full support for secure state accesses.
>
> Usage:
> -global arm-smmuv3,secure-impl=true
>
> When this property is enabled, the capability is advertised to the
> guest via the S_IDR1.SECURE_IMPL bit.
>
> The migration is implemented as follows:
>
> - A new vmstate_smmuv3_secure_bank, referenced by the smmuv3/bank_s
> subsection, serializes the secure bank's registers and queues.
>
> - A companion smmuv3/gbpa_secure subsection mirrors the non-secure
> GBPA handling, migrating the register only if its value diverges
> from the reset default.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 75 +++++++++++++++++++++++++++++++++++++++++
> include/hw/arm/smmuv3.h | 1 +
> 2 files changed, 76 insertions(+)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 0b366895ec..ce41a12a36 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -337,6 +337,7 @@ static void smmuv3_init_regs(SMMUv3State *s)
>
> memset(sbk->idr, 0, sizeof(sbk->idr));
> sbk->idr[1] = FIELD_DP32(sbk->idr[1], S_IDR1, S_SIDSIZE, SMMU_IDR1_SIDSIZE);
> + sbk->idr[1] = FIELD_DP32(sbk->idr[1], S_IDR1, SECURE_IMPL, s->secure_impl);
> sbk->gbpa = SMMU_GBPA_RESET_VAL;
> sbk->cmdq.entry_size = sizeof(struct Cmd);
> sbk->eventq.entry_size = sizeof(struct Evt);
> @@ -2452,6 +2453,53 @@ static const VMStateDescription vmstate_smmuv3_queue = {
> },
> };
>
> +static const VMStateDescription vmstate_smmuv3_secure_bank = {
> + .name = "smmuv3_secure_bank",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .fields = (const VMStateField[]) {
> + VMSTATE_UINT32(features, SMMUv3RegBank),
> + VMSTATE_UINT8(sid_split, SMMUv3RegBank),
> + VMSTATE_UINT32_ARRAY(cr, SMMUv3RegBank, 3),
> + VMSTATE_UINT32(cr0ack, SMMUv3RegBank),
> + VMSTATE_UINT32(irq_ctrl, SMMUv3RegBank),
> + VMSTATE_UINT32(gerror, SMMUv3RegBank),
> + VMSTATE_UINT32(gerrorn, SMMUv3RegBank),
> + VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3RegBank),
> + VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3RegBank),
> + VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3RegBank),
> + VMSTATE_UINT64(strtab_base, SMMUv3RegBank),
> + VMSTATE_UINT32(strtab_base_cfg, SMMUv3RegBank),
> + VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3RegBank),
> + VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3RegBank),
> + VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3RegBank),
> + VMSTATE_STRUCT(cmdq, SMMUv3RegBank, 0,
> + vmstate_smmuv3_queue, SMMUQueue),
> + VMSTATE_STRUCT(eventq, SMMUv3RegBank, 0,
> + vmstate_smmuv3_queue, SMMUQueue),
> + VMSTATE_END_OF_LIST(),
> + },
> +};
> +
> +static bool smmuv3_secure_bank_needed(void *opaque)
> +{
> + SMMUv3State *s = opaque;
> +
> + return s->secure_impl;
> +}
> +
> +static const VMStateDescription vmstate_smmuv3_bank_s = {
> + .name = "smmuv3/bank_s",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .needed = smmuv3_secure_bank_needed,
> + .fields = (const VMStateField[]) {
> + VMSTATE_STRUCT(bank[SMMU_SEC_SID_S], SMMUv3State, 0,
> + vmstate_smmuv3_secure_bank, SMMUv3RegBank),
> + VMSTATE_END_OF_LIST(),
> + },
> +};
> +
> static bool smmuv3_gbpa_needed(void *opaque)
> {
> SMMUv3State *s = opaque;
> @@ -2472,6 +2520,25 @@ static const VMStateDescription vmstate_gbpa = {
> }
> };
>
> +static bool smmuv3_gbpa_secure_needed(void *opaque)
I don't think you need that subsection. You can put this directly in the
secure bank subsection instead. This was needed for NS to avoid breaking
migration, but here you shouldn't need it.
Thanks
Eric
> +{
> + SMMUv3State *s = opaque;
> +
> + return s->secure_impl &&
> + s->bank[SMMU_SEC_SID_S].gbpa != SMMU_GBPA_RESET_VAL;
> +}
> +
> +static const VMStateDescription vmstate_gbpa_secure = {
> + .name = "smmuv3/gbpa_secure",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .needed = smmuv3_gbpa_secure_needed,
> + .fields = (const VMStateField[]) {
> + VMSTATE_UINT32(bank[SMMU_SEC_SID_S].gbpa, SMMUv3State),
> + VMSTATE_END_OF_LIST()
> + }
> +};
> +
> [...]
* Re: [RFC v3 21/21] hw/arm/smmuv3: Add secure migration and enable secure state
2025-12-02 16:39 ` Eric Auger
@ 2025-12-03 15:54 ` Tao Tang
0 siblings, 0 replies; 67+ messages in thread
From: Tao Tang @ 2025-12-03 15:54 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 2025/12/3 00:39, Eric Auger wrote:
>
> On 10/12/25 5:16 PM, Tao Tang wrote:
>> [...]
>> +static bool smmuv3_gbpa_secure_needed(void *opaque)
> I don't think you need that subsection. You can put this directly in the
> secure bank subsection instead. This was needed for NS to avoid breaking
> migration, but here you shouldn't need it.
>
> Thanks
>
> Eric
Thanks for the clarification. I'll drop the smmuv3/gbpa_secure
subsection and just migrate the secure GBPA as part of the smmuv3/bank_s
secure subsection, since we don't have any existing migration ABI to
preserve for the secure state.
Also, many thanks for all your review work on this series. I'll prepare
and send a v4 shortly, and if you have some time to look at it as well,
that would be greatly appreciated.
Yours,
Tao
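Editor's note: the direction agreed above could translate into a v4 shaped roughly like the following sketch against patch 21/21. The exact hunk placement is an assumption; only the overall approach was agreed on the list:

```diff
 static const VMStateDescription vmstate_smmuv3_secure_bank = {
     ...
     .fields = (const VMStateField[]) {
+        VMSTATE_UINT32(gbpa, SMMUv3RegBank),
         VMSTATE_UINT32(features, SMMUv3RegBank),
     ...
 };

-static bool smmuv3_gbpa_secure_needed(void *opaque)       /* dropped */
-static const VMStateDescription vmstate_gbpa_secure = {   /* dropped */

 static const VMStateDescription vmstate_smmuv3 = {
     ...
     .subsections = (const VMStateDescription * const []) {
         &vmstate_gbpa,
         &vmstate_smmuv3_bank_s,
-        &vmstate_gbpa_secure,
         NULL
     }
 };
```

The secure GBPA then migrates unconditionally whenever the smmuv3/bank_s subsection is sent, which is acceptable because no prior migration ABI exists for the secure state.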